Franklin, Elda
1981-01-01
Reviews studies on the etiology of monotonism, the monotone being that type of uncertain or inaccurate singer who cannot vocally match pitches and who has trouble accurately reproducing even a familiar song. Neurological factors (amusia, right brain abnormalities), age, and sex differences are considered. (Author/SJL)
Monotonous braking of high energy hadrons in nuclear matter
International Nuclear Information System (INIS)
Strugalski, Z.
1979-01-01
Propagation of high energy hadrons in nuclear matter is discussed, and the possibility of monotonous energy losses of hadrons in nuclear matter is considered. Experimental facts in favour of this hypothesis, such as pion-nucleus interactions (proton emission spectra and proton multiplicity distributions in these interactions) and other data, are presented. The investigated phenomenon is characterized in more detail in the framework of the hypothesis.
Heckman, James J; Pinto, Rodrigo
2018-01-01
This paper defines and analyzes a new monotonicity condition for the identification of counterfactuals and treatment effects in unordered discrete choice models with multiple treatments, heterogeneous agents and discrete-valued instruments. Unordered monotonicity implies and is implied by additive separability of choice of treatment equations in terms of observed and unobserved variables. These results follow from properties of binary matrices developed in this paper. We investigate conditions under which unordered monotonicity arises as a consequence of choice behavior. We characterize IV estimators of counterfactuals as solutions to discrete mixture problems.
Matching by Monotonic Tone Mapping.
Kovacs, Gyorgy
2018-06-01
In this paper, a novel dissimilarity measure called Matching by Monotonic Tone Mapping (MMTM) is proposed. The MMTM technique allows matching under non-linear monotonic tone mappings and can be computed efficiently when the tone mappings are approximated by piecewise constant or piecewise linear functions. The proposed method is evaluated in various template matching scenarios involving simulated and real images, and compared to other measures developed to be invariant to monotonic intensity transformations. The results show that the MMTM technique is a highly competitive alternative to conventional measures in problems where the possible tone mappings are close to monotonic.
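A minimal sketch of the underlying idea (illustrative only, not the authors' exact algorithm): if the template pixels are sorted by intensity, finding the best monotone tone mapping of the template onto a candidate patch reduces to isotonic regression, which the pool-adjacent-violators algorithm solves in linear time; the residual then serves as the dissimilarity. The function names here are hypothetical.

```python
import numpy as np

def pav(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    blocks = []  # each entry is [mean, weight] of a merged block
    for v in y:
        m, w = float(v), 1.0
        # merge backwards while monotonicity is violated
        while blocks and blocks[-1][0] > m:
            pm, pw = blocks.pop()
            m = (pm * pw + m * w) / (pw + w)
            w += pw
        blocks.append([m, w])
    return np.concatenate([np.full(int(w), m) for m, w in blocks])

def mmtm_dissimilarity(template, patch):
    """Residual SSD after the best monotone tone mapping of template onto patch.

    Sorting the pixels by template intensity reduces the problem to isotonic
    regression of the patch values (a simplified stand-in for the piecewise
    constant/linear approximations used in the paper)."""
    t = np.asarray(template, float).ravel()
    p = np.asarray(patch, float).ravel()
    order = np.argsort(t, kind="stable")
    fit = pav(p[order])
    return float(np.sum((p[order] - fit) ** 2))
```

Under an exactly monotone mapping (e.g. a gamma curve) the residual is zero, while non-monotone intensity changes leave a positive residual.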
High current high accuracy IGBT pulse generator
International Nuclear Information System (INIS)
Nesterov, V.V.; Donaldson, A.R.
1995-05-01
A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse-to-pulse power losses. The rack mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.
Monotone piecewise bicubic interpolation
International Nuclear Information System (INIS)
Carlson, R.E.; Fritsch, F.N.
1985-01-01
In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and the first mixed partial derivative (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
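For the univariate case, the 1980 Fritsch-Carlson algorithm that this paper extends is what SciPy's `PchipInterpolator` implements, so the monotonicity property is easy to check directly (a sketch assuming SciPy is available; `CubicSpline` is included only for contrast):

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with a sharp rise: a classic case where an ordinary cubic
# spline overshoots while a monotone interpolant does not.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.01, 0.02, 1.0, 1.01, 1.02])

xs = np.linspace(0.0, 5.0, 501)
pchip = PchipInterpolator(x, y)(xs)   # monotone piecewise cubic Hermite
spline = CubicSpline(x, y)(xs)        # no monotonicity guarantee

print(np.all(np.diff(pchip) >= -1e-12))   # True: the interpolant is monotone
print(np.all(np.diff(spline) >= -1e-12))  # typically False: overshoot near the rise
```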
International Nuclear Information System (INIS)
Korshunov, A D
2003-01-01
Monotone Boolean functions are an important object in discrete mathematics and mathematical cybernetics. Topics related to these functions have been actively studied for several decades. Many results have been obtained, and many papers published. However, until now there has been no sufficiently complete monograph or survey of results of investigations concerning monotone Boolean functions. The object of this survey is to present the main results on monotone Boolean functions obtained during the last 50 years
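As a small concrete illustration (not taken from the survey), the definition can be checked by brute force for few variables; the counts of monotone Boolean functions are the Dedekind numbers:

```python
from itertools import product

def is_monotone(table, n):
    """table[i] is the function's value on the i-th point of {0,1}^n
    (in itertools.product order); monotone means x <= y implies f(x) <= f(y)."""
    pts = list(product((0, 1), repeat=n))
    return all(
        table[i] <= table[j]
        for i, x in enumerate(pts)
        for j, y in enumerate(pts)
        if all(a <= b for a, b in zip(x, y))
    )

# Brute-force count of monotone Boolean functions of n variables.
counts = [
    sum(is_monotone(t, n) for t in product((0, 1), repeat=2 ** n))
    for n in range(4)
]
print(counts)  # [2, 3, 6, 20] -- the first Dedekind numbers
```

The brute-force approach is hopeless beyond very small n, which is why asymptotic enumeration results of the kind covered by the survey are needed.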
High Accuracy Transistor Compact Model Calibrations
Energy Technology Data Exchange (ETDEWEB)
Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. Models of this type describe expected performance at the extremes of process or transistor variation. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in their transistor descriptions, they can be used to describe part-to-part variation as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of the uncertainties in those margins. Given this need, new high accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
High accuracy FIONA-AFM hybrid imaging
International Nuclear Information System (INIS)
Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.
2011-01-01
Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation of these techniques is the difficulty of distinguishing different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≤8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.
DEFF Research Database (Denmark)
Busck, Jens; Heiselberg, Henning
2004-01-01
We have developed a mono-static staring 3-D laser radar based on gated viewing, with range accuracy below 1 m at 10 m and 1 cm at 100 m. We use a high sensitivity, fast, intensified CCD camera, and a Nd:YAG passively Q-switched 32.4 kHz pulsed green laser at 532 nm. The CCD has 752x582 pixels. Camera...
High accuracy satellite drag model (HASDM)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
Fast and High Accuracy Wire Scanner
Koujili, M; Koopman, J; Ramos, D; Sapinski, M; De Freitas, J; Ait Amira, Y; Djerdir, A
2009-01-01
Scanning of a high intensity particle beam imposes challenging requirements on a Wire Scanner system. It is expected to reach a scanning speed of 20 m/s with a position accuracy of the order of 1 μm. In addition, a timing accuracy better than 1 millisecond is needed. The adopted solution consists of a fork holding a wire rotating by a maximum of 200°. Fork, rotor and angular position sensor are mounted on the same axis and located in a chamber connected to the beam vacuum. The requirements imply the design of a system with extremely low vibration, vacuum compatibility, and radiation and temperature tolerance. The adopted solution consists of a rotary brushless synchronous motor with the permanent magnet rotor installed inside the vacuum chamber and the stator installed outside. The accurate position sensor, mounted on the rotary shaft inside the vacuum chamber, has to resist a bake-out temperature of 200 °C and ionizing radiation up to a dozen kGy/year. A digital feedback controller allows maxi...
Czech Academy of Sciences Publication Activity Database
Jeřábek, Emil
2012-01-01
Vol. 58, No. 3 (2012), pp. 177-187. ISSN 0942-5616. R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545. Institutional support: RVO:67985840. Keywords: proof complexity; monotone sequent calculus. Subject RIV: BA - General Mathematics. Impact factor: 0.376, year: 2012. http://onlinelibrary.wiley.com/doi/10.1002/malq.201020071/full
Computation of Optimal Monotonicity Preserving General Linear Methods
Ketcheson, David I.
2009-07-01
Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods, even among classes with very high order of accuracy that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
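As a toy illustration of the property being optimized (a sketch, not the paper's general linear methods): the first-order upwind scheme with forward Euler time stepping is monotonicity preserving, i.e. total-variation diminishing, for linear advection whenever the CFL number a·dt/dx is at most one:

```python
import numpy as np

def total_variation(u):
    """Total variation of a periodic grid function."""
    return np.abs(np.diff(np.concatenate([u, u[:1]]))).sum()

# Linear advection u_t + a u_x = 0, first-order upwind in space,
# forward Euler in time; TVD provided CFL = a*dt/dx <= 1.
N, a = 200, 1.0
dx = 1.0 / N
dt = 0.9 * dx / a                     # CFL = 0.9
xg = np.arange(N) * dx
u = np.where((xg > 0.3) & (xg < 0.6), 1.0, 0.0)   # square pulse

tv0 = total_variation(u)
for _ in range(300):
    u = u - a * dt / dx * (u - np.roll(u, 1))     # upwind difference
assert total_variation(u) <= tv0 + 1e-12          # TV never grows
```

Running the same loop with CFL > 1 breaks the bound, which is the kind of step-size restriction the paper's optimization trades off against order and stage count.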
Electron ray tracing with high accuracy
International Nuclear Information System (INIS)
Saito, K.; Okubo, T.; Takamoto, K.; Uno, Y.; Kondo, M.
1986-01-01
An electron ray tracing program is developed to investigate the overall geometrical and chromatic aberrations in electron optical systems. The program also computes aberrations due to manufacturing errors in lenses and deflectors. Computation accuracy is improved by (1) calculating electrostatic and magnetic scalar potentials using the finite element method with third-order isoparametric elements, and (2) solving the modified ray equation which the aberrations satisfy. Computation accuracy of 4 nm is achieved for calculating optical properties of the system with an electrostatic lens
BIMOND3, Monotone Bivariate Interpolation
International Nuclear Information System (INIS)
Fritsch, F.N.; Carlson, R.E.
2001-01-01
1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data
A stable route to high-β_p plasmas with non-monotonic q-profiles
Energy Technology Data Exchange (ETDEWEB)
Soeldner, F X; Baranov, Y; Bhatnagar, V P; Bickley, A J; Challis, C D; Fischer, B; Gormezano, C; Huysmans, G T A; Kerner, W; Rimini, F; Sips, A C C; Springmann, R; Taroni, A [Commission of the European Communities, Abingdon (United Kingdom). JET Joint Undertaking]; Goedbloed, J P; Holties, H A [Institute for Plasma Physics, Nieuwegein (Netherlands)]; Parail, V V; Pereverzev, G V [Kurchatov Institute of Atomic Energy, Moscow (Russian Federation)]
1994-07-01
Steady-state operation of tokamak reactors seems feasible in so-called Advanced Scenarios with high bootstrap current in high-β_p operation. The stabilization of such discharges with noninductive profile control will be attempted on JET, in pursuit of previous high bootstrap current studies. Results of modelling studies of fully noninductive current drive scenarios in JET and ITER are presented. Fast Waves (FW), Lower Hybrid (LH) waves and Neutral Beam Injection (NBI) are used for heating and current drive, alternatively or in combination. A stable route to non-monotonic q-profiles has been found with a specific ramp-up scenario which combines LH current drive (LHCD) and a fast Ohmic ramp-up. A hollow current profile with deep shear reversal over the whole central region is thereby formed in an early low-β phase and frozen in by additional heating. (authors). 5 refs., 4 figs.
Optimal Monotone Drawings of Trees
He, Dayu; He, Xin
2016-01-01
A monotone drawing of a graph G is a straight-line drawing of G such that, for every pair of vertices u,w in G, there exists a path P_{uw} in G that is monotone in some direction l_{uw}. (Namely, the order of the orthogonal projections of the vertices of P_{uw} on l_{uw} is the same as the order in which they appear in P_{uw}.) The problem of finding monotone drawings for trees has been studied in several recent papers. The main focus is to reduce the size of the drawing. Currently, the smallest drawi...
High accuracy in silico sulfotransferase models.
Cook, Ian; Wang, Ting; Falany, Charles N; Leyh, Thomas S
2013-11-29
Predicting enzymatic behavior in silico is an integral part of our efforts to understand biology. Hundreds of millions of compounds lie in targeted in silico libraries waiting for their metabolic potential to be discovered. In silico "enzymes" capable of accurately determining whether compounds can inhibit or react are often the missing piece in this endeavor. This problem has now been solved for the cytosolic sulfotransferases (SULTs). SULTs regulate the bioactivities of thousands of compounds--endogenous metabolites, drugs and other xenobiotics--by transferring the sulfuryl moiety (SO3) from 3'-phosphoadenosine 5'-phosphosulfate to the hydroxyls and primary amines of these acceptors. SULT1A1 and 2A1 catalyze the majority of sulfation that occurs during human Phase II metabolism. Here, recent insights into the structure and dynamics of SULT binding and reactivity are incorporated into in silico models of 1A1 and 2A1 that are used to identify substrates and inhibitors in a structurally diverse set of 1,455 high value compounds: the FDA-approved small molecule drugs. The SULT1A1 models predict 76 substrates. Of these, 53 were known substrates. Of the remaining 23, 21 were tested, and all were sulfated. The SULT2A1 models predict 22 substrates, 14 of which are known substrates. Of the remaining 8, 4 were tested, and all are substrates. The models proved to be 100% accurate in identifying substrates and made no false predictions at Kd thresholds of 100 μM. In total, 23 "new" drug substrates were identified, and new linkages to drug inhibitors are predicted. It now appears to be possible to accurately predict Phase II sulfonation in silico.
High accuracy autonomous navigation using the global positioning system (GPS)
Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul
1997-01-01
The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation, is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.
International Nuclear Information System (INIS)
Delattre, P.
1983-01-01
On the basis of a general descriptive framework which takes into account the intensity factor and the time distribution of radiation, a detailed justification for which is to be found in earlier publications, the three fundamental problems mentioned in the title of this paper can be approached in a new way. If the biological effect e for a given dose D delivered at different radiation intensities phi is studied, we find that the curve e=f(phi) can exhibit non-monotonic shapes. This type of phenomenon is known in pharmacology and toxicology and may well exist also for low- or medium-intensity radiation effects. Extrapolation of the effects of a given dose between high and low radiation intensities phi is usually carried out by means of an empirical linear or linear-quadratic formulation. This procedure is insufficiently justified from a theoretical point of view. It is shown here that the effects can be written in the form e=k(phi)D and that the factor of proportionality k(phi) is a generally very complicated function of phi. Hence, the usual extrapolation procedures cannot deal with certain ranges of values of phi within which the effects observed at a given dose may be greater than when the dose is delivered at higher intensity. The problem of thresholds is actually far more difficult than the current literature on the subject would suggest. It is shown here, on the basis of considerations of qualitative dynamics, that several types of threshold must be defined, starting with a threshold for the radiation intensity phi. All these thresholds are interrelated hierarchically in fairly complex ways which must be studied case by case. These results show that it is illusory to attempt to define a universal notion of threshold in terms of dose. The conceptual framework used in the proposed approach proves also to be very illuminating for other studies in progress, particularly in the investigation of phenomena associated with ageing and carcinogenesis. (author)
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units
Directory of Open Access Journals (Sweden)
Qingzhong Cai
2016-06-01
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
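The first algorithmic element, fast distance estimation by k-mer counting, can be sketched as follows (a crude stand-in for illustration; MUSCLE's actual k-mer distance uses a different normalization and compressed alphabets):

```python
from collections import Counter

def kmer_distance(s, t, k=3):
    """Crude k-mer distance in the spirit of MUSCLE's fast distance
    estimation stage: one minus the fraction of shared k-mers relative
    to the shorter sequence."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    shared = sum((cs & ct).values())   # multiset intersection: min of counts
    return 1.0 - shared / min(sum(cs.values()), sum(ct.values()))

print(kmer_distance("MKVLAAGIVK", "MKVLAAGIVK"))  # 0.0 for identical sequences
```

Because it needs no alignment, such a distance can be computed for all sequence pairs quickly, which is what makes the progressive-alignment guide tree cheap to build.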
Monotonicity of social welfare optima
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Østerdal, Lars Peter Raahave
2010-01-01
This paper considers the problem of maximizing social welfare subject to participation constraints. It is shown that for an income allocation method that maximizes a social welfare function there is a monotonic relationship between the incomes allocated to individual agents in a given coalition...
High accuracy wavelength calibration for a scanning visible spectrometer
Energy Technology Data Exchange (ETDEWEB)
Scotti, Filippo; Bell, Ronald E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2010-10-15
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ~0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar
2017-01-01
Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also cause numerical instability. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of gradient computation involving a triangular element hinges on the value of its shape factor Γ ≡ sin²α₁ + sin²α₂ + sin²α₃, where α₁, α₂ and α₃ are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Γ decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect-ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computation associated with a high-aspect-ratio triangular grid is given. Also a brief discussion on the extension of the current mathematical framework to the
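The shape factor is straightforward to compute; this sketch (illustrative, not from the paper) checks the extremes quoted in the abstract and the observation that a high-aspect-ratio triangle need not have a small Γ:

```python
import math

def shape_factor(a1, a2, a3):
    """Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3), for internal angles summing to pi."""
    assert abs(a1 + a2 + a3 - math.pi) < 1e-9
    return sum(math.sin(a) ** 2 for a in (a1, a2, a3))

# Equilateral triangle: attains the maximum 9/4.
print(shape_factor(math.pi / 3, math.pi / 3, math.pi / 3))   # ~2.25

# Thin right triangle (high aspect ratio, but not obtuse): Gamma stays near 2.
print(shape_factor(math.pi / 2, 0.01, math.pi / 2 - 0.01))

# Thin, highly obtuse triangle: Gamma collapses toward 0.
print(shape_factor(0.01, 0.01, math.pi - 0.02))
```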
Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.
2016-06-01
Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time, but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is the low density of data, while for non-contact methods it is low accuracy. In this paper a method for fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from contact measurements are treated as a reference for the corresponding point from the non-contact measurement. A transformation enabling displacement of characteristic points from the optical measurement to their match from the contact measurements is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
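The alignment step (carrying characteristic points from the optical measurement onto their contact-measured counterparts, then applying the same transformation to the whole cloud) can be sketched as a rigid least-squares fit over marker pairs, i.e. the Kabsch/Procrustes solution; the paper's virtual-marker construction is more involved, and the data below are synthetic:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ~ q
    for paired marker points (rows of P and Q); Kabsch/Procrustes fit."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)          # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic markers: a known rotation and translation applied to 3D points.
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = rigid_transform(P, Q)
print(np.allclose(P @ R.T + t, Q))   # True
```

With real data the marker pairs carry noise, so R and t minimize, rather than zero out, the residual; applying them to the full optical point cloud is then `cloud @ R.T + t`.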
Generalized monotone operators in Banach spaces
International Nuclear Information System (INIS)
Nanda, S.
1988-07-01
The concept of F-monotonicity was first introduced by Kato and this generalizes the notion of monotonicity introduced by Minty. The purpose of this paper is to define various types of F-monotonicities and discuss the relationships among them. (author). 6 refs
High-accuracy measurements of the normal specular reflectance
International Nuclear Information System (INIS)
Voarino, Philippe; Piombini, Herve; Sabary, Frederic; Marteau, Daniel; Dubard, Jimmy; Hameury, Jacques; Filtz, Jean Remy
2008-01-01
The French Laser Megajoule (LMJ) is designed and constructed by the French Commissariat à l'Énergie Atomique (CEA). Its amplifying section needs highly reflective multilayer mirrors for the flash lamps. To monitor and improve the coating process, the reflectors have to be characterized to high accuracy. The described spectrophotometer is designed to measure normal specular reflectance with high repeatability by using a small spot size of 100 μm. Results are compared with ellipsometric measurements. The instrument can also perform spatial characterization to detect coating nonuniformity.
High accuracy 3D electromagnetic finite element analysis
International Nuclear Information System (INIS)
Nelson, E.M.
1996-01-01
A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed
High accuracy 3D electromagnetic finite element analysis
International Nuclear Information System (INIS)
Nelson, Eric M.
1997-01-01
A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed
Why is a high accuracy needed in dosimetry?
International Nuclear Information System (INIS)
Lanzl, L.H.
1976-01-01
Dose and exposure intercomparisons on a national or international basis have become an important component of quality assurance in the practice of good radiotherapy. A high degree of accuracy of γ and x radiation dosimetry is essential in our international society, where medical information is so readily exchanged and used. The value of accurate dosimetry lies mainly in the avoidance of complications in normal tissue and an optimal degree of tumor control
Achieving High Accuracy in Calculations of NMR Parameters
DEFF Research Database (Denmark)
Faber, Rasmus
…quantum chemical methods have been developed, the calculation of NMR parameters with quantitative accuracy is far from trivial. In this thesis I address some of the issues that make accurate calculation of NMR parameters so challenging, with the main focus on SSCCs. High accuracy quantum chemical…, but no programs were available to perform such calculations. As part of this thesis the CFOUR program has therefore been extended to allow the calculation of SSCCs using the CC3 method. CC3 calculations of SSCCs have then been performed for several molecules, including some difficult cases. These results show… vibrations must be included. The calculation of vibrational corrections to NMR parameters has been reviewed as part of this thesis. A study of the basis set convergence of vibrational corrections to nuclear shielding constants has also been performed. The basis set error in vibrational correction…
Specific non-monotonous interactions increase persistence of ecological networks.
Yan, Chuan; Zhang, Zhibin
2014-03-22
The relationship between stability and biodiversity has long been debated in ecology due to opposing empirical observations and theoretical predictions. Species interaction strength is often assumed to be monotonically related to population density, but the effects on stability of ecological networks of non-monotonous interactions that change signs have not been investigated previously. We demonstrate that for four kinds of non-monotonous interactions, shifting signs to negative or neutral interactions at high population density increases persistence (a measure of stability) of ecological networks, while for the other two kinds of non-monotonous interactions shifting signs to positive interactions at high population density decreases persistence of networks. Our results reveal a novel mechanism of network stabilization caused by specific non-monotonous interaction types through either increasing stable equilibrium points or reducing unstable equilibrium points (or both). These specific non-monotonous interactions may be important in maintaining stable and complex ecological networks, as well as other networks such as genes, neurons, the internet and human societies.
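As a minimal illustration (our own sketch, not a form taken from the paper), a per-capita interaction that is positive at low density and switches sign at high density can be written as f(N) = aN(1 − bN), with the sign change at N = 1/b:

```python
def interaction(N, a=1.0, b=0.2):
    # Per-capita effect of one species on its partner: positive
    # (facilitative) at low density, crossing zero at N = 1/b and
    # turning negative (competitive) at high density. The parameters
    # a and b are illustrative, not fitted to any ecological data.
    return a * N * (1.0 - b * N)

for N in (1.0, 5.0, 10.0):
    print(N, interaction(N))  # sign flips from positive to negative
```

Embedding such sign-changing functional forms in a community matrix is what distinguishes the networks studied here from the usual fixed-sign (monotonic) interaction models.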
A high accuracy land use/cover retrieval system
Directory of Open Access Journals (Sweden)
Alaa Hefnawy
2012-03-01
The effects of spatial resolution on the accuracy of mapping land use/cover types have received increasing attention as a large number of multi-scale earth observation data become available. Although many methods of semi-automated image classification of remotely sensed data have been established for improving the accuracy of land use/cover classification during the past 40 years, most of them were employed in single-resolution image classification, which led to unsatisfactory results. In this paper, we propose a multi-resolution fast adaptive content-based retrieval system for satellite images. In the proposed system, we apply a super-resolution technique to the Landsat-TM images to obtain a high-resolution dataset. The human-computer interactive system is based on a modified radial basis function for retrieval of satellite database images. We apply a backpropagation supervised artificial neural network classifier to both the multi- and single-resolution datasets. The results show significantly improved land use/cover classification accuracy for the multi-resolution approach compared with the single-resolution approach.
Two high accuracy digital integrators for Rogowski current transducers
Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua
2014-01-01
Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators are an alternative, but they cannot produce a stable and accurate output because the DC component in the original signal accumulates, leading to output DC drift; unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators for use in Rogowski current transducers, in place of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient are applied to improve the Al-Alaoui integrator, changing its DC response and yielding an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators outperform analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.
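The DC-drift problem and the attenuation-coefficient remedy can be illustrated with a minimal numerical sketch. This is not the authors' Al-Alaoui/PID design, just a first-order leaky accumulator with an illustrative coefficient:

```python
import numpy as np

def ideal_integrator(x, dt):
    # Plain accumulation: any DC component in x grows without bound.
    return np.cumsum(x) * dt

def leaky_integrator(x, dt, alpha=0.999):
    # An attenuation coefficient alpha < 1 bleeds off accumulated DC,
    # trading a small low-frequency gain error for a drift-free output.
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * y[n - 1] + dt * x[n]
    return y

fs = 10_000.0                          # sample rate, Hz (illustrative)
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.cos(2 * np.pi * 50 * t) + 0.01  # 50 Hz signal plus a small DC offset

drift = ideal_integrator(x, 1.0 / fs)
stable = leaky_integrator(x, 1.0 / fs)

# The ideal integrator accumulates the 0.01 offset (~0.02 after 2 s);
# the leaky integrator's output stays bounded.
print(abs(drift[-1]) > abs(stable[-1]))  # True
```

The paper's PID feedback plays a similar DC-rejecting role while also correcting the frequency response, which a bare leaky integrator does not.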
High Accuracy Piezoelectric Kinemometer; Cinemometro piezoelectrico de alta exactitud (VUAE)
Energy Technology Data Exchange (ETDEWEB)
Jimenez Martinez, F. J.; Frutos, J. de; Pastor, C.; Vazquez Rodriguez, M.
2012-07-01
We have developed a portable, computerized, low-consumption measurement system called the High Accuracy Piezoelectric Kinemometer (VUAE). The high accuracy obtained by the VUAE makes it suitable as a reference for systems that measure vehicle speed; the VUAE can therefore be used as reference equipment to estimate the error of installed kinemometers. The VUAE was built with n (n=2) ultrasonic transmitter-receiver pairs (E-Rult). The transmitters of the n E-Rult pairs generate n ultrasonic barriers, and the receivers pick up the echoes when a vehicle crosses the barriers. Digital processing of the echo signals yields usable signals; cross-correlation techniques then make a highly exact estimation of the vehicle's speed possible. The log of the interception times and the distance between each of the n ultrasonic barriers allow a highly exact estimation of the vehicle's speed. VUAE speed measurements were compared to a speed reference system based on piezoelectric cables. (Author) 11 refs.
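The cross-correlation step can be sketched as follows; the sampling rate, barrier spacing, and pulse shapes are illustrative assumptions, not the VUAE's actual parameters:

```python
import numpy as np

np.random.seed(0)
fs = 20_000.0       # sampling rate, Hz (illustrative)
barrier_gap = 0.5   # metres between two ultrasonic barriers (assumed)
true_speed = 20.0   # m/s, i.e. a transit time of 25 ms between barriers

# Synthetic echo envelopes: the second barrier sees the same pulse later.
t = np.arange(0.0, 0.2, 1.0 / fs)
pulse = np.exp(-((t - 0.05) / 0.005) ** 2)
delay = int(round(barrier_gap / true_speed * fs))
echo1 = pulse + 0.01 * np.random.randn(t.size)
echo2 = np.roll(pulse, delay) + 0.01 * np.random.randn(t.size)

# The cross-correlation peak gives the inter-barrier transit time.
xcorr = np.correlate(echo2, echo1, mode="full")
lag = xcorr.argmax() - (t.size - 1)
speed = barrier_gap / (lag / fs)
print(round(speed, 1))  # ≈ 20.0
```

With real echoes, the correlation peak is what makes the method robust to noise: the delay estimate uses the whole pulse shape rather than a single threshold crossing.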
High accuracy 3D electromagnetic finite element analysis
International Nuclear Information System (INIS)
Nelson, E.M.
1997-01-01
A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed. copyright 1997 American Institute of Physics
Quantisation of monotonic twist maps
International Nuclear Information System (INIS)
Boasman, P.A.; Smilansky, U.
1993-08-01
Using an approach suggested by Moser, classical Hamiltonians are generated that provide an interpolating flow to the stroboscopic motion of maps with a monotonic twist condition. The quantum properties of these Hamiltonians are then studied in analogy with recent work on the semiclassical quantization of systems based on Poincare surfaces of section. For the generalized standard map, the correspondence with the usual classical and quantum results is shown, and the advantages of the quantum Moser Hamiltonian demonstrated. The same approach is then applied to the free motion of a particle on a 2-torus, and to the circle billiard. A natural quantization condition based on the eigenphases of the unitary time-development operator is applied, leaving the exact eigenvalues of the torus, but only the semiclassical eigenvalues for the billiard; an explanation for this failure is proposed. It is also seen how iterating the classical map commutes with the quantization. (authors)
High-accuracy mass spectrometry for fundamental studies.
Kluge, H-Jürgen
2010-01-01
Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given on the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation-the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino, (ii) the Experimental Cooler-Storage Ring at GSI-a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses and (iii) the Penning trap facility, SHIPTRAP, at GSI-the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the GSI project HITRAP at GSI for fundamental studies with highly-charged ions.
Read-only high accuracy volume holographic optical correlator
Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2011-10-01
A read-only volume holographic correlator (VHC) is proposed. After the recording of all of the correlation database pages by angular multiplexing, a stand-alone read-only high accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Since two lasers are employed, one for recording and one for readout, the optical alignment of the laser illumination on the SLM is very sensitive. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. An experimental demonstration of the proposed read-only VHC is introduced and discussed.
Synchrotron accelerator technology for proton beam therapy with high accuracy
International Nuclear Information System (INIS)
Hiramoto, Kazuo
2009-01-01
Proton beam therapy was applied at the beginning to head and neck cancers, but it is now extended to prostate, lung and liver cancers. Thus the need for a pencil beam scanning method is increasing; with this method, the dose-concentration property of the proton beam is further enhanced. The Hitachi group supplied the first pencil beam scanning therapy system to the M. D. Anderson Hospital in the United States, and it has been operational since May 2008. The Hitachi group has been developing a proton therapy system for high-accuracy treatment that concentrates the dose in the diseased part, which is located at various depths and sometimes has a complicated shape. The author describes here the synchrotron accelerator technology that is an important element of the proton therapy system. (K.Y.)
POLARIZED LINE FORMATION IN NON-MONOTONIC VELOCITY FIELDS
Energy Technology Data Exchange (ETDEWEB)
Sampoorna, M.; Nagendra, K. N., E-mail: sampoorna@iiap.res.in, E-mail: knn@iiap.res.in [Indian Institute of Astrophysics, Koramangala, Bengaluru 560034 (India)
2016-12-10
For a correct interpretation of the observed spectro-polarimetric data from astrophysical objects such as the Sun, it is necessary to solve the polarized line transfer problems taking into account a realistic temperature structure, the dynamical state of the atmosphere, a realistic scattering mechanism (namely, the partial frequency redistribution—PRD), and the magnetic fields. In a recent paper, we studied the effects of monotonic vertical velocity fields on linearly polarized line profiles formed in isothermal atmospheres with and without magnetic fields. However, in general the velocity fields that prevail in dynamical atmospheres of astrophysical objects are non-monotonic. Stellar atmospheres with shocks, multi-component supernova atmospheres, and various kinds of wave motions in solar and stellar atmospheres are examples of non-monotonic velocity fields. Here we present studies on the effect of non-relativistic non-monotonic vertical velocity fields on the linearly polarized line profiles formed in semi-empirical atmospheres. We consider a two-level atom model and PRD scattering mechanism. We solve the polarized transfer equation in the comoving frame (CMF) of the fluid using a polarized accelerated lambda iteration method that has been appropriately modified for the problem at hand. We present numerical tests to validate the CMF method and also discuss the accuracy and numerical instabilities associated with it.
High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin
2012-08-21
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.
On the size of monotone span programs
Nikov, V.S.; Nikova, S.I.; Preneel, B.; Blundo, C.; Cimato, S.
2005-01-01
Span programs provide a linear algebraic model of computation. Monotone span programs (MSP) correspond to linear secret sharing schemes. This paper studies the properties of monotone span programs related to their size. Using the results of van Dijk (connecting codes and MSPs) and a construction for
Edit Distance to Monotonicity in Sliding Windows
DEFF Research Database (Denmark)
Chan, Ho-Leung; Lam, Tak-Wah; Lee, Lap Kei
2011-01-01
Given a stream of items each associated with a numerical value, its edit distance to monotonicity is the minimum number of items to remove so that the remaining items are non-decreasing with respect to the numerical value. The space complexity of estimating the edit distance to monotonicity of a ...
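Offline (without the sliding-window constraint of the paper), the edit distance to monotonicity can be computed exactly as the sequence length minus the length of the longest non-decreasing subsequence; a short Python sketch:

```python
from bisect import bisect_right

def edit_distance_to_monotonicity(items):
    """Minimum number of removals so that the rest is non-decreasing.

    Equals len(items) minus the length of the longest non-decreasing
    subsequence, found here in O(n log n) by patience sorting.
    """
    tails = []  # tails[k] = smallest possible tail of such a subsequence of length k+1
    for x in items:
        i = bisect_right(tails, x)  # bisect_right admits equal values (non-decreasing)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(items) - len(tails)

print(edit_distance_to_monotonicity([1, 3, 2, 2, 5]))  # 1 (remove the 3)
print(edit_distance_to_monotonicity([5, 4, 3]))        # 2
```

The streaming difficulty addressed by the paper is that this exact computation needs the whole window in memory; the abstract's space-complexity question is about approximating the same quantity with sublinear space.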
MONOTONIC DERIVATIVE CORRECTION FOR CALCULATION OF SUPERSONIC FLOWS WITH SHOCK WAVES
Directory of Open Access Journals (Sweden)
P. V. Bulat
2015-07-01
Subject of Research. Numerical solution methods for gas dynamics problems based on exact and approximate solutions of the Riemann problem are considered. We have developed an approach to the solution of the Euler equations describing flows of inviscid compressible gas, based on the finite volume method and finite difference schemes of various orders of accuracy. The Godunov, Kolgan, Roe, Harten and Chakravarthy-Osher schemes are used in the calculations (the order of accuracy of the finite difference schemes varies from 1st to 3rd). The accuracy and efficiency of the various finite difference schemes are compared on the example of inviscid compressible gas flow in a Laval nozzle, both for continuous acceleration of the flow in the nozzle and in the presence of a nozzle shock wave. Conclusions about the accuracy of the various finite difference schemes and the time required for the calculations are drawn. Main Results. A comparative analysis of difference schemes for integrating the Euler equations has been carried out. These schemes are based on exact and approximate solutions of the problem of an arbitrary discontinuity breakdown. Calculation results show that monotonic derivative correction provides uniformity of the numerical solution in the neighbourhood of the breakdown. On the one hand, it prevents the formation of new extremum points, providing the monotonicity property; on the other hand, it smooths existing minima and maxima and causes a loss of accuracy. Practical Relevance. The developed numerical calculation method makes it possible to perform high accuracy calculations of flows with strong non-stationary shock and detonation waves. At the same time, there are no non-physical solution oscillations on the shock wave front.
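A common concrete instance of such monotonic derivative correction is the minmod slope limiter used in second-order reconstruction. The paper uses Kolgan/Harten-type corrections; this Python sketch uses minmod as an illustrative stand-in:

```python
import numpy as np

def minmod(a, b):
    # Pick the smaller-magnitude slope when the signs agree, zero otherwise;
    # zeroing the slope at sign changes prevents new extrema from forming.
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    # One-sided differences for the interior cells of a 1D grid.
    left = u[1:-1] - u[:-2]
    right = u[2:] - u[1:-1]
    return minmod(left, right)

# Across a step, the limiter zeroes the slope instead of overshooting;
# on smooth monotone data it returns the true slope.
step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
ramp = np.arange(6.0)
print(limited_slopes(step))  # [0. 0. 0. 0.]
print(limited_slopes(ramp))  # [1. 1. 1. 1.]
```

The zeroed slopes at the jump are exactly the behaviour described in the abstract: no new extrema (monotonicity preserved), at the price of clipping genuine maxima and minima on non-monotone data.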
High accuracy magnetic field mapping of the LEP spectrometer magnet
Roncarolo, F
2000-01-01
The Large Electron Positron accelerator (LEP) is a storage ring which has been operated since 1989 at the European Laboratory for Particle Physics (CERN), located in the Geneva area. It is intended to experimentally verify the Standard Model theory and in particular to detect with high accuracy the mass of the electro-weak force bosons. Electrons and positrons are accelerated inside the LEP ring in opposite directions and forced to collide at four locations, once they reach an energy high enough for the experimental purposes. During head-to-head collisions the leptons lose all their energy and a huge amount of energy is concentrated in a small region. In this condition the energy is quickly converted into other particles which tend to go away from the interaction point. The higher the energy of the leptons before the collisions, the higher the mass of the particles that can escape. At LEP four large experimental detectors are accommodated. All detectors are multi purpose detectors covering a solid angle of alm...
Accuracy assessment of high-rate GPS measurements for seismology
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
Accuracy assessment of cadastral maps using high resolution aerial photos
Directory of Open Access Journals (Sweden)
Alwan Imzahim
2018-01-01
A cadastral map is a map that shows the boundaries and ownership of land parcels. Some cadastral maps show additional details, such as survey district names, unique identifying numbers for parcels, certificate of title numbers, positions of existing structures, section or lot numbers and their respective areas, adjoining and adjacent street names, selected boundary dimensions and references to prior maps. In Iraq / Baghdad Governorate, the main problem is that the cadastral maps are georeferenced to a local geodetic datum known as Clark 1880, while the systems widely used for navigation (GPS and GNSS) use the World Geodetic System 1984 (WGS84) as the base reference datum. The objective of this paper is to produce a cadastral map at scale 1:500 (metric scale) by using aerial photographs from 2009 with a high ground spatial resolution of 10 cm, referenced to the WGS84 system. The accuracy assessment for the cadastral map updating approach for urban large scale cadastral maps (1:500-1:1000) was ± 0.115 meters, which complies with the American Society for Photogrammetry and Remote Sensing (ASPRS) standards.
Determination of UAV position using high accuracy navigation platform
Directory of Open Access Journals (Sweden)
Ireneusz Kubicki
2016-07-01
The choice of navigation system for a mini UAV is very important because of its application and exploitation, particularly when a synthetic aperture radar installed on it requires highly precise information about an object's position. The presented exemplary solution of such a system draws attention to the possible problems associated with the use of appropriate technology, sensors, and devices, or with a complete navigation system. The position and spatial orientation errors of the measurement platform affect the obtained SAR imaging. Both turbulence and maneuvers performed during flight change the position of the airborne object, resulting in deterioration or loss of SAR images. Consequently, it is necessary to reduce or eliminate the impact of sensor errors on the UAV position accuracy, looking for compromise solutions between newer, better technologies and software-based approaches. Keywords: navigation systems, unmanned aerial vehicles, sensors integration
Modified sine bar device measures small angles with high accuracy
Thekaekara, M.
1968-01-01
Modified sine bar device measures small angles with enough accuracy to calibrate precision optical autocollimators. The sine bar is a massive bar of steel supported by two cylindrical rods at one end and one at the other.
Measurement system with high accuracy for laser beam quality.
Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming
2015-05-20
Presently, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and low repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of the mirror to change the laser beam propagation direction, so that the beam is perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measuring deviation of the M2 factor is less than 0.6%.
Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients
International Nuclear Information System (INIS)
Iyengar, S.S.; Morgan-Hughes, G.; Ukoumunne, O.; Clayton, B.; Davies, E.J.; Nikolaou, V.; Hyde, C.J.; Shore, A.C.; Roobottom, C.A.
2016-01-01
Aim: To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients for the investigation of coronary artery disease (CAD). Materials and methods: Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥50%) stenosis and secondarily as the presence of severe (≥70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. Results: No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3%, for severe stenosis, on a per-patient basis. Conclusion: The combination of HD-CTCA and experienced readers applied to a high-risk population, results in high diagnostic accuracy comparable to ICA. Modern generation CT systems in experienced hands might be considered for an expanded role. - Highlights: • Diagnostic accuracy of High-Definition CT Angiography (HD-CTCA) has been assessed. • Invasive Coronary angiography (ICA) is the reference standard. • Diagnostic accuracy of HD-CTCA is comparable to ICA. • Diagnostic accuracy is not affected by coronary calcium or stents. • HD-CTCA provides a non-invasive alternative in high-risk patients.
Directory of Open Access Journals (Sweden)
Chakkrid Klin-eam
2009-01-01
We prove strong convergence theorems for finding a common element of the zero point set of a maximal monotone operator and the fixed point set of a hemirelatively nonexpansive mapping in a Banach space by using monotone hybrid iteration method. By using these results, we obtain new convergence results for resolvents of maximal monotone operators and hemirelatively nonexpansive mappings in a Banach space.
High-accuracy user identification using EEG biometrics.
Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip
2016-08-01
We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of different combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
High Accuracy Nonlinear Control and Estimation for Machine Tool Systems
DEFF Research Database (Denmark)
Papageorgiou, Dimitrios
Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances, despite moderate wear and tear. The purpose is to ensure that full accuracy is maintained between service intervals and to advise when overhaul is needed. The thesis argues that quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low level control architectures…
Methodology for GPS Synchronization Evaluation with High Accuracy
Li Zan; Braun Torsten; Dimitrova Desislava
2015-01-01
Clock synchronization in the order of nanoseconds is one of the critical factors for time based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper we are particularly interested in GPS based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Ou...
Methodology for GPS Synchronization Evaluation with High Accuracy
Li, Zan; Braun, Torsten; Dimitrova, Desislava Cvetanova
2015-01-01
Clock synchronization in the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. O...
Multipartite classical and quantum secrecy monotones
International Nuclear Information System (INIS)
Cerf, N.J.; Massar, S.; Schneider, S.
2002-01-01
In order to study multipartite quantum cryptography, we introduce quantities which vanish on product probability distributions, and which can only decrease if the parties carry out local operations or public classical communication. These 'secrecy monotones' therefore measure how much secret correlation is shared by the parties. In the bipartite case we show that the mutual information is a secrecy monotone. In the multipartite case we describe two different generalizations of the mutual information, both of which are secrecy monotones. The existence of two distinct secrecy monotones allows us to show that in multipartite quantum cryptography the parties must make irreversible choices about which multipartite correlations they want to obtain. Secrecy monotones can be extended to the quantum domain and are then defined on density matrices. We illustrate this generalization by considering tripartite quantum cryptography based on the Greenberger-Horne-Zeilinger state. We show that before carrying out measurements on the state, the parties must make an irreversible decision about what probability distribution they want to obtain
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, so it is difficult to test and evaluate the horizontal accuracy of the orthophoto image. The uncertainty of the horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing images and for expanding the scope of their services. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. This method uses testing points with different accuracy and reliability, sourced from high-accuracy reference data and from field measurement. The new method solves the horizontal accuracy detection of orthophoto images in difficult areas and provides the basis for delivering reliable orthophoto images to users.
Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase I
National Aeronautics and Space Administration — NASA's future science and exploratory missions will require much lighter, smaller, and longer life rate sensors that can provide high accuracy navigational...
High Accuracy Positioning using Jet Thrusters for Quadcopter
Directory of Open Access Journals (Sweden)
Pi ChenHuan
2018-01-01
Full Text Available A quadcopter is equipped with four additional jet thrusters, mounted in its horizontal plane and perpendicular to each other, in order to improve its maneuverability and positioning accuracy. A dynamic model of the quadcopter with jet thrusters is derived, and two controllers are implemented in simulation: a dual-loop state feedback controller for pose control and an auxiliary jet thruster controller for accurate positioning. Step response simulations showed that the jet thrusters can control the quadcopter with less overshoot than the conventional design. In a 10 s loiter simulation with disturbance, the quadcopter with jet thrusters reduced the RMS error due to horizontal disturbance by 85% compared to a conventional quadcopter with only a dual-loop state feedback controller. The jet thruster controller thus shows promise for more accurate quadcopter positioning.
High-accuracy contouring using projection moiré
Sciammarella, Cesar A.; Lamberti, Luciano; Sciammarella, Federico M.
2005-09-01
Shadow and projection moiré are the oldest forms of moiré used in actual technical applications. In spite of this, and despite the extensive number of papers published on the topic, using shadow moiré as an accurate tool that can compete with alternative devices poses many problems that go to the essence of the mathematical models used to obtain contour information from fringe pattern data. In this paper some recent developments of the projection moiré method are presented. Comparisons are made between the results obtained with the projection method and those obtained by mechanical devices that operate with contact probes. These results show that projection moiré makes it possible to achieve the same accuracy that current mechanical touch-probe devices provide.
Pettersson, Per
2013-05-01
The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain and spatially varying viscosity. We investigate well-posedness, monotonicity and stability for the extended system resulting from the Galerkin projection of the advection-diffusion equation onto the stochastic basis functions. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability of the semi-discrete system. It is essential that the eigenvalues of the resulting viscosity matrix of the stochastic Galerkin system are positive, and we investigate conditions for this to hold. When the viscosity matrix is diagonalizable, stochastic Galerkin and stochastic collocation are similar in terms of computational cost, and for some cases the accuracy is higher for stochastic Galerkin provided that monotonicity requirements are met. We also investigate the total spatial operator of the semi-discretized system and its impact on the convergence to steady state. © 2013 Elsevier B.V.
Pettersson, Per; Doostan, Alireza; Nordström, Jan
2013-01-01
The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain and spatially varying viscosity. We investigate well-posedness, monotonicity and stability for the extended system resulting from the Galerkin projection of the advection-diffusion equation onto the stochastic basis functions. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability of the semi-discrete system. It is essential that the eigenvalues of the resulting viscosity matrix of the stochastic Galerkin system are positive, and we investigate conditions for this to hold. When the viscosity matrix is diagonalizable, stochastic Galerkin and stochastic collocation are similar in terms of computational cost, and for some cases the accuracy is higher for stochastic Galerkin provided that monotonicity requirements are met. We also investigate the total spatial operator of the semi-discretized system and its impact on the convergence to steady state. © 2013 Elsevier B.V.
Assessing the Health of LiFePO4 Traction Batteries through Monotonic Echo State Networks
Anseán, David; Otero, José; Couso, Inés
2017-01-01
A soft sensor is presented that approximates certain health parameters of automotive rechargeable batteries from on-vehicle measurements of current and voltage. The sensor is based on a model of the open-circuit voltage curve. This model is implemented through monotonic neural networks and estimates the over-potentials arising from the evolution in time of the lithium concentration in the electrodes of the battery. The proposed soft sensor is able to exploit the information contained in operational records of the vehicle better than the alternatives, particularly when the charge or discharge currents are moderate to high. The accuracy of the neural model has been compared to different alternatives, including data-driven statistical models, first-principles models, fuzzy observers and other recurrent neural networks with different topologies. It is concluded that monotonic echo state networks can outperform well-established first-principles models. The algorithms have been validated with automotive LiFePO4 cells. PMID:29267219
Assessing the Health of LiFePO4 Traction Batteries through Monotonic Echo State Networks
Directory of Open Access Journals (Sweden)
Luciano Sánchez
2017-12-01
Full Text Available A soft sensor is presented that approximates certain health parameters of automotive rechargeable batteries from on-vehicle measurements of current and voltage. The sensor is based on a model of the open-circuit voltage curve. This model is implemented through monotonic neural networks and estimates the over-potentials arising from the evolution in time of the lithium concentration in the electrodes of the battery. The proposed soft sensor is able to exploit the information contained in operational records of the vehicle better than the alternatives, particularly when the charge or discharge currents are moderate to high. The accuracy of the neural model has been compared to different alternatives, including data-driven statistical models, first-principles models, fuzzy observers and other recurrent neural networks with different topologies. It is concluded that monotonic echo state networks can outperform well-established first-principles models. The algorithms have been validated with automotive LiFePO4 cells.
Monotonicity and bounds on Bessel functions
Directory of Open Access Journals (Sweden)
Larry Landau
2000-07-01
Full Text Available I survey my recent results on monotonicity, with respect to order, of general Bessel functions, which follow from a new identity and lead to best possible uniform bounds. Application may be made to the "spreading of the wave packet" for a free quantum particle on a lattice and to estimates for perturbative expansions.
New concurrent iterative methods with monotonic convergence
Energy Technology Data Exchange (ETDEWEB)
Yao, Qingchuan [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
This paper proposes new concurrent iterative methods, which use no derivatives, for finding all zeros of a polynomial simultaneously. The new methods converge monotonically for both simple and multiple real zeros of polynomials and are quadratically convergent. Corresponding accelerated concurrent iterative methods are obtained as well. The new methods are good candidates for application in solving symmetric eigenproblems.
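The abstract does not spell out the iteration itself, so as a hedged illustration of the general idea of derivative-free simultaneous zero-finding, here is the classical Weierstrass (Durand-Kerner) iteration, a standard textbook method that also finds all zeros of a polynomial at once without derivatives (not necessarily the method proposed in the paper):

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Find all roots of a monic polynomial simultaneously, without derivatives.

    coeffs: [1, c1, ..., cn] for x^n + c1*x^(n-1) + ... + cn.
    """
    n = len(coeffs) - 1
    p = lambda x: sum(c * x ** (n - k) for k, c in enumerate(coeffs))
    # Standard non-real starting points spread on a spiral in the complex plane.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        new = []
        for i, r in enumerate(roots):
            denom = 1.0
            for j, s in enumerate(roots):
                if j != i:
                    denom *= (r - s)
            new.append(r - p(r) / denom)   # Weierstrass correction step
        if max(abs(a - b) for a, b in zip(new, roots)) < tol:
            return new
        roots = new
    return roots

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
rts = sorted(durand_kerner([1, -6, 11, -6]), key=lambda z: z.real)
print([round(r.real, 6) for r in rts])  # [1.0, 2.0, 3.0]
```

Note that plain Durand-Kerner is not monotonically convergent in the paper's sense; it merely shows the "all zeros simultaneously, no derivatives" structure.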
Compact, High Accuracy CO2 Monitor, Phase I
National Aeronautics and Space Administration — This Small Business Innovative Research Phase I proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...
Compact, High Accuracy CO2 Monitor, Phase II
National Aeronautics and Space Administration — This Small Business Innovative Research Phase II proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...
High-accuracy Subdaily ERPs from the IGS
Ray, J. R.; Griffiths, J.
2012-04-01
Since November 2000 the International GNSS Service (IGS) has published Ultra-rapid (IGU) products for near real-time (RT) and true real-time applications. They include satellite orbits and clocks, as well as Earth rotation parameters (ERPs) for a sliding 48-hr period. The first day of each update is based on the most recent GPS and GLONASS observational data from the IGS hourly tracking network. At the time of release, these observed products have an initial latency of 3 hr. The second day of each update consists of predictions. So the predictions between about 3 and 9 hr into the second half are relevant for true RT uses. Originally updated twice daily, the IGU products since April 2004 have been issued every 6 hr, at 3, 9, 15, and 21 UTC. Up to seven Analysis Centers (ACs) contribute to the IGU combinations. Two sets of ERPs are published with each IGU update, observed values at the middle epoch of the first half and predicted values at the middle epoch of the second half. The latency of the near RT ERPs is 15 hr while the predicted ERPs, based on projections of each AC's most recent determinations, are issued 9 hr ahead of their reference epoch. While IGU ERPs are issued every 6 hr, each set represents an integrated estimate over the surrounding 24 hr. So successive values are temporally correlated with about 75% of the data being common; this fact should be taken into account in user assimilations. To evaluate the accuracy of these near RT and predicted ERPs, they have been compared to the IGS Final ERPs, available about 11 to 17 d after data collection. The IGU products improved dramatically in the earlier years but since about 2008.0 the performance has been stable and excellent. During the last three years, RMS differences for the observed IGU ERPs have been about 0.036 mas and 0.0101 ms for each polar motion component and LOD respectively. (The internal precision of the reference IGS ERPs over the same period is about 0.016 mas for polar motion and 0
Accuracy of Handheld Blood Glucose Meters at High Altitude
de Mol, Pieter; Krabbe, Hans G.; de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.
2010-01-01
Background: Due to increasing numbers of people with diabetes taking part in extreme sports (e. g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior
Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase II
National Aeronautics and Space Administration — This project aims to develop a compact, highly innovative Inertial Reference/Measurement Unit (IRU/IMU) that pushes the state-of-the-art in high accuracy performance...
A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips
Directory of Open Access Journals (Sweden)
Guanyi Sun
2011-01-01
Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, the System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims to deliver high simulation throughput while guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator attains a simulation speed within a factor of 35 of hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, which in the past could only be attained by cycle-accurate models.
Impact of a highly detailed emission inventory on modeling accuracy
Taghavi, M.; Cautenet, S.; Arteta, J.
2005-03-01
During the Expérience sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions (ESCOMPTE) campaign (June 10 to July 14, 2001), two pollution events observed during an intensive measurement period (IOP2a and IOP2b) were simulated. The comprehensive Regional Atmospheric Modeling System (RAMS), version 4.3, coupled online with a chemical module including 29 species, is used to follow the chemistry of a polluted zone over southern France. This online method takes advantage of a parallel code and of the powerful SGI 3800 computer. Runs are performed with two emission inventories: the Emission Pre-Inventory (EPI) and the Main Emission Inventory (MEI); the latter is more recent and has a higher resolution. The redistribution of simulated chemical species (ozone and nitrogen oxides) is compared with aircraft and surface station measurements for both runs at regional scale. We show that the MEI inventory is more effective than the EPI in retrieving the redistribution of chemical species in space (three dimensions) and time. At surface stations, MEI is superior especially for primary species, like nitrogen oxides. The ozone pollution peaks obtained from an inventory such as EPI have a large uncertainty. To capture the realistic geographical distribution of pollutants and to obtain a good order of magnitude for ozone concentration (in space and time), a high-resolution inventory like MEI is necessary. Coupling RAMS-Chemistry with MEI provides a very efficient tool able to simulate pollution plumes even in a region with complex circulations, such as the ESCOMPTE zone.
Switched-capacitor techniques for high-accuracy filter and ADC design
Quinn, P.J.; Roermund, van A.H.M.
2007-01-01
Switched capacitor (SC) techniques are well proven to be excellent candidates for implementing critical analogue functions with high accuracy, surpassing other analogue techniques when embedded in mixed-signal CMOS VLSI. Conventional SC circuits are primarily limited in accuracy by a) capacitor
High accuracy laboratory spectroscopy to support active greenhouse gas sensing
Long, D. A.; Bielska, K.; Cygan, A.; Havey, D. K.; Okumura, M.; Miller, C. E.; Lisak, D.; Hodges, J. T.
2011-12-01
Recent carbon dioxide (CO2) remote sensing missions have set precision targets as demanding as 0.25% (1 ppm) in order to elucidate carbon sources and sinks [1]. These ambitious measurement targets will require the most precise body of spectroscopic reference data ever assembled. Active sensing missions will be especially susceptible to subtle line shape effects as the narrow bandwidth of these measurements will greatly limit the number of spectral transitions which are employed in retrievals. In order to assist these remote sensing missions we have employed frequency-stabilized cavity ring-down spectroscopy (FS-CRDS) [2], a high-resolution, ultrasensitive laboratory technique, to measure precise line shape parameters for transitions of O2, CO2, and other atmospherically-relevant species within the near-infrared. These measurements have led to new HITRAN-style line lists for both 16O2 [3] and rare isotopologue [4] transitions in the A-band. In addition, we have performed detailed line shape studies of CO2 transitions near 1.6 μm under a variety of broadening conditions [5]. We will address recent measurements in these bands as well as highlight recent instrumental improvements to the FS-CRDS spectrometer. These improvements include the use of the Pound-Drever-Hall locking scheme, a high bandwidth servo which enables measurements to be made at rates greater than 10 kHz [6]. In addition, an optical frequency comb will be utilized as a frequency reference, which should allow for transition frequencies to be measured with uncertainties below 10 kHz (3×10-7 cm-1). [1] C. E. Miller, D. Crisp, P. L. DeCola, S. C. Olsen, et al., J. Geophys. Res.-Atmos. 112, D10314 (2007). [2] J. T. Hodges, H. P. Layer, W. W. Miller, G. E. Scace, Rev. Sci. Instrum. 75, 849-863 (2004). [3] D. A. Long, D. K. Havey, M. Okumura, C. E. Miller, et al., J. Quant. Spectrosc. Radiat. Transfer 111, 2021-2036 (2010). [4] D. A. Long, D. K. Havey, S. S. Yu, M. Okumura, et al., J. Quant. Spectrosc
The accuracy of QCD perturbation theory at high energies
Dalla Brida, Mattia; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2016-01-01
We discuss the determination of the strong coupling $\alpha_{\overline{\mathrm{MS}}}(m_\mathrm{Z})$ or equivalently the QCD $\Lambda$-parameter. Its determination requires the use of perturbation theory in $\alpha_s(\mu)$ in some scheme, $s$, and at some energy scale $\mu$. The higher the scale $\mu$ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the $\Lambda$-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to $\alpha_s = 0.1$ and below. We find that perturbation theory is very accurate there, yielding a three percent error in the $\Lambda$-parameter, while data around $\alpha_s \approx 0.2$ is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
A note on monotone real circuits
Czech Academy of Sciences Publication Activity Database
Hrubeš, Pavel; Pudlák, Pavel
2018-01-01
Roč. 131, March (2018), s. 15-19 ISSN 0020-0190 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords: computational complexity * monotone real circuit * Karchmer-Wigderson game Subject RIV: BA - General Mathematics OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 0.748, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020019017301965?via%3Dihub
A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers
Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang
1990-02-01
In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed. It includes a new type of three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a dedicated single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm over a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.
Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement
Directory of Open Access Journals (Sweden)
Xianglei Liu
2018-01-01
Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of a high-speed shaking-table structure. The purpose of this paper is to validate the three-dimensional coordinate accuracy for the shaking-table structure acquired from the presented high-speed videogrammetric measuring system. All of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of shaking-table structures.
Stepsize Restrictions for Boundedness and Monotonicity of Multistep Methods
Hundsdorfer, W.; Mozartova, A.; Spijker, M. N.
2011-01-01
In this paper nonlinear monotonicity and boundedness properties are analyzed for linear multistep methods. We focus on methods which satisfy a weaker boundedness condition than strict monotonicity for arbitrary starting values. In this way, many
International Nuclear Information System (INIS)
Shin, J. K.; Choi, Y. D.
1992-01-01
The QUICKER scheme has several attractive properties. However, under highly convective conditions it produces overshoots, and possibly some oscillations, on each side of steps in the dependent variable when the flow is convected at an angle oblique to the grid lines. Fortunately, it is possible to modify the QUICKER scheme using non-linear and linear functional relationships. Details of the development of the polynomial upwinding scheme are given in this paper, where it is seen that this non-linear scheme also has third-order accuracy. This polynomial upwinding scheme is used as the basis for the SHARPER and SMARTER schemes. Another revised scheme was developed by partially modifying the QUICKER scheme using the CDS and UPWIND schemes (QUICKUP). These revised schemes are tested on well-known benchmark flows: two-dimensional pure convection flow over an oblique step, lid-driven cavity flow, and buoyancy-driven cavity flow. The revised schemes remain absolutely monotonic, without overshoot or oscillation, and the QUICKUP scheme is the most accurate in relative terms. In high-Reynolds-number lid-driven cavity flow, the SMARTER and SHARPER schemes have lower computational cost than the QUICKER and QUICKUP schemes, but their computed velocities are smaller than those predicted by the QUICKER scheme, which is strongly affected by overshoot and undershoot. In buoyancy-driven cavity flow, the SMARTER, SHARPER and QUICKUP schemes also give acceptable results. (Author)
High-accuracy determination for optical indicatrix rotation in ferroelectric DTGS
O. S. Kushnir; O. A. Bevz; O. G. Vlokh
2000-01-01
Optical indicatrix rotation in deuterated ferroelectric triglycine sulphate is studied with a high-accuracy null-polarimetric technique. The behaviour of the effect in the ferroelectric phase is attributed to quadratic spontaneous electrooptics.
Testing Manifest Monotonicity Using Order-Constrained Statistical Inference
Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas
2013-01-01
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores,…
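Manifest monotonicity is commonly checked through the item-rest regression: the proportion of positive responses to an item should be nondecreasing in the rest score (the total score on the remaining items). A minimal sketch with our own toy data and function names (the paper's order-constrained inference procedure is more sophisticated than this raw check):

```python
import numpy as np

def item_rest_regression(responses, item):
    """P(X_item = 1 | rest score) for a 0/1 response matrix (persons x items)."""
    X = np.asarray(responses)
    rest = X.sum(axis=1) - X[:, item]          # rest score excludes the item itself
    return [(int(r), float(X[rest == r, item].mean())) for r in np.unique(rest)]

# Toy data: persons with higher rest scores endorse item 0 more often.
data = [[0, 0, 0, 1],
        [0, 0, 1, 0],
        [1, 0, 1, 1],
        [1, 1, 1, 0],
        [1, 1, 1, 1]]
curve = item_rest_regression(data, 0)
probs = [p for _, p in curve]
print(all(a <= b for a, b in zip(probs, probs[1:])))  # True: manifest monotonicity holds here
```

A formal test would add sampling variability, which is exactly what the order-constrained statistical inference in the paper addresses.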
Monotone Comparative Statics for the Industry Composition
DEFF Research Database (Denmark)
Laugesen, Anders Rosenstand; Bache, Peter Arendorf
2015-01-01
We let heterogeneous firms face decisions on a number of complementary activities in a monopolistically competitive industry. The endogenous level of competition and selection regarding entry and exit of firms introduces a wedge between monotone comparative statics (MCS) at the firm level and MCS for the industry composition. The latter phenomenon is defined as first-order stochastic dominance shifts in the equilibrium distributions of all activities across active firms. We provide sufficient conditions for MCS at both levels of analysis and show that we may have either type of MCS without the other...
The Monotonicity Puzzle: An Experimental Investigation of Incentive Structures
Directory of Open Access Journals (Sweden)
Jeannette Brosig
2010-05-01
Full Text Available Non-monotone incentive structures, which according to theory are able to induce optimal behavior, are often regarded as empirically less relevant for labor relationships. We compare the performance of a theoretically optimal non-monotone contract with a monotone one under controlled laboratory conditions. Implementing some features relevant to real-world employment relationships, our paper demonstrates that, in fact, the frequency of income-maximizing decisions made by agents is higher under the monotone contract. Although this observed behavior does not change the superiority of the non-monotone contract for principals, they do not choose this contract type significantly more often. This is what we call the monotonicity puzzle. Detailed investigations of the decisions provide a clue to solving the puzzle and a possible explanation for the popularity of monotone contracts.
Energy Technology Data Exchange (ETDEWEB)
Tong, Vivian, E-mail: v.tong13@imperial.ac.uk [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Jiang, Jun [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Wilkinson, Angus J. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Britton, T. Ben [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)
2015-08-15
High resolution, cross-correlation-based, electron backscatter diffraction (EBSD) measures the variation of elastic strains and lattice rotations from a reference state. Regions near grain boundaries are often of interest but overlap of patterns from the two grains could reduce accuracy of the cross-correlation analysis. To explore this concern, patterns from the interior of two grains have been mixed to simulate the interaction volume crossing a grain boundary so that the effect on the accuracy of the cross correlation results can be tested. It was found that the accuracy of HR-EBSD strain measurements performed in a FEG-SEM on zirconium remains good until the incident beam is less than 18 nm from a grain boundary. A simulated microstructure was used to measure how often pattern overlap occurs at any given EBSD step size, and a simple relation was found linking the probability of overlap with step size. - Highlights: • Pattern overlap occurs at grain boundaries and reduces HR-EBSD accuracy. • A test is devised to measure the accuracy of HR-EBSD in the presence of overlap. • High pass filters can sometimes, but not generally, improve HR-EBSD measurements. • Accuracy of HR-EBSD remains high until the reference pattern intensity is <72%. • 9% of points near a grain boundary will have significant error for 200nm step size in Zircaloy-4.
Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias
2017-10-01
Assistance of robotic systems in the operating room promises higher accuracy; hence, demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to a lower structural stiffness and thus to an additional error source. The aim of this contribution is to examine whether an accuracy sufficient for demanding interventions can be achieved by such a system. Feasible accuracy results of the robot-assisted process depend on each workflow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated through drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance point and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control, an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms, including imaging. In this set-up, errors of [Formula: see text] and [Formula: see text] were achieved. The results of the conducted experiments show that the accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.
Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.
Du, Pang; Tang, Liansheng
2009-01-30
When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
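One standard way to enforce monotonicity on a curve estimate is the pool-adjacent-violators algorithm from isotonic regression. The sketch below is our own illustration with invented data (the paper's monotone-spline estimator is more elaborate and also ensures smoothness and transformation invariance); it monotonizes a noisy vector of TPR estimates taken at increasing FPR values:

```python
def pav_nondecreasing(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    out = []  # stack of [block mean, block size]
    for v in y:
        out.append([v, 1])
        # Merge adjacent blocks while their means decrease.
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, s2 = out.pop()
            m1, s1 = out.pop()
            out.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    fitted = []
    for mean, size in out:
        fitted.extend([mean] * size)
    return fitted

# A noisy, non-monotone empirical estimate of TPR over increasing FPR values:
raw = [0.0, 0.30, 0.25, 0.60, 0.55, 0.90, 1.0]
fit = pav_nondecreasing(raw)
assert all(a <= b for a, b in zip(fit, fit[1:]))  # monotone, as an ROC curve must be
```

Violating pairs are replaced by their pooled mean, which is why the output stays as close as possible to the data in the least-squares sense while respecting monotonicity.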
Adaptive sensor-based ultra-high accuracy solar concentrator tracker
Brinkley, Jordyn; Hassanzadeh, Ali
2017-09-01
Conventional solar trackers use information of the sun's position, either by direct sensing or by GPS. Our method uses the shading of the receiver. This, coupled with nonimaging optics design allows us to achieve ultra-high concentration. Incorporating a sensor based shadow tracking method with a two stage concentration solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.
Generalized convexity, generalized monotonicity recent results
Martinez-Legaz, Juan-Enrique; Volle, Michel
1998-01-01
A function is convex if its epigraph is convex. This geometrical structure has very strong implications in terms of continuity and differentiability. Separation theorems lead to optimality conditions and duality for convex problems. A function is quasiconvex if its lower level sets are convex. Here again, the geometrical structure of the level sets implies some continuity and differentiability properties for quasiconvex functions, and optimality conditions and duality can be derived for optimization problems involving such functions as well. Over a period of about fifty years, quasiconvex and other generalized convex functions have been considered in a variety of fields including economics, management science, engineering, probability and applied sciences, in accordance with the needs of particular applications. During the last twenty-five years, an increase in research activity in this field has been witnessed. More recently generalized monotonicity of maps has been studied. It relates to generalized conve...
Type monotonic allocation schemes for multi-glove games
Brânzei, R.; Solymosi, T.; Tijs, S.H.
2007-01-01
Multiglove markets and corresponding games are considered. For this class of games we introduce the notion of type monotonic allocation scheme. Allocation rules for multiglove markets based on weight systems are introduced and characterized. These allocation rules generate type monotonic allocation schemes for multiglove games and are also helpful in proving that each core element of the corresponding game is extendable to a type monotonic allocation scheme. The T-value turns out to generate a ty...
Stability of dynamical systems on the role of monotonic and non-monotonic Lyapunov functions
Michel, Anthony N; Liu, Derong
2015-01-01
The second edition of this textbook provides a single source for the analysis of system models represented by continuous-time and discrete-time, finite-dimensional and infinite-dimensional, and continuous and discontinuous dynamical systems. For these system models, it presents results which comprise the classical Lyapunov stability theory involving monotonic Lyapunov functions, as well as corresponding contemporary stability results involving non-monotonic Lyapunov functions. Specific examples from several diverse areas are given to demonstrate the applicability of the developed theory to many important classes of systems, including digital control systems, nonlinear regulator systems, pulse-width-modulated feedback control systems, and artificial neural networks. The authors cover the following four general topics: - Representation and modeling of dynamical systems of the types described above - Presentation of Lyapunov and Lagrange stability theory for dynamical sy...
Accuracy of hiatal hernia detection with esophageal high-resolution manometry
Weijenborg, P. W.; van Hoeij, F. B.; Smout, A. J. P. M.; Bredenoord, A. J.
2015-01-01
The diagnosis of a sliding hiatal hernia is classically made with endoscopy or barium esophagogram. Spatial separation of the lower esophageal sphincter (LES) and diaphragm, the hallmark of hiatal hernia, can also be observed on high-resolution manometry (HRM), but the diagnostic accuracy of this
DEFF Research Database (Denmark)
Gnad, Florian; de Godoy, Lyris M F; Cox, Jürgen
2009-01-01
Protein phosphorylation is a fundamental regulatory mechanism that affects many cell signaling processes. Using high-accuracy MS and stable isotope labeling in cell culture-labeling, we provide a global view of the Saccharomyces cerevisiae phosphoproteome, containing 3620 phosphorylation sites ma...
High accuracy positioning using carrier-phases with the opensource GPSTK software
Salazar Hernández, Dagoberto José; Hernández Pajares, Manuel; Juan Zornoza, José Miguel; Sanz Subirana, Jaume
2008-01-01
The objective of this work is to show how, using a proper GNSS data management strategy combined with the flexibility provided by the open source "GPS Toolkit" (GPSTk), it is possible to easily develop both simple code-based processing strategies and basic high accuracy carrier-phase positioning techniques like Precise Point Positioning (PPP).
Very high-accuracy calibration of radiation pattern and gain of a near-field probe
DEFF Research Database (Denmark)
Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav
2014-01-01
In this paper, very high-accuracy calibration of the radiation pattern and gain of a near-field probe is described. An open-ended waveguide near-field probe has been used in a recent measurement of the C-band Synthetic Aperture Radar (SAR) Antenna Subsystem for the Sentinel 1 mission of the Europ...
From journal to headline: the accuracy of climate science news in Danish high quality newspapers
DEFF Research Database (Denmark)
Vestergård, Gunver Lystbæk
2011-01-01
analysis to examine the accuracy of Danish high quality newspapers in quoting scientific publications from 1997 to 2009. Out of 88 articles, 46 contained inaccuracies though the majority was found to be insignificant and random. The study concludes that Danish broadsheet newspapers are ‘moderately...
Stepsize Restrictions for Boundedness and Monotonicity of Multistep Methods
Hundsdorfer, W.
2011-04-29
In this paper nonlinear monotonicity and boundedness properties are analyzed for linear multistep methods. We focus on methods which satisfy a weaker boundedness condition than strict monotonicity for arbitrary starting values. In this way, many linear multistep methods of practical interest are included in the theory. Moreover, it will be shown that for such methods monotonicity can still be valid with suitable Runge-Kutta starting procedures. Restrictions on the stepsizes are derived that are not only sufficient but also necessary for these boundedness and monotonicity properties. © 2011 Springer Science+Business Media, LLC.
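The flavour of such stepsize restrictions can be seen in the simplest one-step setting: forward-Euler upwind differencing for the advection equation is total-variation-diminishing (monotone in the TVD sense) precisely when the CFL number ν = Δt/Δx ≤ 1. This toy example is illustrative only and is not one of the multistep methods analyzed in the paper.

```python
import numpy as np

def tv(u):
    """Total variation of a grid function."""
    return np.sum(np.abs(np.diff(u)))

def is_tvd(nu, steps=50):
    """Run forward-Euler upwind for u_t + u_x = 0 (periodic grid) and
    report whether total variation is non-increasing at CFL number nu."""
    u = np.zeros(100)
    u[40:60] = 1.0                           # step profile, TV = 2
    prev = tv(u)
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))     # upwind difference
        cur = tv(u)
        if cur > prev + 1e-12:
            return False
        prev = cur
    return True
```

For ν ≤ 1 each new value is a convex combination of old values, so the scheme is monotone; for ν > 1 oscillations appear and the total variation grows.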
Technics study on high accuracy crush dressing and sharpening of diamond grinding wheel
Jia, Yunhai; Lu, Xuejun; Li, Jiangang; Zhu, Lixin; Song, Yingjie
2011-05-01
Mechanical grinding of artificial diamond grinding wheels is the traditional wheel dressing process. The rotation speed and infeed depth of the tool wheel are the main process parameters. Suitable process parameters for high accuracy crush dressing of metal-bonded and resin-bonded diamond grinding wheels were obtained through a number of experiments on a superhard-material wheel dressing and grinding machine and through analysis of the grinding force. At the same time, the effects of machine sharpening and sprinkle granule sharpening were contrasted. These analyses and experiments provide useful guidance for the accurate crush dressing of artificial diamond grinding wheels.
High accuracy interface characterization of three phase material systems in three dimensions
DEFF Research Database (Denmark)
Jørgensen, Peter Stanley; Hansen, Karin Vels; Larsen, Rasmus
2010-01-01
Quantification of interface properties such as two phase boundary area and triple phase boundary length is important in the characterization of many material microstructures, in particular for solid oxide fuel cell electrodes. Three-dimensional images of these microstructures can be obtained by tomography schemes such as focused ion beam serial sectioning or micro-computed tomography. We present a high accuracy method of calculating two phase surface areas and triple phase length of triple phase systems from subvoxel accuracy segmentations of constituent phases. The method performs a three phase polygonization of the interface boundaries which results in a non-manifold mesh of connected faces. We show how the triple phase boundaries can be extracted as connected curve loops without branches. The accuracy of the method is analyzed by calculations on geometrical primitives...
Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation
Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter
1996-01-01
The development of three-dimensional automated devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics and advanced laser optics depends strongly on the ability of such a device: firstly, to be positioned automatically, with high accuracy and reliability, by means of user-friendly interface techniques; secondly, to be compact; and thirdly, to operate under vacuum conditions, free of most of the problems connected with conventional micropositioners using stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact, computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of less than 1 micrometer (the accuracy of the end-position of the system is controlled by a hardware/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.
Monotone measures of ergodicity for Markov chains
Directory of Open Access Journals (Sweden)
J. Keilson
1998-01-01
Full Text Available The following paper, first written in 1974, was never published other than as part of an internal research series. Its lack of publication is unrelated to the merits of the paper, which is of current importance by virtue of its relation to the relaxation time. A systematic discussion is provided of the approach of a finite Markov chain to ergodicity, by proving the monotonicity of an important set of norms, each a measure of ergodicity, whether or not time reversibility is present. The paper is of particular interest because the discussion of the relaxation time of a finite Markov chain [2] has only been clean for time reversible chains, a small subset of the chains of interest. This restriction is not present here. Indeed, a new relaxation time quoted here quantifies the relaxation time for all finite ergodic chains (cf. the discussion of Q1(t) below Equation (1.7)). This relaxation time was developed by Keilson with A. Roy in his thesis [6], yet to be published.
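One concrete monotone measure of ergodicity, assuming nothing beyond standard Markov chain theory, is the total-variation distance to the stationary distribution, which is non-increasing in the number of steps for any finite ergodic chain. The chain below is illustrative, not taken from the paper.

```python
import numpy as np

# a small ergodic chain (rows sum to 1)
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

dist = np.array([1.0, 0.0, 0.0])         # start concentrated in state 0
tv = []
for _ in range(30):
    tv.append(0.5 * np.abs(dist - pi).sum())   # total-variation distance
    dist = dist @ P
```

The sequence `tv` decays monotonically to zero, illustrating a monotone measure of the approach to ergodicity without any time-reversibility assumption.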
High Accuracy Acoustic Relative Humidity Measurement inDuct Flow with Air
Directory of Open Access Journals (Sweden)
Cees van der Geld
2010-08-01
Full Text Available An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C and relative humidity 0–100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
High accuracy digital aging monitor based on PLL-VCO circuit
International Nuclear Information System (INIS)
Zhang Yuejun; Jiang Zhidi; Wang Pengjun; Zhang Xuelong
2015-01-01
As the manufacturing process is scaled down to the nanoscale, the aging phenomenon significantly affects the reliability and lifetime of integrated circuits. Consequently, precise measurement of digital CMOS aging is a key aspect of nanoscale aging-tolerant circuit design. This paper proposes a high accuracy digital aging monitor using a phase-locked loop and voltage-controlled oscillator (PLL-VCO) circuit. The proposed monitor eliminates the circuit self-aging effect thanks to a characteristic of the PLL: its frequency is unaffected by the circuit aging phenomenon. The PLL-VCO monitor is implemented in TSMC low power 65 nm CMOS technology, and its area occupies 303.28 × 298.94 μm². After accelerated aging tests, the experimental results show that the PLL-VCO monitor improves accuracy by 2.4% at high temperature and by 18.7% at high voltage. (semiconductor integrated circuits)
High accuracy acoustic relative humidity measurement in duct flow with air.
van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees
2010-01-01
An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
A proposal for limited criminal liability in high-accuracy endoscopic sinus surgery.
Voultsos, P; Casini, M; Ricci, G; Tambone, V; Midolo, E; Spagnolo, A G
2017-02-01
The aim of the present study is to propose legal reform limiting surgeons' criminal liability in high-accuracy and high-risk surgery such as endoscopic sinus surgery (ESS). The study includes a review of the medical literature, focusing on identifying and examining reasons why ESS carries a very high risk of serious complications related to inaccurate surgical manoeuvres, and a review of British and Italian legal theory and case-law on medical negligence, especially with regard to Italian Law 189/2012 (the so-called "Balduzzi" Law). It was found that serious complications due to inaccurate surgical manoeuvres may occur in ESS regardless of the skill, experience and prudence/diligence of the surgeon. Subjectivity should be essential to medical negligence, especially regarding high-accuracy surgery. Italian Law 189/2012 represents a good basis for the limitation of criminal liability resulting from inaccurate manoeuvres in high-accuracy surgery such as ESS. It is concluded that ESS surgeons should be relieved of criminal liability in cases of simple/ordinary negligence where guidelines have been observed. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.
Logarithmically completely monotonic functions involving the Generalized Gamma Function
Directory of Open Access Journals (Sweden)
Faton Merovci
2010-12-01
Full Text Available By a simple approach, two classes of functions involving a generalization of Euler's gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.
Logarithmically completely monotonic functions involving the Generalized Gamma Function
Faton Merovci; Valmir Krasniqi
2010-01-01
By a simple approach, two classes of functions involving a generalization of Euler's gamma function and originating from certain problems of traffic flow are proved to be logarithmically completely monotonic, and a class of functions involving the psi function is shown to be completely monotonic.
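For reference, the standard definitions behind the two notions (stated here from general knowledge, not quoted from the papers above) are:

```latex
\[
\begin{aligned}
f \ \text{completely monotonic on } (0,\infty)\ &\Longleftrightarrow\ (-1)^n f^{(n)}(x) \ge 0
\quad \text{for all } x>0,\ n=0,1,2,\dots\\[2pt]
f \ \text{logarithmically completely monotonic}\ &\Longleftrightarrow\ f>0 \ \text{and}\
(-1)^n \bigl[\ln f(x)\bigr]^{(n)} \ge 0 \quad \text{for all } x>0,\ n=1,2,\dots
\end{aligned}
\]
```

Every logarithmically completely monotonic function is completely monotonic, which is why the logarithmic property is the stronger statement.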
Testing manifest monotonicity using order-constrained statistical inference
Tijmstra, J.; Hessen, D.J.; van der Heijden, P.G.M.; Sijtsma, K.
2013-01-01
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity, which can be tested.
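A minimal sketch of what a manifest monotonicity check looks like in practice: the observed proportion of positive responses to an item, conditioned on the rest score, should be nondecreasing. The simulation and tolerance below are illustrative and are not the order-constrained inference procedure the paper develops.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate 2000 subjects answering 5 Rasch items (latent monotonicity holds)
theta = rng.normal(size=2000)                        # latent trait
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.3])            # item difficulties
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
X = (rng.random(prob.shape) < prob).astype(int)

# manifest monotonicity for item 0: P(X_0 = 1 | rest score) nondecreasing
rest = X[:, 1:].sum(axis=1)                          # rest score in 0..4
props = np.array([X[rest == r, 0].mean()
                  for r in range(5) if np.any(rest == r)])
```

Because latent monotonicity holds in the simulation, the conditional proportions increase with the rest score up to sampling noise; a formal test must account for that noise, which is what the order-constrained inference in the paper does.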
Modeling non-monotonic properties under propositional argumentation
Wang, Geng; Lin, Zuoquan
2013-03-01
In the field of knowledge representation, argumentation is usually considered an abstract framework for nonclassical logic. In this paper, however, we present a propositional argumentation framework that can be used to more closely simulate real-world argumentation. We thereby argue that, under a dialectical argumentation game, non-monotonic reasoning is possible even under classical logic. We introduce two methods for obtaining non-monotonicity: one assigns plausibility to arguments, the other adds "exceptions", which are similar to defaults. Furthermore, we give an alternative definition of propositional argumentation using argumentative models, which is closely related to the previous reasoning method but comes with a simple algorithm for calculation.
Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok
2016-12-05
High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
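A toy sketch of the gene masking idea: a binary mask over features evolved by a genetic algorithm, with fitness given by classification accuracy minus a small penalty on mask size. The nearest-centroid classifier, truncation selection, and synthetic data below are illustrative assumptions; the paper's actual GA operators, classifiers and datasets differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: 2 informative features out of 20 (a stand-in for gene expression)
n, d = 200, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fitness(mask):
    """Nearest-centroid accuracy on unmasked features, minus a size penalty
    so non-contributing features are dropped."""
    if mask.sum() == 0:
        return 0.0
    Xm = X[:, mask.astype(bool)]
    c0, c1 = Xm[y == 0].mean(0), Xm[y == 1].mean(0)
    pred = (np.linalg.norm(Xm - c1, axis=1) <
            np.linalg.norm(Xm - c0, axis=1)).astype(int)
    return (pred == y).mean() - 0.01 * mask.sum()

pop = rng.integers(0, 2, size=(30, d))        # population of binary masks
for _ in range(40):                           # simple generational GA
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-15:]]              # truncation selection
    kids = parents[rng.integers(0, 15, 30)].copy()    # clone parents
    flip = rng.random(kids.shape) < 0.05              # bit-flip mutation
    kids[flip] ^= 1
    pop = kids
best = pop[np.argmax([fitness(m) for m in pop])]
```

The surviving mask concentrates on the informative features, mirroring the paper's observation that masking both reduces dimensionality and highlights features of special significance.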
High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data
Morelli, Eugene A.
1997-01-01
Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
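The core idea of evaluating the finite Fourier transform at an arbitrary set of frequencies can be checked against an analytic case with a plain trapezoidal rule. The paper's method uses cubic interpolation of the sampled data and the chirp z-transform, which are more accurate and faster; the sketch below only verifies the concept on a signal with a known transform.

```python
import numpy as np

def finite_fourier(x, t, freqs):
    """Trapezoidal approximation of X(f) = integral_0^T x(t) e^{-2*pi*i*f*t} dt,
    evaluated at an arbitrary set of frequencies."""
    dt = t[1] - t[0]                                  # uniform sampling assumed
    g = np.exp(-2j * np.pi * np.outer(freqs, t)) * x
    return dt * (0.5 * g[:, 0] + g[:, 1:-1].sum(axis=1) + 0.5 * g[:, -1])

a, T = 2.0, 4.0
t = np.linspace(0.0, T, 4001)
x = np.exp(-a * t)                                    # signal with known transform
freqs = np.array([0.0, 0.37, 1.0, 2.5])               # arbitrary frequency points
num = finite_fourier(x, t, freqs)
s = a + 2j * np.pi * freqs
exact = (1.0 - np.exp(-s * T)) / s                    # analytic finite transform
```

The freedom to place `freqs` anywhere, rather than on the FFT grid, is the "arbitrary frequency resolution" the paper obtains via the chirp z-transform.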
High-Accuracy Spherical Near-Field Measurements for Satellite Antenna Testing
DEFF Research Database (Denmark)
Breinbjerg, Olav
2017-01-01
The spherical near-field antenna measurement technique is unique in combining several distinct advantages and it generally constitutes the most accurate technique for experimental characterization of radiation from antennas. From the outset in 1970, spherical near-field antenna measurements have matured into a well-established technique that is widely used for testing antennas for many wireless applications. In particular, for high-accuracy applications, such as remote sensing satellite missions in ESA's Earth Observation Programme with uncertainty requirements at the level of 0.05 dB - 0.10 dB, the spherical near-field antenna measurement technique is generally superior. This paper addresses the means to achieving high measurement accuracy; these include the measurement technique per se, its implementation in terms of proper measurement procedures, the use of uncertainty estimates, as well as facility...
A New Approach to High-accuracy Road Orthophoto Mapping Based on Wavelet Transform
Directory of Open Access Journals (Sweden)
Ming Yang
2011-12-01
Full Text Available Existing orthophoto maps based on satellite and aerial photography are not precise enough for road marking. This paper proposes a new approach to high-accuracy orthophoto mapping. The approach uses an inverse perspective transformation to process the image information and generate orthophoto fragments. An offline interpolation algorithm is used to process the location information: it combines dead reckoning with EKF location estimates and uses the result to transform the fragments into the global coordinate system. Finally, a wavelet transform divides the image into two frequency bands, which are processed separately with a weighted median algorithm. Experimental results show that the map produced with this method has high accuracy.
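The inverse perspective step amounts to mapping image pixels through a plane-to-plane homography. A minimal sketch with an assumed, pre-calibrated 3×3 matrix follows; the paper's calibration, EKF fusion and wavelet stitching are not reproduced here.

```python
import numpy as np

def inverse_perspective(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography H to ground-plane
    coordinates. H is assumed to come from an offline camera calibration."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]              # divide out the projective scale

# toy homography: identity plus a mild perspective term (illustrative values)
H = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.001, 1.0]])
pts = np.array([[100.0, 200.0], [320.0, 240.0]])
ground = inverse_perspective(H, pts)
```

Each orthophoto fragment is produced by applying such a map to the camera image; the global map is then assembled from the fragments using the vehicle pose estimates.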
Bruns, M.; Keyson, D.V.; Jabon, M.E.; Hummels, C.C.M.; Hekkert, P.P.M.; Bailenson, J.N.
2013-01-01
Control errors often occur in repetitive and monotonous tasks, such as manual assembly tasks. Much research has been done in the area of human error identification; however, most existing systems focus solely on the prediction of errors, not on increasing worker accuracy. The current study examines
Identification and delineation of areas flood hazard using high accuracy of DEM data
Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.
2018-05-01
Flood incidents that often occur in Karawang regency need to be mitigated. Mitigation relies on technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High accuracy DEM data used in the modeling results in better flood models. Processing high accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by identifying wetland areas using DEM data and Landsat-8 images. High-resolution TerraSAR-X data are used to detect wetlands in the landscape, while land cover is identified from Landsat image data. The Topographic Wetness Index (TWI) method is used to detect and identify wetland areas from the DEM data, while land cover is analyzed using the Tasseled Cap Transformation (TCT) method. TWI modeling yields information about land with flood potential. Overlaying the TWI map with the land cover map shows that in Karawang regency the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area mapping in this study was 87%.
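The TWI computation itself is compact: TWI = ln(a / tan β), where a is the specific catchment area and β the local slope. A sketch follows, under the assumption that a flow-accumulation grid is already available from a separate routing step (e.g. D8, not implemented here); the toy DEM and accumulation values are illustrative.

```python
import numpy as np

def twi(dem, acc, cellsize):
    """Topographic Wetness Index: ln(a / tan(beta)). `acc` is a
    flow-accumulation grid assumed to come from a separate routing step."""
    dzdy, dzdx = np.gradient(dem, cellsize)   # gradient along rows, then cols
    slope = np.hypot(dzdx, dzdy)              # tan(beta)
    slope = np.maximum(slope, 1e-6)           # avoid division by zero on flats
    a = (acc + 1.0) * cellsize                # specific catchment area
    return np.log(a / slope)

# toy DEM: a V-shaped valley along the column x = 25
yy, xx = np.mgrid[0:50, 0:50]
dem = np.abs(xx - 25).astype(float)
acc = np.where(xx == 25, 500.0, 1.0)          # pretend accumulation in the valley
index = twi(dem, acc, cellsize=10.0)
```

High-TWI cells (flat, high-accumulation) are the candidate wetland and flood-prone areas that are then intersected with the land cover map.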
Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt
2018-03-01
Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries’ orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 {M}ȯ –30 {M}ȯ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M tot (10 M tot).
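The Fisher matrix technique used above can be sketched for a toy deterministic signal in white noise: the matrix of inner products of parameter derivatives gives, via its inverse, the Cramér-Rao bound on the parameter covariance. The model, parameters and noise level below are illustrative and unrelated to the actual eccentric waveform model.

```python
import numpy as np

def fisher_matrix(model, theta, t, sigma, eps=1e-6):
    """Fisher matrix F_ij = sum_k (dh/dtheta_i)(dh/dtheta_j) / sigma^2 for a
    deterministic signal h(t; theta) in white noise, with central-difference
    derivatives."""
    grads = []
    for i in range(len(theta)):
        tp = np.array(theta, float); tm = tp.copy()
        tp[i] += eps; tm[i] -= eps
        grads.append((model(tp, t) - model(tm, t)) / (2 * eps))
    return np.array([[np.sum(gi * gj) for gj in grads]
                     for gi in grads]) / sigma ** 2

def h(theta, t):                       # toy signal: amplitude and frequency
    A, f = theta
    return A * np.sin(2 * np.pi * f * t)

t = np.linspace(0.0, 10.0, 2000)
F = fisher_matrix(h, [1.0, 1.5], t, sigma=0.1)
cov = np.linalg.inv(F)                 # Cramer-Rao bound on the covariance
sigma_A, sigma_f = np.sqrt(np.diag(cov))
```

The diagonal of the inverse Fisher matrix bounds the variance of each parameter estimate; this is the sense in which the paper quotes "measurement accuracy" for chirp mass and sky localization.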
A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System
Directory of Open Access Journals (Sweden)
Guanwu Zhou
2014-07-01
Full Text Available Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed on LabVIEW which incorporates an extreme learning machine (ELM as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU after calibration in the computer. Practical pressure measurement experiments are carried out to verify the system’s performance. The temperature compensation is solved in the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM acquires higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13%FS, 0.15%FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirement of the sensor.
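The role of the ELM in the compensation pipeline can be sketched as follows. The drift model, input scaling and network size are invented for illustration; the closed-form least-squares output layer is what gives the ELM the fast batch training and generalization the abstract cites.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random fixed hidden layer, output weights
    solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# synthetic calibration set: raw sensor output drifts nonlinearly with
# temperature (illustrative drift model, not the paper's data)
n = 500
temp = rng.uniform(-40.0, 85.0, n)
p_true = rng.uniform(0.0, 10.0, n)
raw = p_true + 0.02 * (temp - 20.0) + 1e-4 * (temp - 20.0) ** 2
X = np.column_stack([raw / 10.0, temp / 85.0])   # scale inputs for tanh
W, b, beta = elm_fit(X, p_true)
err = np.abs(elm_predict(X, W, b, beta) - p_true)
```

After training, only the small matrices `W`, `b` and `beta` need to be ported to the MCU, which matches the deployment flow described in the abstract.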
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Directory of Open Access Journals (Sweden)
Zheng You
2013-04-01
Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method are proved to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
Optical system error analysis and calibration method of high-accuracy star trackers.
Sun, Ting; Xing, Fei; You, Zheng
2013-04-08
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method are proved to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-10-12
In order to improve the accuracy of the ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derive the general formula for an arbitrary-factor interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions are reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we add compensation: the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verify the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
Directory of Open Access Journals (Sweden)
Peilu Liu
2017-10-01
Full Text Available In order to improve the accuracy of the ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derive the general formula for an arbitrary-factor interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions are reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we add compensation: the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verify the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
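The structure of a CIC interpolator, the building block discussed in the two records above, can be sketched in a few lines. This is the textbook Hogenauer structure with differential delay 1, not the papers' parallelized or compensated variant.

```python
import numpy as np

def cic_interpolate(x, R=8, N=3):
    """N-stage CIC interpolator with rate change R and differential delay 1:
    comb (differencing) stages at the low rate, zero-stuffing upsampling,
    then integrator stages at the high rate; scaled by the DC gain R**(N-1)."""
    y = x.astype(float)
    for _ in range(N):                              # comb stages (low rate)
        y = np.concatenate(([y[0]], np.diff(y)))
    up = np.zeros(len(y) * R)
    up[::R] = y                                     # upsample by zero-stuffing
    for _ in range(N):                              # integrator stages (high rate)
        up = np.cumsum(up)
    return up / R ** (N - 1)

out = cic_interpolate(np.ones(32), R=8, N=3)        # DC input -> DC output of 1
```

A constant input reproduces a constant output once the transient (about N·R samples) has passed, confirming the DC gain; CIC filters need no multipliers, which is why they suit FPGA implementations like the one in the paper.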
High-accuracy determination of the neutron flux at n{sub T}OF
Energy Technology Data Exchange (ETDEWEB)
Barbagallo, M.; Colonna, N.; Mastromarco, M.; Meaze, M.; Tagliente, G.; Variale, V. [Sezione di Bari, INFN, Bari (Italy); Guerrero, C.; Andriamonje, S.; Boccone, V.; Brugger, M.; Calviani, M.; Cerutti, F.; Chin, M.; Ferrari, A.; Kadi, Y.; Losito, R.; Versaci, R.; Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Tsinganis, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); National Technical University of Athens (NTUA), Athens (Greece); Tarrio, D.; Duran, I.; Leal-Cidoncha, E.; Paradela, C. [Universidade de Santiago de Compostela, Santiago (Spain); Altstadt, S.; Goebel, K.; Langer, C.; Reifarth, R.; Schmidt, S.; Weigand, M. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (Germany); Andrzejewski, J.; Marganiec, J.; Perkowski, J. [Uniwersytet Lodzki, Lodz (Poland); Audouin, L.; Leong, L.S.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Becares, V.; Cano-Ott, D.; Garcia, A.R.; Gonzalez-Romero, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Becvar, F.; Krticka, M.; Kroll, J.; Valenta, S. [Charles University, Prague (Czech Republic); Belloni, F.; Fraval, K.; Gunsing, F.; Lampoudis, C.; Papaevangelou, T. [Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Berthoumieux, E.; Chiaveri, E. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Billowes, J.; Ware, T.; Wright, T. [University of Manchester, Manchester (United Kingdom); Bosnar, D.; Zugec, P. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Calvino, F.; Cortes, G.; Gomez-Hornillos, M.B.; Riego, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Carrapico, C.; Goncalves, I.F.; Sarmento, R.; Vaz, P. 
[Universidade Tecnica de Lisboa, Instituto Tecnologico e Nuclear, Instituto Superior Tecnico, Lisboa (Portugal); Cortes-Giraldo, M.A.; Praena, J.; Quesada, J.M.; Sabate-Gilarte, M. [Universidad de Sevilla, Sevilla (Spain); Diakaki, M.; Karadimos, D.; Kokkoris, M.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Domingo-Pardo, C.; Giubrone, G.; Tain, J.L. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Kivel, N.; Schumann, D.; Steinegger, P. [Paul Scherrer Institut, Villigen PSI (Switzerland); Dzysiuk, N.; Mastinu, P.F. [Laboratori Nazionali di Legnaro, INFN, Rome (Italy); Eleftheriadis, C.; Manousos, A. [Aristotle University of Thessaloniki, Thessaloniki (Greece); Ganesan, S.; Gurusamy, P.; Saxena, A. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Griesmayer, E.; Jericha, E.; Leeb, H. [Technische Universitaet Wien, Atominstitut, Wien (AT); Hernandez-Prieto, A. [European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Jenkins, D.G.; Vermeulen, M.J. [University of York, Heslington, York (GB); Kaeppeler, F. [Institut fuer Kernphysik, Karlsruhe Institute of Technology, Campus Nord, Karlsruhe (DE); Koehler, P. [Oak Ridge National Laboratory (ORNL), Oak Ridge (US); Lederer, C. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE); University of Vienna, Faculty of Physics, Vienna (AT); Massimi, C.; Mingrone, F.; Vannini, G. [Universita di Bologna (IT); INFN, Sezione di Bologna, Dipartimento di Fisica, Bologna (IT); Mengoni, A.; Ventura, A. [Agenzia nazionale per le nuove tecnologie, l' energia e lo sviluppo economico sostenibile (ENEA), Bologna (IT); Milazzo, P.M. [Sezione di Trieste, INFN, Trieste (IT); Mirea, M. [Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Mondalaers, W.; Plompen, A.; Schillebeeckx, P. 
[Institute for Reference Materials and Measurements, European Commission JRC, Geel (BE); Pavlik, A.; Wallner, A. [University of Vienna, Faculty of Physics, Vienna (AT); Rauscher, T. [University of Basel, Department of Physics and Astronomy, Basel (CH); Roman, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Laboratori Nazionali del Gran Sasso dell' INFN, Assergi (AQ) (IT); Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE)
2013-12-15
The neutron flux of the n{sub T}OF facility at CERN was measured, after installation of the new spallation target, with four different systems based on three neutron-converting reactions, which represent accepted cross-section standards in different energy regions. A careful comparison and combination of the different measurements allowed us to reach an unprecedented accuracy on the energy dependence of the neutron flux in the very wide range (thermal to 1 GeV) that characterizes the n{sub T}OF neutron beam. This is a prerequisite for the high accuracy of cross-section measurements at n{sub T}OF. An unexpected anomaly in the neutron-induced fission cross section of {sup 235}U is observed in the energy region between 10 and 30 keV, hinting at a possible overestimation of this important cross section, well above currently assigned uncertainties. (orig.)
Fission product model for BWR analysis with improved accuracy in high burnup
International Nuclear Information System (INIS)
Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira
1998-01-01
A new fission product (FP) chain model has been studied for use in BWR lattice calculations. In establishing the model, two requirements were considered simultaneously: accuracy in predicting burnup reactivity, and ease of practical application. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo-nuclides whose absorption cross sections are independent of burnup history and fuel composition. For verification, extensive numerical tests covering a wide range of operational conditions and fuel compositions have been carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100 GWd/t. It is concluded that the present model offers a high degree of accuracy for FP representation in BWR lattice calculations. (author)
High Accuracy Attitude Control System Design for Satellite with Flexible Appendages
Directory of Open Access Journals (Sweden)
Wenya Zhou
2014-01-01
In order to realize high-accuracy attitude control of a satellite with flexible appendages, an attitude control system consisting of a controller and a structural filter was designed. When a low-order vibration frequency of the flexible appendages approaches the bandwidth of the attitude control system, the vibration signal enters the control system through the measurement device and degrades the accuracy or even the stability. To reduce the impact of appendage vibration on the attitude control system, the structural filter is designed to reject the vibration of the flexible appendages. Considering the potential problem of in-orbit frequency variation of the flexible appendages, a design method for an adaptive notch filter is proposed based on in-orbit identification technology. Finally, simulation results are given to demonstrate the feasibility and effectiveness of the proposed design techniques.
International Nuclear Information System (INIS)
Furukawa, Masaru; Ohkawa, Yushiro; Matsuyama, Akinobu
2016-01-01
A high-accuracy numerical integration algorithm for charged particle motion is developed. The algorithm is based on Hamiltonian mechanics and operator decomposition. It is constructed to be time-reversal symmetric, and its order of accuracy can be increased to any order by a recurrence formula. One of its advantages is that it is an explicit method. An effective way to decompose the time evolution operator is examined: the Poisson tensor is decomposed and non-canonical variables are adopted. The algorithm is extended to the case of time-dependent fields by introducing the extended phase space. Numerical tests showing the performance of the algorithm are presented. One is pure cyclotron motion over a long time period, and the other is charged particle motion in a rapidly oscillating field. (author)
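As a concrete illustration of the splitting idea (a minimal sketch only, not the authors' Poisson-tensor algorithm), a time-reversal-symmetric Strang splitting for the pure cyclotron test case alternates position drifts with an exact velocity rotation; because the rotation substep preserves the speed, kinetic energy shows no secular drift over long runs:

```python
import math

def strang_step(x, v, qm_B, dt):
    """One time-reversal-symmetric (Strang) step for motion in a
    uniform magnetic field B = B e_z: half position drift, exact
    rotation of (vx, vy) by the cyclotron angle, half drift again."""
    # half drift in position
    x = [xi + 0.5 * dt * vi for xi, vi in zip(x, v)]
    # exact rotation of the perpendicular velocity (B along z)
    th = -qm_B * dt
    c, s = math.cos(th), math.sin(th)
    v = [c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]]
    # half drift in position
    x = [xi + 0.5 * dt * vi for xi, vi in zip(x, v)]
    return x, v

# long-run check: the rotation substep preserves |v|, so kinetic
# energy stays at rounding level even after many steps
x, v = [1.0, 0.0, 0.0], [0.0, 1.0, 0.1]
e0 = sum(vi * vi for vi in v)
for _ in range(100000):
    x, v = strang_step(x, v, 1.0, 0.05)
assert abs(sum(vi * vi for vi in v) - e0) < 1e-9
```

The half-drift/kick/half-drift symmetry is what makes the scheme time-reversal symmetric and second-order accurate; higher orders can be built by composing such steps.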
High-accuracy defect sizing for CRDM penetration adapters using the ultrasonic TOFD technique
International Nuclear Information System (INIS)
Atkinson, I.
1995-01-01
Ultrasonic time-of-flight diffraction (TOFD) is the preferred technique for critical sizing of through-wall oriented defects in a wide range of components, primarily because it is intrinsically more accurate than amplitude-based techniques. For the same reason, TOFD is the preferred technique for sizing the cracks in control rod drive mechanism (CRDM) penetration adapters, which have been the subject of much recent attention. Once the considerable problem of restricted access for the UT probes has been overcome, this inspection lends itself to very high-accuracy defect sizing using TOFD. In qualification trials under industrial conditions, depth sizing to an accuracy of ≤ 0.5 mm has been routinely achieved throughout the full wall thickness (16 mm) of the penetration adapters, using only a single probe pair and without recourse to signal processing. (author)
Non-monotonic relationships between emotional arousal and memory for color and location.
Boywitt, C Dennis
2015-01-01
Recent research points to the decreased diagnostic value of subjective retrieval experience for memory accuracy with emotional stimuli. While for neutral stimuli rich recollective experiences are associated with better context memory than merely familiar memories, this association appears questionable for emotional stimuli. The present research tested the implicit assumption that the effect of emotional arousal on memory is monotonic, that is, steadily increasing (or decreasing) with increasing arousal. In two experiments, emotional arousal was manipulated in three steps using emotional pictures, and subjective retrieval experience as well as context memory were assessed. The results show an inverted U-shaped relationship between arousal and recognition memory, but for context memory and retrieval experience the relationship was more complex: for frame colour, context memory decreased linearly, while for spatial location it followed the inverted U-shaped function. These complex, non-monotonic relationships between arousal and memory are discussed as possible explanations for earlier divergent findings.
High accuracy of family history of melanoma in Danish melanoma cases
DEFF Research Database (Denmark)
Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie
2015-01-01
The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high-risk country for melanoma. In the last two decades, multiple public campaigns have sought to increase awareness of melanoma. Family history of melanoma is a known major risk factor...... but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old, and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma...
Frey, Bradley J.; Leviton, Douglas B.
2005-01-01
The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.
Strong monotonicity in mixed-state entanglement manipulation
International Nuclear Information System (INIS)
Ishizaka, Satoshi
2006-01-01
A strong entanglement monotone, which never increases under local operations and classical communication (LOCC), restricts quantum entanglement manipulation more strongly than the usual monotone, since the usual one merely does not increase on average under LOCC. We propose strong monotones in mixed-state entanglement manipulation under LOCC. These are related to the decomposability and one-positivity of an operator constructed from a quantum state, and reveal geometrical characteristics of entangled states. They are lower bounded by the negativity or the generalized robustness of entanglement.
Monotonicity-based electrical impedance tomography for lung imaging
Zhou, Liangdong; Harrach, Bastian; Seo, Jin Keun
2018-04-01
This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e. the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other parts using their different periodic natures. The time-differential current-voltage operator corresponding to lung ventilation can be viewed as either positive or negative semi-definite, owing to the monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed methods in numerical simulations, phantom experiments and human experiments.
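The semi-definiteness constraint can be illustrated with a minimal sketch (assumptions: the time-differential data are arranged as a square matrix and symmetrized, as reciprocity suggests; this is illustrative only, not the authors' reconstruction code):

```python
import numpy as np

def definiteness_sign(dV, tol=1e-10):
    """Classify a time-differential current-voltage matrix dV as
    positive semi-definite (+1), negative semi-definite (-1), or
    indefinite (0), via the eigenvalues of its symmetric part."""
    S = 0.5 * (dV + dV.T)          # symmetrize (reciprocity assumption)
    w = np.linalg.eigvalsh(S)      # real eigenvalues, ascending
    if w.min() >= -tol:
        return +1
    if w.max() <= tol:
        return -1
    return 0

# a monotonic conductivity increase/decrease in the lung region
# shows up as one definiteness sign or the other
assert definiteness_sign(np.diag([1.0, 2.0, 0.0])) == +1
assert definiteness_sign(np.diag([-1.0, -0.5, 0.0])) == -1
assert definiteness_sign(np.diag([1.0, -1.0])) == 0
```

In an imaging pipeline, frames violating the expected sign for the ventilation component could be flagged or projected back onto the constraint set.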
High-Accuracy Elevation Data at Large Scales from Airborne Single-Pass SAR Interferometry
Directory of Open Access Journals (Sweden)
Guy Jean-Pierre Schumann
2016-01-01
Digital elevation models (DEMs) are essential data sets for disaster risk management and humanitarian relief services, as well as for many environmental process models. At present, on the one hand, globally available DEMs only meet basic requirements: for many services and modeling studies they are not of high enough spatial resolution and lack accuracy in the vertical. On the other hand, LiDAR DEMs are of very high spatial resolution and great vertical accuracy, but acquisition operations can be very costly for spatial scales larger than a couple of hundred square km, and they also have severe limitations in wetland areas and under cloudy and rainy conditions. The ideal situation would thus be a DEM technology that allows larger spatial coverage than LiDAR without compromising resolution and vertical accuracy, while still performing under some adverse weather conditions and at a reasonable cost. In this paper, we present a novel single-pass InSAR technology for airborne vehicles that is cost-effective and can generate DEMs with a vertical error of around 0.3 m at an average spatial resolution of 3 m. To demonstrate this capability, we compare a sample single-pass InSAR Ka-band DEM of the California Central Valley from the NASA/JPL airborne GLISTIN-A to a high-resolution LiDAR DEM. We also perform a simple sensitivity analysis to floodplain inundation. Based on the findings of our analysis, we argue that this type of technology can and should be used to replace large regions of globally available lower-resolution DEMs, particularly in coastal, delta and floodplain areas where a high number of assets, habitats and lives are at risk from natural disasters. We conclude with a discussion of requirements, advantages and caveats in terms of instrument and data processing.
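In its simplest form, the vertical-error comparison against a LiDAR reference reduces to a pixelwise RMSE on co-registered grids; a minimal sketch with toy arrays standing in for the GLISTIN-A and LiDAR DEMs (not the paper's actual data):

```python
import numpy as np

def vertical_rmse(dem, reference):
    """Root-mean-square vertical error of a DEM against a
    co-registered, same-grid reference (e.g. a LiDAR DEM).
    NaN cells (voids) are ignored."""
    diff = np.asarray(dem, float) - np.asarray(reference, float)
    return float(np.sqrt(np.nanmean(diff ** 2)))

# toy 2x2 grids of elevations in metres
insar = np.array([[10.2, 11.1], [9.7, 10.4]])
lidar = np.array([[10.0, 11.0], [10.0, 10.5]])
err = vertical_rmse(insar, lidar)
assert 0.0 < err < 0.3  # comparable to the ~0.3 m error quoted above
```

Real comparisons additionally require resampling to a common grid and masking vegetation or water, which this sketch omits.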
International Nuclear Information System (INIS)
Jeong, Chang-Joon; Okumura, Keisuke; Ishiguro, Yukio; Tanaka, Ken-ichi
1990-01-01
Validation tests were made of the accuracy of cell calculation methods used in analyses of the tight lattices of a mixed-oxide (MOX) fuel core in a high conversion light water reactor (HCLWR). A series of cell calculations was carried out for lattices taken from an international HCLWR benchmark comparison, with emphasis placed on the resonance calculation methods: the NR and IR approximations, and the collision probability method with ultra-fine energy groups. Verification was also performed for the geometrical modelling (a hexagonal/cylindrical cell) and the boundary condition (mirror/white reflection). In the calculations, important reactor physics parameters, such as the neutron multiplication factor, the conversion ratio and the void coefficient, were evaluated using the above methods for various HCLWR lattices with different moderator-to-fuel volume ratios, fuel materials and fissile plutonium enrichments. The calculated results were compared with each other, and the accuracy and applicability of each method were clarified by comparison with continuous-energy Monte Carlo calculations. It was verified that the accuracy of the IR approximation became worse when the neutron spectrum became harder. It was also concluded that the cylindrical cell model with the white boundary condition was not as suitable for MOX-fuelled lattices as for UO2-fuelled lattices. (author)
Accuracy of High-Resolution Ultrasonography in the Detection of Extensor Tendon Lacerations.
Dezfuli, Bobby; Taljanovic, Mihra S; Melville, David M; Krupinski, Elizabeth A; Sheppard, Joseph E
2016-02-01
Lacerations to the extensor mechanism are usually diagnosed clinically. Ultrasound (US) has been a growing diagnostic tool for tendon injuries since the 1990s. To date, no publication has established the accuracy and reliability of US in the evaluation of extensor mechanism lacerations in the hand. The purpose of this study is to determine the accuracy of US in detecting extensor tendon injuries in the hand. Sixteen fingers and 4 thumbs in 4 fresh-frozen and thawed cadaveric hands were used. Sixty-eight 0.5-cm transverse skin lacerations were created. Twenty-seven extensor tendons were sharply transected. The remaining skin lacerations were used as sham dissection controls. One US technologist and one fellowship-trained musculoskeletal radiologist performed real-time dynamic US studies in and out of a water bath. A second fellowship-trained musculoskeletal radiologist subsequently reviewed the static US images. Dynamic and static US interpretation accuracy was assessed using dissection as "truth." All 27 extensor tendon lacerations and all controls were identified correctly with dynamic imaging, as either injury models with a transected extensor tendon or sham controls with intact extensor tendons (sensitivity = 100%, specificity = 100%, positive predictive value = 1.0; all significantly greater than chance). Static imaging had a sensitivity of 85%, specificity of 89%, and accuracy of 88% (all significantly greater than chance). The results of dynamic real-time versus static US imaging were clearly different but did not reach statistical significance. Diagnostic US is a very accurate noninvasive study that can identify extensor mechanism injuries. Clinically suspected cases of acute extensor tendon injury scanned by high-frequency US can aid and/or confirm the diagnosis, with dynamic imaging providing added value compared to static. Ultrasonography, to aid in the diagnosis of extensor mechanism lacerations, can be successfully used in a reliable and
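The figures reported above follow from a standard 2x2 confusion-matrix computation; a minimal sketch (the 41 controls are inferred as the 68 lacerations minus the 27 transections, and dissection is taken as ground truth):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and overall accuracy from a 2x2
    confusion matrix: tp/fn on truly injured tendons, tn/fp on
    intact (sham control) tendons."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# dynamic US: all 27 transections and all 41 controls read correctly
sens, spec, acc = diagnostic_metrics(tp=27, fn=0, tn=41, fp=0)
assert (sens, spec, acc) == (1.0, 1.0, 1.0)
```

The static-imaging figures (85%/89%/88%) would come out of the same function once the misread counts are plugged in.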
Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth
Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus
2013-03-01
Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but until now some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High-frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound-based micro-scanning system remains a critical parameter, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound-based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd-harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamentals and the 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out for evaluation of spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound-based tooth digitization can be an alternative to optical impression-taking.
Kaus, M; Steinmeier, R; Sporer, T; Ganslandt, O; Fahlbusch, R
1997-12-01
This study was designed to determine and evaluate the different system-inherent sources of erroneous target localization of a light-emitting diode (LED)-based neuronavigation system (StealthStation, Stealth Technologies, Boulder, CO). The localization accuracy was estimated by applying a high-precision mechanical micromanipulator to move and exactly locate (+/- 0.1 micron) the pointer at multiple positions in the physical three-dimensional space. The localization error was evaluated by calculating the spatial distance between the (known) LED positions and the LED coordinates measured by the neuronavigator. The results are based on a study of approximately 280,000 independent coordinate measurements. The maximum localization error detected was 0.55 +/- 0.29 mm, with the z direction (distance to the camera array) being the most erroneous coordinate. Minimum localization error was found at a distance of 1400 mm from the central camera (optimal measurement position). Additional error due to 1) mechanical vibrations of the camera tripod (+/- 0.15 mm) and the reference frame (+/- 0.08 mm) and 2) extrapolation of the pointer tip position from the LED coordinates of at least +/- 0.12 mm were detected, leading to a total technical error of 0.55 +/- 0.64 mm. Based on this technical accuracy analysis, a set of handling recommendations is proposed, leading to an improved localization accuracy. The localization error could be reduced by 0.3 +/- 0.15 mm by correct camera positioning (1400 mm distance) plus 0.15 mm by vibration-eliminating fixation of the camera. Correct handling of the probe during the operation may improve the accuracy by up to 0.1 mm.
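The per-position localization error described here is simply the Euclidean distance between the known LED coordinates and those reported by the navigator; a minimal sketch with made-up coordinates (not the study's data):

```python
import numpy as np

def localization_errors(true_pos, measured_pos):
    """Per-point Euclidean localization error (mm) between known
    LED positions and the coordinates measured by the navigator."""
    d = np.asarray(measured_pos, float) - np.asarray(true_pos, float)
    return np.linalg.norm(d, axis=1)

# hypothetical positions: one point off by 0.3 mm in x and 0.4 mm
# in z (the camera-distance axis), one point exact
true_pos = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
measured = [[0.3, 0.0, 0.4], [10.0, 0.0, 0.0]]
err = localization_errors(true_pos, measured)
assert abs(err[0] - 0.5) < 1e-12 and err[1] == 0.0
```

Aggregating such per-point errors (mean and standard deviation over many measurements) yields summary figures of the "0.55 +/- 0.29 mm" form used in the study.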
International Nuclear Information System (INIS)
Zhao, Y; Zimmermann, E; Wolters, B; Van Waasen, S; Huisman, J A; Treichel, A; Kemna, A
2013-01-01
Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now
An angle encoder for super-high resolution and super-high accuracy using SelfA
Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko
2014-06-01
Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2 097 152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science & Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period after
An angle encoder for super-high resolution and super-high accuracy using SelfA
International Nuclear Information System (INIS)
Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko
2014-01-01
Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2 097 152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science and Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period
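The quoted 0.62″ signal period follows directly from the count-per-rotation arithmetic, which can be checked in two lines (the 0.0015″ figure for the interpolated signal is not re-derived here):

```python
# Signal period, in arcseconds, for an encoder with 2**21 counts
# per full rotation of 360 degrees.
ARCSEC_PER_REV = 360 * 3600          # 1 296 000 arcseconds per rotation
period = ARCSEC_PER_REV / 2**21      # arcseconds per signal period
assert abs(period - 0.618) < 0.001   # consistent with the quoted 0.62"
```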
Ultra-high accuracy optical testing: creating diffraction-limitedshort-wavelength optical systems
Energy Technology Data Exchange (ETDEWEB)
Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, KeithH.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman,Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli,Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa
2005-08-03
Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.
Ultra-high accuracy optical testing: creating diffraction-limited short-wavelength optical systems
International Nuclear Information System (INIS)
Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, KeithH.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman, Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli, Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa
2005-01-01
Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.
Risk-Sensitive Control with Near Monotone Cost
International Nuclear Information System (INIS)
Biswas, Anup; Borkar, V. S.; Suresh Kumar, K.
2010-01-01
The infinite horizon risk-sensitive control problem for non-degenerate controlled diffusions is analyzed under a 'near monotonicity' condition on the running cost that penalizes large excursions of the process.
An Examination of Cooper's Test for Monotonic Trend
Hsu, Louis
1977-01-01
A statistic for testing monotonic trend that has been presented in the literature is shown not to be the binomial random variable it is contended to be, but rather it is linearly related to Kendall's tau statistic. (JKS)
A Survey on Operator Monotonicity, Operator Convexity, and Operator Means
Directory of Open Access Journals (Sweden)
Pattrawut Chansangiam
2015-01-01
This paper is an expository article devoted to an important class of real-valued functions introduced by Löwner, namely, operator monotone functions. This concept is closely related to operator convex/concave functions. Various characterizations of such functions are given from the viewpoint of differential analysis, in terms of matrices of divided differences. From the viewpoint of operator inequalities, various characterizations and the relationship between operator monotonicity and operator convexity are given by Hansen and Pedersen. From the viewpoint of measure theory, operator monotone functions on the nonnegative reals admit meaningful integral representations with respect to Borel measures on the unit interval. Furthermore, Kubo-Ando theory asserts the correspondence between operator monotone functions and operator means.
The use of high accuracy NAA for the certification of NIST botanical standard reference materials
International Nuclear Information System (INIS)
Becker, D.A.; Greenberg, R.R.; Stone, S.F.
1992-01-01
Neutron activation analysis is one of many analytical techniques used at the National Institute of Standards and Technology (NIST) for the certification of NIST Standard Reference Materials (SRMs). NAA competes favorably with all other techniques because of its unique capability for high accuracy, even at very low concentrations, for many elements. In this paper, instrumental and radiochemical NAA results are described for 25 elements in two new NIST SRMs, SRM 1515 (Apple Leaves) and SRM 1547 (Peach Leaves), and are compared to the certified values for 19 elements in these two new botanical reference materials. (author) 7 refs.; 4 tabs
High-accuracy critical exponents for O(N) hierarchical 3D sigma models
International Nuclear Information System (INIS)
Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.
2006-01-01
The critical exponent γ and its subleading exponent Δ in Dyson's 3D O(N) hierarchical model are calculated with high accuracy for N up to 20. We calculate the critical temperatures for the measure δ(φ⃗·φ⃗ − 1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with the Polchinski equation and the equivalent Litim equation, in the local potential approximation, with at least 4 significant digits.
High-accuracy mass determination of unstable nuclei with a Penning trap mass spectrometer
2002-01-01
The mass of a nucleus is its most fundamental property. A systematic study of nuclear masses as a function of neutron and proton number allows the observation of collective and single-particle effects in nuclear structure. Accurate mass data are the most basic test of nuclear models and are essential for their improvement. This is especially important for the astrophysical study of nuclear synthesis. In order to achieve the required high accuracy, the mass of ions captured in a Penning trap is determined via their cyclotron frequency $\nu_c = qB/(2\pi m)$.
Energy Technology Data Exchange (ETDEWEB)
Tomasevic, Dj; Altiparmarkov, D [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)
1988-07-01
A variational nodal diffusion method with accurate treatment of the transverse leakage shape is developed and presented in this paper. Using a Legendre expansion in the transverse coordinates, higher-order quasi-one-dimensional nodal equations are formulated. The numerical solution has been carried out using analytical solutions in alternating directions, assuming a Legendre expansion of the RHS term. The method has been tested against the 2D and 3D IAEA benchmark problems, as well as the 2D CANDU benchmark problem. The results are highly accurate. The first-order approximation yields the same order of accuracy as standard nodal methods with quadratic leakage approximation, while the second order reaches the reference solution. (author)
Completely monotonic functions related to logarithmic derivatives of entire functions
DEFF Research Database (Denmark)
Pedersen, Henrik Laurberg
2011-01-01
The logarithmic derivative l(x) of an entire function of genus p and having only non-positive zeros is represented in terms of a Stieltjes function. As a consequence, (−1)^p (x^m l(x))^{(m+p)} is a completely monotonic function for all m ≥ 0. This generalizes earlier results on complete monotonicity of functions related to Euler's psi-function. Applications to Barnes' multiple gamma functions are given.
Monotonic Loading of Circular Surface Footings on Clay
DEFF Research Database (Denmark)
Ibsen, Lars Bo; Barari, Amin
2011-01-01
Appropriate modeling of offshore foundations under monotonic loading is a significant challenge in geotechnical engineering. This paper reports experimental and numerical analyses, specifically investigating the response of circular surface footings during monotonic loading and elastoplastic behavior during reloading. By using the findings presented in this paper, it is possible to extend the model to simulate the vertical load-displacement response of offshore bucket foundations.
A new ultra-high-accuracy angle generator: current status and future direction
Guertin, Christian F.; Geckeler, Ralf D.
2017-09-01
The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that, upon scaling to a full prototype and including additional calibration techniques, we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better, and to offer the immense advantage of a highly automatable and customizable product to the commercial market.
Moduli and Characteristics of Monotonicity in Some Banach Lattices
Directory of Open Access Journals (Sweden)
Miroslav Krbec
2010-01-01
First, the characteristic of monotonicity of any Banach lattice X is expressed in terms of the left limit of the modulus of monotonicity of X at the point 1. It is also shown that for Köthe spaces the classical characteristic of monotonicity is the same as the characteristic of monotonicity corresponding to another modulus of monotonicity, δ^{m,E}. The characteristics of monotonicity of Orlicz function spaces and Orlicz sequence spaces equipped with the Luxemburg norm are calculated. In the function-space case the characteristic is expressed in terms of the generating Orlicz function only, but in the sequence case the formula is not so direct. Three examples show why such a direct formula is hardly possible in the sequence case. Some other auxiliary and complementary results are also presented. By the results of Betiuk-Pilarska and Prus (2008), which establish that Banach lattices X with ε_{0,m}(X) < 1 and the weak orthogonality property have the weak fixed point property, our results are related to fixed point theory (Kirk and Sims (2001)).
High Accuracy, Miniature Pressure Sensor for Very High Temperatures, Phase I
National Aeronautics and Space Administration — SiWave proposes to develop a compact, low-cost MEMS-based pressure sensor for very high temperatures and low pressures in hypersonic wind tunnels. Most currently...
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
A generalized polynomial chaos based ensemble Kalman filter with high accuracy
International Nuclear Information System (INIS)
Li Jia; Xiu Dongbin
2009-01-01
As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
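The two key ingredients described above can be sketched in a toy scalar setting: the forecast state is represented by a gPC expansion (here a one-dimensional Hermite expansion with made-up coefficients, not taken from the paper), which can be sampled at negligible cost, and the resulting large ensemble is passed through a standard perturbed-observation EnKF analysis step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gpc(coeffs, n_samples, rng):
    """Cheaply draw samples from a 1D Hermite (Gaussian-germ) gPC expansion.

    u(xi) = sum_k coeffs[k] * He_k(xi), with xi ~ N(0, 1). Evaluating the
    polynomial is far cheaper than re-solving the state equations, so the
    ensemble can be made arbitrarily large to suppress sampling error.
    """
    xi = rng.standard_normal(n_samples)
    # Probabilists' Hermite recurrence: He_{k+1} = x*He_k - k*He_{k-1}
    He = np.zeros((len(coeffs), n_samples))
    He[0] = 1.0
    if len(coeffs) > 1:
        He[1] = xi
    for k in range(1, len(coeffs) - 1):
        He[k + 1] = xi * He[k] - k * He[k - 1]
    return coeffs @ He

def enkf_update(ensemble, obs, obs_noise_std, rng):
    """Standard (perturbed-observation) EnKF analysis step for a scalar state."""
    perturbed = obs + obs_noise_std * rng.standard_normal(ensemble.shape)
    P = np.var(ensemble, ddof=1)        # forecast error variance
    K = P / (P + obs_noise_std**2)      # Kalman gain (scalar, H = identity)
    return ensemble + K * (perturbed - ensemble)

# Hypothetical forecast: mean 1.0 plus some higher-order gPC content
coeffs = np.array([1.0, 0.3, 0.05])
forecast = sample_gpc(coeffs, 100_000, rng)  # huge ensemble, negligible cost
analysis = enkf_update(forecast, obs=1.4, obs_noise_std=0.2, rng=rng)
```

The analysis mean moves from the forecast mean toward the observation, and the analysis spread shrinks, as expected for a Kalman update.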
A high accuracy algorithm of displacement measurement for a micro-positioning stage
Directory of Open Access Journals (Sweden)
Xiang Zhang
2017-05-01
A high-accuracy displacement measurement algorithm for a two-degrees-of-freedom compliant precision micro-positioning stage is proposed, based on the computer micro-vision technique. The algorithm consists of an integer-pixel and a subpixel matching procedure. A series of simulations is conducted to verify the proposed method. The results show that the proposed algorithm possesses the advantages of high precision and stability; the resolution can theoretically reach 0.01 pixel. In addition, the computation time is reduced by a factor of about 6.7 compared with the classical normalized cross-correlation algorithm. To validate the practical performance of the proposed algorithm, a laser interferometer measurement system (LIMS) is built. The experimental results demonstrate that the algorithm has better adaptability than the LIMS.
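The generic two-stage idea (integer-pixel search followed by subpixel refinement) can be illustrated with a brute-force normalized cross-correlation search plus a parabolic peak fit. This is only a sketch of the general scheme, not the authors' faster algorithm, and all names and the synthetic data are ours.

```python
import numpy as np

def ncc_match(image, template):
    """Integer-pixel template matching via brute-force normalized
    cross-correlation; returns the best (row, col) and the score map."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    tnorm = np.sqrt((t**2).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -2.0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz**2).sum()) * tnorm
            if denom > 0:
                scores[y, x] = (wz * t).sum() / denom
    y, x = np.unravel_index(np.argmax(scores), scores.shape)
    return (y, x), scores

def subpixel_peak(scores, y, x):
    """Refine the integer peak with a 1D parabolic fit along each axis."""
    def parabola(fm, f0, fp):
        d = fm - 2 * f0 + fp
        return 0.0 if d == 0 else 0.5 * (fm - fp) / d
    dy = parabola(scores[y - 1, x], scores[y, x], scores[y + 1, x]) \
        if 0 < y < scores.shape[0] - 1 else 0.0
    dx = parabola(scores[y, x - 1], scores[y, x], scores[y, x + 1]) \
        if 0 < x < scores.shape[1] - 1 else 0.0
    return y + dy, x + dx

# Synthetic example: a smooth blob placed inside a larger empty image
yy, xx = np.mgrid[0:15, 0:15]
template = np.exp(-((yy - 7)**2 + (xx - 7)**2) / 8.0)
image = np.zeros((40, 40))
image[10:25, 12:27] = template          # true top-left corner at (10, 12)
(iy, ix), scores = ncc_match(image, template)
sy, sx = subpixel_peak(scores, iy, ix)
```

With an exact copy of the template in the image, the integer search recovers the true corner and the subpixel refinement stays within half a pixel of it.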
Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.
Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua
2011-05-15
High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for the prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, it is laborious and time consuming to verify these using existing experimental techniques. Therefore, here, we describe a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and a boosting method. It is a high-efficiency tool for novel pre-microRNA prediction, with accuracy up to 94.0% among different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.
High Accuracy mass Measurement of the very Short-Lived Halo Nuclide $^{11}$Li
Le scornet, G
2002-01-01
The archetypal halo nuclide $^{11}$Li has now attracted a wealth of experimental and theoretical attention. The most outstanding property of this nuclide, its extended radius that makes it as big as $^{48}$Ca, is highly dependent on the binding energy of the two neutrons forming the halo. New generation experiments using radioactive beams with elastic proton scattering, knock-out and transfer reactions, together with $\textit{ab initio}$ calculations, require the tightening of the constraint on the binding energy. Good metrology also requires confirmation of the sole existing precision result to guard against a possible systematic deviation (or mistake). We propose a high accuracy mass determination of $^{11}$Li, a particularly challenging task due to its very short half-life of 8.6 ms, but one perfectly suiting the MISTRAL spectrometer, now commissioned at ISOLDE. We request 15 shifts of beam time.
Computer modeling of oil spill trajectories with a high accuracy method
International Nuclear Information System (INIS)
Garcia-Martinez, Reinaldo; Flores-Tovar, Henry
1999-01-01
This paper proposes a high-accuracy numerical method to model oil spill trajectories using a particle-tracking algorithm. The Euler method, used to calculate oil trajectories, can give adequate solutions in most open-ocean applications. However, this method may not predict accurate particle trajectories in certain highly non-uniform velocity fields near coastal zones or in river problems. Simple numerical experiments show that the Euler method may also introduce artificial numerical dispersion that could lead to overestimation of spill areas. This article proposes a fourth-order Runge-Kutta method with fourth-order velocity interpolation to calculate oil trajectories, minimising these problems. The algorithm is implemented in the OilTrack model to predict oil trajectories following the 'Nissos Amorgos' oil spill accident that occurred in the Gulf of Venezuela in 1997. Despite the lack of adequate field information, model results compare well with observations in the impacted area. (Author)
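The classical fourth-order Runge-Kutta particle-tracking step for dx/dt = v(x, t) can be sketched as below. The velocity field here is a stand-in solid-body rotation chosen so the exact answer is known; a real spill model would instead interpolate currents from a hydrodynamic grid, as the paper's fourth-order velocity interpolation does.

```python
import numpy as np

def velocity(pos, t):
    """Stand-in velocity field (solid-body rotation about the origin).
    A real oil-spill model would interpolate ocean currents here."""
    x, y = pos
    return np.array([-y, x])

def rk4_step(pos, t, dt, vel):
    """One classical fourth-order Runge-Kutta step for dx/dt = v(x, t)."""
    k1 = vel(pos, t)
    k2 = vel(pos + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = vel(pos + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = vel(pos + dt * k3, t + dt)
    return pos + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def track(pos0, t0, t1, n_steps, vel):
    """Integrate one particle trajectory. A forward-Euler tracker would
    spiral outward in this rotating field unless dt were made tiny."""
    dt = (t1 - t0) / n_steps
    pos, t = np.asarray(pos0, dtype=float), t0
    for _ in range(n_steps):
        pos = rk4_step(pos, t, dt, vel)
        t += dt
    return pos

# A particle started at (1, 0) in solid-body rotation returns to (1, 0)
# after one period 2*pi; RK4 keeps the error tiny with modest step counts.
final = track([1.0, 0.0], 0.0, 2 * np.pi, 200, velocity)
```

With 200 steps per revolution the RK4 trajectory closes on itself to better than 1e-4, which is the kind of accuracy gain over Euler that motivates the paper's choice.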
Treatment accuracy of hypofractionated spine and other highly conformal IMRT treatments
International Nuclear Information System (INIS)
Sutherland, B.; Hanlon, P.; Charles, P.
2011-01-01
Spinal cord metastases pose difficult challenges for radiation treatment due to tight dose constraints and a concave PTV. This project aimed to thoroughly test the treatment accuracy of the Eclipse Treatment Planning System (TPS) for highly modulated IMRT treatments, in particular of the thoracic spine, using an Elekta Synergy linear accelerator. The increased understanding obtained through different quality assurance techniques allowed recommendations to be made for treatment site commissioning with improved accuracy at the Princess Alexandra Hospital (PAH). Three thoracic spine IMRT plans at the PAH were used for data collection. Complex phantom models were built using CT data, and fields were simulated using Monte Carlo modelling. The simulated dose distributions were compared with the TPS using gamma analysis and DVH comparison. High-resolution QA was done for all fields using the MatriXX ion chamber array, the MapCHECK2 diode array (shifted), and the EPID, to determine a procedure for commissioning new treatment sites. Basic spine simulations found that the TPS overestimated the absorbed dose to bone; however, within the spinal cord there was good agreement. High-resolution QA found the average gamma pass rate of the fields to be 99.1% for MatriXX, 96.5% for MapCHECK2 (shifted) and 97.7% for EPID. Preliminary results indicate agreement between the TPS and delivered dose distributions higher than previously believed for the investigated IMRT plans. The poor resolution of the MatriXX and normalisation issues with MapCHECK2 lead to a probable recommendation of the EPID for future IMRT commissioning, due to its high resolution and the minimal setup required.
High Accuracy, High Energy He-ERD Analysis of H, D, and T
International Nuclear Information System (INIS)
Browning, James F.; Langley, Robert A.; Doyle, Barney L.; Banks, James C.; Wampler, William R.
1999-01-01
A new analysis technique using high-energy helium ions for the simultaneous elastic recoil detection of all three hydrogen isotopes in metal hydride systems, extending to depths of several μm, is presented. Analysis shows that it is possible to separate each hydrogen isotope in a heavy matrix such as erbium to depths of 5 μm using incident 11.48 MeV ⁴He²⁺ ions with a detection system composed of a range foil and a ΔE-E telescope detector. Newly measured cross sections for the elastic recoil scattering of ⁴He²⁺ ions from protons and deuterons are presented in the energy range 10 to 11.75 MeV for a laboratory recoil angle of 30°.
PACMAN Project: A New Solution for the High-accuracy Alignment of Accelerator Components
Mainaud Durand, Helene; Buzio, Marco; Caiazza, Domenico; Catalán Lasheras, Nuria; Cherif, Ahmed; Doytchinov, Iordan; Fuchs, Jean-Frederic; Gaddi, Andrea; Galindo Munoz, Natalia; Gayde, Jean-Christophe; Kamugasa, Solomon; Modena, Michele; Novotny, Peter; Russenschuck, Stephan; Sanz, Claude; Severino, Giordana; Tshilumba, David; Vlachakis, Vasileios; Wendt, Manfred; Zorzetti, Silvia
2016-01-01
The beam alignment requirements for the next generation of lepton colliders have become increasingly challenging. As an example, the alignment requirements for the three major collider components of the CLIC linear collider are as follows. Before the first beam circulates, the Beam Position Monitors (BPM), Accelerating Structures (AS) and quadrupoles will have to be aligned to within 10 μm w.r.t. a straight line over 200 m long segments, along the 20 km of linacs. PACMAN is a study on Particle Accelerator Components' Metrology and Alignment to the Nanometre scale. It is an Innovative Doctoral Program, funded by the EU and hosted by CERN, providing high quality training to 10 Early Stage Researchers working towards a PhD thesis. The technical aim of the project is to improve the alignment accuracy of the CLIC components by developing new methods and tools addressing several steps of alignment simultaneously, to gain time and accuracy. The tools and methods developed will be validated on a test bench. This paper pr...
High Accuracy Mass Measurement of the Dripline Nuclides $^{12,14}$Be
2002-01-01
State-of-the-art three-body nuclear models that describe halo nuclides require the binding energy of the halo neutron(s) as a critical input parameter. In the case of $^{14}$Be, the uncertainty of this quantity is currently far too large (130 keV), inhibiting efforts at detailed theoretical description. A high accuracy, direct mass determination of $^{14}$Be (as well as $^{12}$Be to obtain the two-neutron separation energy) is therefore required. The measurement can be performed with the MISTRAL spectrometer, which is presently the only possible solution due to the required accuracy (10 keV) and short half-life (4.5 ms). Having achieved a 5 keV uncertainty for the mass of $^{11}$Li (8.6 ms), MISTRAL has proved the feasibility of such measurements. Since the current ISOLDE production rate of $^{14}$Be is only about 10/s, the installation of a beam cooler is underway in order to improve MISTRAL transmission. The projected improvement of an order of magnitude (in each transverse direction) will make this measureme...
Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains
Moorhead, Jerry; Gowda, Prasanna H.; Hobbins, Michael; Senay, Gabriel; Paul, George; Marek, Thomas; Porter, Dana
2015-01-01
The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide a large-scale spatial representation of ETref, which is essential for regional-scale water resources management. Data used in the development of the NOAA daily ETref maps are derived from observations over surfaces that are different from short (grass, ETos) or tall (alfalfa, ETrs) reference crops, often in nonagricultural settings, which carries an unknown discrepancy between assumed and actual conditions. In this study, NOAA daily ETos and ETrs maps were evaluated for accuracy using observed data from the Texas High Plains Evapotranspiration (TXHPET) network. Daily ETos, ETrs and the climatic data (air temperature, wind speed, and solar radiation) used for calculating ETref were extracted from the NOAA maps for the TXHPET locations and compared against ground measurements on reference grass surfaces. The NOAA ETref maps generally overestimated the TXHPET observations (by 1.4 and 2.2 mm/day for ETos and ETrs, respectively), which may be attributed to errors in the NLDAS-modeled air temperature and wind speed, to which ETref is most sensitive. Therefore, a bias correction to the NLDAS-modeled air temperature and wind speed data, or an adjustment to the resulting NOAA ETref, may be needed to improve the accuracy of the NOAA ETref maps.
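The simplest form of the adjustment mentioned above is an additive bias correction of the gridded product against station observations. A minimal sketch, with hypothetical daily values standing in for the NOAA and TXHPET series (not the study's data):

```python
import numpy as np

# Hypothetical daily ETos series (mm/day): gridded product vs. station
noaa_et = np.array([5.1, 6.0, 7.2, 6.6, 5.8, 7.9])
station_et = np.array([3.9, 4.5, 5.6, 5.2, 4.4, 6.3])

# Mean overestimation of the gridded product, and an additive correction
bias = np.mean(noaa_et - station_et)   # mm/day
corrected = noaa_et - bias

# RMSE against the station series, before and after correction
rmse_before = np.sqrt(np.mean((noaa_et - station_et) ** 2))
rmse_after = np.sqrt(np.mean((corrected - station_et) ** 2))
```

An additive shift removes the mean bias by construction; correcting the NLDAS input variables instead, as the study suggests, would also address day-to-day error structure.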
High Accuracy Beam Current Monitor System for CEBAF'S Experimental Hall A
International Nuclear Information System (INIS)
J. Denard; A. Saha; G. Lavessiere
2001-01-01
The CEBAF accelerator delivers continuous wave (CW) electron beams to three experimental halls. In Hall A, all experiments require continuous, non-invasive current measurements, and a few experiments require an absolute accuracy of 0.2% in the current range from 1 to 180 μA. A Parametric Current Transformer (PCT), manufactured by Bergoz, has an accurate and stable sensitivity of 4 μA/V, but its offset drifts at the μA level over time preclude its direct use for continuous measurements. Two cavity monitors are calibrated against the PCT with at least 50 μA of beam current. The calibration procedure suppresses the error due to the PCT's offset drifts by turning the beam on and off, which is invasive to the experiment. One of the goals of the system is to minimize the calibration time without compromising the measurement's accuracy. The linearity of the cavity monitors is a critical parameter for transferring the accurate calibration done at high currents over the whole dynamic range. The method for accurately measuring the linearity is described.
Eisenberger, Ute; Wüthrich, Rudolf P; Bock, Andreas; Ambühl, Patrice; Steiger, Jürg; Intondi, Allison; Kuranoff, Susan; Maier, Thomas; Green, Damian; DiCarlo, Lorenzo; Feutren, Gilles; De Geest, Sabina
2013-08-15
This open-label, single-arm exploratory study evaluated the accuracy of the Ingestible Sensor System (ISS), a novel technology for directly assessing the ingestion of oral medications and treatment adherence. The ISS consists of an ingestible event marker (IEM), a microsensor that becomes activated in gastric fluid, and an adhesive personal monitor (APM) that detects IEM activation. In this study, the IEM was combined with enteric-coated mycophenolate sodium (ECMPS). Twenty stable adult kidney transplant recipients received IEM-ECMPS for a mean of 9.2 weeks, totaling 1227 cumulative days. Eight patients prematurely discontinued treatment due to ECMPS gastrointestinal symptoms (n=2), skin intolerance to the APM (n=2), and insufficient system usability (n=4). Rash or erythema due to the APM was reported in 7 (37%) patients, all during the first month of use. No serious or severe adverse events and no rejection episodes were reported. IEM detection accuracy was 100% over 34 directly observed ingestions; taking adherence was 99.4% over a total of 2824 prescribed IEM-ECMPS ingestions. The ISS could accurately detect the ingestion of two IEM-ECMPS capsules taken at the same time (detection rate of 99.3%, n=2376). The ISS is a promising new technology that provides highly reliable measurement of the intake, and timing of intake, of drugs that are combined with the IEM.
Directory of Open Access Journals (Sweden)
Wolfgang Peter Fendler
Our aim was to improve the prediction of unfavorable histopathology (UH) in neuroblastic tumors through combined imaging and biochemical parameters. 123I-MIBG SPECT and MRI were performed before surgical resection or biopsy in 47 consecutive pediatric patients with neuroblastic tumor. The semi-quantitative tumor-to-liver count-rate ratio (TLCRR), MRI tumor size and margins, urine catecholamine levels, and blood levels of neuron-specific enolase (NSE) were recorded. The accuracy of single and combined variables for prediction of UH was tested by ROC analysis with Bonferroni correction. 34 of 47 patients had UH based on the International Neuroblastoma Pathology Classification (INPC). TLCRR and serum NSE both predicted UH with moderate accuracy. The optimal cut-off for TLCRR was 2.0, resulting in 68% sensitivity and 100% specificity (AUC-ROC 0.86, p < 0.001). The optimal cut-off for NSE was 25.8 ng/ml, resulting in 74% sensitivity and 85% specificity (AUC-ROC 0.81, p = 0.001). Combining the TLCRR/NSE criteria reduced false negative findings from 11 and 9, respectively, to only five, with improved sensitivity and specificity of 85% (AUC-ROC 0.85, p < 0.001). Strong 123I-MIBG uptake and a high serum level of NSE were each predictive of UH. Combined analysis of both parameters improved the prediction of UH in patients with neuroblastic tumor. MRI parameters and urine catecholamine levels did not predict UH.
Enhancing the Accuracy of Advanced High Temperature Mechanical Testing through Thermography
Directory of Open Access Journals (Sweden)
Jonathan Jones
2018-03-01
This paper describes the advantages and enhanced accuracy that thermography provides to high temperature mechanical testing. The technique is used not only to monitor but also to control test specimen temperatures, where the infra-red technique enables accurate non-invasive control of rapid thermal cycling for non-metallic materials. Isothermal and dynamic waveforms are employed over a 200–800 °C temperature range on pre-oxidised and coated specimens to assess the capability of the technique. This application shows thermography to be accurate to within ±2 °C of thermocouples, a standardised measurement technique. This work demonstrates the superior visibility of test temperatures, previously unobtainable with conventional thermocouples or even more modern pyrometers, that thermography can deliver. As a result, the speed and accuracy of thermal profiling, thermal gradient measurements and cold/hot spot identification using the technique have increased significantly, to the point where temperature can now be controlled by averaging over a specified area. The increased visibility of specimen temperatures has revealed additional unknown effects such as thermocouple shadowing, preferential crack tip heating within an induction coil, and the fundamental response time of individual measurement techniques, which are investigated further.
An output amplitude configurable wideband automatic gain control with high gain step accuracy
International Nuclear Information System (INIS)
He Xiaofeng; Ye Tianchun; Mo Taishan; Ma Chengyan
2012-01-01
An output-amplitude-configurable wideband automatic gain control (AGC) with high gain step accuracy for a GNSS receiver is presented. The amplitude of the AGC is configurable in order to cooperate with baseband chips to achieve interference suppression and to be compatible with different full-range ADCs. Moreover, gain-boosting technology is introduced and the circuit is improved to increase the step accuracy. A zero, composed of the source feedback resistance and the source capacitance, is introduced to compensate for the pole. The AGC is fabricated in a 0.18 μm CMOS process. The AGC shows a 62 dB gain control range in 1 dB steps with a gain error of less than 0.2 dB. The AGC provides a 3 dB bandwidth larger than 80 MHz; the overall current consumption is less than 1.8 mA, and the die area is 800 × 300 μm². (semiconductor integrated circuits)
High accuracy of family history of melanoma in Danish melanoma cases.
Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie
2015-12-01
The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high-risk country for melanoma. In the last two decades multiple public campaigns have sought to increase the awareness of melanoma. A family history of melanoma is a known major risk factor, but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old, and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma probands who reported 199 cases of melanoma in relatives, of which 135 cases were in first-degree relatives. We confirmed the diagnosis of melanoma in 77% of all relatives, and in 83% of first-degree relatives. In the 181 probands we validated the negative family history of melanoma in 748 first-degree relatives and found only one unreported case of melanoma, in a family with three melanoma cases. Melanoma patients in Denmark report family history of melanoma in first- and second-degree relatives with a high level of accuracy, with a true positive predictive value between 77 and 87%. In 99% of probands reporting a negative family history of melanoma in first-degree relatives this information is correct. In clinical practice we recommend that melanoma diagnoses in relatives should be verified if possible, but even unverified reported melanoma cases in relatives should be included in the indication for genetic testing and the assessment of melanoma risk in the family.
High accuracy Primary Reference gas Mixtures for high-impact greenhouse gases
Nieuwenkamp, Gerard; Zalewska, Ewelina; Pearce-Hill, Ruth; Brewer, Paul; Resner, Kate; Mace, Tatiana; Tarhan, Tanil; Zellweger, Christophe; Mohn, Joachim
2017-04-01
Climate change, due to increased man-made emissions of greenhouse gases, poses one of the greatest risks to society worldwide. High-impact greenhouse gases (CO2, CH4 and N2O) and indirect drivers of global warming (e.g. CO) are measured by the global greenhouse gas monitoring stations, operated and organized by the World Meteorological Organization (WMO). Reference gases for the calibration of analyzers have to meet very challenging low levels of measurement uncertainty to comply with the Data Quality Objectives (DQOs) set by the WMO. Within the framework of the European Metrology Research Programme (EMRP), a project to improve the metrology for high-impact greenhouse gases was granted (HIGHGAS, June 2014-May 2017). As a result of the HIGHGAS project, primary reference gas mixtures in cylinders for ambient levels of CO2, CH4, N2O and CO in air have been prepared with unprecedentedly low uncertainties, typically 3-10 times lower than previously achieved by the NMIs. To accomplish these low uncertainties in the reference standards, a number of preparation and analysis steps have been studied and improved. The purity analysis of the parent gases had to be performed with lower detection limits than previously achievable. For example, to achieve an uncertainty of 2×10⁻⁹ mol/mol (absolute) in the amount fraction for N2O, the detection limit for the N2O analysis in the parent gases has to be in the sub-nmol/mol domain. Results of an OPO-CRDS analyzer set-up in the 5 µm wavelength domain, with a 200×10⁻¹² mol/mol detection limit for N2O, will be presented. The adsorption effects of greenhouse gas components at cylinder surfaces are critical, and have been studied for different cylinder passivation techniques. Results of a two-year stability study will be presented. The fitness for purpose of the reference materials was studied with respect to possible variations in isotopic composition between the reference material and the sample. Measurement results for a suite of CO2 in air
Accuracy optimization of high-speed AFM measurements using Design of Experiments
DEFF Research Database (Denmark)
Tosello, Guido; Marinello, F.; Hansen, Hans Nørgaard
2010-01-01
Atomic Force Microscopy (AFM) is being increasingly employed in industrial micro/nano manufacturing applications and integrated into production lines. In order to achieve reliable process and product control at high measuring speed, instrument optimization is needed. Quantitative AFM measurement results are influenced by a number of scan settings parameters defining topography sampling and measurement time: resolution (number of profiles and points per profile), scan range and direction, scanning force and speed. Such parameters influence lateral and vertical accuracy and, eventually, the estimated dimensions of measured features. The definition of scan settings is based on a comprehensive optimization that targets maximization of information from the collected data and minimization of measurement uncertainty and scan time. The Design of Experiments (DOE) technique is proposed and applied.
Recent high-accuracy measurements of the ¹S₀ neutron-neutron scattering length
International Nuclear Information System (INIS)
Howell, C.R.; Chen, Q.; Gonzalez Trotter, D.E.; Salinas, F.; Crowell, A.S.; Roper, C.D.; Tornow, W.; Walter, R.L.; Carman, T.S.; Hussein, A.; Gibbs, W.R.; Gibson, B.F.; Morris, C.; Obst, A.; Sterbenz, S.; Whitton, M.; Mertens, G.; Moore, C.F.; Whiteley, C.R.; Pasyuk, E.; Slaus, I.; Tang, H.; Zhou, Z.; Gloeckle, W.; Witala, H.
2000-01-01
This paper reports two recent high-accuracy determinations of the ¹S₀ neutron-neutron scattering length, a_nn. One was done at the Los Alamos National Laboratory using the π⁻d capture reaction to produce two neutrons with low relative momentum. The neutron-deuteron (nd) breakup reaction was used in the other measurement, which was conducted at the Triangle Universities Nuclear Laboratory. The results from the two determinations were consistent with each other and with previous values obtained using the π⁻d capture reaction. The value obtained from the nd breakup measurements is a_nn = -18.7 ± 0.1 (statistical) ± 0.6 (systematic) fm, and the value from the π⁻d capture experiment is a_nn = -18.50 ± 0.05 ± 0.53 fm. The recommended value is a_nn = -18.5 ± 0.3 fm. (author)
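A rough sketch of how two such determinations can be pooled is an inverse-variance weighted mean with statistical and systematic errors added in quadrature. This is illustrative only: the paper's recommended value of a_nn = -18.5 ± 0.3 fm also reflects judgment about correlated systematics between the experiments, so a naive weighting does not reproduce it exactly.

```python
import math

def weighted_mean(measurements):
    """Inverse-variance weighted combination of independent determinations.
    Each entry is (value, statistical error, systematic error); the two
    error components are added in quadrature to form the total sigma."""
    weights = [1.0 / (stat**2 + syst**2) for _, stat, syst in measurements]
    mean = sum(w * v for w, (v, _, _) in zip(weights, measurements)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma

# the two a_nn determinations quoted above (fm)
a_nn, sigma = weighted_mean([(-18.7, 0.1, 0.6), (-18.50, 0.05, 0.53)])
```

The naive combination lands near -18.59 ± 0.40 fm, slightly more negative and less precise than the recommended value, which illustrates why correlated systematics need separate treatment.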
High accuracy amplitude and phase measurements based on a double heterodyne architecture
International Nuclear Information System (INIS)
Zhao Danyang; Wang Guangwei; Pan Weimin
2015-01-01
In the digital low-level RF (LLRF) system of a circular particle accelerator, the RF field signal is usually down-converted to a fixed intermediate frequency (IF). The ratio of the IF to the sampling frequency determines the processing required, and differs among LLRF systems. It is generally desirable to design a universally compatible architecture for different IFs with no change to the sampling frequency and algorithm. A new RF detection method based on a double heterodyne architecture for a wide IF range has been developed, which meets the high accuracy requirements of modern LLRF systems. In this paper, the relation between IF and phase error is systematically analyzed for the first time and verified by experiments. The effects of temperature drift over 16 h of IF detection are suppressed by amplitude and phase calibrations. (authors)
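The amplitude/phase detection step that such an architecture feeds can be sketched as digital IQ demodulation of the sampled IF signal. This is a generic illustration, not the paper's double-heterodyne circuit; it assumes the record spans an integer number of IF periods.

```python
import numpy as np

def iq_detect(samples, f_if, fs):
    """Digital IQ detection of amplitude and phase at a known IF.
    Exact only when the record spans an integer number of IF periods,
    so the double-frequency mixing terms average out."""
    n = np.arange(len(samples))
    ref_cos = np.cos(2 * np.pi * f_if / fs * n)
    ref_sin = np.sin(2 * np.pi * f_if / fs * n)
    i = 2.0 * np.mean(samples * ref_cos)
    q = -2.0 * np.mean(samples * ref_sin)   # sign chosen so phase returns as +phi
    return np.hypot(i, q), np.arctan2(q, i)

# synthetic IF record: amplitude 1.5, phase 0.4 rad, IF = fs/4
fs, f_if = 100e6, 25e6
k = np.arange(400)
x = 1.5 * np.cos(2 * np.pi * f_if / fs * k + 0.4)
amp, phase = iq_detect(x, f_if, fs)
```

With IF = fs/4 and 400 samples the record holds exactly 100 IF periods, so the recovered amplitude and phase are exact up to floating-point rounding.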
Marsic, Damien; Méndez-Gómez, Héctor R; Zolotukhin, Sergei
2015-01-01
Biodistribution analysis is a key step in the evaluation of adeno-associated virus (AAV) capsid variants, whether natural isolates or produced by rational design or directed evolution. Indeed, when screening candidate vectors, accurate knowledge about which tissues are infected and how efficiently is essential. We describe the design, validation, and application of a new vector, pTR-UF50-BC, encoding a bioluminescent protein, a fluorescent protein and a DNA barcode, which can be used to visualize localization of transduction at the organism, organ, tissue, or cellular levels. In addition, by linking capsid variants to different barcoded versions of the vector and amplifying the barcode region from various tissue samples using barcoded primers, biodistribution of viral genomes can be analyzed with high accuracy and efficiency.
Accuracy and high-speed technique for autoprocessing of Young's fringes
Chen, Wenyi; Tan, Yushan
1991-12-01
In this paper, an accurate and high-speed method for the auto-processing of Young's fringes is proposed. Groups of 1-D sampled intensity values along three or more different directions are taken from the Young's fringes, and the fringe spacing in each direction is obtained by 1-D FFT. The two directions with the smaller fringe spacings are selected from all directions, and the accurate fringe spacings along these two directions are obtained using the orthogonal coherent phase detection (OCPD) technique. The actual spacing and angle of the Young's fringes can therefore be calculated. The principle of OCPD is introduced in detail, and the accuracy of the method is evaluated theoretically and experimentally.
Directory of Open Access Journals (Sweden)
Bouchaib Benzehaf
2016-11-01
The present study aims to longitudinally depict the dynamic and interactive development of Complexity, Accuracy, and Fluency (CAF) in multilingual learners' L2 and L3 writing. The data sources include free writing tasks written in L2 French and L3 English by 45 high school participants over a period of four semesters. CAF dimensions are measured using a variation of Hunt's T-units (1964). Analysis of the quantitative data obtained suggests that CAF measures develop differently in learners' L2 French and L3 English. They increase more persistently in L3 English, and they display the characteristics of a dynamic, non-linear system characterized by ups and downs, particularly in L2 French. In light of the results, we suggest collecting more and denser longitudinal data to explore the nature of interactions between these dimensions in foreign language development, particularly at the individual level.
Accuracy of thick-walled hollows during piercing on three-high mill
International Nuclear Information System (INIS)
Potapov, I.N.; Romantsev, B.A.; Shamanaev, V.I.; Popov, V.A.; Kharitonov, E.A.
1975-01-01
The results of investigations concerning the accuracy of the geometrical dimensions of thick-walled sleeves produced by piercing on a 100-ton MISiS three-high screw rolling mill with three schemes of fixing and centering the rod are presented. The use of a spherical thrust bearing for the rod and of a long centering bushing makes it possible to reduce the wall-thickness non-uniformity of the sleeves by 30-50%. It is established that thick-walled sleeves with accurate geometrical dimensions (wall-thickness non-uniformity below 10%) can be produced if the sleeve-mandrel-rod system is highly rigid and the rod has a two- to three-fold stability margin over a length equal to that of the sleeve being pierced. It is expedient to carry out piercing at increased feed angles (14-16 deg). Blanks were made from 12Kh1MF steel.
Integral equation models for image restoration: high accuracy methods and fast algorithms
International Nuclear Information System (INIS)
Lu, Yao; Shen, Lixin; Xu, Yuesheng
2010-01-01
Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images
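A minimal numerical sketch of the Tikhonov-regularized approach on a 1-D analogue: the out-of-focus blur is modeled here by a hypothetical Gaussian kernel K discretized on a uniform grid (not the paper's kernel), and the first-kind equation g = Kf is solved through the regularized normal equations.

```python
import numpy as np

n = 100
s = np.linspace(0.0, 1.0, n)
# first-kind integral equation g = K f: blur modeled by a (hypothetical)
# Gaussian kernel, discretized on a uniform grid and row-normalized
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.03**2))
K /= K.sum(axis=1, keepdims=True)

f_true = (np.abs(s - 0.5) < 0.2).astype(float)      # sharp scene
rng = np.random.default_rng(0)
g = K @ f_true + 1e-3 * rng.normal(size=n)          # blurred + noisy data

lam = 1e-3   # Tikhonov parameter: trades data fit against stability
f_rec = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)
```

The regularized solve damps the small singular values of K that would otherwise amplify the noise; the flat interior of the scene is recovered accurately while the sharpest edge content (irrecoverably suppressed by the blur) remains smoothed.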
Innovative High-Accuracy Lidar Bathymetric Technique for the Frequent Measurement of River Systems
Gisler, A.; Crowley, G.; Thayer, J. P.; Thompson, G. S.; Barton-Grimley, R. A.
2015-12-01
Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth, down to 1 cm accuracy and precision, from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, providing potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for understanding how rivers evolve over many timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, demonstrating that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.
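The basic depth retrieval behind lidar bathymetry converts the time separation between the surface and bottom returns into depth, accounting for the slower speed of light in water and the two-way path. A sketch with a nominal refractive index:

```python
C = 299_792_458.0      # speed of light in vacuum, m/s
N_WATER = 1.333        # nominal refractive index of water (assumed value)

def water_depth(dt_ns):
    """Depth from the time separation between surface and bottom returns.
    Light travels at c/n in water, and the pulse crosses the column twice."""
    return (C / N_WATER) * (dt_ns * 1e-9) / 2.0

d = water_depth(8.9)   # ~1 m of water separates the returns by about 8.9 ns
```

Resolving centimetre depths therefore means resolving return separations of well under a nanosecond, which is why separating the surface and bottom returns dominates the design.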
Innovative Technique for High-Accuracy Remote Monitoring of Surface Water
Gisler, A.; Barton-Grimley, R. A.; Thayer, J. P.; Crowley, G.
2016-12-01
Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems and agricultural waterways. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth, down to 1 cm accuracy and precision, from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, providing potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for monitoring water resources on fast timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, demonstrating that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.
High-accuracy continuous airborne measurements of greenhouse gases (CO2 and CH4) during BARCA
Chen, H.; Winderlich, J.; Gerbig, C.; Hoefer, A.; Rella, C. W.; Crosson, E. R.; van Pelt, A. D.; Steinbach, J.; Kolle, O.; Beck, V.; Daube, B. C.; Gottlieb, E. W.; Chow, V. Y.; Santoni, G. W.; Wofsy, S. C.
2009-12-01
High-accuracy continuous measurements of greenhouse gases (CO2 and CH4) during the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) phase B campaign in Brazil in May 2009 were accomplished using a newly available analyzer based on the cavity ring-down spectroscopy (CRDS) technique. This analyzer was flown without a drying system or any in-flight calibration gases. Water vapor corrections associated with dilution and pressure-broadening effects for CO2 and CH4 were derived from laboratory experiments employing measurements of water vapor by the CRDS analyzer. Before the campaign, the stability of the analyzer was assessed by laboratory tests under simulated flight conditions. During the campaign, a comparison of CO2 measurements between the CRDS analyzer and a nondispersive infrared (NDIR) analyzer on board the same aircraft showed a mean difference of 0.22±0.09 ppm for all flights over the Amazon rain forest. At the end of the campaign, the CO2 concentrations of the synthetic calibration gases used by the NDIR analyzer were determined by the CRDS analyzer. After correcting for the isotope and pressure-broadening effects that resulted from the compositional differences between synthetic and ambient air, and applying those concentrations as calibrated values of the calibration gases to reprocess the CO2 measurements made by the NDIR, the mean difference between the CRDS and the NDIR during BARCA was reduced to 0.05±0.09 ppm, with a mean standard deviation of 0.23±0.05 ppm. The results clearly show that the CRDS analyzer is sufficiently stable to be used in flight without drying the air or calibrating in flight, and that the water vapor corrections are fully adequate for high-accuracy continuous airborne measurements of CO2 and CH4.
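Such water vapor corrections commonly take the form of an empirical quadratic in the reported water vapor amount, applied to convert the measured (wet) mole fraction to its dry-air equivalent. A sketch with hypothetical coefficients — the campaign's calibrated values are not reproduced here:

```python
# illustrative coefficients only, NOT the calibrated values from the campaign
A = -0.012      # per % H2O (hypothetical)
B = -2.674e-4   # per (% H2O)^2 (hypothetical)

def co2_dry(co2_wet_ppm, h2o_pct):
    """Convert a wet CO2 mole fraction to its dry-air equivalent, correcting
    dilution and pressure-broadening with an empirical quadratic in H2O."""
    return co2_wet_ppm / (1.0 + A * h2o_pct + B * h2o_pct**2)

x = co2_dry(390.0, 2.5)   # at 2.5% H2O the dry value exceeds the wet reading
```

Because both dilution and pressure broadening lower the wet reading, the correction factor is below one and the dry-air value comes out a few ppm higher than the raw measurement.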
Zeng, Zhaoli; Qu, Xueming; Tan, Yidong; Tan, Runtao; Zhang, Shulian
2015-06-29
A simple and high-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects is presented. The single high-order feedback effect is realized when the dual-frequency laser beam is reflected numerous times in a Fabry-Perot cavity and then returns to the laser resonator along the same route. In this case, two orthogonally polarized feedback fringes with nanoscale resolution are obtained. This self-mixing interferometer has the advantage of higher sensitivity to weak signals than a conventional interferometer. In addition, the two orthogonally polarized fringes are useful for discriminating the moving direction of the measured object. An experiment measuring a 2.5 nm step is conducted, which shows great potential in nanometrology.
Information flow in layered networks of non-monotonic units
Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.
2015-07-01
Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flow through layered networks in which the elements are the simplest binary odd non-monotonic functions. Our results show that, under a standard Hebbian learning approach, the network information content always has its maximum at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed-point solutions of the iterative map, but also cyclic and chaotic attractors that also carry information.
Information flow in layered networks of non-monotonic units
International Nuclear Information System (INIS)
Neves, Fabio Schittler; Schubert, Benno Martim; Erichsen, Rubem Jr
2015-01-01
Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flow through layered networks in which the elements are the simplest binary odd non-monotonic functions. Our results show that, under a standard Hebbian learning approach, the network information content always has its maximum at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed-point solutions of the iterative map, but also cyclic and chaotic attractors that also carry information. (paper)
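A minimal sketch of such a layered network: the simplest binary odd non-monotonic transfer function (the output sign reverses when the local field exceeds a threshold θ) with Hebbian couplings between consecutive layers. Layer sizes, pattern load, and θ are illustrative; with θ well above the crosstalk level the network sits in its retrieval regime, and lowering θ brings in the non-monotonic sign reversals the paper studies.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, L = 200, 10, 5      # units per layer, stored patterns, layers (illustrative)
theta = 2.0               # non-monotonicity threshold (illustrative)

# one pattern set per layer; Hebbian couplings between consecutive layers
xi = rng.choice([-1, 1], size=(L, P, N))
J = [xi[l + 1].T @ xi[l] / N for l in range(L - 1)]

def g(h):
    # simplest binary odd non-monotonic transfer: sign reverses beyond theta
    return np.sign(h) * np.where(np.abs(h) > theta, -1.0, 1.0)

s = xi[0, 0].astype(float)          # start on the first stored pattern
overlaps = [1.0]                    # overlap with the target pattern per layer
for l in range(L - 1):
    s = g(J[l] @ s)
    overlaps.append(float(s @ xi[l + 1, 0]) / N)
```

At this load (P/N = 0.05) the local fields cluster near ±1, well inside |h| < θ, so the overlap stays close to 1 through every layer.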
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
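The core idea — local linear regression over candidate windows of a noisy time series — can be sketched as follows. The window-selection criterion here is plain R², a simplified stand-in for LoLinR's composite weighting metric, and the oxygen-trace data are synthetic:

```python
import numpy as np

def local_slope(t, y, min_frac=0.4):
    """Estimate a monotonic rate as the slope of the best local OLS fit.
    Scans all contiguous windows covering at least min_frac of the data and
    returns the slope of the window with the highest R^2 (a simplified
    stand-in for LoLinR's composite weighting metric)."""
    n = len(t)
    wmin = max(3, int(min_frac * n))
    best_r2, best_slope = -np.inf, 0.0
    for i in range(n - wmin + 1):
        for j in range(i + wmin, n + 1):
            tt, yy = t[i:j], y[i:j]
            slope, intercept = np.polyfit(tt, yy, 1)
            resid = yy - (intercept + slope * tt)
            ss_tot = ((yy - yy.mean()) ** 2).sum()
            r2 = 1.0 - (resid @ resid) / ss_tot if ss_tot > 0 else 0.0
            if r2 > best_r2:
                best_r2, best_slope = r2, slope
    return best_slope

# synthetic O2 trace: linear decline at -0.5 units/min after a 2 min lag phase
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 60)
y = np.where(t < 2, 10.0, 10.0 - 0.5 * (t - 2)) + 0.02 * rng.normal(size=60)
rate = local_slope(t, y)
```

Windows containing the lag phase fit a line poorly, so the selected window falls inside the declining segment and the recovered rate is close to the true -0.5, without any manual truncation of the non-linearity.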
Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method
Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio
Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are its orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations it undergoes during acquisition. In order to exploit the actual potential of orthorectified imagery in Geomatics applications, the definition of a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in different fields such as machine learning, bioinformatics and, generally, any other field requiring an evaluation of the performance of a learning algorithm (e.g. geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method exhibits interesting features which overcome the most significant drawbacks of the commonly used method (Hold-Out Validation, HOV), based on partitioning the known ground points into two sets: the first is used in the orientation-orthorectification model (GCPs, Ground Control Points) and the second is used to validate the model itself (CPs, Check Points). In fact, the HOV is generally not reliable and is not applicable when a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely recognized commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), manually performing the LOOCV
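The LOOCV procedure itself is simple to sketch: each ground point is withheld in turn, the model is estimated from the remaining points, and the prediction error at the withheld point is accumulated into an RMSE, so every point serves once as a check point. Here a 2-D affine transform stands in for the rigorous orientation model, with synthetic ground/image coordinates:

```python
import numpy as np

def loocv_rmse(obj_xy, img_xy):
    """Leave-one-out assessment of a georeferencing model: each ground point
    is withheld in turn, the model is fit on the rest, and the prediction
    error at the withheld point is recorded. A 2-D affine transform stands
    in for the rigorous orientation model here."""
    n = len(obj_xy)
    errs = []
    for k in range(n):
        keep = [i for i in range(n) if i != k]
        A = np.column_stack([obj_xy[keep], np.ones(len(keep))])
        coef, *_ = np.linalg.lstsq(A, img_xy[keep], rcond=None)
        pred = np.append(obj_xy[k], 1.0) @ coef
        errs.append(np.linalg.norm(pred - img_xy[k]))
    return float(np.sqrt(np.mean(np.square(errs))))

# synthetic scene: 12 ground points mapped by an affine transform plus noise
rng = np.random.default_rng(3)
obj = rng.uniform(0.0, 1000.0, size=(12, 2))             # ground coordinates
M = np.array([[0.9, 0.1], [-0.1, 0.9]])
img = obj @ M.T + 50.0 + rng.normal(0.0, 0.5, size=(12, 2))
rmse = loocv_rmse(obj, img)
```

Unlike hold-out validation, no points are sacrificed from the fitting set, which is exactly what makes the approach usable when only a handful of ground points is available.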
High-Accuracy Measurements of Total Column Water Vapor From the Orbiting Carbon Observatory-2
Nelson, Robert R.; Crisp, David; Ott, Lesley E.; O'Dell, Christopher W.
2016-01-01
Accurate knowledge of the distribution of water vapor in Earth's atmosphere is of critical importance to both weather and climate studies. Here we report on measurements of total column water vapor (TCWV) from hyperspectral observations of near-infrared reflected sunlight over land and ocean surfaces from the Orbiting Carbon Observatory-2 (OCO-2). These measurements are an ancillary product of the retrieval algorithm used to measure atmospheric carbon dioxide concentrations, with information coming from three highly resolved spectral bands. Comparisons to high-accuracy validation data, including ground-based GPS and microwave radiometer data, demonstrate that OCO-2 TCWV measurements have maximum root-mean-square deviations of 0.9-1.3 mm. Our results indicate that OCO-2 is the first space-based sensor to accurately and precisely measure the two most important greenhouse gases, water vapor and carbon dioxide, at high spatial resolution (1.3 × 2.3 km²) and that OCO-2 TCWV measurements may be useful in improving numerical weather predictions and reanalysis products.
A new device for liver cancer biomarker detection with high accuracy
Directory of Open Access Journals (Sweden)
Shuaipeng Wang
2015-06-01
A novel cantilever-array-based biosensor was batch-fabricated with IC-compatible MEMS technology for precise liver cancer biomarker detection. A micro-cavity was designed in the free end of the cantilever for local antibody immobilization, so that adsorption of the cancer biomarker is localized in the micro-cavity and the adsorption-induced variation in the spring constant k can be dramatically reduced compared with that caused by adsorption over the whole lever. The cantilever is piezoelectrically driven into vibration, which is piezoresistively sensed by a Wheatstone bridge. These structural features offer several advantages: high sensitivity, high throughput, high mass detection accuracy, and small volume. In addition, an analytical model has been established to eliminate the effect of adsorption-induced lever stiffness change and has been applied to precise mass detection of the cancer biomarker AFP; the detected AFP antigen mass (7.6 pg/ml) is quite close to the calculated one (5.5 pg/ml), two orders of magnitude better than the value obtained with a fully antibody-immobilized cantilever sensor. These approaches will promote real application of cantilever sensors in the early diagnosis of cancer.
Wang, Kundong; Chen, Bing; Lu, Qingsheng; Li, Hongbing; Liu, Manhua; Shen, Yu; Xu, Zhuoyan
2018-05-15
Endovascular interventional surgery (EIS) is performed under a high-radiation environment at the sacrifice of surgeons' health. This paper introduces a novel endovascular interventional surgical robot that aims to reduce radiation to surgeons and the physical stress imposed by lead aprons during fluoroscopic X-ray guided catheter intervention. The unique mechanical structure allows the surgeon to manipulate the axial and radial motion of the catheter and guide wire. Four catheter manipulators (to manipulate the catheter and guide wire) and a control console consisting of four joysticks, several buttons and two twist switches (to control the catheter manipulators) are presented. The entire robotic system was built on a master-slave control structure with CAN (Controller Area Network) bus communication, and the slave side of the robotic system showed highly accurate control over velocity and displacement with a PID control method. The robotic system was tested and validated in vitro and in animal experiments. Through functionality evaluation, the manipulators were able to complete interventional surgical motions both independently and cooperatively. The robotic surgery was performed successfully in an adult female pig and demonstrated the feasibility of superior mesenteric and common iliac artery stent implantation. The entire robotic system met the clinical requirements of EIS. The results show that the system has the ability to imitate the movements of surgeons and to accomplish axial and radial motions with consistency and high accuracy. Copyright © 2018 John Wiley & Sons, Ltd.
A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.
Qi, Jun; Liu, Guo-Ping
2017-11-06
This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between nodes, with an accuracy of up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the crossing value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in positioning experiments with a moving robot when the UIPS works on the line-of-sight (LOS) signal.
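A simplified sketch of TOF estimation in this spirit: the envelope is extracted by rectification and a moving-average filter, and the threshold-crossing instant is refined with a least-squares line through samples on both sides of the crossing. The signal, threshold, and filter length are illustrative, and the 50% crossing point sits slightly after the true arrival, a bias that a real system would calibrate out.

```python
import numpy as np

FS = 1_000_000        # sampling rate, Hz (illustrative)
C = 343.0             # speed of sound in air, m/s

def tof_envelope(sig, thresh=0.5, half=5):
    """Envelope by rectification and moving average, then a least-squares
    line through samples on both sides of the threshold crossing to place
    the crossing instant at sub-sample resolution."""
    env = np.convolve(np.abs(sig), np.ones(50) / 50, mode="same")
    env = env / env.max()
    k = int(np.argmax(env >= thresh))            # first sample at/above threshold
    idx = np.arange(k - half, k + half)
    slope, intercept = np.polyfit(idx, env[idx], 1)
    return ((thresh - intercept) / slope) / FS   # crossing time, seconds

# simulated 40 kHz echo arriving 2 ms after emission (Gaussian envelope)
t = np.arange(4000) / FS
delay = 0.002
env_true = np.exp(-(((t - delay - 5e-4) / 3e-4) ** 2)) * (t >= delay)
sig = env_true * np.sin(2 * np.pi * 40e3 * t)
tof = tof_envelope(sig)
dist = C * tof        # one-way range, m; biased late by the threshold choice
```

The least-squares refinement is what pushes the timing below the raw sample period, which at ultrasonic speeds is the difference between centimetre- and millimetre-level ranging.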
Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan
2018-01-01
In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment of the optical camera axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimens. There are measurement errors in the results measured by 2D DIC, especially when the out-of-plane motions are large enough. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track the specimen's motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; (3) the three-dimensional world coordinates of measuring points on the specimen can be reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. The results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method leads to good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has been applied in tensile experiments to obtain high-accuracy results as well.
A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network
Directory of Open Access Journals (Sweden)
Jun Qi
2017-11-01
This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between nodes, with an accuracy of up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the crossing value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in positioning experiments with a moving robot when the UIPS works on the line-of-sight (LOS) signal.
International Nuclear Information System (INIS)
Nelson, E.M.
1993-12-01
Some two-dimensional finite element electromagnetic field solvers are described and tested. For TE and TM modes in homogeneous cylindrical waveguides and monopole modes in homogeneous axisymmetric structures, the solvers find approximate solutions to a weak formulation of the wave equation. Second-order isoparametric Lagrangian triangular elements represent the field. For multipole modes in axisymmetric structures, the solver finds approximate solutions to a weak form of the curl-curl formulation of Maxwell's equations. Second-order triangular edge elements represent the radial (ρ) and axial (z) components of the field, while a second-order Lagrangian basis represents the azimuthal (φ) component of the field weighted by the radius ρ. A reduced set of basis functions is employed for elements touching the axis. With this basis the spurious modes of the curl-curl formulation have zero frequency, so spurious modes are easily distinguished from non-static physical modes. Tests on an annular ring, a pillbox and a sphere indicate the solutions converge rapidly as the mesh is refined. Computed eigenvalues with relative errors of less than a few parts per million are obtained. Boundary conditions for symmetric, periodic and symmetric-periodic structures are discussed and included in the field solver. Boundary conditions for structures with inversion symmetry are also discussed. Special corner elements are described and employed to improve the accuracy of cylindrical waveguide and monopole modes with singular fields at sharp corners. The field solver is applied to three problems: (1) cross-field amplifier slow-wave circuits, (2) a detuned disk-loaded waveguide linear accelerator structure and (3) a 90° overmoded waveguide bend. The detuned accelerator structure is a critical application of this high accuracy field solver. To maintain low long-range wakefields, tight design and manufacturing tolerances are required.
Model Accuracy Comparison for High Resolution Insar Coherence Statistics Over Urban Areas
Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi
2016-06-01
The interferometric coherence map derived from the cross-correlation of two coregistered complex synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications it can act as an independent information source, or provide information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the large body of work on the statistics of SAR intensity, there has been far less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, the existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, without distinguishing between data resolutions or scene types. But the properties of the coherence may differ with data resolution and scene type. In this paper, we investigate the coherence statistics of high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. First, several regions are selected manually from the coherence map and labelled with their corresponding classes. We then model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. Experiments on TanDEM-X data show that the Beta model performs better than the other distributions.
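The model-comparison procedure described above can be sketched as maximum-likelihood fitting of the candidate distributions to a coherence sample and ranking them by log-likelihood; the data here are a synthetic stand-in, not TanDEM-X coherence values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic stand-in for a patch of an InSAR coherence map, values in (0,1)
coh = rng.beta(8.0, 3.0, size=2000)

candidates = {"gaussian": stats.norm, "rayleigh": stats.rayleigh,
              "weibull": stats.weibull_min, "beta": stats.beta,
              "nakagami": stats.nakagami}
fixed = {"beta": dict(floc=0.0, fscale=1.0)}   # coherence lives on (0,1)
loglik = {}
for name, dist in candidates.items():
    params = dist.fit(coh, **fixed.get(name, {}))  # maximum-likelihood fit
    loglik[name] = dist.logpdf(coh, *params).sum()

best = max(loglik, key=loglik.get)
print(best)
```

Since the candidates have comparable parameter counts, ranking by log-likelihood is close to ranking by AIC; for beta-distributed samples the beta model should come out on top.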
MODEL ACCURACY COMPARISON FOR HIGH RESOLUTION INSAR COHERENCE STATISTICS OVER URBAN AREAS
Directory of Open Access Journals (Sweden)
Y. Zhang
2016-06-01
Full Text Available The interferometric coherence map derived from the cross-correlation of two coregistered complex synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications it can act as an independent information source, or provide information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the large body of work on the statistics of SAR intensity, there has been far less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, the existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, without distinguishing between data resolutions or scene types. But the properties of the coherence may differ with data resolution and scene type. In this paper, we investigate the coherence statistics of high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. First, several regions are selected manually from the coherence map and labelled with their corresponding classes. We then model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. Experiments on TanDEM-X data show that the Beta model performs better than the other distributions.
Iterates of piecewise monotone mappings on an interval
Preston, Chris
1988-01-01
Piecewise monotone mappings on an interval provide simple examples of discrete dynamical systems whose behaviour can be very complicated. These notes are concerned with the properties of the iterates of such mappings. The material presented can be understood by anyone who has had a basic course in (one-dimensional) real analysis. The account concentrates on the topological (as opposed to the measure theoretical) aspects of the theory of piecewise monotone mappings. As well as offering an elementary introduction to this theory, these notes also contain a more advanced treatment of the problem of classifying such mappings up to topological conjugacy.
The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images
Wang, Y.; Hu, C.; Xia, G.; Xue, H.
2018-04-01
The registration of terrestrial laser point clouds with close-range optical images is key to high-precision 3D reconstruction of cultural relics. Given the high texture resolution required in this field, registering point cloud and image data for object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, pairwise registration of the two data types is achieved by manually segmenting the point cloud, manually matching point cloud and image data, and manually selecting corresponding two-dimensional image points and point cloud points; this process not only greatly reduces working efficiency but also degrades registration precision and causes texture seams in the colored point cloud. To solve these problems, this paper takes the whole-object image as intermediate data and uses matching techniques to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching between the reflection-intensity image obtained by central projection of the point cloud and the optical image is applied to automatically match corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to achieve automatic high-accuracy registration of the two data types. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has both scientific research value and practical significance.
International Nuclear Information System (INIS)
McEvoy, Sinead; Lavelle, Lisa; Kilcoyne, Aoife; McCarthy, Colin; Dodd, Jonathan D.; DeJong, Pim A.; Loeve, Martine; Tiddens, Harm A.W.M.; McKone, Edward; Gallagher, Charles G.
2012-01-01
To determine the diagnostic accuracy of high-resolution computed tomography (HRCT) for the detection of nontuberculous mycobacterium infection (NTM) in adult cystic fibrosis (CF) patients. Twenty-seven CF patients with sputum-culture-proven NTM (NTM+) underwent HRCT. An age, gender and spirometrically matched group of 27 CF patients without NTM (NTM-) was included as controls. Images were randomly and blindly analysed by two readers in consensus and scored using a modified Bhalla scoring system. Significant differences were seen between NTM (+) and NTM (-) patients in the severity of the bronchiectasis subscore [45 % (1.8/4) vs. 35 % (1.4/4), P = 0.029], collapse/consolidation subscore [33 % (1.3/3) vs. 15 % (0.6/3)], tree-in-bud/centrilobular nodules subscore [43 % (1.7/3) vs. 25 % (1.0/3), P = 0.002] and the total CT score [56 % (18.4/33) vs. 46 % (15.2/33), P = 0.002]. Binary logistic regression revealed BMI, peribronchial thickening, collapse/consolidation and tree-in-bud/centrilobular nodules to be predictors of NTM status (R² = 0.43). Receiver operating characteristic curve analysis of the regression model showed an area under the curve of 0.89, P < 0.0001. In adults with CF, seven or more bronchopulmonary segments showing tree-in-bud/centrilobular nodules on HRCT is highly suggestive of NTM colonisation. (orig.)
Energy Technology Data Exchange (ETDEWEB)
McEvoy, Sinead; Lavelle, Lisa; Kilcoyne, Aoife; McCarthy, Colin; Dodd, Jonathan D. [St. Vincent's University Hospital, Department of Radiology, Dublin (Ireland); DeJong, Pim A. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Loeve, Martine; Tiddens, Harm A.W.M. [Erasmus MC-Sophia Children's Hospital, Department of Radiology, Department of Pediatric Pulmonology and Allergology, Rotterdam (Netherlands); McKone, Edward; Gallagher, Charles G. [St. Vincent's University Hospital, Department of Respiratory Medicine and National Referral Centre for Adult Cystic Fibrosis, Dublin (Ireland)
2012-12-15
To determine the diagnostic accuracy of high-resolution computed tomography (HRCT) for the detection of nontuberculous mycobacterium infection (NTM) in adult cystic fibrosis (CF) patients. Twenty-seven CF patients with sputum-culture-proven NTM (NTM+) underwent HRCT. An age, gender and spirometrically matched group of 27 CF patients without NTM (NTM-) was included as controls. Images were randomly and blindly analysed by two readers in consensus and scored using a modified Bhalla scoring system. Significant differences were seen between NTM (+) and NTM (-) patients in the severity of the bronchiectasis subscore [45 % (1.8/4) vs. 35 % (1.4/4), P = 0.029], collapse/consolidation subscore [33 % (1.3/3) vs. 15 % (0.6/3)], tree-in-bud/centrilobular nodules subscore [43 % (1.7/3) vs. 25 % (1.0/3), P = 0.002] and the total CT score [56 % (18.4/33) vs. 46 % (15.2/33), P = 0.002]. Binary logistic regression revealed BMI, peribronchial thickening, collapse/consolidation and tree-in-bud/centrilobular nodules to be predictors of NTM status (R² = 0.43). Receiver operating characteristic curve analysis of the regression model showed an area under the curve of 0.89, P < 0.0001. In adults with CF, seven or more bronchopulmonary segments showing tree-in-bud/centrilobular nodules on HRCT is highly suggestive of NTM colonisation. (orig.)
Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.
Directory of Open Access Journals (Sweden)
Sophie Marchal
Full Text Available Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' superior olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method depend largely on rigour in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
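The benefit of a piecewise-cubic PSAC over a plain lookup table can be illustrated with an off-the-shelf cubic spline (not the paper's closed-form spectral-domain coefficients): for the same 256-interval phase table, spline evaluation cuts the maximum absolute error by several orders of magnitude relative to linear interpolation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# 256-interval phase-to-sinusoid table and a cubic-spline PSAC built on it
phase = np.linspace(0.0, 2.0 * np.pi, 257)
table = np.sin(phase)
table[-1] = table[0]  # enforce exact periodicity for the spline
psac = CubicSpline(phase, table, bc_type="periodic")

# evaluate both converters on a dense phase grid against the true sine
fine = np.linspace(0.0, 2.0 * np.pi, 100001)
mae_spline = np.max(np.abs(psac(fine) - np.sin(fine)))
mae_linear = np.max(np.abs(np.interp(fine, phase, table) - np.sin(fine)))
print(mae_spline < 2e-8, mae_linear > 1e-5)  # → True True
```

The error scales as h⁴ for the cubic spline versus h² for linear interpolation (h the table step), which is what buys the large SFDR at modest table sizes.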
The non-monotonic shear-thinning flow of two strongly cohesive concentrated suspensions
Buscall, Richard; Kusuma, Tiara E.; Stickland, Anthony D.; Rubasingha, Sayuri; Scales, Peter J.; Teo, Hui-En; Worrall, Graham L.
2014-01-01
The behaviour in simple shear of two concentrated and strongly cohesive mineral suspensions showing highly non-monotonic flow curves is described. Two rheometric test modes were employed, controlled stress and controlled shear-rate. In controlled stress mode the materials showed runaway flow above a yield stress, which, for one of the suspensions, varied substantially in value and seemingly at random from one run to the next, such that the up flow-curve appeared to be quite irreproducible. Th...
The regularized monotonicity method: detecting irregular indefinite inclusions
DEFF Research Database (Denmark)
Garde, Henrik; Staboulis, Stratos
2018-01-01
inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...
Generalized monotonicity from global minimization in fourth-order ODEs
M.A. Peletier (Mark)
2000-01-01
We consider solutions of the stationary Extended Fisher-Kolmogorov equation with general potential that are global minimizers of an associated variational problem. We present results that relate the global minimization property to a generalized concept of monotonicity of the solutions.
Monotone difference schemes for weakly coupled elliptic and parabolic systems
P. Matus (Piotr); F.J. Gaspar Lorenz (Franscisco); L. M. Hieu (Le Minh); V.T.K. Tuyen (Vo Thi Kim)
2017-01-01
The present paper is devoted to the development of the theory of monotone difference schemes, approximating the so-called weakly coupled system of linear elliptic and quasilinear parabolic equations. Similarly to the scalar case, the canonical form of the vector-difference schemes is
Pathwise duals of monotone and additive Markov processes
Czech Academy of Sciences Publication Activity Database
Sturm, A.; Swart, Jan M.
-, - (2018) ISSN 0894-9840 R&D Projects: GA ČR GAP201/12/2613 Institutional support: RVO:67985556 Keywords : pathwise duality * monotone Markov process * additive Markov process * interacting particle system Subject RIV: BA - General Mathematics Impact factor: 0.854, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/swart-0465436.pdf
Statistical analysis of sediment toxicity by additive monotone regression splines
Boer, de W.J.; Besten, den P.J.; Braak, ter C.J.F.
2002-01-01
Modeling nonlinearity and thresholds in dose-effect relations is a major challenge, particularly in noisy data sets. Here we show the utility of nonlinear regression with additive monotone regression splines. These splines lead almost automatically to the estimation of thresholds. We applied this
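Monotone fitting of noisy dose-effect data, the computational core behind monotone regression splines, can be sketched with the pool-adjacent-violators algorithm; the dose-effect numbers below are invented for illustration:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit."""
    blocks = [[v, 1] for v in map(float, y)]   # [sum, count] per block
    out = []
    for b in blocks:
        out.append(b)
        # merge while the previous block mean exceeds the current one
        while len(out) > 1 and out[-2][0] / out[-2][1] > out[-1][0] / out[-1][1]:
            s, c = out.pop()
            out[-1][0] += s
            out[-1][1] += c
    return np.concatenate([[s / c] * c for s, c in out])

# invented noisy dose-effect data: flat below a threshold dose, rising above
effect = np.array([0.10, 0.00, 0.12, 0.05, 0.90, 1.10, 1.05, 1.60, 1.80, 1.90])
fit = pava(effect)
print(np.all(np.diff(fit) >= 0))  # → True
```

The fitted step function stays flat over the sub-threshold doses and jumps where the effect sets in, which is how monotone fits "lead almost automatically" to threshold estimates.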
Interval Routing and Minor-Monotone Graph Parameters
Bakker, E.M.; Bodlaender, H.L.; Tan, R.B.; Leeuwen, J. van
2006-01-01
We survey a number of minor-monotone graph parameters and their relationship to the complexity of routing on graphs. In particular we compare the interval routing parameters κslir(G) and κsir(G) with Colin de Verdière's graph invariant μ(G) and its variants λ(G) and κ(G). We show that for all the
On monotonic solutions of an integral equation of Abel type
International Nuclear Information System (INIS)
Darwish, Mohamed Abdalla
2007-08-01
We present an existence theorem of monotonic solutions for a quadratic integral equation of Abel type in C[0, 1]. The famous Chandrasekhar integral equation is considered as a special case. The concept of measure of noncompactness and a fixed point theorem due to Darbo are the main tools in carrying out our proof. (author)
Rational functions with maximal radius of absolute monotonicity
Loczi, Lajos; Ketcheson, David I.
2014-01-01
Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend
Modelling Embedded Systems by Non-Monotonic Refinement
Mader, Angelika H.; Marincic, J.; Wupper, H.
2008-01-01
This paper addresses the process of modelling embedded systems for formal verification. We propose a modelling process built on non-monotonic refinement and a number of guidelines. The outcome of the modelling process is a model, together with a correctness argument that justifies our modelling
Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets
International Nuclear Information System (INIS)
Fathizadeh, M.
1991-01-01
The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet will be raised linearly to about the 1 kA level, and after a small transition period (1 ms to 10 ms typical) the current will be reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high gain current loop are utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. Pspice software was used to design and simulate the power supply performance under ramping and investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and tracking error of ±5×10⁻⁴ was needed. 3 refs., 5 figs.
High accuracy line positions of the ν1 fundamental band of ¹⁴N₂¹⁶O
Alsaif, Bidoor
2018-03-08
The ν1 fundamental band of N2O is examined by a novel spectrometer that relies on the frequency locking of an external-cavity quantum cascade laser around 7.8 μm to a near-infrared Tm-based frequency comb at 1.9 μm. Due to the large tunability, nearly 70 lines in the 1240–1310 cm⁻¹ range of the ν1 band of N2O, from P(40) to R(31), are measured for the first time with an absolute frequency calibration and an uncertainty from 62 to 180 kHz, depending on the line. Accurate values of the spectroscopic constants of the upper state are derived from a fit of the line centers (rms ≈ 4.8 × 10⁻⁶ cm⁻¹, or 144 kHz). The ν1 transitions measured here in the Doppler regime validate high-accuracy predictions based on sub-Doppler measurements of the ν3 and ν3-ν1 transitions.
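As a sketch of how upper-state constants come out of such a line-center fit: for a fundamental band, P- and R-branch positions follow ν(m) ≈ ν₀ + (B′+B″)m + (B′−B″)m², with m = −J for P(J) and m = J+1 for R(J) (centrifugal distortion neglected). Fitting a quadratic to synthetic centers recovers the constants; the numbers below are illustrative, not the paper's fitted values:

```python
import numpy as np

# illustrative constants (cm^-1), not the paper's fitted values
nu0, B_low, B_up = 1284.9, 0.4190, 0.4175
m = np.concatenate([np.arange(-40, 0), np.arange(1, 33)])  # P(40)..R(31)
centers = nu0 + (B_up + B_low) * m + (B_up - B_low) * m**2

c2, c1, c0 = np.polyfit(m, centers, 2)   # quadratic, linear, constant terms
nu0_fit = c0                             # band origin
B_up_fit = (c1 + c2) / 2.0               # upper-state rotational constant
B_low_fit = (c1 - c2) / 2.0              # lower-state rotational constant
print(round(nu0_fit, 4), round(B_up_fit, 4), round(B_low_fit, 4))
# → 1284.9 0.4175 0.419
```

On real data the fit is weighted by the per-line uncertainties and includes distortion terms, but the principle is the same: branch structure maps onto polynomial coefficients.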
International Nuclear Information System (INIS)
Arnoldi, E.; Ramos-Duran, L.; Abro, J.A.; Costello, P.; Zwerner, P.L.; Schoepf, U.J.; Nikolaou, K.; Reiser, M.F.
2010-01-01
The purpose of this study was to evaluate the diagnostic performance of coronary CT angiography (coronary CTA) using prospective ECG triggering (PT) for the detection of significant coronary artery stenosis compared to invasive coronary angiography (ICA). A total of 20 patients underwent coronary CTA with PT using a 128-slice CT scanner (Definition AS+, Siemens) and ICA. All coronary CTA studies were evaluated for significant coronary artery stenoses (≥50% luminal narrowing) by 2 observers in consensus using the AHA 15-segment model. Findings on CTA were compared to those on ICA. Coronary CTA using PT had sensitivities of 88% and 100%, specificities of 95% and 88%, positive predictive values of 80% and 92%, and negative predictive values of 97% and 100% for diagnosing significant coronary artery stenosis on per-segment and per-patient analyses, respectively. The mean effective radiation dose equivalent of CTA was 2.6±1 mSv. Coronary CTA using PT enables non-invasive diagnosis of significant coronary artery stenosis with high diagnostic accuracy in comparison to ICA and is associated with comparably low radiation exposure. (orig.)
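The per-analysis figures above are the four standard diagnostic-accuracy measures, computed from a 2×2 confusion table; the counts below are hypothetical, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # diseased correctly called positive
        "specificity": tn / (tn + fp),  # healthy correctly called negative
        "ppv": tp / (tp + fp),          # positive calls that are true
        "npv": tn / (tn + fn),          # negative calls that are true
    }

# hypothetical per-patient counts, not the study's raw data
res = diagnostic_metrics(tp=7, fp=1, fn=1, tn=11)
print({k: round(v, 2) for k, v in res.items()})
```

Note that sensitivity and specificity depend only on disease status, while PPV and NPV also shift with disease prevalence in the tested cohort.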
High accuracy line positions of the ν1 fundamental band of ¹⁴N₂¹⁶O
AlSaif, Bidoor; Lamperti, Marco; Gatti, Davide; Laporta, Paolo; Fermann, Martin; Farooq, Aamir; Lyulin, Oleg; Campargue, Alain; Marangoni, Marco
2018-05-01
The ν1 fundamental band of N2O is examined by a novel spectrometer that relies on the frequency locking of an external-cavity quantum cascade laser around 7.8 μm to a near-infrared Tm-based frequency comb at 1.9 μm. Due to the large tunability, nearly 70 lines in the 1240–1310 cm⁻¹ range of the ν1 band of N2O, from P(40) to R(31), are measured for the first time with an absolute frequency calibration and an uncertainty from 62 to 180 kHz, depending on the line. Accurate values of the spectroscopic constants of the upper state are derived from a fit of the line centers (rms ≈ 4.8 × 10⁻⁶ cm⁻¹, or 144 kHz). The ν1 transitions measured here in the Doppler regime validate high-accuracy predictions based on sub-Doppler measurements of the ν3 and ν3-ν1 transitions.
On the impact of improved dosimetric accuracy on head and neck high dose rate brachytherapy.
Peppa, Vasiliki; Pappas, Eleftherios; Major, Tibor; Takácsi-Nagy, Zoltán; Pantelis, Evaggelos; Papagiannis, Panagiotis
2016-07-01
To study the effect of finite patient dimensions and tissue heterogeneities in head and neck high dose rate brachytherapy. The current practice of TG-43 dosimetry was compared to patient-specific dosimetry obtained using Monte Carlo simulation for a sample of 22 patient plans. The dose distributions were compared in terms of percentage dose differences as well as differences in dose volume histogram and radiobiological indices for the target and organs at risk (mandible, parotids, skin, and spinal cord). Noticeable percentage differences exist between TG-43 and patient-specific dosimetry, mainly at low dose points. Expressed as fractions of the planning aim dose, percentage differences are within 2%, with a general TG-43 overestimation except for the spine. These differences are consistent, resulting in statistically significant differences of dose volume histogram and radiobiology indices. Absolute differences of these indices are, however, too small to warrant clinical importance in terms of tumor control or complication probabilities. The introduction of dosimetry methods characterized by improved accuracy is a valuable advancement. It does not appear, however, to influence dose prescription or call for amendment of clinical recommendations for the mobile tongue, base of tongue, and floor of mouth patient cohort of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets
International Nuclear Information System (INIS)
Fathizadeh, M.
1991-01-01
The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet will be raised linearly to about the 1 kA level, and after a small transition period (1 ms to 10 ms typical) the current will be reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high gain current loop are utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. Pspice software was used to design and simulate the power supply performance under ramping and investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and tracking error of ±5×10⁻⁴ was needed.
Quantitative accuracy of serotonergic neurotransmission imaging with high-resolution 123I SPECT
International Nuclear Information System (INIS)
Kuikka, J.T.
2004-01-01
Aim: Serotonin transporter (SERT) imaging can be used to study the role of regional abnormalities of neurotransmitter release in various mental disorders, and to study the mechanism of action of therapeutic drugs or drugs of abuse. We examine the quantitative accuracy and reproducibility that can be achieved with high-resolution SPECT of serotonergic neurotransmission. Method: The binding potential (BP) of a ¹²³I-labeled tracer specific for midbrain SERT was assessed in 20 healthy persons. The effects of scatter, attenuation, partial volume, misregistration and statistical noise were estimated using phantom and human studies. Results: Without any correction, BP was underestimated by 73%. The partial volume error was the major component of this underestimation, whereas the most critical error for reproducibility was misplacement of the region of interest (ROI). Conclusion: Proper ROI registration and the use of a multiple-head gamma camera with transmission-based scatter correction yield more accurate results. However, due to the small dimensions of the midbrain SERT structures and the poor spatial resolution of SPECT, the improvement without partial volume correction is not great enough to restore the estimate of BP to the true value. (orig.)
High Accuracy Ground-based near-Earth-asteroid Astrometry using Synthetic Tracking
Zhai, Chengxing; Shao, Michael; Saini, Navtej; Sandhu, Jagmit; Werne, Thomas; Choi, Philip; Ely, Todd A.; Jacobs, Christopher S.; Lazio, Joseph; Martin-Mur, Tomas J.; Owen, William M.; Preston, Robert; Turyshev, Slava; Michell, Adam; Nazli, Kutay; Cui, Isaac; Monchama, Rachel
2018-01-01
Accurate astrometry is crucial for determining the orbits of near-Earth asteroids (NEAs). Further, the future of deep space high data rate communications is likely to be optical communications, such as the Deep Space Optical Communications package that is part of the baseline payload for the planned Psyche Discovery mission to the asteroid Psyche. We have recently upgraded our instrument on the Pomona College 1 m telescope, at JPL's Table Mountain Facility, for conducting synthetic tracking by taking many short exposure images. These images can then be combined in post-processing to track both the asteroid and reference stars and yield accurate astrometry. Utilizing the precision of the current and future Gaia data releases, the JPL-Pomona College effort is now demonstrating precision astrometry on NEAs, which is likely to be of considerable value for cataloging NEAs. Further, treating NEAs as proxies of future spacecraft that carry optical communication lasers, our results serve as a measure of the astrometric accuracy that could be achieved for future plane-of-sky optical navigation.
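The shift-and-add idea behind synthetic tracking can be shown in a toy simulation: co-adding short exposures after shifting each frame along the (assumed known) asteroid motion concentrates the signal that a plain co-add smears along a trail:

```python
import numpy as np

rng = np.random.default_rng(2)
# 20 short exposures of a faint point source drifting 1 px/frame in
# both axes on a noisy sky; motion is assumed known for the shifts
n_frames, size, flux = 20, 64, 3.0
stack_fixed = np.zeros((size, size))
stack_tracked = np.zeros((size, size))
for k in range(n_frames):
    frame = rng.normal(0.0, 1.0, size=(size, size))  # background noise
    frame[10 + k, 10 + k] += flux                    # the moving asteroid
    stack_fixed += frame                             # plain co-add: trailed
    # shift the frame back along the motion, then co-add: untrailed
    stack_tracked += np.roll(frame, shift=(-k, -k), axis=(0, 1))

# the tracked stack concentrates the flux into a single bright pixel
print(stack_tracked[10, 10] > stack_fixed.max())
```

In the tracked stack the signal grows linearly with the number of frames while the noise grows only as its square root, which is what makes faint, fast-moving NEAs detectable and their centroids measurable.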
Optimal design of a high accuracy photoelectric auto-collimator based on position sensitive detector
Yan, Pei-pei; Yang, Yong-qing; She, Wen-ji; Liu, Kai; Jiang, Kai; Duan, Jing; Shan, Qiusha
2018-02-01
A high-accuracy photoelectric autocollimator based on a position sensitive detector (PSD) was designed as an integrated structure comprising a light source, an optical lens group, the PSD sensor, and its hardware and software processing system. A telephoto objective design was chosen, which effectively reduces the length, weight and volume of the optical system, and simulation-based design and analysis of the autocollimator optics were carried out. The technical indicators presented in this paper are: measuring resolution better than 0.05″; field of view 2ω = 0.4° × 0.4°; measuring range ±5′; full-range measurement error less than 0.2″; and a working distance of 10 m, applicable to precise small-angle measurement. Aberration analysis indicates that the MTF is close to the diffraction limit and the spots in the spot diagram are much smaller than the Airy disk. Through optomechanical design optimization, the total length of the telephoto lens is only 450 mm while image quality is maintained.
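The resolution figures quoted above are tied to the autocollimation relation d = 2fθ: a mirror tilt θ displaces the returned spot on the PSD by twice the effective focal length times the tilt. A quick check with an assumed PSD resolution (the 450 mm length comes from the record; the 0.1 μm figure is hypothetical):

```python
import numpy as np

# autocollimation: a mirror tilt theta shifts the returned spot on the
# PSD by d = 2 * f * theta (small-angle approximation)
f = 0.45        # effective focal length [m]; 450 mm length per the record
d_min = 0.1e-6  # hypothetical PSD displacement resolution [m]
theta = d_min / (2.0 * f)               # smallest resolvable tilt [rad]
arcsec = np.degrees(theta) * 3600.0
print(round(arcsec, 3))  # → 0.023
```

Under these assumed numbers the angular resolution (~0.023″) is comfortably below the 0.05″ specification, illustrating why a long effective focal length is favored despite the compact telephoto packaging.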
Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing
International Nuclear Information System (INIS)
Bailey, David
2005-01-01
In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard. If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digit accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the
Monotonic and Cyclic Behavior of DIN 34CrNiMo6 Tempered Alloy Steel
Directory of Open Access Journals (Sweden)
Ricardo Branco
2016-04-01
Full Text Available This paper aims at studying the monotonic and cyclic plastic deformation behavior of DIN 34CrNiMo6 high strength steel. Monotonic and low-cycle fatigue tests are conducted in ambient air, at room temperature, using standard 8-mm diameter specimens. The former tests are carried out under position control with constant displacement rate. The latter are performed under fully-reversed strain-controlled conditions, using the single-step test method, with strain amplitudes lying between ±0.4% and ±2.0%. After the tests, the fracture surfaces are examined by scanning electron microscopy in order to characterize the surface morphologies and identify the main failure mechanisms. Regardless of the strain amplitude, a softening behavior was observed throughout the entire life. Total strain energy density, defined as the sum of both tensile elastic and plastic strain energies, was revealed to be an adequate fatigue damage parameter for short and long lives.
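The total strain energy density used above as a fatigue damage parameter can be illustrated with a short calculation. This is a minimal sketch under common low-cycle-fatigue assumptions (a Masing-type stabilized hysteresis loop, Morrow's estimate for the plastic part); the material constants in the example are illustrative, not the paper's fitted values for DIN 34CrNiMo6.

```python
def total_strain_energy_density(stress_amp_mpa, plastic_strain_amp, e_mod_mpa, n_prime):
    """Total strain energy density per cycle (MJ/m^3):
    tensile elastic part + plastic part (Morrow estimate for a Masing material)."""
    delta_sigma = 2.0 * stress_amp_mpa        # stress range
    delta_eps_p = 2.0 * plastic_strain_amp    # plastic strain range
    w_elastic = stress_amp_mpa ** 2 / (2.0 * e_mod_mpa)                  # tensile elastic energy
    w_plastic = ((1.0 - n_prime) / (1.0 + n_prime)) * delta_sigma * delta_eps_p
    return w_elastic + w_plastic

# Illustrative numbers (E ~ 206 GPa, cyclic hardening exponent n' ~ 0.12):
w_total = total_strain_energy_density(800.0, 0.005, 206000.0, 0.12)
```

Fatigue life would then be correlated against `w_total` across the tested strain amplitudes.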
A Mathematical Model for Non-monotonic Deposition Profiles in Deep Bed Filtration Systems
DEFF Research Database (Denmark)
Yuan, Hao; Shapiro, Alexander
2011-01-01
A mathematical model for suspension/colloid flow in porous media and non-monotonic deposition is proposed. It accounts for the migration of particles associated with the pore walls via the second energy minimum (surface-associated phase). The surface-associated phase migration is characterized by advection and diffusion/dispersion. The proposed model is able to produce a non-monotonic deposition profile. A set of methods for estimating the modeling parameters is provided for the case of minimal particle release; the estimation can be easily performed with available experimental information. The numerical modeling results agree closely with the experimental observations, which demonstrates the ability of the model to capture a non-monotonic deposition profile in practice. An additional equation describing a mobile population behaving differently from the injected population seems to be a sufficient
Dongarra, Jack; Faverge, Mathieu; Ltaief, Hatem; Luszczek, Piotr R.
2013-09-18
The LU factorization is an important numerical algorithm for solving systems of linear equations in science and engineering and is a characteristic of many dense linear algebra computations. For example, it has become the de facto numerical algorithm implemented within the LINPACK benchmark to rank the most powerful supercomputers in the world, collected by the TOP500 website. Multicore processors continue to present challenges to the development of fast and robust numerical software due to the increasing levels of hardware parallelism and widening gap between core and memory speeds. In this context, the difficulty in developing new algorithms for the scientific community resides in the combination of two goals: achieving high performance while maintaining the accuracy of the numerical algorithm. This paper proposes a new approach for computing the LU factorization in parallel on multicore architectures, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization due to its memory-bound characteristic as well as the atomicity of selecting the appropriate pivots. Our approach uses a parallel fine-grained recursive formulation of the panel factorization step and implements the update of the trailing submatrix with the tile algorithm. Based on conflict-free partitioning of the data and lockless synchronization mechanisms, our implementation lets the overall computation flow naturally without contention. The dynamic runtime system called QUARK is then able to schedule tasks with heterogeneous granularities and to transparently introduce algorithmic lookahead. The performance results of our implementation are competitive compared to the currently available software packages and libraries. For example
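The numerical baseline that the paper's parallel scheme must preserve is LU factorization with partial pivoting. Below is a minimal serial sketch in NumPy; the paper's actual contribution (the recursive panel factorization, tile updates, and QUARK scheduling) is not reproduced here, only the standard algorithm whose numerical quality it sustains.

```python
import numpy as np

def lu_partial_pivoting(a):
    """Unblocked LU with partial pivoting: returns P, L, U with P @ A = L @ U."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))   # pivot row: largest |entry| in column k
        if p != k:
            a[[k, p]] = a[[p, k]]             # swap rows of the working matrix
            piv[[k, p]] = piv[[p, k]]
        a[k + 1:, k] /= a[k, k]               # multipliers (L below the diagonal)
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])  # trailing update
    l = np.tril(a, -1) + np.eye(n)
    u = np.triu(a)
    perm = np.eye(n)[piv]
    return perm, l, u
```

In the paper's setting, the column-pivot search is the memory-bound, hard-to-parallelize "panel" step, while the trailing-submatrix update is the compute-rich part handled by tile algorithms.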
Reduced Set of Virulence Genes Allows High Accuracy Prediction of Bacterial Pathogenicity in Humans
Iraola, Gregorio; Vazquez, Gustavo; Spangenberg, Lucía; Naya, Hugo
2012-01-01
Although there have been great advances in understanding bacterial pathogenesis, there is still a lack of integrative information about what makes a bacterium a human pathogen. The advent of high-throughput sequencing technologies has dramatically increased the number of completed bacterial genomes, for both known human pathogenic and non-pathogenic strains; this information is now available to investigate genetic features that determine pathogenic phenotypes in bacteria. In this work we determined presence/absence patterns of different virulence-related genes among more than finished bacterial genomes from both human pathogenic and non-pathogenic strains, belonging to different taxonomic groups (e.g., Actinobacteria, Gammaproteobacteria, Firmicutes). An accuracy of 95% is obtained when classifying human pathogens and non-pathogens, using a cross-validation scheme with in-fold feature selection. A reduced subset of highly informative genes () is presented and applied to an external validation set. The statistical model was implemented in the BacFier v1.0 software (freely available at ), which displays not only the prediction (pathogen/non-pathogen) and an associated probability for pathogenicity, but also the presence/absence vector for the analyzed genes, so it is possible to decipher the subset of virulence genes responsible for the classification of the analyzed genome. Furthermore, we discuss the biological relevance for bacterial pathogenesis of the core set of genes, corresponding to eight functional categories, all with evident and documented association with the phenotypes of interest. We also analyze which functional categories of virulence genes were more distinctive for pathogenicity in each taxonomic group, which seems to be a completely new kind of information and could lead to important evolutionary conclusions. PMID:22916122
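The presence/absence classification idea can be sketched with a toy model. The gene panel, genome vectors, and nearest-centroid rule below are all hypothetical stand-ins chosen for brevity; the paper's actual statistical model is the one implemented in BacFier.

```python
import numpy as np

# Hypothetical presence/absence (1/0) profiles over a 5-gene virulence panel.
genomes = np.array([
    [1, 1, 0, 1, 0],   # pathogen
    [1, 1, 1, 1, 0],   # pathogen
    [0, 0, 1, 0, 1],   # non-pathogen
    [0, 1, 0, 0, 1],   # non-pathogen
])
labels = np.array([1, 1, 0, 0])   # 1 = pathogen, 0 = non-pathogen

def predict(profile):
    """Nearest-centroid call on a binary virulence-gene profile:
    compare L1 distance to the pathogen and non-pathogen class centroids."""
    c1 = genomes[labels == 1].mean(axis=0)
    c0 = genomes[labels == 0].mean(axis=0)
    d1 = np.abs(profile - c1).sum()
    d0 = np.abs(profile - c0).sum()
    return 1 if d1 < d0 else 0
```

A real pipeline would add in-fold feature selection inside each cross-validation split, as the abstract describes, to avoid optimistic accuracy estimates.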
Kim, Ji Hyun; Kim, Sung Eun; Cho, Yu Kyung; Lim, Chul-Hyun; Park, Moo In; Hwang, Jin Won; Jang, Jae-Sik; Oh, Minkyung
2018-01-30
Although high-resolution manometry (HRM) has the advantage of visual intuitiveness, its diagnostic validity remains under debate. The aim of this study was to evaluate the diagnostic accuracy of HRM for esophageal motility disorders. Six staff members and eight trainees were recruited for the study. In total, 40 patients enrolled in manometry studies at three institutes were selected. Captured images of 10 representative swallows and a single swallow in analyzing mode, in both high-resolution pressure topography (HRPT) and conventional line tracing formats, were provided with calculated metrics. Assessments of esophageal motility disorders showed fair agreement for HRPT and moderate agreement for conventional line tracing (κ = 0.40 and 0.58, respectively). With the HRPT format, the κ value was higher in category A (esophagogastric junction [EGJ] relaxation abnormality) than in categories B (major body peristalsis abnormalities with intact EGJ relaxation) and C (minor body peristalsis abnormalities or normal body peristalsis with intact EGJ relaxation). The overall exact diagnostic accuracy for the HRPT format was 58.8%, and the rater's position was an independent factor for exact diagnostic accuracy. The diagnostic accuracy for major disorders was 63.4% with the HRPT format. The frequency of major discrepancies was higher for category B disorders than for category A disorders (38.4% vs 15.4%; P < 0.001). The interpreter's experience significantly affected the exact diagnostic accuracy of HRM for esophageal motility disorders. The diagnostic accuracy for major disorders was higher for achalasia than for distal esophageal spasm and jackhammer esophagus.
DIRECT GEOREFERENCING : A NEW STANDARD IN PHOTOGRAMMETRY FOR HIGH ACCURACY MAPPING
Directory of Open Access Journals (Sweden)
A. Rizaldy
2012-07-01
Full Text Available Direct georeferencing is a new method in photogrammetry, especially in the digital camera era. Theoretically, this method requires neither ground control points (GCPs) nor aerial triangulation (AT) to process aerial photography into ground coordinates. Compared with the old method, it has three main advantages at the same accuracy: faster data processing, a simpler workflow, and a less expensive project. Direct georeferencing uses two devices: a GPS receiver and an IMU. The GPS records the camera coordinates (X, Y, Z), and the IMU records the camera orientation (omega, phi, kappa). Both sets of parameters are merged into the exterior orientation (EO) parameters. These parameters are required for the next steps in a photogrammetric project, such as stereocompilation, DSM generation, orthorectification and mosaicking. The accuracy of this method was tested on a topographic mapping project in Medan, Indonesia. The large-format digital camera Ultracam X from Vexcel was used, with an IGI AeroControl GPS/IMU. Nineteen independent check points (ICPs) were used to determine the accuracy. Horizontal accuracy is 0.356 meters and vertical accuracy is 0.483 meters. Data with this accuracy can be used for a 1:2,500 map-scale project.
Towards Building Reliable, High-Accuracy Solar Irradiance Database For Arid Climates
Munawwar, S.; Ghedira, H.
2012-12-01
Middle East's growing interest in renewable energy has led to increased activity in solar technology development with the recent commissioning of several utility-scale solar power projects and many other commercial installations across the Arabian Peninsula. The region, lying in a virtually rainless sunny belt with a typical daily average solar radiation exceeding 6 kWh/m2, is also one of the most promising candidates for solar energy deployment. However, it is not the availability of resource, but its characterization and reasonably accurate assessment that determines the application potential. Solar irradiance, magnitude and variability inclusive, is the key input in assessing the economic feasibility of a solar system. The accuracy of such data is of critical importance for realistic on-site performance estimates. This contribution aims to identify the key stages in developing a robust solar database for desert climate by focusing on the challenges that an arid environment presents to parameterization of solar irradiance attenuating factors. Adjustments are proposed based on the currently available resource assessment tools to produce high quality data for assessing bankability. Establishing and maintaining ground solar irradiance measurements is an expensive affair and fairly limited in time (recently operational) and space (fewer sites) in the Gulf region. Developers within solar technology industry, therefore, rely on solar radiation models and satellite-derived data for prompt resource assessment needs. It is imperative that such estimation tools are as accurate as possible. While purely empirical models have been widely researched and validated in the Arabian Peninsula's solar modeling history, they are known to be intrinsically site-specific. A primal step to modeling is an in-depth understanding of the region's climate, identifying the key players attenuating radiation and their appropriate characterization to determine solar irradiance. Physical approach
The research of digital circuit system for high accuracy CCD of portable Raman spectrometer
Yin, Yu; Cui, Yongsheng; Zhang, Xiuda; Yan, Huimin
2013-08-01
Raman spectroscopy is widely used because it can identify many types of molecular structures and materials. The portable Raman spectrometer has become a hot direction in spectrometer development for its convenience in handheld operation and real-time detection, which is superior to the traditional Raman spectrometer with its heavy weight and bulky size. There is still a gap in measurement sensitivity between portable and traditional devices. However, a portable Raman spectrometer with Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy (SHINERS) technology can enhance the Raman signal significantly, by several orders of magnitude, giving consideration to both measurement sensitivity and mobility. This paper proposes a design and implementation of the driver and digital circuit for the high-accuracy CCD sensor that is the core part of a portable spectrometer. The main target of the design is to reduce the dark-current generation rate and increase signal sensitivity during long integration times and in weak-signal environments. We use a back-thinned CCD image sensor from Hamamatsu Corporation with high sensitivity, low noise and large dynamic range. In order to maximize this CCD sensor's performance while minimizing the overall size of the device to achieve the project targets, we carefully designed a peripheral circuit for the CCD sensor. The design is mainly composed of a multi-voltage circuit, a timing-generation circuit, a driving circuit and A/D conversion parts. As the most important power-supply circuit, the multi-voltage circuit, with 12 independent voltages, is designed around a reference power-supply IC; each output is set to the specified voltage value by an amplifier configured as a low-pass filter, which allows the user to obtain a highly stable and accurate voltage with low noise. What's more, to make our design easy to debug, a CPLD is selected to generate the timing signals. The A/D converter chip consists of a correlated
In-depth, high-accuracy proteomics of sea urchin tooth organic matrix
Directory of Open Access Journals (Sweden)
Mann Matthias
2008-12-01
Full Text Available Abstract Background The organic matrix contained in biominerals plays an important role in regulating mineralization and in determining biomineral properties. However, most components of biomineral matrices remain unknown at present. In sea urchin tooth, which is an important model for developmental biology and biomineralization, only a few matrix components have been identified. The recent publication of the Strongylocentrotus purpuratus genome sequence made possible not only the identification of genes potentially coding for matrix proteins, but also the direct identification of proteins contained in matrices of skeletal elements by in-depth, high-accuracy proteomic analysis. Results We identified 138 proteins in the matrix of tooth powder. Only 56 of these proteins were previously identified in the matrices of test (shell) and spine. Among the novel components was an interesting group of five proteins containing alanine- and proline-rich neutral or basic motifs separated by acidic glycine-rich motifs. In addition, four of the five proteins contained either one or two predicted Kazal protease inhibitor domains. The major components of tooth matrix were however largely identical to the set of spicule matrix proteins and MSP130-related proteins identified in test (shell) and spine matrix. Comparison of the matrices of crushed teeth to intact teeth revealed a marked dilution of known intracrystalline matrix proteins and a concomitant increase in some intracellular proteins. Conclusion This report presents the most comprehensive list of sea urchin tooth matrix proteins available at present. The complex mixture of proteins identified may reflect many different aspects of the mineralization process. A comparison between intact tooth matrix, presumably containing odontoblast remnants, and crushed tooth matrix served to differentiate between matrix components and possible contributions of cellular remnants. Because LC-MS/MS-based methods directly
Automated, high accuracy classification of Parkinsonian disorders: a pattern recognition approach.
Directory of Open Access Journals (Sweden)
Andre F Marquand
Full Text Available Progressive supranuclear palsy (PSP), multiple system atrophy (MSA) and idiopathic Parkinson's disease (IPD) can be clinically indistinguishable, especially in the early stages, despite distinct patterns of molecular pathology. Structural neuroimaging holds promise for providing objective biomarkers for discriminating these diseases at the single-subject level, but all studies to date have reported incomplete separation of disease groups. In this study, we employed multi-class pattern recognition to assess the value of anatomical patterns derived from a widely available structural neuroimaging sequence for automated classification of these disorders. To achieve this, 17 patients with PSP, 14 with IPD and 19 with MSA were scanned using structural MRI along with 19 healthy controls (HCs). An advanced probabilistic pattern recognition approach was employed to evaluate the diagnostic value of several pre-defined anatomical patterns for discriminating the disorders, including: (i) a subcortical motor network; (ii) each of its component regions; and (iii) the whole brain. All disease groups could be discriminated simultaneously with high accuracy using the subcortical motor network. The region providing the most accurate predictions overall was the midbrain/brainstem, which discriminated all disease groups from one another and from HCs. The subcortical network also produced more accurate predictions than the whole brain and all of its constituent regions. PSP was accurately predicted from the midbrain/brainstem, cerebellum and all basal ganglia compartments; MSA from the midbrain/brainstem and cerebellum; and IPD from the midbrain/brainstem only. This study demonstrates that automated analysis of structural MRI can accurately predict diagnosis in individual patients with Parkinsonian disorders, and identifies distinct patterns of regional atrophy particularly useful for this process.
Energy Technology Data Exchange (ETDEWEB)
Nabavizadeh, S.A.; Assadsangabi, R.; Hajmomenian, M.; Vossough, A. [Perelman School of Medicine of the University of Pennsylvania, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA (United States); Santi, M. [Perelman School of Medicine of the University of Pennsylvania, Department of Pathology, Children's Hospital of Philadelphia, Philadelphia, PA (United States)
2015-05-01
Pilomyxoid astrocytoma (PMA) is a relatively new tumor entity which was added to the 2007 WHO Classification of tumors of the central nervous system. The goal of this study is to utilize arterial spin labeling (ASL) perfusion imaging to differentiate PMA from pilocytic astrocytoma (PA). Pulsed ASL and conventional MRI sequences of patients with PMA and PA in the past 5 years were retrospectively evaluated. Patients with a history of radiation or treatment with anti-angiogenic drugs were excluded. A total of 24 patients (9 PMA, 15 PA) were included. There were statistically significant differences between PMA and PA in the mean tumor/gray matter (GM) cerebral blood flow (CBF) ratio (1.3 vs 0.4, p < 0.001) and the maximum tumor/GM CBF ratio (2.3 vs 1, p < 0.001). The area under the receiver operating characteristic (ROC) curve for differentiation of PMA from PA was 0.91 using mean tumor CBF, 0.95 using the mean tumor/GM CBF ratio, and 0.89 using the maximum tumor/GM CBF ratio. Using a threshold value of 0.91, the mean tumor/GM CBF ratio diagnosed PMA with 77% sensitivity and 100% specificity; a threshold value of 0.7 provided 88% sensitivity and 86% specificity. There was no statistically significant difference between the two tumors in enhancement pattern (p = 0.33), internal architecture (p = 0.15), or apparent diffusion coefficient (ADC) values (p = 0.07). ASL imaging has high accuracy in differentiating PMA from PA. The results of this study may have important applications in prognostication and treatment planning, especially in patients with less accessible tumors such as hypothalamic-chiasmatic gliomas. (orig.)
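The sensitivity/specificity figures reported at a given CBF-ratio threshold follow from a simple counting rule: call PMA when the mean tumor/GM CBF ratio is at or above the threshold (PMA ratios run higher than PA ratios in this study). The sketch below uses made-up ratio values, not the study's patient data.

```python
def sens_spec(ratios_pma, ratios_pa, threshold):
    """Sensitivity and specificity of calling PMA when ratio >= threshold.
    ratios_pma: mean tumor/GM CBF ratios for true PMA cases (positives).
    ratios_pa:  the same ratio for true PA cases (negatives)."""
    tp = sum(r >= threshold for r in ratios_pma)   # PMA correctly called PMA
    tn = sum(r < threshold for r in ratios_pa)     # PA correctly called PA
    return tp / len(ratios_pma), tn / len(ratios_pa)

# Hypothetical ratios, evaluated at the study's 0.91 cutoff:
sens, spec = sens_spec([1.5, 1.2, 0.8], [0.3, 0.5, 1.0], 0.91)
```

Sweeping the threshold over all observed ratios and plotting sensitivity against (1 - specificity) yields the ROC curve whose area the abstract reports.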
Functional knowledge transfer for high-accuracy prediction of under-studied biological processes.
Directory of Open Access Journals (Sweden)
Christopher Y Park
Full Text Available A key challenge in genetics is identifying the functional roles of genes in pathways. Numerous functional genomics techniques (e.g., machine learning) that predict protein function have been developed to address this question. These methods generally build from existing annotations of genes to pathways and thus are often unable to identify additional genes participating in processes that are not already well studied. Many of these processes are well studied in some organism, but not necessarily in an investigator's organism of interest. Sequence-based search methods (e.g., BLAST) have been used to transfer such annotation information between organisms. We demonstrate that functional genomics can complement traditional sequence similarity to improve the transfer of gene annotations between organisms. Our method transfers annotations only when functionally appropriate, as determined by genomic data, and can be used with any prediction algorithm to combine transferred gene function knowledge with organism-specific high-throughput data to enable accurate function prediction. We show that diverse state-of-the-art machine learning algorithms leveraging functional knowledge transfer (FKT) dramatically improve their accuracy in predicting gene-pathway membership, particularly for processes with little experimental knowledge in an organism. We also show that our method compares favorably to annotation transfer by sequence similarity. Next, we deploy FKT with a state-of-the-art SVM classifier to predict novel genes for 11,000 biological processes across six diverse organisms and expand the coverage of accurate function predictions to processes that are often ignored because of a dearth of annotated genes in an organism. Finally, we perform in vivo experimental investigation in Danio rerio and confirm the regulatory role of our top predicted novel gene, wnt5b, in leftward cell migration during heart development. FKT is immediately applicable to many bioinformatics
International Nuclear Information System (INIS)
Haynie, A.; Min, T.-J.; Luan, L.; Mu, W.; Ketterson, J. B.
2009-01-01
We describe an extension of the total-internal-reflection microscopy technique that permits direct in-plane distance measurements with high accuracy (<10 nm) over a wide range of separations. This high position accuracy arises from the creation of a standing evanescent wave and the ability to sweep the nodal positions (intensity minima of the standing wave) in a controlled manner via both the incident angle and the relative phase of the incoming laser beams. Some control over the vertical resolution is available through the ability to scan the incoming angle and with it the evanescent penetration depth.
International Nuclear Information System (INIS)
Kimberly, David A.; Salice, Christopher J.
2015-01-01
Generally, ecotoxicologists rely on short-term tests that assume populations to be static. Conversely, natural populations may be exposed to the same stressors for many generations, which can alter tolerance to the same (or other) stressors. The objective of this study was to improve our understanding of how multigenerational stressors alter life history traits and stressor tolerance. After continuously exposing Daphnia magna to cadmium for 120 days, we assessed life history traits and conducted a challenge at higher temperature and cadmium concentrations. Predictably, individuals exposed to cadmium showed an overall decrease in reproductive output compared to controls. Interestingly, control D. magna were the most tolerant to novel cadmium, followed by those exposed to high cadmium. Our data suggest that long-term exposure to cadmium alters tolerance traits in a non-monotonic way. Because we observed effects after one generation removed from cadmium, transgenerational effects may be possible as a result of multigenerational exposure. - Highlights: • Daphnia magna exposed to cadmium for 120 days. • D. magna exposed to cadmium had decreased reproductive output. • Control D. magna were most tolerant to novel cadmium stress. • Long-term exposure to cadmium alters tolerance traits in a non-monotonic way. • Transgenerational effects observed as a result of multigenerational exposure. - Adverse effects of long-term cadmium exposure persist into cadmium-free conditions, as seen by non-monotonic responses when exposed to novel stress one generation removed.
Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.
2018-05-01
The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the chosen process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
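One requirement of this kind, that the process noise must not violate the monotonicity of the degradation state, can be sketched as a particle-filter prediction step whose noise is drawn from a strictly nonnegative distribution, so no sampled crack length ever decreases. The Paris-type growth law, parameter values, and lognormal noise below are illustrative assumptions, not the paper's optimal formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, c=1e-3, m=1.3):
    """One prediction step: deterministic Paris-type crack growth plus a
    nonnegative (lognormal) process-noise increment, keeping every sampled
    trajectory monotonically increasing."""
    growth = c * particles ** m                                   # deterministic increment
    noise = rng.lognormal(mean=-4.0, sigma=0.5, size=particles.shape)  # > 0 by construction
    return particles + growth + noise

def update_weights(particles, weights, measurement, sigma=0.5):
    """Correction step: reweight particles by a Gaussian likelihood
    of the observed crack length, then renormalize."""
    lik = np.exp(-0.5 * ((particles - measurement) / sigma) ** 2)
    w = weights * lik
    return w / w.sum()
```

A Gaussian process noise, by contrast, would allow negative increments and thus sampled "healing" of the crack, which is the kind of inadequacy the paper warns against.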
High-accuracy dosimetry study for intensity-modulated radiation therapy(IMRT) commissioning
International Nuclear Information System (INIS)
Jeong, Hae Sun
2010-02-01
Intensity-modulated radiation therapy (IMRT), an advanced modality of high-precision radiotherapy, allows for an increase in dose to the tumor volume without increasing the dose to nearby critical organs. In order to successfully achieve the treatment, intensive dosimetry with accurate dose verification is necessary. Dosimetry for IMRT, however, is a challenging task due to dosimetrically unfavorable phenomena such as dramatic changes of the dose at the field boundaries, dis-equilibrium of the electrons, non-uniformity between the detector and the phantom materials, and distortion of scanner-read doses. In the present study, therefore, a LEGO-type multi-purpose dosimetry phantom was developed and used for studies on dose measurements and correction. Phantom materials for muscle, fat, bone, and lung tissue were selected after considering mass density, atomic composition, effective atomic number, and photon interaction coefficients. The phantom also includes dosimeter holders for several different types of detectors including films, which accommodates the construction of different designs of phantoms as necessary. In order to evaluate its performance, the developed phantom was tested by measuring the point dose and the percent depth dose (PDD) for small fields under several heterogeneous conditions. However, the measurements with the two types of dosimeter did not agree well for field sizes less than 1 x 1 cm² in muscle and bone, and less than 3 x 3 cm² in air cavity. Thus, it was recognized that several studies on small-field dosimetry and correction methods for the calculation with a PMCEPT code are needed. The under-estimated values from the ion chamber were corrected with a convolution method employed to eliminate the volume effect of the chamber. As a result, the discrepancies between the EBT film and the ion chamber measurements were significantly decreased, from 14% to 1% (1 x 1 cm²), 10% to 1% (0.7 x 0.7 cm²), and 42% to 7% (0.5 x 0
Monotonicity properties of keff with shape change and with nesting
International Nuclear Information System (INIS)
Arzhanov, V.
2002-01-01
It was found that, contrary to expectations based on physical intuition, keff can both increase and decrease when changing the shape of an initially regular critical system, while preserving its volume. Physical intuition would only allow for a decrease of keff when the surface/volume ratio increases. The unexpected behaviour of increasing keff was found through numerical investigation. For a convincing demonstration of the possibility of the non-monotonic behaviour, a simple geometrical proof was constructed. This latter proof, in turn, is based on the assumption that keff can only increase (or stay constant) in the case of nesting, i.e. when adding extra volume to a system. Since we found no formal proof of the nesting theorem for the general case, we close the paper with a simple formal proof of the monotonic behaviour of keff under nesting.
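The nesting monotonicity invoked above can be illustrated in the simplest setting. The sketch below is not from the paper: it uses the elementary one-group diffusion result keff = k∞ / (1 + L²B²) for a bare slab with geometric buckling B = π/a, where the material constants k∞ and L² are assumed example values. Growing the slab width a (nesting) shrinks the buckling, so keff can only increase:

```python
import math

K_INF = 1.3   # assumed infinite-medium multiplication factor
L2 = 4.0      # assumed diffusion area, cm^2

def k_eff(a_cm):
    """One-group bare-slab k_eff with buckling B = pi / a."""
    buckling_sq = (math.pi / a_cm) ** 2
    return K_INF / (1.0 + L2 * buckling_sq)

widths = [10, 20, 40, 80, 160]                # nested slabs, cm
ks = [k_eff(a) for a in widths]
assert all(k1 < k2 for k1, k2 in zip(ks, ks[1:]))  # monotone under nesting
print([round(k, 4) for k in ks])
```

The paper's point is that this intuition holds for nesting but not for volume-preserving shape changes.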
A Hybrid Approach to Proving Memory Reference Monotonicity
Oancea, Cosmin E.
2013-01-01
Array references indexed by non-linear expressions or subscript arrays represent a major obstacle to compiler analysis and to automatic parallelization. Most previously proposed solutions either enhance the static analysis repertoire to recognize more patterns, to infer array-value properties, and to refine the mathematical support, or apply expensive run-time analysis of memory reference traces to disambiguate these accesses. This paper presents an automated solution based on static construction of access summaries, in which the reference non-linearity problem can be solved for a large number of reference patterns by extracting arbitrarily-shaped predicates that can (in)validate the reference monotonicity property and thus (dis)prove loop independence. Experiments on six benchmarks show that our general technique for dynamic validation of the monotonicity property can cover a large class of codes, incurs minimal run-time overhead and obtains good speedups. © 2013 Springer-Verlag.
Rational functions with maximal radius of absolute monotonicity
Loczi, Lajos
2014-05-19
We study the radius of absolute monotonicity R of rational functions with numerator and denominator of degree s that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with 2 or 3 parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge-Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.
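As a concrete instance of the quantity studied here, the sketch below numerically checks the radius of absolute monotonicity for an assumed example, the implicit midpoint stability function R(z) = (1+z/2)/(1−z/2) (s = 1, p = 2), whose radius is 2. Since R(z) = −1 + 2/(1−z/2), its derivatives have a closed form, so nonnegativity of R and its derivatives on [−r, 0] can be sampled directly:

```python
import math

# Radius of absolute monotonicity: the largest r >= 0 such that R and
# all its derivatives are nonnegative on [-r, 0].  Assumed example:
# R(z) = (1+z/2)/(1-z/2) = -1 + 2/(1-z/2), for which the radius is 2.
def R_deriv(k, x):
    """k-th derivative of R at z = x (closed form for this R)."""
    if k == 0:
        return (1 + x / 2) / (1 - x / 2)
    return 2 * math.factorial(k) * 0.5**k / (1 - x / 2) ** (k + 1)

def abs_monotone_on(r, max_order=8, samples=41):
    """Sampled check of R^(k) >= 0 on [-r, 0] for k = 0..max_order."""
    xs = [-r * i / (samples - 1) for i in range(samples)]
    return all(R_deriv(k, x) >= -1e-12
               for k in range(max_order + 1) for x in xs)

assert abs_monotone_on(2.0)        # absolutely monotone up to r = 2
assert not abs_monotone_on(2.2)    # R(-2.2) < 0, so the radius is < 2.2
```

The paper works with exact algebraic constants; a sampled check like this is only a plausibility test.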
Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function
Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng
2008-01-01
The function 1/x² − e^(−x)/(1 − e^(−x))² for x > 0 is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function t/(e^(at) − e^((a−1)t)) for a…
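The claimed monotonicity can be checked numerically; the grid check below is a sanity test, not a proof (the limit 1/12 at 0+ is a standard expansion, not stated in the abstract):

```python
import math

# Sample f(x) = 1/x^2 - e^(-x)/(1 - e^(-x))^2 on a grid and verify
# that the values are strictly decreasing, consistent with the claim.
def f(x):
    return 1 / x**2 - math.exp(-x) / (1 - math.exp(-x)) ** 2

xs = [0.1 * k for k in range(1, 101)]           # grid on (0, 10]
ys = [f(x) for x in xs]
assert all(a > b for a, b in zip(ys, ys[1:]))   # strictly decreasing
assert abs(ys[0] - 1 / 12) < 1e-2               # f(x) -> 1/12 as x -> 0+
```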
Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data
Directory of Open Access Journals (Sweden)
Xueqin Zhou
2017-01-01
This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by the method of maximum likelihood and the density function is estimated by the kernel method. A set of simulations was conducted, which shows that the estimates perform well.
Sampling from a Discrete Distribution While Preserving Monotonicity.
1982-02-01
in a table beforehand, this procedure, known as the inverse transform method, requires n storage spaces and EX comparisons on average, which may prove...limitations that deserve attention: a. In general, the alias method does not preserve a monotone relationship between U and X as does the inverse transform method...uses the inverse transform approach but with more information computed beforehand, as in the alias method. The proposed method is not new having been
On a strong law of large numbers for monotone measures
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.
2013-01-01
Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf
International Nuclear Information System (INIS)
Abgrall, Remi; Mezine, Mohamed
2003-01-01
The aim of this paper is to construct upwind residual distribution schemes for the time accurate solution of hyperbolic conservation laws. To do so, we evaluate a space-time fluctuation based on a space-time approximation of the solution and develop new residual distribution schemes which are extensions of classical steady upwind residual distribution schemes. This method has been applied to the solution of the scalar advection equation and to the solution of the compressible Euler equations, both in two space dimensions. The first version of the scheme is shown to be, at least in its first order version, unconditionally energy stable and possibly conditionally monotonicity preserving. Using an idea of Csik et al. [Space-time residual distribution schemes for hyperbolic conservation laws, 15th AIAA Computational Fluid Dynamics Conference, Anaheim, CA, USA, AIAA 2001-2617, June 2001], we modify the formulation to end up with a scheme that is unconditionally energy stable and unconditionally monotonicity preserving. Several numerical examples are shown to demonstrate the stability and accuracy of the method.
Analysis of the plasmodium falciparum proteome by high-accuracy mass spectrometry
DEFF Research Database (Denmark)
Lasonder, Edwin; Ishihama, Yasushi; Andersen, Jens S
2002-01-01
-accuracy (average deviation less than 0.02 Da at 1,000 Da) mass spectrometric proteome analysis of selected stages of the human malaria parasite Plasmodium falciparum. The analysis revealed 1,289 proteins of which 714 proteins were identified in asexual blood stages, 931 in gametocytes and 645 in gametes. The last...
High-accuracy interferometric measurements of flatness and parallelism of a step gauge
CSIR Research Space (South Africa)
Kruger, OA
2001-01-01
The most commonly used method in the calibration of step gauges is the coordinate measuring machine (CMM), equipped with a laser interferometer for the highest accuracy. This paper describes a modification to a length-bar measuring machine...
[Accuracy of placenta accreta prenatal diagnosis by ultrasound and MRI in a high-risk population].
Daney de Marcillac, F; Molière, S; Pinton, A; Weingertner, A-S; Fritz, G; Viville, B; Roedlich, M-N; Gaudineau, A; Sananes, N; Favre, R; Nisand, I; Langer, B
2016-02-01
Main objective was to compare the accuracy of ultrasonography and MRI for antenatal diagnosis of placenta accreta. Secondary objectives were to specify the most common sonographic and MRI signs associated with the diagnosis of placenta accreta. This retrospective study used data collected from all potential cases of placenta accreta (patients with an anterior placenta praevia and a history of scarred uterus) admitted from 01/2010 to 12/2014 in a level III maternity unit in Strasbourg, France. High-risk patients underwent antenatal ultrasonography and MRI. Sonographic signs registered were: abnormal placental lacunae, increased vascularity on color Doppler, absence of the retroplacental clear space, and interrupted bladder line. MRI signs registered were: abnormal uterine bulging, intraplacental bands of low signal intensity on T2-weighted images, increased vascularity, heterogeneous signal of the placenta on T2-weighted images, interrupted bladder line, and protrusion of the placenta into the cervix. Diagnosis of placenta accreta was confirmed histologically after hysterectomy or clinically in case of successful conservative treatment. Twenty-two potential cases of placenta accreta were referred to our center and underwent both ultrasonography and MRI. All cases of placenta accreta had a placenta praevia associated with a history of scarred uterus. Sensitivity and specificity were, respectively, 0.92 and 0.67 for ultrasonography, and 0.84 and 0.78 for MRI, without significant difference (p>0.05). The most relevant signs associated with the diagnosis of placenta accreta on ultrasonography were increased vascularity on color Doppler (sensitivity 0.85/specificity 0.78), abnormal placental lacunae (sensitivity 0.92/specificity 0.55) and loss of the retroplacental clear space (sensitivity 0.76/specificity 1.0). The most relevant signs on MRI were: abnormal uterine bulging (sensitivity 0.92/specificity 0.89), dark intraplacental bands on T2-weighted images (sensitivity 0.83/specificity 0.80) or
International Nuclear Information System (INIS)
Iannicelli, Elsa; Di Renzo, Sara; Ferri, Mario; Pilozzi, Emanuela; Di Girolamo, Marco; Sapori, Alessandra; Ziparo, Vincenzo; David, Vincenzo
2014-01-01
To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. For CRM involvement prediction and the N staging, the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement
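The diagnostic figures reported here are standard confusion-matrix quantities. The sketch below shows how they are computed; the counts are hypothetical, chosen only to be consistent with the reported CRM prediction rates (they are not the study's raw data):

```python
# Compute the reported diagnostic metrics from a 2x2 confusion matrix.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for 73 patients, consistent with the CRM figures
# (sensitivity 89.5%, specificity 96.3%, accuracy 94.5%).
m = diagnostic_metrics(tp=17, fp=2, fn=2, tn=52)
print({k: round(v, 3) for k, v in m.items()})
```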
Energy Technology Data Exchange (ETDEWEB)
Iannicelli, Elsa; Di Renzo, Sara [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ferri, Mario [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Pilozzi, Emanuela [Department of Clinical and Molecular Sciences, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Di Girolamo, Marco; Sapori, Alessandra [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ziparo, Vincenzo [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); David, Vincenzo [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy)
2014-07-01
To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. For CRM involvement prediction and the N staging, the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.
Directory of Open Access Journals (Sweden)
Feng Qi
2014-10-01
The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimating lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.
SFOL Pulse: A High Accuracy DME Pulse for Alternative Aircraft Position and Navigation
Directory of Open Access Journals (Sweden)
Euiho Kim
2017-09-01
In the Federal Aviation Administration's (FAA) performance-based navigation strategy announced in 2016, the FAA stated that it would retain and expand the Distance Measuring Equipment (DME) infrastructure to ensure resilient aircraft navigation capability in the event of a Global Navigation Satellite System (GNSS) outage. However, the main drawback of the DME as a GNSS backup system is that it requires a significant expansion of the current DME ground infrastructure due to its poor distance measuring accuracy, over 100 m. This paper introduces a method to improve DME distance measuring accuracy by using a new DME pulse shape. The proposed pulse shape was developed using genetic algorithms and is less susceptible to multipath effects, so that the ranging error is reduced by 36.0–77.3% when compared to the Gaussian and Smoothed Concave Polygon DME pulses, depending on the noise environment.
Automatic J–A Model Parameter Tuning Algorithm for High Accuracy Inrush Current Simulation
Directory of Open Access Journals (Sweden)
Xishan Wen
2017-04-01
Inrush current simulation plays an important role in many tasks of the power system, such as power transformer protection. However, the accuracy of inrush current simulation can hardly be ensured. In this paper, a Jiles–Atherton (J–A) theory based model is proposed to simulate the inrush current of power transformers. The characteristics of the inrush current curve are analyzed, and results show that the entire inrush current curve can be well characterized by the crest values of the first two cycles. With comprehensive consideration of both the features of the inrush current curve and the J–A parameters, an automatic J–A parameter estimation algorithm is proposed. The proposed algorithm obtains more reasonable J–A parameters, which improve the accuracy of the simulation. Experimental results have verified the efficiency of the proposed algorithm.
Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori
2011-01-01
In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degree-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during couch translations and rotations about each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of only the 6D couch and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
Thermal Stability of Magnetic Compass Sensor for High Accuracy Positioning Applications
Van-Tang PHAM; Dinh-Chinh NGUYEN; Quang-Huy TRAN; Duc-Trinh CHU; Duc-Tan TRAN
2015-01-01
Magnetic compass sensors are used for angle measurement in a wide range of applications such as positioning, robotics, and landslide monitoring. However, one of the main phenomena that affects the accuracy of a magnetic compass sensor is temperature. This paper presents two thermal stability schemes for improving the performance of a magnetic compass sensor. The first scheme uses a feedforward structure to adjust the angle output of the compass sensor to adapt to variations of the temperature. The se...
A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation
Directory of Open Access Journals (Sweden)
Jinsong Hu
2013-01-01
We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme, which has theoretical accuracy of O(τ² + h⁴). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinity norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.
New perspectives for high accuracy SLR with second generation geodesic satellites
Lund, Glenn
1993-01-01
This paper reports on the accuracy limitations imposed by geodesic satellite signatures, and on the potential for achieving millimetric performances by means of alternative satellite concepts and an optimized 2-color system tradeoff. Long distance laser ranging, when performed between a ground (emitter/receiver) station and a distant geodesic satellite, is now reputed to enable short arc trajectory determinations to be achieved with an accuracy of 1 to 2 centimeters. This state-of-the-art accuracy is limited principally by the uncertainties inherent to single-color atmospheric path length correction. Motivated by the study of phenomena such as postglacial rebound, and the detailed analysis of small-scale volcanic and strain deformations, the drive towards millimetric accuracies will inevitably be felt. With the advent of short pulse (less than 50 ps) dual wavelength ranging, combined with adequate detection equipment (such as a fast-scanning streak camera or ultra-fast solid-state detectors), the atmospheric uncertainty could potentially be reduced to the level of a few millimeters, thus exposing other less significant error contributions, of which by far the most significant will then be the morphology of the retroreflector satellites themselves. Existing geodesic satellites are simply dense spheres, several tens of centimeters in diameter, encrusted with a large number (426 in the case of LAGEOS) of small cube-corner reflectors. A single incident pulse thus results in a significant number of randomly phased, quasi-simultaneous return pulses. These combine coherently at the receiver to produce a convolved interference waveform which cannot, on a shot-to-shot basis, be accurately and unambiguously correlated to the satellite center of mass. This paper proposes alternative geodesic satellite concepts, based on the use of a very small number of cube-corner retroreflectors, in which the above difficulties are eliminated while ensuring, for a given emitted pulse, the return
Directory of Open Access Journals (Sweden)
Chowdhury Mohammad SR
2000-01-01
Results are obtained on existence theorems of generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators in both compact and non-compact settings. We shall use the concept of escaping sequences introduced by Border (Fixed Point Theorems with Applications to Economics and Game Theory, Cambridge University Press, Cambridge, 1985) to obtain results in non-compact settings. Existence theorems on non-compact generalized bi-complementarity problems for quasi-semi-monotone and bi-quasi-semi-monotone operators are also obtained. Moreover, as applications of some results of this paper on generalized bi-quasi-variational inequalities, we shall obtain existence of solutions for some kinds of minimization problems with quasi-semi-monotone and bi-quasi-semi-monotone operators.
A high-accuracy optical linear algebra processor for finite element applications
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
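Multiplication by digital convolution, mentioned above, can be sketched in software: digit-encode each operand, convolve the digit sequences, then resolve carries. This is an illustrative sketch of the encoding idea only, not the report's optical implementation:

```python
def digits(n, base=10):
    """Digit-encode a nonnegative integer, least-significant first."""
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(d)
    return out or [0]

def conv_multiply(a, b, base=10):
    """Multiply by convolving digit sequences, then resolving carries."""
    da, db = digits(a, base), digits(b, base)
    conv = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):            # discrete convolution
        for j, y in enumerate(db):
            conv[i + j] += x * y
    carry, result = 0, 0
    for k, c in enumerate(conv):          # carry resolution
        carry += c
        carry, digit = divmod(carry, base)
        result += digit * base**k
    return result + carry * base**len(conv)

assert conv_multiply(1234, 5678) == 1234 * 5678
```

In the digitally encoded optical scheme, each digit occupies a separate channel, so the convolution sums stay within the processor's limited dynamic range while the final carry resolution restores full precision.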
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.
Zhu, Xiangbin; Qiu, Huiling
2016-01-01
Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is insufficient in some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method which takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.
DEFF Research Database (Denmark)
Zhao, Ying; Pang, Xiaodan; Deng, Lei
2011-01-01
A novel approach for broadband microwave frequency measurement by employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, conventional frequency-to-power mapping technique is developed by performing a...... 10−3 relative error. This high accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications....
Directory of Open Access Journals (Sweden)
Mark Lyons
2013-06-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player's achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the non-expert players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of player's achievement goal indicators. Future research is required to explore the effects of fatigue on
High accuracy prediction of beta-turns and their types using propensities and multiple alignments.
Fuchs, Patrick F J; Alix, Alain J P
2005-06-01
We have developed a method that predicts both the presence and the type of beta-turns, using a straightforward approach based on propensities and multiple alignments. The propensities were calculated classically, but the way of using them for prediction was completely new: starting from a tetrapeptide sequence on which one wants to evaluate the presence of a beta-turn, the propensity for a given residue is modified by taking into account all the residues present in the multiple alignment at this position. The evaluation of a score is then done by weighting these propensities by the use of position-specific score matrices generated by PSI-BLAST. The introduction of secondary structure information predicted by PSIPRED or SSPRO2, as well as taking into account the flanking residues around the tetrapeptide, improved the accuracy greatly. The latter, evaluated on a database of 426 reference proteins (previously used in other studies) by a sevenfold cross-validation, gave very good results with a Matthews correlation coefficient (MCC) of 0.42 and an overall prediction accuracy of 74.8%; this places our method among the best ones. A jackknife test was also done, which gave results within the same range. This shows that it is possible to reach neural network accuracy with considerably less computational cost and complexity. Furthermore, propensities remain excellent descriptors of amino acid tendencies to belong to beta-turns, which can be useful for peptide or protein engineering and design. For beta-turn type prediction, we reached the best accuracy ever published in terms of MCC (except for the irregular type IV), in the range of 0.25-0.30 for types I, II, and I' and 0.13-0.15 for types VIII, II', and IV. To our knowledge, our method is the only one available on the Web that predicts types I' and II'. The accuracy evaluated on two larger databases of 547 and 823 proteins was not improved significantly. All of this was implemented into a Web server called COUDES (French acronym
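The Matthews correlation coefficient used as the headline metric above is computed from a 2x2 confusion matrix. A minimal sketch with invented counts (not the study's data):

```python
import math

# MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
# It stays informative on imbalanced classes such as turn/non-turn
# residues, unlike raw accuracy.
def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Invented counts for illustration only.
print(round(mcc(tp=80, tn=300, fp=60, fn=40), 2))
```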
A non-parametric test for partial monotonicity in multiple regression
van Beek, M.; Daniëls, H.A.M.
Partial positive (negative) monotonicity in a dataset is the property that an increase in an independent variable, ceteris paribus, generates an increase (decrease) in the dependent variable. A test for partial monotonicity in datasets could (1) increase model performance if monotonicity may be
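A direct check of partial monotonicity in a small dataset can be sketched as follows. The variable layout and data are invented, and this brute-force pair count is only an illustration of the property being tested; the paper's actual non-parametric test is more elaborate:

```python
from itertools import combinations

# Rows are (x1, x2, y).  Partial positive monotonicity in x1 means:
# among row pairs equal in every other independent variable (ceteris
# paribus), a larger x1 never comes with a smaller y.
rows = [
    (1, 5, 2.0), (2, 5, 2.4), (3, 5, 2.9),   # y increases with x1
    (1, 8, 3.1), (2, 8, 3.5), (3, 8, 3.4),   # one violation at x1 = 3
]

def violations(rows, var=0, resp=2):
    """Count ceteris-paribus pairs where increasing `var` decreases y."""
    bad = 0
    for a, b in combinations(rows, 2):
        others = [i for i in range(len(a) - 1) if i != var]
        if all(a[i] == b[i] for i in others):
            lo, hi = sorted((a, b), key=lambda r: r[var])
            if hi[resp] < lo[resp]:
                bad += 1
    return bad

print(violations(rows))   # → 1
```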
In some symmetric spaces monotonicity properties can be reduced to the cone of rearrangements
Czech Academy of Sciences Publication Activity Database
Hudzik, H.; Kaczmarek, R.; Krbec, Miroslav
2016-01-01
Roč. 90, č. 1 (2016), s. 249-261 ISSN 0001-9054 Institutional support: RVO:67985840 Keywords : symmetric spaces * K-monotone symmetric Banach spaces * strict monotonicity * lower local uniform monotonicity Subject RIV: BA - General Mathematics Impact factor: 0.826, year: 2016 http://link.springer.com/article/10.1007%2Fs00010-015-0379-6
International Nuclear Information System (INIS)
Wybranski, Christian; Eberhardt, Benjamin; Fischbach, Katharina; Fischbach, Frank; Walke, Mathias; Hass, Peter; Röhl, Friedrich-Wilhelm; Kosiek, Ortrud; Kaiser, Mandy; Pech, Maciej; Lüdemann, Lutz; Ricke, Jens
2015-01-01
Background and purpose: To evaluate the reconstruction accuracy of brachytherapy (BT) applicator tips in vitro and in vivo in MRI-guided 192Ir-high-dose-rate (HDR)-BT of inoperable liver tumors. Materials and methods: The reconstruction accuracy of plastic BT applicators, visualized by nitinol inserts, was assessed in MRI phantom measurements and in MRI 192Ir-HDR-BT treatment planning datasets of 45 patients employing CT co-registration and vector decomposition. Conspicuity, short-term dislocation, and reconstruction errors were assessed in the clinical data. The clinical effect of applicator reconstruction accuracy was determined in follow-up MRI data. Results: Applicator reconstruction accuracy was 1.6 ± 0.5 mm in the phantom measurements. In the clinical MRI datasets applicator conspicuity was rated good/optimal in ⩾72% of cases. 16/129 applicators showed deviation between the MRI and CT acquisitions that was not time dependent (p > 0.1). Reconstruction accuracy was 5.5 ± 2.8 mm, and the average image co-registration error was 3.1 ± 0.9 mm. Vector decomposition revealed no preferred direction of reconstruction errors. In the follow-up data the deviation between the planned dose distribution and the irradiation effect was 6.9 ± 3.3 mm, matching the mean co-registration error (6.5 ± 2.5 mm; p > 0.1). Conclusion: Applicator reconstruction accuracy in vitro conforms to the AAPM TG 56 standard. Nitinol inserts are feasible for applicator visualization and yield good conspicuity in MRI treatment planning data. No preferred direction of reconstruction errors was found in vivo
Horizontal Positional Accuracy of Google Earth's High-Resolution Imagery Archive
Directory of Open Access Journals (Sweden)
David Potere
2008-12-01
Full Text Available Google Earth now hosts high-resolution imagery that spans twenty percent of the Earth's landmass and more than a third of the human population. This contemporary high-resolution archive represents a significant, rapidly expanding, cost-free and largely unexploited resource for scientific inquiry. To increase the scientific utility of this archive, we address horizontal positional accuracy (georegistration) by comparing Google Earth with Landsat GeoCover scenes over a global sample of 436 control points located in 109 cities worldwide. Landsat GeoCover is an orthorectified product with known absolute positional accuracy of less than 50 meters root-mean-squared error (RMSE). Relative to Landsat GeoCover, the 436 Google Earth control points have a positional accuracy of 39.7 meters RMSE (error magnitudes range from 0.4 to 171.6 meters). The control points derived from satellite imagery have an accuracy of 22.8 meters RMSE, which is significantly more accurate than the 48 control points based on aerial photography (41.3 meters RMSE; t-test p-value < 0.01). The accuracy of control points in more-developed countries is 24.1 meters RMSE, which is significantly more accurate than the control points in developing countries (44.4 meters RMSE; t-test p-value < 0.01). These findings indicate that Google Earth high-resolution imagery has a horizontal positional accuracy that is sufficient for assessing moderate-resolution remote sensing products across most of the world's peri-urban areas.
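The headline figures above are root-mean-squared errors over 2-D control-point offsets. A minimal sketch of that computation (the offset values below are hypothetical, not the study's data):

```python
import math

def rmse_2d(offsets):
    """Root-mean-squared error of 2-D positional offsets (dx, dy), in meters."""
    return math.sqrt(sum(dx * dx + dy * dy for dx, dy in offsets) / len(offsets))

# Hypothetical offsets (meters) between Google Earth points and a reference.
satellite_pts = [(3.0, 4.0), (6.0, 8.0), (0.0, 5.0)]

print(round(rmse_2d(satellite_pts), 2))  # → 7.07
```

A single point offset by (3, 4) m contributes a 5 m error magnitude, so RMSE reduces to the familiar Euclidean distance in the one-point case.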
High accuracy mapping with cartographic assessment for a fixed-wing remotely piloted aircraft system
Alves Júnior, Leomar Rufino; Ferreira, Manuel Eduardo; Côrtes, João Batista Ramos; de Castro Jorge, Lúcio André
2018-01-01
The lack of updated maps on large scale representations has encouraged the use of remotely piloted aircraft systems (RPAS) to generate maps for a wide range of professionals. However, some questions arise: do the orthomosaics generated by these systems have the cartographic precision required to use them? Which problems can be identified in stitching orthophotos to generate orthomosaics? To answer these questions, an aerophotogrammetric survey was conducted in an environmental conservation unit in the city of Goiânia. The flight plan was set up using the E-motion software, provided by Sensefly, a Swiss manufacturer of the RPAS Swinglet CAM used in this work. The camera installed in the RPAS was the Canon IXUS 220 HS, with a 12.1-megapixel, 1/2.3-type complementary metal oxide semiconductor sensor (4000 × 3000 pixels) and horizontal and vertical pixel sizes of 1.54 μm. Using the orthophotos, four orthomosaics were generated in the Pix4D mapper software. The first orthomosaic was generated without using the control points. The other three mosaics were generated using 4, 8, and 16 premarked ground control points. To check the precision and accuracy of the orthomosaics, 46 premarked targets were uniformly distributed in the block. The three-dimensional (3-D) coordinates of the premarked targets were read on the orthomosaic and compared with the coordinates obtained by the geodetic survey real-time kinematic positioning method using the global navigation satellite system receiver signals. The cartographic accuracy standard was evaluated by discrepancies between these coordinates. The bias was analyzed by the Student's t test and the accuracy by the chi-square probability considering the orthomosaic on a scale of 1 ∶ 250, in which 90% of the points tested must have a planimetric error of control points the scale was 10-fold smaller (1 ∶ 3000).
High-accuracy resolver-to-digital conversion via phase locked loop based on PID controller
Li, Yaoling; Wu, Zhong
2018-03-01
The problem of resolver-to-digital conversion (RDC) is transformed into the problem of angle tracking control, and a phase locked loop (PLL) method based on PID controller is proposed in this paper. This controller comprises a typical PI controller plus an incomplete differential which can avoid the amplification of higher-frequency noise components by filtering the phase detection error with a low-pass filter. Compared with conventional ones, the proposed PLL method makes the converter a system of type III and thus the conversion accuracy can be improved. Experimental results demonstrate the effectiveness of the proposed method.
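The "incomplete differential" the abstract describes is a D term passed through a first-order low-pass filter so that high-frequency noise is not amplified. A minimal sketch of such a controller in a toy angle-tracking loop (gains, filter constant, and plant are illustrative assumptions, not values from the paper):

```python
class PIDIncompleteD:
    """PI controller plus an 'incomplete differential': the raw derivative of
    the error is low-pass filtered before being added to the control output.
    All gains here are illustrative, not taken from the paper."""
    def __init__(self, kp, ki, kd, alpha, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = alpha          # low-pass smoothing factor, 0 < alpha < 1
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.d_filt = 0.0

    def update(self, err):
        self.integral += err * self.dt
        d_raw = (err - self.prev_err) / self.dt
        # incomplete differential: filter the raw derivative
        self.d_filt = self.alpha * self.d_filt + (1.0 - self.alpha) * d_raw
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * self.d_filt

# Toy tracking loop: the angle estimate chases a fixed resolver angle.
target, estimate, dt = 1.0, 0.0, 1e-3
pid = PIDIncompleteD(kp=50.0, ki=200.0, kd=0.01, alpha=0.9, dt=dt)
for _ in range(2000):
    estimate += pid.update(target - estimate) * dt
assert abs(target - estimate) < 1e-2  # the loop has locked onto the target
```

The filtered D term is what distinguishes this from a plain PI loop: it adds phase lead (raising the loop type, as the abstract notes) without letting measurement noise dominate the output.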
KLEIN: Coulomb functions for real lambda and positive energy to high accuracy
International Nuclear Information System (INIS)
Barnett, A.R.
1981-01-01
KLEIN computes relativistic Schroedinger (Klein-Gordon) equation solutions, i.e. Coulomb functions for real λ > −1: F_λ(η,x), G_λ(η,x), F′_λ(η,x) and G′_λ(η,x), for real κ > 0 and real η in the range −10⁴ to 10⁴. Hence it is also suitable for Bessel and spherical Bessel functions. Accuracies are in the range 10⁻¹⁴-10⁻¹⁶ in the oscillating region, and approximately 10⁻³⁰ on an extended-precision compiler. The program is suitable for generating Klein-Gordon wavefunctions for matching in pion and kaon physics. (orig.)
Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing
2018-03-01
In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within a pitch to get N sets of elemental images. The computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum-modulus-difference (SMD) blur metric is applied to these slice images to obtain the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
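A focus metric of the SMD family scores a reconstructed slice by summing absolute differences between neighboring pixels; the in-focus depth is the slice that maximizes it. A minimal sketch under that assumption (the exact variant used in the paper may differ):

```python
def smd_focus_measure(img):
    """Sum-modulus-difference style focus metric on a 2-D grayscale image
    (list of rows): sum of absolute horizontal and vertical neighbor
    differences. Sharper (in-focus) slices score higher. Illustrative sketch."""
    h, w = len(img), len(img[0])
    horiz = sum(abs(img[y][x] - img[y][x - 1]) for y in range(h) for x in range(1, w))
    vert = sum(abs(img[y][x] - img[y - 1][x]) for y in range(1, h) for x in range(w))
    return horiz + vert

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]     # high-contrast "slice"
blurry = [[120, 130, 120], [130, 120, 130], [120, 130, 120]]
assert smd_focus_measure(sharp) > smd_focus_measure(blurry)
```

Running the metric over the stack of MALT reconstruction slices and taking the argmax of the score then yields the estimated depth of each region.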
Effect of meal glycemic load and caffeine consumption on prolonged monotonous driving performance.
Bragg, Christopher; Desbrow, Ben; Hall, Susan; Irwin, Christopher
2017-11-01
Monotonous driving involves low levels of stimulation and high levels of repetition and is essentially an exercise in sustained attention and vigilance. The aim of this study was to determine the effects of consuming a high or low glycemic load meal on prolonged monotonous driving performance. The effect of consuming caffeine with a high glycemic load meal was also examined. Ten healthy, non-diabetic participants (7 males, age 51 ± 7 yrs, mean ± SD) completed a repeated-measures investigation involving 3 experimental trials. On separate occasions, participants were provided one of three treatments prior to undertaking a 90-min computer-based simulated drive. The 3 treatment conditions involved consuming: (1) a low glycemic load meal + placebo capsules (LGL), (2) a high glycemic load meal + placebo capsules (HGL) and (3) a high glycemic load meal + caffeine capsules (3 mg·kg⁻¹ body weight) (CAF). Measures of driving performance included lateral (standard deviation of lane position (SDLP), average lane position (AVLP), total number of lane crossings (LC)) and longitudinal (average speed (AVSP) and standard deviation of speed (SDSP)) vehicle control parameters. Blood glucose levels, plasma caffeine concentrations and subjective ratings of sleepiness, alertness, mood, hunger and simulator sickness were also collected throughout each trial. No difference in either lateral or longitudinal vehicle control parameters or subjective ratings was observed between HGL and LGL treatments. A significant reduction in SDLP (0.36 ± 0.20 m vs 0.41 ± 0.19 m, p = 0.004) and LC (34.4 ± 31.4 vs 56.7 ± 31.5, p = 0.018) was observed in the CAF trial compared to the HGL trial. However, no differences in AVLP, AVSP and SDSP or subjective ratings were detected between these two trials (p > 0.05). Altering the glycemic load of a breakfast meal had no effect on measures of monotonous driving performance in non-diabetic adults. Individuals planning to undertake a prolonged monotonous drive following consumption of a
Non-monotonic behaviour in relaxation dynamics of image restoration
International Nuclear Information System (INIS)
Ozeki, Tomoko; Okada, Masato
2003-01-01
We have investigated the relaxation dynamics of image restoration through a Bayesian approach. The relaxation dynamics is much faster at zero temperature than at the Nishimori temperature where the pixel-wise error rate is minimized in equilibrium. At low temperature, we observed non-monotonic development of the overlap. We suggest that the optimal performance is realized through premature termination in the relaxation processes in the case of the infinite-range model. We also performed Markov chain Monte Carlo simulations to clarify the underlying mechanism of non-trivial behaviour at low temperature by checking the local field distributions of each pixel
An iterative method for nonlinear demiclosed monotone-type operators
International Nuclear Information System (INIS)
Chidume, C.E.
1991-01-01
It is proved that a well-known fixed point iteration scheme which has been used for approximating solutions of certain nonlinear demiclosed monotone-type operator equations in Hilbert spaces remains applicable in real Banach spaces with property (U, α, m+1, m). These Banach spaces include the L^p-spaces, p ∈ [2, ∞]. An application of our results to the approximation of a solution of a certain linear operator equation in this general setting is also given. (author). 19 refs
Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping
Directory of Open Access Journals (Sweden)
Gangchen Hua
2017-01-01
Full Text Available In this study we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM using affine-invariant-based geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrect visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, camera’s intrinsic parameters and distortion coefficients are adequate for this method. 3D measuring is not necessary. We use the mechanism of Long-Term Memory and Working Memory (WM to manage the memory. Only a limited size of the WM is used for loop-closure detection; therefore the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. Our proposed method results can effectively correct the typical false-positive localization of previous methods, thus gaining better recall ratios and better precision.
The use of high accuracy NAA for the certification of NIST Standard Reference Materials
International Nuclear Information System (INIS)
Becker, D.A.; Greenberg, R.R.; Stone, S.
1991-01-01
Neutron activation analysis (NAA) is only one of many analytical techniques used at the National Institute of Standards and Technology (NIST) for the certification of NIST Standard Reference Materials (SRMs). We compete daily against all of the other available analytical techniques in terms of accuracy, precision, and the cost required to obtain that requisite accuracy and precision. Over the years, the authors have found that NAA can and does compete favorably with these other techniques because of its unique capabilities for redundancy and quality assurance. Good examples are the two new NIST leaf SRMs, Apple Leaves (SRM 1515) and Peach Leaves (SRM 1547). INAA was used to measure the homogeneity of 12 elements in 15 samples of each material at the 100 mg sample size. In addition, instrumental and radiochemical NAA combined for 27 elemental determinations, out of a total of 54 elemental determinations made on each material with all NIST techniques combined. This paper describes the NIST NAA procedures used in these analyses, the quality assurance techniques employed, and the analytical results for the 24 elements determined by NAA in these new botanical SRMs. The NAA results are also compared to the final certified values for these SRMs
High-accuracy 3-D modeling of cultural heritage: the digitizing of Donatello's "Maddalena".
Guidi, Gabriele; Beraldin, J Angelo; Atzeni, Carlo
2004-03-01
Three-dimensional digital modeling of Heritage works of art through optical scanners has been demonstrated in recent years with results of exceptional interest. However, the routine application of three-dimensional (3-D) modeling to Heritage conservation still requires the systematic investigation of a number of technical problems. In this paper, the acquisition process of the 3-D digital model of the Maddalena by Donatello, a wooden statue representing one of the major masterpieces of the Italian Renaissance which was swept away by the Florence flood of 1966 and subsequently restored, is described. The paper reports all the steps of the acquisition procedure, from the project planning to the solution of the various problems due to range-camera calibration and to non-optically-cooperative material. Since the scientific focus is centered on the 3-D model's overall dimensional accuracy, a methodology for its quality control is described. Such control has demonstrated how, in some situations, the ICP-based alignment can lead to incorrect results. To circumvent this difficulty we propose an alignment technique based on the fusion of ICP with close-range digital photogrammetry and a non-invasive procedure in order to generate a final accurate model. Finally, detailed results are presented, demonstrating the improvement of the final model and how the proposed sensor fusion ensures a pre-specified level of accuracy.
Vision-based algorithms for high-accuracy measurements in an industrial bakery
Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao
2002-02-01
This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
Hyun, Yil Sik; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo
2013-01-01
Accurate diagnosis of gastric intestinal metaplasia is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing gastric intestinal metaplasia (IM). The aims of the study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Selected 50 cases, taken with HD endoscopy, were sent for a diagnostic inquiry of gastric IM through visual inspection to five experienced and five inexperienced endoscopists. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated for validity of HD endoscopy in diagnosing IM. Interobserver agreement among the experienced endoscopists was "poor" (κ = 0.38) and it was also "poor" (κ = 0.33) among the inexperienced endoscopists. The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable in the diagnosis of IM, all suspicious areas for gastric IM should be considered to be biopsied. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy of gastric IM. PMID:23678267
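The interobserver agreement quoted above (κ = 0.38, κ = 0.33) is a kappa statistic, which corrects raw percent agreement for agreement expected by chance. A minimal sketch of the standard pairwise (Cohen-style) computation on hypothetical ratings; the study's exact pooling across five raters may differ:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement).
    Illustrative implementation on hypothetical data."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(pa[c] * pb[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical IM-present/absent calls by two endoscopists on 10 cases
a = ["IM", "IM", "no", "no", "IM", "no", "IM", "no", "no", "IM"]
b = ["IM", "no", "no", "no", "IM", "IM", "IM", "no", "IM", "IM"]
print(round(cohens_kappa(a, b), 2))  # → 0.4
```

Here raw agreement is 70%, yet κ is only 0.4 once chance agreement (50% for these marginals) is removed, which is why κ values in the 0.3-0.4 range are conventionally read as "poor" to "fair" agreement.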
Thermal Stability of Magnetic Compass Sensor for High Accuracy Positioning Applications
Directory of Open Access Journals (Sweden)
Van-Tang PHAM
2015-12-01
Full Text Available Using magnetic compass sensors for angle measurement has a wide range of applications, such as positioning, robotics, landslide monitoring, etc. However, one of the phenomena that most affects the accuracy of a magnetic compass sensor is temperature. This paper presents two thermal stability schemes for improving the performance of a magnetic compass sensor. The first scheme uses a feedforward structure to adjust the angle output of the compass sensor to the variation of the temperature. The second scheme improves both the temperature working range and the steady-state error performance of the sensor. In this scheme, we try to keep the temperature of the sensor stable at a certain value (e.g. 25 °C) by using a PID (proportional-integral-derivative) controller and a heating/cooling generator. Many experimental scenarios have been implemented to confirm the effectiveness of these solutions.
Mazaheri, Alireza; Ricchiuto, Mario; Nishikawa, Hiroaki
2016-01-01
In this paper, we introduce a new hyperbolic first-order system for general dispersive partial differential equations (PDEs). We then extend the proposed system to general advection-diffusion-dispersion PDEs. We apply the fourth-order RD scheme of Ref. 1 to the proposed hyperbolic system, and solve time-dependent dispersive equations, including the classical two-soliton KdV and a dispersive shock case. We demonstrate that the predicted results, including the gradient and Hessian (second derivative), are in a very good agreement with the exact solutions. We then show that the RD scheme applied to the proposed system accurately captures dispersive shocks without numerical oscillations. We also verify that the solution, gradient and Hessian are predicted with equal order of accuracy.
High-accuracy energy formulas for the attractive two-site Bose-Hubbard model
Ermakov, Igor; Byrnes, Tim; Bogoliubov, Nikolay
2018-02-01
The attractive two-site Bose-Hubbard model is studied within the framework of the analytical solution obtained by the application of the quantum inverse scattering method. The structure of the ground and excited states is analyzed in terms of solutions of Bethe equations, and an approximate solution for the Bethe roots is given. This yields approximate formulas for the ground-state energy and for the first excited-state energy. The obtained formulas work with remarkable precision for a wide range of parameters of the model, and are confirmed numerically. An expansion of the Bethe state vectors into a Fock space is also provided for evaluation of expectation values, although this does not have accuracy similar to that of the energies.
Accuracy and repeatability of positioning of a high-performance lathe for non-circular turning
Directory of Open Access Journals (Sweden)
Majda Paweł
2017-11-01
Full Text Available This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe’s numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, including before and after-correction of the positioning error of the Xs axis.
A method of high accuracy clock synchronization by frequency following with VCXO
International Nuclear Information System (INIS)
Ma Yichao; Wu Jie; Zhang Jie; Song Hongzhi; Kong Yang
2011-01-01
In this paper, the principle of the IEEE 1588 synchronization protocol is analyzed, and the factors that affect the accuracy of synchronization are summarized. Using the hardware timer in a microcontroller, we record the exact time when a packet is sent or received, so synchronization of the distributed clocks can reach 1 μs in this way. Another method to improve the precision of the synchronization is to replace the traditional fixed-frequency crystal of the slave device, which must track the master clock, with an adjustable VCXO. This makes it possible to fine-tune the frequency of the distributed clocks and reduce clock drift, which greatly benefits clock synchronization. A test measurement shows that the synchronization of distributed clocks can be better than 10 ns using this method, which is more accurate than the method realized in software. (authors)
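The abstract does not spell out the IEEE 1588 arithmetic, but the standard two-way timestamp exchange it builds on computes the slave's offset and the path delay as follows (a sketch of the textbook formulas, assuming a symmetric network path):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """IEEE 1588 two-way exchange: master sends Sync at t1, slave receives it
    at t2 (slave clock); slave sends Delay_Req at t3 (slave clock), master
    receives it at t4. Returns (slave clock offset, one-way path delay),
    assuming the path delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: slave clock runs 5 units ahead, path delay is 3 units each way.
t1 = 100.0
t2 = t1 + 3 + 5      # arrival per the slave clock: delay + offset
t3 = 110.0
t4 = t3 + 3 - 5      # arrival per the master clock: delay - offset
print(ptp_offset_delay(t1, t2, t3, t4))  # → (5.0, 3.0)
```

Hardware timestamping, as in the paper, makes t1-t4 accurate to the timer resolution rather than to software interrupt latency, which is where the sub-microsecond accuracy comes from; the VCXO then removes the residual frequency drift between exchanges.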
Experimental quantum control landscapes: Inherent monotonicity and artificial structure
International Nuclear Information System (INIS)
Roslund, Jonathan; Rabitz, Herschel
2009-01-01
Unconstrained searches over quantum control landscapes are theoretically predicted to generally exhibit trap-free monotonic behavior. This paper makes an explicit experimental demonstration of this intrinsic monotonicity for two controlled quantum systems: frequency unfiltered and filtered second-harmonic generation (SHG). For unfiltered SHG, the landscape is randomly sampled and interpolation of the data is found to be devoid of landscape traps up to the level of data noise. In the case of narrow-band-filtered SHG, trajectories are taken on the landscape to reveal a lack of traps. Although the filtered SHG landscape is trap free, it exhibits a rich local structure. A perturbation analysis around the top of these landscapes provides a basis to understand their topology. Despite the inherent trap-free nature of the landscapes, practical constraints placed on the controls can lead to the appearance of artificial structure arising from the resultant forced sampling of the landscape. This circumstance and the likely lack of knowledge about the detailed local landscape structure in most quantum control applications suggests that the a priori identification of globally successful (un)constrained curvilinear control variables may be a challenging task.
Positivity and monotonicity properties of C0-semigroups. Pt. 1
International Nuclear Information System (INIS)
Bratteli, O.; Kishimoto, A.; Robinson, D.W.
1980-01-01
If exp(−tH), exp(−tK) are self-adjoint, positivity-preserving contraction semigroups on a Hilbert space H = L²(X; dμ), we write e^(−tH) ≥ e^(−tK) ≥ 0 (*) whenever exp(−tH) − exp(−tK) is positivity preserving for all t ≥ 0, and then we characterize the class of positive functions f for which (*) always implies e^(−tf(H)) ≥ e^(−tf(K)) ≥ 0. This class consists of the f ∈ C^∞(0, ∞) with (−1)ⁿ f⁽ⁿ⁺¹⁾(x) ≥ 0, x ∈ (0, ∞), n = 0, 1, 2, ... In particular it contains the class of monotone operator functions. Furthermore, if exp(−tH) is L^p(X; dμ)-contractive for all p ∈ [1, ∞] and all t > 0 (or, equivalently, for p = ∞ and t > 0), then exp(−tf(H)) has the same property. Various applications to monotonicity properties of Green's functions are given. (orig.)
Theoretical and experimental study of non-monotonous effects
International Nuclear Information System (INIS)
Delforge, J.
1977-01-01
In recent years, the study of the effects of low dose rates has expanded considerably, especially in connection with current problems concerning the environment and health physics. After precisely defining the different types of non-monotonous effect that may be encountered, the main known experimental results are indicated for each, as well as the principal consequences that may be expected. One example is the case of radiotherapy, where there is a chance of finding irradiation conditions such that the ratio of destructive action on malignant cells to that on healthy cells is significantly improved. In the second part of the report, the appearance of these phenomena, especially at low dose rates, is explained. For this purpose, the theory of transformation systems of P. Delattre is used as a theoretical framework. With the help of a specific example, it is shown that non-monotonous effects are frequently encountered, especially when the overall effect observed is actually the sum of several different elementary effects (e.g. in survival curves, where death may be due to several different causes), or when the objects studied possess inherent kinetics not limited to restoration phenomena alone (e.g. the cellular cycle) [fr]
The Monotonic Lagrangian Grid for Rapid Air-Traffic Evaluation
Kaplan, Carolyn; Dahm, Johann; Oran, Elaine; Alexandrov, Natalia; Boris, Jay
2010-01-01
The Air Traffic Monotonic Lagrangian Grid (ATMLG) is presented as a tool to evaluate new air traffic system concepts. The model, based on an algorithm called the Monotonic Lagrangian Grid (MLG), can quickly sort, track, and update positions of many aircraft, both on the ground (at airports) and in the air. The underlying data structure is based on the MLG, which is used for sorting and ordering positions and other data needed to describe N moving bodies and their interactions. Aircraft that are close to each other in physical space are always near neighbors in the MLG data arrays, resulting in a fast nearest-neighbor interaction algorithm that scales as N. Recent upgrades to ATMLG include adding blank place-holders within the MLG data structure, which makes it possible to dynamically change the MLG size and also improves the quality of the MLG grid. Additional upgrades include adding FAA flight plan data, such as way-points and arrival and departure times from the Enhanced Traffic Management System (ETMS), and combining the MLG with the state-of-the-art strategic and tactical conflict detection and resolution algorithms from the NASA-developed Stratway software. In this paper, we present results from our early efforts to couple ATMLG with the Stratway software, and we demonstrate that it can be used to quickly simulate air traffic flow for a very large ETMS dataset.
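A simple way to realize a 2-D Monotonic Lagrangian Grid is to sort the bodies by one coordinate, band them into rows, and sort each row by the other coordinate; positions that are close in space then sit close in the data arrays. This is a sketch of that construction, not the ATMLG implementation:

```python
def monotonic_lagrangian_grid(points, rows, cols):
    """Arrange rows*cols (x, y) points into a grid where x increases along
    each row and the y-bands increase from row to row: a simple 2-D Monotonic
    Lagrangian Grid construction (sort by y, band into rows, sort rows by x).
    Illustrative sketch only."""
    assert len(points) == rows * cols
    by_y = sorted(points, key=lambda p: p[1])
    return [sorted(by_y[r * cols:(r + 1) * cols]) for r in range(rows)]

import random
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(9)]  # 9 "aircraft"
grid = monotonic_lagrangian_grid(pts, 3, 3)

# x is monotone within each row, so near-neighbor searches can be confined
# to a small window of adjacent grid cells instead of scanning all N bodies.
assert all(grid[r][c][0] <= grid[r][c + 1][0] for r in range(3) for c in range(2))
```

Because the banding step puts every point of row r at a y no larger than any point of row r+1, a body's spatial neighbors are found among a few adjacent rows and columns, which is what gives the O(N) nearest-neighbor interaction cost the abstract describes.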
Energy Technology Data Exchange (ETDEWEB)
Hallstrom, Jason; Ni, Zheng Richard
2018-05-15
This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. A bridge (or "gateway") to direct digital control services was also explored. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor's accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5%, and acquisition accuracy within 1.5% across three orders of magnitude variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft, meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States a
Energy Technology Data Exchange (ETDEWEB)
Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-07-15
Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. The conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected to be a useful tool for the more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of proposed numerical methods of simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used as the large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near-wall resolution (NWR), allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of the power spectrum density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are
Gorroño, Javier; Banks, Andrew C.; Fox, Nigel P.; Underwood, Craig
2017-08-01
Optical earth observation (EO) satellite sensors generally suffer from drifts and biases relative to their pre-launch calibration, caused by launch and/or time in the space environment. This places a severe limitation on the fundamental reliability and accuracy that can be assigned to satellite derived information, and is particularly critical for long time base studies for climate change and enabling interoperability and Analysis Ready Data. The proposed TRUTHS (Traceable Radiometry Underpinning Terrestrial and Helio-Studies) mission is explicitly designed to address this issue through re-calibrating itself directly to a primary standard of the international system of units (SI) in-orbit and then through the extension of this SI-traceability to other sensors through in-flight cross-calibration using a selection of Committee on Earth Observation Satellites (CEOS) recommended test sites. Where the characteristics of the sensor under test allow, this will result in a significant improvement in accuracy. This paper describes a set of tools, algorithms and methodologies that have been developed and used in order to estimate the radiometric uncertainty achievable for an indicative target sensor through in-flight cross-calibration using a well-calibrated hyperspectral SI-traceable reference sensor with observational characteristics such as those of TRUTHS. In this study, the Multi-Spectral Imager (MSI) of Sentinel-2 and the Landsat-8 Operational Land Imager (OLI) are evaluated as examples; however, the analysis is readily translatable to larger-footprint sensors such as the Sentinel-3 Ocean and Land Colour Instrument (OLCI) and the Visible Infrared Imaging Radiometer Suite (VIIRS). This study considers the criticality of the instrumental and observational characteristics on pixel level reflectance factors, within a defined spatial region of interest (ROI) within the target site. It quantifies the main uncertainty contributors in the spectral, spatial, and temporal domains. The resultant tool
Monotonic and fatigue deformation of Ni--W directionally solidified eutectic
International Nuclear Information System (INIS)
Garmong, G.; Williams, J.C.
1975-01-01
Unlike many eutectic composites, the Ni--W eutectic exhibits extensive ductility by slip. Furthermore, its properties may be greatly varied by proper heat treatments. Results of studies of deformation in both monotonic and fatigue loading are reported. During monotonic deformation the fiber/matrix interface acts as a source of dislocations at low strains and an obstacle to matrix slip at higher strains. Deforming the quenched-plus-aged eutectic causes planar matrix slip, with the result that matrix slip bands create stress concentrations in the fibers at low strains. The aged eutectic reaches generally higher stress levels for comparable strains than does the as-quenched eutectic, and the failure strains decrease with increasing aging times. For the composites tested in fatigue, the aged eutectic has better high-stress fatigue resistance than the as-quenched material, but for low-stress, high-cycle fatigue their cycles to failure are nearly the same. However, both crack initiation and crack propagation are different in the two conditions, so the coincidence in high-cycle fatigue is probably fortuitous. The effect of matrix strength on composite performance is not simple, since changes in strength may be accompanied by alterations in slip modes and failure processes. (17 fig) (auth)
The high accuracy data processing system of laser interferometry signals based on MSP430
Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong
2009-07-01
Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interference signals usually carry three errors: a zero-offset error, an unequal-amplitude error and a quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Compensation of the three errors is achieved on the basis of the Heydemann error compensation algorithm. Because the Heydemann model is computationally demanding, an improved algorithm is proposed that effectively decreases the calculation time, exploiting the special characteristic that only one item of data changes in each fitting operation. A real-time, dynamic compensation circuit was then designed. With the MSP430 microcontroller as the core of the hardware system, the two input signals carrying the three errors are digitized by the AD7862. After data processing with the improved algorithm, two ideal error-free signals are output by the AD7225. At the same time, the two original signals are converted into the corresponding square waves and fed to the direction-discrimination circuit. The pulses from the direction-discrimination circuit are counted by the timer of the microcontroller. From the pulse count and the software subdivision, the final result is displayed on an LED. The algorithm and the circuit were used to test a laser interferometer with 8 times optical path difference, and a measuring accuracy of 12-14 nm was achieved.
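The correction step at the heart of this record can be sketched numerically. Below is a minimal Python sketch (not the MSP430 firmware described above) that applies a Heydemann-style correction to synthetic quadrature signals; the error parameter values are assumed for illustration only, and in practice they would be estimated by the ellipse-fitting step the abstract mentions.

```python
import numpy as np

# Assumed (illustrative) values for the three classic quadrature errors
p, q = 0.05, -0.03          # zero-offset errors on the two channels
r = 1.2                     # amplitude ratio (unequal amplitudes)
alpha = np.deg2rad(3.0)     # quadrature phase-shift error

# Synthetic interferometer signals over one fringe of true phase t
t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
u1 = p + np.cos(t)
u2 = q + (1.0 / r) * np.sin(t - alpha)

# Heydemann-style correction: recover ideal cosine/sine components
x = u1 - p
y = (r * (u2 - q) + x * np.sin(alpha)) / np.cos(alpha)

# Phase reconstructed from the corrected signals matches the true phase
phase = np.unwrap(np.arctan2(y, x))
max_err = np.max(np.abs(phase - t))
print(max_err)  # close to floating-point precision
```

With the errors left uncorrected, the recovered phase would show a periodic subdivision error; after correction the residual is at numerical noise level.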
A new phase-shift microscope designed for high accuracy stitching interferometry
International Nuclear Information System (INIS)
Thomasset, Muriel; Idir, Mourad; Polack, François; Bray, Michael; Servant, Jean-Jacques
2013-01-01
Characterizing nanofocusing X-ray mirrors for the forthcoming nano-imaging beamlines of synchrotron light sources motivates the development of new instruments with improved performance. The sensitivity and accuracy goal is now fixed well under the nm level and, at the same time, the spatial frequency range of the measurement should be pushed toward 50 mm⁻¹. The SOLEIL synchrotron facility has therefore undertaken to equip itself with an interferential microscope suitable for stitching interferometry at this performance level. In order to keep control over the whole metrology chain it was decided to build a custom instrument in partnership with two small optics companies, EOTECH and MBO. The new instrument is a Michelson micro-interferometer equipped with a custom-designed telecentric objective. It achieves the large depth of focus suitable for performing reliable calibrations and measurements. The concept was validated with a predevelopment set-up, delivered in July 2010, which showed a static repeatability below 1 nm PV despite a non-thermally-stabilized environment. The final instrument was delivered early this year and was installed inside SOLEIL's controlled environment facility, where thorough characterization tests are under way. The latest test results and first stitching measurements are presented.
Experimental study of very low permeability rocks using a high accuracy permeameter
International Nuclear Information System (INIS)
Larive, Elodie
2002-01-01
The measurement of fluid flow through 'tight' rocks is important to provide a better understanding of the physical processes involved in several industrial and natural problems. These include deep nuclear waste repositories, management of aquifers, gas, petroleum or geothermal reservoirs, and earthquake prevention. The major part of this work consisted of the design, construction and use of an elaborate experimental apparatus allowing laboratory permeability (fluid flow) measurements of very low permeability rocks, on samples at a centimetric scale, to constrain their hydraulic behaviour at realistic in-situ conditions. The high-accuracy permeameter allows the use of several measurement methods: the steady-state flow method, the transient pulse method, and the sinusoidal pore pressure oscillation method. Measurements were made with the pore pressure oscillation method, using different waveform periods, at several pore and confining pressure conditions, on different materials. The permeability of one natural standard, Westerly granite, and of an artificial one, a micro-porous cement, were measured, and the results obtained agreed with previous measurements made on these materials, showing the reliability of the permeameter. A study of a Yorkshire sandstone shows a relationship between rock microstructure, permeability anisotropy and thermal cracking. Microstructure, porosity and permeability concepts, and laboratory permeability measurement specifications are presented, the permeameter is described, and then the permeability results obtained on the investigated materials are reported [fr
Demonstrating High-Accuracy Orbital Access Using Open-Source Tools
Gilbertson, Christian; Welch, Bryan
2017-01-01
Orbit propagation is fundamental to almost every space-based analysis. Currently, many system analysts use commercial software to predict the future positions of orbiting satellites. This is one of many capabilities that can be replicated, with great accuracy, without using expensive, proprietary software. NASA's SCaN (Space Communication and Navigation) Center for Engineering, Networks, Integration, and Communications (SCENIC) project plans to provide its analysis capabilities using a combination of internal and open-source software, allowing for a much greater measure of customization and flexibility, while reducing recurring software license costs. MATLAB and the open-source Orbit Determination Toolbox created by Goddard Space Flight Center (GSFC) were utilized to develop tools with the capability to propagate orbits, perform line-of-sight (LOS) availability analyses, and visualize the results. The developed programs are modular and can be applied for mission planning and viability analysis in a variety of Solar System applications. The tools can perform two- and N-body orbit propagation, find inter-satellite and satellite-to-ground-station LOS access (accounting for intermediate oblate spheroid body blocking, geometric restrictions of the antenna field-of-view (FOV), and relativistic corrections), and create animations of planetary movement, satellite orbits, and LOS accesses. The code is the basis for SCENIC's broad analysis capabilities including dynamic link analysis, dilution-of-precision navigation analysis, and orbital availability calculations.
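The simplest of the capabilities listed above, two-body propagation, can be sketched in a few lines. This is a minimal Python illustration with a fixed-step RK4 integrator, not the MATLAB/ODTBX tooling the project actually uses; the orbit radius and step count are arbitrary choices for the example.

```python
import numpy as np

MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def two_body_deriv(state):
    """Time derivative of state = [x, y, z, vx, vy, vz] (km, km/s)
    under point-mass gravity."""
    r = state[:3]
    accel = -MU_EARTH * r / np.linalg.norm(r) ** 3
    return np.concatenate([state[3:], accel])

def rk4_propagate(state, dt, steps):
    """Fixed-step classical RK4 integration of the two-body equations."""
    for _ in range(steps):
        k1 = two_body_deriv(state)
        k2 = two_body_deriv(state + 0.5 * dt * k1)
        k3 = two_body_deriv(state + 0.5 * dt * k2)
        k4 = two_body_deriv(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# Circular LEO sanity check: after one orbital period the satellite
# should return (nearly) to its starting position.
r0 = 7000.0                              # km
v0 = np.sqrt(MU_EARTH / r0)              # circular-orbit speed, km/s
period = 2 * np.pi * np.sqrt(r0**3 / MU_EARTH)
s0 = np.array([r0, 0.0, 0.0, 0.0, v0, 0.0])
s1 = rk4_propagate(s0, period / 1000, 1000)
print(np.linalg.norm(s1[:3] - s0[:3]))   # closure error in km (small)
```

A production propagator would add perturbations (oblateness, drag, third bodies) and an adaptive integrator, which is where toolboxes like ODTBX come in.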
A study for high accuracy measurement of residual stress by deep hole drilling technique
Kitano, Houichi; Okano, Shigetaka; Mochizuki, Masahito
2012-08-01
The deep hole drilling (DHD) technique has received much attention in recent years as a method for measuring through-thickness residual stresses. However, some accuracy problems occur when residual stress evaluation is performed by the DHD technique. One reason is that the traditional DHD evaluation formula applies to the plane stress condition. The second is that the effects of the plastic deformation produced in the drilling process and of the deformation produced in the trepanning process are ignored. In this study, a modified evaluation formula, applicable to the plane strain condition, is proposed. In addition, a new procedure is proposed that can account for the effects of the deformation produced in the DHD process, these effects having been investigated in detail by finite element (FE) analysis. The evaluation results obtained by the new procedure are then compared with those obtained by the traditional DHD procedure by FE analysis. As a result, the new procedure evaluates the residual stress fields better than the traditional DHD procedure when the measured object is thick enough that the stress condition can be assumed to be the plane strain condition, as in the model used in this study.
On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry
International Nuclear Information System (INIS)
Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro
2013-01-01
This work deals with the critical aspects related to cost reduction of a Tomo-PIV setup and to the bias errors introduced in the velocity measurements by the coherent motion of the ghost particles. The proposed solution consists of using two independent imaging systems composed of three (or more) low-speed single-frame cameras, which can be up to ten times cheaper than double-shutter cameras with the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation in order to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, unlike in tomographic PIV, the ghost particle distributions of the two exposures are uncorrelated, since their spatial distribution depends on camera orientation. For this reason, the proposed solution promises more accurate results, without the bias effect of coherent ghost-particle motion. Guidelines for the implementation and the application of the present method are proposed. The performance is assessed with a parametric study on synthetic experiments. The proposed low-cost system produces a much lower modulation with respect to an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the case of the standard implementation of tomographic PIV. (paper)
Multipartite entangled quantum states: Transformation, Entanglement monotones and Application
Cui, Wei
Entanglement is one of the fundamental features of quantum information science. Though bipartite entanglement has been analyzed thoroughly in theory and shown to be an important resource in quantum computation and communication protocols, the theory of entanglement shared between more than two parties, which is called multipartite entanglement, is still not complete. Specifically, the classification of multipartite entanglement and the transformation property between different multipartite states by local operations and classical communication (LOCC) are two fundamental questions in the theory of multipartite entanglement. In this thesis, we present results related to the LOCC transformation between multipartite entangled states. Firstly, we investigate the bounds on the LOCC transformation probability between multipartite states, especially the GHZ class states. By analyzing the involvement of 3-tangle and other entanglement measures under weak two-outcome measurement, we derive explicit upper and lower bounds on the transformation probability between GHZ class states. After that, we also analyze the transformation between N-party W type states, which is a special class of multipartite entangled states that has an explicit unique expression and a set of analytical entanglement monotones. We present a necessary and sufficient condition for a known upper bound of transformation probability between two N-party W type states to be achieved. We also further investigate a novel entanglement transformation protocol, the random distillation, which transforms multipartite entanglement into bipartite entanglement shared by a non-deterministic pair of parties. We find upper bounds for the random distillation protocol for general N-party W type states and find the condition for the upper bounds to be achieved. What is surprising is that the upper bounds correspond to entanglement monotones that can be increased by Separable Operators (SEP), which gives the first set of
ISPA - a high accuracy X-ray and gamma camera Exhibition LEPFest 2000
2000-01-01
ISPA offers:
- Ten times better resolution than Anger cameras
- High efficiency single gamma counting
- Noise reduction by sensitivity to gamma energy
...for Single Photon Emission Computed Tomography (SPECT)
Energy Technology Data Exchange (ETDEWEB)
Kinoshita, Kanji; Murayama, Kouichi; Ogata, Hiroyuki [and others
1997-04-01
The fracture behavior of Japanese carbon steel pipe STS410 was examined under dynamic monotonic and cyclic loading through a research program of the International Piping Integrity Research Group (IPIRG-2), in order to evaluate the strength of pipe during a seismic event. Tensile tests and fracture toughness tests were conducted for the base metal and TIG weld metal. Three base metal pipe specimens, 1,500 mm in length and 6-inch diameter sch.120, were employed for quasi-static monotonic, dynamic monotonic and dynamic cyclic loading pipe fracture tests. One weld joint pipe specimen was also employed for a dynamic cyclic loading test. In the dynamic cyclic loading test, the displacement was controlled by applying a fully reversed load (R=-1). The pipe specimens with a circumferential through-wall crack were subjected to four-point bending load at 300°C in air. The Japanese STS410 carbon steel pipe material was found to have high toughness under dynamic loading conditions in the CT fracture toughness test. The pipe fracture tests showed that the maximum moment to pipe fracture under dynamic monotonic and cyclic loading conditions could be estimated by the plastic collapse criterion, and that dynamic monotonic and cyclic loading had little effect on the maximum moment to pipe fracture of the STS410 carbon steel pipe. The STS410 carbon steel pipe seemed to be less sensitive to dynamic and cyclic loading effects than the A106Gr.B carbon steel pipe evaluated in the IPIRG-1 program.
Gated viewing and high-accuracy three-dimensional laser radar
DEFF Research Database (Denmark)
Busck, Jens; Heiselberg, Henning
2004-01-01
, a high PRF of 32 kHz, and a high-speed camera with gate times down to 200 ps and delay steps down to 100 ps. The electronics and the software also allow for gated viewing with automatic gain control versus range, whereby foreground backscatter can be suppressed. We describe our technique for the rapid...
Energy Technology Data Exchange (ETDEWEB)
Yamaguchi, S; Koterayama, W [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics
1996-04-10
The differential global positioning system (DGPS) can eliminate most of the errors in ship velocity measurement by GPS positioning alone. Through two rounds of marine observations towing an observation robot in summer 1995, the authors attempted high-accuracy measurement of ship velocities by DGPS, and also carried out both positioning by GPS alone and measurement using the bottom track of an ADCP (acoustic Doppler current profiler). In this paper, the results obtained by these measurement methods are examined through comparison among them, and the accuracy of the measured ship velocities is considered. In the DGPS measurement, both the translocation method and the interference positioning method were used. The ADCP mounted on the observation robot allowed measurement of the velocity of the current meter itself by its bottom track in shallow sea areas of less than 350 m. As the result of these marine observations, it was confirmed that accuracy equivalent to that of direct measurement by bottom track can be obtained by DGPS. 3 refs., 5 figs., 1 tab.
Sampling dynamics: an alternative to payoff-monotone selection dynamics
DEFF Research Database (Denmark)
Berkemer, Rainer
payoff-monotone nor payoff-positive which has interesting consequences. This can be demonstrated by application to the travelers dilemma, a deliberately constructed social dilemma. The game has just one symmetric Nash equilibrium which is Pareto inefficient. Especially when the travelers have many......'' of the standard game theory result. Both, analytical tools and agent based simulation are used to investigate the dynamic stability of sampling equilibria in a generalized travelers dilemma. Two parameters are of interest: the number of strategy options (m) available to each traveler and an experience parameter...... (k), which indicates the number of samples an agent would evaluate before fixing his decision. The special case (k=1) can be treated analytically. The stationary points of the dynamics must be sampling equilibria and one can calculate that for m>3 there will be an interior solution in addition...
Monotonic childhoods: representations of otherness in research writing
Directory of Open Access Journals (Sweden)
Denise Marcos Bussoletti
2011-12-01
Full Text Available This paper is part of a doctoral thesis entitled “Monotonic childhoods – a rhapsody of hope”. It follows the perspective of a critical psychosocial and cultural study, and aims at discussing the other’s representation in research writing, electing childhood as an allegorical and reflective place. It analyses the drawings and poems of children from the Terezin ghetto during the Second World War. The work is mostly based on Serge Moscovici’s Social Representation Theory, but it is also in constant dialogue with other theories and knowledge fields, especially Walter Benjamin’s and Mikhail Bakhtin’s contributions. In the end, the paper supports the thesis that conceives poetics as one of the translation axes of childhood cultures.
Convex analysis and monotone operator theory in Hilbert spaces
Bauschke, Heinz H
2017-01-01
This reference text, now in its second edition, offers a modern unifying presentation of three basic areas of nonlinear analysis: convex analysis, monotone operator theory, and the fixed point theory of nonexpansive operators. Taking a unique comprehensive approach, the theory is developed from the ground up, with the rich connections and interactions between the areas as the central focus, and it is illustrated by a large number of examples. The Hilbert space setting of the material offers a wide range of applications while avoiding the technical difficulties of general Banach spaces. The authors have also drawn upon recent advances and modern tools to simplify the proofs of key results making the book more accessible to a broader range of scholars and users. Combining a strong emphasis on applications with exceptionally lucid writing and an abundance of exercises, this text is of great value to a large audience including pure and applied mathematicians as well as researchers in engineering, data science, ma...
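A concrete object from the theory this book covers is the resolvent (proximal operator) of a maximally monotone operator, which is firmly nonexpansive. The sketch below is our own minimal numerical illustration, not an example from the book, using the subdifferential of the scaled ℓ1 norm, whose resolvent is soft-thresholding.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam*||x||_1 (soft-thresholding): the resolvent
    (I + lam*d||.||_1)^(-1) of a maximally monotone operator."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Resolvents of monotone operators are (firmly) nonexpansive:
# ||prox(a) - prox(b)|| <= ||a - b|| for all a, b.
a = np.array([3.0, -0.2])
b = np.array([-1.0, 0.5])
lhs = np.linalg.norm(prox_l1(a, 1.0) - prox_l1(b, 1.0))
rhs = np.linalg.norm(a - b)
print(lhs <= rhs)  # True
```

Nonexpansiveness of such resolvents is what makes fixed-point iterations like the proximal point algorithm converge, one of the central themes of the book.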
Expert system for failures detection and non-monotonic reasoning
International Nuclear Information System (INIS)
Assis, Abilio de; Schirru, Roberto
1997-01-01
This paper presents the development of a shell called TIGER, intended to serve as an environment for the development of expert systems for fault diagnosis in complex industrial plants. A knowledge representation model and an inference engine based on non-monotonic reasoning have been developed in order to provide flexibility in the representation of complex plants as well as the performance needed to satisfy real-time constraints. TIGER is able to provide both the fault that occurred and a hierarchical view of the several causes that led to the fault. As a validation of the developed shell, a monitoring system for the critical safety functions of Angra-1 has been developed. 7 refs., 7 figs., 2 tabs
Monotonicity of fitness landscapes and mutation rate control.
Belavkin, Roman V; Channon, Alastair; Aston, Elizabeth; Aston, John; Krašovec, Rok; Knight, Christopher G
2016-12-01
A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on the work of Ronald Fisher, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and find that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.
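The geometric intuition (higher mutation rates pay off far from the optimum) can be shown with a standard expected-distance calculation for binary strings under per-site mutation. This toy model is our own illustration, not the landscapes analysed in the paper: for strings of length L at Hamming distance d from the optimum, with per-site flip probability u, the expected offspring distance is E[d'] = d(1-u) + (L-d)u = d + u(L - 2d).

```python
# Expected offspring Hamming distance to the optimum for binary strings
# of length L, current distance d, per-site mutation rate u (toy model):
#   E[d'] = d + u * (L - 2*d)
L = 100

def expected_dist(d, u):
    """Expected distance to optimum after one round of per-site mutation."""
    return d + u * (L - 2 * d)

# Far from the optimum (d > L/2) a larger mutation rate helps on average...
print(expected_dist(80, 0.5) < expected_dist(80, 0.01))  # True
# ...while close to the optimum, a small rate is better.
print(expected_dist(5, 0.01) < expected_dist(5, 0.5))    # True
```

In this linear model the sign of (L - 2d) flips at d = L/2, so the optimal rate rises as fitness (negative distance) falls, which mirrors the shape of the control functions derived in the paper's geometric case.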
Wening, Stefanie; Keith, Nina; Abele, Andrea E
2016-06-01
In negotiations, a focus on interests (why negotiators want something) is key to integrative agreements. Yet, many negotiators spontaneously focus on positions (what they want), with suboptimal outcomes. Our research applies construal-level theory to negotiations and proposes that a high construal level instigates a focus on interests during negotiations which, in turn, positively affects outcomes. In particular, we tested the notion that the effect of construal level on outcomes was mediated by information exchange and judgement accuracy. Finally, we expected the mere mode of presentation of task material to affect construal levels and manipulated construal levels using concrete versus abstract negotiation tasks. In two experiments, participants negotiated in dyads in either a high- or low-construal-level condition. In Study 1, high-construal-level dyads outperformed dyads in the low-construal-level condition; this main effect was mediated by information exchange. Study 2 replicated both the main and mediation effects using judgement accuracy as mediator and additionally yielded a positive effect of a high construal level on a second, more complex negotiation task. These results not only provide empirical evidence for the theoretically proposed link between construal levels and negotiation outcomes but also shed light on the processes underlying this effect. © 2015 The British Psychological Society.
Neutrino mass from cosmology: impact of high-accuracy measurement of the Hubble constant
Energy Technology Data Exchange (ETDEWEB)
Sekiguchi, Toyokazu [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582 (Japan); Ichikawa, Kazuhide [Department of Micro Engineering, Kyoto University, Kyoto 606-8501 (Japan); Takahashi, Tomo [Department of Physics, Saga University, Saga 840-8502 (Japan); Greenhill, Lincoln, E-mail: sekiguti@icrr.u-tokyo.ac.jp, E-mail: kazuhide@me.kyoto-u.ac.jp, E-mail: tomot@cc.saga-u.ac.jp, E-mail: greenhill@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2010-03-01
Non-zero neutrino mass would affect the evolution of the Universe in observable ways, and a strong constraint on the mass can be achieved using combinations of cosmological data sets. We focus on the power spectrum of cosmic microwave background (CMB) anisotropies, the Hubble constant H₀, and the length scale for baryon acoustic oscillations (BAO) to investigate the constraint on the neutrino mass, m_ν. We analyze data from multiple existing CMB studies (WMAP5, ACBAR, CBI, BOOMERANG, and QUAD), a recent measurement of H₀ (SHOES) with about two times lower uncertainty (5%) than previous estimates, and recent treatments of BAO from the Sloan Digital Sky Survey (SDSS). We obtained an upper limit of m_ν < 0.2 eV (95% C.L.) for a flat ΛCDM model. This is a 40% reduction in the limit derived from previous H₀ estimates and one-third lower than can be achieved with extant CMB and BAO data. We also analyze the impact of smaller uncertainty on measurements of H₀ as may be anticipated in the near term, in combination with CMB data from the Planck mission and BAO data from the SDSS/BOSS program. We demonstrate the possibility of a 5σ detection for a fiducial neutrino mass of 0.1 eV, or a 95% upper limit of 0.04 eV for a fiducial of m_ν = 0 eV. These constraints are about 50% better than those achieved without external constraint. We further investigate the impact on modeling where the dark-energy equation of state is constant but not necessarily -1, or where a non-flat universe is allowed. In these cases, the next-generation accuracies of Planck, BOSS, and a 1% measurement of H₀ would all be required to obtain the limit m_ν < 0.05-0.06 eV (95% C.L.) for the fiducial of m_ν = 0 eV. The independence of systematics argues for pursuit of both BAO and H₀ measurements.
Challenges in high accuracy surface replication for micro optics and micro fluidics manufacture
DEFF Research Database (Denmark)
Tosello, Guido; Hansen, Hans Nørgaard; Calaon, Matteo
2014-01-01
Patterning the surface of polymer components with microstructured geometries is employed in optical and microfluidic applications. Mass fabrication of polymer micro structured products is enabled by replication technologies such as injection moulding. Micro structured tools are also produced...... by replication technologies such as nickel electroplating. All replication steps are enabled by a high precision master and high reproduction fidelity to ensure that the functionalities associated with the design are transferred to the final component. Engineered surface micro structures can be either...
Shaw, Patricia; Zhang, Vivien; Metallinos-Katsaras, Elizabeth
2009-02-01
The objective of this study was to examine the quantity and accuracy of dietary supplement (DS) information through magazines with high adolescent readership. Eight (8) magazines (3 teen and 5 adult with high teen readership) were selected. A content analysis for DS was conducted on advertisements and editorials (i.e., articles, advice columns, and bulletins). Noted claims/cautions regarding DS were evaluated for accuracy using Medlineplus.gov and Naturaldatabase.com. Claims for dietary supplements with three or more types of ingredients and those in advertisements were not evaluated. Advertisements were evaluated with respect to size, referenced research, testimonials, and Dietary Supplement Health and Education Act of 1994 (DSHEA) warning visibility. Eighty-eight (88) issues from eight magazines yielded 238 DS references. Fifty (50) issues from five magazines contained no DS reference. Among teen magazines, seven DS references were found: five in the editorials and two in advertisements. In adult magazines, 231 DS references were found: 139 in editorials and 92 in advertisements. Of the 88 claims evaluated, 15% were accurate, 23% were inconclusive, 3% were inaccurate, 5% were partially accurate, and 55% were unsubstantiated (i.e., not listed in reference databases). Of the 94 DS evaluated in advertisements, 43% were full page or more, 79% did not have a DSHEA warning visible, 46% referred to research, and 32% used testimonials. Teen magazines contain few references to DS, none accurate. Adult magazines that have a high teen readership contain a substantial amount of DS information with questionable accuracy, raising concerns that this information may increase the chances of inappropriate DS use by adolescents, thereby increasing the potential for unexpected effects or possible harm.
Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE
2009-01-01
Background Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States and their accuracy to screen for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective To compare the accuracy and to define ethnic and gender-specific optimal cut points for body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when they are used in screening for high risk of CHD in the Latin-American and the US populations. Methods We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18 976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operator characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). Optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m2). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion WHR is the most accurate anthropometric indicator to screen for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men. PMID:19238159
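The abstract's optimal cut points are derived from a misclassification-cost term; a common, simpler alternative that illustrates the same idea is maximising Youden's J over candidate thresholds. The sketch below is a generic illustration with toy data, not the study's actual cost function.

```python
import numpy as np

def youden_optimal_cutpoint(values, labels):
    """Cut point for a continuous marker (e.g. WHR) that maximises
    Youden's J = sensitivity + specificity - 1 against binary labels."""
    values, labels = np.asarray(values, float), np.asarray(labels, int)
    best_cut, best_j = None, -1.0
    for c in np.unique(values):
        pred = values >= c                      # "high risk" prediction
        sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
        spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut, best_j

# toy WHR values and CHD-risk labels, purely illustrative
cut, j = youden_optimal_cutpoint(
    [0.85, 0.88, 0.90, 0.91, 0.95, 0.97, 1.00, 1.02],
    [0, 0, 0, 0, 1, 1, 1, 1])
```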
DEFF Research Database (Denmark)
Calaon, Matteo; Tosello, Guido; Elsborg, René
2016-01-01
The mass-replication nature of the process calls for fast monitoring of process parameters and product geometrical characteristics. In this direction, the present study addresses the possibility to develop a micro manufacturing platform for micro assembly injection moulding with real-time process....../product monitoring and metrology. The study represents a new concept yet to be developed with great potential for high-precision mass manufacturing of highly functional 3D multi-material (i.e. including metal/soft polymer) micro components. The activities related to the HINMICO project objectives prove the importance...
Mejias, Jorge F; Payeur, Alexandre; Selin, Erik; Maler, Leonard; Longtin, André
2014-01-01
The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry (also known as "open-loop feedback"), which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise is very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
Directory of Open Access Journals (Sweden)
Jorge F Mejias
2014-02-01
Full Text Available The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry (also known as "open-loop feedback"), which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise is very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
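The subtractive and divisive regimes described above can be illustrated with a minimal rate-model sketch (not the spiking network of the study): subtractive inhibition shifts a threshold-linear f-I curve rightward, while divisive inhibition scales its slope. All parameters here are illustrative.

```python
import numpy as np

def f_I(current, threshold=1.0, gain=1.0):
    """Threshold-linear f-I curve: firing rate as a function of input."""
    return gain * np.maximum(current - threshold, 0.0)

I = np.linspace(0.0, 5.0, 6)
baseline    = f_I(I)            # no inhibition
subtractive = f_I(I - 0.5)      # inhibition subtracts from the input drive
divisive    = f_I(I) / 2.0      # inhibition halves the slope (the gain)
```

In the subtractive regime the whole curve shifts along the input axis; in the divisive regime the curve's slope (gain) is scaled, which is the distinction the abstract draws between the two regimes.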
Non-monotonic wetting behavior of chitosan films induced by silver nanoparticles
Energy Technology Data Exchange (ETDEWEB)
Praxedes, A.P.P.; Webler, G.D.; Souza, S.T. [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Ribeiro, A.S. [Instituto de Química e Biotecnologia, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Fonseca, E.J.S. [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil); Oliveira, I.N. de, E-mail: italo@fis.ufal.br [Instituto de Física, Universidade Federal de Alagoas, 57072-970 Maceió, AL (Brazil)
2016-05-01
Highlights: • The addition of silver nanoparticles modifies the morphology of chitosan films. • Metallic nanoparticles can be used to control wetting properties of chitosan films. • The contact angle shows a non-monotonic dependence on the silver concentration. - Abstract: The present work is devoted to the study of structural and wetting properties of chitosan-based films containing silver nanoparticles. In particular, the effects of silver concentration on the morphology of chitosan films are characterized by different techniques, such as atomic force microscopy (AFM), X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR). By means of dynamic contact angle measurements, we study the modification of the surface properties of chitosan-based films due to the addition of silver nanoparticles. The results are analyzed in light of the molecular-kinetic theory, which describes wetting phenomena in terms of statistical dynamics for the displacement of liquid molecules on a solid substrate. Our results show that the wetting properties of chitosan-based films are highly sensitive to the fraction of silver nanoparticles, with the equilibrium contact angle exhibiting a non-monotonic behavior.
Hamid, Nubailah Abd; Ibrahim, Azmi; Adnan, Azlan; Ismail, Muhammad Hussain
2018-05-01
This paper discusses the superelastic behavior of the shape memory alloy (SMA) NiTi when used as reinforcement in concrete beams, and its ability to recover and reduce permanent deformations of concrete flexural members. Small-scale simply supported reinforced concrete (RC) beams with hybrid NiTi/steel reinforcement, together with a conventionally reinforced control beam, were experimentally investigated under monotonic loads. The control beam measured 125 mm × 270 mm × 1000 mm, with three 12 mm diameter bars as main compression reinforcement, three 12 mm bars as tension or hanger bars, and 6 mm diameter bars at 100 mm c/c as shear reinforcement. In the hybrid beam, a minimal 200 mm length of 12.7 mm diameter superelastic SMA bar replaced the steel rebar in the critical region of the beam. In conclusion, combining SMA bars with high-strength steel in the conventional reinforcement gave the SMA beam improved performance in terms of crack recovery and deformation. The use of hybrid NiTi and steel reinforcement can therefore substantially reduce earthquake damage risk and the associated repair costs.
High Accuracy Three-dimensional Simulation of Micro Injection Moulded Parts
DEFF Research Database (Denmark)
Tosello, Guido; Costa, F. S.; Hansen, Hans Nørgaard
2011-01-01
Micro injection moulding (μIM) is the key replication technology for high precision manufacturing of polymer micro products. Data analysis and simulations on micro-moulding experiments have been conducted during the present validation study. Detailed information about the μIM process was gathered...
Museum genomics: low-cost and high-accuracy genetic data from historical specimens.
Rowe, Kevin C; Singhal, Sonal; Macmanes, Matthew D; Ayroles, Julien F; Morelli, Toni Lyn; Rubidge, Emily M; Bi, Ke; Moritz, Craig C
2011-11-01
Natural history collections are unparalleled repositories of geographical and temporal variation in faunal conditions. Molecular studies offer an opportunity to uncover much of this variation; however, genetic studies of historical museum specimens typically rely on extracting highly degraded and chemically modified DNA samples from skins, skulls or other dried samples. Despite this limitation, obtaining short fragments of DNA sequences using traditional PCR amplification of DNA has been the primary method for genetic study of historical specimens. Few laboratories have succeeded in obtaining genome-scale sequences from historical specimens and then only with considerable effort and cost. Here, we describe a low-cost approach using high-throughput next-generation sequencing to obtain reliable genome-scale sequence data from a traditionally preserved mammal skin and skull using a simple extraction protocol. We show that single-nucleotide polymorphisms (SNPs) from the genome sequences obtained independently from the skin and from the skull are highly repeatable compared to a reference genome. © 2011 Blackwell Publishing Ltd.
Algorithm of dynamic regulation of a system of duct, for a high accuracy climatic system
Arbatskiy, A. A.; Afonina, G. N.; Glazov, V. S.
2017-11-01
Currently, most climatic systems operate only in a stationary, as-designed mode. At the same time, many modern industrial sites require constant or periodic changes in the technological process: for about 80% of the time, the industrial site does not need the ventilation system at its design point, yet high-precision climatic parameters must still be maintained. When climatic systems that serve several rooms in parallel are not in constant use, balancing the duct system becomes a problem. For this problem, an algorithm for quantity (flow) regulation with minimal changes was created. Dynamic duct system: a parallel control system of air balance was developed that maintains high precision of the climatic parameters. The algorithm keeps the pressure in the main duct constant under varying air flows, so the terminal devices have only one regulation parameter: the open area of their flaps. The precision of regulation increases, and the climatic system maintains high precision for temperature and humidity (0.5 °C for temperature, 5% for relative humidity). Result: the research was carried out in the CFD package PHOENICS. Results were obtained for the air velocity and pressure in the duct for different operating modes, and equations for the air valve positions under different room climate parameters were derived. The energy-saving potential of the dynamic duct system was calculated for different types of rooms.
Log-supermodularity of weight functions and the loading monotonicity of weighted insurance premiums
Hristo S. Sendov; Ying Wang; Ricardas Zitikis
2010-01-01
The paper is motivated by a problem concerning the monotonicity of insurance premiums with respect to their loading parameter: the larger the parameter, the larger the insurance premium is expected to be. This property, usually called loading monotonicity, is satisfied by premiums that appear in the literature. The increased interest in constructing new insurance premiums has raised a question as to what weight functions would produce loading-monotonic premiums. In this paper we demonstrate a...
International Nuclear Information System (INIS)
Burkhard, Boeckem
1999-01-01
In the course of the progressive development of sophisticated geodetic systems utilizing electromagnetic waves in the visible or near-IR range, more detailed knowledge of the propagation medium and, concurrently, solutions to atmospherically induced limitations will become important. An alignment system based on atmospheric dispersion, called a dispersometer, is a metrological solution to the atmospherically induced limitations in optical alignment and direction observations of high accuracy. In the dispersometer we use the dual-wavelength method for dispersive air to obtain refraction-compensated angle measurements, the detrimental impact of atmospheric turbulence notwithstanding. The principle of the dual-wavelength method utilizes atmospheric dispersion, i.e. the wavelength dependence of the refractive index. The difference angle between two light beams of different wavelengths, called the dispersion angle Δβ, is to first approximation proportional to the refraction angle: β_IR ≈ ν(β_blue − β_IR) = ν·Δβ. This equation implies that the dispersion angle has to be measured at least 42 times more accurately than the desired accuracy of the refraction angle for the wavelengths used in the present dispersometer. This required accuracy constitutes one major difficulty for the instrumental performance in applying the dispersion effect. However, the dual-wavelength method can only be successfully used in an optimized transmitter-receiver combination. Beyond the above mentioned resolution requirement for the detector, major difficulties in instrumental realization arise in the availability of a suitable dual-wavelength laser light source, laser light modulation with a very high extinction ratio and coaxial emittance of mono-mode radiation at both wavelengths. Therefore, this paper focuses on the solutions of the dual-wavelength transmitter, introducing a new hardware approach and a complete re-design of the conception proposed in [1] of the dual
A System of Generalized Variational Inclusions Involving a New Monotone Mapping in Banach Spaces
Directory of Open Access Journals (Sweden)
Jinlin Guan
2013-01-01
Full Text Available We introduce a new monotone mapping in Banach spaces, which is an extension of the -monotone mapping studied by Nazemi (2012), and we generalize the variational inclusion involving the -monotone mapping. Based on the new monotone mapping, we propose a new proximal mapping which combines the proximal mapping studied by Nazemi (2012) with the mapping studied by Lan et al. (2011) and show its Lipschitz continuity. Based on the new proximal mapping, we give an iterative algorithm. Furthermore, we prove the convergence of iterative sequences generated by the algorithm under some appropriate conditions. Our results improve and extend corresponding ones announced by many others.
Obliquely Propagating Non-Monotonic Double Layer in a Hot Magnetized Plasma
International Nuclear Information System (INIS)
Kim, T.H.; Kim, S.S.; Hwang, J.H.; Kim, H.Y.
2005-01-01
An obliquely propagating non-monotonic double layer is investigated in a hot magnetized plasma, which consists of a positively charged hot ion fluid and trapped as well as free electrons. A model equation (modified Korteweg-de Vries equation) is derived by the usual reductive perturbation method from a set of basic hydrodynamic equations. A time-stationary, obliquely propagating non-monotonic double layer solution is obtained in a hot magnetized plasma. This solution is an analytic extension of the monotonic double layer and the solitary hole. The effects of obliqueness, external magnetic field and ion temperature on the properties of the non-monotonic double layer are discussed
Picatoste Ruilope, Ricardo; Masi, Alessandro
Hybrid Stepper Motors are widely used in open-loop position applications. They are the choice of actuation for the collimators in the Large Hadron Collider, the largest particle accelerator at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter forces both the use of long cables to connect the motors to the drives, which act as transmission lines, and also prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring the prevention of step loss in the motors and maintenance to be foreseen in case of mechanical degradation. In order to make the above possible, an approach is proposed for the application of an Extended Kalman Filter to a sensorless stepper motor drive, when the motor is separated from its drive by long cables. When the long cables and high frequency pulse width modulated control voltage signals are used together, the electrical signals differ greatly...
Huang, Wei-Ren; Huang, Shih-Pu; Tsai, Tsung-Yueh; Lin, Yi-Jyun; Yu, Zong-Ru; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Young, Hong-Tsu
2017-09-01
Spherical lenses introduce spherical aberration and reduce optical performance. Consequently, practical optical systems apply a combination of spherical lenses for aberration correction, which increases the volume of the optical system. In modern optical systems, aspherical lenses are widely used because of their high optical performance with fewer optical components. However, aspherical surfaces cannot be fabricated by the traditional full-aperture polishing process due to their varying curvature, and sub-aperture computer numerical control (CNC) polishing has been adopted for aspherical surface fabrication in recent years. CNC polishing, however, normally introduces mid-spatial frequency (MSF) error, and the resulting MSF surface texture decreases optical performance in high-precision optical systems, especially for short-wavelength applications. Based on a bonnet-polishing CNC machine, this study focuses on the relationship between MSF surface texture and CNC polishing parameters, which include feed rate, head speed, track spacing and path direction. Power spectral density (PSD) analysis is used to judge the MSF level caused by those polishing parameters. The test results show that controlling the removal depth of a single polishing path through the feed rate, and avoiding same-direction polishing paths when a higher total removal depth is needed, can efficiently reduce the MSF error. To verify the optimal polishing parameters, we divided a correction polishing process into several polishing runs with different path directions. Compared to a one-shot polishing run, the multi-direction path polishing plan produced better surface quality on the optics.
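The PSD analysis mentioned above can be sketched for a 1-D surface height profile as follows. The windowing, normalisation and the synthetic mid-spatial-frequency ripple are illustrative assumptions, not the study's actual measurement procedure.

```python
import numpy as np

def profile_psd(z, dx):
    """One-sided power spectral density of a 1-D surface height profile.

    z  : height samples
    dx : sample spacing (peak locations come out in cycles per unit of dx)
    Returns spatial frequencies and PSD values.
    """
    n = len(z)
    window = np.hanning(n)                       # reduce spectral leakage
    Z = np.fft.rfft((z - z.mean()) * window)
    freqs = np.fft.rfftfreq(n, d=dx)
    # normalise by the window's mean-square so amplitudes stay comparable
    psd = (np.abs(Z) ** 2) * 2.0 * dx / (n * (window ** 2).mean())
    return freqs, psd

# synthetic profile: a small ripple on an otherwise smooth surface
x = np.arange(0, 10.0, 0.01)                     # 10 mm scan, 10 µm spacing
z = 5e-3 * np.sin(2 * np.pi * 2.0 * x)           # ripple at 2 cycles/mm
freqs, psd = profile_psd(z, dx=0.01)
peak_freq = freqs[np.argmax(psd)]                # sits near 2 cycles/mm
```

A PSD peak in the mid-spatial-frequency band like this one is exactly the signature the study uses to judge how polishing parameters imprint MSF texture.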
High accuracy injection circuit for the calibration of a large pixel sensor matrix
International Nuclear Information System (INIS)
Quartieri, E.; Comotti, D.; Manghisoni, M.
2013-01-01
Semiconductor pixel detectors, for particle tracking and vertexing in high energy physics experiments as well as for X-ray imaging, in particular for synchrotron light sources and XFELs, require a large area sensor matrix. This work discusses the design and the characterization of a high-linearity, low-dispersion injection circuit to be used for pixel-level calibration of detector readout electronics in a large pixel sensor matrix. The circuit provides a useful tool for the characterization of the readout electronics of the pixel cell unit for both monolithic active pixel sensors and hybrid pixel detectors. In the latter case, the circuit allows for precise analogue testing of the readout channel already at the chip level, when no sensor is connected. Moreover, it provides a simple means of calibrating the readout electronics once the detector has been connected to the chip. Two injection techniques are provided by the circuit: one for a charge-sensitive amplification channel and the other for a transresistance readout channel. The aim of the paper is to describe the architecture and the design guidelines of the calibration circuit, which has been implemented in a 130 nm CMOS technology. Moreover, experimental results of the proposed injection circuit are presented in terms of linearity and dispersion
Melendez, J; Hogeweg, L; Sánchez, C I; Philipsen, R H H M; Aldridge, R W; Hayward, A C; Abubakar, I; van Ginneken, B; Story, A
2018-05-01
Tuberculosis (TB) screening programmes can be optimised by reducing the number of chest radiographs (CXRs) requiring interpretation by human experts. To evaluate the performance of computerised detection software in triaging CXRs in a high-throughput digital mobile TB screening programme. A retrospective evaluation of the software was performed on a database of 38 961 postero-anterior CXRs from unique individuals seen between 2005 and 2010, 87 of whom were diagnosed with TB. The software generated a TB likelihood score for each CXR. This score was compared with a reference standard for notified active pulmonary TB using receiver operating characteristic (ROC) curve and localisation ROC (LROC) curve analyses. On ROC curve analysis, software specificity was 55.71% (95%CI 55.21-56.20) and negative predictive value was 99.98% (95%CI 99.95-99.99), at a sensitivity of 95%. The area under the ROC curve was 0.90 (95%CI 0.86-0.93). Results of the LROC curve analysis were similar. The software could identify more than half of the normal images in a TB screening setting while maintaining high sensitivity, and may therefore be used for triage.
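The reported operating point (specificity achieved at a fixed 95% sensitivity) can be reproduced from raw likelihood scores with a short threshold sweep. The sketch below is a generic illustration of that ROC-style analysis, not the detection software's internal logic; the toy scores and labels are invented.

```python
import numpy as np

def specificity_at_sensitivity(scores, labels, target_sens=0.95):
    """Sweep thresholds from high to low; return the first threshold
    reaching the target sensitivity, with the sensitivity and
    specificity obtained there (label 1 = disease, higher score = more
    disease-like)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    for t in np.unique(scores)[::-1]:
        pred = scores >= t
        sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
        if sens >= target_sens:
            spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
            return t, sens, spec
    return scores.min(), 1.0, 0.0
```

In a triage setting, the specificity at this operating point is the fraction of normal images the software can safely rule out while keeping sensitivity high.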
Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu
2016-10-01
As an indispensable piece of equipment in inertial technology tests, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). In order to ensure the calibration accuracy of the INS, the initial state of the turntable must be measured accurately. However, the traditional measuring method requires a lot of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test procedure is complex and inefficient. It is therefore relatively difficult for inertial measurement equipment manufacturers to realize self-inspection of the turntable. Owing to the high-precision attitude information provided by a laser gyro strapdown inertial navigation system (SINS) after fine alignment, it can be used as the attitude reference for the initial state measurement of the three-axis turntable. Based on the principle that a fixed rotation vector increment is not affected by the measuring point, we use the laser gyro INS and the encoder of the turntable to provide the attitudes of the turntable mounting plate. In this way, high-accuracy measurement of the perpendicularity error and initial attitude of the three-axis turntable has been achieved.
Zhao, Dan; Wang, Xiao; Mu, Jie; Li, Zhilin; Zuo, Yanlei; Zhou, Song; Zhou, Kainan; Zeng, Xiaoming; Su, Jingqin; Zhu, Qihua
2017-02-01
The grating tiling technology is one of the most effective means to increase the aperture of gratings. The line-density error (LDE) between sub-gratings degrades the performance of the tiled gratings, so high-accuracy measurement and compensation of the LDE are important for improving the output pulse characteristics of the tiled-grating compressor. In this paper, the influence of the LDE on the output pulses of the tiled-grating compressor is quantitatively analyzed by means of numerical simulation, and the output beam drift and output pulse broadening resulting from the LDE are presented. Based on the numerical results, we propose a compensation method that reduces the degradation of the tiled-grating compressor by applying an angular tilt error and a longitudinal piston error at the same time. Moreover, a monitoring system is set up to measure the LDE between sub-gratings accurately, and the dispersion variation due to the LDE is also demonstrated based on spatial-spectral interference. In this way, we can realize high-accuracy measurement and compensation of the LDE, and this provides an efficient way to guide the adjustment of the tiled gratings.
High accuracy velocity control method for the french moving-coil watt balance
International Nuclear Information System (INIS)
Topcu, Suat; Chassagne, Luc; Haddad, Darine; Alayli, Yasser; Juncar, Patrick
2004-01-01
We describe a novel method of velocity control dedicated to the French moving-coil watt balance. In this project, a coil has to move in a magnetic field at a velocity of 2 mm s⁻¹ with a relative uncertainty of 10⁻⁹ over 60 mm. Our method is based on the use of a heterodyne Michelson interferometer, a two-level translation stage, and a homemade high-frequency phase-shifting electronic circuit. To quantify the stability of the velocity, the output of the interferometer is sent to a frequency counter and the Doppler frequency shift is recorded. The Allan standard deviation has been used to calculate the stability, and a σ_y(τ) of about 2.2×10⁻⁹ over 400 s has been obtained.
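The Allan deviation used above to quantify velocity stability can be computed from a series of fractional frequency (or fractional velocity) samples. This is a generic non-overlapping estimator, not the authors' exact analysis chain.

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional frequency samples y
    at averaging factor m (averaging time tau = m * sample interval)."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)   # tau-averages
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))
```

For white frequency noise the estimate falls as 1/√τ, which is why evaluating σ_y(τ) at long averaging times (here 400 s) reveals the stability floor of the velocity control.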
Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals
Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei
2018-01-01
Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented, first generating the twiddle factors using the designed recursive scheme. By combining the discrete Fourier transform (DFT) with the Rife algorithm and Fourier-coefficient interpolation, this scheme requires no multiplications and only half the additions of conventional methods such as the DFT and the fast Fourier transform. Experimentally, when the sampling frequency is 10 MHz, real-time frequency measurements of intermediate-frequency narrowband signals have a mean squared measurement error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and less calculation time.
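The Rife-style interpolation mentioned above refines the DFT peak-bin frequency using the amplitude ratio of the peak and its larger neighbour. The sketch below is a textbook form of the estimator with illustrative parameters, not the paper's microprocessor implementation.

```python
import numpy as np

def rife_frequency(x, fs):
    """Single-tone frequency estimate: locate the DFT peak bin, then
    interpolate toward the larger neighbouring bin (Rife ratio)."""
    n = len(x)
    mag = np.abs(np.fft.rfft(x))
    k = int(np.argmax(mag[1:-1])) + 1            # peak bin, skip DC/Nyquist
    if mag[k + 1] >= mag[k - 1]:                 # interpolate toward the
        delta = mag[k + 1] / (mag[k] + mag[k + 1])   # larger side
    else:
        delta = -mag[k - 1] / (mag[k] + mag[k - 1])
    return (k + delta) * fs / n

fs, f0 = 10e6, 1.23456e6                         # 10 MHz sampling, IF tone
t = np.arange(4096) / fs
estimate = rife_frequency(np.sin(2 * np.pi * f0 * t), fs)
```

With a 10 MHz sampling rate and a 4096-point DFT, the raw bin spacing is about 2.4 kHz; the ratio interpolation recovers the tone to well within a bin, illustrating how sub-bin accuracy is obtained without a longer transform.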
A high-accuracy image registration algorithm using phase-only correlation for dental radiographs
International Nuclear Information System (INIS)
Ito, Koichi; Nikaido, Akira; Aoki, Takafumi; Kosuge, Eiko; Kawamata, Ryota; Kashima, Isamu
2008-01-01
Dental radiographs have been used for the accurate assessment and treatment of dental diseases. Nonlinear deformation between two dental radiographs may be observed even if they are taken from the same oral region of the subject. For an accurate diagnosis, complete geometric registration between radiographs is required. This paper presents an efficient dental radiograph registration algorithm using the Phase-Only Correlation (POC) function. The use of phase components of the 2D (two-dimensional) discrete Fourier transforms of dental radiograph images makes it possible to achieve highly robust image registration and recognition. Experimental evaluation using a dental radiograph database indicates that the proposed algorithm exhibits efficient recognition performance even for distorted radiographs. (author)
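The core of POC can be sketched in a few lines: normalise the cross-power spectrum to unit magnitude so that only phase information remains, then read the translation off the correlation peak. The toy image and shift below are illustrative; the paper's algorithm adds refinements for nonlinear deformation that are not shown here.

```python
import numpy as np

def phase_only_correlation(f, g, eps=1e-12):
    """POC surface of two equally sized images; the peak location gives
    the (cyclic) translation of g relative to f."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    # keep only the phase of the cross-power spectrum
    poc = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    return poc

# shift a test image and recover the displacement from the POC peak
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 3), axis=(0, 1))
poc = phase_only_correlation(shifted, img)
dy, dx = np.unravel_index(np.argmax(poc), poc.shape)
```

Because intensity information is discarded, the peak stays sharp even when the two radiographs differ in contrast or exposure, which is what makes the method robust.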
Burress, Jacob; Bethea, Donald; Troub, Brandon
2017-05-01
The accurate measurement of adsorbed gas up to high pressures (˜100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ˜0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.
Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I
2017-10-16
With knowledge of an individual's critical power (CP) and W', the SKIBA 2 model provides a framework with which to track W' balance during intermittent high-intensity work bouts. There are concerns that the time constant controlling the recovery rate of W' (τW') may require refinement to enable effective use in an elite population. Four elite endurance cyclists completed an array of intermittent exercise protocols to volitional exhaustion. Each protocol lasted approximately 3.5-6 minutes and featured a range of recovery intensities, set in relation to the athletes' CPs (DCP). Using the framework of the SKIBA 2 model, the τW' values were modified for each protocol to achieve an accurate W' at volitional exhaustion. Modified τW' values were compared to the equivalent SKIBA 2 τW' values to assess the difference in recovery rates for this population. Plotting modified τW' values against DCP showed the adjusted relationship between work-rate and recovery-rate. Comparing modified τW' values against the SKIBA 2 τW' values showed a negative bias of 112±46 s (mean±95% CL), suggesting athletes recovered W' faster than predicted by SKIBA 2 (p=0.0001). The modified τW'-to-DCP relationship was best described by a power function: τW' = 2287.2·DCP^(-0.688) (R² = 0.433). The current SKIBA 2 model is not appropriate for use in elite cyclists, as it under-predicts the recovery rate of W'. The modified τW' equation presented will require validation, but appears more appropriate for high-performance athletes. Individual τW' relationships may be necessary in order to maximise the model's validity.
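The W'-balance bookkeeping behind such protocols can be sketched with a simplified integral-style model: efforts above CP deplete W', and the accumulated expenditure decays exponentially with the time constant τW', here taken from the paper's fitted power function. The CP, W' and protocol values below are hypothetical; this is a sketch of the model family, not the full SKIBA 2 implementation:

```python
import math

def tau_modified(d_cp):
    """Recovery time constant from the paper's fitted power function,
    tau_W' = 2287.2 * DCP^-0.688, with DCP = CP - recovery power (W)."""
    return 2287.2 * d_cp ** -0.688

def w_prime_balance(power, cp, w0, tau, dt=1.0):
    """Integral-style W' balance: work above CP depletes W', and the
    accumulated expenditure decays exponentially with time constant tau
    (a simplified sketch of a SKIBA-type model, not the full SKIBA 2)."""
    decay = math.exp(-dt / tau)
    expended, balance = 0.0, []
    for p in power:
        expended = expended * decay + max(p - cp, 0.0) * dt
        balance.append(w0 - expended)
    return balance

# illustrative bout: 180 s at 500 W, then 60 s recovery at 300 W
cp, w0 = 400.0, 20000.0          # hypothetical CP (W) and W' (J)
tau = tau_modified(cp - 300.0)   # DCP = 100 W  →  tau ≈ 96 s
balance = w_prime_balance([500.0] * 180 + [300.0] * 60, cp, w0, tau)
print(round(tau, 1), round(balance[179]), round(balance[-1]))
```

The balance falls during the supra-CP effort and climbs back during sub-CP recovery; a smaller τW' (faster recovery), as the paper finds for elite cyclists, returns W' toward its starting value sooner.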
Good, Ryan J; Leroue, Matthew K; Czaja, Angela S
2018-06-07
Noninvasive positive pressure ventilation (NIPPV) is increasingly used in critically ill pediatric patients, despite limited data on safety and efficacy. Administrative data may be a good resource for observational studies. Therefore, we sought to assess the performance of the International Classification of Diseases, Ninth Revision procedure code for NIPPV. Patients admitted to the PICU requiring NIPPV or heated high-flow nasal cannula (HHFNC) over the 11-month study period were identified from the Virtual PICU System database. The gold standard was manual review of the electronic health record to verify the use of NIPPV or HHFNC among the cohort. The presence or absence of a NIPPV procedure code was determined by using administrative data. Test characteristics with 95% confidence intervals (CIs) were generated, comparing administrative data with the gold standard. Among the cohort (n = 562), the majority were younger than 5 years, and the most common primary diagnosis was bronchiolitis. Most (82%) required NIPPV, whereas 18% required only HHFNC. The NIPPV code had a sensitivity of 91.1% (95% CI: 88.2%-93.6%) and a specificity of 57.6% (95% CI: 47.2%-67.5%), with a positive likelihood ratio of 2.15 (95% CI: 1.70-2.71) and negative likelihood ratio of 0.15 (95% CI: 0.11-0.22). Among our critically ill pediatric cohort, NIPPV procedure codes had high sensitivity but only moderate specificity. On the basis of our study results, there is a risk of misclassification, specifically failure to identify children who require NIPPV, when using administrative data to study the use of NIPPV in this population. Copyright © 2018 by the American Academy of Pediatrics.
Automatic camera to laser calibration for high accuracy mobile mapping systems using INS
Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta
2013-09-01
A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here uses a portable, standard ranging pole, which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
Analysis of high accuracy, quantitative proteomics data in the MaxQB database.
Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias
2012-03-01
MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although, for example, the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins have negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database
Determination of the QCD Λ-parameter and the accuracy of perturbation theory at high energies
International Nuclear Information System (INIS)
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer; Humboldt-Universitaet, Berlin
2016-04-01
We discuss the determination of the strong coupling α_MS(m_Z) or equivalently the QCD Λ-parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme, s, and at some energy scale μ. The higher the scale μ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to α_s=0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a three percent error in the Λ-parameter, while data around α_s∼0.2 is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
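The logic of extracting Λ at high scales can be made concrete at one loop, where Λ = μ·exp(−1/(2b₀α_s(μ))) is exactly independent of the scale μ at which the coupling is known; in the full theory, truncation effects shrink as α_s does, which is why reaching α_s ≈ 0.1 matters. A one-loop sketch with illustrative numbers (the scales and coupling values below are not the paper's):

```python
import math

NF = 3                                  # three-flavour QCD, as in the paper
B0 = (33 - 2 * NF) / (12 * math.pi)     # one-loop beta-function coefficient

def run_alpha(alpha, mu_from, mu_to):
    """One-loop running: 1/alpha(mu2) = 1/alpha(mu1) + 2*b0*ln(mu2/mu1)."""
    return 1.0 / (1.0 / alpha + 2.0 * B0 * math.log(mu_to / mu_from))

def lambda_param(alpha, mu):
    """One-loop Lambda-parameter extracted from the coupling at scale mu."""
    return mu * math.exp(-1.0 / (2.0 * B0 * alpha))

alpha_low = 0.2                  # illustrative coupling at mu = 10 (GeV)
mu_low, mu_high = 10.0, 1000.0
alpha_high = run_alpha(alpha_low, mu_low, mu_high)

# asymptotic freedom: the coupling shrinks at the higher scale,
# while the extracted Lambda is scale-independent (exactly so at one loop)
print(round(alpha_high, 4),
      round(lambda_param(alpha_low, mu_low), 4),
      round(lambda_param(alpha_high, mu_high), 4))
```

Beyond one loop the extracted Λ acquires a residual scale dependence from the truncated series, which dies away at high μ; that residual drift is what the lattice step-scaling computation controls.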
High-accuracy X-ray detector calibration based on cryogenic radiometry
Krumrey, M.; Cibik, L.; Müller, P.
2010-06-01
Cryogenic electrical substitution radiometers (ESRs) are absolute thermal detectors, based on the equivalence of electrical power and radiant power. Their core piece is a cavity absorber, which is typically made of copper to achieve a short response time. At higher photon energies, the use of copper prevents the operation of ESRs due to increasing transmittance. A new absorber design for hard X-rays has been developed at the laboratory of the Physikalisch-Technische Bundesanstalt (PTB) at the electron storage ring BESSY II. The Monte Carlo simulation code Geant4 was applied to optimize its absorptance for photon energies of up to 60 keV. The measurement of the radiant power of monochromatized synchrotron radiation was achieved with relative standard uncertainties of less than 0.2 %, covering the entire photon energy range of three beamlines from 50 eV to 60 keV. Monochromatized synchrotron radiation of high spectral purity is used to calibrate silicon photodiodes against the ESR for photon energies up to 60 keV with relative standard uncertainties below 0.3 %. For some silicon photodiodes, the photocurrent is not linear with the incident radiant power.
High-accuracy local positioning network for the alignment of the Mu2e experiment.
Energy Technology Data Exchange (ETDEWEB)
Hejdukova, Jana B. [Czech Technical Univ., Prague (Czech Republic)
2017-06-01
This Diploma thesis describes the establishment of a high-precision local positioning network and accelerator alignment for the Mu2e physics experiment. The process of establishing the new network consists of a few steps: design of the network, pre-analysis, installation works, measurement of the network and adjustment. Adjustments were performed using two approaches: a geodetic approach that takes the Earth's curvature into account, and a metrological approach using a pure 3D Cartesian system. The two approaches are compared and evaluated in the results, and checked against the expected differences. The effect of the Earth's curvature was found to be significant for this kind of network and should not be neglected. The measurements were obtained with an Absolute Tracker AT401, a Leica DNA03 leveling instrument and a DMT Gyromat 2000 gyrotheodolite. The coordinates of the points of the reference network were determined by the Least Squares Method, and the overall view is attached as Annexes.
International Nuclear Information System (INIS)
Afzal, F.; Raza, S.; Shafique, M.
2017-01-01
Objective: To determine the diagnostic accuracy of chest x-ray in interstitial lung disease as confirmed by high resolution computed tomography (HRCT) chest. Study Design: A cross-sectional validation study. Place and Duration of Study: Department of Diagnostic Radiology, Combined Military Hospital Rawalpindi, from Oct 2013 to Apr 2014. Material and Method: A total of 137 patients with clinical suspicion of interstitial lung disease (ILD) aged 20-50 years of both genders were included in the study. Patients with a history of previous histopathological diagnosis, those already taking treatment, and pregnant females were excluded. All patients underwent chest x-ray followed by HRCT. The x-ray and HRCT findings were recorded as presence or absence of ILD. Results: Mean age was 40.21 ± 4.29 years. Out of 137 patients, 79 (57.66 percent) were males and 58 (42.34 percent) were females with male to female ratio of 1.36:1. Chest x-ray detected ILD in 80 (58.39 percent) patients, out of which, 72 (true positive) had ILD and 8 (false positive) had no ILD on HRCT. Overall sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of chest x-ray in diagnosing ILD was 80.0 percent, 82.98 percent, 90.0 percent, 68.42 percent and 81.02 percent respectively. Conclusion: This study concluded that chest x-ray is a simple, non-invasive, economical and readily available alternative to HRCT with an acceptable diagnostic accuracy of 81 percent in the diagnosis of ILD. (author)
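The quoted test characteristics can be reproduced from the underlying 2x2 table. A short sketch (the FN = 18 and TN = 39 counts are inferred here from the reported percentages; the abstract states only the positives explicitly):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test characteristics from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# 72 true positives and 8 false positives among 137 patients, per the
# abstract; FN = 18 and TN = 39 reproduce every reported percentage.
m = diagnostic_metrics(tp=72, fp=8, fn=18, tn=39)
print({k: round(100 * v, 2) for k, v in m.items()})
# → {'sensitivity': 80.0, 'specificity': 82.98, 'ppv': 90.0, 'npv': 68.42, 'accuracy': 81.02}
```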
Uskul, Ayse K; Paulmann, Silke; Weick, Mario
2016-02-01
Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently. (c) 2016 APA, all rights reserved.
Directory of Open Access Journals (Sweden)
Warwick R Adams
Parkinson's Disease (PD) is a progressive neurodegenerative movement disease affecting over 6 million people worldwide. Loss of dopamine-producing neurons results in a range of both motor and non-motor symptoms; however, there is currently no definitive test for PD by non-specialist clinicians, especially in the early disease stages where the symptoms may be subtle and poorly characterised. This results in a high misdiagnosis rate (up to 25%) by non-specialists, and people can have the disease for many years before diagnosis. There is a need for a more accurate, objective means of early detection, ideally one which can be used by individuals in their home setting. In this investigation, keystroke timing information from 103 subjects (comprising 32 with mild PD severity and the remainder non-PD controls) was captured as they typed on a computer keyboard over an extended period; analysis of this information showed that PD affects various characteristics of hand and finger movement and that these can be detected. A novel methodology was used to classify the subjects' disease status, by utilising a combination of many keystroke features which were analysed by an ensemble of machine learning classification models. When applied to two separate participant groups, this approach was able to successfully discriminate between early-PD subjects and controls with 96% sensitivity, 97% specificity and an AUC of 0.98. The technique does not require any specialised equipment or medical supervision, and does not rely on the experience and skill of the practitioner. Regarding more general application, it currently does not incorporate a second cardinal disease symptom, so may not differentiate PD from similar movement-related disorders.
Asymptotic Poisson distribution for the number of system failures of a monotone system
International Nuclear Information System (INIS)
Aven, Terje; Haukis, Harald
1997-01-01
It is well known that for highly available monotone systems, the time to the first system failure is approximately exponentially distributed. Various normalising factors can be used as the parameter of the exponential distribution to ensure the asymptotic exponentiality. More generally, it can be shown that the number of system failures is asymptotically Poisson distributed. In this paper we study the performance of some of the normalising factors by using Monte Carlo simulation. The results show that the exponential/Poisson distribution in general gives very good approximations for highly available components. The asymptotic failure rate of the system gives the best results when the process is in steady state, whereas other normalising factors seem preferable when the process is not in steady state. From a computational point of view, the asymptotic system failure rate is the most attractive.
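The flavour of such a Monte Carlo check can be sketched for the simplest possible case: a single component with exponential up-times and short constant repairs, where the number of failures over a horizon should be approximately Poisson with the asymptotic failure rate 1/(MTTF + MTTR) as parameter. This is a toy illustration under those assumptions, not the paper's multi-component study:

```python
import random

def failures_in_horizon(mttf, mttr, horizon, rng):
    """One alternating-renewal history: exponential up-times with mean
    MTTF, fixed repair times; counts failures occurring before `horizon`."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0 / mttf)    # time to next failure
        if t > horizon:
            return n
        n += 1
        t += mttr                           # component down for repair

rng = random.Random(42)
mttf, mttr, horizon = 1000.0, 1.0, 20000.0
counts = [failures_in_horizon(mttf, mttr, horizon, rng) for _ in range(4000)]
mean = sum(counts) / len(counts)

# asymptotic failure rate 1/(MTTF+MTTR) as the Poisson parameter
lam = horizon / (mttf + mttr)
print(round(mean, 2), round(lam, 2))   # the two should be close
```

With MTTR much smaller than MTTF the empirical count distribution is close to Poisson(λ); the interesting cases in the paper are transient regimes, where the choice of normalising factor matters.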
Surfactants non-monotonically modify the onset of Faraday waves
Strickland, Stephen; Shearer, Michael; Daniels, Karen
2017-11-01
When a water-filled container is vertically vibrated, subharmonic Faraday waves emerge once the driving from the vibrations exceeds viscous dissipation. In the presence of an insoluble surfactant, a viscous boundary layer forms at the contaminated surface to balance the Marangoni and Boussinesq stresses. For linear gravity-capillary waves in an undriven fluid, the surfactant-induced boundary layer increases the amount of viscous dissipation. In our analysis and experiments, we consider whether similar effects occur for nonlinear Faraday (gravity-capillary) waves. Assuming a finite-depth, infinite-breadth, low-viscosity fluid, we derive an analytic expression for the onset acceleration up to second order in ε = √(1/Re). This expression allows us to include fluid depth and driving frequency as parameters, in addition to the Marangoni and Boussinesq numbers. For millimetric fluid depths and driving frequencies of 30 to 120 Hz, our analysis recovers prior numerical results and agrees with our measurements of NBD-PC surfactant on DI water. In both cases, the onset acceleration increases non-monotonically as a function of the Marangoni and Boussinesq numbers. For shallower systems, our model predicts that surfactants could decrease the onset acceleration. Supported by grant DMS-0968258.
Dynamical zeta functions for piecewise monotone maps of the interval
Ruelle, David
2004-01-01
Consider a space M, a map f:M\\to M, and a function g:M \\to {\\mathbb C}. The formal power series \\zeta (z) = \\exp \\sum ^\\infty _{m=1} \\frac {z^m}{m} \\sum _{x \\in \\mathrm {Fix}\\,f^m} \\prod ^{m-1}_{k=0} g (f^kx) yields an example of a dynamical zeta function. Such functions have unexpected analytic properties and interesting relations to the theory of dynamical systems, statistical mechanics, and the spectral theory of certain operators (transfer operators). The first part of this monograph presents a general introduction to this subject. The second part is a detailed study of the zeta functions associated with piecewise monotone maps of the interval [0,1]. In particular, Ruelle gives a proof of a generalized form of the Baladi-Keller theorem relating the poles of \\zeta (z) and the eigenvalues of the transfer operator. He also proves a theorem expressing the largest eigenvalue of the transfer operator in terms of the ergodic properties of (M,f,g).
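For a concrete instance of the formal series, take the doubling map f(x) = 2x mod 1 with weight g ≡ 1: f^m has 2^m − 1 fixed points, the series sums to the rational function (1 − z)/(1 − 2z), and its pole at z = 1/2 is the reciprocal of the transfer operator's leading eigenvalue 2. A numerical sketch of this standard example (chosen for illustration; it is not taken from the monograph):

```python
import math

def zeta_truncated(z, terms=60):
    """Truncated dynamical zeta series for the doubling map f(x) = 2x mod 1
    with weight g = 1; f^m has 2^m - 1 fixed points on [0, 1)."""
    return math.exp(sum(z**m / m * (2**m - 1) for m in range(1, terms + 1)))

z = 0.2                              # inside the disc of convergence |z| < 1/2
series = zeta_truncated(z)
closed = (1 - z) / (1 - 2 * z)       # exp(log(1-z) - log(1-2z))
print(round(series, 6), round(closed, 6))  # → 1.333333 1.333333
```

The agreement illustrates the pole-eigenvalue correspondence in miniature: the zero of 1 − 2z marks the leading transfer-operator eigenvalue, which is the kind of relation the Baladi-Keller theorem generalizes.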
The resource theory of quantum reference frames: manipulations and monotones
International Nuclear Information System (INIS)
Gour, Gilad; Spekkens, Robert W
2008-01-01
Every restriction on quantum operations defines a resource theory, determining how quantum states that cannot be prepared under the restriction may be manipulated and used to circumvent the restriction. A superselection rule (SSR) is a restriction that arises through the lack of a classical reference frame and the states that circumvent it (the resource) are quantum reference frames. We consider the resource theories that arise from three types of SSRs, associated respectively with lacking: (i) a phase reference, (ii) a frame for chirality, and (iii) a frame for spatial orientation. Focusing on pure unipartite quantum states (and in some cases restricting our attention even further to subsets of these), we explore single-copy and asymptotic manipulations. In particular, we identify the necessary and sufficient conditions for a deterministic transformation between two resource states to be possible and, when these conditions are not met, the maximum probability with which the transformation can be achieved. We also determine when a particular transformation can be achieved reversibly in the limit of arbitrarily many copies and find the maximum rate of conversion. A comparison of the three resource theories demonstrates that the extent to which resources can be interconverted decreases as the strength of the restriction increases. Along the way, we introduce several measures of frameness and prove that these are monotonically non-increasing under various classes of operations that are permitted by the SSR.
The Marotto Theorem on planar monotone or competitive maps
International Nuclear Information System (INIS)
Yu Huang
2004-01-01
In 1978, Marotto generalized Li-Yorke's criterion for chaos from one-dimensional to n-dimensional discrete dynamical systems, showing that the existence of a non-degenerate snap-back repeller implies chaos in the sense of Li-Yorke. This theorem is very useful in predicting and analyzing discrete chaos in multi-dimensional dynamical systems. It is well known, however, that the conditions of the original Marotto theorem contain an error, and several authors have tried to correct it in different ways. Chen, Hsu and Zhou pointed out that verifying the 'non-degeneracy' of a snap-back repeller is in general the most difficult step, and conjectured, 'almost beyond reasonable doubt', that the existence of a merely degenerate snap-back repeller still implies chaos. In this paper, we give necessary and sufficient conditions for chaos in the sense of Li-Yorke for planar monotone or competitive discrete dynamical systems and resolve the Chen-Hsu-Zhou conjecture for such systems.
The Monotonic Lagrangian Grid for Fast Air-Traffic Evaluation
Alexandrov, Natalia; Kaplan, Carolyn; Oran, Elaine; Boris, Jay
2010-01-01
This paper describes the continued development of a dynamic air-traffic model, ATMLG, intended for rapid evaluation of rules and methods to control and optimize transport systems. The underlying data structure is based on the Monotonic Lagrangian Grid (MLG), which is used for sorting and ordering positions and other data needed to describe N moving bodies, and their interactions. In ATMLG, the MLG is combined with algorithms for collision avoidance and updating aircraft trajectories. Aircraft that are close to each other in physical space are always near neighbors in the MLG data arrays, resulting in a fast nearest-neighbor interaction algorithm that scales as N. In this paper, we use ATMLG to examine how the ability to maintain a required separation between aircraft decreases as the number of aircraft in the volume increases. This requires keeping track of the primary and subsequent collision avoidance maneuvers necessary to maintain a five mile separation distance between all aircraft. Simulation results show that the number of collision avoidance moves increases exponentially with the number of aircraft in the volume.
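One simple static way to realize the MLG ordering for N = nx·ny bodies is to sort by y, slice the sorted list into rows, and sort each row by x; nearby aircraft then occupy nearby array entries. This construction is an illustrative assumption on my part; ATMLG maintains the grid dynamically as aircraft move, which is what makes its neighbor queries scale as N:

```python
import random

def build_mlg(points, nx, ny):
    """Arrange nx*ny points (x, y) into a Monotonic Lagrangian Grid:
    sort globally by y, cut into ny rows, then sort each row by x."""
    pts = sorted(points, key=lambda p: p[1])
    return [sorted(pts[i * nx:(i + 1) * nx]) for i in range(ny)]

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(16)]
grid = build_mlg(pts, nx=4, ny=4)

# MLG ordering: x non-decreasing along rows, y non-decreasing down columns
ok_x = all(grid[i][j][0] <= grid[i][j + 1][0] for i in range(4) for j in range(3))
ok_y = all(grid[i][j][1] <= grid[i + 1][j][1] for i in range(3) for j in range(4))
print(ok_x, ok_y)  # → True True
```

Because every y in row i is at most every y in row i+1, the column constraint holds automatically; physical neighbors are then found by scanning a few adjacent grid cells rather than all N bodies.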
Koopman, Richelle J; Kochendorfer, Karl M; Moore, Joi L; Mehr, David R; Wakefield, Douglas S; Yadamsuren, Borchuluun; Coberly, Jared S; Kruse, Robin L; Wakefield, Bonnie J; Belden, Jeffery L
2011-01-01
We compared use of a new diabetes dashboard screen with use of a conventional approach of viewing multiple electronic health record (EHR) screens to find data needed for ambulatory diabetes care. We performed a usability study, including a quantitative time study and qualitative analysis of information-seeking behaviors. While being recorded with Morae Recorder software and "think-aloud" interview methods, 10 primary care physicians first searched their EHR for 10 diabetes data elements using a conventional approach for a simulated patient, and then using a new diabetes dashboard for another. We measured time, number of mouse clicks, and accuracy. Two coders analyzed think-aloud and interview data using grounded theory methodology. The mean time needed to find all data elements was 5.5 minutes using the conventional approach vs 1.3 minutes using the diabetes dashboard; physicians also needed fewer mouse clicks and made fewer errors with the dashboard. The dashboard improves both the efficiency and accuracy of acquiring data needed for high-quality diabetes care. Usability analysis tools can provide important insights into the value of optimizing physician use of health information technologies.
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
The inertial navigation system is a core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy, but the errors caused by misalignment angles and scale factor error cannot be eliminated through dual-axis rotation modulation. Moreover, discrete calibration methods cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulated period and presents a new systematic self-calibration method for dual-axis rotation-modulating RLG-INS, together with a procedure for its application. The results of self-calibration simulation experiments prove that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for accuracy improvement of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.
Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.
2014-12-01
This work presents a method for the evaluation of the location accuracy of the Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a multiple high-speed camera network (RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network can observe the same lightning flash from different angles, and all recorded videos were GPS (Global Position System) time stamped, allowing comparisons of events between cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras, their position was visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the GPS-surveyed position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
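The visual triangulation step reduces to intersecting two bearing lines in the horizontal plane. A minimal sketch, assuming flat-plane geometry, hypothetical camera positions and a known test point (the paper's procedure works from calibrated, time-stamped video frames):

```python
import math

def triangulate(c1, az1, c2, az2):
    """Intersect two bearing lines. Azimuths are in radians, measured
    clockwise from the +y ('north') axis at camera positions c1 and c2."""
    d1 = (math.sin(az1), math.cos(az1))
    d2 = (math.sin(az2), math.cos(az2))
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    t = (d2[0] * ry - d2[1] * rx) / det    # distance along the ray from c1
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])

# two cameras ~13 km apart (coordinates in km), flash at a known point
flash = (5.0, 8.0)
cam_a, cam_b = (0.0, 0.0), (13.0, 0.0)
az_a = math.atan2(flash[0] - cam_a[0], flash[1] - cam_a[1])
az_b = math.atan2(flash[0] - cam_b[0], flash[1] - cam_b[1])
est = triangulate(cam_a, az_a, cam_b, az_b)
print(round(est[0], 3), round(est[1], 3))  # → 5.0 8.0
```

With a wider angle between the two bearings the intersection is better conditioned, which is one reason camera placement around the valley matters for the achievable location accuracy.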
International Nuclear Information System (INIS)
Rabb, Savelas A.; Olesik, John W.
2008-01-01
The ability to obtain high-precision, high-accuracy measurements in samples with complex matrices using High Performance Inductively Coupled Plasma-Optical Emission Spectroscopy (HP-ICP-OES) was investigated. The Common Analyte Internal Standard (CAIS) procedure was incorporated into the HP-ICP-OES method to correct for matrix-induced changes in emission intensity ratios. Matrix matching and standard addition approaches to minimize matrix-induced errors when using HP-ICP-OES were also assessed. The HP-ICP-OES method was tested with synthetic solutions in a variety of matrices, alloy standard reference materials and geological reference materials.
Görgens, Christian; Guddat, Sven; Thomas, Andreas; Wachsmuth, Philipp; Orlovius, Anne-Katrin; Sigmund, Gerd; Thevis, Mario; Schänzer, Wilhelm
2016-11-30
To date, compounds of different classes are processed and measured in sports drug testing using different screening procedures. The constantly increasing number of samples in doping analysis, as well as the large number of substances with doping-related pharmacological effects, require the development of even more powerful assays than those already employed in sports drug testing, ideally with reduced sample preparation. The analysis of native urine samples after direct injection provides a promising analytical approach with broad applicability to many different compounds and their metabolites, without time-consuming sample preparation. In this study, a novel multi-target approach based on liquid chromatography and high resolution/high accuracy mass spectrometry is presented to screen for more than 200 analytes of various classes of doping agents, far below the required detection limits in sports drug testing. Classic groups of drugs such as diuretics, stimulants, β2-agonists, narcotics and anabolic androgenic steroids, as well as various newer target compounds like hypoxia-inducible factor (HIF) stabilizers, selective androgen receptor modulators (SARMs), selective estrogen receptor modulators (SERMs), plasma volume expanders and other doping-related compounds listed in the 2016 WADA prohibited list, were implemented. As a main achievement, growth hormone releasing peptides, which chemically belong to the group of small peptides, could also be implemented. The method was validated with respect to linearity (0.99), limit of detection (0.1-25 ng/mL; 3'OH-stanozolol glucuronide: 50 pg/mL; dextran/HES: 10 μg/mL) and matrix effects. Copyright © 2016 Elsevier B.V. All rights reserved.
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information.
Tijs, S.H.; Moretti, S.; Brânzei, R.; Norde, H.W.
2005-01-01
A new way is presented to define, for minimum cost spanning tree (mcst-) games, the irreducible core, which was introduced by Bird in 1976. The Bird core correspondence turns out to have interesting monotonicity and additivity properties, and each stable cost monotonic allocation rule for mcst-problems
An analysis of the stability and monotonicity of a kind of control models
Directory of Open Access Journals (Sweden)
LU Yifa
2013-06-01
The stability and monotonicity of control systems with parameters are considered. Using the iterative relationship of the coefficients of the characteristic polynomials and the Mathematica software, some sufficient conditions for the monotonicity and stability of such systems are given.
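Sufficient stability conditions derived from characteristic-polynomial coefficients are typically of Routh-Hurwitz type. A minimal sketch of the standard Routh-array test follows; this is illustrative Python, not the authors' Mathematica derivation, and degenerate zero-pivot cases are simply reported as not stable:

```python
def routh_hurwitz_stable(coeffs):
    """Routh-Hurwitz test: True iff every root of the polynomial with
    the given coefficients (highest degree first) has negative real
    part.  Degenerate cases (a zero pivot in the first column) are
    conservatively reported as not stable."""
    a = [float(c) for c in coeffs]
    if a[0] < 0:                        # normalise the leading sign
        a = [-c for c in a]
    rows = [a[0::2], a[1::2]]
    while len(rows[1]) < len(rows[0]):  # pad second row to equal length
        rows[1].append(0.0)
    for _ in range(len(a) - 2):
        prev, cur = rows[-2], rows[-1]
        if cur[0] == 0.0:
            return False                # zero pivot: degenerate case
        new = [(cur[0] * prev[i + 1] - prev[0] * cur[i + 1]) / cur[0]
               for i in range(len(cur) - 1)]
        new.append(0.0)
        rows.append(new)
    # Stable iff there is no sign change in the first column.
    return all(r[0] > 0 for r in rows)

print(routh_hurwitz_stable([1, 2, 3, 1]))   # s^3 + 2s^2 + 3s + 1
print(routh_hurwitz_stable([1, 1, 1, 3]))   # s^3 + s^2 + s + 3
```

The first polynomial passes the test (all first-column entries positive); the second produces a sign change in the Routh column and is rejected.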
A simple algorithm for computing positively weighted straight skeletons of monotone polygons☆
Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter
2015-01-01
We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(nlogn) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376
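The algorithm above presupposes that its input is a monotone polygon. A quick way to check monotonicity with respect to the x-axis is to count cyclic local minima of the vertex x-coordinates: an x-monotone boundary has exactly one. A small sketch under the simplifying assumption that no two adjacent vertices share an x-coordinate; this is a hypothetical helper, not part of the paper's algorithm:

```python
def is_x_monotone(poly):
    """Check whether a simple polygon (list of (x, y) vertices in
    cyclic order) is monotone with respect to the x-axis: its
    boundary must decompose into one left-to-right and one
    right-to-left chain, i.e. exactly one cyclic local minimum in x.
    Assumes no two adjacent vertices share an x-coordinate."""
    n = len(poly)
    local_minima = 0
    for i in range(n):
        prev_x = poly[(i - 1) % n][0]
        next_x = poly[(i + 1) % n][0]
        if poly[i][0] < prev_x and poly[i][0] < next_x:
            local_minima += 1
    return local_minima == 1

# A convex quadrilateral is monotone in every direction:
print(is_x_monotone([(0, 0), (2, -1), (4, 0), (2, 1)]))              # True
# A polygon with a notch on its right side is not x-monotone:
print(is_x_monotone([(0, 0), (4, 0), (3, 2), (4, 4), (0.5, 4)]))     # False
```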
Li, Yongkai; Yi, Ming; Zou, Xiufen
2014-01-01
To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292
International Nuclear Information System (INIS)
Yao, W.E.; Hershkowitz, N.; Intrator, T.
1985-01-01
The floating potential of the emissive probe has been used to directly measure the plasma potential. The authors have recently presented another method for directly indicating the plasma potential with a differential emissive probe. In this paper they describe the effects of probe size, plasma density and plasma potential fluctuation on plasma potential measurements and give methods for reducing errors. A control system with fast time response (≅ 20 μs) and high accuracy (of the order of the probe temperature T_w/e) for maintaining a differential emissive probe at plasma potential has been developed. It can be operated in pulsed discharge plasma to measure plasma potential dynamic characteristics. A solid state optical coupler is employed to improve circuit performance. This system was tested experimentally by measuring the plasma potential in an argon plasma device and on the Phaedrus tandem mirror.
A high-accuracy extraction of the isoscalar πN scattering length from pionic deuterium data
International Nuclear Information System (INIS)
Phillips, Daniel R.; Baru, Vadim; Hanhart, Christoph; Nogga, Andreas; Hoferichter, Martin; Kubis, Bastian
2010-01-01
We present a high-accuracy calculation of the π⁻d scattering length using chiral perturbation theory up to order (M_π/m_p)^(7/2). For the first time isospin-violating corrections are included consistently. The resulting value of a_π⁻d has a theoretical uncertainty of a few percent. We use it, together with data on pionic deuterium and pionic hydrogen atoms, to extract the isoscalar and isovector pion-nucleon scattering lengths from a combined analysis, and obtain a⁺ = (7.9 ± 3.2)·10⁻³ M_π⁻¹ and a⁻ = (86.3 ± 1.0)·10⁻³ M_π⁻¹.
International Nuclear Information System (INIS)
Paulsen, P.J.; Beary, E.S.
1996-01-01
At NIST (National Institute of Standards and Technology), ICP-MS ID (inductively coupled plasma mass spectrometry with isotope dilution) has been used to certify a wide range of elements in a variety of materials with high accuracy. Both the chemical preparation and the instrumental procedures are simpler than with other ID mass spectrometric techniques. The ICP-MS has picogram/mL detection limits for most elements using fixed operating parameters. Chemical separations are required only to remove an interference (from molecular ions as well as isobaric atoms) or to pre-concentrate the analyte. For example, chemical separations were required for the analysis of SRM 2711, Montana II Soil, but not for boron in peach leaves, SRM 1547. (3 refs., 3 tabs., 2 figs.)
Katushkina, O. A.; Alexashov, D. B.; Izmodenov, V. V.; Gvaramadze, V. V.
2017-02-01
High-resolution mid-infrared observations of astrospheres show that many of them have filamentary (cirrus-like) structure. Using numerical models of dust dynamics in astrospheres, we suggest that their filamentary structure might be related to specific spatial distribution of the interstellar dust around the stars, caused by a gyrorotation of charged dust grains in the interstellar magnetic field. Our numerical model describes the dust dynamics in astrospheres under an influence of the Lorentz force and assumption of a constant dust charge. Calculations are performed for the dust grains with different sizes separately. It is shown that non-monotonic spatial dust distribution (viewed as filaments) appears for dust grains with the period of gyromotion comparable with the characteristic time-scale of the dust motion in the astrosphere. Numerical modelling demonstrates that the number of filaments depends on charge-to-mass ratio of dust.
Generalized Yosida Approximations Based on Relatively A-Maximal m-Relaxed Monotonicity Frameworks
Directory of Open Access Journals (Sweden)
Heng-you Lan
2013-01-01
We introduce and study a new notion of relatively A-maximal m-relaxed monotonicity framework and discuss some properties of a new class of generalized relatively resolvent operators associated with relatively A-maximal m-relaxed monotone operators, as well as the new generalized Yosida approximations based on this framework. Furthermore, we give some remarks to show that the theory of the new generalized relatively resolvent operators and Yosida approximations associated with relatively A-maximal m-relaxed monotone operators generalizes most of the existing notions of (relatively) maximal monotone mappings in Hilbert as well as Banach spaces and can be applied to study variational inclusion problems and first-order evolution equations as well as evolution inclusions.
International Nuclear Information System (INIS)
Nagel, T.; Shao, H.; Roßkopf, C.; Linder, M.; Wörner, A.; Kolditz, O.
2014-01-01
Highlights: • Detailed analysis of cyclic and monotonic loading of thermochemical heat stores. • Fully coupled reactive heat and mass transport. • Reaction kinetics can be simplified in systems limited by heat transport. • Operating lines valid during monotonic and cyclic loading. • Local integral degree of conversion to capture heterogeneous material usage. - Abstract: Thermochemical reactions can be employed in heat storage devices. The choice of suitable reactive material pairs involves a thorough kinetic characterisation by, e.g., extensive thermogravimetric measurements. Before testing a material on a reactor level, simulations with models based on the Theory of Porous Media can be used to establish its suitability. The extent to which the accuracy of the kinetic model influences the results of such simulations is unknown, yet fundamental to the validity of simulations based on chemical models of differing complexity. In this article we therefore compared simulation results on the reactor level based on an advanced kinetic characterisation of a calcium oxide/hydroxide system to those obtained with a simplified kinetic model. Since energy storage is often used for short-term load buffering, the internal reactor behaviour is analysed under cyclic partial loading and unloading in addition to full monotonic charge/discharge operation. It was found that the predictions of both models were very similar qualitatively and quantitatively in terms of thermal power characteristics, conversion profiles, temperature output, reaction duration and pumping powers. Major differences were, however, observed for the reaction rate profiles themselves. We conclude that for systems not limited by kinetics the simplified model seems sufficient to estimate the reactor behaviour. The degree of material usage within the reactor was further shown to vary strongly under cyclic loading conditions and should be considered when designing systems for certain operating regimes.
Local Monotonicity and Isoperimetric Inequality on Hypersurfaces in Carnot groups
Directory of Open Access Journals (Sweden)
Francesco Paolo Montefalcone
2010-12-01
Let G be a k-step Carnot group of homogeneous dimension Q. We shall present some of the results recently obtained in [32] and, in particular, an intrinsic isoperimetric inequality for a C²-smooth compact hypersurface S with boundary ∂S. We stress that S and ∂S are endowed with the homogeneous measures σ_H^(n-1) and σ_H^(n-2), respectively, which are actually equivalent to the intrinsic (Q-1)-dimensional and (Q-2)-dimensional Hausdorff measures with respect to a given homogeneous metric ϱ on G. This result generalizes a classical inequality, involving the mean curvature of the hypersurface, proven independently by Michael and Simon [29] and Allard [1]. One may also deduce some related Sobolev-type inequalities. The strategy of the proof is inspired by the classical one and is discussed in the first section. After recalling some preliminary notions about Carnot groups, we begin by proving a linear isoperimetric inequality. The second step is a local monotonicity formula. We then complete the proof by a covering argument. We stress, however, that there are many differences due to our non-Euclidean setting. Some of the tools developed ad hoc are, in order, a "blow-up" theorem, which holds true also for characteristic points, and a smooth coarea formula for the HS-gradient. Other tools are the horizontal integration-by-parts formula and the first variation formula for the H-perimeter σ_H^(n-1), already developed in [30, 31] and then generalized to hypersurfaces having non-empty characteristic set in [32]. These results can be useful in the study of minimal and constant horizontal mean curvature hypersurfaces in Carnot groups.
Abou Chakra, Charbel; Somma, Janine; Elali, Taha; Drapeau, Laurent
2017-04-01
Climate change and its negative impact on water resources are well described. For countries like Lebanon, undergoing a major population rise and already facing decreasing precipitation, effective water resources management is crucial. Continuous and systematic monitoring over long periods of time is therefore an important activity for investigating drought risk scenarios for the Lebanese territory. Snow cover on the Lebanese mountains is the most important water resources reserve. Consequently, systematic observation of snow cover dynamics plays a major role in supporting hydrologic research with accurate data on snow cover volumes over the melting season. Over the last 20 years, few studies have been conducted on the Lebanese snow cover. They focused on estimating the snow cover surface using remote sensing and terrestrial measurements, without obtaining accurate maps for the sampled locations. Indeed, estimates of both snow cover area and volume are difficult because of the very high variability of snow accumulation and the topographic heterogeneity of the slopes of the Lebanese mountain chains. Therefore, measuring the snow cover relief in its three-dimensional aspect and computing its Digital Elevation Model is essential to estimate snow cover volume. Despite the need to cover the whole Lebanese territory, we favored an experimental terrestrial topographic site approach, owing to the cost of high-resolution satellite imagery, its limited accessibility and its acquisition restrictions. Modeling snow cover at the national scale is also most challenging. We therefore selected a representative witness sinkhole located at Ouyoun el Siman to undertake systematic and continuous observations based on a topographic approach using a total station. After four years of continuous observations, we established the relation between the snowmelt rate, the date of total melting and the discharges of neighboring springs. Consequently, we are able to forecast, early in the season, dates of total snowmelt and springs low
Xia, Wei; Li, Chuncheng; Hao, Hui; Wang, Yiping; Ni, Xiaoqi; Guo, Dongmei; Wang, Ming
2018-02-01
A novel position-sensitive Fabry-Perot interferometer was constructed with direct phase modulation by a built-in electro-optic modulator. Pure sinusoidal phase modulation of the light was produced, and the first harmonic of the interference signal was extracted to dynamically maintain the interferometer phase at the most sensitive point of the interferogram. Therefore, the minute vibration of the object was encoded in the variation of the interference signal and could be directly retrieved from the output voltage of a photodetector. The operating principle and the signal processing method for active feedback control of the interference phase are demonstrated in detail. The developed vibration sensor was calibrated with a high-precision piezo-electric transducer and tested with a nano-positioning stage at a vibration magnitude of 60 nm and a frequency of 300 Hz. The active phase-tracking method of the system provides high immunity against environmental disturbances. Experimental results show that the proposed interferometer can effectively reconstruct tiny vibration waveforms with subnanometer resolution, paving the way for high-accuracy vibration sensing, especially for micro-electro-mechanical systems/nano-electro-mechanical systems and ultrasonic devices.
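The first-harmonic extraction can be sketched as a software lock-in: correlate the detector signal with sine and cosine references at the modulation frequency. By the Jacobi-Anger expansion, the first harmonic of I = A + B·cos(φ + z·sin ωt) has amplitude 2B·J₁(z)·|sin φ|, so it varies with the interference phase φ and can serve as the feedback signal. This is a minimal sketch with illustrative parameters, not the instrument's actual electronics:

```python
import math

def first_harmonic(signal, f_mod, fs):
    """Amplitude of the component of `signal` at f_mod (lock-in style
    correlation with sine and cosine references, sample rate fs)."""
    n = len(signal)
    s = sum(signal[i] * math.sin(2 * math.pi * f_mod * i / fs) for i in range(n))
    c = sum(signal[i] * math.cos(2 * math.pi * f_mod * i / fs) for i in range(n))
    return 2.0 * math.hypot(s, c) / n

# Synthetic interferometer output I = 1 + B*cos(phi + z*sin(w*t)).
# The first harmonic vanishes at a fringe extremum (phi = 0) and is
# maximal at quadrature (phi = pi/2), the most sensitive point.
fs, f_mod, z, B = 100000.0, 1000.0, 1.0, 1.0
for phi in (0.0, math.pi / 2):
    sig = [1.0 + B * math.cos(phi + z * math.sin(2 * math.pi * f_mod * i / fs))
           for i in range(int(fs))]     # one second of samples
    print(f"phi = {phi:.2f}  ->  first harmonic = {first_harmonic(sig, f_mod, fs):.4f}")
```

At quadrature the recovered amplitude approaches 2·J₁(1) ≈ 0.88, while at the fringe extremum only even harmonics remain and the first harmonic is essentially zero.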
High-Accuracy Tidal Flat Digital Elevation Model Construction Using TanDEM-X Science Phase Data
Lee, Seung-Kuk; Ryu, Joo-Hyung
2017-01-01
This study explored the feasibility of using TanDEM-X (TDX) interferometric observations of tidal flats for digital elevation model (DEM) construction. Our goal was to generate high-precision DEMs in tidal flat areas, because accurate intertidal zone data are essential for monitoring coastal environments and erosion processes. To monitor dynamic coastal changes caused by waves, currents, and tides, very accurate DEMs with high spatial resolution are required. The bi- and monostatic modes of the TDX interferometer employed during the TDX science phase provided a great opportunity for highly accurate intertidal DEM construction using radar interferometry with no time lag (bistatic mode) or an approximately 10-s temporal baseline (monostatic mode) between the master and slave synthetic aperture radar image acquisitions. In this study, DEM construction in tidal flat areas was first optimized based on the TDX system parameters used in various TDX modes. We successfully generated intertidal zone DEMs with 57-m spatial resolutions and interferometric height accuracies better than 0.15 m for three representative tidal flats on the west coast of the Korean Peninsula. Finally, we validated these TDX DEMs against real-time kinematic GPS measurements acquired in two tidal flat areas; the correlation coefficient was 0.97 with a root mean square error of 0.20 m.
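The validation figures quoted (a correlation coefficient and an RMSE against RTK-GPS heights) follow from the standard definitions; a minimal sketch with hypothetical height pairs, not the study's data:

```python
import math

def validation_stats(dem, gps):
    """Pearson correlation coefficient and RMSE between DEM heights
    and reference GPS heights (equal-length sequences, metres)."""
    n = len(dem)
    mx, my = sum(dem) / n, sum(gps) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(dem, gps))
    sx = math.sqrt(sum((a - mx) ** 2 for a in dem))
    sy = math.sqrt(sum((b - my) ** 2 for b in gps))
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(dem, gps)) / n)
    return cov / (sx * sy), rmse

dem = [0.10, 0.55, 1.02, 1.48, 2.05]   # hypothetical TDX heights (m)
gps = [0.00, 0.50, 1.00, 1.50, 2.00]   # hypothetical RTK-GPS heights (m)
r, rmse = validation_stats(dem, gps)
print(f"r = {r:.3f}, RMSE = {rmse:.3f} m")
```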
Directory of Open Access Journals (Sweden)
Mark Lyons
2006-06-01
Despite the acknowledged importance of fatigue for performance in sport, ecologically sound studies investigating fatigue and its effects on sport-specific skills are surprisingly rare. The aim of this study was to investigate the effect of moderate- and high-intensity total body fatigue on passing accuracy in expert and novice basketball players. Ten novice basketball players (age: 23.30 ± 1.05 yrs) and ten expert basketball players (age: 22.50 ± 0.41 yrs) volunteered to participate in the study. Both groups performed the modified AAHPERD Basketball Passing Test under three different testing conditions: rest, moderate-intensity and high-intensity total body fatigue. Fatigue intensity was established using a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant (F2,36 = 5.252, p = 0.01) level-of-fatigue by level-of-skill interaction. On examination of the mean scores it is clear that following high-intensity total body fatigue there is a significant detriment in the passing performance of both novice and expert basketball players when compared to their resting scores. Fundamentally, however, the detrimental impact of fatigue on passing performance is not as steep in the expert players as in the novice players. The results suggest that expert or skilled players are better able to cope with both moderate- and high-intensity fatigue conditions and maintain a higher level of performance compared to novice players. The findings of this research therefore suggest the need for trainers and conditioning coaches in basketball to include moderate-, but particularly high-intensity exercise in their skills sessions. This specific training may enable players at all levels of the game to better cope with the demands of the game on court and maintain a higher standard of play.
High-accuracy determination of the neutron flux in the new experimental area nTOF-EAR2 at CERN
Energy Technology Data Exchange (ETDEWEB)
Sabate-Gilarte, M. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Barbagallo, M.; Colonna, N.; Damone, L.; Belloni, F.; Mastromarco, M.; Tagliente, G.; Variale, V. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari (Italy); Gunsing, F.; Berthoumieux, E.; Diakaki, M.; Papaevangelou, T.; Dupont, E. [Universite Paris-Saclay, CEA Irfu, Gif-sur-Yvette (France); Zugec, P.; Bosnar, D. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Vlachoudis, V.; Aberle, O.; Brugger, M.; Calviani, M.; Cardella, R.; Cerutti, F.; Chiaveri, E.; Ferrari, A.; Kadi, Y.; Losito, R.; Macina, D.; Montesano, S.; Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Chen, Y.H.; Audouin, L.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Stamatopoulos, A.; Kokkoris, M.; Tsinganis, A.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Lerendegui-Marco, J.; Cortes-Giraldo, M.A.; Guerrero, C.; Quesada, J.M. [Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Villacorta, A. [University of Salamanca, Salamanca (Spain); Cosentino, L.; Finocchiaro, P.; Piscopo, M. [INFN, Laboratori Nazionali del Sud, Catania (Italy); Musumarra, A. [INFN, Laboratori Nazionali del Sud, Catania (Italy); Universita di Catania, Dipartimento di Fisica, Catania (Italy); Andrzejewski, J.; Gawlik, A.; Marganiec, J.; Perkowski, J. [University of Lodz, Lodz (Poland); Becares, V.; Balibrea, J.; Cano-Ott, D.; Garcia, A.R.; Gonzalez, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Bacak, M.; Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Technische Universitaet Wien, Wien (Austria); Baccomi, R.; Milazzo, P.M. 
[Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Trieste (Italy); Barros, S.; Ferreira, P.; Goncalves, I.F.; Vaz, P. [Instituto Superior Tecnico, Lisbon (Portugal); Becvar, F.; Krticka, M.; Valenta, S. [Charles University, Prague (Czech Republic); Beinrucker, C.; Goebel, K.; Heftrich, T.; Reifarth, R.; Schmidt, S.; Weigand, M.; Wolf, C. [Goethe University Frankfurt, Frankfurt (Germany); Billowes, J.; Frost, R.J.W.; Ryan, J.A.; Smith, A.G.; Warren, S.; Wright, T. [University of Manchester, Manchester (United Kingdom); Caamano, M.; Deo, K.; Duran, I.; Fernandez-Dominguez, B.; Leal-Cidoncha, E.; Paradela, C.; Robles, M.S. [University of Santiago de Compostela, Santiago de Compostela (Spain); Calvino, F.; Casanovas, A.; Riego-Perez, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Castelluccio, D.M.; Lo Meo, S. [Agenzia Nazionale per le Nuove Tecnologie (ENEA), Bologna (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (Italy); Cortes, G.; Mengoni, A. [Agenzia Nazionale per le Nuove Tecnologie (ENEA), Bologna (Italy); Domingo-Pardo, C.; Tain, J.L. [Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Heinitz, S.; Kivel, N.; Maugeri, E.A.; Schumann, D. [Paul Scherrer Institut (PSI), Villingen (Switzerland); Furman, V.; Sedyshev, P. [Joint Institute for Nuclear Research (JINR), Dubna (Russian Federation); Gheorghe, I.; Glodariu, T.; Mirea, M.; Oprea, A. [Horia Hulubei National Institute of Physics and Nuclear Engineering, Magurele (Romania); Goverdovski, A.; Ketlerov, V.; Khryachkov, V. [Institute of Physics and Power Engineering (IPPE), Obninsk (Russian Federation); Griesmayer, E.; Jericha, E.; Kavrigin, P.; Leeb, H. [Technische Universitaet Wien, Wien (Austria); Harada, H.; Kimura, A. [Japan Atomic Energy Agency (JAEA), Tokai-mura (Japan); Hernandez-Prieto, A. 
[European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Heyse, J.; Schillebeeckx, P. [European Commission, Joint Research Centre, Geel (BE); Jenkins, D.G. [University of York, York (GB); Kaeppeler, F. [Karlsruhe Institute of Technology, Karlsruhe (DE); Katabuchi, T. [Tokyo Institute of Technology, Tokyo (JP); Lederer, C.; Lonsdale, S.J.; Woods, P.J. [University of Edinburgh, School of Physics and Astronomy, Edinburgh (GB); Licata, M.; Massimi, C.; Vannini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Universita di Bologna, Dipartimento di Fisica e Astronomia, Bologna (IT); Mastinu, P. [Istituto Nazionale di Fisica Nucleare, Sezione di Legnaro, Legnaro (IT); Matteucci, F. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Trieste (IT); Universita di Trieste, Dipartimento di Astronomia, Trieste (IT); Mingrone, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Nolte, R. [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (DE); Palomo-Pinto, F.R. [Universidad de Sevilla, Dept. Ingenieria Electronica, Escuela Tecnica Superior de Ingenieros, Sevilla (ES); Patronis, N. [University of Ioannina, Ioannina (GR); Pavlik, A. [University of Vienna, Faculty of Physics, Vienna (AT); Porras, J.I. [University of Granada, Granada (ES); Praena, J. [Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (ES); University of Granada, Granada (ES); Rajeev, K.; Rout, P.C.; Saxena, A.; Suryanarayana, S.V. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Rauscher, T. [University of Hertfordshire, Centre for Astrophysics Research, Hatfield (GB); University of Basel, Department of Physics, Basel (CH); Tarifeno-Saldivia, A. [Universitat Politecnica de Catalunya, Barcelona (ES); Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (ES); Ventura, A. 
[Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Wallner, A. [Australian National University, Canberra (AU)
2017-10-15
A new high flux experimental area has recently become operational at the nTOF facility at CERN. This new measuring station, nTOF-EAR2, is placed at the end of a vertical beam line at a distance of approximately 20 m from the spallation target. The characterization of the neutron beam, in terms of flux, spatial profile and resolution function, is of crucial importance for the feasibility study and data analysis of all measurements to be performed in the new area. In this paper, the measurement of the neutron flux, performed with different solid-state and gaseous detection systems, and using three neutron-converting reactions considered standard in different energy regions is reported. The results of the various measurements have been combined, yielding an evaluated neutron energy distribution in a wide energy range, from 2 meV to 100 MeV, with an accuracy ranging from 2%, at low energy, to 6% in the high-energy region. In addition, an absolute normalization of the nTOF-EAR2 neutron flux has been obtained by means of an activation measurement performed with {sup 197}Au foils in the beam. (orig.)
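Combining the independent flux measurements from the various detection systems into one evaluated value per energy bin is conventionally done with inverse-variance weighting; the sketch below illustrates only that standard recipe and is not the collaboration's actual evaluation procedure:

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted combination of independent
    measurements `values` with 1-sigma uncertainties `sigmas`;
    returns (combined value, combined uncertainty)."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, (1.0 / total) ** 0.5

# Two hypothetical flux points for the same energy bin:
print(weighted_mean([10.0, 12.0], [1.0, 2.0]))
```

The more precise measurement dominates the combination, and the combined uncertainty is always smaller than the smallest input uncertainty.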
Directory of Open Access Journals (Sweden)
Wiwik Budiawan
2016-02-01
Humans, as subjects, have limitations in work, which causes errors to occur. The human errors committed result in a decreased level of alertness of train drivers and assistant train drivers in carrying out their duties. The level of alertness is influenced by five factors: monotony, sleep quality, psychophysiological state, distraction and work fatigue. The five factors were measured with the monotony questionnaire, the Pittsburgh Sleep Quality Index (PSQI) questionnaire, the General Job Stress questionnaire and the FAS questionnaire, while the level of alertness was tested using Psychomotor Vigilance Test (PVT) software. The respondents chosen were train drivers and assistant train drivers, because this type of work demands a high level of alertness. The measurement results were then analyzed using multiple linear regression. This study found that monotony, sleep quality, psychophysiological state, distraction and work fatigue simultaneously influence the level of alertness. This is shown by the results before the duty shift: the F-test value for monotony, sleep quality and psychophysiological state was 0.876, while distraction and work fatigue (FAS) had a value of 2.371 with respect to the level of alertness; after work, distraction and work fatigue (FAS) had an F-value of 2.953, and monotony, sleep quality and psychophysiological state a value of 0.544. The factor with the greatest influence on the level of alertness before the duty shift was sleep quality, while after the duty shift it was work fatigue.
Directory of Open Access Journals (Sweden)
Mervan Pašić
2016-10-01
We study non-monotone positive solutions of second-order linear differential equations $(p(t)x')' + q(t)x = e(t)$, with positive $p(t)$ and $q(t)$. For the first time, some criteria for the existence and nonexistence of non-monotone positive solutions are proved in the framework of some properties of solutions $\theta(t)$ of the corresponding integrable linear equation $(p(t)\theta')' = e(t)$. The main results are illustrated by many examples dealing with equations which admit exact non-monotone positive solutions, not necessarily periodic. Finally, we pose some open questions.
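Solutions of the equation class studied here can also be explored numerically by rewriting (p(t)x')' + q(t)x = e(t) as a first-order system. A small RK4 sketch, checked against the exact solution x(t) = sin t - t·cos t of x'' + x = 2 sin t with zero initial data (an illustrative test case, not one of the paper's examples):

```python
import math

def solve_second_order(p, dp, q, e, x0, v0, t0, t1, n):
    """Classical RK4 for (p(t) x')' + q(t) x = e(t), rewritten as the
    first-order system  x' = v,  v' = (e - q*x - p'*v) / p,
    where dp is the derivative p'(t).  Returns x(t1)."""
    h = (t1 - t0) / n
    t, x, v = t0, x0, v0
    def f(t, x, v):
        return v, (e(t) - q(t) * x - dp(t) * v) / p(t)
    for _ in range(n):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = f(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return x

# Test problem: p = 1, q = 1, e(t) = 2 sin t, x(0) = x'(0) = 0.
# The exact solution is x(t) = sin t - t*cos t, so x(pi) = pi.
x_pi = solve_second_order(lambda t: 1.0, lambda t: 0.0, lambda t: 1.0,
                          lambda t: 2.0 * math.sin(t),
                          0.0, 0.0, 0.0, math.pi, 2000)
print(x_pi)
```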
Coles, Phillip; Yurchenko, Sergei N.; Polyansky, Oleg; Kyuberis, Aleksandra; Ovsyannikov, Roman I.; Zobov, Nikolay Fedorovich; Tennyson, Jonathan
2017-06-01
We present a new spectroscopic potential energy surface (PES) for ^{14}NH_3, produced by refining a high-accuracy ab initio PES to experimental energy levels taken predominantly from MARVEL. The PES reproduces 1722 matched J=0-8 experimental energies with a root-mean-square error of 0.035 cm^{-1} below 6000 cm^{-1} and 0.059 cm^{-1} below 7200 cm^{-1}. In conjunction with a new dipole moment surface (DMS) calculated using multi-reference configuration interaction (MRCI) with H=aug-cc-pVQZ and N=aug-cc-pWCVQZ basis sets, an infrared (IR) line list has been computed which is suitable for use up to 2000 K. The line list is used to assign experimental lines in the 7500-10,500 cm^{-1} region and previously unassigned lines in HITRAN in the 6000-7000 cm^{-1} region. Oleg L. Polyansky, Roman I. Ovsyannikov, Aleksandra A. Kyuberis, Lorenzo Lodi, Jonathan Tennyson, Andrey Yachmenev, Sergei N. Yurchenko, Nikolai F. Zobov, J. Mol. Spec. 327 (2016) 21-30. Afaf R. Al Derzi, Tibor Furtenbacher, Jonathan Tennyson, Sergei N. Yurchenko, Attila G. Császár, J. Quant. Spectrosc. Rad. Trans. 161 (2015) 117-130.
Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro
2014-05-01
Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that the enhanced tendency toward realism was associated with accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke, where the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was in fact enhanced after the onset of brain damage, as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex including the precuneus and intraparietal sulcus. Our data provide new insight into mechanisms underlying change in artistic style due to focal prefrontal lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
M. Dumont
2010-03-01
Full Text Available High-accuracy measurements of the snow Bidirectional Reflectance Distribution Function (BRDF) were performed for four natural snow samples with a spectrogonio-radiometer in the 500–2600 nm wavelength range. These measurements are one of the first sets of direct snow BRDF values over a wide range of lighting and viewing geometry. They were compared to BRDF calculated with two optical models. Variations of the snow anisotropy factor with lighting geometry, wavelength and snow physical properties were investigated. Results show that at wavelengths with small penetration depth, scattering mainly occurs in the very top layers and the anisotropy factor is controlled by the phase function. In this condition, a forward-scattering peak or a double-scattering peak is observed. In contrast, at shorter wavelengths the penetration of the radiation is much deeper and the number of scattering events increases. The anisotropy factor is thus nearly constant and decreases at grazing observation angles. The whole dataset is available on demand from the corresponding author.
Kodama, K. P.
2017-12-01
The talk will consider two broad topics in rock magnetism and paleomagnetism: the accuracy of paleomagnetic remanence and the use of rock magnetics to measure geologic time in sedimentary sequences. The accuracy of the inclination recorded by sedimentary rocks is crucial to paleogeographic reconstructions. Laboratory compaction experiments show that inclination shallows on the order of 10˚-15˚. Corrections to the inclination can be made using the effects of compaction on the directional distribution of secular variation recorded by sediments or the anisotropy of the magnetic grains carrying the ancient remanence. A summary of all the compaction correction studies as of 2012 shows that 85% of sedimentary rocks studied have undergone some amount of inclination shallowing. Future work should also consider the effect of grain-scale strain on paleomagnetic remanence. High-resolution chronostratigraphy can be assigned to a sedimentary sequence using rock magnetics to detect astronomically forced climate cycles. The strengths of the technique are its relatively quick, non-destructive measurements, the objective identification of cycles compared to facies interpretations, and the sensitivity of rock magnetics to subtle changes in sedimentary source. An example of this technique comes from using rock magnetics to identify astronomically forced climate cycles in three globally distributed occurrences of the Shuram carbon isotope excursion. The Shuram excursion may record the oxidation of the world ocean in the Ediacaran, just before the Cambrian explosion of metazoans. Using rock magnetic cyclostratigraphy, the excursion is shown to have the same duration (8-9 Myr) in southern California, south China and south Australia. Magnetostratigraphy of the rocks carrying the excursion in California and Australia shows a reversed to normal geomagnetic field polarity transition at the excursion's nadir, thus supporting the synchroneity of the excursion globally. Both results point to a
Directory of Open Access Journals (Sweden)
Hendrik eMandelkow
2016-03-01
Full Text Available Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbour (NN), Gaussian Naïve Bayes (GNB), and (regularised) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularised by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
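As an illustrative sketch only (synthetic patterns, not the study's fMRI data or pipeline), the simplest of the compared classifiers, nearest neighbour, can be written in a few lines:

```python
import math
import random

random.seed(0)

def nearest_neighbour(train, query):
    """Return the label of the training pattern closest to `query` (Euclidean)."""
    return min(train, key=lambda pair: math.dist(pair[0], query))[1]

# Synthetic "response patterns": three classes, each a noisy copy of a prototype.
prototypes = {0: [1.0, 0.0, 0.0], 1: [0.0, 1.0, 0.0], 2: [0.0, 0.0, 1.0]}
train = [([v + random.gauss(0, 0.1) for v in proto], label)
         for label, proto in prototypes.items() for _ in range(20)]
test = [([v + random.gauss(0, 0.1) for v in proto], label)
        for label, proto in prototypes.items() for _ in range(20)]

accuracy = sum(nearest_neighbour(train, feats) == label
               for feats, label in test) / len(test)
assert accuracy > 0.9   # well-separated prototypes classify reliably
```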
Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza
2018-03-01
This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems with high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. In strip perforations, the accuracies of low- and high-resolution modes were 75% and 83% for NewTom 3G and 67% and 69% for Cranex 3D. In root perforations, the accuracies of low- and high-resolution modes were 79% and 83% for NewTom 3G and 56% and 73% for Cranex 3D. The accuracy of the 2 CBCT systems was different for the detection of strip and root perforations. The Cranex 3D had non-significantly higher accuracy than the NewTom 3G. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.
Sohn, Hojoon; Aero, Abebech D; Menzies, Dick; Behr, Marcel; Schwartzman, Kevin; Alvarez, Gonzalo G; Dan, Andrei; McIntosh, Fiona; Pai, Madhukar; Denkinger, Claudia M
2014-04-01
Xpert MTB/RIF, the first automated molecular test for tuberculosis, is transforming the diagnostic landscape in low-income countries. However, little information is available on its performance in low-incidence, high-resource countries. We evaluated the accuracy of Xpert in a university hospital tuberculosis clinic in Montreal, Canada, for the detection of pulmonary tuberculosis on induced sputum samples, using mycobacterial cultures as the reference standard. We also assessed the potential reduction in time to diagnosis and treatment initiation. We enrolled 502 consecutive patients who presented for evaluation of possible active tuberculosis (most with abnormal chest radiographs, only 18% symptomatic). Twenty-five subjects were identified to have active tuberculosis by culture. Xpert had a sensitivity of 46% (95% confidence interval [CI], 26%-67%) and specificity of 100% (95% CI, 99%-100%) for detection of Mycobacterium tuberculosis. Sensitivity was 86% (95% CI, 42%-100%) in the 7 subjects with smear-positive results, and 28% (95% CI, 10%-56%) in the remaining subjects with smear-negative, culture-positive results; in this latter group, positive Xpert results were obtained a median 12 days before culture results. Subjects with positive cultures but negative Xpert results had minimal disease: 11 of 13 had no symptoms on presentation, and mean time to positive liquid culture results was 28 days (95% CI, 25-47 days) compared with 14 days (95% CI, 8-21 days) in Xpert/culture-positive cases. Our findings suggest limited potential impact of Xpert testing in high-resource, low-incidence ambulatory settings due to lower sensitivity in the context of less extensive disease, and limited potential to expedite diagnosis beyond what is achieved with the existing, well-performing diagnostic algorithm.
Kraemer, D; Chen, G
2014-02-01
Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
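The differential idea above, that a constant parasitic heater loss drops out when only the heat-sink temperature is varied, can be sketched numerically. All values below are invented for illustration; they are not the paper's measurements:

```python
# Hypothetical sample and setup (illustrative numbers only)
L_s, A = 0.002, 1e-4            # sample thickness (m) and cross-section (m^2)
k_true, Q_par = 1.5, 0.003      # conductivity (W/m/K) and constant parasitic loss (W)

G = k_true * A / L_s            # sample thermal conductance (W/K), Fourier's law
# Simulated readings: heater power needed at several temperature differences
data = [(dT, G * dT + Q_par) for dT in (1.0, 2.0, 3.0, 4.0)]  # (ΔT in K, Q in W)

# Least-squares slope of Q against ΔT: the constant Q_par only shifts the
# intercept, so the slope recovers the conductance without heater calibration.
n = len(data)
sx = sum(d for d, _ in data); sy = sum(q for _, q in data)
sxx = sum(d * d for d, _ in data); sxy = sum(d * q for d, q in data)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)

k_est = slope * L_s / A         # back out conductivity from the conductance
assert abs(k_est - k_true) < 1e-6
```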
Logarithmically complete monotonicity of a function related to the Catalan-Qi function
Directory of Open Access Journals (Sweden)
Qi Feng
2016-08-01
Full Text Available In the paper, the authors find necessary and sufficient conditions such that a function related to the Catalan-Qi function, which is an alternative generalization of the Catalan numbers, is logarithmically completely monotonic.
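For reference, the standard definitions involved (not restated in the abstract) are:

```latex
% A positive function f is completely monotonic on (0, \infty) if
(-1)^n f^{(n)}(x) \ge 0, \qquad x > 0,\ n = 0, 1, 2, \dots
% and logarithmically completely monotonic if \ln f satisfies
(-1)^n [\ln f(x)]^{(n)} \ge 0, \qquad x > 0,\ n = 1, 2, 3, \dots
```

Logarithmically completely monotonic functions form a (proper) subclass of completely monotonic ones, which is why the distinction matters in such characterizations.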
Monotone matrix transformations defined by the group inverse and simultaneous diagonalizability
International Nuclear Information System (INIS)
Bogdanov, I I; Guterman, A E
2007-01-01
Bijective linear transformations of the matrix algebra over an arbitrary field that preserve simultaneous diagonalizability are characterized. This result is used for the characterization of bijective linear monotone transformations. Bibliography: 28 titles.
Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2013-01-01
In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.
Monotone methods for solving a boundary value problem of second order discrete system
Directory of Open Access Journals (Sweden)
Wang Yuan-Ming
1999-01-01
Full Text Available A new concept of a pair of upper and lower solutions is introduced for a boundary value problem of a second-order discrete system. A comparison result is given. An existence theorem for a solution is established in terms of upper and lower solutions. A monotone iterative scheme is proposed, and the monotone convergence rate of the iteration is compared and analyzed. Numerical results are given.
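A minimal sketch of such a monotone iteration, for an illustrative scalar problem rather than the paper's discrete system: discretize -u'' = f(u) on (0,1) with u(0) = u(1) = 0 and, for an increasing f, iterate u_{k+1} = A^{-1} h^2 f(u_k) starting from the lower solution u = 0. Since A is an M-matrix (A^{-1} >= 0 entrywise) and f is increasing, the iterates increase monotonically toward the solution.

```python
def solve_tridiag(n, rhs):
    """Solve A u = rhs for A = tridiag(-1, 2, -1) via the Thomas algorithm."""
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]             # b - a*c' with a = c = -1, b = 2
        cp[i] = -1.0 / m
        dp[i] = (rhs[i] + dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

n = 49
h = 1.0 / (n + 1)
f = lambda v: 1.0 + 0.5 * v             # increasing, so the iteration is monotone

u = [0.0] * n                           # lower solution: f >= 0 implies T(0) >= 0
for _ in range(200):
    new = solve_tridiag(n, [h * h * f(v) for v in u])
    # each sweep only moves the iterate upward (monotone convergence)
    assert all(nv >= ov - 1e-12 for nv, ov in zip(new, u))
    done = max(abs(nv - ov) for nv, ov in zip(new, u)) < 1e-13
    u = new
    if done:
        break

assert 0.12 < max(u) < 0.15   # exact solution of -u'' = 1 + u/2 peaks near 0.132
```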
Global Attractivity Results for Mixed-Monotone Mappings in Partially Ordered Complete Metric Spaces
Directory of Open Access Journals (Sweden)
Kalabušić S
2009-01-01
Full Text Available We prove fixed point theorems for mixed-monotone mappings in partially ordered complete metric spaces which satisfy a weaker contraction condition than the classical Banach contraction condition for all points that are related by given ordering. We also give a global attractivity result for all solutions of the difference equation , where satisfies mixed-monotone conditions with respect to the given ordering.
Reduction theorems for weighted integral inequalities on the cone of monotone functions
International Nuclear Information System (INIS)
Gogatishvili, A; Stepanov, V D
2013-01-01
This paper surveys results related to the reduction of integral inequalities involving positive operators in weighted Lebesgue spaces on the real semi-axis and valid on the cone of monotone functions, to certain more easily manageable inequalities valid on the cone of non-negative functions. The case of monotone operators is new. As an application, a complete characterization for all possible integrability parameters is obtained for a number of Volterra operators. Bibliography: 118 titles.
Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables
Chikalov, Igor
2013-01-01
In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.
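The objects studied here, monotone Boolean functions, can be enumerated directly for small numbers of variables. The brute-force check below (an illustration, not the authors' Dagger tool) confirms the Dedekind number M(3) = 20:

```python
from itertools import product

def is_monotone(f, n):
    """f is a tuple of 2**n outputs indexed by input bitmask; monotone iff
    flipping any input bit from 0 to 1 never decreases the output."""
    for x in range(2 ** n):
        for i in range(n):
            if not x & (1 << i) and f[x] > f[x | (1 << i)]:
                return False
    return True

n = 3
monotone = [f for f in product((0, 1), repeat=2 ** n) if is_monotone(f, n)]
assert len(monotone) == 20   # Dedekind number M(3), constants included
```

The same enumeration is feasible up to n = 5 (M(5) = 7581 monotone functions out of 2^32 total requires a smarter search than full enumeration, which is one reason a dedicated tool is used).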
Directory of Open Access Journals (Sweden)
Heinz Werner Höppel
2012-02-01
Full Text Available The monotonic and cyclic deformation behavior of ultrafine-grained metastable austenitic steel AISI 304L, produced by severe plastic deformation, was investigated. Under monotonic loading, the martensitic phase transformation in the ultrafine-grained state is strongly favored. Under cyclic loading, the martensitic transformation behavior is similar to the coarse-grained condition, but the cyclic stress response is three times larger for the ultrafine-grained condition.
International Nuclear Information System (INIS)
Duan Shukai; Liao Xiaofeng
2007-01-01
A new chaotic delayed neuron model with a non-monotonically increasing transfer function, called the chaotic Liao delayed neuron model, was recently reported and analyzed. An electronic implementation of this model is described in detail. At the same time, some methods in circuit design, especially for circuits with a time-delay unit and a non-monotonically increasing activation unit, are also considered carefully. We find that the dynamical behaviors of the designed circuits closely match the results predicted by numerical experiments.
A discrete wavelet spectrum approach for identifying non-monotonic trends in hydroclimate data
Sang, Yan-Fang; Sun, Fubao; Singh, Vijay P.; Xie, Ping; Sun, Jian
2018-01-01
The hydroclimatic process is changing non-monotonically and identifying its trends is a great challenge. Building on the discrete wavelet transform theory, we developed a discrete wavelet spectrum (DWS) approach for identifying non-monotonic trends in hydroclimate time series and evaluating their statistical significance. After validating the DWS approach using two typical synthetic time series, we examined annual temperature and potential evaporation over China from 1961-2013 and found that the DWS approach detected both the warming and the warming hiatus in temperature, and the reversed changes in potential evaporation. Further, the identified non-monotonic trends showed stable significance when the time series was longer than 30 years or so (i.e. the widely defined climate timescale). The significance of trends in potential evaporation measured at 150 stations in China, with an obvious non-monotonic trend, was underestimated and was not detected by the Mann-Kendall test. Comparatively, the DWS approach overcame the problem and detected those significant non-monotonic trends at 380 stations, which helped understand and interpret the spatiotemporal variability in the hydroclimatic process. Our results suggest that non-monotonic trends of hydroclimate time series and their significance should be carefully identified, and the DWS approach proposed has the potential for wide use in the hydrological and climate sciences.
A discrete wavelet spectrum approach for identifying non-monotonic trends in hydroclimate data
Directory of Open Access Journals (Sweden)
Y.-F. Sang
2018-01-01
Full Text Available The hydroclimatic process is changing non-monotonically and identifying its trends is a great challenge. Building on the discrete wavelet transform theory, we developed a discrete wavelet spectrum (DWS) approach for identifying non-monotonic trends in hydroclimate time series and evaluating their statistical significance. After validating the DWS approach using two typical synthetic time series, we examined annual temperature and potential evaporation over China from 1961–2013 and found that the DWS approach detected both the warming and the warming hiatus in temperature, and the reversed changes in potential evaporation. Further, the identified non-monotonic trends showed stable significance when the time series was longer than 30 years or so (i.e. the widely defined climate timescale). The significance of trends in potential evaporation measured at 150 stations in China, with an obvious non-monotonic trend, was underestimated and was not detected by the Mann–Kendall test. Comparatively, the DWS approach overcame the problem and detected those significant non-monotonic trends at 380 stations, which helped understand and interpret the spatiotemporal variability in the hydroclimatic process. Our results suggest that non-monotonic trends of hydroclimate time series and their significance should be carefully identified, and the DWS approach proposed has the potential for wide use in the hydrological and climate sciences.
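The underlying mechanism, that a discrete wavelet decomposition concentrates a trend's energy in the coarse levels while oscillatory variability lands in the fine-detail levels, can be sketched with a toy Haar transform (an illustration of the principle only, not the authors' DWS significance test):

```python
import math

def haar_spectrum(x):
    """Energy of Haar DWT detail coefficients per level (finest first),
    with the final approximation energy appended last."""
    energies = []
    while len(x) > 1:
        a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        energies.append(sum(v * v for v in d))
        x = a
    energies.append(sum(v * v for v in x))
    return energies

trend = [i / 63 for i in range(64)]        # monotonic trend
noise = [(-1) ** i for i in range(64)]     # pure oscillation, no trend

spec_t = haar_spectrum(trend)
spec_n = haar_spectrum(noise)

# Trend: energy sits in the coarse approximation, almost none in fine detail.
assert spec_t[-1] > 0.7 * sum(spec_t)
assert spec_t[0] < 0.01 * sum(spec_t)
# Oscillation: all energy sits in the finest detail level.
assert spec_n[0] == sum(spec_n)
```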
Kang, Hyeon-Ah; Su, Ya-Hui; Chang, Hua-Hua
2018-03-08
A monotone relationship between a true score (τ) and a latent trait level (θ) has been a key assumption for many psychometric applications. The monotonicity property in dichotomous response models is evident as a result of a transformation via a test characteristic curve. Monotonicity in polytomous models, in contrast, is not immediately obvious because item response functions are determined by a set of response category curves, which are conceivably non-monotonic in θ. The purpose of the present note is to demonstrate strict monotonicity in ordered polytomous item response models. Five models that are widely used in operational assessments are considered for proof: the generalized partial credit model (Muraki, 1992, Applied Psychological Measurement, 16, 159), the nominal model (Bock, 1972, Psychometrika, 37, 29), the partial credit model (Masters, 1982, Psychometrika, 47, 147), the rating scale model (Andrich, 1978, Psychometrika, 43, 561), and the graded response model (Samejima, 1972, A general model for free-response data (Psychometric Monograph no. 18). Psychometric Society, Richmond). The study asserts that the item response functions in these models strictly increase in θ and thus there exists strict monotonicity between τ and θ under certain specified conditions. This conclusion validates the practice of customarily using τ in place of θ in applied settings and provides theoretical grounds for one-to-one transformations between the two scales. © 2018 The British Psychological Society.
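The claimed monotonicity can be spot-checked numerically for one of the listed models, e.g. the generalized partial credit model. The item parameters below are hypothetical, and this is a numerical check on a grid, not the note's analytic proof:

```python
import math

def gpcm_expected(theta, a, bs):
    """Expected item score under the generalized partial credit model:
    P_k(theta) proportional to exp(sum_{j<=k} a*(theta - b_j)), k = 0..m."""
    logits = [0.0]
    for b in bs:
        logits.append(logits[-1] + a * (theta - b))
    num = [math.exp(v) for v in logits]
    z = sum(num)
    return sum(k * p / z for k, p in enumerate(num))

a, bs = 1.2, [-0.5, 0.3, 1.1]            # hypothetical discrimination and steps
grid = [-4 + 0.1 * i for i in range(81)] # theta from -4 to 4
scores = [gpcm_expected(t, a, bs) for t in grid]

# The expected score (true score) strictly increases in theta
assert all(s2 > s1 for s1, s2 in zip(scores, scores[1:]))
```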
De Barba, M; Miquel, C; Lobréaux, S; Quenette, P Y; Swenson, J E; Taberlet, P
2017-05-01
Microsatellite markers have played a major role in ecological, evolutionary and conservation research during the past 20 years. However, technical constraints related to the use of capillary electrophoresis and a recent technological revolution that has impacted other marker types have brought into question the continued use of microsatellites for certain applications. We present a study for improving microsatellite genotyping in ecology using high-throughput sequencing (HTS). This approach entails selection of short markers suitable for HTS, sequencing PCR-amplified microsatellites on an Illumina platform and bioinformatic treatment of the sequence data to obtain multilocus genotypes. It takes advantage of the fact that HTS gives direct access to microsatellite sequences, allowing unambiguous allele identification and enabling automation of the genotyping process through bioinformatics. In addition, the massive parallel sequencing abilities expand the information content of single experimental runs far beyond capillary electrophoresis. We illustrated the method by genotyping brown bear samples amplified with a multiplex PCR of 13 new microsatellite markers and a sex marker. HTS of microsatellites provided accurate individual identification and parentage assignment and resulted in a significant improvement of genotyping success (84%) of faecal degraded DNA and costs reduction compared to capillary electrophoresis. The HTS approach holds vast potential for improving success, accuracy, efficiency and standardization of microsatellite genotyping in ecological and conservation applications, especially those that rely on profiling of low-quantity/quality DNA and on the construction of genetic databases. We discuss and give perspectives for the implementation of the method in the light of the challenges encountered in wildlife studies. © 2016 John Wiley & Sons Ltd.
Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga
2015-01-01
Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how
Directory of Open Access Journals (Sweden)
Abhishek Mitra
Full Text Available Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively
High-accuracy single-pass InSAR DEM for large-scale flood hazard applications
Schumann, G.; Faherty, D.; Moller, D.
2017-12-01
In this study, we used a unique opportunity of the GLISTIN-A (a NASA airborne mission designed to characterize the cryosphere) track to Greenland to acquire a high-resolution InSAR DEM of a large area in the Red River of the North Basin (north of Grand Forks, ND, USA), which is a very flood-vulnerable valley, particularly in spring time due to increased soil moisture content near the state of saturation and/or, typical for this region, snowmelt. Having an InSAR DEM that meets flood inundation modeling and mapping requirements comparable to LiDAR would demonstrate great application potential of new radar technology for national agencies with an operational flood forecasting mandate and also local state governments active in flood event prediction, disaster response and mitigation. Specifically, we derived a bare-earth DEM in SAR geometry by first removing the inherent far range bias related to airborne operation, which at the more typical large-scale DEM resolution of 30 m has a sensor accuracy of plus or minus 2.5 cm. Subsequently, an intelligent classifier based on informed relationships between InSAR height, intensity and correlation was used to distinguish between bare earth, roads or embankments, buildings and tall vegetation in order to facilitate the creation of a bare-earth DEM that would meet the requirements for accurate floodplain inundation mapping. Using state-of-the-art LiDAR terrain data, we demonstrate that capability by achieving a root mean squared error of approximately 25 cm and further illustrating its applicability to flood modeling.
Energy Technology Data Exchange (ETDEWEB)
Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding; Rosenfeld, Anatoly B.; Tome, Wolfgang A. [Department of Human Oncology, University of Wisconsin-Madison, WI, 53792 (United States); Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC 3002 (Australia) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); Department of Human Oncology, University of Wisconsin-Madison, WI 53792 (United States); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia) and Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur (Malaysia); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Einstein Institute of Oncophysics, Albert Einstein College of Medicine of Yeshiva University, Bronx, New York 10461 (United States) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia)
2012-08-15
Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling' involved the tripling of the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis receiving up to 21 Gy.
International Nuclear Information System (INIS)
Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding; Rosenfeld, Anatoly B.; Tomé, Wolfgang A.
2012-01-01
Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed “Super Sampling”, involved tripling the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis, receiving up to 21 Gy.
Directory of Open Access Journals (Sweden)
R. Sussmann
2011-09-01
We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH_{4}) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC), comprising 22 FTIR stations. This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks, in addition to long-term trend analysis. Such retrievals complement the high-accuracy, high-precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON), with time series dating back 15 years or so before TCCON operations began.
MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro-windows (2613.70–2615.40 cm^{−1}, 2835.50–2835.80 cm^{−1}, 2921.00–2921.60 cm^{−1}). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ^{2} < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Centers for Environmental Prediction (NCEP), interpolated to the time of measurement.
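The final step, forming a column-averaged dry-air mole fraction from a retrieved profile, is a dry-air-weighted mean over the layers; a minimal sketch (the layer partial columns and the uniform 1.8 ppm profile are made-up numbers for illustration):

```python
def column_average(vmr_profile, dry_air_partial_columns):
    """Column-averaged dry-air mole fraction: the total trace-gas column
    (sum of vmr * dry-air partial column per layer) divided by the
    total dry-air column."""
    gas = sum(x * pc for x, pc in zip(vmr_profile, dry_air_partial_columns))
    air = sum(dry_air_partial_columns)
    return gas / air

# Three-layer toy profile, 1.8 ppm CH4 everywhere, made-up air columns:
xch4 = column_average([1.8e-6, 1.8e-6, 1.8e-6], [5.0e24, 3.0e24, 2.0e24])
```

A uniform profile returns its own value regardless of the weights, which is a quick sanity check on any implementation of this average.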
MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections; 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H_{2}O-CH_{4} interference errors leading to a seasonal bias of up to ≈5%. Therefore interference
Explosive percolation on directed networks due to monotonic flow of activity
Waagen, Alex; D'Souza, Raissa M.; Lu, Tsai-Ching
2017-07-01
An important class of real-world networks has directed edges, and in addition, some rank ordering on the nodes, for instance the popularity of users in online social networks. Yet, nearly all research related to explosive percolation has been restricted to undirected networks. Furthermore, information on such rank-ordered networks typically flows from higher-ranked to lower-ranked individuals, such as follower relations, replies, and retweets on Twitter. Here we introduce a simple percolation process on an ordered, directed network where edges are added monotonically with respect to the rank ordering. We show with a numerical approach that the emergence of a dominant strongly connected component appears to be discontinuous. Large-scale connectivity occurs at very high density compared with most percolation processes, and this holds not just for the strongly connected component structure but for the weakly connected component structure as well. We present analysis with branching processes, which explains this unusual behavior and gives basic intuition for the underlying mechanisms. We also show that before the emergence of a dominant strongly connected component, multiple giant strongly connected components may exist simultaneously. By adding a competitive percolation rule with a small bias to link users of similar rank, we show this leads to the formation of two distinct components, one of high-ranked users and one of low-ranked users, with little flow between the two components.
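A toy variant of rank-respecting edge addition is easy to simulate. For the weakly connected component structure the edge direction can be ignored, so union-find suffices (tracking strongly connected components, the paper's main object, would need a full SCC algorithm). All parameters below are illustrative, and the rule shown (orienting each random edge from higher to lower rank) is a simplification of the paper's process.

```python
import random

def giant_wcc_fraction(n, m, seed=0):
    """Add m random directed edges, each oriented from the higher-ranked
    to the lower-ranked endpoint (rank = node index), and return the
    largest weakly connected component's share of the n nodes."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for _ in range(m):
        u, v = rng.sample(range(n), 2)
        if u < v:
            u, v = v, u          # edge points from higher rank u to lower rank v
        ru, rv = find(u), find(v)
        if ru != rv:             # union by size on the undirected skeleton
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
    return max(size[find(i)] for i in range(n)) / n
```

Sweeping m from sparse to dense shows the growth of large-scale (weak) connectivity; the discontinuous strong-connectivity transition reported in the abstract requires the directed analysis.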
International Nuclear Information System (INIS)
Dirras, G.; Bouvier, S.; Gubicza, J.; Hasni, B.; Szilagyi, T.
2009-01-01
The present work focuses on understanding the mechanical behavior of bulk ultrafine-grained nickel specimens processed by spark plasma sintering of high purity nickel nanopowder and subsequently deformed under large amplitude monotonic simple shear tests and strain-controlled cyclic simple shear tests at room temperature. During cyclic tests, the samples were deformed up to an accumulated von Mises strain of about ε_{VM} = 0.75 (the flow stress was in the 650-700 MPa range), which is extremely high in comparison with the low tensile/compression ductility of this class of materials at quasi-static conditions. The underlying physical mechanisms were investigated by electron microscopy and X-ray diffraction profile analysis. Lattice dislocation-based plasticity leading to cell formation and dislocation interactions with twin boundaries contributed to the work-hardening of these materials. The large amount of plastic strain that has been reached during the shear tests highlights intrinsic mechanical characteristics of the ultrafine-grained nickel studied here.
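For simple shear the von Mises equivalent strain is γ/√3, so an accumulated value of about 0.75 corresponds to a total swept shear strain of about 1.3. A minimal sketch; the cyclic path shown is hypothetical, only the γ/√3 conversion is the standard von Mises equivalence:

```python
import math

def von_mises_accumulated(shear_increments):
    """Accumulated von Mises equivalent strain for simple shear:
    each shear-strain increment dgamma contributes |dgamma| / sqrt(3)."""
    return sum(abs(dg) for dg in shear_increments) / math.sqrt(3)

# Hypothetical cyclic path: a total swept shear strain of 1.3 gives
# an accumulated von Mises strain of about 0.75, as in the study:
acc = von_mises_accumulated([0.13, -0.26, 0.26, -0.26, 0.26, -0.13])
```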
Energy Technology Data Exchange (ETDEWEB)
Dirras, G., E-mail: dirras@univ-paris13.fr [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Bouvier, S. [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Gubicza, J. [Department of Materials Physics, Eoetvoes Lorand University, P.O.B. 32, Budapest H-1518 (Hungary); Hasni, B. [LPMTM - CNRS, Institut Galilee, Universite Paris 13, 99 Avenue J.B. Clement, 93430 Villetaneuse (France); Szilagyi, T. [Department of Materials Physics, Eoetvoes Lorand University, P.O.B. 32, Budapest H-1518 (Hungary)
2009-11-25
The present work focuses on understanding the mechanical behavior of bulk ultrafine-grained nickel specimens processed by spark plasma sintering of high purity nickel nanopowder and subsequently deformed under large amplitude monotonic simple shear tests and strain-controlled cyclic simple shear tests at room temperature. During cyclic tests, the samples were deformed up to an accumulated von Mises strain of about ε_{VM} = 0.75 (the flow stress was in the 650-700 MPa range), which is extremely high in comparison with the low tensile/compression ductility of this class of materials at quasi-static conditions. The underlying physical mechanisms were investigated by electron microscopy and X-ray diffraction profile analysis. Lattice dislocation-based plasticity leading to cell formation and dislocation interactions with twin boundaries contributed to the work-hardening of these materials. The large amount of plastic strain that has been reached during the shear tests highlights intrinsic mechanical characteristics of the ultrafine-grained nickel studied here.
Yi, Hongming; Wu, Tao; Lauraguais, Amélie; Semenov, Vladimir; Coeur, Cecile; Cassez, Andy; Fertein, Eric; Gao, Xiaoming; Chen, Weidong
2017-12-04
A spectroscopic instrument based on a mid-infrared external cavity quantum cascade laser (EC-QCL) was developed for high-accuracy measurements of dinitrogen pentoxide (N_{2}O_{5}) at the ppbv level. A specific concentration retrieval algorithm was developed to remove, from the broadband absorption spectrum of N_{2}O_{5}, both etalon fringes resulting from the EC-QCL intrinsic structure and spectral interference lines of H_{2}O vapour absorption, which led to a significant improvement in measurement accuracy and detection sensitivity (by a factor of 10) compared to using a traditional algorithm for gas concentration retrieval. The developed EC-QCL-based N_{2}O_{5} sensing platform was evaluated by real-time tracking of the N_{2}O_{5} concentration in the most important nocturnal tropospheric chemical reaction, NO_{3} + NO_{2} ↔ N_{2}O_{5}, in an atmospheric simulation chamber. Based on an optical absorption path-length of L_{eff} = 70 m, a minimum detection limit of 15 ppbv was achieved with a 25 s integration time, falling to 3 ppbv in 400 s. The equilibrium rate constant K_{eq} involved in the above chemical reaction was determined with direct concentration measurements using the developed EC-QCL sensing platform, in good agreement with the theoretical value deduced from a referenced empirical formula under well controlled experimental conditions. The present work demonstrates the potential and the unique advantage of the use of a modern external cavity quantum cascade laser for applications in direct quantitative measurement of broadband absorption of key molecular species involved in chemical kinetic and climate-change related tropospheric chemistry.
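Once all three concentrations are measured simultaneously, the equilibrium constant follows directly from its definition; a minimal sketch with made-up concentrations (for concentrations in molecules cm^{-3}, K_{eq} comes out in cm^{3} molecule^{-1}):

```python
def equilibrium_constant(n2o5, no3, no2):
    """K_eq for NO3 + NO2 <-> N2O5, computed from simultaneously
    measured concentrations in any consistent unit."""
    return n2o5 / (no3 * no2)

# Illustrative (made-up) steady-state chamber concentrations:
k_eq = equilibrium_constant(8.0e11, 2.0e9, 1.6e12)
```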
A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...
Directory of Open Access Journals (Sweden)
Guoqing Zhou
2016-06-01
This paper proposes a novel rigorous transformation model for 2D-3D registration to address the difficult problem of obtaining a sufficient number of well-distributed ground control points (GCPs) in urban areas with tall buildings. The proposed model applies two types of geometric constraints, co-planarity and perpendicularity, to the conventional photogrammetric collinearity model. Both types of geometric information are directly obtained from geometric building structures, with which the geometric constraints are automatically created and combined into the conventional transformation model. A test field located in downtown Denver, Colorado, is used to evaluate the accuracy and reliability of the proposed method. A comparative analysis of the accuracy achieved by the proposed and conventional methods is conducted. Experimental results demonstrated that: (1) the theoretical accuracy of the solved registration parameters can reach 0.47 pixels, whereas the other methods reach only 1.23 and 1.09 pixels; (2) the RMS values of 2D-3D registration achieved by the proposed model are only two pixels along the x and y directions, much smaller than the RMS values of the conventional model, which are approximately 10 pixels along the x and y directions. These results demonstrate that the proposed method is able to significantly improve the accuracy of 2D-3D registration with much fewer GCPs in urban areas with tall buildings.
Sicard, Pierre; Martin-lauzer, François-regis
2017-04-01
In the context of global climate change and the design and implementation of adjustment/resilience policies, there is a need (i) for environmental monitoring, e.g. through a range of Earth Observation (EO) land "products"; (ii) for a precise assessment of the uncertainties of the aforesaid information that feeds environmental decision-making (to be introduced in the EO metadata); and (iii) for careful handling of the thresholds that help translate "environment tolerance limits" into detected EO changes through ecosystem modelling. Insight into uncertainties means knowledge of precision and accuracy and the subsequent ability to set thresholds for change detection systems. Traditionally, the validation of satellite-derived products has taken the form of intensive field campaigns to sanction the introduction of data processors in Payload Data Ground Segments chains. It is marred by logistical challenges and cost issues, which is why it is complemented by specific surveys at ground-based monitoring sites which can provide near-continuous observations at a high temporal resolution (e.g. RadCalNet). Unfortunately, most of the ground-level monitoring sites, numbering in the hundreds or thousands, which are part of wider observation networks (e.g. FLUXNET, NEON, IMAGINES), mainly monitor the state of the atmosphere and the radiation exchange at the surface, which are different to the products derived from EO data. In addition, they are "point-based" compared to the EO coverage to be obtained from Sentinel-2 or Sentinel-3. Yet, data from these networks, processed by spatial extrapolation models, are well-suited to the bottom-up approach and relevant to the validation of vegetation parameters' consistency (e.g. leaf area index, fraction of absorbed photosynthetically active radiation). Consistency means minimal errors on spatial and temporal gradients of EO products. Test of the procedure for land-cover products' consistency assessment with field measurements delivered by worldwide
International Nuclear Information System (INIS)
Deuerling, Justin M.; Rudy, David J.; Niebur, Glen L.; Roeder, Ryan K.
2010-01-01
Purpose: Microcomputed tomography (micro-CT) is increasingly used as a nondestructive alternative to ashing for measuring bone mineral content. Phantoms are utilized to calibrate the measured x-ray attenuation to discrete levels of mineral density, typically including levels up to 1000 mg HA/cm^{3}, which encompasses levels of bone mineral density (BMD) observed in trabecular bone. However, levels of BMD observed in cortical bone and levels of tissue mineral density (TMD) in both cortical and trabecular bone typically exceed 1000 mg HA/cm^{3}, requiring extrapolation of the calibration regression, which may result in error. Therefore, the objectives of this study were to investigate (1) the relationship between x-ray attenuation and an expanded range of hydroxyapatite (HA) density in a less attenuating polymer matrix and (2) the effects of the calibration on the accuracy of subsequent measurements of mineralization in human cortical bone specimens. Methods: A novel HA-polymer composite phantom was prepared comprising a less attenuating polymer phase (polyethylene) and an expanded range of HA density (0-1860 mg HA/cm^{3}) inclusive of characteristic levels of BMD in cortical bone or TMD in cortical and trabecular bone. The BMD and TMD of cortical bone specimens measured using the new HA-polymer calibration phantom were compared to measurements using a conventional HA-polymer phantom comprising 0-800 mg HA/cm^{3} and the corresponding ash density measurements on the same specimens. Results: The HA-polymer composite phantom exhibited a nonlinear relationship between x-ray attenuation and HA density, rather than the linear relationship typically employed a priori, and obviated the need for extrapolation when calibrating the measured x-ray attenuation to high levels of mineral density. The BMD and TMD of cortical bone specimens measured using the conventional phantom was significantly lower than the measured ash density by 19% (p<0.001, ANCOVA) and 33% (p<0.05, Tukey's HSD
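The hazard of extrapolating a linear calibration can be illustrated numerically: fit a line to a mildly saturating attenuation curve over the conventional 0-800 range, then invert it at a cortical-bone-level attenuation. The curve and its coefficients are invented purely for illustration; only the extrapolation effect is the point.

```python
def fit_line(xs, ys):
    """Ordinary least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def atten(d):
    """Invented, mildly saturating attenuation-vs-density curve."""
    return 1.0e-3 * d - 8.0e-8 * d * d

low_densities = [0.0, 200.0, 400.0, 600.0, 800.0]  # conventional phantom range
a, b = fit_line(low_densities, [atten(d) for d in low_densities])

# Invert the linear calibration at a cortical-bone-level attenuation:
true_density = 1860.0
density_from_line = (atten(true_density) - b) / a   # underestimates
```

With this (made-up) curvature the extrapolated linear calibration underestimates the 1860 mg HA/cm^{3} level by roughly 9%, qualitatively matching the underestimation the study attributes to the conventional phantom.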
Directory of Open Access Journals (Sweden)
Evgeni V Nikolaev
2016-04-01
Synthetic constructs in biotechnology, biocomputing, and modern gene therapy interventions are often based on plasmids or transfected circuits which implement some form of "on-off" switch. For example, the expression of a protein used for therapeutic purposes might be triggered by the recognition of a specific combination of inducers (e.g., antigens), and memory of this event should be maintained across a cell population until a specific stimulus commands a coordinated shut-off. The robustness of such a design is hampered by molecular ("intrinsic") or environmental ("extrinsic") noise, which may lead to spontaneous changes of state in a subset of the population and is reflected in the bimodality of protein expression, as measured for example using flow cytometry. In this context, a "majority-vote" correction circuit, which brings deviant cells back into the required state, is highly desirable, and quorum-sensing has been suggested as a way for cells to broadcast their states to the population as a whole so as to facilitate consensus. In this paper, we propose what we believe is the first such design that has mathematically guaranteed properties of stability and auto-correction under certain conditions. Our approach is guided by concepts and theory from the field of "monotone" dynamical systems developed by M. Hirsch, H. Smith, and others. We benchmark our design by comparing it to an existing design which has been the subject of experimental and theoretical studies, illustrating its superiority in stability and self-correction of synchronization errors. Our stability analysis, based on dynamical systems theory, guarantees global convergence to steady states, ruling out unpredictable ("chaotic") behaviors and even sustained oscillations in the limit of convergence. These results are valid regardless of parameter values, and are based only on the wiring diagram. The theory is complemented by extensive computational bifurcation analysis
Chen, H.; Winderlich, J.; Gerbig, C.; Hoefer, A.; Rella, C. W.; Crosson, E. R.; Van Pelt, A. D.; Steinbach, J.; Kolle, O.; Beck, V.; Daube, B. C.; Gottlieb, E. W.; Chow, V. Y.; Santoni, G. W.; Wofsy, S. C.
2010-01-01
High-accuracy continuous measurements of greenhouse gases (CO2 and CH4) during the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) phase B campaign in Brazil in May 2009 were accomplished using a newly available analyzer based on the cavity ring-down spectroscopy (CRDS) technique. This
High accuracy results for the energy levels of the molecular ions H_{2}^{+}, D_{2}^{+} and HD^{+}, up to J = 2
International Nuclear Information System (INIS)
Karr, J Ph; Hilico, L
2006-01-01
We present a nonrelativistic calculation of the rotation-vibration levels of the molecular ions H_{2}^{+}, D_{2}^{+} and HD^{+}, relying on the diagonalization of the exact three-body Hamiltonian in a variational basis. The J = 2 levels are obtained with a very high accuracy of 10^{-14} au (for most levels), representing an improvement by five orders of magnitude over previous calculations. The accuracy is also improved for the J = 1 levels of H_{2}^{+} and D_{2}^{+} with respect to earlier works. Moreover, we have computed the sensitivities of the energy levels with respect to the mass ratios, allowing these levels to be used for metrological purposes.
Non-monotonic dose dependence of the Ge- and Ti-centres in quartz
International Nuclear Information System (INIS)
Woda, C.; Wagner, G.A.
2007-01-01
The dose response of the Ge- and Ti-centres in quartz is studied over a large dose range. After an initial signal increase in the low dose range, both defects show a pronounced decrease in signal intensities for high doses. The model by Euler and Kahan [1987. Radiation effects and anelastic loss in germanium-doped quartz. Phys. Rev. B 35 (9), 4351-4359], in which the signal drop is explained by an enhanced trapping of holes at the electron trapping site, is critically discussed. A generalization of the model is then developed, following similar considerations by Lawless et al. [2005. A model for non-monotonic dose dependence of thermoluminescence (TL). J. Phys. Condens. Matter 17, 737-753], who explained a signal drop in TL by an enhanced recombination rate with electrons at the recombination centre. Finally, an alternative model for the signal decay is given, based on the competition between single and double electron capture at the electron trapping site. From the critical discussion of the different models it is concluded that the double electron capture mechanism is the most probable effect for the dose response
Failure mechanisms of closed-cell aluminum foam under monotonic and cyclic loading
International Nuclear Information System (INIS)
Amsterdam, E.; De Hosson, J.Th.M.; Onck, P.R.
2006-01-01
This paper concentrates on the differences in failure mechanisms of Alporas closed-cell aluminum foam under either monotonic or cyclic loading. The emphasis lies on aspects of crack nucleation and crack propagation in relation to the microstructure. The cell wall material consists of Al dendrites and an interdendritic network of Al 4 Ca and Al 22 CaTi 2 precipitates. In situ scanning electron microscopy monotonic tensile tests were performed on small samples to study crack nucleation and propagation. Digital image correlation was employed to map the strain in the cell wall on the characteristic microstructural length scale. Monotonic tensile tests and tension-tension fatigue tests were performed on larger samples to observe the overall fracture behavior and crack path in monotonic and cyclic loading. The crack nucleation and propagation path in both loading conditions are revealed and it can be concluded that during monotonic tension cracks nucleate in and propagate partly through the Al 4 Ca interdendritic network, whereas under cyclic loading cracks nucleate and propagate through the Al dendrites
Alexander, R. H. (Principal Investigator); Fitzpatrick, K. A.
1975-01-01
The author has identified the following significant results. Level 2 land use maps produced at three scales (1:24,000, 1:100,000, and 1:250,000) from high altitude photography were compared with each other and with point data obtained in the field. The same procedures were employed to determine the accuracy of the Level 1 land use maps produced at 1:250,000 from high altitude photography and color composite ERTS imagery. Accuracy of the Level 2 maps was 84.9 percent at 1:24,000, 77.4 percent at 1:100,000 and 73.0 percent at 1:250,000. Accuracy of the Level 1 1:250,000 maps was 76.5 percent for aerial photographs and 69.5 percent for ERTS imagery. The cost of Level 2 land use mapping at 1:24,000 was found to be high ($11.93 per sq km). Mapping at 1:100,000 ($1.75) was about twice as expensive as mapping at 1:250,000 ($0.88), while the accuracy increased by only 4.4 percent.
International Nuclear Information System (INIS)
Herfurth, F.; Kellerbauer, A.; Sauvan, E.; Ames, F.; Engels, O.; Audi, G.; Lunney, D.; Beck, D.; Blaum, K.; Kluge, H.J.; Scheidenberger, C.; Sikler, G.; Weber, C.; Bollen, G.; Schwarz, S.; Moore, R.B.; Oinonen, M.
2002-01-01
Mass measurements of ^{34}Ar, ^{73-78}Kr, and ^{74,76}Rb were performed with the Penning-trap mass spectrometer ISOLTRAP. Very accurate Q_{EC} values are needed for the investigations of the Ft-value of 0^{+} → 0^{+} nuclear β-decays used to test the standard model predictions for weak interactions. The necessary accuracy on the Q_{EC} value requires the mass of mother and daughter nuclei to be measured with δm/m ≤ 3×10^{-8}. For most of the measured nuclides presented here this has been reached. The ^{34}Ar mass has been measured with a relative accuracy of 1.1×10^{-8}. The Q_{EC} value of the ^{34}Ar 0^{+} → 0^{+} decay can now be determined with an uncertainty of about 0.01%. Furthermore, ^{74}Rb is the shortest-lived nuclide ever investigated in a Penning trap. (orig.)
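The link between the quoted mass accuracy and the ~0.01% Q_{EC} uncertainty is simple error propagation; a sketch, assuming both masses are roughly A times the atomic mass unit with uncorrelated errors of the same relative size, and taking Q_{EC}(^{34}Ar) ≈ 6061 keV as a literature value quoted only for illustration:

```python
import math

U_KEV = 931494.0   # atomic mass unit in keV/c^2 (rounded)

def qec_rel_uncertainty(mass_number, rel_mass_acc, qec_kev):
    """Relative Q_EC uncertainty when mother and daughter masses
    (each ~ mass_number * u) carry the same relative accuracy and
    their errors are uncorrelated: dQ = sqrt(2) * (dm/m) * m * c^2."""
    dm_kev = rel_mass_acc * mass_number * U_KEV
    dq_kev = math.sqrt(2.0) * dm_kev
    return dq_kev / qec_kev

# 34Ar with dm/m = 1.1e-8 gives a relative Q_EC uncertainty of ~8e-5,
# consistent with the abstract's "about 0.01%":
rel = qec_rel_uncertainty(34, 1.1e-8, 6061.0)
```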
International Nuclear Information System (INIS)
Konovalov, N.V.
The accuracy of the calculated characteristics of a radiation field in a plane layer is investigated by solving the transfer equation as a function of the error in the specification of the scattering indicatrix. It is shown that a small error in the specification of the indicatrix can lead to a large error in the solution at large optical depths. An estimate is given for the region of optical thicknesses in which the emission field can be determined with a sufficient degree of accuracy from the transfer equation with a known error in the specification of the indicatrix. To estimate the error of various numerical methods, and to determine the region of their applicability, results of calculations for problems with a strongly anisotropic indicatrix are given
Geometric Accuracy Investigations of SEVIRI High Resolution Visible (HRV) Level 1.5 Imagery
Directory of Open Access Journals (Sweden)
Sultan Kocaman Aksakal
2013-05-01
GCOS (Global Climate Observing System) is a long-term program for monitoring the climate, detecting the changes, and assessing their impacts. Remote sensing techniques are being increasingly used for climate-related measurements. Imagery of the SEVIRI instrument on board the European geostationary satellites Meteosat-8 and Meteosat-9 is often used for the estimation of essential climate variables. In a joint project between the Swiss GCOS Office and ETH Zurich, geometric accuracy and temporal stability of 1-km resolution HRV channel imagery of SEVIRI have been evaluated over Switzerland. A set of tools and algorithms has been developed for the investigations. Statistical analysis and blunder detection have been integrated in the process for robust evaluation. The relative accuracy is evaluated by tracking large numbers of feature points in consecutive HRV images taken at 15-minute intervals. For the absolute accuracy evaluation, lakes in Switzerland and surroundings are used as reference. 20 lakes digitized from Landsat orthophotos are transformed into HRV images and matched via 2D translation terms at sub-pixel level. The algorithms are tested using HRV images taken on 24 days in 2008 (2 days per month). The results show that 2D shifts of up to 8 pixels are present in both relative and absolute terms.
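The matching step (estimating a translation at sub-pixel level) can be sketched in 1D with a generic matcher: scan integer shifts for minimum mean squared difference, then refine by fitting a parabola through the cost at the best shift and its two neighbours. This is an illustrative technique, not the project's actual algorithm, and the "shoreline profile" data are made up.

```python
def best_shift(ref, img, max_shift):
    """Shift of `img` relative to `ref` (1D) minimizing the mean squared
    difference over the overlap, refined to sub-pixel accuracy by a
    parabola through the best integer shift and its two neighbours."""
    def cost(s):
        pairs = [(ref[i], img[i + s]) for i in range(len(ref))
                 if 0 <= i + s < len(img)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)

    costs = {s: cost(s) for s in range(-max_shift, max_shift + 1)}
    s0 = min(costs, key=costs.get)
    if -max_shift < s0 < max_shift:
        c_m, c_0, c_p = costs[s0 - 1], costs[s0], costs[s0 + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom != 0:
            return s0 + 0.5 * (c_m - c_p) / denom  # parabola vertex
    return float(s0)

# Triangle-shaped toy profile and a copy shifted by 3 pixels:
ref = [max(0, 4 - abs(i - 5)) for i in range(15)]
img = [max(0, 4 - abs(i - 8)) for i in range(15)]
shift = best_shift(ref, img, 6)   # close to 3.0
```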
Dunn, Naomi; Williamson, Ann
2012-01-01
Although monotony is widely recognised as being detrimental to performance, its occurrence and effects are not yet well understood. This is despite the fact that task-related characteristics, such as monotony and low task demand, have been shown to contribute to performance decrements over time. Participants completed one of two simulated train-driving scenarios. Both were highly monotonous and differed only in terms of the level of cognitive demand required (i.e. low demand or high demand). These results highlight the seriously detrimental effects of the combination of monotony and low task demands and clearly show that even a relatively minor increase in cognitive demand can mitigate adverse monotony-related effects on performance for extended periods of time. Monotony is an inherent characteristic of transport industries, including rail, aviation and road transport, which can have an adverse impact on safety, reliability and efficiency. This study highlights possible strategies for mitigating these adverse effects. Practitioner Summary: This study provides evidence for the importance of cognitive demand in mitigating monotony-related effects on performance. The results have clear implications for the rapid onset of performance deterioration in low-demand monotonous tasks and demonstrate that these detrimental performance effects can be overcome with simple solutions, such as making the task more cognitively engaging.
Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods
Mozartova, A.; Savostianov, I.; Hundsdorfer, W.
2015-01-01
One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.
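The distinction between a linear multistep method and its one-leg twin can be made concrete with the explicit two-step Adams-Bashforth pair, used here purely as an illustrative example of the general construction: the linear multistep method combines two f-evaluations, while the one-leg method makes a single f-evaluation at correspondingly combined arguments. For linear autonomous problems the two coincide; for nonlinear f they differ.

```python
def ab2_lmm(f, t0, y0, y1, h, steps):
    """Two-step Adams-Bashforth linear multistep method:
    y_{n+2} = y_{n+1} + h*(3/2 f(t_{n+1},y_{n+1}) - 1/2 f(t_n,y_n))."""
    ys = [y0, y1]
    for n in range(steps):
        tn, tn1 = t0 + n * h, t0 + (n + 1) * h
        ys.append(ys[-1] + h * (1.5 * f(tn1, ys[-1]) - 0.5 * f(tn, ys[-2])))
    return ys

def ab2_one_leg(f, t0, y0, y1, h, steps):
    """One-leg twin of AB2: a single f-evaluation at the combined
    argument 3/2 (t_{n+1}, y_{n+1}) - 1/2 (t_n, y_n)."""
    ys = [y0, y1]
    for n in range(steps):
        tn, tn1 = t0 + n * h, t0 + (n + 1) * h
        ys.append(ys[-1] + h * f(1.5 * tn1 - 0.5 * tn,
                                 1.5 * ys[-1] - 0.5 * ys[-2]))
    return ys
```

On the nonlinear test problem y' = -y^2 with y(0) = 1 (exact solution 1/(1+t)), both variants converge at the expected rate but produce slightly different trajectories, which is exactly the gap the boundedness/monotonicity analysis in the abstract is concerned with.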
Energy Technology Data Exchange (ETDEWEB)
Erol, V. [Department of Computer Engineering, Institute of Science, Okan University, Istanbul (Turkey); Netas Telecommunication Inc., Istanbul (Turkey)
2016-04-21
Entanglement has been studied extensively for understanding the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well-known monotones for quantifying entanglement such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. The study of these monotones has been a hot topic in quantum information [1-7], aimed at understanding the role of entanglement in this discipline. From any arbitrary pure quantum state, a mixed state can be obtained. A natural generalization of this observation would be to consider local operations and classical communication (LOCC) transformations between general pure states of two parties. Although this question is a little more difficult, a complete solution has been developed using the mathematical framework of majorization theory [8]. In this work, we analyze the relation between the entanglement monotones concurrence and negativity with respect to majorization for general two-level quantum systems of two particles.
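For pure two-qubit states both monotones have closed forms, which makes such comparisons easy to experiment with: the concurrence of a|00> + b|01> + c|10> + d|11> is C = 2|ad - bc|, and for this pure-state case negativity equals C/2 (the general mixed-state definitions require the spin-flipped density matrix and the partial transpose, respectively). A minimal sketch:

```python
import math

def concurrence_pure(a, b, c, d):
    """Concurrence C = 2|ad - bc| of a normalized pure two-qubit state
    a|00> + b|01> + c|10> + d|11> (amplitudes may be complex)."""
    return 2.0 * abs(a * d - b * c)

def negativity_pure(a, b, c, d):
    """For two-qubit pure states, negativity = concurrence / 2."""
    return 0.5 * concurrence_pure(a, b, c, d)

s = 1.0 / math.sqrt(2.0)
c_bell = concurrence_pure(s, 0.0, 0.0, s)      # maximally entangled: C = 1
c_prod = concurrence_pure(1.0, 0.0, 0.0, 0.0)  # product state: C = 0
```

Because both monotones are monotone functions of the same quantity on pure states, they order pure states identically; the interesting disagreements arise for mixed states, which is where majorization-based comparisons come in.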
Bornkamp, Björn; Ickstadt, Katja
2009-03-01
In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited both from a computational and mathematical point of view. The model is motivated by traditional nonlinear models for dose-response analysis, and provides possibilities to elicit informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose-response analysis.
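The core modeling idea, a monotone curve built as a nonnegative-weight mixture of shifted and scaled distribution functions, is easy to sketch. Logistic CDFs are used below for simplicity, whereas the paper argues for the two-sided power distribution; all parameter values are made up.

```python
import math

def logistic_cdf(x, loc, scale):
    """CDF of a logistic distribution with given location and scale."""
    return 1.0 / (1.0 + math.exp(-(x - loc) / scale))

def monotone_curve(x, weights, locs, scales, offset=0.0):
    """Nondecreasing function: offset plus a nonnegative-weight mixture
    of shifted/scaled CDFs, as in monotone dose-response modeling."""
    return offset + sum(w * logistic_cdf(x, m, s)
                        for w, m, s in zip(weights, locs, scales))

# Hypothetical dose-response: baseline 0.1 plus two CDF components,
# evaluated on a dose grid from 0 to 10:
curve = [monotone_curve(x / 10.0, [0.5, 0.3], [2.0, 6.0], [0.5, 1.0], 0.1)
         for x in range(0, 101)]
```

Since each CDF is nondecreasing and every weight is nonnegative, the mixture is monotone by construction, with range bounded by the offset and the offset plus the sum of the weights.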
Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods
Mozartova, A.
2015-05-01
One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.
International Nuclear Information System (INIS)
Wen, Chenyang; He, Shengyang; Hu, Peida; Bu, Changgen
2017-01-01
Attitude heading reference systems (AHRSs) based on micro-electromechanical system (MEMS) inertial sensors are widely used because of their low cost, light weight, and low power. However, low-cost AHRSs suffer from large inertial sensor errors. Therefore, experimental performance evaluation of MEMS-based AHRSs after system implementation is necessary. High-accuracy turntables can be used to verify the performance of MEMS-based AHRSs indoors, but they are expensive and unsuitable for outdoor tests. This study developed a low-cost two-axis rotating platform for indoor and outdoor attitude determination. A high-accuracy inclinometer and encoders were integrated into the platform to improve the achievable attitude test accuracy. An attitude error compensation method was proposed to calibrate the initial attitude errors caused by the movements and misalignment angles of the platform. The proposed attitude error determination method was examined through rotating experiments, which showed that the standard deviations of the pitch and roll errors were 0.050° and 0.090°, respectively. The pitch and roll errors both decreased to 0.024° when the proposed attitude error determination method was used. This decrease validates the effectiveness of the compensation method. Experimental results demonstrated that the integration of the inclinometer and encoders improved the performance of the low-cost, two-axis, rotating platform in terms of attitude accuracy. (paper)
A Multiscale Enrichment Procedure for Nonlinear Monotone Operators
Efendiev, Yalchin R.; Galvis, J.; Presho, M.; Zhou, J.
2014-01-01
Gkinis, Vasileios; Holme, Christian; Morris, Valerie; Thayer, Abigail Grace; Vaughn, Bruce; Kjaer, Helle Astrid; Vallelonga, Paul; Simonsen, Marius; Jensen, Camilla Marie; Svensson, Anders; Maffrezzoli, Niccolo; Vinther, Bo; Dallmayr, Remi
2017-04-01
We present a performance comparison study between two state-of-the-art cavity ring-down spectrometers (Picarro L2310-i, L2140-i). The comparison took place during the continuous flow analysis (CFA) campaign for the measurement of the Renland ice core, over a period of three months. Instant and complete vaporisation of the ice core melt stream, as well as of in-house water reference materials, is achieved by accurate control of microflows of liquid into a homemade calibration system, following simple principles of the Hagen-Poiseuille law. Both instruments share the same vaporisation unit in a configuration that minimises sample preparation discrepancies between the two analyses. We describe our SMOW-SLAP calibration and measurement protocols for such a CFA application and present quality control metrics acquired daily during the full period of the campaign. The results indicate an unprecedented performance for all three isotopic ratios (δ2H, δ17O, δ18O) in terms of precision, accuracy and resolution. We also comment on the precision and accuracy of the second-order excess parameters of HD16O and H217O over H218O (Dxs, Δ17O). To our knowledge these are the first reported CFA measurements at this level of precision and accuracy for all three isotopic ratios. Differences in the performance of the two instruments were carefully assessed during the measurement and are reported here. Our quality control protocols extend to the regime of low water mixing ratios, in which atmospheric vapour measurements often take place and in which cavity ring-down analysers show poorer performance due to lower signal-to-noise ratios. We address such issues and propose calibration protocols from which water vapour isotopic analyses can benefit.
Görgens, Christian; Guddat, Sven; Dib, Josef; Geyer, Hans; Schänzer, Wilhelm; Thevis, Mario
2015-01-01
To date, substances such as Mildronate (Meldonium) are not on the radar of anti-doping laboratories as the compound is not explicitly classified as prohibited. However, the anti-ischemic drug Mildronate demonstrates an increase in endurance performance of athletes, improved rehabilitation after exercise, protection against stress, and enhanced activations of central nervous system (CNS) functions. In the present study, the existing evidence of Mildronate's usage in sport, which is arguably not (exclusively) based on medicinal reasons, is corroborated by unequivocal analytical data allowing the estimation of the prevalence and extent of misuse in professional sports. Such data are vital to support decision-making processes, particularly regarding the ban on drugs in sport. Due to the growing body of evidence (black market products and athlete statements) concerning its misuse in sport, adequate test methods for the reliable identification of Mildronate are required, especially since the substance has been added to the 2015 World Anti-Doping Agency (WADA) monitoring program. In the present study, two approaches were established using an in-house synthesized labelled internal standard (Mildronate-D3 ). One aimed at the implementation of the analyte into routine doping control screening methods to enable its monitoring at the lowest possible additional workload for the laboratory, and another that is appropriate for the peculiar specifics of the analyte, allowing the unequivocal confirmation of findings using hydrophilic interaction liquid chromatography-high resolution/high accuracy mass spectrometry (HILIC-HRMS). Here, according to applicable regulations in sports drug testing, a full qualitative validation was conducted. The assay demonstrated good specificity, robustness (rRT=0.3%), precision (intra-day: 7.0-8.4%; inter-day: 9.9-12.9%), excellent linearity (R>0.99) and an adequate lower limit of detection (<10 ng/mL). Copyright © 2015 John Wiley & Sons, Ltd.
Görgens, Christian; Dib, Josef; Geyer, Hans; Schänzer, Wilhelm; Thevis, Mario
2015-01-01
To date, substances such as Mildronate (Meldonium) are not on the radar of anti‐doping laboratories as the compound is not explicitly classified as prohibited. However, the anti‐ischemic drug Mildronate demonstrates an increase in endurance performance of athletes, improved rehabilitation after exercise, protection against stress, and enhanced activations of central nervous system (CNS) functions. In the present study, the existing evidence of Mildronate's usage in sport, which is arguably not (exclusively) based on medicinal reasons, is corroborated by unequivocal analytical data allowing the estimation of the prevalence and extent of misuse in professional sports. Such data are vital to support decision‐making processes, particularly regarding the ban on drugs in sport. Due to the growing body of evidence (black market products and athlete statements) concerning its misuse in sport, adequate test methods for the reliable identification of Mildronate are required, especially since the substance has been added to the 2015 World Anti‐Doping Agency (WADA) monitoring program. In the present study, two approaches were established using an in‐house synthesized labelled internal standard (Mildronate‐D3). One aimed at the implementation of the analyte into routine doping control screening methods to enable its monitoring at the lowest possible additional workload for the laboratory, and another that is appropriate for the peculiar specifics of the analyte, allowing the unequivocal confirmation of findings using hydrophilic interaction liquid chromatography‐high resolution/high accuracy mass spectrometry (HILIC‐HRMS). Here, according to applicable regulations in sports drug testing, a full qualitative validation was conducted. The assay demonstrated good specificity, robustness (rRT=0.3%), precision (intra‐day: 7.0–8.4%; inter‐day: 9.9–12.9%), excellent linearity (R>0.99) and an adequate lower limit of detection (<10 ng/mL). © 2015 The Authors
Monotone numerical methods for finite-state mean-field games
Gomes, Diogo A.; Saude, Joao
2017-01-01
Here, we develop numerical methods for finite-state mean-field games (MFGs) that satisfy a monotonicity condition. MFGs are determined by a system of differential equations with initial and terminal boundary conditions. These non-standard conditions are the main difficulty in the numerical approximation of solutions. Using the monotonicity condition, we build a flow that is a contraction and whose fixed points solve the MFG, both for stationary and time-dependent problems. We illustrate our methods in an MFG modeling the paradigm-shift problem.
Existence, uniqueness, monotonicity and asymptotic behaviour of travelling waves for epidemic models
International Nuclear Information System (INIS)
Hsu, Cheng-Hsiung; Yang, Tzi-Sheng
2013-01-01
The purpose of this work is to investigate the existence, uniqueness, monotonicity and asymptotic behaviour of travelling wave solutions for a general epidemic model arising from the spread of an epidemic by oral–faecal transmission. First, we apply Schauder's fixed point theorem, combined with a supersolution and subsolution pair, to derive the existence of positive monotone monostable travelling wave solutions. Then, applying Ikehara's theorem, we determine the exponential rates at which travelling wave solutions converge to two different equilibria as the moving coordinate tends to positive and negative infinity, respectively. Finally, using the sliding method, we prove the uniqueness result, provided the travelling wave solutions satisfy some boundedness conditions. (paper)
Monotone numerical methods for finite-state mean-field games
Gomes, Diogo A.
2017-04-29
Here, we develop numerical methods for finite-state mean-field games (MFGs) that satisfy a monotonicity condition. MFGs are determined by a system of differential equations with initial and terminal boundary conditions. These non-standard conditions are the main difficulty in the numerical approximation of solutions. Using the monotonicity condition, we build a flow that is a contraction and whose fixed points solve the MFG, both for stationary and time-dependent problems. We illustrate our methods in an MFG modeling the paradigm-shift problem.
Directory of Open Access Journals (Sweden)
Lemieux Sébastien
2006-08-01
Background: The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. Results: On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. Conclusion: The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
Energy Technology Data Exchange (ETDEWEB)
Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)
2016-01-15
We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
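The Steffen spline mentioned above has a compact closed form: monotonicity-limited derivative estimates at the nodes, followed by piecewise cubic Hermite evaluation. A sketch (with a simplified one-sided secant at the endpoints instead of Steffen's full boundary formula; the data values are illustrative):

```python
import numpy as np

def steffen_slopes(x, y):
    """Node derivatives from Steffen's monotonicity-preserving scheme;
    endpoints use a one-sided secant (a simplification of Steffen's
    boundary formula)."""
    h = np.diff(x)
    s = np.diff(y) / h
    d = np.zeros_like(y)
    d[0], d[-1] = s[0], s[-1]
    for i in range(1, len(y) - 1):
        p = (s[i-1] * h[i] + s[i] * h[i-1]) / (h[i-1] + h[i])
        # Zero slope at local extrema; otherwise limited weighted mean.
        d[i] = (np.sign(s[i-1]) + np.sign(s[i])) * min(
            abs(s[i-1]), abs(s[i]), 0.5 * abs(p))
    return d

def steffen_eval(x, y, xq):
    """Piecewise cubic Hermite evaluation with Steffen slopes: C1,
    passes through all data points, no spurious oscillations."""
    d = steffen_slopes(x, y)
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i+1] - x[i]
    t = (np.asarray(xq, dtype=float) - x[i]) / h
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*y[i] + h10*h*d[i] + h01*y[i+1] + h11*h*d[i+1]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 1.0, 3.0, 4.0])  # flat segment: a cubic spline would overshoot
xq = np.linspace(0.0, 4.0, 101)
yq = steffen_eval(x, y, xq)
print(bool(np.all(np.diff(yq) >= -1e-12)))  # True: no overshoot on monotone data
```

The limiter keeps each slope within twice the adjacent secant slopes, which is what suppresses the oscillations the abstract refers to.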
International Nuclear Information System (INIS)
Maglevanny, I.I.; Smolar, V.A.
2016-01-01
We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
Masuyama, Hiroyuki
2014-01-01
In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally,...
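One common augmentation scheme for the truncation described above is last-column augmentation: keep the leading block of the transition matrix and return each row's lost probability mass to the final retained state. A sketch on a toy birth-death chain (the chain, truncation level and augmentation choice are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def last_column_augmentation(P, n):
    """Truncate a stochastic matrix to its leading n x n block and add
    each row's lost mass to the last column, restoring row sums to 1."""
    Q = P[:n, :n].copy()
    Q[:, -1] += 1.0 - Q.sum(axis=1)
    return Q

def stationary(Q):
    """Stationary distribution: solve pi Q = pi with sum(pi) = 1."""
    n = len(Q)
    A = np.vstack([Q.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Toy birth-death chain on {0,...,9}: down with prob 0.6, up with 0.4.
N = 10
P = np.zeros((N, N))
for i in range(N):
    P[i, max(i - 1, 0)] += 0.6
    P[i, min(i + 1, N - 1)] += 0.4

pi5 = stationary(last_column_augmentation(P, 5))
print(round(float(pi5.sum()), 6))  # ~1.0: a proper distribution on the truncated chain
```

The paper's contribution is the total-variation error bound between `pi5` and the full chain's stationary distribution under block monotonicity and geometric drift; the sketch only shows the truncation step itself.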
A Multiscale Enrichment Procedure for Nonlinear Monotone Operators
Efendiev, Yalchin R.
2014-03-11
In this paper, multiscale finite element methods (MsFEMs) and domain decomposition techniques are developed for a class of nonlinear elliptic problems with high-contrast coefficients. In the process, existing work on linear problems [Y. Efendiev, J. Galvis, R. Lazarov, S. Margenov and J. Ren, Robust two-level domain decomposition preconditioners for high-contrast anisotropic flows in multiscale media. Submitted.; Y. Efendiev, J. Galvis and X. Wu, J. Comput. Phys. 230 (2011) 937–955; J. Galvis and Y. Efendiev, SIAM Multiscale Model. Simul. 8 (2010) 1461–1483.] is extended to treat a class of nonlinear elliptic operators. The proposed method requires the solutions of (small dimension and local) nonlinear eigenvalue problems in order to systematically enrich the coarse solution space. Convergence of the method is shown to relate to the dimension of the coarse space (due to the enrichment procedure) as well as the coarse mesh size. In addition, it is shown that the coarse mesh spaces can be effectively used in two-level domain decomposition preconditioners. A number of numerical results are presented to complement the analysis.
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that achieves an excellent balance between cost, matching accuracy and real-time performance for power line inspection using a UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and higher matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
Wang, Raorao; Lu, Chenglin; Arola, Dwayne; Zhang, Dongsheng
2013-08-01
The aim of this study was to compare failure modes and fracture strength of ceramic structures using a combination of experimental and numerical methods. Twelve specimens with flat layer structures were fabricated from two types of ceramic systems (IPS e.max ceram/e.max press-CP and Vita VM9/Lava zirconia-VZ) and subjected to monotonic load to fracture with a tungsten carbide sphere. Digital image correlation (DIC) and fractography were used to analyze the fracture behavior of the specimens. Numerical simulation was also applied to analyze the stress distribution in these two types of dental ceramics. Quasi-plastic damage occurred beneath the indenter in porcelain in all cases. In general, the fracture strength of VZ specimens was greater than that of CP specimens. The crack initiation loads of VZ and CP were determined as 958 ± 50 N and 724 ± 36 N, respectively. Cracks were induced by plastic damage and were subsequently driven by tensile stress at the elastic/plastic boundary, extending downward toward the veneer/core interface, as observed with DIC at the specimen surface. Cracks penetrated into the e.max press core, which led to serious bulk fracture in CP crowns, while in VZ specimens cracks were deflected and extended along the porcelain/zirconia core interface without penetrating the zirconia core. The rupture loads for VZ and CP ceramics were determined as 1150 ± 170 N and 857 ± 66 N, respectively. Quasi-plastic deformation (damage) is responsible for crack initiation within porcelain in both types of crowns. Due to their intrinsic mechanical properties, the fracture behaviors of these two types of ceramics are different. The zirconia core, with high strength and high elastic modulus, has better resistance to fracture than the e.max core. © 2013 by the American College of Prosthodontists.
Saadeddin, Kamal; Abdel-Hafez, Mamoun F.; Jaradat, Mohammad A.; Jarrah, Mohammad Amin
2013-12-01
In this paper, a low-cost navigation system that fuses the measurements of the inertial navigation system (INS) and the global positioning system (GPS) receiver is developed. First, the system's dynamics are obtained based on a vehicle's kinematic model. Second, the INS and GPS measurements are fused using an extended Kalman filter (EKF) approach. Subsequently, an artificial intelligence based approach for the fusion of INS/GPS measurements is developed based on an Input-Delayed Adaptive Neuro-Fuzzy Inference System (IDANFIS). Experimental tests are conducted to demonstrate the performance of the two sensor fusion approaches. It is found that the use of the proposed IDANFIS approach achieves a reduction in the integration development time and an improvement in the estimation accuracy of the vehicle's position and velocity compared to the EKF based approach.
Directory of Open Access Journals (Sweden)
R.K. Mohanty
2014-01-01
In this paper, we report new three-level implicit super-stable methods of order two in time and four in space for the solution of hyperbolic damped wave equations in one, two and three space dimensions, subject to given appropriate initial and Dirichlet boundary conditions. We use uniform grid points in both the time and space directions. Our methods behave like fourth-order accurate when the grid size in the time direction is directly proportional to the square of the grid size in the space direction. The proposed methods are super stable. The resulting system of algebraic equations is solved by the Gauss elimination method. We discuss new alternating direction implicit (ADI) methods for two- and three-dimensional problems. Numerical results and a graphical representation of the numerical solution are presented to illustrate the accuracy of the proposed methods.
Energy Technology Data Exchange (ETDEWEB)
Dijken, Bart R.J. van [University of Groningen, University Medical Center Groningen Department of Radiology, Groningen (Netherlands); Laar, Peter Jan van; Hoorn, Anouk van der [University of Groningen, University Medical Center Groningen Department of Radiology, Groningen (Netherlands); University of Groningen, University Medical Center Groningen, Center for Medical Imaging-North East Netherlands, Groningen (Netherlands); Holtman, Gea A. [University of Groningen, University Medical Center Groningen, Department of General Practice, Groningen (Netherlands)
2017-10-15
Treatment response assessment in high-grade gliomas uses contrast enhanced T1-weighted MRI, but is unreliable. Novel advanced MRI techniques have been studied, but the accuracy is not well known. Therefore, we performed a systematic meta-analysis to assess the diagnostic accuracy of anatomical and advanced MRI for treatment response in high-grade gliomas. Databases were searched systematically. Study selection and data extraction were done by two authors independently. Meta-analysis was performed using a bivariate random effects model when ≥5 studies were included. Anatomical MRI (five studies, 166 patients) showed a pooled sensitivity and specificity of 68% (95%CI 51-81) and 77% (45-93), respectively. Pooled apparent diffusion coefficients (seven studies, 204 patients) demonstrated a sensitivity of 71% (60-80) and specificity of 87% (77-93). DSC-perfusion (18 studies, 708 patients) sensitivity was 87% (82-91) with a specificity of 86% (77-91). DCE-perfusion (five studies, 207 patients) sensitivity was 92% (73-98) and specificity was 85% (76-92). The sensitivity of spectroscopy (nine studies, 203 patients) was 91% (79-97) and specificity was 95% (65-99). Advanced techniques showed higher diagnostic accuracy than anatomical MRI, the highest for spectroscopy, supporting the use in treatment response assessment in high-grade gliomas. (orig.)
Giordano, Alessia; Stranieri, Angelica; Rossi, Gabriele; Paltrinieri, Saverio
2015-06-01
The ΔWBC (the ratio between DIFF and BASO counts of the Sysmex XT-2000iV), hereafter defined as ΔTNC (total nucleated cells), is high in effusions due to feline infectious peritonitis (FIP), as cells are entrapped in fibrin clots formed in the BASO reagent. Similar clots form in the Rivalta's test, a method with high diagnostic accuracy for FIP. The objective of this study was to determine the diagnostic accuracy for FIP and the optimal cutoff of ΔTNC. After a retrospective search of our database, DIFF and BASO counts and the ΔTNC from cats with and without FIP were compared with each other. Sensitivity, specificity, and positive and negative likelihood ratios (LR+, LR-) were calculated. A ROC curve was designed to determine the cutoff with the best sensitivity and specificity. Effusions from 20 FIP and 31 non-FIP cats were analyzed. The ΔTNC was significantly higher in FIP than in non-FIP effusions, and a cutoff of ΔTNC > 2.5 had 100% specificity. The ΔTNC has a high diagnostic accuracy for FIP-related effusions by providing an estimate of precipitable proteins, as the Rivalta's test does, in addition to the cell count. As fibrin clots result in falsely low BASO counts, the ΔTNC is preferable to the WBC count generated by the BASO channel alone in suspected FIP effusions. © 2015 American Society for Veterinary Clinical Pathology.
Luo, Shunlong; Sun, Yuan
2017-08-01
Quantifications of coherence have been intensively studied in recent years in the context of completely decoherent operations (i.e., von Neumann measurements, or equivalently, orthonormal bases). Here we investigate partial coherence (i.e., coherence in the context of partially decoherent operations such as Lüders measurements). A bona fide measure of partial coherence is introduced. As an application, we address the monotonicity problem of K-coherence (a quantifier of coherence in terms of Wigner-Yanase skew information) [Girolami, Phys. Rev. Lett. 113, 170401 (2014), 10.1103/PhysRevLett.113.170401], which was introduced to realize a measure of coherence as axiomatized by Baumgratz, Cramer, and Plenio [Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401]. Since K-coherence fails to meet the necessary requirement of monotonicity under incoherent operations, it is desirable to remedy this monotonicity problem. We show that if we modify the original measure by taking skew information with respect to the spectral decomposition of an observable, rather than the observable itself, as a measure of coherence, then the problem disappears, and the resultant coherence measure satisfies monotonicity. Some concrete examples are discussed and related open issues are indicated.
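The quantity underlying K-coherence is the Wigner-Yanase skew information, I(ρ, K) = -½ Tr([√ρ, K]²). A minimal numerical sketch (the qubit state and observable below are illustrative, not from the paper):

```python
import numpy as np

def skew_information(rho, K):
    """Wigner-Yanase skew information I(rho, K) = -1/2 Tr([sqrt(rho), K]^2)."""
    w, V = np.linalg.eigh(rho)
    sqrt_rho = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    C = sqrt_rho @ K - K @ sqrt_rho  # the commutator [sqrt(rho), K]
    return float(np.real(-0.5 * np.trace(C @ C)))

sx = np.array([[0, 1], [1, 0]], dtype=complex)       # observable K = sigma_x
rho = np.diag([0.75, 0.25]).astype(complex)          # state diagonal in the z basis
# Closed form for this example: 1 - 2*sqrt(0.75*0.25) ≈ 0.134
print(round(skew_information(rho, sx), 4))  # 0.134
```

For a pure state the skew information reduces to the variance of K, and it vanishes when ρ commutes with K, which is what makes it a natural (if, as the abstract notes, not fully monotone) coherence quantifier.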
On the Computation of Optimal Monotone Mean-Variance Portfolios via Truncated Quadratic Utility
Ales Cerný; Fabio Maccheroni; Massimo Marinacci; Aldo Rustichini
2008-01-01
We report a surprising link between optimal portfolios generated by a special type of variational preferences called divergence preferences (cf. [8]) and optimal portfolios generated by classical expected utility. As a special case we connect optimization of truncated quadratic utility (cf. [2]) to the optimal monotone mean-variance portfolios (cf. [9]), thus simplifying the computation of the latter.
Monotonous property of non-oscillations of the damped Duffing's equation
International Nuclear Information System (INIS)
Feng Zhaosheng
2006-01-01
In this paper, we give a qualitative study of the damped Duffing equation by means of the qualitative theory of planar systems. Under certain parametric conditions, the monotonous property of the bounded non-oscillations is obtained. Explicit exact solutions are obtained by a direct method, and an application of this approach to a reaction-diffusion equation is presented.
A note on profit maximization and monotonicity for inbound call centers
Koole, G.M.; Pot, S.A.
2011-01-01
We consider an inbound call center with a fixed reward per call and communication and agent costs. By controlling the number of lines and the number of agents, we can maximize the profit. Abandonments are included in our performance model. Monotonicity results for the maximization problem are
DEFF Research Database (Denmark)
Garde, Henrik
2018-01-01
. For a fair comparison, exact matrix characterizations are used when probing the monotonicity relations to avoid errors from numerical solution to PDEs and numerical integration. Using a special factorization of the Neumann-to-Dirichlet map also makes the non-linear method as fast as the linear method...
ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mesiar, Radko
2014-01-01
Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf
Monotonic Set-Extended Prefix Rewriting and Verification of Recursive Ping-Pong Protocols
DEFF Research Database (Denmark)
Delzanno, Giorgio; Esparza, Javier; Srba, Jiri
2006-01-01
of messages) some verification problems become decidable. In particular we give an algorithm to decide control state reachability, a problem related to security properties like secrecy and authenticity. The proof is via a reduction to a new prefix rewriting model called Monotonic Set-extended Prefix rewriting...
A note on monotone solutions for a nonconvex second-order functional differential inclusion
Directory of Open Access Journals (Sweden)
Aurelian Cernea
2011-12-01
The existence of monotone solutions for a second-order functional differential inclusion with Carathéodory perturbation is obtained in the case when the multifunction that defines the inclusion is upper semicontinuous with compact values and contained in the Fréchet subdifferential of a φ-convex function of order two.
Almost monotonicity formulas for elliptic and parabolic operators with variable coefficients
Matevosyan, Norayr; Petrosyan, Arshak
2010-01-01
In this paper we extend the results of Caffarelli, Jerison, and Kenig [Ann. of Math. (2)155 (2002)] and Caffarelli and Kenig [Amer. J. Math.120 (1998)] by establishing an almost monotonicity estimate for pairs of continuous functions satisfying u
Directory of Open Access Journals (Sweden)
Boubakari Ibrahimou
2013-01-01
maximal monotone with and . Using the topological degree theory developed by Kartsatos and Quarcoo we study the eigenvalue problem where the operator is a single-valued of class . The existence of continuous branches of eigenvectors of infinite length then could be easily extended to the case where the operator is multivalued and is investigated.
Chen, Baojiang; Qin, Jing
2014-05-10
In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may depend on some function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, one often employs the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
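The PAVA referred to above merges adjacent blocks whose weighted means violate the ordering until the fit is nondecreasing. A minimal sketch (the input values are illustrative):

```python
def pava(y, weights=None):
    """Pool-adjacent-violators algorithm for isotonic (nondecreasing)
    weighted least-squares regression; returns the fitted values."""
    w = list(weights) if weights is not None else [1.0] * len(y)
    # Each block holds (mean, total weight, count); merge while the
    # monotonicity constraint is violated.
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

The violating pair (3, 2) is pooled to its mean 2.5; already-ordered data pass through unchanged, which is the property the paper exploits once its empirical-likelihood step restores the constraints.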
Characteristic of monotonicity of Orlicz function spaces equipped with the Orlicz norm
Czech Academy of Sciences Publication Activity Database
Foralewski, P.; Hudzik, H.; Kaczmarek, R.; Krbec, Miroslav
2013-01-01
Roč. 53, č. 2 (2013), s. 421-432 ISSN 0373-8299 R&D Projects: GA ČR GAP201/10/1920 Institutional support: RVO:67985840 Keywords : Orlicz space * Köthe space * characteristic of monotonicity Subject RIV: BA - General Mathematics
Non-monotonic reasoning in conceptual modeling and ontology design: A proposal
CSIR Research Space (South Africa)
Casini, G
2013-06-01
Full Text Available 2nd International Workshop on Ontologies and Conceptual Modeling (Onto.Com 2013), Valencia, Spain, 17-21 June 2013. Non-monotonic reasoning in conceptual modeling and ontology design: A proposal. Giovanni Casini and Alessandro Mosca...
CFD simulation of simultaneous monotonic cooling and surface heat transfer coefficient
International Nuclear Information System (INIS)
Mihálka, Peter; Matiašovský, Peter
2016-01-01
The monotonic heating regime method for the determination of thermal diffusivity is based on the analysis of an unsteady-state (stabilised) thermal process characterised by the independence of the space-time temperature distribution from the initial conditions. In the first kind of monotonic regime, a sample of simple geometry is heated or cooled at constant ambient temperature. The determination of thermal diffusivity requires the determination of the rate of temperature change and the simultaneous determination of the first eigenvalue. According to the characteristic equation, the first eigenvalue is a function of the Biot number, which is defined by the surface heat transfer coefficient and the thermal conductivity of the analysed material. Knowing the surface heat transfer coefficient and the first eigenvalue, the thermal conductivity can be determined. The surface heat transfer coefficient during the monotonic regime can be determined by continuous measurement of the long-wave radiative heat flow and photoelectric measurement of the air refractive index gradient in the boundary layer. A CFD simulation of the cooling process was carried out to analyse the local convective and radiative heat transfer coefficients in more detail, and the influence of the ambient air flow was analysed. The obtained eigenvalues and the corresponding surface heat transfer coefficient values enable the determination of the thermal conductivity of the analysed specimen together with its thermal diffusivity during a monotonic heating regime.
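For the simplest geometry, an infinite slab cooled symmetrically, the characteristic equation linking the Biot number Bi and the first eigenvalue μ₁ is μ tan μ = Bi, with μ₁ in (0, π/2). A hedged sketch of the two determinations mentioned above (bisection for μ₁, then the regular-regime relation m = a(μ₁/L)², where m is the cooling rate and L the half-thickness; function names are illustrative and the record does not specify the sample geometry):

```python
import math

def first_eigenvalue(biot):
    """First root of the slab characteristic equation mu*tan(mu) = Bi,
    located in (0, pi/2); found by bisection (mu*tan(mu) is increasing)."""
    lo, hi = 1e-12, math.pi / 2 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.tan(mid) < biot:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def diffusivity_from_cooling_rate(m, half_thickness, biot):
    """Regular-regime relation m = a * (mu1 / L)**2, solved for a."""
    mu1 = first_eigenvalue(biot)
    return m * (half_thickness / mu1) ** 2
```

For small Bi, μ₁ ≈ √Bi; for Bi → ∞ (surface held at ambient temperature), μ₁ → π/2.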
Alternans by non-monotonic conduction velocity restitution, bistability and memory
International Nuclear Information System (INIS)
Kim, Tae Yun; Hong, Jin Hee; Heo, Ryoun; Lee, Kyoung J
2013-01-01
Conduction velocity (CV) restitution is a key property that characterizes any medium supporting traveling waves. It reflects not only the dynamics of the individual constituents but also the coupling mechanism that mediates their interaction. Recent studies have suggested that cardiac tissues, which have a non-monotonic CV-restitution property, can support alternans, a period-2 oscillatory response of periodically paced cardiac tissue. This study finds that single-hump, non-monotonic, CV-restitution curves are a common feature of in vitro cultures of rat cardiac cells. We also find that the Fenton–Karma model, one of the well-established mathematical models of cardiac tissue, supports a very similar non-monotonic CV restitution in a physiologically relevant parameter regime. Surprisingly, the mathematical model as well as the cell cultures support bistability and show cardiac memory that tends to work against the generation of an alternans. Bistability was realized by adopting two different stimulation protocols, ‘S1S2’, which produces a period-1 wave train, and ‘alternans-pacing’, which favors a concordant alternans. Thus, we conclude that the single-hump non-monotonicity in the CV-restitution curve is not sufficient to guarantee a cardiac alternans, since cardiac memory interferes and the way the system is paced matters. (paper)
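The link between restitution steepness and alternans can be illustrated with a toy one-dimensional pacing map (an illustration with made-up constants, not the Fenton-Karma model or the cultured-cell data, and it ignores the CV restitution and memory effects the study shows matter): APD_{n+1} = f(T − APD_n), where T is the pacing period and f a restitution curve; a period-2 response appears when the slope of f at the fixed point exceeds one.

```python
import math

def apd_restitution(di, apd_max=300.0, tau=100.0):
    """Toy exponential APD restitution curve f(DI), in ms."""
    return apd_max * (1.0 - math.exp(-di / tau))

def pace(period, n_beats=200, apd0=200.0):
    """Iterate APD_{n+1} = f(period - APD_n); return the last two
    action potential durations (equal => period-1, split => alternans)."""
    apd = apd0
    history = []
    for _ in range(n_beats):
        di = max(period - apd, 1.0)  # diastolic interval, crudely clamped
        apd = apd_restitution(di)
        history.append(apd)
    return history[-2], history[-1]
```

With these constants, slow pacing (T = 400 ms) settles to a period-1 rhythm, while fast pacing (T = 280 ms) produces a large APD alternation.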
On the Monotonicity and Log-Convexity of a Four-Parameter Homogeneous Mean
Directory of Open Access Journals (Sweden)
Yang Zhen-Hang
2008-01-01
Full Text Available A four-parameter homogeneous mean is defined by another approach. Criteria for its monotonicity and logarithmic convexity are presented, and three refined chains of inequalities for two-parameter mean values are deduced, which contain many new and classical inequalities for means.
On utilization bounds for a periodic resource under rate monotonic scheduling
Renssen, van A.M.; Geuns, S.J.; Hausmans, J.P.H.M.; Poncin, W.; Bril, R.J.
2009-01-01
This paper revisits utilization bounds for a periodic resource under the rate monotonic (RM) scheduling algorithm. We show that the existing utilization bound, as presented in [8, 9], is optimistic. We subsequently show that by viewing the unavailability of the periodic resource as a deferrable
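For context, the classical Liu-Layland utilization bound for RM scheduling of n independent periodic tasks on a dedicated (always-available) processor, which periodic-resource bounds such as the one revisited above generalize:

```python
def liu_layland_bound(n):
    """Liu-Layland bound: a task set is RM-schedulable on a dedicated
    processor if its total utilization is at most n * (2**(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)
```

The bound decreases monotonically from 1.0 at n = 1 toward ln 2 ≈ 0.693 as n grows.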
Directory of Open Access Journals (Sweden)
San-Yang Liu
2014-01-01
Full Text Available Two unified frameworks of some sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proven under some mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
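A bare-bones sketch of the Solodov-Svaiter hyperplane projection step that these methods build on (illustrative Python with a simple steepest-descent search direction and backtracking, not the authors' conjugate gradient variants):

```python
import math

def solodov_svaiter(F, x0, beta=0.7, sigma=1e-4, tol=1e-8, max_iter=500):
    """Hyperplane projection method for a monotone equation F(x) = 0.
    Vectors are plain Python lists."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    x = list(x0)
    for _ in range(max_iter):
        Fx = F(x)
        if math.sqrt(dot(Fx, Fx)) < tol:
            break
        # backtracking: find t with <F(x - t*Fx), Fx> >= sigma*t*||Fx||^2
        t = 1.0
        while True:
            z = [xi - t * fi for xi, fi in zip(x, Fx)]
            Fz = F(z)
            if dot(Fz, Fx) >= sigma * t * dot(Fx, Fx):
                break
            t *= beta
        if dot(Fz, Fz) == 0.0:  # trial point already solves F(z) = 0
            return z
        # project x onto the separating hyperplane {y : <Fz, y - z> = 0}
        step = dot(Fz, [xi - zi for xi, zi in zip(x, z)]) / dot(Fz, Fz)
        x = [xi - step * fi for xi, fi in zip(x, Fz)]
    return x
```

Monotonicity of F guarantees that the solution set lies in the half-space cut off by the hyperplane, so each projection cannot move the iterate away from a solution.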
A Min-max Relation for Monotone Path Systems in Simple Regions
DEFF Research Database (Denmark)
Cameron, Kathleen
1996-01-01
A monotone path system (MPS) is a finite set of pairwise disjoint paths (polygonal arcs) in the plane such that every horizontal line intersects each of the paths in at most one point. We consider a simple polygon in the xy-plane which bounds the simple polygonal (closed) region D. Let T and B be two...
Monotonicity of the von Neumann entropy expressed as a function of Rényi entropies
Fannes, Mark
2013-01-01
The von Neumann entropy of a density matrix of dimension d, expressed in terms of the first d-1 integer order Rényi entropies, is monotonically increasing in Rényi entropies of even order and decreasing in those of odd order.
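As a numerical aside (standard facts, not the paper's construction), the Rényi entropies of a density matrix's spectrum are easy to compute, and their familiar monotone decrease in the order α brackets the von Neumann entropy:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy S_alpha = log(sum p_i**alpha) / (1 - alpha) of a
    probability vector p (natural log); requires alpha != 1."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def von_neumann_entropy(p):
    """Von Neumann (Shannon) entropy of the spectrum p: the alpha -> 1
    limit of the Renyi family."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

For a uniform spectrum all Rényi entropies coincide; for a non-uniform one, S_α strictly decreases in α.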
Directory of Open Access Journals (Sweden)
Hongying Zhang
2016-10-01
Full Text Available Given the low accuracy of traditional remote sensing image processing software when orthorectifying satellite images that cover mountainous areas, and in order to make full use of the mutually compatible and complementary characteristics of the remote sensing image processing tools PCI-RPC (Rational Polynomial Coefficients) and ArcGIS-Spline, this study puts forward a new, operational and effective image processing procedure to improve the accuracy of image orthorectification. The new procedure first processes raw image data into an orthorectified image using PCI with the RPC model (PCI-RPC), and the orthorectified image is then further processed using ArcGIS with the Spline tool (ArcGIS-Spline). We used high-resolution CBERS-02C satellite images (HR1 and HR2 scenes, with a pixel size of 2 m) acquired over Yangyuan County in Hebei Province of China to test the procedure. In this study, when separately using the PCI-RPC and ArcGIS-Spline tools to process the HR1/HR2 raw images directly, the orthorectification accuracies (root mean square errors, RMSEs) for the HR1/HR2 images were 2.94 m/2.81 m and 4.65 m/4.41 m, respectively. However, when using our newly proposed procedure, the corresponding RMSEs could be reduced to 1.10 m/1.07 m. The experimental results demonstrated that the new image processing procedure, which integrates the PCI-RPC and ArcGIS-Spline tools, could significantly improve image orthorectification accuracy. Therefore, in terms of practice, the new procedure has the potential to use existing software products to easily improve image orthorectification accuracy.
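For reference, one common way to compute a planimetric RMSE from checkpoint residuals (a generic sketch; the record does not state which RMSE convention the study uses):

```python
import math

def rmse(dx, dy):
    """Planimetric RMSE over checkpoint residuals (dx_i, dy_i) in metres:
    sqrt(mean of dx^2 + dy^2)."""
    n = len(dx)
    return math.sqrt(sum(x * x + y * y for x, y in zip(dx, dy)) / n)
```

Per-axis RMSEs (x and y reported separately) are the other convention seen in orthorectification accuracy reports.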
International Nuclear Information System (INIS)
Codorniu Pujals, Daniel
2013-01-01
Raman spectroscopy is one of the most widely used experimental techniques for studying irradiated carbon nanostructures, in particular graphene, due to its high sensitivity to the presence of defects in the crystalline lattice. Special attention has been given to the variation of the intensity of the Raman D-band of graphene with the concentration of defects produced by irradiation. There is now ample experimental evidence of the non-monotonous character of that dependence, but the explanation of this behavior is still controversial. In the present work we developed a simplified mathematical model to obtain a functional relationship between these two magnitudes and showed that the non-monotonous dependence is intrinsic to the nature of the D-band and is not necessarily linked to amorphization processes. The obtained functional dependence was used to fit experimental data taken from other authors. The determination coefficient of the fit was 0.96.
Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit
2017-06-01
Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5%≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations with that of a population-specific, high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.
Hernandes, Vinicius Veri; Franco, Marcos Fernado; Santos, Jandyson Machado; Melendez-Perez, Jose J; de Morais, Damila Rodrigues; Rocha, Werickson Fortunato de Carvalho; Borges, Rodrigo; de Souza, Wanderley; Zacca, Jorge Jardim; Logrado, Lucio Paulo Lima; Eberlin, Marcos Nogueira; Correa, Deleon Nascimento
2015-04-01
Ammonium nitrate fuel oil (ANFO) is an explosive used in many civil applications. In Brazil, ANFO has unfortunately also been used in criminal attacks, mainly in automated teller machine (ATM) explosions. In this paper, we describe a detailed characterization of the ANFO composition and its two main constituents (diesel and a nitrate explosive) using high resolution and accuracy mass spectrometry performed on an FT-ICR-mass spectrometer with electrospray ionization (ESI(±)-FTMS) in both the positive and negative ion modes. Via ESI(-)-MS, an ion marker for ANFO was characterized. Using a direct and simple ambient desorption/ionization technique, i.e., easy ambient sonic-spray ionization mass spectrometry (EASI-MS), in a simpler, lower accuracy but robust single quadrupole mass spectrometer, the ANFO ion marker was directly detected from the surface of banknotes collected from ATM explosion theft. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Esteban Müller, J F; Shaposhnikova, E; Valuch, D; Mastoridis, T
2014-01-01
Electron cloud effects such as heat load in the cryogenic system, pressure rise and beam instabilities are among the main limitations for the LHC operation with 25 ns spaced bunches. A new observation tool was developed to monitor the e-cloud activity and has been successfully used in the LHC during Run 1 (2010-2012). The power loss of each bunch due to the e-cloud can be estimated using very precise bunch-by-bunch measurement of the synchronous phase shift. In order to achieve the required accuracy, corrections for reflection in the cables and some systematic errors need to be applied followed by a post-processing of the measurements. Results clearly show the e-cloud build-up along the bunch trains and its evolution during each LHC fill as well as from fill to fill. Measurements during the 2012 LHC scrubbing run reveal a progressive reduction in the e-cloud activity and therefore a decrease in the secondary electron yield (SEY). The total beam power loss can be computed as a sum of the contributions from all...
High-accuracy phase-field models for brittle fracture based on a new family of degradation functions
Sargado, Juan Michael; Keilegavlen, Eirik; Berre, Inga; Nordbotten, Jan Martin
2018-02-01
Phase-field approaches to fracture based on energy minimization principles have been rapidly gaining popularity in recent years, and are particularly well-suited for simulating crack initiation and growth in complex fracture networks. In the phase-field framework, the surface energy associated with crack formation is calculated by evaluating a functional defined in terms of a scalar order parameter and its gradients. These in turn describe the fractures in a diffuse sense following a prescribed regularization length scale. Imposing stationarity of the total energy leads to a coupled system of partial differential equations that enforce stress equilibrium and govern phase-field evolution. These equations are coupled through an energy degradation function that models the loss of stiffness in the bulk material as it undergoes damage. In the present work, we introduce a new parametric family of degradation functions aimed at increasing the accuracy of phase-field models in predicting critical loads associated with crack nucleation as well as the propagation of existing fractures. An additional goal is the preservation of linear elastic response in the bulk material prior to fracture. Through the analysis of several numerical examples, we demonstrate the superiority of the proposed family of functions to the classical quadratic degradation function that is used most often in the literature.
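The classical quadratic degradation function mentioned above can be written down directly (a generic sketch; the small residual k that keeps the fully damaged stiffness positive is a common numerical device and an assumption here, and the paper's new parametric family is not reproduced):

```python
def quadratic_degradation(d, k=1e-9):
    """Classical quadratic degradation g(d) = (1 - d)**2 * (1 - k) + k
    for a phase-field damage variable d in [0, 1]:
    g(0) = 1 (intact stiffness), g(1) = k (residual stiffness)."""
    return (1.0 - d) ** 2 * (1.0 - k) + k
```

The degraded stress in the bulk is then g(d) times the undamaged elastic stress; the paper's criticism is that this quadratic form degrades stiffness well before the critical load, which motivates its new family of functions.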
Huang, Haoqian; Chen, Xiyuan; Zhang, Bo; Wang, Jian
2017-01-01
The underwater navigation system, consisting mainly of MEMS inertial sensors, is a key technology for the wide application of underwater gliders and plays an important role in achieving high-accuracy navigation and positioning over long periods of time. However, navigation errors accumulate over time because of the inherent errors of inertial sensors, especially for the MEMS-grade IMU (Inertial Measurement Unit) generally used in gliders. A dead reckoning module is added to compensate for these errors. In the complicated underwater environment, the performance of MEMS sensors degrades sharply and the errors become much larger. It is difficult to establish an accurate and fixed error model for the inertial sensor, and it is therefore very hard to improve the accuracy of the navigation information calculated from the sensors. To solve this problem, a more suitable filter that integrates the multi-model method with an EKF approach can be designed according to different error models to give the optimal estimate of the state. The key parameters of the error models can be used to determine the corresponding filter. The Adams explicit formula, which has the advantage of high-precision prediction, is simultaneously fused into the above filter to achieve a further improvement in attitude estimation accuracy. The proposed algorithm has been proved through theoretical analyses and has been tested by both vehicle experiments and lake trials. Results show that the proposed method has better accuracy and effectiveness in attitude estimation than the other methods mentioned in the paper for inertial navigation applied to underwater gliders. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Meditation experience predicts introspective accuracy.
Directory of Open Access Journals (Sweden)
Kieran C R Fox
Full Text Available The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1-15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a 'body-scanning' meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices.
Directory of Open Access Journals (Sweden)
S. S. Chang
2014-05-01
Full Text Available Modulated high-frequency (HF) heating of the ionosphere provides a feasible means of artificially generating extremely low-frequency (ELF)/very low-frequency (VLF) whistler waves, which can leak into the inner magnetosphere and contribute to resonant interactions with high-energy electrons in the plasmasphere. By ray tracing the magnetospheric propagation of ELF/VLF emissions artificially generated at low-invariant latitudes, we evaluate the relativistic electron resonant energies along the ray paths and show that propagating artificial ELF/VLF waves can resonate with electrons from ~ 100 keV to ~ 10 MeV. We further implement test particle simulations to investigate the effects of resonant scattering of energetic electrons due to triggered monotonic/single-frequency ELF/VLF waves. The results indicate that within the period of a resonance timescale, changes in electron pitch angle and kinetic energy are stochastic, and the overall effect is cumulative, that is, the changes averaged over all test electrons increase monotonically with time. The localized rates of wave-induced pitch-angle scattering and momentum diffusion in the plasmasphere are analyzed in detail for artificially generated ELF/VLF whistlers with an observable in situ amplitude of ~ 10 pT. While the local momentum diffusion of relativistic electrons is small, with a rate of ~ 10−7 s−1, the local pitch-angle scattering can be intense near the loss cone, with a rate of ~ 10−4 s−1. Our investigation further supports the feasibility of artificial triggering of ELF/VLF whistler waves for removal of high-energy electrons at lower L shells within the plasmasphere. Moreover, our test particle simulation results show quantitatively good agreement with quasi-linear diffusion coefficients, confirming the applicability of both methods to evaluate the resonant diffusion effect of artificially generated ELF/VLF whistlers.
Measurement of high-energy (10–60 keV) x-ray spectral line widths with eV accuracy
Energy Technology Data Exchange (ETDEWEB)
Seely, J. F., E-mail: seelyjf@gmail.com; Feldman, U. [Artep Inc., 2922 Excelsior Springs Court, Ellicott City, Maryland 21042 (United States); Glover, J. L.; Hudson, L. T.; Ralchenko, Y.; Henins, Albert [National Institute of Standards and Technology, Gaithersburg, Maryland 20899 (United States); Pereira, N. [Ecopulse Inc., P. O. Box 528, Springfield, Virginia 22152 (United States); Di Stefano, C. A.; Kuranz, C. C.; Drake, R. P. [University of Michigan, Ann Arbor, Michigan 48109 (United States); Chen, Hui; Williams, G. J.; Park, J. [Lawrence Livermore National Laboratory, Livermore, California 94551 (United States)
2014-11-15
A high resolution crystal spectrometer utilizing a crystal in transmission geometry has been developed and experimentally optimized to measure the widths of emission lines in the 10–60 keV energy range with eV accuracy. The spectrometer achieves high spectral resolution by utilizing crystal planes with small lattice spacings (down to 2d = 0.099 nm), a large crystal bending radius and Rowland circle diameter (965 mm), and an image plate detector with high spatial resolution (60 μm in the case of the Fuji TR image plate). High resolution W L-shell and K-shell laboratory test spectra in the 10–60 keV range and Ho K-shell spectra near 47 keV recorded at the LLNL Titan laser facility are presented. The Ho K-shell spectra are the highest resolution hard x-ray spectra recorded from a solid target irradiated by a high-intensity laser.
DEFF Research Database (Denmark)
Foglia, Aligi; Gottardi, Guido; Govoni, Laura
2015-01-01
The response of bucket foundations on sand subjected to planar monotonic and cyclic loading is investigated in the paper. Thirteen monotonic and cyclic laboratory tests on a skirted footing model having a 0.3 m diameter and embedment ratio equal to 1 are presented. The loading regime reproduces t...
Energy Technology Data Exchange (ETDEWEB)
Gimelli, Alessia; Genovesi, Dario; Giorgetti, Assuero; Marzullo, Paolo [CNR, Fondazione Toscana Gabriele Monasterio, Pisa (Italy); Bottai, Matteo [University of South Carolina, Division of Biostatistics, Columbia, SC (United States); Karolinska Institutet, Division of Biostatistics, Stockholm (Sweden); Di Martino, Fabio [AOUP, UO Fisica Sanitaria, Pisa (Italy)
2012-01-15
Appropriate use of SPECT imaging is regulated by evidence-based guidelines and appropriateness criteria in an effort to limit the burden of radiation administered to patients. We aimed at establishing whether the use of a low dose for stress-rest single-day nuclear myocardial perfusion imaging on an ultrafast (UF) cardiac gamma camera using cadmium-zinc-telluride solid-state detectors could be used routinely with the same accuracy obtained with standard doses and conventional cameras. To this purpose, 137 consecutive patients (mean age 61 ± 8 years) with known or suspected coronary artery disease (CAD) were enrolled. They underwent single-day low-dose stress-rest myocardial perfusion imaging using UF SPECT and invasive coronary angiography. Patients underwent the first scan with a 7-min acquisition time 10 min after the end of the stress protocol (dose range 185 to 222 MBq of 99mTc-tetrofosmin). The rest scan (dose range 370 to 444 MBq of 99mTc-tetrofosmin) was acquired with a 6-min acquisition time. The mean summed stress scores (SSS) and mean summed rest scores (SRS) were obtained semiquantitatively. Coronary angiograms showed significant epicardial CAD in 83% of patients. Mean SSS and SRS were 10 ± 5 and 3 ± 3, respectively. Overall the area under the ROC curve for the SSS values was 0.904, while the areas under the ROC curves for each vascular territory were 0.982 for the left anterior descending artery, 0.931 for the left circumflex artery and 0.889 for the right coronary artery. This pilot study demonstrated the feasibility of a low-dose single-day stress-rest fasting protocol performed using UF SPECT, with good sensitivity and specificity in detecting CAD at low patient exposure, opening new perspectives in the use of myocardial perfusion in ischaemic patients. (orig.)
International Nuclear Information System (INIS)
Gimelli, Alessia; Genovesi, Dario; Giorgetti, Assuero; Marzullo, Paolo; Bottai, Matteo; Di Martino, Fabio
2012-01-01
Appropriate use of SPECT imaging is regulated by evidence-based guidelines and appropriateness criteria in an effort to limit the burden of radiation administered to patients. We aimed at establishing whether the use of a low dose for stress-rest single-day nuclear myocardial perfusion imaging on an ultrafast (UF) cardiac gamma camera using cadmium-zinc-telluride solid-state detectors could be used routinely with the same accuracy obtained with standard doses and conventional cameras. To this purpose, 137 consecutive patients (mean age 61 ± 8 years) with known or suspected coronary artery disease (CAD) were enrolled. They underwent single-day low-dose stress-rest myocardial perfusion imaging using UF SPECT and invasive coronary angiography. Patients underwent the first scan with a 7-min acquisition time 10 min after the end of the stress protocol (dose range 185 to 222 MBq of 99mTc-tetrofosmin). The rest scan (dose range 370 to 444 MBq of 99mTc-tetrofosmin) was acquired with a 6-min acquisition time. The mean summed stress scores (SSS) and mean summed rest scores (SRS) were obtained semiquantitatively. Coronary angiograms showed significant epicardial CAD in 83% of patients. Mean SSS and SRS were 10 ± 5 and 3 ± 3, respectively. Overall the area under the ROC curve for the SSS values was 0.904, while the areas under the ROC curves for each vascular territory were 0.982 for the left anterior descending artery, 0.931 for the left circumflex artery and 0.889 for the right coronary artery. This pilot study demonstrated the feasibility of a low-dose single-day stress-rest fasting protocol performed using UF SPECT, with good sensitivity and specificity in detecting CAD at low patient exposure, opening new perspectives in the use of myocardial perfusion in ischaemic patients. (orig.)
Directory of Open Access Journals (Sweden)
Xiao-Yan Yue
Full Text Available Accurate and timely glucose monitoring is essential in intensive care units. Real-time continuous glucose monitoring systems (CGMS) have been advocated for many years to improve glycemic management in critically ill patients. In order to determine the effect of calibration time on the accuracy of CGMS, real-time subcutaneous CGMS was used in 18 critically ill patients. The CGMS sensor was calibrated with blood glucose measurements from a blood gas/glucose analyzer every 12 hours. Venous blood was sampled every 2 to 4 hours, and glucose concentration was measured by a standard central laboratory device (CLD) and by the blood gas/glucose analyzer. With the CLD measurement as reference, the relative absolute differences (mean ± SD) of CGMS and of the blood gas/glucose analyzer were 14.4% ± 12.2% and 6.5% ± 6.2%, respectively. The percentage of matched points in Clarke error grid zone A was 74.8% for CGMS and 98.4% for the blood gas/glucose analyzer. The relative absolute difference of CGMS readings obtained within 6 hours after sensor calibration (8.8% ± 7.2%) was significantly less than that between 6 and 12 hours after calibration (20.1% ± 13.5%, p<0.0001). The percentage of matched points in Clarke error grid zone A was also significantly higher for data sets within 6 hours after calibration (92.4% versus 57.1%, p<0.0001). In conclusion, real-time subcutaneous CGMS is accurate for glucose monitoring in critically ill patients. The CGMS sensor should be calibrated at intervals of no more than 6 hours, regardless of the interval recommended by the manufacturer.
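The accuracy metric used above, the relative absolute difference (RAD) against a laboratory reference, is simple to state (a generic sketch with illustrative values, not the study's data):

```python
def relative_absolute_difference(ref, test):
    """Per-sample relative absolute difference |test - ref| / ref."""
    return [abs(t - r) / r for r, t in zip(ref, test)]

def mean_rad_percent(ref, test):
    """Mean RAD expressed as a percentage, as reported in CGM accuracy studies."""
    rads = relative_absolute_difference(ref, test)
    return 100.0 * sum(rads) / len(rads)
```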
The accuracy of 68Ga-PSMA PET/CT in primary lymph node staging in high-risk prostate cancer
Energy Technology Data Exchange (ETDEWEB)
Oebek, Can; Doganca, Tuenkut [Acibadem Taksim Hospital, Department of Urology, Istanbul (Turkey); Demirci, Emre [Sisli Etfal Training and Research Hospital, Department of Nuclear Medicine, Istanbul (Turkey); Ocak, Meltem [Istanbul University, Faculty of Pharmacy, Department of Pharmaceutical Technology, Istanbul (Turkey); Kural, Ali Riza [Acibadem University, Department of Urology, Istanbul (Turkey); Yildirim, Asif [Istanbul Medeniyet University, Department of Urology, Istanbul (Turkey); Yuecetas, Ugur [Istanbul Training and Research Hospital, Department of Urology, Istanbul (Turkey); Demirdag, Cetin [Istanbul University, Cerrahpasa School of Medicine, Department of Urology, Istanbul (Turkey); Erdogan, Sarper M. [Istanbul University, Cerrahpasa School of Medicine, Department of Public Health, Istanbul (Turkey); Kabasakal, Levent [Istanbul University, Cerrahpasa School of Medicine, Department of Nuclear Medicine, Istanbul (Turkey); Collaboration: Members of Urooncology Association, Turkey
2017-10-15
To assess the diagnostic accuracy of 68Ga-PSMA PET in predicting lymph node (LN) metastases in primary N staging in high-risk and very high-risk nonmetastatic prostate cancer in comparison with morphological imaging. This was a multicentre trial of the Society of Urologic Oncology in Turkey in conjunction with the Nuclear Medicine Department of Cerrahpasa School of Medicine, Istanbul University. Patients were accrued from eight centres. Patients with high-risk and very high-risk disease scheduled to undergo surgical treatment with extended LN dissection between July 2014 and October 2015 were included. Either MRI or CT was used for morphological imaging. PSMA PET/CT was performed and evaluated at a single centre. Sensitivity, specificity and accuracy were calculated for the detection of lymphatic metastases by PSMA PET/CT and morphological imaging. Kappa values were calculated to evaluate the correlation between the numbers of LN metastases detected by PSMA PET/CT and by histopathology. Data on 51 eligible patients are presented. The sensitivity, specificity and accuracy of PSMA PET in detecting LN metastases in the primary setting were 53%, 86% and 76%, and increased to 67%, 88% and 81% in the subgroup of patients with ≥15 LN removed. Kappa values for the correlation between imaging and pathology were 0.41 for PSMA PET and 0.18 for morphological imaging. PSMA PET/CT is superior to morphological imaging for the detection of metastatic LNs in patients with primary prostate cancer. Surgical dissection remains the gold standard for precise lymphatic staging. (orig.)
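The three figures reported above come from a standard 2×2 confusion table (a generic sketch with illustrative counts, not the study's patient-level data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table:
    tp/fn = diseased correctly/incorrectly classified,
    tn/fp = healthy correctly/incorrectly classified."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```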
Investigation on de-trapping mechanisms related to non-monotonic kink pattern in GaN HEMT devices
Directory of Open Access Journals (Sweden)
Chandan Sharma
2017-08-01
Full Text Available This article reports an experimental approach to analyze the kink effect phenomenon which is usually observed during the GaN high electron mobility transistor (HEMT) operation. De-trapping of charge carriers is one of the prominent reasons behind the kink effect. The commonly observed non-monotonic behavior of kink pattern is analyzed under two different device operating conditions and it is found that two different de-trapping mechanisms are responsible for a particular kink behavior. These different de-trapping mechanisms are investigated through a time delay analysis which shows the presence of traps with different time constants. Further voltage sweep and temperature analysis corroborates the finding that different de-trapping mechanisms play a role in kink behavior under different device operating conditions.
Investigation on de-trapping mechanisms related to non-monotonic kink pattern in GaN HEMT devices
Sharma, Chandan; Laishram, Robert; Amit; Rawal, Dipendra Singh; Vinayak, Seema; Singh, Rajendra
2017-08-01
This article reports an experimental approach to analyze the kink effect phenomenon which is usually observed during the GaN high electron mobility transistor (HEMT) operation. De-trapping of charge carriers is one of the prominent reasons behind the kink effect. The commonly observed non-monotonic behavior of kink pattern is analyzed under two different device operating conditions and it is found that two different de-trapping mechanisms are responsible for a particular kink behavior. These different de-trapping mechanisms are investigated through a time delay analysis which shows the presence of traps with different time constants. Further voltage sweep and temperature analysis corroborates the finding that different de-trapping mechanisms play a role in kink behavior under different device operating conditions.
Thermal effects on the enhanced ductility in non-monotonic uniaxial tension of DP780 steel sheet
Majidi, Omid; Barlat, Frederic; Korkolis, Yannis P.; Fu, Jiawei; Lee, Myoung-Gyu
2016-11-01
To understand material behavior during non-monotonic loading, uniaxial tension tests were conducted in three modes, namely monotonic loading, loading with periodic relaxation, and periodic loading-unloading-reloading, at different strain rates (0.001/s to 0.01/s). In this study, the temperature gradient that develops during each test and its contribution to the increased apparent ductility of DP780 steel sheets were considered. To assess the influence of temperature, isothermal uniaxial tension tests were also performed at three temperatures (298 K, 313 K and 328 K (25 °C, 40 °C and 55 °C)). A digital image correlation system coupled with infrared thermography was used in the experiments. The results show that the non-monotonic loading modes increased the apparent ductility of the specimens. It was also observed that, compared with monotonic loading, the temperature gradient became more uniform when non-monotonic loading was applied.
Directory of Open Access Journals (Sweden)
Jenifer L. Vaughan
2016-03-01
Objectives: This study aimed to evaluate the accuracy of the DM96 in a South African laboratory, with emphasis on its performance in samples collected from HIV-positive patients. Methods: A total of 149 samples submitted for a routine differential white cell count in 2012 and 2013 at the Chris Hani Baragwanath Academic Hospital in Johannesburg, South Africa were included, of which 79 (53.0%) were collected from HIV-positive patients. Results of DM96 analysis pre- and post-classification were compared with a manual differential white cell count, and the impact of HIV infection and other variables of interest was assessed. Results: Pre- and post-classification accuracies were similar to those reported in developed countries. Reclassification was required in 16% of cells, with particularly high misclassification rates for eosinophils (31.7%), blasts (33.7%) and basophils (93.5%). Multivariate analysis revealed a significant relationship between the number of misclassified cells and both the white cell count (p = 0.035) and the presence of malignant cells in the blood (p = 0.049), but not with any other variables analysed, including HIV status. Conclusion: The DM96 exhibited acceptable accuracy in this South African laboratory, and its accuracy was not impacted by HIV infection. However, as it does not eliminate the need for experienced morphologists, its cost may be unjustifiable in a resource-constrained setting.
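Per-class misclassification rates of the kind quoted above follow directly from paired automated/manual labels. A minimal sketch, using hypothetical labels and counts rather than the study's data:

```python
from collections import Counter

def misclassification_rates(pairs):
    """Per-class misclassification rate (%) from (automated_label,
    manual_label) pairs, taking the manual differential as the
    reference standard. Illustrative sketch; labels are hypothetical."""
    total = Counter()
    wrong = Counter()
    for auto, manual in pairs:
        total[manual] += 1
        if auto != manual:
            wrong[manual] += 1
    return {cls: 100.0 * wrong[cls] / total[cls] for cls in total}

# Hypothetical review: 100 neutrophils (5 mislabeled), 10 eosinophils (3 mislabeled)
pairs = ([("neutrophil", "neutrophil")] * 95 + [("lymphocyte", "neutrophil")] * 5
         + [("eosinophil", "eosinophil")] * 7 + [("neutrophil", "eosinophil")] * 3)
rates = misclassification_rates(pairs)
print(rates["neutrophil"], rates["eosinophil"])  # 5.0 30.0
```

Rarer cell classes (here, eosinophils) have small denominators, which is one reason their misclassification rates can look much worse than the overall reclassification figure.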
Sun, Kai; Han, Ruijuan; Han, Yang; Shi, Xuesen; Hu, Jiang; Lu, Bin
2018-02-28
To evaluate the diagnostic accuracy of combined computed tomography colonography (CTC) and dual-energy iodine map imaging for detecting colorectal masses using high-pitch dual-source CT, compared with optical colonoscopy (OC) and histopathologic findings. Twenty-eight consecutive patients were prospectively enrolled in this study. All patients underwent contrast-enhanced CTC acquisition in dual-energy mode, as well as OC and pathologic examination. The size of the space-occupying mass, the CT value after contrast enhancement, and the iodine value were measured and statistically compared. The sensitivity, specificity, accuracy rate, and positive and negative predictive values of dual-energy contrast-enhanced CTC were calculated and compared between conventional CTC and dual-energy iodine images. The iodine value of stool was significantly lower than that of colonic neoplasia (P dual-energy iodine map imaging was 95.6% (95% CI = 77.9%-99.2%). The specificity of the two methods was 42.8% (95% CI = 15.4%-93.5%) and 100% (95% CI = 47.9%-100%; P = 0.02), respectively. Compared with optical colonoscopy and histopathology, combined CTC and dual-energy iodine map imaging can distinguish stool from colonic neoplasia, provide an initial distinction between benign and malignant tumors, and improve the diagnostic accuracy of CTC for colorectal cancer screening.
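The accuracy measures reported above are standard functions of a 2x2 confusion table, and the confidence intervals on proportions can be approximated with a Wilson score interval. A self-contained sketch with hypothetical counts, not the study's data:

```python
import math

def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion (z = 1.96 for ~95%)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical counts: 100 diseased (90 detected), 100 healthy (80 correctly negative)
m = diagnostic_metrics(tp=90, fp=20, tn=80, fn=10)
print(m["sensitivity"], m["specificity"])  # 0.9 0.8
lo, hi = wilson_ci(90, 100)  # interval around the 90% sensitivity estimate
```

With only 28 patients, as in this study, such intervals are necessarily wide, which is consistent with the broad CIs quoted for both specificity estimates.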