WorldWideScience

Sample records for linear quantification range

  1. Colored Range Searching in Linear Space

    DEFF Research Database (Denmark)

    Grossi, Roberto; Vind, Søren Juhl

    2014-01-01

    In colored range searching, we are given a set of n colored points in d ≥ 2 dimensions to store, and want to support orthogonal range queries taking colors into account. In the colored range counting problem, a query must report the number of distinct colors found in the query range, while...... an answer to the colored range reporting problem must report the distinct colors in the query range. We give the first linear space data structure for both problems in two dimensions (d = 2) with o(n) worst case query time. We also give the first data structure obtaining almost-linear space usage and o...

  2. Dual-range linearized transimpedance amplifier system

    Science.gov (United States)

    Wessendorf, Kurt O.

    2010-11-02

    A transimpedance amplifier system is disclosed which simultaneously generates a low-gain output signal and a high-gain output signal from an input current signal using a single transimpedance amplifier having two different feedback loops with different amplification factors to generate two different output voltage signals. One of the feedback loops includes a resistor, and the other feedback loop includes another resistor in series with one or more diodes. The transimpedance amplifier system includes a signal linearizer to linearize one or both of the low- and high-gain output signals by scaling and adding the two output voltage signals from the transimpedance amplifier. The signal linearizer can be formed either as an analog device using one or two summing amplifiers, or alternately can be formed as a digital device using two analog-to-digital converters and a digital signal processor (e.g. a microprocessor or a computer).

  3. Bayesian uncertainty quantification in linear models for diffusion MRI.

    Science.gov (United States)

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
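
    The closed-form posterior mentioned in this abstract is the standard Bayesian treatment of a Gaussian linear model. The Python sketch below is a generic illustration with a made-up design matrix and noise level (not the authors' dMRI bases or data): it computes the posterior of the coefficients and propagates it to an affine-derived quantity.

      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative linear model y = X w + noise (X is NOT a dMRI basis here)
      n_obs, n_coef = 50, 4
      X = rng.normal(size=(n_obs, n_coef))
      w_true = np.array([1.0, -0.5, 0.3, 2.0])
      sigma = 0.1                                   # assumed known noise std
      y = X @ w_true + sigma * rng.normal(size=n_obs)

      # Gaussian prior w ~ N(0, tau^2 I); the posterior is Gaussian in closed form
      tau = 10.0
      prior_prec = np.eye(n_coef) / tau**2
      post_cov = np.linalg.inv(X.T @ X / sigma**2 + prior_prec)
      post_mean = post_cov @ (X.T @ y) / sigma**2

      # Any affine quantity q = a @ w + b is Gaussian with known mean and variance
      a, b = np.array([1.0, 1.0, 0.0, 0.0]), 0.0
      q_mean = a @ post_mean + b
      q_std = np.sqrt(a @ post_cov @ a)
      print(f"q = {q_mean:.3f} +/- {q_std:.3f} (posterior mean +/- std)")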

  4. Mappings with closed range and finite dimensional linear spaces

    International Nuclear Information System (INIS)

    Iyahen, S.O.

    1984-09-01

    This paper looks at two settings, each of continuous linear mappings of linear topological spaces. In one setting, the domain space is fixed while the range space varies over a class of linear topological spaces. In the second setting, the range space is fixed while the domain space similarly varies. The interest is in when the requirement that the mappings have a closed range implies that the domain or range space is finite dimensional. Positive results are obtained for metrizable spaces. (author)

  5. Note: A high dynamic range, linear response transimpedance amplifier.

    Science.gov (United States)

    Eckel, S; Sushkov, A O; Lamoreaux, S K

    2012-02-01

    We have built a high dynamic range (nine decade) transimpedance amplifier with a linear response. The amplifier uses junction-gate field effect transistors (JFETs) to switch between three different resistors in the feedback of a low input bias current operational amplifier. This allows for the creation of multiple outputs, each with a linear response and a different transimpedance gain. The overall bandwidth of the transimpedance amplifier is set by the bandwidth of the most sensitive range. For our application, we demonstrate a three-stage amplifier with transimpedance gains of approximately 10⁹ Ω, 3 × 10⁷ Ω, and 10⁴ Ω with a bandwidth of 100 Hz.
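
    One natural way to use the three simultaneously available outputs in software is to convert each back to a current through its own gain and keep the most sensitive channel that is not saturated. The Python sketch below illustrates that range-stitching idea; the gains come from the abstract, while the saturation voltage and headroom factor are assumptions, not details of the authors' amplifier.

      import numpy as np

      GAINS = np.array([1e9, 3e7, 1e4])   # transimpedance gains [V/A], from the abstract
      V_SAT = 10.0                        # assumed output saturation voltage [V]

      def current_from_outputs(v_outputs):
          """Pick the most sensitive (highest-gain) channel that is still in range."""
          v = np.asarray(v_outputs, dtype=float)
          for vi, gi in zip(v, GAINS):                 # GAINS is ordered high -> low
              if abs(vi) < 0.9 * V_SAT:                # 10% headroom below saturation
                  return vi / gi
          return v[-1] / GAINS[-1]                     # fall back to the coarsest range

      # Example: a 5 uA input saturates the two sensitive stages but not the 1e4 ohm one
      print(current_from_outputs([V_SAT, V_SAT, 5e-6 * 1e4]))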

  6. Performance of uncertainty quantification methodologies and linear solvers in cardiovascular simulations

    Science.gov (United States)

    Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison

    2017-11-01

    Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.
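
    As a concrete illustration of the solver/preconditioner pairings being compared, the Python/SciPy sketch below solves a sparse system with GMRES and an incomplete-LU preconditioner. The matrix is a toy one-dimensional operator standing in for a single linear solve inside an implicit flow step, not a system from the actual cardiovascular solver.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Toy sparse system standing in for one linear solve inside an implicit step
      n = 2000
      A = sp.diags([-1.0, 2.1, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      # Incomplete LU factorization used as a preconditioner for GMRES
      ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
      M = spla.LinearOperator(A.shape, matvec=ilu.solve)

      x, info = spla.gmres(A, b, M=M, restart=50, maxiter=500)
      print("converged" if info == 0 else f"info={info}",
            "residual:", np.linalg.norm(b - A @ x))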

  7. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    Science.gov (United States)

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L⁻¹. The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
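
    The weights in such a calibration are typically taken as the inverse of the signal variance at each concentration level. The Python sketch below shows a generic weighted linear calibration of this kind; the concentration levels and peak areas are made up for illustration and are not the paper's data.

      import numpy as np

      # Hypothetical calibration data: concentration levels and replicate peak areas
      conc = np.array([10., 25., 50., 100., 250., 500.])           # ng/L
      area = np.array([[  98,  103,  101],
                       [ 240,  255,  248],
                       [ 495,  520,  470],
                       [ 990, 1050,  960],
                       [2400, 2550, 2350],
                       [4900, 5200, 4750]], dtype=float)

      y = area.mean(axis=1)
      w = 1.0 / area.var(axis=1, ddof=1)       # weights = 1 / variance at each level

      # Weighted least squares fit of y = b0 + b1 * x
      X = np.column_stack([np.ones_like(conc), conc])
      W = np.diag(w)
      beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
      print("intercept, slope:", beta)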

  8. A Short-Range Distance Sensor with Exceptional Linearity

    Science.gov (United States)

    Simmons, Steven; Youngquist, Robert

    2013-01-01

    A sensor has been demonstrated that can measure distance over a total range of about 300 microns to an accuracy of about 0.1 nm (resolution of about 0.01 nm). This represents an exceptionally large dynamic range of operation - over 1,000,000. The sensor is optical in nature, and requires the attachment of a mirror to the object whose distance is being measured. This work resulted from actively developing a white light interferometric system to be used to measure the depths of defects in the Space Shuttle Orbiter windows. The concept was then applied to measuring distance and later expanded to include spectrometer calibration. In summary, broadband (i.e., white) light is launched into a Michelson interferometer, one mirror of which is fixed and one of which is attached to the object whose distance is to be measured. The light emerging from the interferometer has traveled one of two distances: either the distance to the fixed mirror and back, or the distance to the moving mirror and back. These two light beams mix and produce an interference pattern where some wavelengths interfere constructively and some destructively. Sending this light into a spectrometer allows this interference pattern to be analyzed, yielding the net distance difference between the two paths. The unique feature of this distance sensor is its ability to accurately measure distance over a dynamic range of more than one million, the ratio of its range (about 300 microns) to its accuracy (about 0.1 nanometer). Such a large linear operating range is rare and arises here because both amplitude and phase-matching algorithms contribute to the performance. The sensor is limited by the need to attach a mirror of some kind to the object being tracked, and by the fairly small total range, but the exceptional dynamic range should make it of interest.
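
    The distance readout comes from the period of the spectral interference fringes. The simplified Python sketch below recovers an optical path difference from an idealized fringe pattern using a plain FFT peak; the wavenumber grid and OPD are assumed values, and the amplitude- and phase-matching refinements that give the real sensor its sub-nanometer accuracy are left out.

      import numpy as np

      # Simulated white-light interferogram versus wavenumber k (assumed values)
      opd = 120e-6                                  # optical path difference to recover [m]
      k = np.linspace(7.0e6, 9.0e6, 4096)           # wavenumber grid [rad/m]
      spectrum = 1.0 + np.cos(opd * k)              # idealized fringe pattern

      # The fringe frequency in k is opd/(2*pi); estimate it from the FFT peak
      ac = spectrum - spectrum.mean()
      f = np.fft.rfftfreq(k.size, d=(k[1] - k[0]))  # cycles per unit k
      peak = np.argmax(np.abs(np.fft.rfft(ac)))
      opd_est = 2 * np.pi * f[peak]
      print(f"recovered OPD ~ {opd_est * 1e6:.1f} um (true {opd * 1e6:.1f} um)")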

  9. Linear response in the nonequilibrium zero range process

    International Nuclear Information System (INIS)

    Maes, Christian; Salazar, Alberto

    2014-01-01

    We explore a number of explicit response formulæ around the boundary driven zero range process to changes in the exit and entrance rates. In such a nonequilibrium regime kinetic (and not only thermodynamic) aspects make a difference in the response. Apart from a number of formal approaches, we illustrate a general decomposition of the linear response into entropic and frenetic contributions, the latter being realized from changes in the dynamical activity at the boundaries. In particular, in this way one obtains nonlinear modifications to the Green–Kubo relation. We end with some general remarks about the situation where the nonequilibrium response remains given by the (equilibrium) Kubo formula, such as for the density profile in the boundary driven Lorentz gas.

  10. Increasing Linear Dynamic Range of a CMOS Image Sensor

    Science.gov (United States)

    Pain, Bedabrata

    2007-01-01

    A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal oxide/semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would minimize the efficacy of the dynamic-range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain, and increases the pixel electron-handling capacity, thereby providing an extension of the dynamic range. By use of an array of comparators also at the bottom of the column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. Depending upon

  11. Characterising non-linear dynamics in nocturnal breathing patterns of healthy infants using recurrence quantification analysis.

    Science.gov (United States)

    Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M; Dakin, Carolyn

    2013-05-01

    Breathing dynamics vary between infant sleep states, and are likely to exhibit non-linear behaviour. This study applied the non-linear analytical tool recurrence quantification analysis (RQA) first to 400-breath-interval periods of REM and N-REM sleep, and then across the night using an overlapping moving window. The RQA variables differed between sleep states, with the REM radius 150% greater than the N-REM radius, and REM laminarity 79% greater than N-REM laminarity. RQA allowed the observation of temporal variations in non-linear breathing dynamics across a night's sleep at 30 s resolution, and provides a basis for quantifying changes in complex breathing dynamics with physiology and pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.
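
    For readers unfamiliar with RQA, the core computation is a thresholded distance (recurrence) matrix from which measures such as the recurrence rate and laminarity are read off. The Python sketch below is a bare-bones, unembedded illustration on a synthetic breath-interval series; it is not the authors' processing pipeline, and the radius and series are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)
      x = 0.8 + 0.1 * rng.normal(size=400)             # synthetic breath intervals [s]

      def rqa(x, radius, lmin=2):
          # Recurrence matrix: pairs of samples closer than `radius` are "recurrent"
          d = np.abs(x[:, None] - x[None, :])
          R = (d <= radius).astype(int)
          rr = R.mean()                                # recurrence rate

          # Laminarity: fraction of recurrent points lying in vertical lines >= lmin
          # (simplified: no embedding, line of identity included)
          in_lines = 0
          for col in R.T:
              run = 0
              for v in np.append(col, 0):              # trailing 0 flushes the last run
                  if v:
                      run += 1
                  else:
                      if run >= lmin:
                          in_lines += run
                      run = 0
          lam = in_lines / R.sum() if R.sum() else 0.0
          return rr, lam

      print(rqa(x, radius=0.05))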

  12. The Role of Data Range in Linear Regression

    Science.gov (United States)

    da Silva, M. A. Salgueiro; Seixas, T. M.

    2017-01-01

    Measuring one physical quantity as a function of another often requires making some choices prior to the measurement process. Two of these choices are: the data range where measurements should focus and the number (n) of data points to acquire in the chosen data range. Here, we consider data range as the interval of variation of the independent…

  13. SiGe HBT linear-in-dB high dynamic range RF envelope detectors and wideband high linearity amplifiers

    OpenAIRE

    Pan, Hsuan-yu

    2010-01-01

    This research work aims at exploiting SiGe HBT technologies in high dynamic range wideband RF linear-in-dB envelope detectors and linear amplifiers. First, an improved all-npn broadband highly linear SiGe HBT differential amplifier is presented based on a variation of Caprio's Quad. A broadband linear amplifier with 46 dBm OIP₃ at 20 MHz, 34 dBm OIP₃ at 1 GHz, a 6 dB noise figure and 10.3 dBm P₁dB is demonstrated. Second, an improved exact dynamic model of a fast-settling linear-in-dB Automatic Gain...

  14. Design considerations of a linear generator for a range extender application

    Directory of Open Access Journals (Sweden)

    Seo Un-Jae

    2015-12-01

    The free piston linear generator is a new range extender concept for application in a full electric vehicle. Free piston engine driven linear generators can achieve high efficiency at part and full load, which makes them suitable for the range extender application. This paper presents requirements for designing a linear generator, deduced from a basic analysis of a free piston linear generator.

  15. Free piston linear generator in comparison to other range-extender technologies

    OpenAIRE

    Virsik, Roman; Heron, Alex

    2013-01-01

    The free piston linear generator is a new range-extender technology. It converts chemical energy into electrical energy by means of a combustion process and linear generator. Thereby the technology aims to have better properties than other range extenders. Therefore this publication deals with the explanation of the concept and the characteristics of a free piston linear generator and a comparison to other technologies. In order to compare the range extender systems, fuel cells, micro gas tur...

  16. A methodology for direct quantification of over-ranging length in helical computed tomography with real-time dosimetry.

    Science.gov (United States)

    Tien, Christopher J; Winslow, James F; Hintenlang, David E

    2011-01-31

    In helical computed tomography (CT), reconstruction information from volumes adjacent to the clinical volume of interest (VOI) is required for proper reconstruction. Previous studies have relied upon either operator console readings or indirect extrapolation of measurements in order to determine the over-ranging length of a scan. This paper presents a methodology for the direct quantification of over-ranging dose contributions using real-time dosimetry. A Siemens SOMATOM Sensation 16 multislice helical CT scanner is used with a novel real-time "point" fiber-optic dosimeter system with 10 ms temporal resolution to measure over-ranging length, which is also expressed in dose-length-product (DLP). Film was used to benchmark the exact length of over-ranging. Over-ranging length varied from 4.38 cm at a pitch of 0.5 to 6.72 cm at a pitch of 1.5, which corresponds to a DLP of 131 to 202 mGy-cm. The dose-extrapolation method of Van der Molen et al. yielded results within 3%, while the console reading method of Tzedakis et al. yielded consistently larger over-ranging lengths. From film measurements, it was determined that Tzedakis et al. overestimated over-ranging lengths by one-half of the beam collimation width. Over-ranging length measured as a function of reconstruction slice thicknesses produced two linear regions similar to previous publications. Over-ranging is quantified with both absolute length and DLP, which contributes about 60 mGy-cm or about 10% of DLP for a routine abdominal scan. This paper presents a direct physical measurement of over-ranging length within 10% of previous methodologies. Current uncertainties are less than 1%, in comparison with 5% in other methodologies. Clinical implementation can be simplified by using only one dosimeter if codependence with console readings is acceptable, with an uncertainty of 1.1%. This methodology will be applied to different vendors, models, and postprocessing methods--which have been shown to produce over-ranging lengths

  17. Quantification of the least limiting water range in an oxisol using two methodological strategies

    Directory of Open Access Journals (Sweden)

    Wagner Henrique Moreira

    2014-12-01

    The least limiting water range (LLWR) has been used as an indicator of soil physical quality as it represents, in a single parameter, the soil physical properties directly linked to plant growth, with the exception of temperature. The usual procedure for obtaining the LLWR involves determination of the water retention curve (WRC) and the soil resistance to penetration curve (SRC) in soil samples with undisturbed structure in the laboratory. Determination of the WRC and SRC using field measurements (in situ) is preferable, but requires appropriate instrumentation. The objective of this study was to determine the LLWR from the data collected for determination of WRC and SRC in situ using portable electronic instruments, and to compare those determinations with the ones made in the laboratory. Samples were taken from the 0.0-0.1 m layer of a Latossolo Vermelho distrófico (Oxisol). Two methods were used for quantification of the LLWR: the traditional, with measurements made in soil samples with undisturbed structure; and in situ, with measurements of water content (θ), soil water potential (Ψ), and soil resistance to penetration (SR) through the use of sensors. The in situ measurements of θ, Ψ and SR were taken over a period of four days of soil drying. At the same time, samples with undisturbed structure were collected for determination of bulk density (BD). Due to the limitations of measurement of Ψ by tensiometer, additional determinations of θ were made with a psychrometer (in the laboratory) at the Ψ of -1500 kPa. The results show that it is possible to determine the LLWR by the θ, Ψ and SR measurements using the suggested approach and instrumentation. The quality of fit of the SRC was similar in both strategies. In contrast, the θ and Ψ in situ measurements, associated with those measured with a psychrometer, produced a better WRC description. The estimates of the LLWR were similar in both methodological strategies. The quantification of

  18. On the use of small integrating spheres to improve the linearity range of RASNIKS systems

    International Nuclear Information System (INIS)

    Alberdi, J.; Burgos, C.; Ferrando, A.; Molinero, A.; Schvachkin, V.; Figueroa, C.F.; Matorras, F.; Rodrigo, T.; Ruiz, A.; Vila, I.

    1997-10-01

    Rasniks elements will be used in the CMS alignment system. The large displacements of the different subdetectors expected in the CMS experiment demand a large linearity response from this system. By the use of a small integrating sphere we have optimized the source definition such that a factor of three improvement in the linearity range with respect to conventional Rasniks configurations is obtained. The response range reached coincides with the maximum one can obtain with the components used in the test.

  19. Improvement of linear reactivity methods and application to long range fuel management

    International Nuclear Information System (INIS)

    Woehlke, R.A.; Quan, B.L.

    1982-01-01

    The original development of the linear reactivity theory assumes flat burnup, batch by batch. The validity of this assumption is explored using multicycle burnup data generated with a detailed 3-D SIMULATE model. The results show that the linear reactivity method can be improved by correcting for batchwise power sharing. The application of linear reactivity to long range fuel management is demonstrated in several examples. Correcting for batchwise power sharing improves the accuracy of the analysis. However, with regard to the sensitivity of fuel cost to changes in various parameters, the corrected and uncorrected linear reactivity theories give remarkably similar results
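
    For context, the uncorrected linear reactivity model assumes that reactivity falls linearly with burnup and that all batches share power equally; under those assumptions the discharge burnup of an n-batch core follows in a few lines. The Python sketch below shows that textbook form with illustrative numbers; it does not include the batchwise power-sharing correction the paper introduces.

      def discharge_burnup(rho_0, slope, n_batches):
          """Uncorrected linear reactivity model with equal batchwise power sharing.

          rho(B) = rho_0 - slope * B for each batch; end of cycle occurs when the
          core-average reactivity of the n batches reaches zero.
          """
          b1 = rho_0 / slope                    # burnup at which a single batch reaches rho = 0
          cycle_burnup = 2.0 * b1 / (n_batches + 1)
          return n_batches * cycle_burnup       # burnup accumulated by a discharged batch

      # Illustrative feed giving rho_0 = 0.25 and a slope of 0.005 per GWd/t (assumed values)
      for n in (1, 2, 3, 4):
          print(n, "batches ->", round(discharge_burnup(0.25, 0.005, n), 1), "GWd/t")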

  20. Linearization of the Bradford protein assay to application in cow milk proteins quantification by UV-Vis spectrophotometry method.

    OpenAIRE

    SANTOS, A. S. de O. dos; COSTA, F. F.; ESTEVES, W. T.; BRITO, M. A. V. P. e; FURTADO, M. A. M.; MARTINS, M. F.

    2015-01-01

    Reliable methods for the determination and quantification of total protein in food are essential to ensure the quality and safety of food trade. The objective of this study was to evaluate the linearity of calibration curves obtained from different proteins (bovine serum albumin (BSA), α-LA, β-LG, and αs-, β- and κ-casein) with the Bradford reagent. Commercial UHT skimmed bovine milk was analyzed for the determination of total protein using the Bradford method by reading at 595 nm. The determinatio...

  1. Internal friction and linear expansion coefficient in zirconium and cobalt within the range of phase transitions

    International Nuclear Information System (INIS)

    Boyarskij, S.V.

    1986-01-01

    Experimental results are presented for the internal friction and linear expansion coefficient of zirconium and cobalt in the temperature range from 440 K to the first-order phase transition point (1138 K for Zr and 706 K for Co). Anomalous changes of the internal friction and linear expansion coefficient in the phase transition region are found. Theoretical considerations are given to explain the sharp decrease of the internal friction as the temperature approaches the phase transition point.

  2. Ultra-low-pressure sputtering to improve exchange bias and tune linear ranges in spin valves

    Energy Technology Data Exchange (ETDEWEB)

    Tang, XiaoLi, E-mail: tangtang1227@163.com; Yu, You; Liu, Ru; Su, Hua; Zhang, HuaiWu; Zhong, ZhiYong; Jing, YuLan

    2017-05-01

    A series of CoFe/IrMn exchange bilayers was grown by DC sputtering at ultra-low argon pressures ranging from 0.008 to 0.1 Pa, one to two orders of magnitude lower than normal sputtering pressures. Results revealed that the exchange bias increased from 140 to 250 Oe in CoFe(10 nm)/IrMn(15 nm) bilayers of fixed thickness because of the improved crystalline structure and morphological uniformity of the films. Since ferromagnetic/antiferromagnetic (FM/AF) bilayers are widely used as detection layers in linear magnetic sensors, the varying exchange bias can achieve a tunable linear range in a crossed-pinning spin valve. The linear range could be adjusted from ±80 Oe to ±150 Oe on the basis of the giant magnetoresistance responses. This therefore provides a simple way to tune the operating range of magnetic field sensors. Highlights: • An increased exchange bias was achieved in bilayers by ultra-low-pressure sputtering. • Low void density and a smooth surface were achieved at low pressure. • The varying exchange bias enabled a tunable linear range in the spin valve.

  3. Linearity improvement on wide-range log signal of neutron measurement system for HANARO

    International Nuclear Information System (INIS)

    Kim, Young-Ki; Tuetken, Jeffrey S.

    1998-01-01

    This paper discusses engineering activities for improving the linearity characteristics of the Log Power signal from the neutron measurement system for HANARO. This neutron measurement system uses a fission chamber based detector covering a 10.3-decade-wide range from 10⁻⁸ % full power (FP) up to 200 %FP. The Log Power signal is designed to control the reactor at low power levels, where most of the reactor physics tests are carried out. Therefore, the linearity characteristics of the Log Power signal are the major factor for accurate reactor power control. During the commissioning of the neutron measurement system, it was found that the linearity characteristics of the Log Power signal, especially near 10⁻² %FP, were not accurate enough for controlling the reactor during physics testing. Analysis of the system linearity data measured directly with the reactor operating determined that the system was not operating per the design characteristics established from previous installations. The linearity data, which were taken as the reactor was increased in power, were sent to the manufacturer's engineering group, and follow-up measures based on the analysis were then fed back to the field. Through step-by-step trouble-shooting activities, which included minor circuit modifications and alignment procedure changes, the linearity characteristics have been successfully improved and now exceed minimum performance requirements. This paper discusses the trouble-shooting techniques applied, the changes in the linearity characteristics, special circumstances in the HANARO application and the final resolution. (author)

  4. Flexible Ferroelectric Sensors with Ultrahigh Pressure Sensitivity and Linear Response over Exceptionally Broad Pressure Range.

    Science.gov (United States)

    Lee, Youngoh; Park, Jonghwa; Cho, Soowon; Shin, Young-Eun; Lee, Hochan; Kim, Jinyoung; Myoung, Jinyoung; Cho, Seungse; Kang, Saewon; Baig, Chunggi; Ko, Hyunhyub

    2018-04-24

    Flexible pressure sensors with a high sensitivity over a broad linear range can simplify wearable sensing systems without additional signal processing for the linear output, enabling device miniaturization and low power consumption. Here, we demonstrate a flexible ferroelectric sensor with ultrahigh pressure sensitivity and linear response over an exceptionally broad pressure range based on the material and structural design of ferroelectric composites with a multilayer interlocked microdome geometry. Due to the stress concentration between interlocked microdome arrays and increased contact area in the multilayer design, the flexible ferroelectric sensors could perceive static/dynamic pressure with high sensitivity (47.7 kPa⁻¹, 1.3 Pa minimum detection). In addition, efficient stress distribution between stacked multilayers enables linear sensing over an exceptionally broad pressure range (0.0013-353 kPa) with fast response time (20 ms) and high reliability over 5000 repetitive cycles even at an extremely high pressure of 272 kPa. Our sensor can be used to monitor diverse stimuli from a low to a high pressure range including weak gas flow, acoustic sound, wrist pulse pressure, respiration, and foot pressure with a single device.

  5. Application of range-test in multiple linear regression analysis in ...

    African Journals Online (AJOL)

    Application of range-test in multiple linear regression analysis in the presence of outliers is studied in this paper. First, the plot of the explanatory variables (i.e. Administration, Social/Commercial, Economic services and Transfer) on the dependent variable (i.e. GDP) was done to identify the statistical trend over the years.

  6. Novel birefringence interrogation for Sagnac loop interferometer sensor with unlimited linear measurement range.

    Science.gov (United States)

    He, Haijun; Shao, Liyang; Qian, Heng; Zhang, Xinpu; Liang, Jiawei; Luo, Bin; Pan, Wei; Yan, Lianshan

    2017-03-20

    A novel demodulation method for Sagnac loop interferometer based sensors has been proposed and demonstrated, which unwraps the phase changes through birefringence interrogation. A temperature sensor based on a Sagnac loop interferometer was used to verify the feasibility of the proposed method. Several tests over a 40 °C temperature range were accomplished with a linearity of 0.9996 over the full range. The proposed scheme is universal for all Sagnac loop interferometer based sensors, and it has an unlimited linear measurement range, surpassing the conventional demodulation method based on peak/dip tracing. Furthermore, the influence of the wavelength sampling interval and wavelength span on the demodulation error is discussed. The proposed interrogation method is of great significance for Sagnac loop interferometer sensors and might greatly enhance the usability of this type of sensor in practical applications.
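
    The key to the unlimited linear range is that the interferometric phase is unwrapped rather than read from a single dip position, which wraps every free spectral range. The Python fragment below is a generic illustration of that unwrapping step on synthetic data; it is not the authors' birefringence interrogation algorithm.

      import numpy as np

      # Synthetic example: the true phase grows by several times 2*pi over a temperature ramp
      temperature = np.linspace(20.0, 60.0, 200)                 # degC
      true_phase = 0.8 * (temperature - 20.0)                    # rad, exceeds 2*pi repeatedly

      wrapped = np.angle(np.exp(1j * true_phase))                # what a single-fringe readout sees
      unwrapped = np.unwrap(wrapped)                             # stitched back to a monotonic phase

      # The unwrapped phase stays linear in temperature over the full range
      coeffs = np.polyfit(temperature, unwrapped, 1)
      print("recovered slope [rad/degC]:", round(coeffs[0], 3))  # ~0.8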

  7. Simultaneous identification and quantification of tetrodotoxin in fresh pufferfish and pufferfish-based products using immunoaffinity columns and liquid chromatography/quadrupole-linear ion trap mass spectrometry

    Science.gov (United States)

    Guo, Mengmeng; Wu, Haiyan; Jiang, Tao; Tan, Zhijun; Zhao, Chunxia; Zheng, Guanchao; Li, Zhaoxin; Zhai, Yuxiu

    2017-07-01

    In this study, we established a comprehensive method for simultaneous identification and quantification of tetrodotoxin (TTX) in fresh pufferfish tissues and pufferfish-based products using liquid chromatography/quadrupole-linear ion trap mass spectrometry (LC-QqLIT-MS). TTX was extracted by 1% acetic acid-methanol, and most of the lipids were then removed by freezing lipid precipitation, followed by purification and concentration using immunoaffinity columns (IACs). Matrix effects were substantially reduced due to the high specificity of the IACs, and thus, background interference was avoided. Quantitation analysis was therefore performed using an external calibration curve with standards prepared in mobile phase. The method was evaluated by fortifying samples at 1, 10, and 100 ng/g, respectively, and the recoveries ranged from 75.8%-107%, with a relative standard deviation of less than 15%. The TTX calibration curves were linear over the range of 1-1 000 μg/L, with a detection limit of 0.3 ng/g and a quantification limit of 1 ng/g. Using this method, samples can be further analyzed using an information-dependent acquisition (IDA) experiment, in the positive mode, from a single liquid chromatography-tandem mass spectrometry injection, which can provide an extra level of confirmation by matching the full product ion spectra acquired for a standard sample with those from an enhanced product ion (EPI) library. The scheduled multiple reaction monitoring method enabled TTX to be screened for, and TTX was positively identified using the IDA and EPI spectra. This method was successfully applied to analyze a total of 206 samples of fresh pufferfish tissues and pufferfish-based products. The results from this study show that the proposed method can be used to quantify and identify TTX in a single run with excellent sensitivity and reproducibility, and is suitable for the analysis of complex matrix pufferfish samples.

  8. Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.

    Science.gov (United States)

    Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A

    2010-08-10

    Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This mitigates the need to measure the system's response or calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).
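
    For context, an AMCW range camera typically forms the phase from four samples of the correlation waveform and converts it to distance, and it is exactly this arctangent step that aliased odd harmonics perturb. The Python sketch below uses one common four-bucket convention with an assumed modulation frequency; it illustrates the baseline computation, not the paper's harmonic-cancelling sampling scheme.

      import numpy as np

      C = 299_792_458.0          # speed of light [m/s]
      F_MOD = 30e6               # modulation frequency [Hz] (assumed)

      def range_from_samples(a0, a1, a2, a3, f_mod=F_MOD):
          """Four correlation samples taken 90 degrees apart -> phase -> distance."""
          phase = np.arctan2(a1 - a3, a0 - a2)      # one common bucket convention
          phase = np.mod(phase, 2 * np.pi)          # keep phase in [0, 2*pi)
          return C * phase / (4 * np.pi * f_mod)    # round-trip phase -> one-way distance

      # Ideal sinusoidal correlation samples for a target at 2.0 m
      true_phase = 4 * np.pi * F_MOD * 2.0 / C
      samples = [np.cos(true_phase - i * np.pi / 2) for i in range(4)]
      print(range_from_samples(*samples))           # ~2.0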

  9. Full-range k-domain linearization in spectral-domain optical coherence tomography.

    Science.gov (United States)

    Jeon, Mansik; Kim, Jeehyun; Jung, Unsang; Lee, Changho; Jung, Woonggyu; Boppart, Stephen A

    2011-03-10

    A full-bandwidth k-domain linearization method for spectral-domain optical coherence tomography (SD-OCT) is demonstrated. The method uses information of the wavenumber-pixel-position provided by a translating-slit-based wavelength filter. For calibration purposes, the filter is placed either after a broadband source or at the end of the sample path, and the filtered spectrum with a narrowed line width (∼0.5 nm) is incident on a line-scan camera in the detection path. The wavelength-swept spectra are co-registered with the pixel positions according to their central wavelengths, which can be automatically measured with an optical spectrum analyzer. For imaging, the method does not require a filter or a software recalibration algorithm; it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. The accuracy of k-linearization is maximized by increasing the k-linearization order, which is known to be a crucial parameter for maintaining a narrow point-spread function (PSF) width at increasing depths. The broadening effect is studied by changing the k-linearization order by undersampling to search for the optimal value. The system provides more position information, surpassing the optimum without compromising the imaging speed. The proposed full-range k-domain linearization method can be applied to SD-OCT systems to simplify their hardware/software, increase their speed, and improve the axial image resolution. The experimentally measured width of PSF in air has an FWHM of 8 μm at the edge of the axial measurement range. At an imaging depth of 2.5 mm, the sensitivity of the full-range calibration case drops less than 10 dB compared with the uncompensated case.

  10. Facile Quantification and Identification Techniques for Reducing Gases over a Wide Concentration Range Using a MOS Sensor in Temperature-Cycled Operation

    Directory of Open Access Journals (Sweden)

    Caroline Schultealbert

    2018-03-01

    Dedicated methods for quantification and identification of reducing gases based on model-based temperature-cycled operation (TCO) using a single commercial MOS gas sensor are presented. During high temperature phases the sensor surface is highly oxidized, yielding a significant sensitivity increase after switching to lower temperatures (differential surface reduction, DSR). For low concentrations, the slope of the logarithmic conductance during this low-temperature phase is evaluated and can directly be used for quantification. For higher concentrations, the time constant for reaching a stable conductance during the same low-temperature phase is evaluated. Both signals represent the reaction rate of the reducing gas on the strongly oxidized surface at this low temperature and provide a linear calibration curve, which is exceptional for MOS sensors. By determining these reaction rates on different low-temperature plateaus and applying pattern recognition, the resulting footprint can be used for identification of different gases. All methods are tested over a wide concentration range from 10 ppb to 100 ppm (4 orders of magnitude) for four different reducing gases (CO, H₂, ammonia and benzene) using randomized gas exposures.

  11. Facile Quantification and Identification Techniques for Reducing Gases over a Wide Concentration Range Using a MOS Sensor in Temperature-Cycled Operation.

    Science.gov (United States)

    Schultealbert, Caroline; Baur, Tobias; Schütze, Andreas; Sauerwald, Tilman

    2018-03-01

    Dedicated methods for quantification and identification of reducing gases based on model-based temperature-cycled operation (TCO) using a single commercial MOS gas sensor are presented. During high temperature phases the sensor surface is highly oxidized, yielding a significant sensitivity increase after switching to lower temperatures (differential surface reduction, DSR). For low concentrations, the slope of the logarithmic conductance during this low-temperature phase is evaluated and can directly be used for quantification. For higher concentrations, the time constant for reaching a stable conductance during the same low-temperature phase is evaluated. Both signals represent the reaction rate of the reducing gas on the strongly oxidized surface at this low temperature and provide a linear calibration curve, which is exceptional for MOS sensors. By determining these reaction rates on different low-temperature plateaus and applying pattern recognition, the resulting footprint can be used for identification of different gases. All methods are tested over a wide concentration range from 10 ppb to 100 ppm (4 orders of magnitude) for four different reducing gases (CO, H₂, ammonia and benzene) using randomized gas exposures.
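
    A minimal Python illustration of the low-concentration branch of this scheme is given below: the slope of the logarithmic conductance during the low-temperature phase is fitted and mapped to concentration through a linear calibration. The conductance traces and calibration factor are synthetic assumptions, not sensor data.

      import numpy as np

      def log_conductance_slope(t, g):
          """Slope of ln(G) versus time during the low-temperature phase."""
          return np.polyfit(t, np.log(g), 1)[0]

      # Synthetic low-temperature conductance traces: the reaction rate (and hence the
      # slope of ln G) is assumed proportional to the reducing-gas concentration.
      t = np.linspace(0.0, 10.0, 100)                     # s
      calib_slope_per_ppb = 2.0e-3                        # assumed calibration factor [1/(s*ppb)]

      for conc_ppb in (10, 50, 200):
          g = 1e-6 * np.exp(calib_slope_per_ppb * conc_ppb * t)   # ideal exponential rise
          slope = log_conductance_slope(t, g)
          print(conc_ppb, "ppb ->", round(slope / calib_slope_per_ppb, 1), "ppb (recovered)")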

  12. LINEARIZATION OF THE BRADFORD PROTEIN ASSAY TO APPLICATION IN COW MILK PROTEINS QUANTIFICATION BY UV-Vis SPECTROPHOTOMETRY METHOD

    Directory of Open Access Journals (Sweden)

    Alessa Siqueira de Oliveira dos Santos

    2015-01-01

    Reliable methods for the determination and quantification of total protein in food are essential to ensure the quality and safety of food trade. The objective of this study was to evaluate the linearity of calibration curves obtained from different proteins (bovine serum albumin (BSA), α-LA, β-LG, and the caseins (CN): αs-, β- and κ-CN) with the Bradford reagent. Commercial UHT skimmed bovine milk was analyzed for the determination of total protein using the Bradford method by reading at 595 nm. The determination of the concentrations of total milk protein was achieved by linear regression. The Bradford method showed a high sensitivity for the determination of total proteins in bovine milk at a 1:25 dilution, with values close to those obtained by the Kjeldahl method. The results showed that the calibration curves of the standard proteins β-CN and BSA had the best linearity, with less variation in the absorbance measurements, for the determination of the total protein of milk.

  13. Rapid Quantification of Four Anthocyanins in Red Grape Wine by Hydrophilic Interaction Liquid Chromatography/Triple Quadrupole Linear Ion Trap Mass Spectrometry.

    Science.gov (United States)

    Sun, Yongming; Xia, Biqi; Chen, Xiangzhun; Duanmu, Chuansong; Li, Denghao; Han, Chao

    2015-01-01

    The identification and quantification of four anthocyanins (cyanidin-3-O-glucoside, peonidin-3-O-glucoside, delphinidin-3-O-glucoside, and malvidin-3-O-glucoside) in red grape wine were carried out by hydrophilic interaction liquid chromatography/triple quadrupole linear ion trap MS (HILIC/QTrap-MS/MS). Samples were diluted directly and separated on a Merck ZIC HILIC column with 20 mM ammonium acetate solution-acetonitrile mobile phase. Quantitative data acquisition was carried out in the multiple reaction monitoring mode. Additional identification and confirmation of target compounds were performed using the enhanced product ion mode of the linear ion trap. The LOQs were in the range 0.05-1.0 ng/mL. The average recoveries were in the range 94.6 to 104.5%. The HILIC/QTrap-MS/MS platform offers the best sensitivity and specificity for characterization and quantitative determination of the four anthocyanins in red grape wines and fulfills the quality criteria for routine laboratory application.

  14. A high linearity current mode multiplier/divider with a wide dynamic range

    International Nuclear Information System (INIS)

    Liao Pengfei; Luo Ping; Zhang Bo; Li Zhaoji

    2012-01-01

    A high linearity current mode multiplier/divider (CMM/D) with a wide dynamic range is presented. The proposed CMM/D is based on the voltage-current characteristic of the diode, and thus a wide dynamic range is achieved. In addition, high linearity is achieved because high accuracy current mirrors are adopted and the output current is insensitive to the temperature and device parameters of the fabrication process. Furthermore, no extra bias current is required for any of the input signals, and thus power saving is realized. With proper selection of the input terminal, the proposed circuit can perform as a multifunction circuit operating as either a multiplier or a divider, without changing its topology. The proposed circuit is implemented in a 0.25 μm BCD process and the chip area is 0.26 × 0.24 mm². The simulation and measurement results show that the maximum static linearity error is ±1.8% and the total harmonic distortion is 0.4% while the input current ranges from 0 to 200 μA. (semiconductor integrated circuits)

  15. Calculation of elastic-plastic strain ranges for fatigue analysis based on linear elastic stresses

    International Nuclear Information System (INIS)

    Sauer, G.

    1998-01-01

    Fatigue analysis requires that the maximum strain ranges be known. These strain ranges are generally computed from linear elastic analysis. The elastic strain ranges are enhanced by a factor Ke to obtain the total elastic-plastic strain range. The reliability of the fatigue analysis depends on the quality of this factor. Formulae for calculating the Ke factor are proposed. A beam is introduced as a computational model for determining the elastic-plastic strains. The beam is loaded by the elastic stresses of the real structure. The elastic-plastic strains of the beam are compared with the beam's elastic strains. This comparison furnishes explicit expressions for the Ke factor. The Ke factor is tested by means of seven examples. (orig.)

  16. Long-range correlation in synchronization and syncopation tapping: a linear phase correction model.

    Directory of Open Access Journals (Sweden)

    Didier Delignières

    We propose in this paper a model accounting for the increase in long-range correlations observed in asynchrony series in syncopation tapping, as compared with synchronization tapping. Our model is an extension of the linear phase correction model for synchronization tapping. We suppose that the timekeeper represents a fractal source in the system, and that a process of estimation of the half-period of the metronome, obeying random-walk dynamics, combines with the linear phase correction process. Comparing experimental and simulated series, we show that our model accounts for the experimentally observed pattern of serial dependence. This model completes previous modeling solutions proposed for self-paced and synchronization tapping, providing a unifying framework for event-based timing.
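
    A bare-bones Python simulation of the linear phase correction core that the authors extend is shown below. The timekeeper here is plain white noise and the parameter values are arbitrary; the paper's model instead uses a fractal timekeeper and adds a random-walk estimate of the metronome half-period.

      import numpy as np

      rng = np.random.default_rng(42)

      def simulate_asynchronies(n_taps=500, period=500.0, alpha=0.3,
                                sd_timekeeper=20.0, sd_motor=8.0):
          """Linear phase correction model for synchronization tapping (ms units).

          A[n+1] = (1 - alpha) * A[n] + T[n] - period + motor-noise difference,
          where T[n] is the (here, white-noise) timekeeper interval.
          """
          A = np.zeros(n_taps)
          motor = sd_motor * rng.normal(size=n_taps + 1)
          for n in range(n_taps - 1):
              T_n = period + sd_timekeeper * rng.normal()
              A[n + 1] = (1 - alpha) * A[n] + T_n - period + motor[n + 1] - motor[n]
          return A

      asyn = simulate_asynchronies()
      print("mean asynchrony:", round(asyn.mean(), 2), "ms,  sd:", round(asyn.std(), 2), "ms")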

  17. Quantification of endocrine disruptors and pesticides in water by gas chromatography-tandem mass spectrometry. Method validation using weighted linear regression schemes.

    Science.gov (United States)

    Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P

    2010-10-22

    A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticides analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for analytical data, weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. A high sensitive 66 dB linear dynamic range receiver for 3-D laser radar

    Science.gov (United States)

    Ma, Rui; Zheng, Hao; Zhu, Zhangming

    2017-08-01

    This study presents a CMOS receiver chip realized in 0.18 μm standard CMOS technology and intended for high precision 3-D laser radar. The chip includes an adjustable gain transimpedance pre-amplifier, a post-amplifier and two timing comparators. An additional feedback is employed in the regulated cascode transimpedance amplifier to decrease the input impedance, and a variable gain transimpedance amplifier controlled by digital switches and analog multiplexer is utilized to realize four gain modes, extending the input dynamic range. The measurement shows that the highest transimpedance of the channel is 50 kΩ, the uncompensated walk error is 1.44 ns in a wide linear dynamic range of 66 dB (1:2000), and the input referred noise current is 2.3 pA/√Hz (rms), resulting in a very low detectable input current of 1 μA with SNR = 5.

  19. Design of a High Linearity Four-Quadrant Analog Multiplier in Wideband Frequency Range

    Directory of Open Access Journals (Sweden)

    Abdul kareem Mokif Obais

    2017-05-01

    In this paper, a voltage mode four quadrant analog multiplier for the wideband frequency range is designed using a wideband operational amplifier (OPAMP) and squaring circuits. The wideband OPAMP is designed using 10 identical NMOS transistors and operated with supply voltages of ±12 V. Two NMOS transistors and two wideband OPAMPs are utilized in the design of the proposed squaring circuit. All the NMOS transistors are based on 0.35 µm NMOS technology. The multiplier has input and output voltage ranges of ±10 V, a high linearity range from -10 V to +10 V, and a cutoff frequency of about 5 GHz. The proposed multiplier is designed in PSpice in OrCAD 16.6.

  20. High Dynamic Range RF Front End with Noise Cancellation and Linearization for WiMAX Receivers

    Directory of Open Access Journals (Sweden)

    J.-M. Wu

    2012-06-01

    This research deals with verification of the high dynamic range of a heterodyne radio frequency (RF) front end. A 2.6 GHz RF front end is designed and implemented in a hybrid microwave integrated circuit (HMIC) for worldwide interoperability for microwave access (WiMAX) receivers. The heterodyne RF front end consists of a low-noise amplifier (LNA) with noise cancellation, an RF bandpass filter (BPF), a downconverter with linearization, and an intermediate frequency (IF) BPF. A noise canceling technique used in the low-noise amplifier eliminates thermal noise and thereby reduces the noise figure (NF) of the RF front end by 0.9 dB. Use of a downconverter with a diode linearizer also compensates for gain compression, which increases the input-referred third-order intercept point (IIP3) of the RF front end by 4.3 dB. The proposed method substantially increases the spurious-free dynamic range (DRf) of the RF front end by 3.5 dB.

  1. A Wide Linearity Range Method for the Determination of Lenalidomide in Plasma by High-Performance Liquid Chromatography: Application to Pharmacokinetic Studies.

    Science.gov (United States)

    Guglieri-López, Beatriz; Pérez-Pitarch, Alejandro; Martinez-Gómez, Maria Amparo; Porta-Oltra, Begoña; Climente-Martí, Mónica; Merino-Sanjuán, Matilde

    2016-12-01

    A wide linearity range analytical method for the determination of lenalidomide in patients with multiple myeloma for pharmacokinetic studies is required. Plasma samples were ultrasonicated for protein precipitation. A solid-phase extraction was performed. The eluted samples were evaporated to dryness under vacuum, and the solid obtained was diluted and injected into the high-performance liquid chromatography (HPLC) system. Separation of lenalidomide was performed on an Xterra RP C18 (250 mm length × 4.6 mm i.d., 5 µm) using a mobile phase consisting of phosphate buffer/acetonitrile (85:15, v/v, pH 3.2) at a flow rate of 0.5 mL·min⁻¹. The samples were monitored at a wavelength of 311 nm. A linear relationship with good correlation coefficient (r = 0.997, n = 9) was found between the peak area and lenalidomide concentrations in the range of 100 to 950 ng·mL⁻¹. The limits of detection and quantitation were 28 and 100 ng·mL⁻¹, respectively. The intra- and interassay precisions were satisfactory, and the accuracy of the method was proved. In conclusion, the proposed method is suitable for the accurate quantification of lenalidomide in human plasma with a wide linear range, from 100 to 950 ng·mL⁻¹. This is a valuable method for pharmacokinetic studies of lenalidomide in human subjects. © 2016 Society for Laboratory Automation and Screening.

  2. Climate change impacts on tree ranges: model intercomparison facilitates understanding and quantification of uncertainty.

    Science.gov (United States)

    Cheaib, Alissar; Badeau, Vincent; Boe, Julien; Chuine, Isabelle; Delire, Christine; Dufrêne, Eric; François, Christophe; Gritti, Emmanuel S; Legay, Myriam; Pagé, Christian; Thuiller, Wilfried; Viovy, Nicolas; Leadley, Paul

    2012-06-01

    Model-based projections of shifts in tree species range due to climate change are becoming an important decision support tool for forest management. However, poorly evaluated sources of uncertainty require more scrutiny before relying heavily on models for decision-making. We evaluated uncertainty arising from differences in model formulations of tree response to climate change based on a rigorous intercomparison of projections of tree distributions in France. We compared eight models ranging from niche-based to process-based models. On average, models project large range contractions of temperate tree species in lowlands due to climate change. There was substantial disagreement between models for temperate broadleaf deciduous tree species, but differences in the capacity of models to account for rising CO₂ impacts explained much of the disagreement. There was good quantitative agreement among models concerning the range contractions for Scots pine. For the dominant Mediterranean tree species, Holm oak, all models foresee substantial range expansion. © 2012 Blackwell Publishing Ltd/CNRS.

  3. Improved linearity using harmonic error rejection in a full-field range imaging system

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2008-02-01

    Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.

  4. Recording and quantification of ultrasonic echolocation clicks from free-ranging toothed whales

    DEFF Research Database (Denmark)

    Madsen, Peter Teglberg; Wahlberg, Magnus

    2007-01-01

    Toothed whales produce short, ultrasonic clicks of high directionality and source level to probe their environment acoustically. This process, termed echolocation, is to a large part governed by the properties of the emitted clicks. Therefore derivation of click source parameters from free......-ranging animals is of increasing importance to understand both how toothed whales use echolocation in the wild and how they may be monitored acoustically. This paper addresses how source parameters can be derived from free-ranging toothed whales in the wild using calibrated multi-hydrophone arrays and digital...... of discrete versions of toothed whale clicks that are meaningful in a biosonar context....

  5. Quantification of stromal vascular cell mechanics with a linear cell monolayer rheometer

    Energy Technology Data Exchange (ETDEWEB)

    Elkins, Claire M., E-mail: cma9@stanford.edu; Fuller, Gerald G. [Department of Chemical Engineering, Stanford University, Stanford, California 94305 (United States); Shen, Wen-Jun; Khor, Victor K.; Kraemer, Fredric B. [Division of Endocrinology, Gerontology and Metabolism, Stanford University, Stanford, California 94305 and Veterans Affairs Palo Alto Health Care System, Palo Alto, California 94304 (United States)

    2015-01-15

    Over the past few decades researchers have developed a variety of methods for measuring the mechanical properties of whole cells, including traction force microscopy, atomic force microscopy (AFM), and single-cell tensile testing. Though each of these techniques provides insight into cell mechanics, most also involve some nonideal conditions for acquiring live cell data, such as probing only one portion of a cell at a time, or placing the cell in a nonrepresentative geometry during testing. In the present work, we describe the development of a linear cell monolayer rheometer (LCMR) and its application to measure the mechanics of a live, confluent monolayer of stromal vascular cells. In the LCMR, a monolayer of cells is contacted on both top and bottom by two collagen-coated plates and allowed to adhere. The top plate then shears the monolayer by stepping forward to induce a predetermined step strain, while a force transducer attached to the top plate collects stress information. The stress and strain data are then used to determine the maximum relaxation modulus recorded after step-strain, G_r^0, referred to as the zero-time relaxation modulus of the cell monolayer. The present study validates the ability of the LCMR to quantify cell mechanics by measuring the change in G_r^0 of a confluent cell monolayer upon the selective inhibition of three major cytoskeletal components (actin microfilaments, vimentin intermediate filaments, and microtubules). The LCMR results indicate that both actin- and vimentin-deficient cells had ∼50% lower G_r^0 values than wild-type, whereas tubulin deficiency resulted in ∼100% higher G_r^0 values. These findings constitute the first use of a cell monolayer rheometer to quantitatively distinguish the roles of different cytoskeletal elements in maintaining cell stiffness and structure. Significantly, they are consistent with results obtained using single-cell mechanical testing methods
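
    The reported quantity reduces to the stress measured immediately after the step strain divided by the strain amplitude. The Python sketch below extracts that zero-time relaxation modulus from a synthetic stress-relaxation trace; the numbers are illustrative and the data are not LCMR output.

      import numpy as np

      def zero_time_relaxation_modulus(t, stress, gamma0, t_step=0.0):
          """G_r^0: the relaxation modulus G(t) = stress(t)/gamma0 right after the step."""
          after = t > t_step
          return stress[after][0] / gamma0

      # Synthetic single-exponential stress relaxation after a 5% step strain
      gamma0 = 0.05
      t = np.linspace(-1.0, 30.0, 3101)                                   # s
      stress = np.where(t > 0, 40.0 * gamma0 * np.exp(-t / 12.0), 0.0)    # Pa

      print("G_r^0 ~", round(zero_time_relaxation_modulus(t, stress, gamma0), 1), "Pa")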

  6. Comparison of linear intrascan and interscan dynamic ranges of Orbitrap and ion-mobility time-of-flight mass spectrometers.

    Science.gov (United States)

    Kaufmann, Anton; Walker, Stephan

    2017-11-30

    The linear intrascan and interscan dynamic ranges of mass spectrometers are important in metabolome and residue analysis. A large linear dynamic range is mandatory if both low- and high-abundance ions have to be detected and quantitated in heavy matrix samples. These performance criteria, as provided by modern high-resolution mass spectrometry (HRMS), were systematically investigated. The comparison included two generations of Orbitraps, and an ion mobility quadrupole time-of-flight (QTOF) system. In addition, different scan modes, as provided by the utilized instruments, were investigated. Calibration curves of different compounds covering a concentration range of five orders of magnitude were measured to evaluate the linear interscan dynamic range. The linear intrascan dynamic range and the resulting mass accuracy were evaluated by repeating these measurements in the presence of a very intense background. Modern HRMS instruments can show linear dynamic ranges of five orders of magnitude. Often, however, the linear dynamic range is limited by the detection capability (sensitivity and selectivity) and by the electrospray ionization. Orbitraps, as opposed to TOF instruments, show a reduced intrascan dynamic range. This is due to the limited C-trap and Orbitrap capacity. The tested TOF instrument shows poorer mass accuracies than the Orbitraps. In contrast, hyphenation with an ion-mobility device seems not to affect the linear dynamic range. The linear dynamic range of modern HRMS instrumentation has been significantly improved. This also refers to the virtual absence of systematic mass shifts at high ion abundances. The intrascan dynamic range of the current Orbitrap technology may still be a limitation when analyzing complex matrix extracts. On the other hand, the linear dynamic range is not only limited by the detector technology, but can also be shortened by peripheral devices, where the ionization and transfer of ions take place. Copyright © 2017 John Wiley & Sons, Ltd.
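
    One simple way to put a number on a linear dynamic range from a calibration series is to check over which concentration span the response factor (response divided by concentration) stays within a tolerance of its central value. The sketch below is a generic illustration of that idea rather than the authors' procedure; the concentrations, responses and the 20% tolerance are invented.

```python
import numpy as np

def linear_dynamic_range(conc, resp, tol=0.20):
    """Span of concentrations whose response factor (resp/conc) stays within
    `tol` of the median response factor -- an operational linear range."""
    conc, resp = np.asarray(conc, float), np.asarray(resp, float)
    rf = resp / conc
    ok = conc[np.abs(rf / np.median(rf) - 1.0) <= tol]
    return ok.min(), ok.max()

conc = [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]   # arbitrary units, 5 orders of magnitude
resp = [0.011, 0.10, 1.02, 9.8, 92.0, 610.0]   # detector saturation sets in at the top
print(linear_dynamic_range(conc, resp))        # -> (0.01, 100.0), i.e. 4 orders linear
```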

  7. Linear response theory for long-range interacting systems in quasistationary states.

    Science.gov (United States)

    Patelli, Aurelio; Gupta, Shamik; Nardini, Cesare; Ruffo, Stefano

    2012-02-01

    Long-range interacting systems, while relaxing to equilibrium, often get trapped in long-lived quasistationary states which have lifetimes that diverge with the system size. In this work, we address the question of how a long-range system in a quasistationary state (QSS) responds to an external perturbation. We consider a long-range system that evolves under deterministic Hamilton dynamics. The perturbation is taken to couple to the canonical coordinates of the individual constituents. Our study is based on analyzing the Vlasov equation for the single-particle phase-space distribution. The QSS represents a stable stationary solution of the Vlasov equation in the absence of the external perturbation. In the presence of small perturbation, we linearize the perturbed Vlasov equation about the QSS to obtain a formal expression for the response observed in a single-particle dynamical quantity. For a QSS that is homogeneous in the coordinate, we obtain an explicit formula for the response. We apply our analysis to a paradigmatic model, the Hamiltonian mean-field model, which involves particles moving on a circle under Hamiltonian dynamics. Our prediction for the response of three representative QSSs in this model (the water-bag QSS, the Fermi-Dirac QSS, and the Gaussian QSS) is found to be in good agreement with N-particle simulations for large N. We also show the long-time relaxation of the water-bag QSS to the Boltzmann-Gibbs equilibrium state. © 2012 American Physical Society

  8. Non-linear characteristics and long-range correlations in Asian stock markets

    Science.gov (United States)

    Jiang, J.; Ma, K.; Cai, X.

    2007-05-01

    We test several non-linear characteristics of Asian stock markets, which indicate the failure of the efficient market hypothesis and point to the fractal nature of these financial markets. In addition, by using detrended fluctuation analysis (DFA) to investigate the long-range correlation of volatility in the stock markets, we find that crossover phenomena exist in the DFA results. Further, in the region of small volatility the scaling behavior is more complicated, whereas in the region of large volatility the scaling exponent is close to 0.5, which suggests the market is more efficient. All these results may indicate the possibility of characteristic multifractal scaling behaviors in the financial markets.
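
    For reference, first-order DFA as used above can be implemented in a few lines: integrate the mean-subtracted series, detrend it piecewise in windows of size s, and fit the slope of log F(s) versus log s. A crossover like the one reported shows up as a change of that slope. The sketch below runs on synthetic white noise, for which the exponent should come out near 0.5; it is a generic illustration, not the authors' code.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: returns F(s) for each scale s."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segments = y[:n * s].reshape(n, s)
        t = np.arange(s)
        msq = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segments]                # detrend each window linearly
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

x = np.random.randn(10000)                         # synthetic, uncorrelated "returns"
scales = np.unique(np.logspace(1, 3, 15).astype(int))
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(f"DFA scaling exponent ~ {alpha:.2f}")       # ~0.5 expected for white noise
```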

  9. A Fourier Transform Spectrometer Based on an Electrothermal MEMS Mirror with Improved Linear Scan Range

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-09-01

    A Fourier transform spectrometer (FTS) that incorporates a closed-loop controlled, electrothermally actuated microelectromechanical systems (MEMS) micromirror is proposed and experimentally verified. The scan range and the tilting angle of the mirror plate are the two critical parameters for a MEMS-based FTS. In this work, the MEMS mirror with a footprint of 4.3 mm × 3.1 mm is based on a modified lateral-shift-free (LSF) bimorph actuator design with large piston and reduced tilting. Combined with a position-sensitive device (PSD) for tilt angle sensing, the feedback-controlled MEMS mirror generates a 430 µm stable linear piston scan with the mirror plate tilting angle less than ±0.002°. The usable piston scan range is increased to 78% of the MEMS mirror’s full scan capability, and a spectral resolution of 0.55 nm at 531.9 nm wavelength has been achieved. This is a significant improvement compared to prior work.
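
    As a rough consistency check on the numbers above: for a Michelson-type FTS the wavenumber resolution scales inversely with the maximum optical path difference (about twice the mirror scan), and the wavelength resolution follows as δλ ≈ λ²·δσ. The sketch below uses the idealized δσ ≈ 1/OPD relation; the exact prefactor depends on apodization and on how much of the scan is usable, which is why the reported 0.55 nm is somewhat larger than this idealized estimate.

```python
scan = 430e-6            # usable mirror piston scan, m (from the abstract)
wavelength = 531.9e-9    # m
opd = 2.0 * scan                      # optical path difference of a two-pass Michelson
d_sigma = 1.0 / opd                   # idealized wavenumber resolution, 1/m
d_lambda = wavelength ** 2 * d_sigma  # corresponding wavelength resolution, m
print(f"idealized resolution ~ {d_lambda * 1e9:.2f} nm at 531.9 nm")   # ~0.33 nm
```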

  10. Consistent quantification of climate impacts due to biogenic carbon storage across a range of bio-product systems

    International Nuclear Information System (INIS)

    Guest, Geoffrey; Bright, Ryan M.; Cherubini, Francesco; Strømman, Anders H.

    2013-01-01

    Temporary and permanent carbon storage from biogenic sources is seen as a way to mitigate climate change. The aim of this work is to illustrate the need to harmonize the quantification of such mitigation across all possible storage pools in the bio- and anthroposphere. We investigate nine alternative storage cases and a wide array of bio-resource pools: from annual crops, short rotation woody crops, medium rotation temperate forests, and long rotation boreal forests. For each feedstock type and biogenic carbon storage pool, we quantify the carbon cycle climate impact due to the skewed time distribution between emission and sequestration fluxes in the bio- and anthroposphere. Additional consideration of the climate impact from albedo changes in forests is also illustrated for the boreal forest case. When characterizing climate impact with global warming potentials (GWP), we find a large variance in results which is attributed to different combinations of biomass storage and feedstock systems. The storage of biogenic carbon in any storage pool does not always confer climate benefits: even when biogenic carbon is stored long-term in durable product pools, the climate outcome may still be undesirable when the carbon is sourced from slow-growing biomass feedstock. For example, when biogenic carbon from Norway Spruce from Norway is stored in furniture with a mean life time of 43 years, a climate change impact of 0.08 kg CO2eq per kg CO2 stored (100 year time horizon (TH)) would result. It was also found that when biogenic carbon is stored in a pool with negligible leakage to the atmosphere, the resulting GWP factor is not necessarily −1 CO2eq per kg CO2 stored. As an example, when biogenic CO2 from Norway Spruce biomass is stored in geological reservoirs with no leakage, we estimate a GWP of −0.56 kg CO2eq per kg CO2 stored (100 year TH) when albedo effects are also included. The large variance in GWPs across the range of resource and carbon storage

  11. Consistent quantification of climate impacts due to biogenic carbon storage across a range of bio-product systems

    Energy Technology Data Exchange (ETDEWEB)

    Guest, Geoffrey, E-mail: geoffrey.guest@ntnu.no; Bright, Ryan M., E-mail: ryan.m.bright@ntnu.no; Cherubini, Francesco, E-mail: francesco.cherubini@ntnu.no; Strømman, Anders H., E-mail: anders.hammer.stromman@ntnu.no

    2013-11-15

    Temporary and permanent carbon storage from biogenic sources is seen as a way to mitigate climate change. The aim of this work is to illustrate the need to harmonize the quantification of such mitigation across all possible storage pools in the bio- and anthroposphere. We investigate nine alternative storage cases and a wide array of bio-resource pools: from annual crops, short rotation woody crops, medium rotation temperate forests, and long rotation boreal forests. For each feedstock type and biogenic carbon storage pool, we quantify the carbon cycle climate impact due to the skewed time distribution between emission and sequestration fluxes in the bio- and anthroposphere. Additional consideration of the climate impact from albedo changes in forests is also illustrated for the boreal forest case. When characterizing climate impact with global warming potentials (GWP), we find a large variance in results which is attributed to different combinations of biomass storage and feedstock systems. The storage of biogenic carbon in any storage pool does not always confer climate benefits: even when biogenic carbon is stored long-term in durable product pools, the climate outcome may still be undesirable when the carbon is sourced from slow-growing biomass feedstock. For example, when biogenic carbon from Norway Spruce from Norway is stored in furniture with a mean life time of 43 years, a climate change impact of 0.08 kg CO{sub 2}eq per kg CO{sub 2} stored (100 year time horizon (TH)) would result. It was also found that when biogenic carbon is stored in a pool with negligible leakage to the atmosphere, the resulting GWP factor is not necessarily − 1 CO{sub 2}eq per kg CO{sub 2} stored. As an example, when biogenic CO{sub 2} from Norway Spruce biomass is stored in geological reservoirs with no leakage, we estimate a GWP of − 0.56 kg CO{sub 2}eq per kg CO{sub 2} stored (100 year TH) when albedo effects are also included. The large variance in GWPs across the range of

  12. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4T.

    Science.gov (United States)

    Neuberger, Ulf; Kickingereder, Philipp; Helluy, Xavier; Fischer, Manuel; Bendszus, Martin; Heiland, Sabine

    2017-12-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4T. Different phantoms, comprised of a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved-spectroscopy sequence (PRESS) and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p<0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p<0.001). The regression model with the J-difference editing method, however, had a significantly higher explanatory value than the regression model with the PRESS method (p<0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and showed higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG with linear combination of metabolite signals shows high systematic errors, particularly at low 2HG concentration and high concentration of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex postprocessing. Copyright © 2017. Published by Elsevier GmbH.

  13. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4 T

    Energy Technology Data Exchange (ETDEWEB)

    Neuberger, Ulf; Fischer, Manuel; Heiland, Sabine [Univ. of Heidelberg Medical Center, Heidelberg (Germany). Dept. of Neuroradiology; Univ. of Heidelberg Medical Center, Heidelberg (Germany). Div. of Experimental Radiology; Kickingereder, Philipp; Bendszus, Martin [Univ. of Heidelberg Medical Center, Heidelberg (Germany). Dept. of Neuroradiology; Helluy, Xavier [Bochum Univ. (Germany). Dept. of Biopsychology

    2017-07-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4 T. Different phantoms, comprised of a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved-spectroscopy sequence (PRESS) and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p < 0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p < 0.001). The regression model with the J-difference editing method however had a significantly higher explanatory value over the regression model with the PRESS method (p < 0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and with higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG with linear combination of metabolite signals shows high systematic errors particularly at low 2HG concentration and high concentration of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex postprocessing.

  14. Simultaneous flow cytometric quantification of plant nuclear DNA contents over the full range of described angiosperm 2C values.

    Science.gov (United States)

    Galbraith, David W

    2009-08-01

    Flow cytometry provides a rapid, accurate, and simple means to determine nuclear DNA contents (C-value) within plant homogenates. This parameter is extremely useful in a number of applications in basic and applied plant biology; for example, it provides an important starting point for projects involving whole genome sequencing, it facilitates characterization of plant species within natural and agricultural settings, it allows facile identification of engineered plants that are euploid or that represent desired ploidy classes, it points toward studies concerning the role of C-value in plant growth and development and in response to the environment and in terms of evolutionary fitness, and, in uncovering new and unexpected phenomena (for example endoreduplication), it uncovers new avenues of scientific enquiry. Despite the ease of the method, C-values have been determined for only around 2% of the described angiosperm (flowering plant) species. Within this small subset, one of the most remarkable observations is the range of 2C values, which spans at least two orders of magnitude. In determining C-values for new species, technical issues are encountered which relate both to requirement for a method that can provide accurate measurements across this extended dynamic range, and that can accommodate the large amounts of debris which accompanies flow measurements of plant homogenates. In this study, the use of the Accuri C6 flow cytometer for the analysis of plant C-values is described. This work indicates that the unusually large dynamic range of the C6, a design feature, coupled to the linearity of fluorescence emission conferred by staining of nuclei using propidium iodide, allows simultaneous analysis of species whose C-values span that of almost the entire described angiosperms. Copyright 2009 International Society for Advancement of Cytometry.

  15. Generating linear regression model to predict motor functions by use of laser range finder during TUG.

    Science.gov (United States)

    Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki

    2017-05-01

    The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from starting signal to first step, and minimum distance between the foot and a marker placed 3 m in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the corresponding motor function test, in particular the 10-m walk test and the OLS test (P < 0.05). Moreover, the TUG test time, regarded as an index of lower extremity function and mobility, has strong predictive ability for each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  16. Simultaneous quantification of protein phosphorylation sites using liquid chromatography-tandem mass spectrometry-based targeted proteomics: a linear algebra approach for isobaric phosphopeptides.

    Science.gov (United States)

    Xu, Feifei; Yang, Ting; Sheng, Yuan; Zhong, Ting; Yang, Mi; Chen, Yun

    2014-12-05

    As one of the most studied post-translational modifications (PTM), protein phosphorylation plays an essential role in almost all cellular processes. Current methods are able to predict and determine thousands of phosphorylation sites, whereas stoichiometric quantification of these sites is still challenging. Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS)-based targeted proteomics is emerging as a promising technique for site-specific quantification of protein phosphorylation using proteolytic peptides as surrogates of proteins. However, several issues may limit its application, one of which relates to the phosphopeptides with different phosphorylation sites and the same mass (i.e., isobaric phosphopeptides). While employment of site-specific product ions allows for these isobaric phosphopeptides to be distinguished and quantified, site-specific product ions are often absent or weak in tandem mass spectra. In this study, linear algebra algorithms were employed as an add-on to targeted proteomics to retrieve information on individual phosphopeptides from their common spectra. To achieve this simultaneous quantification, a LC-MS/MS-based targeted proteomics assay was first developed and validated for each phosphopeptide. Given the slope and intercept of calibration curves of phosphopeptides in each transition, linear algebraic equations were developed. Using a series of mock mixtures prepared with varying concentrations of each phosphopeptide, the reliability of the approach to quantify isobaric phosphopeptides containing multiple phosphorylation sites (≥ 2) was discussed. Finally, we applied this approach to determine the phosphorylation stoichiometry of heat shock protein 27 (HSP27) at Ser78 and Ser82 in breast cancer cells and tissue samples.
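
    The linear-algebra step amounts to solving a small linear system: each monitored transition's signal is modeled as the sum, over the co-eluting isobaric phosphopeptides, of concentration times that peptide's calibration slope for the transition, plus an intercept. A hedged sketch with invented slopes, intercepts and signals for two isobaric phosphopeptides and two transitions:

```python
import numpy as np

# Calibration of each phosphopeptide in each transition: signal = slope * conc + intercept.
slopes = np.array([[1200.0,  300.0],    # transition 1: [peptide A, peptide B]
                   [ 250.0, 1100.0]])   # transition 2
intercepts = np.array([50.0, 40.0])     # per transition
measured = np.array([8450.0, 10090.0])  # observed signals of the common (isobaric) pair

# Solve slopes @ conc = measured - intercepts for the individual concentrations.
conc, *_ = np.linalg.lstsq(slopes, measured - intercepts, rcond=None)
print(conc)                             # -> [5., 8.] in the calibration's units
```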

  17. Expanding the linear dynamic range for quantitative liquid chromatography-high resolution mass spectrometry utilizing natural isotopologue signals

    International Nuclear Information System (INIS)

    Liu, Hanghui; Lam, Lily; Yan, Lin; Chi, Bert; Dasgupta, Purnendu K.

    2014-01-01

    Highlights: • Less abundant isotopologue ions were utilized to decrease detector saturation. • A 25–50 fold increase in the upper limit of dynamic range was demonstrated. • Linear dynamic range was expanded without compromising mass resolution. - Abstract: The linear dynamic range (LDR) for quantitative liquid chromatography–mass spectrometry can be extended until ionization saturation is reached by using a number of target isotopologue ions in addition to the normally used target ion that provides the highest sensitivity. Less abundant isotopologue ions extend the LDR: the lower ion abundance decreases the probability of ion detector saturation. Effectively the sensitivity decreases and the upper limit of the LDR increases. We show in this paper that the technique is particularly powerful with a high resolution time of flight mass spectrometer because the data for all ions are automatically acquired, and we demonstrated this for four small organic molecules; the upper limits of LDRs increased by 25–50 times
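
    The idea can be expressed very simply: when the most abundant (monoisotopic) trace saturates the detector, quantify on a less abundant isotopologue trace instead and divide by its theoretical relative abundance, which puts it back on the monoisotopic scale. The sketch below is a generic illustration; the abundances and saturation threshold are invented, not taken from the paper.

```python
def quantifier_signal(areas, rel_abundance, saturation=1.0e6):
    """Pick the most abundant isotopologue trace that is below the saturation
    threshold and rescale it to the monoisotopic equivalent.
    areas: peak areas for M, M+1, M+2, ...; rel_abundance: theoretical ratios vs. M."""
    for area, ratio in zip(areas, rel_abundance):
        if area < saturation:
            return area / ratio          # back on the monoisotopic scale
    raise ValueError("all isotopologue traces saturated")

# Here the M trace is saturated, so the M+1 trace (~22% of M for a C20 compound) is used.
print(quantifier_signal([1.0e6, 2.2e5, 2.4e4], [1.00, 0.22, 0.024]))   # -> 1.0e6
```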

  18. Quantification of protein thiols and dithiols in the picomolar range using sodium borohydride and 4,4'-dithiodipyridine

    DEFF Research Database (Denmark)

    Hansen, Rosa E; Østergaard, Henrik; Nørgaard, Per

    2007-01-01

    Experimental determination of the number of thiols in a protein requires methodology that combines high sensitivity and reproducibility with low intrinsic thiol oxidation disposition. In detection of disulfide bonds, it is also necessary to efficiently reduce disulfides and to quantify the liberated thiols. Ellman's reagent (5,5'-dithiobis-[2-nitrobenzoic acid], DTNB) is the most widely used reagent for quantification of protein thiols, whereas dithiothreitol (DTT) is commonly used for disulfide reduction. DTNB suffers from a relatively low sensitivity, whereas DTT reduction is inconvenient … sodium borohydride and the thiol reagent 4,4'-dithiodipyridine (4-DPS). Because borohydride is efficiently destroyed by the addition of acid, the complete reduction and quantification can be performed conveniently in one tube without desalting steps. Furthermore, the use of reverse-phase high-performance liquid chromatography …

  19. Fiber optics frequency comb enabled linear optical sampling with operation wavelength range extension.

    Science.gov (United States)

    Liao, Ruolin; Wu, Zhichao; Fu, Songnian; Zhu, Shengnan; Yu, Zhe; Tang, Ming; Liu, Deming

    2018-02-01

    Although the linear optical sampling (LOS) technique is powerful enough to characterize various advanced modulation formats with high symbol rates, the central wavelength of a pulsed local oscillator (LO) needs to be carefully set according to that of the signal under test, due to the coherent mixing operation. Here, we experimentally demonstrate wideband LOS enabled by a fiber optics frequency comb (FOFC). Meanwhile, when the broadband FOFC acts as the pulsed LO, we propose a scheme to mitigate the enhanced sampling error arising in the non-ideal response of a balanced photodetector. Finally, precise characterizations of arbitrary 128 Gbps PDM-QPSK wavelength channels from 1550 to 1570 nm are successfully achieved, when a 101.3 MHz frequency spaced comb with a 3 dB spectral power ripple of 20 nm is used.

  20. Prediction of failures in linear systems with the use of tolerance ranges

    International Nuclear Information System (INIS)

    Gadzhiev, Ch.M.

    1993-01-01

    The problem of predicting the technical state of an object can be stated in a general case as that of predicting potential failures on the basis of a quantitative evaluation of the predicted parameters in relation to the set of tolerances on these parameters. The main stages in the prediction are collecting and preparing source data on the prehistory of the predicted phenomenon, forming a mathematical model of this phenomenon, working out the algorithm for the prediction, and adopting a solution from the prediction results. The final two stages of prediction are considered in this article. The prediction algorithm is proposed based on construction of the tolerance range for the signal of error between output coordinates of the system and its mathematical model. A solution regarding possible occurrence of failure in the system is formulated as a result of comparison of the tolerance range and the found confidence interval. 5 refs

  1. Dynamic Range Enhancement of High-Speed Electrical Signal Data via Non-Linear Compression

    Science.gov (United States)

    Laun, Matthew C. (Inventor)

    2016-01-01

    Systems and methods for high-speed compression of dynamic electrical signal waveforms to extend the measuring capabilities of conventional measuring devices such as oscilloscopes and high-speed data acquisition systems are discussed. Transfer function components and algorithmic transfer functions can be used to accurately measure signals that are within the frequency bandwidth but beyond the voltage range and voltage resolution capabilities of the measuring device.

  2. A wideband large dynamic range and high linearity RF front-end for U-band mobile DTV

    International Nuclear Information System (INIS)

    Liu Rongjiang; Liu Shengyou; Guo Guiliang; Cheng Xu; Yan Yuepeng

    2013-01-01

    A wideband large dynamic range and high linearity U-band RF front-end for mobile DTV is introduced, and includes a noise-cancelling low-noise amplifier (LNA), an RF programmable gain amplifier (RFPGA) and a current communicating passive mixer. The noise/distortion cancelling structure and RC post-distortion compensation are employed to improve the linearity of the LNA. An RFPGA with five stages provides large dynamic range and fine gain resolution. A simple resistor voltage network in the passive mixer decreases the gate bias voltage of the mixing transistor, and optimum linearity and symmetrical mixing is obtained at the same time. The RF front-end is implemented in a 0.25 μm CMOS process. Tests show that it achieves an IIP3 (third-order intercept point) of −17 dBm, a conversion gain of 39 dB, and a noise figure of 5.8 dB. The RFPGA achieves a dynamic range of −36.2 to 23.5 dB with a resolution of 0.32 dB. (semiconductor integrated circuits)

  3. Polarizable embedding with a multiconfiguration short-range density functional theory linear response method

    DEFF Research Database (Denmark)

    Hedegård, Erik D.; Olsen, Jógvan Magnus Haugaard; Knecht, Stefan

    2015-01-01

    We present here the coupling of a polarizable embedding (PE) model to the recently developed multiconfiguration short-range density functional theory method (MC-srDFT), which can treat multiconfigurational systems with a simultaneous account for dynamical and static correlation effects. PE-MC-srDFT is designed to combine efficient treatment of complicated electronic structures with inclusion of effects from the surrounding environment. The environmental effects encompass classical electrostatic interactions as well as polarization of both the quantum region and the environment. Using response theory … To demonstrate the capabilities of PE-MC-srDFT, we also investigated the retinylidene Schiff base chromophore embedded in the channelrhodopsin protein. While using a much more compact reference wave function in terms of active space, our PE-MC-srDFT approach yields excitation energies comparable in quality …

  4. Designing double-gap linear accelerators for a wide mass range

    International Nuclear Information System (INIS)

    Lysenko, W.P.; Wadlinger, E.A.; Rusnak, B.; Krawczyk, F.; Saadatmand, K.; Wan, Z.

    1998-01-01

    For applications like ion implantation, rf linacs using double-gap structures with external resonators can be used because they are practical at low frequencies. However, since the two gaps associated with a given resonator cannot be individually phased, it is not obvious how to build a linac that can efficiently accelerate particles having different mass/charge ratios. This paper describes the beam dynamics of double-gap rf linacs and shows how to maximize the range of mass/charge ratios. The theory also tells one how to rescale a linac tune (i.e., reset the voltages and phases) so that a new particle, having a different mass or charge, will behave similarly to the original particle

  5. A Facile Approach to Preparing Molecularly Imprinted Chitosan for Detecting 2,4,6-Tribromophenol with a Widely Linear Range

    Directory of Open Access Journals (Sweden)

    Limei Huang

    2017-04-01

    The environmental pollution of 2,4,6-tribromophenol (TBP) has attracted attention. Given the urgent need for the better provision of clean water, in situ determination of TBP is of great importance. Here, a facile and effective approach for detecting TBP is developed, based on coupling a molecular imprinting technique with electrodeposition of chitosan (CS) on a gold electrode. The TBP-imprinted CS film was fabricated using CS as the functional material and TBP as the template molecule. The experiments show that the morphology and electrochemical properties of the imprinted film sensor differed from those of the non-imprinted film electrode. The current of the imprinted film was linearly proportional to the TBP concentration, with a wide linear range of 1.0 × 10−7 mol·L−1 to 1.0 × 10−3 mol·L−1. With the drop-coating method as a reference prepared from the same functional material, the results illustrate that electrodeposition offers the advantage of a wider linear range.

  6. High-resolution continuum source electrothermal atomic absorption spectrometry: Linearization of the calibration curves within a broad concentration range

    Energy Technology Data Exchange (ETDEWEB)

    Katskov, Dmitri, E-mail: katskovda@tut.ac.za [Tshwane University of Technology, Chemistry Department, Pretoria 0001 (South Africa); Hlongwane, Miranda [Tshwane University of Technology, Chemistry Department, Pretoria 0001 (South Africa); Heitmann, Uwe [German Aerospace Center, Rose-Luxemburg Str. 2, 10178 Berlin (Germany); Florek, Stefan [ISAS-Leibniz-Institut fuer Analytische Wissenschaften e.V., Albert-Einstein-Str. 9,12489 Berlin (Germany)

    2012-05-15

    The calculation algorithm suggested provides linearization of the calibration curves in high-resolution continuum source electrothermal atomic absorption spectrometry. The algorithm is based on the modification of the function wavelength-integrated absorbance vs. concentration of analyte vapor in the absorption volume. According to the suggested approach, the absorption line is represented by a triangle for low and a trapezium for high analyte vapor concentration in the absorption volume. The respective semi-empirical formulas include two linearization parameters, which depend on properties of the absorption line and characteristics of the atomizer and spectrometer. The parameters can be approximately evaluated from the theory and determined in practice from the original broad-range calibration curve. The parameters were found and the proposed calculation algorithm verified in experiments on the direct determination of Ag, Cd, Cu, Fe, Mn and Pb in solutions within a concentration range from 0.15 to 625 μg·L−1 using tube, platform tube and filter furnace atomizers. The use of various atomizers, lines, elements and atomization temperatures made possible the simulation of various practical analytical conditions. It was found that the algorithm and optimal linearization parameters made it possible to obtain for each line and atomizer linear approximations of the calibration curves within 3-4 orders of magnitude with correlation coefficients close to 0.999. The algorithm makes it possible to employ a single line for direct element determination over a broad concentration range. The sources of errors and the possibility of a priori theoretical evaluation of the linearization parameters are discussed. - Highlights: • A new calculation algorithm for HR-CS ET AAS measurements was proposed and applied. • The suggested formulas include two parameters to be determined experimentally.

  7. Quantification of local and global elastic anisotropy in ultrafine grained gradient microstructures, produced by linear flow splitting

    DEFF Research Database (Denmark)

    Niehuesbernd, Jörn; Müller, Clemens; Pantleon, Wolfgang

    2013-01-01

    Consequently, the macroscopic elastic behavior results from the local elastic properties within the gradient. In the present investigation, profiles produced by the linear flow splitting process were examined with respect to local and global elastic anisotropy, which develops during the complex forming process …

  8. Muscle shear elastic modulus is linearly related to muscle torque over the entire range of isometric contraction intensity.

    Science.gov (United States)

    Ateş, Filiz; Hug, François; Bouillard, Killian; Jubeau, Marc; Frappart, Thomas; Couade, Mathieu; Bercoff, Jeremy; Nordez, Antoine

    2015-08-01

    Muscle shear elastic modulus is linearly related to muscle torque during low-level contractions. This study therefore assessed (i) whether this linear relationship holds for muscle torque over the entire range of isometric contraction and (ii) the influence of the size of the region of interest (ROI) used to average the shear modulus value. Ten healthy males performed two incremental isometric little finger abductions. The joint torque produced by the Abductor Digiti Minimi was considered as an index of muscle torque, and the muscle shear elastic modulus was measured. A high coefficient of determination (R(2)) (range: 0.86-0.98) indicated that the relationship between elastic modulus and torque can be accurately modeled by a linear regression over the entire range (0% to 100% of MVC). The changes in shear elastic modulus as a function of torque were highly repeatable. Lower R(2) values (0.89±0.13 for 1/16 of the ROI) and significantly increased absolute errors were observed when the shear elastic modulus was averaged over smaller ROIs (half, 1/4 and 1/16 of the full ROI) rather than the full ROI (mean size: 1.18±0.24 cm(2)). This suggests that the ROI should be as large as possible for accurate measurement of muscle shear modulus. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Zhiyuan Gao

    2015-11-01

    This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high speed linear CMOS image sensors. A multi-capacitor and self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors to the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −Tclk~+Tclk. A linear CMOS image sensor pixel array is designed in a 0.13 μm CMOS process to verify this DR-enhanced high speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, an improvement of 14.04 dB and 2.4 bit over the SNDR and ENOB without calibration.
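
    A behavioural sketch of the coarse/fine split described above: the coarse phase counts whole clock periods, the fine phase interpolates within one period, and the two are combined into a single code. The error-calibration step of the paper, which corrects the case where propagation-delay skew leaves the coarse count off by one period (within −Tclk to +Tclk), is not modelled here; the clock period and fine resolution below are illustrative, not the paper's values.

```python
TCLK = 10e-9   # coarse clock period, s (illustrative)
NFINE = 64     # fine interpolator steps per clock period (illustrative)

def two_step_code(t):
    """Ideal two-step conversion: whole clock periods (coarse) plus a
    fractional-period interpolation (fine), combined into one code."""
    coarse = int(t // TCLK)
    fine = int((t % TCLK) / TCLK * NFINE)
    return coarse * NFINE + fine

code = two_step_code(123.4e-9)
print(code, code * TCLK / NFINE)   # reconstructed time ~123.3 ns (quantized)
```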

  10. Development of parallel-plate-based MEMS tunable capacitors with linearized capacitance–voltage response and extended tuning range

    International Nuclear Information System (INIS)

    Shavezipur, M; Nieva, P; Khajepour, A; Hashemi, S M

    2010-01-01

    This paper presents a design technique that can be used to linearize the capacitance–voltage (C–V) response and extend the tuning range of parallel-plate-based MEMS tunable capacitors beyond that of conventional designs. The proposed technique exploits the curvature of the capacitor's moving electrode, which could be induced by either manipulating the stress gradients in the plate's material or using bi-layer structures. The change in curvature generates a nonlinear structural stiffness as the moving electrode undergoes out-of-plane deformation due to the actuation voltage. If the moving plate curvature is tailored such that the capacitance increment is proportional to the voltage increment, then a linear C–V response is obtained. The larger structural resistive force at higher bias voltage also delays the pull-in and increases the maximum tunability of the capacitor. Moreover, for capacitors containing an insulation layer between the two electrodes, the proposed technique completely eliminates the pull-in effect. The experimental data obtained from different capacitors fabricated using PolyMUMPs demonstrate the advantages of this design approach, where highly linear C–V responses and tunabilities as high as 1050% were recorded. The design methodology introduced in this paper could be easily extended to, for example, capacitive pressure and temperature sensors or infrared detectors to enhance their response characteristics.

  11. On the predictability of extreme events in records with linear and nonlinear long-range memory: Efficiency and noise robustness

    Science.gov (United States)

    Bogachev, Mikhail I.; Bunde, Armin

    2011-06-01

    We study the predictability of extreme events in records with linear and nonlinear long-range memory in the presence of additive white noise using two different approaches: (i) the precursory pattern recognition technique (PRT) that exploits solely the information about short-term precursors, and (ii) the return interval approach (RIA) that exploits long-range memory incorporated in the elapsed time after the last extreme event. We find that the PRT always performs better when only linear memory is present. In the presence of nonlinear memory, both methods demonstrate comparable efficiency in the absence of white noise. When additional white noise is present in the record (which is the case in most observational records), the efficiency of the PRT decreases monotonously with increasing noise level. In contrast, the RIA shows an abrupt transition between a phase of low level noise where the prediction is as good as in the absence of noise, and a phase of high level noise where the prediction becomes poor. In the phase of low and intermediate noise the RIA predicts considerably better than the PRT, which explains our recent findings in physiological and financial records.

  12. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    Science.gov (United States)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as the particles' inertia dims. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
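
    To make the baseline algorithm concrete, here is a plain global-best PSO sketch applied to a multimodal toy misfit (Rastrigin); in the tomography application the misfit would instead be the traveltime residual of a candidate velocity model. The competitive modification proposed in the paper, which re-energizes particles to preserve swarm diversity at late iterations, is not reproduced here, and all parameter values are generic defaults.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Plain global-best particle swarm optimization (minimization)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, f(gbest)

# Multimodal toy misfit (Rastrigin) standing in for a traveltime-residual misfit.
rastrigin = lambda m: 10.0 * len(m) + np.sum(m ** 2 - 10.0 * np.cos(2.0 * np.pi * m))
print(pso(rastrigin, dim=5))
```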

  13. The influence of landscape characteristics and home-range size on the quantification of landscape-genetics relationships

    Science.gov (United States)

    Tabitha A. Graves; Tzeidle N. Wasserman; Milton Cezar Ribeiro; Erin L. Landguth; Stephen F. Spear; Niko Balkenhol; Colleen B. Higgins; Marie-Josee Fortin; Samuel A. Cushman; Lisette P. Waits

    2012-01-01

    A common approach used to estimate landscape resistance involves comparing correlations of ecological and genetic distances calculated among individuals of a species. However, the location of sampled individuals may contain some degree of spatial uncertainty due to the natural variation of animals moving through their home range or measurement error in plant or animal …

  14. Direct quantification of PM2.5 fossil and biomass carbon within the Northern Front Range Air Quality Study's domain

    International Nuclear Information System (INIS)

    Klinedinst, D.B.; Currie, L.A.

    1999-01-01

    Radiocarbon (14C) analyses of PM2.5 (particulate matter with an aerodynamic diameter of 2.5 μm or less) of both ambient and source samples from the Northern Front Range Air Quality Study (NFRAQS) in Colorado were performed. The 14C analyses were undertaken to provide direct fossil vs modern (biomass) carbon source discrimination data for a subset of summer and winter 1996–1997 samples collected within the Denver metropolitan area. Samples were prepared for 14C accelerator mass spectrometry measurements using techniques specially developed for small samples, i.e., <100 μg C. For the days and sampling periods analyzed the median and interquartile range of the winter blank corrected fraction of modern carbon was 23% (16–34%) at Welby and 27% (25–37%) at Brighton. The summer samples exhibited a more mixed signature with a median and interquartile range of 47% (9–70%). Source samples yielded 14C signatures consistent with expectation. The authors conclude fossil-derived sources contribute substantially in both seasons and at both locations; however, the biomass carbon component dominates episodically in the summer.

  15. Kinetics of oriented crystallization of polymers in the linear stress-orientation range in the series expansion approach

    Directory of Open Access Journals (Sweden)

    L. Jarecki

    2018-04-01

    An analytical formula is derived for the oriented crystallization coefficient governing the kinetics of oriented crystallization under uniaxial amorphous orientation in the entire temperature range. A series expansion approach is applied to the free energy of crystallization in the Hoffman-Lauritzen kinetic model of crystallization, accounting for the entropy of orientation of the amorphous chains. The series expansion coefficients are calculated for systems of Gaussian chains in the linear stress-orientation range. Oriented crystallization rate functions are determined based on the ‘proportional expansion’ approach proposed by Ziabicki in the steady-state limit. Crystallization kinetics controlled by separate predetermined and sporadic primary nucleation is considered, as well as the kinetics involving both nucleation mechanisms potentially present in oriented systems. The involvement of sporadic nucleation in the transformation kinetics is predicted to increase with increasing amorphous orientation. Example computations illustrate the dependence of the calculated functions on temperature and amorphous orientation, as well as qualitative agreement of the calculations with experimental results.

  16. Lifetime estimates of a fusion reactor first wall by linear damage summation and strain range partitioning methods

    International Nuclear Information System (INIS)

    Liu, K.C.; Grossbeck, M.L.

    1979-01-01

    A generalized model of a first wall made of 20% cold-worked steel was examined for neutron wall loadings ranging from 2 to 5 MW/m². A spectrum of simplified on-off duty cycles was assumed with a 95% burn time. Independent evaluations of cyclic lifetimes were based on two methods: the method of linear damage summation currently being employed for use in ASME high-temperature design Code Case N-47 and that of strain range partitioning being studied for inclusion in the design code. An important point is that the latter method can incorporate a known decrease in ductility for materials subject to irradiation as a parameter, so low-cycle fatigue behavior can be estimated for irradiated material. Lifetimes predicted by the two methods agree reasonably well despite their diversity in concept. Lack of high-cycle fatigue data for the material tested at temperatures within the range of our interest precludes making conclusions on the accuracy of the predicted results, but such data are forthcoming. The analysis includes stress relaxation due to thermal and irradiation-induced creep. Reduced ductility values from irradiations that simulate the environment of the first wall of a fusion reactor were used to estimate the lifetime of the first wall under irradiation. These results indicate that 20% cold-worked type 316 stainless steel could be used as a first-wall material meeting an 8 to 10 MW-year/m² lifetime goal for a neutron wall loading of about 2 MW/m² and a maximum temperature of about 500 °C

  17. Lithium adduct as precursor ion for sensitive and rapid quantification of 20 (S)-protopanaxadiol in rat plasma by liquid chromatography/quadrupole linear ion trap mass spectrometry and application to rat pharmacokinetic study.

    Science.gov (United States)

    Bao, Yuanwu; Wang, Quanying; Tang, Pingming

    2013-03-01

    A novel, rapid and sensitive liquid chromatography/quadrupole linear ion trap mass spectrometry [LC-ESI-(QqLIT)MS/MS] method was developed and validated for the quantification of protopanaxadiol (PPD) in rat plasma. Oleanolic acid (OA) was used as internal standard (IS). A simple protein precipitation based on acetonitrile (ACN) was employed. Chromatographic separation was performed on a Sepax GP-C18 column (50 × 2.1 mm, 5 μm) with a mobile phase consisting of ACN-water (90:10, v/v) with 1.5 μM formic acid and 25 mM lithium acetate, at a flow rate of 0.4 ml/min for 3.0 min. Multiple-reaction-monitoring mode was used with the lithium adduct ions as precursor ions, monitoring m/z 467.5/449.4 and 455.6/407.4 for the drug and IS, respectively. The calibration curve was linear over a concentration range of 0.5-100 ng/ml with a correlation coefficient >0.99. The limit of detection was 0.2 ng/ml in rat plasma for PPD. The results of the intraday and interday precision and accuracy studies were well within the acceptable limits. The validated method was successfully applied to a pharmacokinetic study of PPD after intravenous and gavage administration to rats. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Quantification of chemical and physical processes influencing ozone during long-range transport using a trajectory ensemble

    Directory of Open Access Journals (Sweden)

    M. Cain

    2012-08-01

    During long-range transport, many distinct processes – including photochemistry, deposition, emissions and mixing – contribute to the transformation of air mass composition. Partitioning the effects of different processes can be useful when considering the sensitivity of chemical transformation to, for example, a changing environment or anthropogenic influence. However, transformation is not observed directly, since mixing ratios are measured, and models must be used to relate changes to processes. Here, four cases from the ITCT-Lagrangian 2004 experiment are studied. In each case, aircraft intercepted a distinct air mass several times during transport over the North Atlantic, providing a unique dataset and quantifying the net changes in composition from all processes. A new framework is presented to deconstruct the change in O3 mixing ratio (ΔO3) into its component processes, which were not measured directly, taking into account the uncertainty in measurements, initial air mass variability and its time evolution.

    The results show that the net chemical processing (ΔO3chem) over the whole simulation is greater than net physical processing (ΔO3phys) in all cases. This is in part explained by cancellation effects associated with mixing. In contrast, each case is in a regime of either net photochemical destruction (lower tropospheric transport) or production (an upper tropospheric biomass burning case). However, physical processes influence O3 indirectly through addition or removal of precursor gases, so that changes to physical parameters in a model can have a larger effect on ΔO3chem than ΔO3phys. Despite its smaller magnitude, the physical processing distinguishes the lower tropospheric export cases, since the net photochemical O3 change is −5 ppbv per day in all three cases.

    Processing is quantified using a Lagrangian

  19. Left Ventricular Stroke Volume Quantification by Contrast Echocardiography – Comparison of Linear and Flow-Based Methods to Cardiac Magnetic Resonance

    Science.gov (United States)

    Dele-Michael, Abiola O.; Fujikura, Kana; Devereux, Richard B; Islam, Fahmida; Hriljac, Ingrid; Wilson, Sean R.; Lin, Fay; Weinsaft, Jonathan W.

    2014-01-01

    Background: Echocardiography (echo) quantified LV stroke volume (SV) is widely used to assess systolic performance after acute myocardial infarction (AMI). This study compared two common echo approaches – predicated on flow (Doppler) and linear chamber dimensions (Teichholz) – to volumetric SV and global infarct parameters quantified by cardiac magnetic resonance (CMR). Methods: Multimodality imaging was performed as part of a post-AMI registry. For echo, SV was measured by Doppler and Teichholz methods. Cine-CMR was used for volumetric SV and LVEF quantification, and delayed-enhancement CMR for infarct size. Results: 142 patients underwent same-day echo and CMR. On echo, mean SV by Teichholz (78±17ml) was slightly higher than Doppler (75±16ml; Δ=3±13ml, p=0.02). Compared to SV on CMR (78±18ml), mean difference by Teichholz (Δ=−0.2±14; p=0.89) was slightly smaller than Doppler (Δ=−3±14; p=0.02) but limits of agreement were similar between CMR and echo methods (Teichholz: −28, 27 ml, Doppler: −31, 24ml). For Teichholz, differences with CMR SV were greatest among patients with anteroseptal or lateral wall hypokinesis (p<0.05). For Doppler, differences were associated with aortic valve abnormalities or root dilation (p=0.01). SV by both echo methods decreased stepwise in relation to global LV injury as assessed by CMR-quantified LVEF and infarct size (p<0.01). Conclusions: Teichholz and Doppler calculated SV yield similar magnitude of agreement with CMR. Teichholz differences with CMR increase with septal or lateral wall contractile dysfunction, whereas Doppler yields increased offsets in patients with aortic remodeling. PMID:23488864
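
    For reference, the two echo estimates compared above reduce to standard formulas: Teichholz converts a single LV internal dimension into a volume (SV being the end-diastolic minus end-systolic volume), and Doppler SV is LVOT cross-sectional area times the velocity-time integral; method agreement is then summarized Bland-Altman style by the mean difference and its 95% limits. The numbers below are illustrative, not study data.

```python
import numpy as np

def teichholz_volume(d_cm):
    """Teichholz LV volume (mL) from a single internal linear dimension (cm)."""
    return 7.0 / (2.4 + d_cm) * d_cm ** 3

def sv_teichholz(lvidd_cm, lvids_cm):
    return teichholz_volume(lvidd_cm) - teichholz_volume(lvids_cm)

def sv_doppler(lvot_diam_cm, vti_cm):
    """Doppler SV (mL) = LVOT cross-sectional area x velocity-time integral."""
    return np.pi * (lvot_diam_cm / 2.0) ** 2 * vti_cm

def limits_of_agreement(a, b):
    """Bland-Altman style mean difference and 95% limits between two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean(), d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

print(sv_teichholz(5.0, 3.2), sv_doppler(2.1, 21.0))        # ~77 mL and ~73 mL
print(limits_of_agreement([78, 80, 75], [75, 82, 71]))      # three illustrative pairs (mL)
```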

  20. Estimating the input function non-invasively for FDG-PET quantification with multiple linear regression analysis: simulation and verification with in vivo data

    International Nuclear Information System (INIS)

    Fang, Yu-Hua; Kao, Tsair; Liu, Ren-Shyan; Wu, Liang-Chih

    2004-01-01

    A novel statistical method, namely Regression-Estimated Input Function (REIF), is proposed in this study for the purpose of non-invasive estimation of the input function for fluorine-18 2-fluoro-2-deoxy-d-glucose positron emission tomography (FDG-PET) quantitative analysis. We collected 44 patients who had undergone a blood sampling procedure during their FDG-PET scans. First, we generated tissue time-activity curves of the grey matter and the whole brain with a segmentation technique for every subject. Summations of different intervals of these two curves were used as a feature vector, which also included the net injection dose. Multiple linear regression analysis was then applied to find the correlation between the input function and the feature vector. After a simulation study with in vivo data, the data of 29 patients were applied to calculate the regression coefficients, which were then used to estimate the input functions of the other 15 subjects. Comparing the estimated input functions with the corresponding real input functions, the averaged error percentages of the area under the curve and the cerebral metabolic rate of glucose (CMRGlc) were 12.13±8.85 and 16.60±9.61, respectively. Regression analysis of the CMRGlc values derived from the real and estimated input functions revealed a high correlation (r=0.91). No significant difference was found between the real CMRGlc and that derived from our regression-estimated input function (Student's t test, P>0.05). The proposed REIF method demonstrated good abilities for input function and CMRGlc estimation, and represents a reliable replacement for the blood sampling procedures in FDG-PET quantification. (orig.)
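
    The regression step is ordinary multiple linear regression: a design matrix built from image-derived features (interval sums of the grey-matter and whole-brain time-activity curves plus the net injected dose) is fitted to the measured input function in the training subjects and then applied to new subjects. In the hedged sketch below a single scalar summary of the input function (e.g. its area under the curve) stands in for the full time course, and all array contents are placeholders for shape only.

```python
import numpy as np

def fit_reif(features, input_summary):
    """Ordinary least squares mapping image-derived features to an
    input-function summary (intercept added as a constant column)."""
    X = np.column_stack([np.ones(len(features)), features])
    beta, *_ = np.linalg.lstsq(X, input_summary, rcond=None)
    return beta

def predict_reif(beta, features):
    X = np.column_stack([np.ones(len(features)), features])
    return X @ beta

# Shapes only: 29 training subjects, illustrative 9-element feature vectors, 15 test subjects.
rng = np.random.default_rng(1)
train_X, train_y = rng.random((29, 9)), rng.random(29)
beta = fit_reif(train_X, train_y)
print(predict_reif(beta, rng.random((15, 9))).shape)        # -> (15,)
```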

  1. Analysis of linear and cyclic oligomers in polyamide-6 without sample preparation by liquid chromatography using the sandwich injection method. II. Methods of detection and quantification and overall long-term performance.

    Science.gov (United States)

    Mengerink, Y; Peters, R; Kerkhoff, M; Hellenbrand, J; Omloo, H; Andrien, J; Vestjens, M; van der Wal, S

    2000-05-05

    By separating the first six linear and cyclic oligomers of polyamide-6 on a reversed-phase high-performance liquid chromatographic system after sandwich injection, quantitative determination of these oligomers becomes feasible. Low-wavelength UV detection of the different oligomers and selective post-column reaction detection of the linear oligomers with o-phthalic dicarboxaldehyde (OPA) and 3-mercaptopropionic acid (3-MPA) are discussed. A general methodology for quantification of oligomers in polymers was developed. It is demonstrated that the empirically determined group-equivalent absorption coefficients and quench factors are a convenient way of quantifying linear and cyclic oligomers of nylon-6. The overall long-term performance of the method was studied by monitoring a reference sample and the calibration factors of the linear and cyclic oligomers.

  2. Extending the Linear Modulation Range to the Full Base Speed Using a Single DC-Link Multilevel Inverter With Capacitor-Fed H-Bridges for IM Drives

    DEFF Research Database (Denmark)

    Rahul, Arun; Pramanick, Sumit; Kaarthik, R. Sudharshan

    2017-01-01

    In this paper, a new space vector pulse width modulation method to extend the linear modulation range of a cascaded five-level inverter topology with a single dc supply is presented. Using this method, the inverter can be controlled linearly and the peak phase fundamental output voltage of the inverter can be increased from 0.577 to 0.637 Vdc without increasing the dc bus voltage and without exceeding the induction motor voltage rating. This new technique makes use of the cascaded inverter pole voltage redundancy and properties of the space vector structure for its operation. Using this, the induction …
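
    The two voltage limits quoted above correspond to familiar constants: the conventional linear space-vector PWM ceiling of Vdc/√3 ≈ 0.577 Vdc and the six-step fundamental of 2Vdc/π ≈ 0.637 Vdc that the proposed method reaches while staying linear. A quick numerical check (normalized dc-link voltage assumed):

```python
import math

vdc = 1.0                                # normalized dc-link voltage
v_svpwm_limit = vdc / math.sqrt(3.0)     # ~0.577 Vdc: conventional linear SVPWM ceiling
v_extended = 2.0 * vdc / math.pi         # ~0.637 Vdc: six-step fundamental, reached linearly here
print(v_svpwm_limit, v_extended, (v_extended / v_svpwm_limit - 1.0) * 100.0)   # ~10.3 % gain
```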

  3. Quantification of normative ranges and baseline predictors of aortoventricular interface dimensions using multi-detector computed tomographic imaging in patients without aortic valve disease

    International Nuclear Information System (INIS)

    Gooley, Robert P.; Cameron, James D.; Soon, Jennifer; Loi, Duncan; Chitale, Gauri; Syeda, Rifath; Meredith, Ian T.

    2015-01-01

    Highlights: • MDCT imaging of the aortoventricular interface is increasingly common. • We present normative ranges for aortoventricular interface dimensions. • Such techniques and ranges should be used to standardise reporting and research. - Abstract: Background: Multidetector computed tomographic (MDCT) assessment of the aortoventricular interface has gained increased importance with the advent of minimally invasive treatment modalities for aortic and mitral valve disease. This has included a standardised technique of identifying a plane through the nadir of each coronary cusp, the basal plane, and taking further measurements in relation to this plane. Despite this, there are no published data defining normal ranges for these aortoventricular metrics in a healthy cohort. This study seeks to quantify normative ranges for MDCT-derived aortoventricular dimensions and evaluate baseline demographic and anthropometric associates of these measurements in a normal cohort. Methods: 250 consecutive patients undergoing MDCT coronary angiography were included. Aortoventricular dimensions at multiple levels of the aortoventricular interface were assessed and normative ranges quantified. Multivariate linear regression was performed to identify baseline predictors of each metric. Results: The mean age was 59 ± 12 years. The basal plane was eccentric (EI = 0.22 ± 0.06) while the left ventricular outflow tract was more eccentric (EI = 0.32 ± 0.06), with no correlation to gender, age or hypertension. Male gender, height and body mass index were consistent independent predictors of larger aortoventricular dimensions at all anatomical levels, while age was predictive of supra-annular measurements. Conclusions: Male gender, height and BMI are independent predictors of all aortoventricular dimensions while age predicts only supra-annular dimensions. Use of defined metrics such as the basal plane and formation of normative ranges for these metrics allows reference for clinical

  4. Quantification of normative ranges and baseline predictors of aortoventricular interface dimensions using multi-detector computed tomographic imaging in patients without aortic valve disease

    Energy Technology Data Exchange (ETDEWEB)

    Gooley, Robert P., E-mail: robert.gooley@monashhealth.org [MonashHeart, Monash Health, Melbourne 3168 (Australia); Monash Cardiovascular Research Centre, Department of Medicine (MMC), Monash University, Melbourne 3168 (Australia); Cameron, James D., E-mail: james.cameron@monash.edu [MonashHeart, Monash Health, Melbourne 3168 (Australia); Monash Cardiovascular Research Centre, Department of Medicine (MMC), Monash University, Melbourne 3168 (Australia); Soon, Jennifer, E-mail: jenn.sa@gmail.com [MonashHeart, Monash Health, Melbourne 3168 (Australia); Monash Cardiovascular Research Centre, Department of Medicine (MMC), Monash University, Melbourne 3168 (Australia); Loi, Duncan, E-mail: dloi2@student.monash.edu [Monash Cardiovascular Research Centre, Department of Medicine (MMC), Monash University, Melbourne 3168 (Australia); Chitale, Gauri, E-mail: gchi21@student.monash.edu [Monash Cardiovascular Research Centre, Department of Medicine (MMC), Monash University, Melbourne 3168 (Australia); Syeda, Rifath, E-mail: rssye1@student.monash.edu [Monash Cardiovascular Research Centre, Department of Medicine (MMC), Monash University, Melbourne 3168 (Australia); Meredith, Ian T., E-mail: ian.meredith@myheart.id.au [MonashHeart, Monash Health, Melbourne 3168 (Australia); Monash Cardiovascular Research Centre, Department of Medicine (MMC), Monash University, Melbourne 3168 (Australia)

    2015-09-15

    Highlights: • MDCT imaging of the aortoventricular interface is increasingly common. • We present normative ranges for aortoventricular interface dimensions. • Such techniques and ranges should be used to standardise reporting and research. - Abstract: Background: Multidetector computed tomographic (MDCT) assessment of the aortoventricular interface has gained increased importance with the advent of minimally invasive treatment modalities for aortic and mitral valve disease. This has included a standardised technique of identifying a plane through the nadir of each coronary cusp, the basal plane, and taking further measurements in relation to this plane. Despite this, there are no published data defining normal ranges for these aortoventricular metrics in a healthy cohort. This study seeks to quantify normative ranges for MDCT-derived aortoventricular dimensions and evaluate baseline demographic and anthropometric associates of these measurements in a normal cohort. Methods: 250 consecutive patients undergoing MDCT coronary angiography were included. Aortoventricular dimensions at multiple levels of the aortoventricular interface were assessed and normative ranges quantified. Multivariate linear regression was performed to identify baseline predictors of each metric. Results: The mean age was 59 ± 12 years. The basal plane was eccentric (EI = 0.22 ± 0.06) while the left ventricular outflow tract was more eccentric (EI = 0.32 ± 0.06), with no correlation to gender, age or hypertension. Male gender, height and body mass index were consistent independent predictors of larger aortoventricular dimensions at all anatomical levels, while age was predictive of supra-annular measurements. Conclusions: Male gender, height and BMI are independent predictors of all aortoventricular dimensions while age predicts only supra-annular dimensions. Use of defined metrics such as the basal plane and formation of normative ranges for these metrics allows reference for clinical

  5. Linear interpolation method in ensemble Kohn-Sham and range-separated density-functional approximations for excited states

    DEFF Research Database (Denmark)

    Senjean, Bruno; Knecht, Stefan; Jensen, Hans Jørgen Aa

    2015-01-01

    Gross-Oliveira-Kohn density-functional theory (GOK-DFT) for ensembles is, in principle, very attractive but has been hard to use in practice. A practical model based on GOK-DFT for the calculation of electronic excitation energies is discussed. The model relies on two modifications of GOK-DFT: use...... promising results have been obtained for both single (including charge transfer) and double excitations with spin-independent short-range local and semilocal functionals. Even at the Kohn-Sham ensemble DFT level, which is recovered when the range-separation parameter is set to 0, LIM performs better than...

  6. Facile Quantification and Identification Techniques for Reducing Gases over a Wide Concentration Range Using a MOS Sensor in Temperature-Cycled Operation

    OpenAIRE

    Caroline Schultealbert; Tobias Baur; Andreas Schütze; Tilman Sauerwald

    2018-01-01

    Dedicated methods for quantification and identification of reducing gases based on model-based temperature-cycled operation (TCO) using a single commercial MOS gas sensor are presented. During high temperature phases the sensor surface is highly oxidized, yielding a significant sensitivity increase after switching to lower temperatures (differential surface reduction, DSR). For low concentrations, the slope of the logarithmic conductance during this low-temperature phase is evaluated and can ...

  7. Molecular magnetism of a linear Fe(III)-Mn(II)-Fe(III) complex. Influence of long-range exchange interaction

    International Nuclear Information System (INIS)

    Lengen, M.; Chaudhuri, P.

    1994-01-01

    The magnetic properties of [L-Fe(III)-dmg₃Mn(II)-Fe(III)-L](ClO₄)₂ have been characterized by magnetic susceptibility, EPR, and Moessbauer studies. L represents 1,4,7-trimethyl-1,4,7-triazacyclononane and dmg represents dimethylglyoxime. X-ray diffraction measurements show that the arrangement of the three metal centers is strictly linear, with atomic distances d_Fe-Mn = 0.35 nm and d_Fe-Fe = 0.7 nm. Magnetic susceptibility measurements (3-295 K) were analyzed in the framework of the spin-Hamiltonian formalism considering Heisenberg exchange and Zeeman interaction: H = J_Fe-Mn (S_Fe1 + S_Fe2)·S_Mn + J_Fe-Fe S_Fe1·S_Fe2 + g μ_B S_total·B. The spins S_Fe1 = S_Fe2 = S_Mn = 5/2 of the complex are antiferromagnetically coupled, yielding a total spin of S_total = 5/2 with exchange coupling constants J_Fe-Mn = 13.4 cm⁻¹ and J_Fe-Fe = 4.5 cm⁻¹. Magnetically split Moessbauer spectra were recorded at 1.5 K under various applied fields (20 mT, 170 mT, 4 T). The spin-Hamiltonian analysis of these spectra yields isotropic magnetic hyperfine coupling with A_total/(g_N μ_N) = -18.5 T. The corresponding local component A_Fe is related to A_total via spin projection: A_total = (6/7)A_Fe. The resulting A_Fe/(g_N μ_N) = -21.6 T is in agreement with standard values of ferric high-spin complexes. Spin-Hamiltonian parameters as obtained from Moessbauer studies and exchange coupling constants as derived from susceptibility measurements are corroborated by temperature-dependent EPR studies. (orig.)

  8. Some remarks on the time of flight and range of a projectile in a linear resisting medium

    Directory of Open Access Journals (Sweden)

    S. M. Stewart

    2011-01-01

    Full Text Available In view of the recent work by Karkantzakos [Journal of Engineering Science and Technology Review 2 (2009) 76–81], a number of remarks are made highlighting the connection between the Lambert W function and the time of flight and range of a projectile moving in a resisting medium where the retarding force acting on the projectile is proportional to its velocity. In particular, we show how each of these quantities can be expressed in closed form in terms of the Lambert W function and indicate how the analysis of the motion becomes greatly simplified by its introduction.

  9. Multibloc system electronic equipment: D.C. linear - logarithmic amplifier and periodmeter and wide range (pulses, fluctuations and direct current) measuring set

    International Nuclear Information System (INIS)

    Guerre, J.; Plaige, Y.; Vaux, C.

    1974-01-01

    The requirements which have led to the design of a specific equipment for reactor neutron control (the Multibloc system) are briefly recalled. It is shown how, in order to reduce installation costs, the development moved towards multifunction processing of the signals delivered by a single detector. Two major achievements in line with this trend are described: the D.C. linear-logarithmic amplifier and periodmeter, and the wide dynamic range measuring set [fr]

  10. Creep-fatigue life prediction for different heats of Type 304 stainless steel by linear-damage rule, strain-range partitioning method, and damage-rate approach

    International Nuclear Information System (INIS)

    Maiya, P.S.

    1978-07-01

    The creep-fatigue life results for five different heats of Type 304 stainless steel at 593°C (1100°F), generated under push-pull conditions in the axial strain-control mode, are presented. The life predictions for the various heats based on the linear-damage rule, strain-range partitioning method, and damage-rate approach are discussed. The appropriate material properties required for computation of fatigue life are also included.

  11. Experimental investigation of the accuracy for absolute quantification of brain creatine concentration using long time echo point resolved spectroscopy sequence with an external standard and linear combination of model spectra

    International Nuclear Information System (INIS)

    Lin Yan; Shen Zhiwei; Xiao Yeyu; Zheng Wenbin; Wu Renhua; Li Hui; Xiao Zhuanwei

    2008-01-01

    Objective: To investigate the accuracy of absolute quantification of brain creatine (Cr) concentration using a long echo time (TE) point resolved spectroscopy (PRESS) sequence performed with an external standard and postprocessed with the linear combination of model spectra (LCModel). Methods: Ten swine (3.1 ± 0.6 kg) and an external standard phantom containing detectable compounds of known concentration were investigated in this study using a 1.5 T GE Signa scanner and a standard head coil. The single-voxel proton magnetic resonance spectroscopy (¹H-MRS) data were acquired from two ROIs (2 cm × 2 cm × 2 cm) placed in the swine brain and the external standard solution using the PRESS sequence with TE 135 ms, TR 1500 ms, and 128 scan averages. The in vivo quantification of Cr was accomplished with LCModel. After the ¹H-MRS examination, each animal was sacrificed immediately. The in vitro Cr concentration was analyzed by high performance liquid chromatography (HPLC). Results: In the ¹H-MRS group, the Cr concentration was (9.37 ± 0.14) mmol/kg. In the HPLC group, the Cr concentration was (8.91 ± 0.13) mmol/kg. Good agreement was obtained between these two methods (t=9.038, P=0.491). Conclusion: The long echo time PRESS sequence performed with an external standard and processed with LCModel is proven to be an accurate technique to detect the in vivo brain Cr concentration. (authors)

  12. Study of deuteron photodisintegration with linearly polarized photons over the energy range E_el = 0.4 to 0.8 GeV

    Energy Technology Data Exchange (ETDEWEB)

    Agababyan, K.S.; Adamyan, F.V.; Akopyan, G.G.; Vartapetyan, G.A.; Galumyan, P.I.; Grabskii, V.O.; Karapetyan, V.V.; Karapetyan, G.V.; Kordonskii, M.S.

    1985-06-01

    We describe the experimental methods and the results of measurements of the asymmetry of the cross section of the γd → pn reaction induced by linearly polarized photons over the energy range E_el = 0.4 to 0.8 GeV and proton angles in the c.m. system θ* = 45° to 95°. Experiments were conducted on a two-arm spectrometer installation. The results obtained agree neither with calculations within the framework of phenomenological models, nor with predictions of a partial-wave analysis that includes the contribution of dibaryon resonances.

  13. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects.

    Science.gov (United States)

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2016-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms

  14. Lung tumors and radon inhalation in over 2000 rats: Approximate linearity across a wide range of doses and potentiation by tobacco smoke

    International Nuclear Information System (INIS)

    Gray, R.G.; Lafuma, J.; Parish, S.E.; Peto, R.; CEA Centre d'Etudes Nucleaires de Fontenay-aux-Roses

    1986-01-01

    More than 2000 rats were exposed to cumulative doses of up to 28,000 WLMs of radon gas. More than 300 pulmonary tumors were induced by this exposure, most being nonfatal lesions detected only at autopsy of animals that had died of unrelated causes. Above 6000 WLMs rats suffered increasingly from life shortening due to radiation-induced nonneoplastic causes and so had less time in which to develop tumors. When adjusted for these competing causes of death, the hazard function for the excess risk of developing pulmonary tumors was approximately linearly related to dose throughout the range of doses studied. This suggests that some previously reported high-dose "reductions" in radiogenic tumor-induction rates may chiefly have involved the killing of rats rather than the killing of precursor cells. Rats exposed to radon and then to six months of inhalation of tobacco smoke had a four times greater age-specific prevalence of pulmonary tumors than rats exposed to an identical radon dose either alone or preceded by tobacco smoke inhalation. This suggests that tobacco smoke may accelerate the carcinogenic process by acting as a promoter of radiation-induced somatic damage. These data suggest that, for assessing human risk from exposure to radon, the linear model should be assumed, but that the WLM is not on its own an adequate index of carcinogenic insult. 7 refs., 2 figs., 4 tabs

  15. Rational design of binder-free noble metal/metal oxide arrays with nanocauliflower structure for wide linear range nonenzymatic glucose detection

    KAUST Repository

    Li, Zhenzhen

    2015-06-12

    One-dimensional nanocomposites of a metal oxide and a noble metal are expected to show superior performance for nonenzymatic glucose detection due to the good conductivity and high catalytic activity inherited from the noble metal and the metal oxide, respectively. As a proof of concept, we synthesized a gold and copper oxide (Au/CuO) composite with a unique one-dimensional nanocauliflower structure. Owing to the nature of the synthesis method, no foreign binder was needed to keep either Au or CuO in place. To the best of our knowledge, this is the first attempt to combine a metal oxide and a noble metal in a binder-free style for fabricating a nonenzymatic glucose sensor. The Au/CuO nanocauliflowers, with a large electrochemically active surface and a high electrolyte contact area, promise a wide linear range and highly sensitive detection of glucose with good stability and reproducibility, owing to the good electrical conductivity of Au and the high electrocatalytic activity of CuO.

  16. X-ray diffraction measurement of the linear thermal expansion coefficients of WCoB in the range 300 to 973 K

    International Nuclear Information System (INIS)

    Petrov, K.; Will, G.

    1981-01-01

    High-temperature treatment of tungsten carbide-cobalt hard alloys in TiB₂ media leads to the formation of a surface diffusion coating which contains orthorhombic WCoB. The function of this compound in enhancing the wear resistance of cutting tools is discussed. The thermal expansion of WCoB is of primary interest, since the wear resistance of the coating reflects the degree of matching of the thermal expansion coefficients of the different phases. Preparation of the samples is described and experimental details of the X-ray diffraction measurements are given. The temperature dependence of the lattice parameters for the range 300 to 973 K, and the corresponding linear thermal expansion coefficients along the three principal crystallographic directions, are given. The results are discussed in terms of the bonding features of the solid. (U.K.)

  17. The Cu-MOF-199/single-walled carbon nanotubes modified electrode for simultaneous determination of hydroquinone and catechol with extended linear ranges and lower detection limits

    International Nuclear Information System (INIS)

    Zhou, Jian; Li, Xi; Yang, Linlin; Yan, Songlin; Wang, Mengmeng; Cheng, Dan; Chen, Qi; Dong, Yulin; Liu, Peng; Cai, Weiquan; Zhang, Chaocan

    2015-01-01

    A novel electrochemical sensor based on Cu-MOF-199 [Cu-MOF-199 = Cu₃(BTC)₂ (BTC = 1,3,5-benzenetricarboxylic acid)] and SWCNTs (single-walled carbon nanotubes) was fabricated for the simultaneous determination of hydroquinone (HQ) and catechol (CT). The modification procedure was carried out by casting SWCNTs on the bare glassy carbon electrode (GCE), followed by the electrodeposition of Cu-MOF-199 on the SWCNT-modified electrode. Cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS) and scanning electron microscopy (SEM) were performed to characterize the electrochemical performance and surface characteristics of the as-prepared sensor. The composite electrode exhibited excellent electrocatalytic activity with increased electrochemical signals towards the oxidation of HQ and CT, owing to the synergistic effect of the SWCNTs and Cu-MOF-199. Under optimized conditions, the linear response ranges were 0.1 to 1453 μmol L⁻¹ (R_HQ = 0.9999) for HQ and 0.1 to 1150 μmol L⁻¹ (R_CT = 0.9990) for CT. The detection limits for HQ and CT were as low as 0.08 and 0.1 μmol L⁻¹, respectively. Moreover, the modified electrode presented good reproducibility and excellent anti-interference performance. The analytical performance of the developed sensor for the simultaneous detection of HQ and CT was evaluated in practical samples with satisfying results. - Highlights: • Cu-MOF-199/SWCNTs/GCE was facilely fabricated by electrodeposition on SWCNTs/GCE. • An electrochemical sensor for detecting HQ and CT was constructed based on this modified electrode. • The proposed electrochemical sensor showed an extended linear range and lower detection limits. • The proposed electrochemical sensor had excellent stability and reproducibility.

  18. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    Science.gov (United States)

    Whitlock, C. H., III

    1977-01-01

    Constituents with linear radiance gradients with concentration may be quantified from signals which contain nonlinear atmospheric and surface reflection effects for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
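
    A toy illustration of the band-combination selection analyzed in the thesis (simulated radiances and an invented linear relation, not the author's data): every combination of bands is fitted by least squares, and each candidate is judged on a squared-error score together with a bias indicator, as the abstract recommends.

        import itertools
        import numpy as np

        rng = np.random.default_rng(2)
        n_samples, n_bands = 40, 4
        radiance = rng.normal(size=(n_samples, n_bands))            # simulated band radiances
        concentration = (2.0 * radiance[:, 0] - 1.0 * radiance[:, 2]
                         + rng.normal(0, 0.3, n_samples))           # invented ground truth

        best = None
        for k in range(1, n_bands + 1):
            for bands in itertools.combinations(range(n_bands), k):
                X = np.column_stack([np.ones(n_samples), radiance[:, list(bands)]])
                coef, *_ = np.linalg.lstsq(X, concentration, rcond=None)
                resid = concentration - X @ coef
                sse, bias = float(resid @ resid), float(resid.mean())
                score = sse / (n_samples - X.shape[1])               # penalize extra bands slightly
                if best is None or score < best[0]:
                    best = (score, sse, bias, bands)

        print("selected bands:", best[3])
        print("squared-error score:", round(best[0], 4), "mean residual (bias):", round(best[2], 4))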

  19. Pixel multiplexing technique for real-time three-dimensional-imaging laser detection and ranging system using four linear-mode avalanche photodiodes

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Fan; Wang, Yuanqing, E-mail: yqwang@nju.edu.cn; Li, Fenfang [School of Electronic Science and Engineering, Nanjing University, Nanjing 210046 (China)

    2016-03-15

    The avalanche-photodiode-array (APD-array) laser detection and ranging (LADAR) system has been continually developed owing to its advantages of non-scanning operation, large field of view, high sensitivity, and high precision. However, achieving more efficient detection and better integration of the LADAR system for real-time three-dimensional (3D) imaging remains a problem. In this study, a novel LADAR system using four linear-mode APDs (LmAPDs) is developed for highly efficient detection by adopting a modulation and multiplexing technique. Furthermore, an automatic control system for the array LADAR system is proposed and designed by applying the virtual instrumentation technique. The control system aims to achieve four functions: synchronization of laser emission and the rotating platform, multi-channel synchronous data acquisition, real-time Ethernet upper monitoring, and real-time signal processing and 3D visualization. The structure and principle of the complete system are described in the paper. The experimental results demonstrate that the LADAR system is capable of achieving real-time 3D imaging on an omnidirectional rotating platform under the control of the virtual instrumentation system. The automatic imaging LADAR system utilized only 4 LmAPDs to achieve 256-pixel-per-frame detection by employing a 64-bit demodulator. Moreover, the lateral resolution is ∼15 cm and the range accuracy is ∼4 cm root-mean-square error at a distance of ∼40 m.

  20. Quantification of the validity of simulations based on Geant4 and FLUKA for photo-nuclear interactions in the high energy range

    Science.gov (United States)

    Quintieri, Lina; Pia, Maria Grazia; Augelli, Mauro; Saracco, Paolo; Capogni, Marco; Guarnieri, Guido

    2017-09-01

    Photo-nuclear interactions are relevant in many research fields of both fundamental and applied physics and, for this reason, accurate Monte Carlo simulations of photo-nuclear interactions can provide valuable and indispensable support in a wide range of applications (e.g., from the optimisation of photo-neutron source targets to dosimetric estimation at high-energy accelerators). Unfortunately, few experimental photo-nuclear data are available above 100 MeV, so that, in the high energy range (from hundreds of MeV up to the GeV scale), the code predictions are based on physical models. The aim of this work is to compare the predictions of relevant observables involving photon-nuclear interaction modelling, obtained with GEANT4 and FLUKA, to experimental data (if available), in order to assess the reliability of the code estimates over a wide energy range. In particular, the comparison of the estimated photo-neutron yields and energy spectra with the experimental results of the n@BTF experiment (carried out at the Beam Test Facility of the DaΦne collider in Frascati, Italy) is reported and discussed here. Moreover, preliminary results of the comparison of the cross sections used in the codes with the "evaluated" data recommended by the IAEA are also presented for some selected cases (W, Pb, Zn).

  1. Quantification of the sources of long-range transport of PM2.5 pollution in the Ordos region, Inner Mongolia, China

    International Nuclear Information System (INIS)

    Khuzestani, Reza Bashiri; Schauer, James J.; Wei, Yongjie; Zhang, Lulu; Cai, Tianqi; Zhang, Yang; Zhang, Yuanxun

    2017-01-01

    The Ordos region of Inner Mongolia is rapidly developing and suffers from poor air quality and unhealthy levels of fine particulate matter. PM2.5 concentrations in the Ordos region were found to exceed 75 μg/m³ on average, annually, with peak pollution days in excess of 350 μg/m³, but local air pollution emissions from surrounding sources are not sufficient to drive pollution levels to these concentrations. The current study was designed to quantify sources of PM2.5 and assess the local source contributions and effects of regional transport on local pollution. The results show that the Ordos region is primarily impacted by regional long-range transport of pollutants from anthropogenic sources located outside of Inner Mongolia in Shanxi province areas but is also largely affected by regional dust transported from the deserts located in western Inner Mongolia. The analysis showed that approximately 77% of PM2.5 mass is transported long-range from sites exterior to the study area and contributes 59.32 μg/m³ on average, annually, while the local sources contribute 17.41 μg/m³ (23%) on annual average to the PM2.5 mass in the study area. High spatial correlation coefficients (R² > 0.6) were observed for most of the factors, pointing to the transport of external emissions into the area. Spatial correlation analysis, bivariate polar plots and hybrid trajectory models for industrial and secondary inorganic factors provide evidence for the impact of long-range transport from Shanxi province areas. In addition, the deserts in western Inner Mongolia were found to be the source regions for dust. Finally, our analysis shows that the oil combustion and mobile factors are impacted by local sources in the Ordos region; however, some regional impacts from other regions were also observed for the mobile source in the area. - Dominance of the regional long-range transport of PM2.5 sources in the Ordos region. Around 77% of PM2.5 mass is transported

  2. Improving PET Quantification of Small Animal [68Ga]DOTA-Labeled PET/CT Studies by Using a CT-Based Positron Range Correction.

    Science.gov (United States)

    Cal-Gonzalez, Jacobo; Vaquero, Juan José; Herraiz, Joaquín L; Pérez-Liva, Mailyn; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Udías, José Manuel

    2018-01-19

    Image quality of positron emission tomography (PET) tracers that emit high-energy positrons, such as Ga-68, Rb-82, or I-124, is significantly affected by positron range (PR) effects. PR effects are especially important in small animal PET studies, since they can limit spatial resolution and quantitative accuracy of the images. Since generator accessibility has made Ga-68 tracers widely available, the aim of this study is to show how the quantitative results of [68Ga]DOTA-labeled PET/X-ray computed tomography (CT) imaging of neuroendocrine tumors in mice can be improved using positron range correction (PRC). Eighteen scans in 12 mice were evaluated, with three different models of tumors: PC12, AR42J, and meningiomas. In addition, three different [68Ga]DOTA-labeled radiotracers were used to evaluate the PRC with different tracer distributions: [68Ga]DOTANOC, [68Ga]DOTATOC, and [68Ga]DOTATATE. Two PRC methods were evaluated: a tissue-dependent (TD-PRC) and a tissue-dependent spatially-variant correction (TDSV-PRC). Taking a region in the liver as reference, the tissue-to-liver ratio values for tumor tissue (TLR_tumor), lung (TLR_lung), and necrotic areas within the tumors (TLR_necrotic) and their respective relative variations (ΔTLR) were evaluated. All TLR values in the PRC images were significantly different (p < 0.05) from those in the uncorrected images. The use of PRC thus improved the quantification of [68Ga]DOTA-labeled PET/CT imaging of mice with neuroendocrine tumors, hence demonstrating that these techniques could also ameliorate the deleterious effect of the positron range in clinical PET imaging.

  3. Modelling of plasma-antenna coupling and non-linear radio frequency wave-plasma-wall interactions in the magnetized plasma device under ion cyclotron range of frequencies

    International Nuclear Information System (INIS)

    Lu, LingFeng

    2016-01-01

    Ion Cyclotron Resonant Heating (ICRH) by waves in 30-80 MHz range is currently used in magnetic fusion plasmas. Excited by phased arrays of current straps at the plasma periphery, these waves exist under two polarizations. The Fast Wave tunnels through the tenuous plasma edge and propagates to its center where it is absorbed. The parasitically emitted Slow Wave only exists close to the launchers. How much power can be coupled to the center with 1 A current on the straps? How do the emitted radiofrequency (RF) near and far fields interact parasitically with the edge plasma via RF sheath rectification at plasma-wall interfaces? To address these two issues simultaneously, in realistic geometry over the size of ICRH antennas, this thesis upgraded and tested the Self-consistent Sheaths and Waves for ICH (SSWICH) code. SSWICH couples self-consistently RF wave propagation and Direct Current (DC) plasma biasing via non-linear RF and DC sheath boundary conditions (SBCs) at plasma/wall interfaces. Its upgrade is full wave and was implemented in two dimensions (toroidal/radial). New SBCs coupling the two polarizations were derived and implemented along shaped walls tilted with respect to the confinement magnetic field. Using this new tool in the absence of SBCs, we studied the impact of a density decaying continuously inside the antenna box and across the Lower Hybrid (LH) resonance. Up to the memory limits of our workstation, the RF fields below the LH resonance changed with the grid size. However the coupled power spectrum hardly evolved and was only weakly affected by the density inside the box. In presence of SBCs, SSWICH-FW simulations have identified the role of the fast wave on RF sheath excitation and reproduced some key experimental observations. SSWICH-FW was finally adapted to conduct the first electromagnetic and RF-sheath 2D simulations of the cylindrical magnetized plasma device ALINE. (author) [fr

  4. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

    Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted upon activation of luminescent luciferase. The photons measured in this method indicate the degree of molecular alteration or the cell number, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method that yields a linear response of the measured light signal to measurement time. We detected the luminescence signal using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources. One was a set of three bacterial light-emitting sources containing different numbers of bacteria. The other was a set of three different non-bacterial sources emitting very weak light. Using the concepts of the candela and the luminous flux, we derived a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to the measurement time remained constant even when different light sources were applied. The quantification function for linear response could be applied to a standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging equipment, presenting a linear response of constant light-emitting sources to measurement time.
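
    A minimal sketch of the linearity check described above, with made-up numbers (the actual ALIS calibration and quantification formula are not reproduced): for a steady source, photon counts should grow linearly with measurement time, so counts divided by time should stay roughly constant.

        import numpy as np

        # Toy measurement: a steady light source observed for increasing exposure times.
        exposure_s = np.array([1, 2, 5, 10, 20, 40], dtype=float)
        true_rate = 1500.0                               # assumed photons per second
        rng = np.random.default_rng(1)
        counts = rng.poisson(true_rate * exposure_s)     # simulated measured photon counts

        # Linear response through the origin: counts ~ rate * time.
        rate_fit = np.sum(counts * exposure_s) / np.sum(exposure_s ** 2)
        rate_per_point = counts / exposure_s             # should be approximately constant

        print(f"fitted rate: {rate_fit:.1f} photons/s")
        print("per-point count/time ratios:", np.round(rate_per_point, 1))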

  5. Performance of the Real-Q EBV Quantification Kit for Epstein-Barr Virus DNA Quantification in Whole Blood.

    Science.gov (United States)

    Huh, Hee Jae; Park, Jong Eun; Kim, Ji Youn; Yun, Sun Ae; Lee, Myoung Keun; Lee, Nam Yong; Kim, Jong Won; Ki, Chang Seok

    2017-03-01

    There has been increasing interest in standardized and quantitative Epstein-Barr virus (EBV) DNA testing for the management of EBV disease. We evaluated the performance of the Real-Q EBV Quantification Kit (BioSewoom, Korea) in whole blood (WB). Nucleic acid extraction and real-time PCR were performed by using the MagNA Pure 96 (Roche Diagnostics, Germany) and 7500 Fast real-time PCR system (Applied Biosystems, USA), respectively. Assay sensitivity, linearity, and conversion factor were determined by using the World Health Organization international standard diluted in EBV-negative WB. We used 81 WB clinical specimens to compare performance of the Real-Q EBV Quantification Kit and artus EBV RG PCR Kit (Qiagen, Germany). The limit of detection (LOD) and limit of quantification (LOQ) for the Real-Q kit were 453 and 750 IU/mL, respectively. The conversion factor from EBV genomic copies to IU was 0.62. The linear range of the assay was from 750 to 10⁶ IU/mL. Viral load values measured with the Real-Q assay were on average 0.54 log₁₀ copies/mL higher than those measured with the artus assay. The Real-Q assay offered good analytical performance for EBV DNA quantification in WB.
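
    A small illustration of the unit conversion and log-scale comparison reported above; the 0.62 copies-to-IU factor is the one quoted for this kit, while the example viral loads are invented.

        import math

        COPIES_TO_IU = 0.62              # conversion factor reported for the Real-Q assay

        def copies_to_iu_per_ml(copies_per_ml: float) -> float:
            """Convert an EBV load from genomic copies/mL to IU/mL."""
            return copies_per_ml * COPIES_TO_IU

        def log10_difference(load_a: float, load_b: float) -> float:
            """Difference between two viral loads on the log10 scale (same units)."""
            return math.log10(load_a) - math.log10(load_b)

        # Hypothetical example: the same specimen measured by the two assays.
        real_q_copies_per_ml = 5.0e4
        artus_copies_per_ml = 1.4e4
        print(f"Real-Q result: {copies_to_iu_per_ml(real_q_copies_per_ml):.0f} IU/mL")
        print(f"log10 difference: {log10_difference(real_q_copies_per_ml, artus_copies_per_ml):.2f}")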

  6. Isotope-dilution TurboFlow-LC-MS/MS method for simultaneous quantification of ten steroid metabolites in serum

    DEFF Research Database (Denmark)

    Søeborg, Tue; Frederiksen, Hanne; Johannsen, Trine Holm

    2017-01-01

    ), and estrone 3-sulfate (E1-S) in serum was developed and validated. Limits of quantification, variability (inter- and intra-day), analytical range and linearity were all found to be acceptable for clinical use. Furthermore, sample stability was evaluated including the influence of freeze-thaw cycles...

  7. Linear and mass attenuation coefficient for CdTe compound of X-rays from 10 to 100 keV energy range in different phases

    Energy Technology Data Exchange (ETDEWEB)

    Saim, A., E-mail: saim1989asma@gmail.com; Tebboune, A.; Berkok, H.; Belameiri, N.; Belbachir, A.H.

    2014-07-25

    The Full Potential Linear Muffin Tin Orbitals method within density functional theory has been used to calculate the structural and electronic properties of the CdTe compound. We have verified that CdTe has two phase transitions, from zinc-blende to cinnabar and from cinnabar to rocksalt. We have found that the rigidity, the energy and the nature of the gap change according to the phase change, so we can predict that a CdTe detector may behave differently under different phase conditions. In order to investigate this change of behavior, the linear and mass attenuation coefficients for X-rays in the rocksalt, zinc-blende and cinnabar structures were calculated from 10 keV to 100 keV, using the XCOM data. We have found that when CdTe undergoes a phase transition from zinc-blende to cinnabar, its linear attenuation coefficient decreases to about one-hundredth of its initial value, and when it undergoes a transition from cinnabar to rocksalt it increases to about 90 times its initial value.

  8. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although ever since the inception of quantum mechanics a century ago, superposition has occupied a central and pivotal place, rigorous and systematic studies of the quantification issue have attracted significant interests only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by Science Challenge Project under Grant No. TZ2016002, Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, Grant under No. 2008DP173182

  9. Linear Colliders

    International Nuclear Information System (INIS)

    Alcaraz, J.

    2001-01-01

    After several years of study, e⁺e⁻ linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide the first discoveries in the new energy domain, whereas an e⁺e⁻ linear collider in the 500 GeV-1 TeV range will be able to complement it with an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs

  10. Multiaxial fatigue criteria for AISI 304 and 2-1/4 Cr-1 Mo steel at 538°C with applications to strain-range partitioning and linear summation of creep and fatigue damage

    International Nuclear Information System (INIS)

    Blass, J.J.

    1982-01-01

    An improved multiaxial fatigue failure criterion was developed based on the results of combined axial-torsional strain cycling tests of AISI 304 and 2-1/4 Cr-1 Mo steel conducted at 538°C (1000°F). The formulation of this criterion involves the shear and normal components of inelastic strain range on the planes of maximum inelastic shear strain range. Optimum values of certain parameters contained in the formulation were obtained for each material by the method of least squares. The ability of this criterion to correlate the test results was compared with that of the usual (Mises) equivalent inelastic strain range criterion. An improved definition of equivalent inelastic strain range resulting from these considerations was used to generalize the theory of Strain Range Partitioning to multiaxial stress-strain conditions and was also applied to the linear summation of creep and fatigue damage

  11. Electrophoresis Gel Quantification with a Flatbed Scanner and Versatile Lighting from a Screen Scavenged from a Liquid Crystal Display (LCD) Monitor

    Science.gov (United States)

    Yeung, Brendan; Ng, Tuck Wah; Tan, Han Yen; Liew, Oi Wah

    2012-01-01

    The use of different types of stains in the quantification of proteins separated on gels using electrophoresis offers the capability of deriving good outcomes in terms of linear dynamic range, sensitivity, and compatibility with specific proteins. An inexpensive, simple, and versatile lighting system based on liquid crystal display backlighting is…

  12. A linear programming manual

    Science.gov (United States)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
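
    For readers who want to reproduce the kind of computation the manual covers, a minimal worked example (not taken from the manual; the objective and constraints are arbitrary) solving a small linear program with SciPy:

        from scipy.optimize import linprog

        # Maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0, y >= 0.
        # linprog minimizes, so the objective is negated.
        res = linprog(c=[-3, -2],
                      A_ub=[[1, 1], [1, 3]],
                      b_ub=[4, 6],
                      bounds=[(0, None), (0, None)],
                      method="highs")

        print("optimal (x, y):", res.x)        # expected: [4, 0]
        print("optimal objective:", -res.fun)  # expected: 12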

  13. [Comparison of film-screen combinations with contrast detail diagram and interactive image analysis. 2: Linear assessment of grey scale ranges with interactive image analysis].

    Science.gov (United States)

    Stamm, G; Eichbaum, G; Hagemann, G

    1997-09-01

    The following three screen-film combinations were compared: a) a combination of anticrossover film and UV-light emitting screens, b) a combination of blue-light emitting screens and film, and c) a conventional green fluorescing screen-film combination. Radiographs of a specially designed plexiglass phantom (0.2 × 0.2 × 0.12 m³) with bar patterns of lead and plaster and of air, respectively, were obtained using the following parameters: 12-pulse generator, 0.6 mm focus size, 4.7 mm aluminum pre-filter, a grid with 40 lines/cm (12:1) and a focus-detector distance of 1.15 m. Image analysis was performed using an IBAS system and a Zeiss Kontron computer. Display conditions were the following: display distance 0.12 m, a vario film objective 35/70 (Zeiss), a video camera tube with a PbO photocathode, 625 lines (Siemens Heimann), an IBAS image matrix of 512 × 512 pixels with a resolution of 7 lines/mm; the projected matrix area was 5000 μm². Grey scale ranges were measured on a line perpendicular to the grouped bar patterns. The difference between the maximum and minimum density value served as the signal. The spatial resolution of the detector system was measured when the signal value was three times higher than the standard deviation of the means of multiple density measurements. The results showed considerable advantages of the two new screen-film combinations as compared to the conventional screen-film combination. This result contradicted the findings of the purely visual assessment of thresholds (part I), which had found no differences. The authors concluded that (automatic) interactive image analysis algorithms serve as an objective measure and are specifically advantageous when small differences in image quality are to be evaluated.

  14. The Zugspitze radiative closure experiment for quantifying water vapor absorption over the terrestrial and solar infrared – Part 3: Quantification of the mid- and near-infrared water vapor continuum in the 2500 to 7800 cm⁻¹ spectral range under atmospheric conditions

    Directory of Open Access Journals (Sweden)

    A. Reichert

    2016-09-01

    Full Text Available We present a first quantification of the near-infrared (NIR) water vapor continuum absorption from an atmospheric radiative closure experiment carried out at the Zugspitze (47.42° N, 10.98° E; 2964 m a.s.l.). Continuum quantification is achieved via radiative closure using radiometrically calibrated solar Fourier transform infrared (FTIR) absorption spectra covering the 2500 to 7800 cm⁻¹ spectral range. The dry atmospheric conditions at the Zugspitze site (IWV 1.4 to 3.3 mm) enable continuum quantification even within water vapor absorption bands, while upper limits for continuum absorption can be provided in the centers of window regions. Throughout 75% of the 2500 to 7800 cm⁻¹ spectral range, the Zugspitze results agree within our estimated uncertainty with the widely used MT_CKD 2.5.2 model (Mlawer et al., 2012). In the wings of water vapor absorption bands, our measurements indicate about 2-5 times stronger continuum absorption than MT_CKD, namely in the 2800 to 3000 cm⁻¹ and 4100 to 4200 cm⁻¹ spectral ranges. The measurements are consistent with the laboratory measurements of Mondelain et al. (2015), which rely on cavity ring-down spectroscopy (CRDS), and the calorimetric-interferometric measurements of Bicknell et al. (2006). Compared to the recent FTIR laboratory studies of Ptashnik et al. (2012, 2013), our measurements are consistent within the estimated errors throughout most of the spectral range. However, in the wings of water vapor absorption bands our measurements indicate typically 2-3 times weaker continuum absorption under atmospheric conditions, namely in the 3200 to 3400, 4050 to 4200, and 6950 to 7050 cm⁻¹ spectral regions.

  15. Linear algebra

    CERN Document Server

    Shilov, Georgi E

    1977-01-01

    Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.

  16. Application of Nessler reagent in the ammonium quantification used in the fermentations of biotechnology products

    Directory of Open Access Journals (Sweden)

    Dinorah Torres-Idavoy

    2015-08-01

    Full Text Available Ammonium salts are used in fermentations to supplement deficient amounts of nitrogen and to stabilize the pH of the culture medium. Excess ammonium ion exerts a detrimental effect on the fermentation process by inhibiting microbial growth. An analytical method based on the Nessler reagent was developed for monitoring and controlling the ammonium concentration during the fermentation process. The assay was standardized by selecting the measuring equipment and the reaction time, and by comparing standards of different ammonium salts. The method was characterized by evaluating the following parameters: specificity, linearity and range, quantification limit, accuracy and precision. The method proved to be specific. Two linear curves were defined over the concentration ranges of the ammonium chloride salt (2-20 μg/ml) and the ammonium sulfate salt (5-30 μg/ml). The limits of quantification were the lowest points of each curve. The method proved to be accurate and precise. The assay was applied to culture samples of yeast of the genus Saccharomyces and of the bacterium E. coli, respectively. A novel microplate method for the quantification and analytical control of ammonium was thus developed. This method is used to control this key chemical component in fermentations and to optimize the culture medium, so that appropriate expression of recombinant proteins and proper vaccine candidates for clinical use are achieved.

  17. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...

  18. 10 ps resolution, 160 ns full scale range and less than 1.5% differential non-linearity time-to-digital converter module for high performance timing measurements

    Energy Technology Data Exchange (ETDEWEB)

    Markovic, B.; Tamborini, D.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F. [Politecnico di Milano, Dipartimento di Elettronica e Informazione, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2012-07-15

    We present a compact high performance time-to-digital converter (TDC) module that provides 10 ps timing resolution, 160 ns dynamic range and a differential non-linearity better than 1.5% LSB rms. The TDC can be operated either as a general-purpose time-interval measurement device, when receiving external START and STOP pulses, or in photon-timing mode, when employing the on-chip SPAD (single photon avalanche diode) detector for detecting photons and time-tagging them. The instrument precision is 15 ps rms (i.e., 36 ps FWHM) and in photon-timing mode it is still better than 70 ps FWHM. The USB link to the remote PC allows the easy setting of measurement parameters, the fast download of acquired data, and their visualization and storing via a user-friendly software interface. The module proves to be the best candidate for a wide variety of applications such as: fluorescence lifetime imaging, time-of-flight ranging measurements, time-resolved positron emission tomography, single-molecule spectroscopy, fluorescence correlation spectroscopy, diffuse optical tomography, optical time-domain reflectometry, quantum optics, etc.

  19. Strategy for the maximization of clinically relevant information from hepatitis C virus, RT-PCR quantification.

    LENUS (Irish Health Repository)

    Levis, J

    2012-02-03

    BACKGROUND: The increasing clinical application of viral load assays for monitoring viral infections has been an incentive for the development of standardized tests for the hepatitis C virus. OBJECTIVE: To develop a simple model for the prediction of baseline viral load in individuals infected with the hepatitis C virus. METHODOLOGY: Viral load quantification of each patient's first sample was assessed by RT-PCR-ELISA using the Roche MONITOR assay in triplicate. Genotype of the infecting virus was identified by reverse line probe hybridization, using amplicons resulting from the qualitative HCV Roche AMPLICOR assay. RESULTS: Retrospective evaluation of first quantitative values suggested that 82.4% (n=168/204) of individuals had a viral load between 4.3 and 6.7 log₁₀ viral copies per ml. A few patients (3.4%; n=7/204) had a serum viremia less than the lower limit of the linear range of the RT-PCR assay. Subsequent prospective evaluation of the hepatitis C viral load of all new patients using a model based on the dynamic range of viral load in the retrospective group correctly predicted the dynamic range in 75.9% (n=33/54). CONCLUSION: The dynamic range of hepatitis C viremia extends beyond the linear range of the Roche MONITOR assay. Accurate determination of serum viremia is substantially improved by dilution of specimens prior to quantification.
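
    A sketch of the dilution logic implied by the conclusion above; the 4.3-6.7 log10 window is the baseline range quoted in the abstract and is used here only as a stand-in for the quantifiable range, and the helper name and step size are illustrative.

        import math

        # Baseline window of log10 viral load (copies/mL) quoted in the abstract;
        # loads predicted to fall above it are diluted before quantification.
        LOWER_LOG10, UPPER_LOG10 = 4.3, 6.7

        def suggest_dilution(predicted_log10_load: float, step: int = 10) -> int:
            """Return a dilution factor (1, 10, 100, ...) that brings the predicted
            load below the upper limit; 1 means the specimen is tested neat."""
            dilution = 1
            load = predicted_log10_load
            while load > UPPER_LOG10:
                dilution *= step
                load -= math.log10(step)   # an n-fold dilution lowers the log10 load by log10(n)
            return dilution

        print(suggest_dilution(7.5))   # -> 10
        print(suggest_dilution(5.2))   # -> 1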

  20. Simultaneous quantification of flavonoids and triterpenoids in licorice using HPLC.

    Science.gov (United States)

    Wang, Yuan-Chuen; Yang, Yi-Shan

    2007-05-01

    Numerous bioactive compounds are present in licorice (Glycyrrhizae Radix), including flavonoids and triterpenoids. In this study, a reversed-phase high-performance liquid chromatography (HPLC) method for simultaneous quantification of three flavonoids (liquiritin, liquiritigenin and isoliquiritigenin) and four triterpenoids (glycyrrhizin, 18α-glycyrrhetinic acid, 18β-glycyrrhetinic acid and 18β-glycyrrhetinic acid methyl ester) from licorice was developed and further used to quantify these seven compounds in 20 different licorice samples. Specifically, the reversed-phase HPLC was performed with a gradient mobile phase composed of 25 mM phosphate buffer (pH 2.5)-acetonitrile featuring gradient elution steps as follows: 0 min, 100:0; 10 min, 80:20; 50 min, 70:30; 73 min, 50:50; 110 min, 50:50; 125 min, 20:80; 140 min, 20:80, and peaks were detected at 254 nm. Using this technique, rather good specificity was obtained with regard to the separation of these seven compounds. The regression coefficients of the linear equations for the seven compounds lay between 0.9978 and 0.9992. The limits of detection and quantification lay in the ranges of 0.044-0.084 and 0.13-0.25 μg/ml, respectively. The relative recovery rates for the seven compounds lay between 96.63 ± 2.43% and 103.55 ± 2.77%. Coefficients of variation for intra-day and inter-day precision lay in the ranges of 0.20-1.84% and 0.28-1.86%, respectively. Based upon our validation results, this analytical technique is a convenient method to simultaneously quantify numerous bioactive compounds derived from licorice, featuring good quantification parameters, accuracy and precision.
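
    A generic sketch of the external-calibration arithmetic behind validation figures like these (the standard concentrations, responses and blank noise are invented, and the 3.3σ/S and 10σ/S conventions for LOD and LOQ are a common choice that may differ from the authors' procedure):

        import numpy as np

        # Hypothetical calibration standards for one analyte: concentration (ug/mL)
        # versus HPLC peak area, plus the standard deviation of blank-level responses.
        conc = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
        area = np.array([12.4, 25.1, 50.3, 99.8, 251.0, 498.5])
        blank_sd = 0.9

        slope, intercept = np.polyfit(conc, area, 1)   # linear calibration curve
        r = np.corrcoef(conc, area)[0, 1]

        lod = 3.3 * blank_sd / slope                   # limit of detection
        loq = 10.0 * blank_sd / slope                  # limit of quantification

        sample_area = 137.2                            # response of an unknown sample
        sample_conc = (sample_area - intercept) / slope

        print(f"r^2 = {r**2:.4f}, LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
        print(f"sample concentration: {sample_conc:.2f} ug/mL")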

  1. Quantification of taurine in energy drinks using ¹H NMR.

    Science.gov (United States)

    Hohmann, Monika; Felbinger, Christine; Christoph, Norbert; Wachter, Helmut; Wiest, Johannes; Holzgrabe, Ulrike

    2014-05-01

    The consumption of so-called energy drinks is increasing, especially among adolescents. These beverages commonly contain considerable amounts of the amino sulfonic acid taurine, which is associated with a multitude of physiological effects. The customary method to control the legal limit of taurine in energy drinks is LC-UV/vis with postcolumn derivatization using ninhydrin. In this paper we describe the quantification of taurine in energy drinks by ¹H NMR as an alternative to existing methods of quantification. Variation of the pH value revealed a well-separated taurine signal in the ¹H NMR spectra, which was used for integration and quantification. Quantification was performed using external calibration (R² > 0.9999; linearity verified by Mandel's fitting test with a 95% confidence level) and PULCON. Taurine concentrations in 20 different energy drinks were analyzed both by ¹H NMR and LC-UV/vis. The deviation between the ¹H NMR and LC-UV/vis results was always below the expanded measurement uncertainty of 12.2% for the LC-UV/vis method (95% confidence level) and at worst 10.4%. Due to the high accordance with the LC-UV/vis data and adequate recovery rates (ranging between 97.1% and 108.2%), ¹H NMR measurement presents a suitable method to quantify taurine in energy drinks. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Quantification in emission tomography

    International Nuclear Information System (INIS)

    Buvat, Irene

    2011-11-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) and positron emission tomography (PET) images, and to identify the conditions that must be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: quantification in emission tomography - definition and challenges; quantification-biasing phenomena; 2 - Main problems impacting quantification in PET and SPECT: problems, consequences, correction methods and results (attenuation, scattering, partial volume effect, movement, non-stationary spatial resolution in SPECT, random coincidences in PET, standardisation in PET); 3 - Synthesis: accessible efficiency, know-how, precautions, beyond the activity measurement.

  3. Linear gate

    International Nuclear Information System (INIS)

    Suwono.

    1978-01-01

    A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)

  4. Linear Accelerators

    International Nuclear Information System (INIS)

    Vretenar, M

    2014-01-01

    The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristics of linac beam dynamics.

  5. Quantification of local mobilities

    DEFF Research Database (Denmark)

    Zhang, Y. B.

    2018-01-01

    A new method for quantification of mobilities of local recrystallization boundary segments is presented. The quantification is based on microstructures characterized using electron microscopy and on determination of migration velocities and driving forces for local boundary segments. Pure aluminium...... is investigated and the results show that even for a single recrystallization boundary, different boundary segments migrate differently, and the differences can be understood based on variations in mobilities and local deformed microstructures. The present work has important implications for understanding...

  6. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity using the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). By contrast, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, an initial value of the PRNG influences the resulting value of linear complexity; therefore, the linear complexity is generally given as an estimate. A linearization method, on the other hand, calculates from the algorithm of the PRNG itself, so it can determine the lower bound of the linear complexity.
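
    For contrast with the linearization method described above, the sequence-based route it avoids is the Berlekamp-Massey algorithm, which estimates linear complexity from an observed output stream. A minimal GF(2) sketch (not taken from the paper) follows:

```python
def berlekamp_massey(bits):
    """Linear complexity of a binary sequence via Berlekamp-Massey over GF(2)."""
    n = len(bits)
    c = [0] * n  # current connection polynomial C(x)
    b = [0] * n  # previous connection polynomial B(x)
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy d = s_i + sum_{j=1..L} c_j * s_{i-j}  (mod 2)
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:  # non-zero discrepancy: add x^(i-m) * B(x) to C(x)
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# A period-7 LFSR output stream has linear complexity 3
print(berlekamp_massey([1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]))  # -> 3
```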

  7. Analytic Morse/long-range potential energy surfaces and "adiabatic-hindered-rotor" treatment for a symmetric top-linear molecule dimer: A case study of CH3F-H2

    Science.gov (United States)

    Zhang, Xiao-Long; Ma, Yong-Tao; Zhai, Yu; Li, Hui

    2018-03-01

    A first effective six-dimensional ab initio potential energy surface (PES) for CH3F-H2 which explicitly includes the intramolecular Q3 stretching normal mode of the CH3F monomer is presented. The electronic structure computations have been carried out at the explicitly correlated coupled cluster level of theory [CCSD(T)-F12a] with an augmented correlation-consistent triple zeta basis set. Five-dimensional analytical intermolecular PESs for ν3(CH3F) = 0 and 1 are then obtained by fitting the vibrationally averaged potentials to the Morse/Long-Range (MLR) potential function form. The MLR function form is applied to the nonlinear molecule-linear molecule case for the first time. These fits to 25 015 points have root-mean-square deviations of 0.74 cm-1 and 0.082 cm-1 for interaction energies less than 0.0 cm-1. Using the adiabatic hindered-rotor approximation, three-dimensional PESs for CH3F-paraH2 are generated from the 5D PESs over all possible orientations of the hydrogen monomer. The infrared and microwave spectra for CH3F-paraH2 dimer are predicted for the first time. These analytic PESs can be used for modeling the dynamical behavior in CH3F-(H2)N clusters, including the possible appearance of microscopic superfluidity.

  8. Metal Stable Isotope Tagging: Renaissance of Radioimmunoassay for Multiplex and Absolute Quantification of Biomolecules.

    Science.gov (United States)

    Liu, Rui; Zhang, Shixi; Wei, Chao; Xing, Zhi; Zhang, Sichun; Zhang, Xinrong

    2016-05-17

    The unambiguous quantification of biomolecules is of great significance in fundamental biological research as well as in practical clinical diagnosis. Due to the lack of a detectable moiety, the direct and highly sensitive quantification of biomolecules is often a "mission impossible". Consequently, tagging strategies to introduce detectable moieties for labeling target biomolecules were invented, which have had a long and significant impact on studies of biomolecules over the past decades. For instance, immunoassays were developed with radioisotope tagging by Yalow and Berson in the late 1950s. The later decline of this technology can be almost exclusively ascribed to the use of radioactive isotopes, which led to the development of nonradioactive tagging strategy-based assays such as the enzyme-linked immunosorbent assay, fluorescent immunoassay, and chemiluminescent and electrochemiluminescent immunoassays. Despite great success, these strategies suffer from drawbacks such as limited spectral window capacity for multiplex detection and an inability to provide absolute quantification of biomolecules. After recalling the sequence of tagging strategies, an apparent question is: why not use stable isotopes from the start? A reasonable explanation is the lack of reliable means for accurate and precise quantification of stable isotopes at that time. The situation has changed greatly at present, since several atomic mass spectrometric measures for metal stable isotopes have been developed. Among the newly developed techniques, inductively coupled plasma mass spectrometry is an ideal technique for determining metal stable isotope-tagged biomolecules, owing to its high sensitivity, wide linear dynamic range and, more importantly, its multiplex and absolute quantification ability. Since the first published report by our group, metal stable isotope tagging has become a revolutionary technique and gained great success in biomolecule quantification. An exciting research highlight in this area

  9. Linear algebra

    CERN Document Server

    Said-Houari, Belkacem

    2017-01-01

    This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...

  10. Quantification of catechin in the spray-dried extract of Pimenta pseudocaryophyllus

    Directory of Open Access Journals (Sweden)

    Leonardo C. Silva

    Full Text Available Abstract This paper describes the quantification of catechin in the spray-dried extract of Pimenta pseudocaryophyllus (Gomes) Landrum, Myrtaceae, citral chemotype, using a validated HPLC-PDA method. The method employs an RP-18 column with acetonitrile:water-orthophosphoric acid 0.05% (gradient system) and UV detection at 210 nm. The method was demonstrated to be simple, sensitive, specific, linear, precise, accurate and robust. The response was linear over a range of 5-200 µg/ml (r > 0.999). The range of recoveries was 92.27-102.54%. The relative standard deviation values for intra- and inter-day precision studies were 4.30 and 3.78%, respectively. This assay can be readily utilized as a quality control method for catechin in the dried extract of P. pseudocaryophyllus.

  11. Noninvasive Quantification of Pancreatic Fat in Humans

    OpenAIRE

    Lingvay, Ildiko; Esser, Victoria; Legendre, Jaime L.; Price, Angela L.; Wertz, Kristen M.; Adams-Huet, Beverley; Zhang, Song; Unger, Roger H.; Szczepaniak, Lidia S.

    2009-01-01

    Objective: To validate magnetic resonance spectroscopy (MRS) as a tool for non-invasive quantification of pancreatic triglyceride (TG) content and to measure the pancreatic TG content in a diverse human population with a wide range of body mass index (BMI) and glucose control.

  12. Linear algebra

    CERN Document Server

    Stoll, R R

    1968-01-01

    Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand

  13. Linear programming

    CERN Document Server

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  14. Linear algebra

    CERN Document Server

    Liesen, Jörg

    2015-01-01

    This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...

  15. Linear algebra

    CERN Document Server

    Berberian, Sterling K

    2014-01-01

    Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.

  16. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  17. Quantification of water in hydrous ringwoodite

    Directory of Open Access Journals (Sweden)

    Sylvia-Monique eThomas

    2015-01-01

    Full Text Available Ringwoodite, γ-(Mg,Fe)2SiO4, in the lower 150 km of Earth's mantle transition zone (410-660 km depth) can incorporate up to 1.5-2 wt% H2O as hydroxyl defects. We present a mineral-specific IR calibration for the absolute water content in hydrous ringwoodite by combining results from Raman spectroscopy, secondary ion mass spectrometry (SIMS) and proton-proton (pp) scattering on a suite of synthetic Mg- and Fe-bearing hydrous ringwoodites. H2O concentrations in the crystals studied here range from 0.46 to 1.7 wt% H2O (absolute methods), with the maximum H2O in the same sample giving 2.5 wt% by SIMS calibration. Anchoring our spectroscopic results to absolute H-atom concentrations from pp-scattering measurements, we report frequency-dependent integrated IR-absorption coefficients for water in ringwoodite ranging from 78180 to 158880 L mol-1 cm-2, depending upon the frequency of the OH absorption. We further report a linear wavenumber IR calibration for H2O quantification in hydrous ringwoodite across the Mg2SiO4-Fe2SiO4 solid solution, which will lead to more accurate estimates of the water content in both laboratory-grown and naturally occurring ringwoodites. Re-evaluation of the IR spectrum for a natural hydrous ringwoodite inclusion in diamond from the study of Pearson et al. (2014) indicates the crystal contains 1.43 ± 0.27 wt% H2O, thus confirming near-maximum amounts of H2O for this sample from the transition zone.
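
    Such IR calibrations rest on the integrated Beer-Lambert relation; written out with the symbols commonly used for IR water quantification (not reproduced from the paper), the conversion from integrated absorbance to water content is

```latex
c_{\mathrm{H_2O}} = \frac{A_{\mathrm{int}}}{\varepsilon_{\mathrm{int}}\, t}
\qquad\text{and}\qquad
\mathrm{wt\%\ H_2O} = \frac{c_{\mathrm{H_2O}}\, M_{\mathrm{H_2O}}}{10\,\rho},
```

    where A_int is the integrated absorbance (cm-1), ε_int the integrated molar absorption coefficient (L mol-1 cm-2) as reported above, t the sample thickness (cm), M_H2O = 18.02 g/mol and ρ the mineral density (g/cm3).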

  18. LINEAR ACCELERATOR

    Science.gov (United States)

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces the gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of the magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter in the longitudinal center region.

  19. Fluorescent quantification of melanin.

    Science.gov (United States)

    Fernandes, Bruno; Matamá, Teresa; Guimarães, Diana; Gomes, Andreia; Cavaco-Paulo, Artur

    2016-11-01

    Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Therefore, fluorescence spectroscopy is the best method for melanin quantification as it proved to be highly specific and accurate, detecting even small variations in the synthesis of melanin. This method can also be applied to the quantification of melanin in more complex biological matrices like zebrafish embryos and human hair. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. The effect of long-range air mass transport pathways on PM10 and NO2 concentrations at urban and rural background sites in Ireland: Quantification using clustering techniques.

    Science.gov (United States)

    Donnelly, Aoife A; Broderick, Brian M; Misstear, Bruce D

    2015-01-01

    The specific aims of this paper are to: (i) quantify the effects of various long-range transport pathways on nitrogen dioxide (NO2) and particulate matter with diameter less than 10 μm (PM10) concentrations in Ireland and identify air mass movement corridors which may lead to incidences of poor air quality, for application in forecasting; (ii) compare the effects of such pathways at various sites; (iii) assess the pathways associated with a period of decreased air quality in Ireland. The origin of, and the regions traversed by, an air mass 96 h prior to reaching a receptor are modelled, and k-means clustering is applied to create air-mass groups. Significant differences in air pollution levels were found between air mass cluster types at urban and rural sites. It was found that easterly or recirculated air masses lead to higher NO2 and PM10 levels, with average NO2 levels varying between 124% and 239% of the seasonal mean and average PM10 levels varying between 103% and 199% of the seasonal mean at urban and rural sites. Easterly air masses are more frequent during winter months, leading to higher overall concentrations. The span in relative concentrations between air mass clusters is highest at the rural site, indicating that regional factors are controlling concentration levels. The methods used in this paper could be applied to assist in modelling and forecasting air quality based on long-range transport pathways and forecast meteorology, without the requirement for detailed emissions data over a large regional domain or the use of computationally demanding modelling techniques.
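
    The clustering step described above (k-means applied to 96-h back-trajectories) can be sketched as follows; the trajectory array, the choice of k and the pollutant series are hypothetical, and the paper's own pre-processing is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical back-trajectory data: one row per arrival day,
# columns are (lat, lon) pairs of the air parcel at hourly steps up to 96 h.
rng = np.random.default_rng(0)
trajectories = rng.normal(size=(365, 2 * 96))  # 365 daily trajectories

# Group trajectories into k air-mass clusters
k = 6
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(trajectories)

# Mean pollutant level per cluster (hypothetical daily PM10 series)
pm10 = rng.lognormal(mean=2.5, sigma=0.4, size=365)
for c in range(k):
    print(f"cluster {c}: mean PM10 = {pm10[labels == c].mean():.1f} ug/m3")
```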

  1. an expansion of the aboveground biomass quantification model for ...

    African Journals Online (AJOL)

    Research Note BECVOL 3: an expansion of the aboveground biomass quantification model for ... African Journal of Range and Forage Science ... encroachment and estimation of food to browser herbivore species, was proposed during 1989.

  2. Non-linearities in Holocene floodplain sediment storage

    Science.gov (United States)

    Notebaert, Bastiaan; Nils, Broothaerts; Jean-François, Berger; Gert, Verstraeten

    2013-04-01

    Floodplain sediment storage is an important part of the sediment cascade model, buffering sediment delivery between hillslopes and oceans, but it has hitherto not been fully quantified, in contrast to other global sediment budget components. Quantification and dating of floodplain sediment storage is demanding in terms of data and cost, limiting contemporary estimates for larger spatial units to simple linear extrapolations from a number of smaller catchments. In this paper we present non-linearities in both space and time for floodplain sediment budgets in three different catchments. Holocene floodplain sediments of the Dijle catchment in the Belgian loess region show a clear distinction between morphological stages: early Holocene peat accumulation, followed by mineral floodplain aggradation from the start of the agricultural period onwards. Contrary to previous assumptions, detailed dating of this morphological change at different locations shows an important non-linearity in the geomorphological changes of the floodplain, both between and within cross sections. A second example comes from the Pre-Alpine French Valdaine region, where non-linearities and complex system behaviour exist between (temporal) patterns of soil erosion and floodplain sediment deposition. In this region Holocene floodplain deposition is characterized by different cut-and-fill phases. The quantification of these different phases shows a complicated picture of increasing and decreasing floodplain sediment storage, which complicates the notion of steadily increasing sediment accumulation over time. Although fill stages may correspond with large quantities of deposited sediment, and traditionally calculated sedimentation rates for such stages are high, they do not necessarily correspond with a long-term net increase in floodplain deposition. A third example is based on the floodplain sediment storage in the Amblève catchment, located in the Belgian Ardennes uplands. Detailed floodplain sediment quantification for this catchment shows

  3. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  4. Linear algebra

    CERN Document Server

    Edwards, Harold M

    1995-01-01

    In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.

  5. Linearity in Process Languages

    DEFF Research Database (Denmark)

    Nygaard, Mikkel; Winskel, Glynn

    2002-01-01

    The meaning and mathematical consequences of linearity (managing without a presumed ability to copy) are studied for a path-based model of processes which is also a model of affine-linear logic. This connection yields an affine-linear language for processes, automatically respecting open-map bisimulation, in which a range of process operations can be expressed. An operational semantics is provided for the tensor fragment of the language. Different ways to make assemblies of processes lead to different choices of exponential, some of which respect bisimulation.

  6. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
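
    The abstract does not spell the linear update out; in its familiar minimum-variance (Kalman-type) form, which the quadratic update then generalizes, it can be written as

```latex
x_a = x_f + C_{xy}\,\bigl(C_{yy} + C_{\varepsilon}\bigr)^{-1}\,\bigl(\hat{y} - y(x_f)\bigr),
```

    where x_f is the forecast (prior) quantity, ŷ the measurement, y(·) the observation operator, C_xy and C_yy the corresponding (cross-)covariances and C_ε the measurement-error covariance; the symbols here are generic, not taken from the paper.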

  7. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.

    2014-01-01

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  8. Inverse problems and uncertainty quantification

    KAUST Repository

    Litvinenko, Alexander

    2013-12-18

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  9. Disease quantification in dermatology

    DEFF Research Database (Denmark)

    Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E

    2013-01-01

    Accurate documentation of disease severity is a prerequisite for clinical research and the practice of evidence-based medicine. The quantification of skin diseases such as psoriasis currently relies heavily on clinical scores. Although these clinical scoring methods are well established and very ...

  10. Uncertainty Quantification with Applications to Engineering Problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele

    in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor...... and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose......-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix....

  11. Powder X-ray diffraction method for the quantification of cocrystals in the crystallization mixture.

    Science.gov (United States)

    Padrela, Luis; de Azevedo, Edmundo Gomes; Velaga, Sitaram P

    2012-08-01

    The solid state purity of cocrystals critically affects their performance. Thus, it is important to accurately quantify the purity of cocrystals in the final crystallization product. The aim of this study was to develop a powder X-ray diffraction (PXRD) quantification method for investigating the purity of cocrystals. The method developed was employed to study the formation of indomethacin-saccharin (IND-SAC) cocrystals by mechanochemical methods. Pure IND-SAC cocrystals were geometrically mixed with 1:1 w/w mixture of indomethacin/saccharin in various proportions. An accurately measured amount (550 mg) of the mixture was used for the PXRD measurements. The most intense, non-overlapping, characteristic diffraction peak of IND-SAC was used to construct the calibration curve in the range 0-100% (w/w). This calibration model was validated and used to monitor the formation of IND-SAC cocrystals by liquid-assisted grinding (LAG). The IND-SAC cocrystal calibration curve showed excellent linearity (R(2) = 0.9996) over the entire concentration range, displaying limit of detection (LOD) and limit of quantification (LOQ) values of 1.23% (w/w) and 3.74% (w/w), respectively. Validation results showed excellent correlations between actual and predicted concentrations of IND-SAC cocrystals (R(2) = 0.9981). The accuracy and reliability of the PXRD quantification method depend on the methods of sample preparation and handling. The crystallinity of the IND-SAC cocrystals was higher when larger amounts of methanol were used in the LAG method. The PXRD quantification method is suitable and reliable for verifying the purity of cocrystals in the final crystallization product.

  12. Non-linear methods for the quantification of cyclic motion

    OpenAIRE

    Quintana Duque, Juan Carlos

    2016-01-01

    Traditional methods of human motion analysis assume that fluctuations in cycles (e.g. gait motion) and repetitions (e.g. tennis shots) arise solely from noise. However, the fluctuations may have enough information to describe the properties of motion. Recently, the fluctuations in motion have been analysed based on the concepts of variability and stability, but they are not used uniformly. On the one hand, these concepts are often mixed in the existing literature, while on the other hand, the...

  13. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. The machine learning algorithms were based on support vector machine classifiers with three different sets of features: (i) voxel intensities; (ii) principal components of image voxel intensities; (iii) striatal binding ratios from the putamen and caudate. The semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (i) minimum of age-matched controls; (ii) mean minus 1/1.5/2 standard deviations from age-matched controls; (iii) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); (iv) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88-0.92 and 0.95-0.97 for local and PPMI data, respectively. Classification
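
    As a hedged illustration of the machine-learning arm of this comparison (not the authors' code), a support vector machine trained on four striatal binding ratios and scored with stratified cross-validation might look like the following; the feature values and class sizes are invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical feature matrix: one row per scan, columns = SBRs for
# left/right putamen and left/right caudate; y = 0 (normal) / 1 (Parkinsonian)
rng = np.random.default_rng(1)
X_normal = rng.normal(loc=[2.5, 2.5, 3.0, 3.0], scale=0.4, size=(60, 4))
X_abnormal = rng.normal(loc=[1.2, 1.4, 2.2, 2.3], scale=0.4, size=(60, 4))
X = np.vstack([X_normal, X_abnormal])
y = np.array([0] * 60 + [1] * 60)

# Scale features, then fit an RBF-kernel SVM; score with stratified 10-fold CV
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```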

  14. A novel immunological assay for hepcidin quantification in human serum.

    Directory of Open Access Journals (Sweden)

    Vasiliki Koliaraki

    Full Text Available BACKGROUND: Hepcidin is a 25-amino-acid cysteine-rich iron-regulating peptide. Increased hepcidin concentrations lead to iron sequestration in macrophages, contributing to the pathogenesis of anaemia of chronic disease, whereas decreased hepcidin is observed in iron deficiency and primary iron overload diseases such as hereditary hemochromatosis. Hepcidin quantification in human blood or urine may provide further insights into the pathogenesis of disorders of iron homeostasis and might prove a valuable tool for clinicians for the differential diagnosis of anaemia. This study describes a specific and non-operator-demanding immunoassay for hepcidin quantification in human sera. METHODS AND FINDINGS: An ELISA assay was developed for measuring hepcidin serum concentration using a recombinant hepcidin25-His peptide and a polyclonal antibody against this peptide, which was able to identify native hepcidin. The ELISA assay had a detection range of 10-1500 microg/L and a detection limit of 5.4 microg/L. The intra- and inter-assay coefficients of variation ranged from 8-15% and 5-16%, respectively. Mean linearity and recovery were 101% and 107%, respectively. Mean hepcidin levels were significantly lower in 7 patients with juvenile hemochromatosis (12.8 microg/L) and 10 patients with iron deficiency anemia (15.7 microg/L), and higher in 7 patients with Hodgkin lymphoma (116.7 microg/L), compared to 32 age-matched healthy controls (42.7 microg/L). CONCLUSIONS: We describe a new simple ELISA assay for measuring hepcidin in human serum with sufficient accuracy and reproducibility.

  15. Comparative study of label and label-free techniques using shotgun proteomics for relative protein quantification.

    Science.gov (United States)

    Sjödin, Marcus O D; Wetterhall, Magnus; Kultima, Kim; Artemenko, Konstantin

    2013-06-01

    The analytical performance of three different strategies, iTRAQ (isobaric tag for relative and absolute quantification), dimethyl labeling (DML) and label free (LF) for relative protein quantification using shotgun proteomics have been evaluated. The methods have been explored using samples containing (i) Bovine proteins in known ratios and (ii) Bovine proteins in known ratios spiked into Escherichia coli. The latter case mimics the actual conditions in a typical biological sample with a few differentially expressed proteins and a bulk of proteins with unchanged ratios. Additionally, the evaluation was performed on both QStar and LTQ-FTICR mass spectrometers. LF LTQ-FTICR was found to have the highest proteome coverage while the highest accuracy based on the artificially regulated proteins was found for DML LTQ-FTICR (54%). A varying linearity (k: 0.55-1.16, r(2): 0.61-0.96) was shown for all methods within selected dynamic ranges. All methods were found to consistently underestimate Bovine protein ratios when matrix proteins were added. However, LF LTQ-FTICR was more tolerant toward a compression effect. A single peptide was demonstrated to be sufficient for a reliable quantification using iTRAQ. A ranking system utilizing several parameters important for quantitative proteomics demonstrated that the overall performance of the five different methods was; DML LTQ-FTICR>iTRAQ QStar>LF LTQ-FTICR>DML QStar>LF QStar. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Assessment of current mass spectrometric workflows for the quantification of low abundant proteins and phosphorylation sites

    Directory of Open Access Journals (Sweden)

    Manuel Bauer

    2015-12-01

    Full Text Available The data described here provide a systematic performance evaluation of popular data-dependent (DDA) and data-independent (DIA) mass spectrometric (MS) workflows currently used in quantitative proteomics. We assessed the limits of identification, quantification and detection for each method by analyzing a dilution series of 20 unmodified and 10 phosphorylated synthetic heavy-labeled reference peptides, respectively, covering six orders of magnitude in peptide concentration with and without a complex human cell digest background. We found that all methods performed very similarly in the absence of background proteins; however, when analyzing whole cell lysates, targeted methods were at least 5-10 times more sensitive than directed or DDA methods. In particular, higher-stage fragmentation (MS3) of the neutral loss peak using a linear ion trap increased the dynamic quantification range of some phosphopeptides up to 100-fold. We illustrate the power of this targeted MS3 approach for phosphopeptide monitoring by successfully quantifying 9 phosphorylation sites of the kinetochore and spindle assembly checkpoint component Mad1 over different cell cycle states from non-enriched pull-down samples. The data are associated with the research article 'Evaluation of data-dependent and data-independent mass spectrometric workflows for sensitive quantification of proteins and phosphorylation sites' (Bauer et al., 2014 [1]). The mass spectrometry and the analysis dataset have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository with the dataset identifier PXD000964.

  17. Quantification of cellular uptake of DNA nanostructures by qPCR.

    Science.gov (United States)

    Okholm, Anders Hauge; Nielsen, Jesper Sejrup; Vinther, Mathias; Sørensen, Rasmus Schøler; Schaffert, David; Kjems, Jørgen

    2014-05-15

    DNA nanostructures facilitating drug delivery are likely soon to be realized. In the past few decades, programmed self-assembly of DNA building blocks has successfully been employed to construct sophisticated nanoscale objects. By conjugating functionalities to DNA, other molecules such as peptides, proteins and polymers can be precisely positioned on DNA nanostructures. This exceptional ability to produce modular nanoscale devices with tunable and controlled behavior has initiated an interest in employing DNA nanostructures for drug delivery. However, to achieve this, the relationship between cellular interactions and the structural and functional features of the DNA delivery device must be thoroughly investigated. Here, we present a rapid and robust method for the precise quantification of the component materials of DNA origami structures capable of entering cells in vitro. The quantification is performed by quantitative polymerase chain reaction, allowing a linear dynamic range of detection of five orders of magnitude. We demonstrate the use of this method for high-throughput screening, which could prove efficient in identifying key features of DNA nanostructures enabling cell penetration. The method described here is suitable for quantification in in vitro uptake studies but should easily be extended to quantify DNA nanostructures in blood or tissue samples. Copyright © 2014 Elsevier Inc. All rights reserved.
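
    The qPCR quantification itself follows the usual standard-curve arithmetic; a brief sketch (with invented Ct values and copy numbers, not the authors' data) is shown below.

```python
import numpy as np

# Standard curve: serial dilutions of the DNA origami scaffold (copies) vs. Ct
copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
ct_std = np.array([31.2, 27.8, 24.5, 21.1, 17.8])

# Fit Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), ct_std, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0  # ~1.0 corresponds to 100% PCR efficiency

# Interpolate unknowns (e.g. cell lysates after uptake) from their Ct values
ct_unknown = np.array([23.4, 26.0])
quantity = 10.0 ** ((ct_unknown - intercept) / slope)

print(f"amplification efficiency = {efficiency:.2f}")
print("estimated copies:", quantity)
```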

  18. Linear programming

    CERN Document Server

    Karloff, Howard

    1991-01-01

    To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...

  19. Accurate episomal HIV 2-LTR circles quantification using optimized DNA isolation and droplet digital PCR.

    Science.gov (United States)

    Malatinkova, Eva; Kiselinova, Maja; Bonczkowski, Pawel; Trypsteen, Wim; Messiaen, Peter; Vermeire, Jolien; Verhasselt, Bruno; Vervisch, Karen; Vandekerckhove, Linos; De Spiegelaere, Ward

    2014-01-01

    In HIV-infected patients on combination antiretroviral therapy (cART), the detection of episomal HIV 2-LTR circles is a potential marker for ongoing viral replication. Quantification of 2-LTR circles is based on quantitative PCR or, more recently, on digital PCR assessment, but is hampered by their low abundance. Sample pre-PCR processing is a critical step for 2-LTR circle quantification which has not yet been sufficiently evaluated in patient-derived samples. We compared two sample processing procedures to more accurately quantify 2-LTR circles using droplet digital PCR (ddPCR). Episomal HIV 2-LTR circles were isolated either by genomic DNA isolation or by a modified plasmid DNA isolation, to separate the small episomal circular DNA from chromosomal DNA. This was performed in a dilution series of HIV-infected cells and in HIV-1 infected patient-derived samples (n=59). Samples for the plasmid DNA isolation method were spiked with an internal control plasmid. Genomic DNA isolation enables robust 2-LTR circle quantification. However, in the lower ranges of detection, PCR inhibition caused by the high genomic DNA load substantially limits the amount of sample input, and this impacts sensitivity and accuracy. Moreover, total genomic DNA isolation resulted in a lower recovery of 2-LTR templates per isolate, further reducing its sensitivity. The modified plasmid DNA isolation with a spiked reference for normalization was more accurate in these low ranges compared to genomic DNA isolation. A linear correlation between both methods was observed in the dilution series (R2=0.974) and in the patient-derived samples with 2-LTR numbers above 10 copies per million peripheral blood mononuclear cells (PBMCs) (R2=0.671). Furthermore, Bland-Altman analysis revealed an average agreement between the methods within the 27 samples in which 2-LTR circles were detectable with both methods (bias: 0.3875±1.2657 log10). 2-LTR circles quantification in HIV-infected patients proved to be more
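
    The Bland-Altman agreement statistics quoted above (bias and its spread on the log10 scale) can be reproduced with a few lines; the paired values below are placeholders, not the study data.

```python
import numpy as np

# Paired log10(2-LTR copies per million PBMCs) from the two isolation methods
log_genomic = np.array([1.2, 1.8, 2.1, 1.5, 2.4, 1.1])   # hypothetical
log_plasmid = np.array([1.5, 1.9, 2.0, 1.9, 2.6, 1.6])   # hypothetical

diff = log_plasmid - log_genomic
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.3f} log10, limits of agreement = [{loa_low:.3f}, {loa_high:.3f}]")
```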

  20. High performance liquid chromatography-charged aerosol detection applying an inverse gradient for quantification of rhamnolipid biosurfactants.

    Science.gov (United States)

    Behrens, Beate; Baune, Matthias; Jungkeit, Janek; Tiso, Till; Blank, Lars M; Hayen, Heiko

    2016-07-15

    A method using high-performance liquid chromatography coupled to charged aerosol detection (HPLC-CAD) was developed for the quantification of rhamnolipid biosurfactants. Qualitative sample composition was determined by liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). The relative quantification of different rhamnolipid derivatives, including di-rhamnolipids, mono-rhamnolipids, and their precursors 3-(3-hydroxyalkanoyloxy)alkanoic acids (HAAs), differed between the two LC-MS instruments compared and revealed instrument-dependent responses. The HPLC-CAD method reported here provides a uniform response. An inverse gradient was applied for the absolute quantification of rhamnolipid congeners to account for the detector's dependency on the solvent composition. The CAD produces a uniform response not only for the analytes but also for structurally different (non-volatile) compounds. It was demonstrated that n-dodecyl-β-d-maltoside or deoxycholic acid can be used as alternative standards. HPLC with ultraviolet (UV) detection after derivatization of the rhamnolipids and HAAs to their corresponding phenacyl esters confirmed the obtained results but required additional, laborious sample preparation steps. Sensitivity, determined as the limit of detection and limit of quantification for four mono-rhamnolipids, was in the range of 0.3-1.0 and 1.2-2.0 μg/mL, respectively, for HPLC-CAD, and 0.4 and 1.5 μg/mL, respectively, for HPLC-UV. Linearity for HPLC-CAD was at least 0.996 (R(2)) in the calibrated range of about 1-200 μg/mL. Hence, the HPLC-CAD method presented here allows absolute quantification of rhamnolipids and their derivatives. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Determination of statin drugs in hospital effluent with dispersive liquid-liquid microextraction and quantification by liquid chromatography.

    Science.gov (United States)

    Martins, Ayrton F; Frank, Carla da S; Altissimo, Joseline; de Oliveira, Júlia A; da Silva, Daiane S; Reichert, Jaqueline F; Souza, Darliana M

    2017-08-24

    Statins are classified as being amongst the most prescribed agents for treating hypercholesterolaemia and preventing vascular diseases. In this study, a rapid and effective liquid chromatography method, assisted by diode array detection, was designed and validated for the simultaneous quantification of atorvastatin (ATO) and simvastatin (SIM) in hospital effluent samples. The solid phase extraction (SPE) of the analytes was optimized with regard to sorbent material and pH, and the dispersive liquid-liquid microextraction (DLLME) in terms of pH, ionic strength, and type and volume of extractor/dispersor solvents. The performance of both extraction procedures was evaluated in terms of linearity, quantification limits, accuracy (recovery %), precision and matrix effects for each analyte. The methods proved to be linear in the concentration range considered; the quantification limits were 0.45 µg L-1 for ATO and 0.75 µg L-1 for SIM; the matrix effect was almost absent in both methods; the average recoveries remained between 81.5 and 90.0%; and the RSD values were <20%. The validated methods were applied to the quantification of the statins in real samples of hospital effluent; the concentrations ranged from 18.8 µg L-1 to 35.3 µg L-1 for ATO, and from 30.3 µg L-1 to 38.5 µg L-1 for SIM. Since the calculated risk quotient was ≤192, the occurrence of ATO and SIM in hospital effluent poses a potentially serious risk to human health and the aquatic ecosystem.

  2. Simultaneous screening and quantification of 52 common pharmaceuticals and drugs of abuse in hair using UPLC-TOF-MS

    DEFF Research Database (Denmark)

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Dalsgaard, Petur Weihe

    2010-01-01

    A UPLC-TOF-MS method for simultaneous screening and quantification of 52 drugs in hair was developed and validated. The selected drugs represent the most common classes of pharmaceuticals and drugs of abuse, such as amphetamines, analgesics, antidepressants, antipsychotics, benzodiazepines and cocaine ... 0.05 ng/mg for 87% of the analytes. Good linear behaviour was achieved for most of the analytes in the range from LOQ to 10 or 25 ng/mg, except for the amphetamines. The method showed acceptable precision and trueness, since the obtained CV and bias values were...

  3. Uncertainty quantification for hyperbolic and kinetic equations

    CERN Document Server

    Pareschi, Lorenzo

    2017-01-01

    This book explores recent advances in uncertainty quantification for hyperbolic, kinetic, and related problems. The contributions address a range of different aspects, including: polynomial chaos expansions, perturbation methods, multi-level Monte Carlo methods, importance sampling, and moment methods. The interest in these topics is rapidly growing, as their applications have now expanded to many areas in engineering, physics, biology and the social sciences. Accordingly, the book provides the scientific community with a topical overview of the latest research efforts.

  4. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
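
    The well-known direction mentioned first, writing a Chebyshev (minimax) approximation problem as a linear program, takes the standard epigraph form below; the converse reduction is the paper's contribution and is not reproduced here.

```latex
\min_{x \in \mathbb{R}^n,\; t \in \mathbb{R}} \; t
\quad \text{subject to} \quad
-t \le a_i^{\top} x - b_i \le t, \qquad i = 1, \dots, m,
```

    so that the optimal value of t equals \min_x \|Ax - b\|_\infty.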

  5. First application of a microsphere-based immunoassay to the detection of genetically modified organisms (GMOs): quantification of Cry1Ab protein in genetically modified maize.

    Science.gov (United States)

    Fantozzi, Anna; Ermolli, Monica; Marini, Massimiliano; Scotti, Domenico; Balla, Branko; Querci, Maddalena; Langrell, Stephen R H; Van den Eede, Guy

    2007-02-21

    An innovative covalent microsphere immunoassay, based on the use of fluorescent beads coupled to a specific antibody, was developed for the quantification of the endotoxin Cry1Ab present in MON810 and Bt11 genetically modified (GM) maize lines. In particular, a specific protocol was developed to assess the presence of Cry1Ab over a very broad range of GM maize concentrations, from 0.1 to 100% [weight of genetically modified organism (GMO)/weight]. Test linearity was achieved in the range from 0.1 to 3%, whereas the fluorescence signal increased following a nonlinear model, reaching a plateau at 25%. The limits of detection and quantification were equal to 0.018 and 0.054%, respectively. The present study describes the first application of quantitative high-throughput immunoassays in GMO analysis.
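
    The nonlinear calibration behaviour described above (signal rising with GM content and saturating near 25%) is the kind of response typically fitted with a four-parameter logistic (4PL) curve; the sketch below assumes such a 4PL model on a log10 dose axis and uses invented data, since the abstract does not name the fitted function.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_x, a, b, log_c, d):
    """Four-parameter logistic on a log10 dose axis:
    a = lower asymptote, d = upper plateau, log_c = log10 of the inflection
    point, b = slope factor."""
    return d + (a - d) / (1.0 + 10.0 ** (b * (log_x - log_c)))

# Hypothetical GM maize content (%) vs. median bead fluorescence
gmo = np.array([0.1, 0.5, 1.0, 3.0, 10.0, 25.0, 100.0])
mfi = np.array([120.0, 480.0, 900.0, 2400.0, 5200.0, 7800.0, 8100.0])

popt, _ = curve_fit(four_pl, np.log10(gmo), mfi, p0=[100.0, 1.0, 0.7, 8000.0])
a, b, log_c, d = popt
print("fitted 4PL parameters:", popt)

# Back-calculate the GM content of an unknown from its fluorescence reading
signal = 1500.0
log_x = log_c + np.log10((a - d) / (signal - d) - 1.0) / b
print(f"estimated GM content: {10.0 ** log_x:.2f} %")
```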

  6. Multiplex electrochemical DNA platform for femtomolar-level quantification of genetically modified soybean.

    Science.gov (United States)

    Manzanares-Palenzuela, C Lorena; de-los-Santos-Álvarez, Noemí; Lobo-Castañón, María Jesús; López-Ruiz, Beatriz

    2015-06-15

    Current EU regulations on the mandatory labeling of genetically modified organisms (GMOs) with a minimum content of 0.9% would benefit from the availability of reliable and rapid methods to detect and quantify DNA sequences specific for GMOs. Different genosensors have been developed to this aim, mainly intended for GMO screening. A remaining challenge, however, is the development of genosensing platforms for GMO quantification, which should be expressed as the number of event-specific DNA sequences per taxon-specific sequences. Here we report a simple and sensitive multiplexed electrochemical approach for the quantification of Roundup-Ready Soybean (RRS). Two DNA sequences, taxon (lectin) and event-specific (RR), are targeted via hybridization onto magnetic beads. Both sequences are simultaneously detected by performing the immobilization, hybridization and labeling steps in a single tube and parallel electrochemical readout. Hybridization is performed in a sandwich format using signaling probes labeled with fluorescein isothiocyanate (FITC) or digoxigenin (Dig), followed by dual enzymatic labeling using Fab fragments of anti-Dig and anti-FITC conjugated to peroxidase or alkaline phosphatase, respectively. Electrochemical measurement of the enzyme activity is finally performed on screen-printed carbon electrodes. The assay gave a linear range of 2-250 pM for both targets, with LOD values of 650 fM (160 amol) and 190 fM (50 amol) for the event-specific and the taxon-specific targets, respectively. Results indicate that the method could be applied for GMO quantification below the European labeling threshold level (0.9%), offering a general approach for the rapid quantification of specific GMO events in foods. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Completeness Results for Linear Logic on Petri Nets

    DEFF Research Database (Denmark)

    Engberg, Uffe Henrik; Winskel, Glynn

    1993-01-01

    Completeness is shown for several versions of Girard's linear logic with respect to Petri nets as the class of models. The strongest logic considered is intuitionistic linear logic, with $\otimes$, $\multimap$, $\&$, $\oplus$ and the exponential ! ("of course"), and forms of quantification. This logic ...

  8. A simple method of digitizing analog scintigrams for quantification and digital archiving

    International Nuclear Information System (INIS)

    Schramm, M.; Kaempfer, B.; Wolf, H.; Clausen, M.; Wendhausen, H.; Henze, E.

    1993-01-01

    This study was undertaken to evaluate a quick, reliable and cheap method of digitizing analog scintigrams. 40 whole-body bone scintigrams were obtained simultaneously in analog and genuine digital format. The analog scans on X-ray film were then digitized secondarily by three different methods: 300 dpi flatbed scanning, high-resolution camera scanning and camcorder recording. A simple exposure approach using a light box, a cheap camcorder, a PC and image-grabber hard- and software proved to be optimal. Visual interpretation showed no differences in clinical findings when comparing the analog images with their secondarily digitized counterparts. To test the possibility of quantification, 126 equivalent ROIs were drawn both in the genuine digital and in the secondarily digitized images. Comparing the ROI count to whole-body count percentage of the corresponding ROIs showed the correlation to be linear. The evaluation of phantom studies showed the linear correlation to hold within a wide activity range. Thus, secondary digitalization of analog scintigrams is an easy, cheap and reliable method of archiving images and allows secondary digital quantification. (orig.) [de]

  9. [A simple method of digitizing analog scintigrams for quantification and digital archiving].

    Science.gov (United States)

    Schramm, M; Kämpfer, B; Wolf, H; Clausen, M; Wendhausen, H; Henze, E

    1993-02-01

    This study was undertaken to evaluate a quick, reliable and cheap method of digitizing analog scintigrams. 40 whole-body bone scintigrams were obtained simultaneously in analog and genuine digital format. The analog scans on x-ray film were then digitized secondarily by three different methods: 300 dpi flat-bed scanning, high-resolution camera scanning and camcorder recording. A simple exposure approach using a light box, a cheap camcorder, a PC and image grabber hard- and software proved to be optimal. Visual interpretation showed no differences in clinical findings when comparing the analog images with their secondarily digitized counterparts. To test the possibility of quantification, 126 equivalent ROIs were drawn both in the genuine digital and the secondarily digitized images. Comparing the ROI count to whole-body count percentage of the corresponding ROIs showed the correlation to be linear. The evaluation of phantom studies showed the linear correlation to be true within a wide activity range. Thus, secondary digitalization of analog scintigrams is an easy, cheap and reliable method of archiving images and allows secondary digital quantification.

  10. Validated LC-MS/MS Method for the Quantification of Ponatinib in Plasma: Application to Metabolic Stability.

    Directory of Open Access Journals (Sweden)

    Adnan A Kadi

    Full Text Available In the current work, a rapid, specific, sensitive and validated liquid chromatography tandem mass spectrometric method was developed for the quantification of ponatinib (PNT) in human plasma and rat liver microsomes (RLMs), with application to metabolic stability. Chromatographic separation of PNT and vandetanib (IS) was accomplished on an Agilent Eclipse Plus C18 analytical column (50 mm × 2.1 mm, 1.8 μm particle size) maintained at 21 ± 2 °C. The flow rate was 0.25 mL min-1 with a run time of 4 min. The mobile phase consisted of solvent A (10 mM ammonium formate, pH adjusted to 4.1 with formic acid) and solvent B (acetonitrile). Ions were generated by electrospray ionization (ESI) and multiple reaction monitoring (MRM) was used as the basis for quantification. The results revealed a linear calibration curve in the range of 5-400 ng mL-1 (r2 ≥ 0.9998), with a lower limit of quantification (LOQ) and lower limit of detection (LOD) of 4.66 and 1.53 ng mL-1 in plasma, and 4.19 and 1.38 ng mL-1 in RLMs. The intra- and inter-day precision and accuracy in plasma ranged from 1.06 to 2.54% and from -1.48 to -0.17%, respectively, whereas in RLMs they ranged from 0.97 to 2.31% and from -1.65 to -0.3%. The developed procedure was applied to the quantification of PNT in human plasma and RLMs to study the metabolic stability of PNT. PNT disappeared rapidly in the first 10 minutes of RLM incubation and the disappearance plateaued for the rest of the incubation. The in vitro half-life (t1/2) was 6.26 min and the intrinsic clearance (CLint) was 15.182 ± 0.477.

  11. linear-quadratic-linear model

    Directory of Open Access Journals (Sweden)

    Tanwiwat Jaikuna

    2017-02-01

    Full Text Available Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by comparing the dose volume histogram from CERR with that of the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The EQD2 from the software calculation and the manual calculation was not significantly different (0.00% difference), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool for generating the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
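
    The EQD2 verification described above rests on the standard LQ conversion EQD2 = D·(d + α/β)/(2 + α/β); the LQL model additionally switches to a linear dose-response above a threshold dose per fraction, which is omitted here. A minimal sketch of the per-structure conversion (the α/β values are illustrative assumptions, not parameters taken from the Isobio software):

    ```python
    def eqd2(total_dose_gy: float, dose_per_fraction_gy: float, alpha_beta_gy: float) -> float:
        """Equivalent dose in 2 Gy fractions from the linear-quadratic model."""
        return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

    # Example: 4 brachytherapy fractions of 7 Gy to a voxel
    d, n = 7.0, 4
    print(eqd2(n * d, d, alpha_beta_gy=10.0))  # tumour-like alpha/beta (assumed)
    print(eqd2(n * d, d, alpha_beta_gy=3.0))   # late-responding normal tissue (assumed)
    ```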

  12. Underwater Ranging

    OpenAIRE

    S. P. Gaba

    1984-01-01

    The paper deals with underwater laser ranging system, its principle of operation and maximum depth capability. The sources of external noise and methods to improve signal-to-noise ratio are also discussed.

  13. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is in model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  14. Accident sequence quantification with KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. In the PSA, accident sequence quantification calculates the core damage frequency and performs importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and requires an efficient computer code because of the long computation time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method of using KIRAP's cut set generator, and the method of performing accident sequence quantification with KIRAP. (author). 6 refs
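
    Accident sequence quantification ultimately evaluates the probability of the union of minimal cut sets generated from the combined event-tree/fault-tree model. A minimal sketch of the rare-event and min-cut-set upper-bound approximations commonly used at this step (the cut sets and basic-event probabilities are invented for illustration; this is not KIRAP code):

    ```python
    from math import prod

    # Hypothetical minimal cut sets (sets of basic-event names) and event probabilities
    cut_sets = [{"IE", "PUMP_A", "PUMP_B"}, {"IE", "DG_1", "DG_2"}, {"IE", "CCF_PUMPS"}]
    p = {"IE": 1e-2, "PUMP_A": 3e-3, "PUMP_B": 3e-3,
         "DG_1": 2e-2, "DG_2": 2e-2, "CCF_PUMPS": 1e-4}

    cs_probs = [prod(p[e] for e in cs) for cs in cut_sets]
    rare_event = sum(cs_probs)                         # first-order (rare-event) approximation
    mcub = 1.0 - prod(1.0 - q for q in cs_probs)       # min-cut-set upper bound

    print(f"sequence frequency ~ {rare_event:.3e} (rare event), {mcub:.3e} (MCUB)")
    ```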

  15. Accident sequence quantification with KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. In the PSA, accident sequence quantification calculates the core damage frequency and performs importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and requires an efficient computer code because of the long computation time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method of using KIRAP's cut set generator, and the method of performing accident sequence quantification with KIRAP. (author). 6 refs.

  16. Easy, Fast, and Reproducible Quantification of Cholesterol and Other Lipids in Human Plasma by Combined High Resolution MSX and FTMS Analysis

    Science.gov (United States)

    Gallego, Sandra F.; Højlund, Kurt; Ejsing, Christer S.

    2018-01-01

    Reliable, cost-effective, and gold-standard absolute quantification of non-esterified cholesterol in human plasma is of paramount importance in clinical lipidomics and for the monitoring of metabolic health. Here, we compared the performance of three mass spectrometric approaches available for direct detection and quantification of cholesterol in extracts of human plasma. These approaches are high resolution full scan Fourier transform mass spectrometry (FTMS) analysis, parallel reaction monitoring (PRM), and novel multiplexed MS/MS (MSX) technology, where fragments from selected precursor ions are detected simultaneously. Evaluating the performance of these approaches in terms of dynamic quantification range, linearity, and analytical precision showed that the MSX-based approach is superior to that of the FTMS and PRM-based approaches. To further show the efficacy of this approach, we devised a simple routine for extensive plasma lipidome characterization using only 8 μL of plasma, using a new commercially available ready-to-spike-in mixture with 14 synthetic lipid standards, and executing a single 6 min sample injection with combined MSX analysis for cholesterol quantification and FTMS analysis for quantification of sterol esters, glycerolipids, glycerophospholipids, and sphingolipids. Using this simple routine afforded reproducible and absolute quantification of 200 lipid species encompassing 13 lipid classes in human plasma samples. Notably, the analysis time of this procedure can be shortened for high throughput-oriented clinical lipidomics studies or extended with more advanced MSALL technology (Almeida R. et al., J. Am. Soc. Mass Spectrom. 26, 133-148 [1]) to support in-depth structural elucidation of lipid molecules.

  17. A RP-HPLC method for quantification of diclofenac sodium released from biological macromolecules.

    Science.gov (United States)

    Bhattacharya, Shiv Sankar; Banerjee, Subham; Ghosh, Ashoke Kumar; Chattopadhyay, Pronobesh; Verma, Anurag; Ghosh, Amitava

    2013-07-01

    Interpenetrating network (IPN) microbeads of sodium carboxymethyl locust bean gum (SCMLBG) and sodium carboxymethyl cellulose (SCMC) containing diclofenac sodium (DS), a nonsteroidal anti-inflammatory drug, were prepared by a single water-in-water (w/w) emulsion gelation process using AlCl3 as cross-linking agent in a complete aqueous environment. A pharmacokinetic study of these IPN microbeads was then carried out using a simple and feasible high-performance liquid chromatographic method with UV detection, which was developed and validated for the quantification of diclofenac sodium in rabbit plasma. The chromatographic separation was carried out on a Hypersil BDS C18 column (250 mm × 4.6 mm; 5 μm). The mobile phase was a mixture of acetonitrile and methanol (70:30, v/v) at a flow rate of 1.0 ml/min. The UV detection was set at 276 nm. The extraction recovery of diclofenac sodium in plasma for the three quality control (QC) samples ranged from 81.52% to 95.29%. The calibration curve was linear in the concentration range of 20-1000 ng/ml with a correlation coefficient (r(2)) above 0.9951. The method was specific and sensitive with a limit of quantification of 20 ng/ml. In stability tests, diclofenac sodium in rabbit plasma was stable during storage and the assay procedure. Copyright © 2013. Published by Elsevier B.V.

  18. Dried blood spot assay for the quantification of phenytoin using Liquid Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Villanelli, Fabio; Giocaliere, Elisa; Malvagia, Sabrina; Rosati, Anna; Forni, Giulia; Funghini, Silvia; Shokry, Engy; Ombrone, Daniela; Della Bona, Maria Luisa; Guerrini, Renzo; la Marca, Giancarlo

    2015-02-02

    Phenytoin (PHT) is one of the most commonly used anticonvulsant drugs for the treatment of epilepsy and bipolar disorders. The large amount of plasma required by conventional methods for drug quantification makes mass spectrometry combined with dried blood spot (DBS) sampling crucial for pediatric patients, where therapeutic drug monitoring or pharmacokinetic studies may be difficult to realize. DBS represents a new, convenient sampling support requiring minimally invasive blood drawing and providing long-term stability of samples and less expensive shipment and storage. The aim of this study was to develop a LC-MS/MS method for the quantification of PHT on DBS. This analytical method was validated and gave good linearity (r(2) = 0.999) in the range of 0-100 mg/l. The LOQ and LOD were 1.0 mg/l and 0.3 mg/l, respectively. The drug extraction from paper was performed in a few minutes using a mixture composed of 80% organic solvent. The recovery ranged from 85 to 90%; PHT in DBS was shown to be stable at different storage temperatures for one month. A good correlation was also obtained between PHT plasma and DBS concentrations. This method is both precise and accurate and appears to be particularly suitable for monitoring treatment with a simple and convenient sample collection procedure. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Rapid and sensitive Nitrosomonas europaea biosensor assay for quantification of bioavailable ammonium sensu strictu in soil.

    Science.gov (United States)

    Nguyen, Minh Dong; Risgaard-Petersen, Nils; Sørensen, Jan; Brandt, Kristian K

    2011-02-01

    Knowledge of bioavailable ammonium sensu strictu (i.e., immediately available for cellular uptake) in soil is required to understand nutrient uptake processes in microorganisms and is thus of vital importance for plant production. Here we present a novel ammonium biosensor approach based on the lithoautotrophic ammonia-oxidizing bacterium Nitrosomonas europaea transformed with a luxAB sensor plasmid. Bioluminescence-based ammonium detection was achieved within 10 min with a quantification limit in liquid samples of ∼20 μM and a linear response range up to 400 μM. Biosensor and conventional chemical quantification of ammonium in soil solutions agreed well across a range of sample and assay conditions. The biosensor was subsequently applied in a solid phase-contact assay allowing direct interaction of biosensor cells with soil particle-associated (i.e., exchangeable plus fixed) ammonium. The assay successfully quantified bioavailable ammonium even in unfertilized soil and demonstrated markedly higher ratios of bioavailable ammonium to water- or 2 M KCl-exchangeable ammonium in anoxic soil than in corresponding oxic soil. Particle-associated ammonium contributed at least 74% and 93% of the total bioavailable pool in oxic and anoxic soil, respectively. The N. europaea biosensor should have broad relevance for environmental monitoring of bioavailable ammonium and processes depending on ammonium bioavailability.

  20. Linearization of the bradford protein assay.

    Science.gov (United States)

    Ernst, Orna; Zor, Tsaffrir

    2010-04-12

    Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
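
    The linearization amounts to regressing the A590/A450 ratio, rather than A590 alone, against the protein standards and back-calculating unknowns from that ratio. A minimal sketch with invented absorbance readings (not data from the paper):

    ```python
    import numpy as np

    # BSA standards (µg) and assumed absorbance readings at 590 nm and 450 nm
    protein = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
    a590 = np.array([0.10, 0.16, 0.22, 0.33, 0.52, 0.83])
    a450 = np.array([0.48, 0.46, 0.44, 0.41, 0.36, 0.29])

    slope, intercept = np.polyfit(protein, a590 / a450, 1)

    def protein_ug(a590_sample: float, a450_sample: float) -> float:
        """Back-calculate protein (µg) from the 590/450 nm absorbance ratio."""
        return (a590_sample / a450_sample - intercept) / slope

    print(round(protein_ug(0.40, 0.39), 2))
    ```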

  1. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    ]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods. Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear

  2. The art of linear electronics

    CERN Document Server

    Hood, John Linsley

    2013-01-01

    The Art of Linear Electronics presents the principal aspects of linear electronics and techniques in linear electronic circuit design. The book provides a wide range of information on the elucidation of the methods and techniques in the design of linear electronic circuits. The text discusses such topics as electronic component symbols and circuit drawing; passive and active semiconductor components; DC and low frequency amplifiers; and the basic effects of feedback. Subjects on frequency response modifying circuits and filters; audio amplifiers; low frequency oscillators and waveform generato

  3. Quantification of methionine and selenomethionine in biological samples using multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS).

    Science.gov (United States)

    Vu, Dai Long; Ranglová, Karolína; Hájek, Jan; Hrouzek, Pavel

    2018-05-01

    Quantification of selenated amino-acids currently relies on methods employing inductively coupled plasma mass spectrometry (ICP-MS). Although very accurate, these methods do not allow the simultaneous determination of standard amino-acids, hampering the comparison of the content of selenated versus non-selenated species such as methionine (Met) and selenomethionine (SeMet). This paper reports two approaches for the simultaneous quantification of Met and SeMet. In the first approach, standard enzymatic hydrolysis employing Protease XIV was applied for the preparation of samples. The second approach utilized methanesulfonic acid (MA) for the hydrolysis of samples, either in a reflux system or in a microwave oven, followed by derivatization with diethyl ethoxymethylenemalonate. The prepared samples were then analyzed by multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS). Both approaches provided platforms for the accurate determination of selenium/sulfur substitution rate in Met. Moreover the second approach also provided accurate simultaneous quantification of Met and SeMet with a low limit of detection, low limit of quantification and wide linearity range, comparable to the commonly used gas chromatography mass spectrometry (GC-MS) method or ICP-MS. The novel method was validated using certified reference material in conjunction with the GC-MS reference method. Copyright © 2018. Published by Elsevier B.V.

  4. Development of real-time PCR method for the detection and the quantification of a new endogenous reference gene in sugar beet "Beta vulgaris L.": GMO application.

    Science.gov (United States)

    Chaouachi, Maher; Alaya, Akram; Ali, Imen Ben Haj; Hafsa, Ahmed Ben; Nabi, Nesrine; Bérard, Aurélie; Romaniuk, Marcel; Skhiri, Fethia; Saïd, Khaled

    2013-01-01

    KEY MESSAGE: Here, we describe a newly developed quantitative real-time PCR method for the detection and quantification of a new specific endogenous reference gene used in GMO analysis. The key requirement of this study was the identification of a new reference gene used for the differentiation of the four genomic sections of the sugar beet (Beta vulgaris L.) (Beta, Corrollinae, Nanae and Procumbentes) suitable for quantification of genetically modified sugar beet. A specific qualitative polymerase chain reaction (PCR) assay was designed to detect sugar beet, amplifying a region of the adenylate transporter (ant) gene only from species of genomic section I of the genus Beta (cultivated and wild relatives) and giving negative PCR results for 7 species of the 3 other sections, 8 related species and 20 non-sugar beet plants. The sensitivity of the assay was 15 haploid genome copies (HGC). A quantitative real-time polymerase chain reaction (QRT-PCR) assay was also performed, showing high linearity (R(2) > 0.994) over sugar beet standard concentrations ranging from 20,000 to 10 HGC of sugar beet DNA per PCR. The QRT-PCR assay described in this study was specific and more sensitive for sugar beet quantification compared to the validated test previously reported by the European Reference Laboratory. This assay is suitable for GMO quantification in routine analysis from a wide variety of matrices.

  5. Miniature linear cooler development

    International Nuclear Information System (INIS)

    Pruitt, G.R.

    1993-01-01

    An overview is presented of the status of a family of miniature linear coolers currently under development by Hughes Aircraft Co. for use in hand held, volume limited or power limited infrared applications. These coolers, representing the latest additions to the Hughes family of TOP trademark [twin-opposed piston] linear coolers, have been fabricated and tested in three different configurations. Each configuration is designed to utilize a common compressor assembly resulting in reduced manufacturing costs. The baseline compressor has been integrated with two different expander configurations and has been operated with two different levels of input power. These various configuration combinations offer a wide range of performance and interface characteristics which may be tailored to applications requiring limited power and size without significantly compromising cooler capacity or cooldown characteristics. Key cooler characteristics and test data are summarized for three combinations of cooler configurations which are representative of the versatility of this linear cooler design. Configurations reviewed include the shortened coldfinger [1.50 to 1.75 inches long], limited input power [less than 17 Watts] for low power availability applications; the shortened coldfinger with higher input power for lightweight, higher performance applications; and coldfingers compatible with DoD 0.4 Watt Common Module coolers for wider range retrofit capability. Typical weight of these miniature linear coolers is less than 500 grams for the compressor, expander and interconnecting transfer line. Cooling capacity at 80K at room ambient conditions ranges from 400 mW to greater than 550 mW. Steady state power requirements for maintaining a heat load of 150 mW at 80K has been shown to be less than 8 Watts. Ongoing reliability growth testing is summarized including a review of the latest test article results

  6. Glassy carbon electrode modified with multi-walled carbon nanotubes sensor for the quantification of antihistamine drug pheniramine in solubilized systems

    Directory of Open Access Journals (Sweden)

    Rajeev Jain

    2012-02-01

    Full Text Available A sensitive electroanalytical method for quantification of pheniramine in pharmaceutical formulation has been investigated on the basis of the enhanced electrochemical response at glassy carbon electrode modified with multi-walled carbon nanotubes in the presence of sodium lauryl sulfate. The experimental results suggest that the pheniramine in anionic surfactant solution exhibits electrocatalytic effect resulting in a marked enhancement of the peak current response. Peak current response is linearly dependent on the concentration of pheniramine in the range 200–1500 μg/mL with correlation coefficient 0.9987. The limit of detection is 58.31 μg/mL. The modified electrode shows good sensitivity and repeatability. Keywords: Pheniramine, Sodium lauryl sulfate (SLS, Glassy carbon electrode modified with multi-walled carbon nanotubes (GCE-MWCNTs, Solubilized systems, Voltammetric quantification

  7. Quantification of N-acetyl- and N-glycolylneuraminic acids by a stable isotope dilution assay using high-performance liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Allevi, Pietro; Femia, Eti Alessandra; Costa, Maria Letizia; Cazzola, Roberta; Anastasia, Mario

    2008-11-28

    The present report describes a method for the quantification of N-acetyl- and N-glycolylneuraminic acids without any derivatization, using their (13)C(3)-isotopologues as internal standards and a C(18) reversed-phase column modified by decylboronic acid, which allows for the first time a complete chromatographic separation between the two analytes. The method is based on high-performance liquid chromatography coupled with electrospray ion-trap mass spectrometry. The limit of quantification of the method is 0.1 mg/L (2.0 ng on column) for both analytes. The calibration curves are linear for both sialic acids over the range of 0.1-80 mg/L (2.0-1600 ng on column) with a correlation coefficient greater than 0.997. The proposed method was applied to the quantitative determination of sialic acids released from fetuin as a model of glycoproteins.
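
    With (13)C(3)-isotopologue internal standards, quantification follows the usual isotope-dilution scheme: the analyte-to-internal-standard peak-area ratio is calibrated against standard concentrations and unknowns are back-calculated from their measured ratios. A minimal sketch with invented peak areas (not the published data):

    ```python
    import numpy as np

    # Calibration: Neu5Ac standards (mg/L) spiked with a fixed amount of (13)C3 internal standard
    conc = np.array([0.1, 1.0, 5.0, 20.0, 40.0, 80.0])
    area_ratio = np.array([0.012, 0.13, 0.64, 2.6, 5.1, 10.3])   # analyte / IS peak area (assumed)

    slope, intercept = np.polyfit(conc, area_ratio, 1)

    def sialic_acid_mg_per_l(analyte_area: float, is_area: float) -> float:
        """Back-calculate analyte concentration (mg/L) from the area ratio."""
        return (analyte_area / is_area - intercept) / slope

    print(round(sialic_acid_mg_per_l(15300, 11800), 2))
    ```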

  8. Quantification of protein based on single-molecule counting by total internal reflection fluorescence microscopy with adsorption equilibrium

    International Nuclear Information System (INIS)

    Wang Lei; Xu Guang; Shi Zhikun; Jiang Wei; Jin Wenrui

    2007-01-01

    We developed a sensitive single-molecule imaging method for quantification of protein by total internal reflection fluorescence microscopy with adsorption equilibrium. In this method, the adsorption equilibrium of the protein was achieved between solution and glass substrate. Then, fluorescence images of protein molecules in an evanescent wave field were taken by a highly sensitive electron multiplying charge coupled device. Finally, the number of fluorescent spots corresponding to the protein molecules in the images was counted. Alexa Fluor 488-labeled goat anti-rat IgG(H + L) was chosen as the model protein. The spot number showed an excellent linear relationship with protein concentration. The linear concentration range was 5.4 × 10^-11 to 8.1 × 10^-10 mol L^-1.
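
    Quantification here rests on counting fluorescent spots in the TIRF images and reading concentration off a linear calibration of spot number versus protein concentration. The counting step can be sketched with simple thresholding and connected-component labelling (an illustrative stand-in; the authors' image-processing pipeline is not described in this record):

    ```python
    import numpy as np
    from scipy import ndimage

    def count_spots(image: np.ndarray, threshold: float) -> int:
        """Count connected bright regions (candidate single molecules) above a threshold."""
        _, n_spots = ndimage.label(image > threshold)
        return n_spots

    # Synthetic example frame: background noise plus three bright single-molecule spots
    rng = np.random.default_rng(0)
    frame = rng.normal(100.0, 5.0, size=(64, 64))
    for y, x in [(10, 12), (30, 40), (50, 20)]:
        frame[y, x] += 200.0

    print(count_spots(frame, threshold=150.0))   # -> 3
    ```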

  9. Quantification of crystalline cellulose in lignocellulosic biomass using sum frequency generation (SFG) vibration spectroscopy and comparison with other analytical methods.

    Science.gov (United States)

    Barnette, Anna L; Lee, Christopher; Bradley, Laura C; Schreiner, Edward P; Park, Yong Bum; Shin, Heenae; Cosgrove, Daniel J; Park, Sunkyu; Kim, Seong H

    2012-07-01

    The non-centrosymmetry requirement of sum frequency generation (SFG) vibration spectroscopy allows the detection and quantification of crystalline cellulose in lignocellulose biomass without spectral interferences from hemicelluloses and lignin. This paper shows a correlation between the amount of crystalline cellulose in biomass and the SFG signal intensity. Model biomass samples were prepared by mixing commercially available cellulose, xylan, and lignin to defined concentrations. The SFG signal intensity was found sensitive to a wide range of crystallinity, but varied non-linearly with the mass fraction of cellulose in the samples. This might be due to the matrix effects such as light scattering and absorption by xylan and lignin, as well as the non-linear density dependence of the SFG process itself. Comparison with other techniques such as XRD, FT-Raman, FT-IR and NMR demonstrate that SFG can be a complementary and sensitive tool to assess crystalline cellulose in biomass. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Ultra-Sensitive NT-proBNP Quantification for Early Detection of Risk Factors Leading to Heart Failure

    Directory of Open Access Journals (Sweden)

    Keum-Soo Song

    2017-09-01

    Full Text Available Cardiovascular diseases such as acute myocardial infarction and heart failure accounted for the death of 17.5 million people (31% of all global deaths) in 2015. Monitoring the level of circulating N-terminal proBNP (NT-proBNP) is crucial for the detection of people at risk of heart failure. In this article, we describe a novel ultra-sensitive NT-proBNP test (us-NT-proBNP) that allows the quantification of circulating NT-proBNP in 30 min at 25 °C in the linear detection range of 7.0–600 pg/mL. This is the first report on the application of a fluorescence bead-labeled detection antibody, a DNA-guided detection method, and a glass fiber membrane platform for the quantification of NT-proBNP in clinical samples. Limit of blank, limit of detection, and limit of quantification were 2.0 pg/mL, 3.7 pg/mL, and 7 pg/mL, respectively. The coefficient of variation was found to be less than 10% in the entire detection range of 7–600 pg/mL. The test demonstrated specificity for NT-proBNP without interference from bilirubin, intralipid, biotin, and hemoglobin. The serial dilution test for plasma samples containing various NT-proBNP levels showed a linear decrease in concentration with regression coefficients of 0.980–0.998. These results indicate that the us-NT-proBNP test does not suffer from interference by plasma components in the measurement of NT-proBNP in clinical samples.

  11. Linear Algebra and Smarandache Linear Algebra

    OpenAIRE

    Vasantha, Kandasamy

    2003-01-01

    The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...

  12. Applicability of hybrid linear ion trap-high resolution mass spectrometry and quadrupole-linear ion trap-mass spectrometry for mycotoxin analysis in baby food.

    Science.gov (United States)

    Rubert, Josep; James, Kevin J; Mañes, Jordi; Soler, Carla

    2012-02-03

    Recent developments in mass spectrometers have created a paradoxical situation: different mass spectrometers are available, each with its specific strengths and drawbacks. Hybrid instruments try to unify several advantages in one instrument. In this study, two widely used hybrid instruments were compared: the hybrid quadrupole-linear ion trap mass spectrometer (QTRAP®) and the hybrid linear ion trap-high resolution mass spectrometer (LTQ-Orbitrap®). Both instruments were applied to detect the presence of 18 selected mycotoxins in baby food. Analytical parameters were validated according to 2002/657/CE. Limits of quantification (LOQs) obtained with the QTRAP® instrument ranged from 0.45 to 45 μg kg⁻¹, while lower limit of quantification (LLOQ) values obtained with the LTQ-Orbitrap® were 7-70 μg kg⁻¹. The correlation coefficients (r) in both cases were higher than 0.989. These values highlight that the two instruments are complementary for the analysis of mycotoxins in baby food: while the QTRAP® achieved the best sensitivity and selectivity, the LTQ-Orbitrap® allowed the identification of non-target and unknown compounds. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Comparison of screen-film combinations: results of a contrast detail study and interactive image quality analysis. Pt. 2. Linear assessment of grey scale ranges with interactive (automatic) image analysis

    International Nuclear Information System (INIS)

    Stamm, G.; Eichbaum, G.; Hagemann, G.

    1997-01-01

    The following three screen-film combinations were compared: (a) a combination of anticross-over film and UV-light emitting screens, (b) a combination of blue-light emitting screens and film, and (c) a conventional green fluorescing screen-film combination. Radiographs of a specially designed plexiglass phantom (0.2 x 0.2 x 0.12 m^3) with bar patterns of lead and plaster and of air, respectively, were obtained using the following parameters: 12 pulse generator, 0.6 mm focus size, 4.7 mm aluminium prefilter, a grid with 40 lines/cm (12:1) and a focus-detector distance of 1.15 m. Image analysis was performed using an IBAS system and a Zeiss Kontron computer. Display conditions were the following: display distance 0.12 m, a vario film objective 35/70 (Zeiss), a video camera tube with a PbO photocathode, 625 lines (Siemens Heimann), and an IBAS image matrix of 512 x 512 pixels with a resolution of 7 lines/mm; the projected matrix area was 5000 μm^2. Grey scale ranges were measured on a line perpendicular to the grouped bar patterns. The difference between the maximum and minimum density value served as the signal. The spatial resolution of the detector system was measured when the signal value was three times higher than the standard deviation of the means of multiple density measurements. The results showed considerable advantages of the two new screen-film combinations as compared to the conventional screen-film combination. This result was contradictory to the findings of the pure visual assessment of thresholds (part I), which had found no differences. The authors concluded that (automatic) interactive image analysis algorithms serve as an objective measure and are specifically advantageous when small differences in image quality are to be evaluated. (orig.)

  14. Verb aspect, alternations and quantification

    Directory of Open Access Journals (Sweden)

    Svetla Koeva

    2015-11-01

    Full Text Available Verb aspect, alternations and quantification In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that the verb aspect pairs are different lexical units with different (although related) meaning, different argument structure (reflecting categories, explicitness and referential status of arguments) and different sets of semantic and syntactic alternations. The verb prefixes resulting in the derivation of perfective verbs can in some cases be interpreted as lexical quantifiers as well. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of the lexical quantification by means of verbal prefixes is the quantified verb phrase and that the scope remains constant in all derived alternations. The paper concerns the basic issues of these complex problems, while the detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.

  15. Advancing agricultural greenhouse gas quantification*

    Science.gov (United States)

    Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin

    2013-03-01

    1. Introduction Better information on greenhouse gas (GHG) emissions and mitigation potential in the agricultural sector is necessary to manage these emissions and identify responses that are consistent with the food security and economic development priorities of countries. Critical activity data (what crops or livestock are managed in what way) are poor or lacking for many agricultural systems, especially in developing countries. In addition, the currently available methods for quantifying emissions and mitigation are often too expensive or complex or not sufficiently user friendly for widespread use. The purpose of this focus issue is to capture the state of the art in quantifying greenhouse gases from agricultural systems, with the goal of better understanding our current capabilities and near-term potential for improvement, with particular attention to quantification issues relevant to smallholders in developing countries. This work is timely in light of international discussions and negotiations around how agriculture should be included in efforts to reduce and adapt to climate change impacts, and considering that significant climate financing to developing countries in post-2012 agreements may be linked to their increased ability to identify and report GHG emissions (Murphy et al 2010, CCAFS 2011, FAO 2011). 2. Agriculture and climate change mitigation The main agricultural GHGs—methane and nitrous oxide—account for 10%-12% of anthropogenic emissions globally (Smith et al 2008), or around 50% and 60% of total anthropogenic methane and nitrous oxide emissions, respectively, in 2005. Net carbon dioxide fluxes between agricultural land and the atmosphere linked to food production are relatively small, although significant carbon emissions are associated with degradation of organic soils for plantations in tropical regions (Smith et al 2007, FAO 2012). Population growth and shifts in dietary patterns toward more meat and dairy consumption will lead to

  16. Establishment of a novel two-probe real-time PCR for simultaneously quantification of hepatitis B virus DNA and distinguishing genotype B from non-B genotypes.

    Science.gov (United States)

    Wang, Wei; Liang, Hongpin; Zeng, Yongbin; Lin, Jinpiao; Liu, Can; Jiang, Ling; Yang, Bin; Ou, Qishui

    2014-11-01

    Establishment of a simple, rapid and economical method for quantification and genotyping of hepatitis B virus (HBV) is of great importance for clinical diagnosis and treatment of chronic hepatitis B patients. We therefore aimed to develop a novel two-probe real-time PCR for simultaneous quantification of the HBV viral concentration and distinguishing genotype B from non-B genotypes. Conserved primers and TaqMan probes for genotype B and non-B genotypes were designed. The linear range, detection sensitivity, specificity and repeatability of the method were assessed. 539 serum samples from HBV-infected patients were assayed, and the results were compared with commercial HBV quantification and HBV genotyping kits. The detection sensitivity of the two-probe real-time PCR was 500 IU/ml; the linear range was 10^3-10^9 IU/ml, and the intra-assay and inter-assay CVs were between 0.84% and 2.80%. No cross-reaction was observed between genotypes B and non-B. Of the 539 detected samples, 509 were HBV DNA positive. The results showed that 54.0% (275/509) of the samples were genotype B, 39.5% (201/509) were genotype non-B and 6.5% (33/509) were mixed genotype. The coincidence rate between the method and a commercial HBV DNA genotyping kit was 95.9% (488/509, kappa = 0.923), and good agreement between the method and a commercial HBV DNA qPCR kit was also achieved. A novel two-probe real-time PCR method for simultaneous quantification of the HBV viral concentration and distinguishing genotype B from non-B genotypes was established. The assay was sensitive, specific and reproducible, and can be applied in areas where HBV genotypes B and C are prevalent, especially in China. Copyright © 2014 Elsevier B.V. All rights reserved.
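
    In real-time PCR, viral load is read off a standard curve relating quantification cycle (Cq) to log10 concentration, and the amplification efficiency follows from the curve's slope. A minimal sketch of that calculation with invented Cq values (not data from this study):

    ```python
    import numpy as np

    # Standard curve: HBV DNA standards (IU/mL) and assumed Cq values
    standards = np.array([1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9])
    cq = np.array([33.1, 29.7, 26.3, 22.9, 19.5, 16.1, 12.7])

    slope, intercept = np.polyfit(np.log10(standards), cq, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0        # ~1.0 corresponds to 100% efficiency

    def viral_load(sample_cq: float) -> float:
        """Interpolate an unknown sample's concentration (IU/mL) from its Cq."""
        return 10.0 ** ((sample_cq - intercept) / slope)

    print(f"efficiency = {efficiency:.2f}, load = {viral_load(24.8):.2e} IU/mL")
    ```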

  17. Extended linear chain compounds

    CERN Document Server

    Linear chain substances span a large cross section of contemporary chemistry ranging from covalent polymers, to organic charge transfer com­ plexes to nonstoichiometric transition metal coordination complexes. Their commonality, which coalesced intense interest in the theoretical and exper­ imental solid state physics/chemistry communities, was based on the obser­ vation that these inorganic and organic polymeric substrates exhibit striking metal-like electrical and optical properties. Exploitation and extension of these systems has led to the systematic study of both the chemistry and physics of highly and poorly conducting linear chain substances. To gain a salient understanding of these complex materials rich in anomalous aniso­ tropic electrical, optical, magnetic, and mechanical properties, the conver­ gence of diverse skills and talents was required. The constructive blending of traditionally segregated disciplines such as synthetic and physical organic, inorganic, and polymer chemistry, crystallog...

  18. Quantification of informed opinion

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1985-01-01

    The objective of this session, Quantification of Informed Opinion, is to provide the statistician with a better understanding of this important area. The NRC uses informed opinion, sometimes called engineering judgment or subjective judgment, in many areas. Sometimes informed opinion is the only source of information that exists, especially in phenomenological areas, such as steam explosions, where experiments are costly and phenomena are very difficult to measure. There are many degrees of informed opinion. These vary from the weatherman who makes predictions concerning relatively high probability events with a large data base to the phenomenological expert who must use his intuition tempered with basic knowledge and little or no measured data to predict the behavior of events with a low probability of occurrence. The first paper in this session provides the reader with an overview of the subject area. The second paper provides some aspects that must be considered in the collection of informed opinion to improve the quality of the information. The final paper contains an example of the use of informed opinion in the area of seismic hazard characterization. These papers should be useful to researchers and statisticians who need to collect and use informed opinion in their work

  19. Quantification In Neurology

    Directory of Open Access Journals (Sweden)

    Netravati M

    2005-01-01

    Full Text Available There is a distinct shift of emphasis in clinical neurology in the last few decades. A few years ago, it was just sufficient for a clinician to precisely record history, document signs, establish diagnosis and write prescription. In the present context, there has been a significant intrusion of scientific culture in clinical practice. Several criteria have been proposed, refined and redefined to ascertain accurate diagnosis for many neurological disorders. Introduction of the concept of impairment, disability, handicap and quality of life has added new dimension to the measurement of health and disease and neurological disorders are no exception. "Best guess" treatment modalities are no more accepted and evidence based medicine has become an integral component of medical care. Traditional treatments need validation and new therapies require vigorous trials. Thus, proper quantification in neurology has become essential, both in practice and research methodology in neurology. While this aspect is widely acknowledged, there is a limited access to a comprehensive document pertaining to measurements in neurology. This following description is a critical appraisal of various measurements and also provides certain commonly used rating scales/scores in neurological practice.

  20. Echocardiographic chamber quantification in a healthy Dutch population.

    Science.gov (United States)

    van Grootel, R W J; Menting, M E; McGhie, J; Roos-Hesselink, J W; van den Bosch, A E

    2017-12-01

    For accurate interpretation of echocardiographic measurements normative data are required, which are provided by guidelines. For this article, the hypothesis was that these cannot be extrapolated to the Dutch population, since in Dutch clinical practice often higher values are found, which may not be pathological but physiological. Therefore this study aimed to 1) obtain and propose normative values for cardiac chamber quantification in a healthy Dutch population and 2) determine influences of baseline characteristics on these measurements. Prospectively recruited healthy subjects, aged 20-72 years (at least 28 subjects per age decade, equally distributed for gender) underwent physical examination and 2D and 3D echocardiography. Both ventricles and atria were assessed and volumes were calculated. 147 subjects were included (age 44 ± 14 years, 50% female). Overall, feasibility was good for both linear and volumetric measurements. Linear and volumetric parameters were consistently higher than current guidelines recommend, while functional parameters were in line with the guidelines. This was more so in the older population. 3D volumes were higher than 2D volumes. Gender dependency was seen in all body surface area (BSA) corrected volumes and with increasing age, ejection fractions decreased. This study provides 2D and 3D echocardiographic reference ranges for both ventricles and atria derived from a healthy Dutch population. BSA indexed volumes are gender-dependent, age did not influence ventricular volumes and a rise in blood pressure was independently associated with increased right ventricular volumes. The higher volumes found may be indicative for the Dutch population being the tallest in the world.
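
    The BSA-indexed volumes referred to above are simply chamber volumes divided by body surface area, with BSA estimated from height and weight; the Du Bois formula below is one common choice and is used here only as an assumption (the formula actually used in the study is not stated in this record):

    ```python
    def bsa_du_bois(height_cm: float, weight_kg: float) -> float:
        """Body surface area (m^2) from the Du Bois formula."""
        return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

    def indexed_volume(volume_ml: float, height_cm: float, weight_kg: float) -> float:
        """Chamber volume indexed to BSA (mL/m^2)."""
        return volume_ml / bsa_du_bois(height_cm, weight_kg)

    # Example: a 55 mL left atrial volume in a 180 cm, 75 kg subject
    print(round(indexed_volume(55.0, 180.0, 75.0), 1))
    ```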

  1. Polyaniline-graphene oxide nanocomposite sensor for quantification of calcium channel blocker levamlodipine.

    Science.gov (United States)

    Jain, Rajeev; Sinha, Ankita; Khan, Ab Lateef

    2016-08-01

    A novel polyaniline-graphene oxide nanocomposite (PANI/GO/GCE) sensor has been fabricated for quantification of the calcium channel blocker drug levamlodipine (LAMP). The fabricated sensor has been characterized by electrochemical impedance spectroscopy, square wave and cyclic voltammetry, Raman spectroscopy and Fourier transform infrared (FTIR) spectroscopy. The developed PANI/GO/GCE sensor has excellent analytical performance towards electrocatalytic oxidation as compared to PANI/GCE, GO/GCE and bare GCE. Under optimized experimental conditions, the fabricated sensor exhibits a linear response for the oxidation of LAMP over a concentration range from 1.25 μg mL(-1) to 13.25 μg mL(-1), with a correlation coefficient (r(2)) of 0.9950, a detection limit of 1.07 ng mL(-1) and a quantification limit of 3.57 ng mL(-1). The sensor shows excellent performance for detecting LAMP with a reproducibility of 2.78% relative standard deviation (RSD). The proposed method has been successfully applied for LAMP determination in a pharmaceutical formulation with recoveries from 99.88% to 101.75%. Copyright © 2015 Elsevier B.V. All rights reserved.
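
    Detection and quantification limits of this kind are conventionally estimated from the calibration slope and the standard deviation of the blank or of the calibration residuals, e.g. LOD = 3.3·σ/slope and LOQ = 10·σ/slope. Whether the authors used exactly this ICH-style estimator is not stated, so the sketch below (with invented calibration data) is illustrative only:

    ```python
    import numpy as np

    # Assumed calibration of peak current (µA) vs. LAMP concentration (µg/mL)
    conc = np.array([1.25, 3.25, 5.25, 7.25, 9.25, 11.25, 13.25])
    current = np.array([2.1, 5.4, 8.8, 12.0, 15.3, 18.7, 21.9])

    slope, intercept = np.polyfit(conc, current, 1)
    residual_sd = np.std(current - (slope * conc + intercept), ddof=2)

    lod = 3.3 * residual_sd / slope    # limit of detection
    loq = 10.0 * residual_sd / slope   # limit of quantification
    print(f"LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
    ```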

  2. Nanoparticle-based immunosensor with apoferritin templated metallic phosphate label for quantification of phosphorylated acetylcholinesterase

    International Nuclear Information System (INIS)

    Du, Dan; Chen, Aiqiong; Xie, Yunying; Zhang, Aidong; Lin, Yuehe

    2011-01-01

    A new sandwich-like electrochemical immunosensor has been developed for quantification of organophosphorylated acetylcholinesterase (OP-AChE), an exposure biomarker of organophosphate pesticides and nerve agents. Zirconia nanoparticles (ZrO2 NPs) were anchored on a screen printed electrode (SPE) to preferably capture OP-AChE adducts by metal chelation with phospho-moieties, which was selectively recognized by lead phosphate-apoferritin labeled anti-AChE antibody (LPA-anti-AChE). The sandwich-like immunoreactions were performed among ZrO2 NPs, OP-AChE and LPA-anti-AChE to form ZrO2/OP-AChE/LPA-anti-AChE complex and the released lead ions were detected on a disposable SPE. The binding affinity was investigated by both square wave voltammetry (SWV) and quartz crystal microbalance (QCM) measurements. The proposed immunosensor yielded a linear response current over a broad OP-AChE concentrations range from 0.05 nM to 10 nM, with detection limit of 0.02 nM, which has enough sensitivity for monitoring of low-dose exposure to OPs. This method avoids the drawback of unavailability of commercial OP-specific antibody as well as amplifies detection signal by using apoferritin encoded metallic phosphate nanoparticle tags. This nanoparticle-based immunosensor offers a new method for rapid, sensitive, selective and inexpensive quantification of phosphorylated adducts for monitoring of OP pesticides and nerve agents exposures.

  3. High-performance liquid chromatographic quantification of rifampicin in human plasma: method for therapeutic drug monitoring

    International Nuclear Information System (INIS)

    Sameh, T.; Hanene, E.; Jebali, N.

    2013-01-01

    A high performance liquid chromatography (HPLC) method has been developed that allows quantification of rifampicin in human plasma. The method is based on the precipitation of proteins in human plasma with methanol. Optimal assay conditions were found with a C18 column and a simple mobile phase consisting of 0.05 M dipotassium hydrogen phosphate buffer and acetonitrile (53/47, v/v) with 0.086% diethylamine, pH 4.46. The flow rate was 0.6 ml/min and the drug was monitored at 340 nm. Results from the HPLC analyses showed that the assay method is linear in the concentration range of 1-40 µg/ml (r2 > 0.99). The limit of quantification and limit of detection of rifampicin were 0.632 µg/ml and 0.208 µg/ml, respectively. Intra-day and inter-day coefficients of variation and bias were below 10% for all samples, suggesting good precision and accuracy of the method. Recoveries were greater than 90% in a plasma sample volume of 100 µl. The method is being successfully applied to therapeutic drug monitoring of rifampicin in plasma samples from patients with tuberculosis and staphylococcal infections. (author)

  4. Sulfur and nitrogen co-doped carbon dots sensors for nitric oxide fluorescence quantification

    Energy Technology Data Exchange (ETDEWEB)

    Simões, Eliana F.C. [Centro de Investigação em Química da Universidade do Porto (CIQ-UP), Faculdade de Farmácia da Universidade de Coimbra, Pólo das Ciências da Saúde, 3000-548 Coimbra (Portugal); Centro de Investigação em Química da Universidade do Porto (CIQ-UP), Departamento de Geociências, Ambiente e Ordenamento do Território, Faculdade de Ciências da Universidade do Porto, R. Campo Alegre 687, 4169-007 Porto (Portugal); Leitão, João M.M., E-mail: jleitao@ff.uc.pt [Centro de Investigação em Química da Universidade do Porto (CIQ-UP), Faculdade de Farmácia da Universidade de Coimbra, Pólo das Ciências da Saúde, 3000-548 Coimbra (Portugal); Esteves da Silva, Joaquim C.G. [Centro de Investigação em Química da Universidade do Porto (CIQ-UP), Departamento de Geociências, Ambiente e Ordenamento do Território, Faculdade de Ciências da Universidade do Porto, R. Campo Alegre 687, 4169-007 Porto (Portugal)

    2017-04-01

    Microwave-synthesized sulfur and nitrogen co-doped carbon dots responded selectively to nitric oxide (NO) at pH 7. Citric acid, urea and sodium thiosulfate in the proportion of 1:1:3 were used as the carbon, nitrogen and sulfur sources, respectively, in the microwave synthesis of the carbon dots. For this synthesis, the three compounds were diluted in 15 mL of water and exposed for 5 min to 700 W microwave radiation. The main factor contributing to the increased sensitivity and selectivity of the response to NO at pH 7 is the sodium thiosulfate used as the sulfur source. A linear response range from 1 to 25 μM, with a sensitivity of 16 μM^-1 and a detection limit of 0.3 μM, was obtained. The NO quantification capability was assessed in standard and in fortified serum solutions. - Highlights: • S,N co-doped CDs were microwave synthesized from citric acid, urea and sodium thiosulfate. • The NO fluorescence sensing was evaluated at pH 7. • The selective and sensitive detection of NO at pH 7 was achieved. • Good NO quantification results in serum samples were obtained.

  5. Nanoparticle-based immunosensor with apoferritin templated metallic phosphate label for quantification of phosphorylated acetylcholinesterase

    Energy Technology Data Exchange (ETDEWEB)

    Du, Dan; Chen, Aiqiong; Xie, Yunying; Zhang, Aidong; Lin, Yuehe

    2011-05-15

    A new sandwich-like electrochemical immunosensor has been developed for quantification of organophosphorylated acetylcholinesterase (OP-AChE), an exposure biomarker of organophosphate pesticides and nerve agents. Zirconia nanoparticles (ZrO2 NPs) were anchored on a screen printed electrode (SPE) to preferably capture OP-AChE adducts by metal chelation with phospho-moieties, which was selectively recognized by lead phosphate-apoferritin labeled anti-AChE antibody (LPA-anti-AChE). The sandwich-like immunoreactions were performed among ZrO2 NPs, OP-AChE and LPA-anti-AChE to form ZrO2/OP-AChE/LPA-anti-AChE complex and the released lead ions were detected on a disposable SPE. The binding affinity was investigated by both square wave voltammetry (SWV) and quartz crystal microbalance (QCM) measurements. The proposed immunosensor yielded a linear response current over a broad OP-AChE concentrations range from 0.05 nM to 10 nM, with detection limit of 0.02 nM, which has enough sensitivity for monitoring of low-dose exposure to OPs. This method avoids the drawback of unavailability of commercial OP-specific antibody as well as amplifies detection signal by using apoferritin encoded metallic phosphate nanoparticle tags. This nanoparticle-based immunosensor offers a new method for rapid, sensitive, selective and inexpensive quantification of phosphorylated adducts for monitoring of OP pesticides and nerve agents exposures.

  6. Sulfur and nitrogen co-doped carbon dots sensors for nitric oxide fluorescence quantification

    International Nuclear Information System (INIS)

    Simões, Eliana F.C.; Leitão, João M.M.; Esteves da Silva, Joaquim C.G.

    2017-01-01

    Microwave-synthesized sulfur and nitrogen co-doped carbon dots responded selectively to nitric oxide (NO) at pH 7. Citric acid, urea and sodium thiosulfate in the proportion of 1:1:3 were used as the carbon, nitrogen and sulfur sources, respectively, in the microwave synthesis of the carbon dots. For this synthesis, the three compounds were diluted in 15 mL of water and exposed for 5 min to 700 W microwave radiation. The main factor contributing to the increased sensitivity and selectivity of the response to NO at pH 7 is the sodium thiosulfate used as the sulfur source. A linear response range from 1 to 25 μM, with a sensitivity of 16 μM^-1 and a detection limit of 0.3 μM, was obtained. The NO quantification capability was assessed in standard and in fortified serum solutions. - Highlights: • S,N co-doped CDs were microwave synthesized from citric acid, urea and sodium thiosulfate. • The NO fluorescence sensing was evaluated at pH 7. • The selective and sensitive detection of NO at pH 7 was achieved. • Good NO quantification results in serum samples were obtained.

  7. Cadmium voltametric quantification in table chocolate produced in Chiquinquira-Boyaca, Colombia

    Directory of Open Access Journals (Sweden)

    Paola Andrea Vargas Moreno

    2017-04-01

    Full Text Available Bioaccumulation of heavy metals such as cadmium has been a major concern for scientific communities and international food organizations, given the great toxicological risk to the consumer, and in many places there is no detailed record of its actual content. This creates the need to study and record the concentration of this metal in products such as table chocolate, which is widely consumed at the regional and national level. Likewise, effective quantification tools and a reliable and affordable method are needed to achieve the aim of this research. In this research, the cadmium content of powdered and granulated table chocolate produced and marketed in the municipality of Chiquinquira, Boyacá-Colombia, was determined using the differential pulse anodic stripping voltammetric method (DPVMAR). The parameters of this method (selectivity, linearity, sensitivity, precision and accuracy) were first evaluated, with satisfactory results: selectivity in the potential range of 0.54 to 0.64 V, sensitivity at the ppb level, R2 > 0.95, %CV 80%. Analysis of variance showed no statistically significant differences (P < 0.05) between the results. Cadmium quantification in granulated and powdered chocolate samples showed concentrations between 214 and 260 ppb, with the highest concentrations in powdered chocolate. The cadmium level did not exceed the tolerable weekly intake limit for this type of food.

  8. Validation of an HPLC method for quantification of total quercetin in Calendula officinalis extracts

    International Nuclear Information System (INIS)

    Muñoz Muñoz, John Alexander; Morgan Machado, Jorge Enrique; Trujillo González, Mary

    2015-01-01

    Introduction: Calendula officinalis extracts are used as a natural raw material in a wide range of pharmaceutical and cosmetic preparations; however, there are no official methods for quality control of these extracts. Objective: to validate an HPLC-based analytical method for the quantification of total quercetin in glycolic and hydroalcoholic extracts of Calendula officinalis. Methods: to quantify the total quercetin content in the matrices, it was necessary to hydrolyze the flavonoid glycosides under optimal conditions. The chromatographic separation was performed on a SiliaChrom C-18 column (4.6 x 150 mm, 5 µm), fitted with a SiliaChrom C-18 precolumn (4.6 x 10 mm, 5 µm), with UV detection at 370 nm. The gradient elution was performed with a mobile phase consisting of methanol (MeOH) and phosphoric acid (H3PO4) (0.08% w/v). Quantification was performed by the external standard method, by comparison with a quercetin reference standard. Results: the selectivity study against extract components and degradation products formed under acid/basic hydrolysis, oxidation and light exposure showed no signals that interfere with the quercetin quantification. It was statistically demonstrated that the method is linear from 1.0 to 5.0 mg/mL. Intermediate precision, expressed as the coefficient of variation, was 1.8% and 1.74%, and the recovery was 102.15% and 101.32% for the glycolic and hydroalcoholic extracts, respectively. Conclusions: the suggested methodology meets the quality parameters required for quantifying total quercetin, which makes it a useful tool for quality control of C. officinalis extracts. (author)
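
    With the external standard method mentioned above, the sample response is compared directly with a quercetin reference standard analysed under the same conditions, folding back in any dilution made during hydrolysis. A minimal sketch (the peak areas, standard concentration and dilution factor are invented):

    ```python
    def external_standard_conc(area_sample: float, area_standard: float,
                               conc_standard: float, dilution: float = 1.0) -> float:
        """Total quercetin concentration via single-point external standardisation."""
        return (area_sample / area_standard) * conc_standard * dilution

    # Example: hydrolysed extract diluted 5-fold before injection (assumed values)
    print(external_standard_conc(area_sample=182400.0,
                                 area_standard=150000.0,
                                 conc_standard=2.0,     # mg/mL quercetin reference standard
                                 dilution=5.0))
    ```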

  9. Quantification of fructo-oligosaccharides based on the evaluation of oligomer ratios using an artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Onofrejova, Lucia; Farkova, Marta [Department of Chemistry, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Preisler, Jan, E-mail: preisler@chemi.muni.cz [Department of Chemistry, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic)

    2009-04-13

    The application of an internal standard in quantitative analysis is desirable in order to correct for variations in sample preparation and instrumental response. In mass spectrometry of organic compounds, the internal standard is preferably labelled with a stable isotope, such as 18O, 15N or 13C. In this study, a method for the quantification of fructo-oligosaccharides using matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI TOF MS) was proposed and tested on raftilose, a partially hydrolysed inulin with a degree of polymerisation of 2-7. The tetrasaccharide nystose, which is chemically identical to the raftilose tetramer, was used as an internal standard rather than an isotope-labelled analyte. Two mathematical approaches to data processing, conventional calculations and artificial neural networks (ANN), were compared. The conventional data processing relies on the assumption of a constant oligomer dispersion profile that changes predictably after the addition of the internal standard, followed by some simple numerical calculations. The ANN, on the other hand, was found to compensate for the non-linear MALDI response and for variations in the oligomer dispersion profile with raftilose concentration. As a result, the application of ANN led to lower quantification errors and excellent day-to-day repeatability compared to the conventional data analysis. The developed method is feasible for MS quantification of raftilose in the range of 10-750 pg with errors below 7%. The content of raftilose was determined in dietary cream; the application can be extended to other similar polymers. It should be stressed that no special optimisation of the MALDI process was carried out: a common MALDI matrix and sample preparation were used, and only basic parameters, such as sampling and laser energy, were optimised prior to quantification.
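
    The conventional internal-standard calculation mentioned above can be sketched as follows; the spike level and peak intensities are invented for illustration, and the ANN variant is not reproduced here.

        # A known amount of nystose (chemically identical to the raftilose tetramer) is added,
        # and the original tetramer amount is inferred from the resulting signal increase.
        tetramer_area_before = 0.60   # assumed relative MALDI intensity of the DP4 peak
        tetramer_area_after = 0.95    # assumed intensity after spiking with nystose
        nystose_added_pg = 100.0      # known spike of internal standard

        response_per_pg = (tetramer_area_after - tetramer_area_before) / nystose_added_pg
        raftilose_dp4_pg = tetramer_area_before / response_per_pg
        print(f"Estimated raftilose tetramer in the sample: {raftilose_dp4_pg:.0f} pg")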

  10. Direct quantification of creatinine in human urine by using isotope dilution extractive electrospray ionization tandem mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Li Xue [Institute of Environmental Pollution and Health, School of Environmental and Chemical Engineering, Shanghai University, Shanghai 200444 (China); Jiangxi Key Laboratory for Mass Spectrometry and Instrumentation, Applied Chemistry Department, East China Institute of Technology, Nanchang 330013 (China); Fang Xiaowei [Jiangxi Key Laboratory for Mass Spectrometry and Instrumentation, Applied Chemistry Department, East China Institute of Technology, Nanchang 330013 (China); Yu Zhiqiang; Sheng Guoying [Guangdong Key Laboratory of Environmental Protection and Resource Utilization, State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China); Wu Minghong [Shanghai Applied Radiation Institute, School of Environmental and Chemical Engineering, Shanghai University, Shanghai 200444 (China); Fu Jiamo [Institute of Environmental Pollution and Health, School of Environmental and Chemical Engineering, Shanghai University, Shanghai 200444 (China); Guangdong Key Laboratory of Environmental Protection and Resource Utilization, State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China); Chen Huanwen, E-mail: chw8868@gmail.com [Jiangxi Key Laboratory for Mass Spectrometry and Instrumentation, Applied Chemistry Department, East China Institute of Technology, Nanchang 330013 (China)

    2012-10-20

    Highlights: ► High throughput analysis of urinary creatinine is achieved by using ID-EESI-MS/MS. ► Urine sample is directly analyzed and no sample pre-treatment is required. ► Accurate quantification is accomplished with isotope dilution technique. - Abstract: Urinary creatinine (CRE) is an important biomarker of renal function. Fast and accurate quantification of CRE in human urine is required by clinical research. By using isotope dilution extractive electrospray ionization tandem mass spectrometry (EESI-MS/MS) a high throughput method for direct and accurate quantification of urinary CRE was developed in this study. Under optimized conditions, the method detection limit was lower than 50 μg/L. Over the concentration range investigated (0.05-10 mg/L), the calibration curve was obtained with satisfactory linearity (R2 = 0.9861), and the relative standard deviation (RSD) values for CRE and isotope-labeled CRE (CRE-d3) were 7.1-11.8% (n = 6) and 4.1-11.3% (n = 6), respectively. The isotope dilution EESI-MS/MS method was validated by analyzing six human urine samples, and the results were comparable with the conventional spectrophotometric method (based on the Jaffe reaction). Recoveries for individual urine samples were 85-111% and less than 0.3 min was taken for each measurement, indicating that the present isotope dilution EESI-MS/MS method is a promising strategy for the fast and accurate quantification of urinary CRE in clinical laboratories.
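
    A generic isotope-dilution calculation consistent with the CRE/CRE-d3 scheme above is sketched below: a fixed amount of labelled standard is spiked into standards and samples, and the analyte concentration is read from a ratio calibration. All areas and concentrations are invented.

        import numpy as np

        # Ratio calibration: (CRE area / CRE-d3 area) versus CRE concentration in mg/L,
        # with the same CRE-d3 spike in every standard and sample (values assumed).
        cal_conc = np.array([0.05, 0.5, 1.0, 5.0, 10.0])
        cal_ratio = np.array([0.05, 0.49, 1.02, 4.95, 9.90])

        slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

        sample_ratio = 2.4            # assumed CRE/CRE-d3 area ratio in a urine sample
        cre_conc = (sample_ratio - intercept) / slope
        print(f"Urinary creatinine ≈ {cre_conc:.2f} mg/L")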

  11. Direct quantification of creatinine in human urine by using isotope dilution extractive electrospray ionization tandem mass spectrometry

    International Nuclear Information System (INIS)

    Li Xue; Fang Xiaowei; Yu Zhiqiang; Sheng Guoying; Wu Minghong; Fu Jiamo; Chen Huanwen

    2012-01-01

    Highlights: ► High throughput analysis of urinary creatinine is achieved by using ID-EESI–MS/MS. ► Urine sample is directly analyzed and no sample pre-treatment is required. ► Accurate quantification is accomplished with isotope dilution technique. - Abstract: Urinary creatinine (CRE) is an important biomarker of renal function. Fast and accurate quantification of CRE in human urine is required by clinical research. By using isotope dilution extractive electrospray ionization tandem mass spectrometry (EESI–MS/MS) a high throughput method for direct and accurate quantification of urinary CRE was developed in this study. Under optimized conditions, the method detection limit was lower than 50 μg/L. Over the concentration range investigated (0.05–10 mg/L), the calibration curve was obtained with satisfactory linearity (R2 = 0.9861), and the relative standard deviation (RSD) values for CRE and isotope-labeled CRE (CRE-d3) were 7.1–11.8% (n = 6) and 4.1–11.3% (n = 6), respectively. The isotope dilution EESI–MS/MS method was validated by analyzing six human urine samples, and the results were comparable with the conventional spectrophotometric method (based on the Jaffe reaction). Recoveries for individual urine samples were 85–111% and less than 0.3 min was taken for each measurement, indicating that the present isotope dilution EESI–MS/MS method is a promising strategy for the fast and accurate quantification of urinary CRE in clinical laboratories.

  12. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Laborda, Francisco; Medrano, Jesus; Castillo, Juan R.

    2004-01-01

    The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be included in the model, and it was concluded that the instrument works as a concentration detector when used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and double-point calibration gave results of the same quality as multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
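
    A weighted least-squares calibration of the kind described above, with statistical weights derived from estimated peak-area uncertainties, might look like the following sketch; all values are invented and the paper's full uncertainty model is not reproduced.

        import numpy as np

        conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])                 # standard concentrations (µg/L)
        area = np.array([980.0, 5050.0, 10100.0, 50500.0, 100800.0])   # peak areas (assumed)
        u_area = np.array([90.0, 150.0, 220.0, 900.0, 1800.0])         # estimated area uncertainties

        # np.polyfit expects weights of the form 1/sigma for Gaussian uncertainties
        slope, intercept = np.polyfit(conc, area, 1, w=1.0 / u_area)
        print(f"weighted fit: area = {slope:.1f} * conc + {intercept:.1f}")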

  13. Sensitive quantification of apomorphine in human plasma using a LC-ESI-MS-MS method.

    Science.gov (United States)

    Abe, Emuri; Alvarez, Jean-Claude

    2006-06-01

    An analytical method based on liquid chromatography coupled with ion trap mass spectrometry (MS) detection with an electrospray ionization interface has been developed for the identification and quantification of apomorphine in human plasma. Apomorphine was isolated from 0.5 mL of plasma using liquid-liquid extraction with diethyl ether, with boldine as internal standard and satisfactory extraction recoveries. Analytes were separated on a 5-μm C18 Highpurity (Thermohypersil) column (150 mm x 2.1 mm I.D.) maintained at 30 degrees C, coupled to a precolumn (C18, 5 μm, 10 mm x 2.0 mm I.D., Thermo). Elution was achieved isocratically with a mobile phase of 2 mM NH4COOH buffer pH 3.8/acetonitrile (50/50, vol/vol) at a flow rate of 200 μL per minute. Data were collected either in full-scan MS mode at m/z 150 to 500 or in full-scan tandem mass spectrometry mode, selecting the [M+H]+ ion at m/z 268.0 for apomorphine and m/z 328.0 for boldine. The most intense daughter ions of apomorphine (m/z 237.1) and boldine (m/z 297.0) were used for quantification. Retention times were 2.03 and 2.11 minutes for boldine and apomorphine, respectively. Calibration curves were linear in the 0.025 to 20 ng/mL range. The limits of detection and quantification were 0.010 ng/mL and 0.025 ng/mL, respectively. Accuracy and precision of the assay were measured by analyzing 54 quality control samples over 3 days. At concentrations of 0.075, 1.5, and 15 ng/mL, intraday precisions were less than 10.1%, 5.3%, and 3.8%, and interday precisions were less than 4.8%, 6.6%, and 6.5%, respectively. Accuracies were in the 99.5 to 104.2% range. An example of a patient who was given 6 mg of apomorphine subcutaneously is shown, with concentrations of 14.1 ng/mL after 30 minutes and 0.20 ng/mL after 6 hours. The method described enables the unambiguous identification and quantification of apomorphine with very good sensitivity using only 0.5 mL of sample, and is very convenient for therapeutic drug monitoring.
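
    The abstract reports limits of detection and quantification for the assay; one common way to derive such limits from a calibration curve (an assumption here, not necessarily the authors' procedure) is 3.3·s/slope and 10·s/slope, where s is the residual standard deviation, as sketched below with invented response values.

        import numpy as np

        conc = np.array([0.025, 0.1, 0.5, 1.0, 5.0, 20.0])     # ng/mL, matching the stated range
        resp = np.array([0.9, 3.7, 18.2, 36.5, 181.0, 725.0])  # invented peak-area ratios

        slope, intercept = np.polyfit(conc, resp, 1)
        residual_sd = np.std(resp - (slope * conc + intercept), ddof=2)

        lod = 3.3 * residual_sd / slope     # limit of detection estimate
        loq = 10.0 * residual_sd / slope    # limit of quantification estimate
        print(f"LOD ≈ {lod:.3f} ng/mL, LOQ ≈ {loq:.3f} ng/mL")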

  14. Functionalized linear and cyclic polyolefins

    Energy Technology Data Exchange (ETDEWEB)

    Tuba, Robert; Grubbs, Robert H.

    2018-02-13

    This invention relates to methods and compositions for preparing linear and cyclic polyolefins. More particularly, the invention relates to methods and compositions for preparing functionalized linear and cyclic polyolefins via olefin metathesis reactions. Polymer products produced via the olefin metathesis reactions of the invention may be utilized for a wide range of materials applications. The invention has utility in the fields of polymer and materials chemistry and manufacture.

  15. Non-linear osmosis

    Science.gov (United States)

    Diamond, Jared M.

    1966-01-01

    1. The relation between osmotic gradient and rate of osmotic water flow has been measured in rabbit gall-bladder by a gravimetric procedure and by a rapid method based on streaming potentials. Streaming potentials were directly proportional to gravimetrically measured water fluxes. 2. As in many other tissues, water flow was found to vary with gradient in a markedly non-linear fashion. There was no consistent relation between the water permeability and either the direction or the rate of water flow. 3. Water flow in response to a given gradient decreased at higher osmolarities. The resistance to water flow increased linearly with osmolarity over the range 186-825 m-osM. 4. The resistance to water flow was the same when the gall-bladder separated any two bathing solutions with the same average osmolarity, regardless of the magnitude of the gradient. In other words, the rate of water flow is given by the expression (Om — Os)/[Ro′ + ½k′ (Om + Os)], where Ro′ and k′ are constants and Om and Os are the bathing solution osmolarities. 5. Of the theories advanced to explain non-linear osmosis in other tissues, flow-induced membrane deformations, unstirred layers, asymmetrical series-membrane effects, and non-osmotic effects of solutes could not explain the results. However, experimental measurements of water permeability as a function of osmolarity permitted quantitative reconstruction of the observed water flow—osmotic gradient curves. Hence non-linear osmosis in rabbit gall-bladder is due to a decrease in water permeability with increasing osmolarity. 6. The results suggest that aqueous channels in the cell membrane behave as osmometers, shrinking in concentrated solutions of impermeant molecules and thereby increasing membrane resistance to water flow. A mathematical formulation of such a membrane structure is offered. PMID:5945254
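
    The flow expression quoted in point 4 above can be written in equation form as follows (LaTeX notation; the symbol J_v for the water flow is introduced here for clarity and is not part of the original abstract):

        J_v \;=\; \frac{O_m - O_s}{R_0' + \tfrac{1}{2}\,k'\,(O_m + O_s)}

    where O_m and O_s are the bathing-solution osmolarities and R_0' and k' are the constants defined in the abstract.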

  16. Quantification of 13 priority polycyclic aromatic hydrocarbons in human urine by headspace solid-phase microextraction gas chromatography-isotope dilution mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Campo, Laura [Department of Occupational and Environmental Health, University of Milan and Fondazione IRCCS Ospedale Maggiore Policlinico, Mangiagalli e Regina Elena, Milan (Italy)], E-mail: laura.campo@unimi.it; Mercadante, Rosa; Rossella, Federica; Fustinoni, Silvia [Department of Occupational and Environmental Health, University of Milan and Fondazione IRCCS Ospedale Maggiore Policlinico, Mangiagalli e Regina Elena, Milan (Italy)

    2009-01-12

    Polycyclic aromatic hydrocarbons (PAHs) are common environmental pollutants in both living and working environments. The aim of this study was the development of a headspace solid-phase microextraction gas chromatography-isotope dilution mass spectrometry (HS-SPME/GC-IDMS) method for the simultaneous quantification of 13 PAHs in urine samples. Different parameters affecting PAH extraction by HS-SPME were considered and optimized: type/thickness of fiber coatings, extraction temperature/time, desorption temperature/time, ionic strength and sample agitation. The stability of spiked PAH solutions and of real urine samples stored up to 90 days in containers of different materials was evaluated. In the optimized method, analytes were absorbed for 60 min at 80 °C in the sample headspace with a 100 μm polydimethylsiloxane fiber. The method is very specific, with a linear range from the limit of quantification to 8.67 x 10^3 ng/L, a within-run precision of <20% and a between-run precision of <20% for 2-, 3- and 4-ring compounds and of <30% for 5-ring compounds, trueness within 20% of the spiked concentration, and limits of quantification in the 2.28-2.28 x 10^1 ng/L range. An application of the proposed method to 15 urine samples from subjects exposed to PAHs at different environmental levels is shown.

  17. Rapid Quantification and Validation of Lipid Concentrations within Liposomes

    Directory of Open Access Journals (Sweden)

    Carla B. Roces

    2016-09-01

    Full Text Available Quantification of the lipid content in liposomal adjuvants for subunit vaccine formulation is of extreme importance, since this concentration impacts both efficacy and stability. In this paper, we outline a high performance liquid chromatography-evaporative light scattering detector (HPLC-ELSD) method that allows for the rapid and simultaneous quantification of lipid concentrations within liposomal systems prepared by three liposomal manufacturing techniques (lipid film hydration, high shear mixing, and microfluidics). The ELSD system was used to quantify four lipids: 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC), cholesterol, dimethyldioctadecylammonium (DDA) bromide, and ᴅ-(+)-trehalose 6,6′-dibehenate (TDB). The developed method offers rapidity, high sensitivity, direct linearity, and good consistency in the responses (R2 > 0.993) for the four lipids tested. The corresponding limits of detection (LOD) and limits of quantification (LOQ) were 0.11 and 0.36 mg/mL (DMPC), 0.02 and 0.80 mg/mL (cholesterol), 0.06 and 0.20 mg/mL (DDA), and 0.05 and 0.16 mg/mL (TDB), respectively. HPLC-ELSD was shown to be a rapid and effective method for the quantification of lipids within liposome formulations without the need for lipid extraction processes.

  18. Linearly constrained minimax optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Schjær-Jacobsen, Hans

    1978-01-01

    We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...

  19. Linear feedback controls the essentials

    CERN Document Server

    Haidekker, Mark A

    2013-01-01

    The design of control systems is at the very core of engineering. Feedback controls are ubiquitous, ranging from simple room thermostats to airplane engine control. Helping to make sense of this wide-ranging field, this book provides a new approach by keeping a tight focus on the essentials with a limited, yet consistent set of examples. Analysis and design methods are explained in terms of theory and practice. The book covers classical, linear feedback controls, and linear approximations are used when needed. In parallel, the book covers time-discrete (digital) control systems and juxtapos

  20. Quantification of fat using chemical shift imaging and 1H-MR spectroscopy in phantom model

    International Nuclear Information System (INIS)

    Peng Xingui; Ju Shenghong; Fang Fang; Teng Gaojun

    2010-01-01

    Objective: To evaluate the accuracy of chemical shift imaging (CSI) and MR spectroscopy (MRS) for fat quantification in a phantom model. Methods: Eleven phantoms were made with fat volume percentages ranging from 0 to 100% in 10% intervals. The fat concentration in the phantoms was measured by CSI and by MRS and compared using a one-sample t test; the correlation between the two methods was also analyzed. The concentrations of saturated fatty acids (FS) and unsaturated fatty acids (FU) and the polyunsaturation degree (PUD) were calculated from MRS. Results: By CSI, the measured fat concentration was (48.0 ± 1.0)%, (57.0 ± 0.5)%, (67.3 ± 0.6)%, (77.3 ± 0.6)%, (83.3 ± 0.6)% and (91.0 ± 1.0)% for fat volumes of 50% to 100%. By MRS, the measured fat concentration was (8.3 ± 0.6)%, (16.3 ± 0.7)%, (27.7 ± 0.6)%, (36.0 ± 1.0)%, (43.5 ± 0.6)% and (56.5 ± 1.0)% for fat volumes of 10% to 60%. The fat concentrations were underestimated by both CSI and MRS (P < 0.05) but correlated strongly with the true concentrations in the phantoms (CSI: r = 0.998, MRS: r = 0.996, P < 0.01). There was also a linear correlation between the two methods (r = 0.992, P < 0.01) with no statistically significant difference (paired-samples t test, t = -0.125, P = 0.903). By MRS, the relative ratios of FS and FU in fat were 0.15 and 0.85 and the PUD was 0.0325, highly consistent with the values in the phantoms. Conclusion: Both CSI and MRS are efficient and accurate methods for fat quantification at 7.0 T MR. (authors)

  1. Simultaneous quantification of coumarins, flavonoids and limonoids in Fructus Citri Sarcodactylis by high performance liquid chromatography coupled with diode array detector.

    Science.gov (United States)

    Chu, Jun; Li, Song-Lin; Yin, Zhi-Qi; Ye, Wen-Cai; Zhang, Qing-Wen

    2012-07-01

    A high performance liquid chromatography coupled with diode array detector (HPLC-DAD) method was developed for simultaneous quantification of eleven major bioactive components including six coumarins, three flavonoids and two limonoids in Fructus Citri Sarcodactylis. The analysis was performed on a Cosmosil 5C18-MS-II column (4.6 mm × 250 mm, 5 μm) with water-acetonitrile gradient elution. The method was validated in terms of linearity, sensitivity, precision, stability and accuracy. It was found that the calibration curves for all analytes showed good linearity (R2 > 0.9993) within the test ranges. The overall limit of detection (LOD) and limit of quantification (LOQ) were less than 3.0 and 10.2 ng. The relative standard deviations (RSDs) for intra- and inter-day repeatability were not more than 4.99% and 4.92%, respectively. The sample was stable for at least 48 h. The spike recoveries of the eleven components were 95.1-104.9%. The established method was successfully applied to determine the eleven components in three samples from different locations. The results showed that the newly developed HPLC-DAD method was linear, sensitive, precise and accurate, and could be used for quality control of Fructus Citri Sarcodactylis. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  3. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost that quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling in massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance of 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
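
    A minimal NumPy sketch of the stochastic diagonal estimation idea described above is given below; a dense solve stands in for the mixed-precision iterative solver, and the small random SPD matrix is only a stand-in for a real covariance matrix.

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_probes = 200, 100
        B = rng.standard_normal((n, n))
        A = B @ B.T + n * np.eye(n)                      # SPD "covariance" stand-in

        V = rng.choice([-1.0, 1.0], size=(n, n_probes))  # Rademacher probe vectors
        X = np.linalg.solve(A, V)                        # one solve, many right-hand sides

        diag_est = np.sum(V * X, axis=1) / n_probes      # Hutchinson-style estimate of diag(A^-1)
        diag_true = np.diag(np.linalg.inv(A))            # exact value, for comparison only
        print("max relative error:", np.max(np.abs(diag_est - diag_true) / diag_true))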

  4. Quantification of pramipexole in human plasma by liquid chromatography tandem mass spectrometry using tamsulosin as internal standard.

    Science.gov (United States)

    Nirogi, Ramakrishna V S; Kandikere, Vishwottam; Shrivastava, Wishu; Mudigonda, Koteshwara; Maurya, Santosh; Ajjala, Devender

    2007-11-01

    A high-performance liquid chromatography/electrospray ionization tandem mass spectrometry method was developed and validated for the quantification of pramipexole in human plasma. Following liquid-liquid extraction, the analytes were separated using an isocratic mobile phase on a reverse-phase column and analyzed by MS/MS in the multiple reaction monitoring mode using the respective [M+H]+ ions, m/z 212/152 for pramipexole and m/z 409/228 for the IS. The method exhibited a linear dynamic range of 200-8000 pg/mL for pramipexole in human plasma. The lower limit of quantification was 200 pg/mL with a relative standard deviation of less than 8%. Acceptable precision and accuracy were obtained for concentrations over the standard curve range. A run time of 3.5 min for each sample made it possible to analyze more than 200 human plasma samples per day. The validated method has been successfully used to analyze human plasma samples for application in pharmacokinetic, bioavailability or bioequivalence studies. Copyright (c) 2007 John Wiley & Sons, Ltd.

  5. Quantification of prebiotics in commercial infant formulas.

    Science.gov (United States)

    Sabater, Carlos; Prodanov, Marin; Olano, Agustín; Corzo, Nieves; Montilla, Antonia

    2016-03-01

    Since breastfeeding is not always possible, infant formulas (IFs) are supplemented with prebiotic oligosaccharides, such as galactooligosaccharides (GOS) and/or fructooligosaccharides (FOS), to exert effects similar to those of breast milk. Nowadays, a great number of infant formulas enriched with prebiotics are available on the market; however, there are scarce data about their composition. In this study, two chromatographic methods (GC-FID and HPLC-RID) were used in combination for the quantification of the carbohydrates present in commercial infant formulas. According to the results obtained by GC-FID for products containing prebiotics, the content of FOS, GOS and GOS/FOS was in the ranges of 1.6-5.0, 1.7-3.2, and 0.08-0.25/2.3-3.8 g/100 g of product, respectively. HPLC-RID analysis allowed quantification of maltodextrins with a degree of polymerization (DP) up to 19. The methodology proposed here may be used for routine quality control of infant formula and other food ingredients containing prebiotics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Quantification by aberration corrected (S)TEM of boundaries formed by symmetry breaking phase transformations

    Energy Technology Data Exchange (ETDEWEB)

    Schryvers, D., E-mail: nick.schryvers@uantwerpen.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Salje, E.K.H. [Department of Earth Sciences, University of Cambridge, Cambridge CB2 3EQ (United Kingdom); Nishida, M. [Department of Engineering Sciences for Electronics and Materials, Faculty of Engineering Sciences, Kyushu University, Kasuga, Fukuoka 816-8580 (Japan); De Backer, A. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Idrissi, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Institute of Mechanics, Materials and Civil Engineering, Université Catholique de Louvain, Place Sainte Barbe, 2, B-1348, Louvain-la-Neuve (Belgium); Van Aert, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2017-05-15

    The present contribution gives a review of recent quantification work of atom displacements, atom site occupations and level of crystallinity in various systems and based on aberration corrected HR(S)TEM images. Depending on the case studied, picometer range precisions for individual distances can be obtained, boundary widths at the unit cell level determined or statistical evolutions of fractions of the ordered areas calculated. In all of these cases, these quantitative measures imply new routes for the applications of the respective materials. - Highlights: • Quantification of picometer displacements at ferroelastic twin boundary in CaTiO3. • Quantification of kinks in meandering ferroelectric domain wall in LiNbO3. • Quantification of column occupation in anti-phase boundary in Co-Pt. • Quantification of atom displacements at twin boundary in Ni-Ti B19′ martensite.

  7. 1H NMR quantification in very dilute toxin solutions: application to anatoxin-a analysis.

    Science.gov (United States)

    Dagnino, Denise; Schripsema, Jan

    2005-08-01

    A complete procedure is described for the extraction, detection and quantification of anatoxin-a in biological samples. Anatoxin-a is extracted from biomass by a routine acid-base extraction. The extract is analysed by GC-MS, without the need for derivatization, with a detection limit of 0.5 ng. A method was developed for the accurate quantification of anatoxin-a in the standard solution to be used for the calibration of the GC analysis: 1H NMR allowed the accurate quantification of microgram quantities of anatoxin-a. The accurate quantification of compounds in standard solutions is rarely discussed, but for compounds like anatoxin-a (toxins with prices in the range of a million dollars a gram), of which generally only milligram quantities or less are available, this factor in the quantitative analysis is certainly not trivial. The method that was developed can easily be adapted for the accurate quantification of other toxins in very dilute solutions.

  8. Towards the International Linear Collider

    International Nuclear Information System (INIS)

    Lopez-Fernandez, Ricardo

    2006-01-01

    The broad physics potential of e+e- linear colliders was recognized by the high energy physics community right after the end of LEP in 2000. In 2007, the Large Hadron Collider (LHC), now under construction at CERN, will obtain its first collisions. The LHC, colliding protons with protons at 14 TeV, will discover a standard model Higgs boson over the full potential mass range, and should be sensitive to new physics into the several TeV range. The program for the Linear Collider (LC) will be set in the context of the discoveries made at the LHC. All the proposals for a Linear Collider will extend the discoveries and provide a wealth of measurements that are essential for a deeper understanding of their meaning, pointing the way to further evolution of particle physics in the future. For the Mexican groups, this is the right time to join such an effort.

  9. Compositional Solution Space Quantification for Probabilistic Software Analysis

    Science.gov (United States)

    Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem

    2014-01-01

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement over previous approaches in both accuracy and analysis time.
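
    A plain Monte Carlo version of the quantification task described above is sketched below for a hypothetical two-variable path constraint over a bounded box; the interval-constraint-propagation focusing that gives the paper its efficiency is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)
        lo, hi = np.array([-10.0, -10.0]), np.array([10.0, 10.0])  # bounded input domain

        def path_constraint(x, y):
            # stand-in for conditions collected by symbolic execution
            return (x * x + y > 3.0) & (np.sin(x) < y)

        n = 1_000_000
        pts = rng.uniform(lo, hi, size=(n, 2))
        p = path_constraint(pts[:, 0], pts[:, 1]).mean()   # fraction of the box satisfying the constraint
        stderr = np.sqrt(p * (1 - p) / n)
        print(f"P(target) ≈ {p:.4f} ± {1.96 * stderr:.4f}")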

  10. Metering error quantification under voltage and current waveform distortion

    Science.gov (United States)

    Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran

    2017-09-01

    With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion causes metering errors in smart meters. Because of the negative effects on metering accuracy and fairness, the combined energy-metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a quantification method for the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a quantification method for the metering accuracy error is also proposed. By analyzing the mode error and the accuracy error, a comprehensive error analysis method is presented that is suitable for new energy sources and nonlinear loads.

  11. Development and Validation of an HPLC Method for Simultaneous Quantification of Clopidogrel Bisulfate, Its Carboxylic Acid Metabolite, and Atorvastatin in Human Plasma: Application to a Pharmacokinetic Study

    Directory of Open Access Journals (Sweden)

    Octavian Croitoru

    2015-01-01

    Full Text Available A simple, sensitive, and specific reversed phase liquid chromatographic method was developed and validated for simultaneous quantification of clopidogrel, its carboxylic acid metabolite, and atorvastatin in human serum. Plasma samples were deproteinized with acetonitrile and ibuprofen was chosen as internal standard. Chromatographic separation was performed on a BDS Hypersil C18 column (250 × 4.6 mm; 5 μm) via gradient elution with a mobile phase consisting of 10 mM sodium phosphate buffer solution (pH = 2.6, adjusted with 85% orthophosphoric acid) : acetonitrile : methanol at a flow rate of 1 mL·min−1. Detection was achieved with a PDA detector at 220 nm. The method was validated in terms of linearity, sensitivity, precision, accuracy, limit of quantification, and stability. Calibration curves of the analytes were found to be linear in the ranges of 0.008–2 μg·mL−1 for clopidogrel, 0.01–4 μg·mL−1 for its carboxylic acid metabolite, and 0.005–2.5 μg·mL−1 for atorvastatin. The results of accuracy (as recovery, with ibuprofen as internal standard) were in the range of 96–98% for clopidogrel, 94–98% for its carboxylic acid metabolite, and 90–99% for atorvastatin, respectively.

  12. Performance verification and comparison of TianLong automatic hypersensitive hepatitis B virus DNA quantification system with Roche CAP/CTM system.

    Science.gov (United States)

    Li, Ming; Chen, Lin; Liu, Li-Ming; Li, Yong-Li; Li, Bo-An; Li, Bo; Mao, Yuan-Li; Xia, Li-Fang; Wang, Tong; Liu, Ya-Nan; Li, Zheng; Guo, Tong-Sheng

    2017-10-07

    To investigate and compare the analytical and clinical performance of the TianLong automatic hypersensitive hepatitis B virus (HBV) DNA quantification system and the Roche CAP/CTM system. Two hundred blood samples for HBV DNA testing, HBV-DNA negative samples and high-titer HBV-DNA mixture samples were collected and prepared. National standard materials for serum HBV and a worldwide HBV DNA panel were employed for performance verification. The analytical performance, such as limit of detection, limit of quantification, accuracy, precision, reproducibility, linearity, genotype coverage and cross-contamination, was determined using the TianLong automatic hypersensitive HBV DNA quantification system (TL system). Correlation and Bland-Altman plot analyses were carried out to compare the clinical performance of the TL system assay and the CAP/CTM system. The detection limit of the TL system was 10 IU/mL, and its limit of quantification was 30 IU/mL. The differences between the expected and tested concentrations of the national standards were less than ± 0.4 log10 IU/mL, which showed high accuracy of the system. Results of the precision, reproducibility and linearity tests showed that the multiple test coefficient of variation (CV) of the same sample was less than 5% for 10^2-10^6 IU/mL; and for 30-10^8 IU/mL, the linear correlation coefficient r2 = 0.99. The TL system detected HBV DNA genotypes (A-H) and there was no cross-contamination during the "checkerboard" test. When compared with the CAP/CTM assay, the two assays showed 100% consistency in both negative and positive sample results (15 negative samples and 185 positive samples). No statistical differences between the two assays in the HBV DNA quantification values were observed (P > 0.05). Correlation analysis indicated a significant correlation between the two assays, r2 = 0.9774. The Bland-Altman plot analysis showed that 98.9% of the positive data were within the 95% acceptable range, and the maximum difference
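
    The Bland-Altman agreement check used above can be sketched as follows on log10 viral loads; the six paired values are invented and are not the study data.

        import numpy as np

        tl = np.array([2.1, 3.4, 4.8, 5.2, 6.7, 7.9])    # log10 IU/mL, TL system (assumed)
        cap = np.array([2.0, 3.5, 4.9, 5.0, 6.8, 8.1])   # log10 IU/mL, CAP/CTM (assumed)

        diff = tl - cap
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)                    # 95% limits of agreement around the bias

        within = np.mean(np.abs(diff - bias) <= loa) * 100
        print(f"bias = {bias:+.3f} log10, limits of agreement = ±{loa:.3f}, {within:.0f}% within limits")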

  13. In vivo quantification of DNA double strand breaks

    International Nuclear Information System (INIS)

    Simonsson, M.; Qvarnstroem, F.; Turesson, I.; Johansson, K.-A.; Nyman, J.; Hermansson, I.; Oden, A.; Book, M.

    2003-01-01

    DNA double strand breaks (DSBs) can be introduced into the genome by exposure to exogenous agents such as ionising radiation and radio-mimetic chemicals. The biological importance of these breaks is significant even at low numbers: inaccurate repair or lack of repair of a single DSB has the potential to kill a cell or lead to tumourigenesis. Thus the induction and repair of DSBs are crucial events in the onset of malignancies. Following the induction of DSBs, the core histone H2AX is rapidly phosphorylated at residue serine 139. This phosphorylated form of H2AX is referred to as γH2AX. Histones in megabase regions flanking these breaks are involved in this process, which results in the formation of discrete nuclear foci. It has previously been shown that a single DSB is sufficient to produce a detectable focus. So far there has been a lack of methods capable of measuring the amount of DSBs at clinically relevant quantities. Such a method would embrace a wide field of applications: it could be applied as a biological dosimeter when studying carcinogenic effects and provide the basis for an assay predicting individual radiosensitivity. We describe a measurement procedure that detects and quantifies small amounts of DSBs in vivo. This is accomplished using immunofluorescence detection of the molecular marker γH2AX. The γH2AX foci are quantified in histological sections using basic digital image analysis methods as the main component. In a primary assessment of the procedure we analysed the in vivo dose response of prostate cancer patients undergoing radiotherapy in clinical practice. Epidermal nucleated cells in skin biopsies taken 30 minutes after delivery of the first single dose showed a linear dose response for low doses ranging from 0 to 1.2 Gy. The described procedure for double strand break quantification can detect dose changes as low as 0.18 Gy.

  14. A refined methodology for modeling volume quantification performance in CT

    Science.gov (United States)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  15. AGNES at vibrated gold microwire electrode for the direct quantification of free copper concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Domingos, Rute F., E-mail: rdomingos@ipgp.fr [Centro de Química Estrutural, Instituto Superior Técnico, Universidade de Lisboa, Torre Sul Lab 11-6.3, Av. Rovisco Pais #1, 1049-001 Lisbon (Portugal); Carreira, Sara [Centro de Química Estrutural, Instituto Superior Técnico, Universidade de Lisboa, Torre Sul Lab 11-6.3, Av. Rovisco Pais #1, 1049-001 Lisbon (Portugal); Galceran, Josep [Department of Chemistry, University of Lleida and Agrotecnio, Rovira Roure 191, 25198 Lleida (Spain); Salaün, Pascal [School of Environmental Sciences, University of Liverpool, 4 Brownlow Street, Liverpool L693 GP (United Kingdom); Pinheiro, José P. [LIEC/ENSG, UMR 7360 CNRS – Université de Lorraine, 15 Avenue du Charmois, 54500 Vandoeuvre-les-Nancy (France)

    2016-05-12

    The free metal ion concentration and the dynamic features of the metal species are recognized as key to predicting metal bioavailability and toxicity to aquatic organisms. Quantification of the former is, however, still challenging. In this paper, it is shown for the first time that the concentration of free copper (Cu2+) can be quantified by applying AGNES (Absence of Gradients and Nernstian Equilibrium Stripping) at a solid gold electrode. It was found that: i) the amount of deposited Cu follows a Nernstian relationship with the applied deposition potential, and ii) the stripping signal is linearly related to the free metal ion concentration. The performance of AGNES at the vibrating gold microwire electrode (VGME) was assessed for two labile systems, Cu-malonic acid and Cu-iminodiacetic acid, at ionic strength 0.01 M and over a pH range from 4.0 to 6.0. The free Cu concentrations and conditional stability constants obtained by AGNES were in good agreement with stripping scanned voltammetry and with thermodynamic predictions obtained with Visual MinteQ. This work highlights the suitability of gold electrodes for the quantification of free metal ion concentrations by AGNES. It also strongly suggests that other solid electrodes may be well suited to this task. This new application of AGNES is a first step towards a range of applications for a number of metals in speciation, toxicological and environmental studies for the direct determination of the key parameter that is the free metal ion concentration. - Highlights: • AGNES principles are valid at the vibrating gold microwire electrode (VGME). • VGME was successfully employed to quantify free Cu concentrations by using AGNES. • Stability constants of labile systems were in good agreement with predictions.
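
    The Nernstian deposition behaviour noted in point i) can be written schematically as below (LaTeX notation; the symbols are introduced here for illustration only, and the exact working equations of AGNES at a solid electrode are those given in the paper):

        n_{Cu} \;\propto\; [\mathrm{Cu}^{2+}]\,\exp\!\left(-\frac{nF\,(E_{dep} - E^{0\prime})}{RT}\right)

    where n_{Cu} is the amount of deposited copper, E_{dep} the applied deposition potential, E^{0'} the formal potential, and n, F, R, T have their usual electrochemical meanings; the stripping signal is then taken to be proportional to n_{Cu} and hence to the free Cu2+ concentration.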

  16. Validation and evaluation of an HPLC methodology for the quantification of the potent antimitotic compound (+)-discodermolide in the Caribbean marine sponge Discodermia dissoluta.

    Science.gov (United States)

    Valderrama, Katherine; Castellanos, Leonardo; Zea, Sven

    2010-08-01

    The sponge Discodermia dissoluta is the source of the potent antimitotic compound (+)-discodermolide. The relatively abundant and shallow populations of this sponge in Santa Marta, Colombia, allow for studies to evaluate the natural and biotechnological supply options for (+)-discodermolide. In this work, an RP-HPLC-UV methodology for the quantification of (+)-discodermolide in sponge samples was tested and validated. Our protocol for extracting this compound from the sponge included lyophilization, exhaustive methanol extraction, partitioning using water and dichloromethane, purification of the organic fraction on RP-18 cartridges and finally retrieving the (+)-discodermolide in the methanol-water (80:20 v/v) fraction. This fraction was injected into an HPLC system with an Xterra RP-18 column and a detection wavelength of 235 nm. The calibration curve was linear, making it possible to calculate the limits of detection and quantification in these experiments. The intra-day and inter-day precision showed relative standard deviations lower than 5%. The accuracy, determined as the percentage recovery, was 99.4%. Nine samples of the sponge from the Bahamas, Bonaire, Curaçao and Santa Marta had concentrations of (+)-discodermolide ranging from 5.3 to 29.3 μg/g of wet sponge. This methodology is quick and simple, allowing for quantification in sponges from natural environments, in situ cultures or dissociated cells.

  17. Quantification of Parvovirus B19 DNA Using COBAS AmpliPrep Automated Sample Preparation and LightCycler Real-Time PCR

    Science.gov (United States)

    Schorling, Stefan; Schalasta, Gunnar; Enders, Gisela; Zauke, Michael

    2004-01-01

    The COBAS AmpliPrep instrument (Roche Diagnostics GmbH, D-68305 Mannheim, Germany) automates the entire sample preparation process of nucleic acid isolation from serum or plasma for polymerase chain reaction analysis. We report the analytical performance of the LightCycler Parvovirus B19 Quantification Kit (Roche Diagnostics) using nucleic acids isolated with the COBAS AmpliPrep instrument. Nucleic acids were extracted using the Total Nucleic Acid Isolation Kit (Roche Diagnostics) and amplified with the LightCycler Parvovirus B19 Quantification Kit. The kit combination processes 72 samples per 8-hour shift. The lower detection limit is 234 IU/ml at a 95% hit-rate, linear range approximately 10^4-10^10 IU/ml, and overall precision 16 to 40%. Relative sensitivity and specificity in routine samples from pregnant women are 100% and 93%, respectively. Identification of a persistent parvovirus B19-infected individual by the polymerase chain reaction among 51 anti-parvovirus B19 IgM-negative samples underlines the importance of additional nucleic acid testing in pregnancy and its superiority to serology in identifying the risk of parvovirus B19 transmission via blood or blood products. Combination of the Total Nucleic Acid Isolation Kit on the COBAS AmpliPrep instrument with the LightCycler Parvovirus B19 Quantification Kit provides a reliable and time-saving tool for sensitive and accurate detection of parvovirus B19 DNA. PMID:14736825

  18. Recurrence quantification analysis theory and best practices

    CERN Document Server

    Webber, Charles L., Jr.; Marwan, Norbert

    2015-01-01

    The analysis of recurrences in dynamical systems by using recurrence plots and their quantification is still an emerging field. Over the past decades recurrence plots have proven to be valuable data visualization and analysis tools in the theoretical study of complex, time-varying dynamical systems as well as in various applications in biology, neuroscience, kinesiology, psychology, physiology, engineering, physics, geosciences, linguistics, finance, economics, and other disciplines. This multi-authored book intends to comprehensively introduce and showcase recent advances as well as established best practices concerning both theoretical and practical aspects of recurrence plot based analysis. Edited and authored by leading researchers in the field, the various chapters address an interdisciplinary readership, ranging from theoretical physicists to application-oriented scientists in all data-providing disciplines.

  19. Simultaneous characterization and quantification of 17 main compounds in Rabdosia rubescens by high performance liquid chromatography

    Directory of Open Access Journals (Sweden)

    Sen Guo

    2017-04-01

    Full Text Available Rabdosia rubescens is a healthy herbal tea and well-known Chinese medicinal herb. To evaluate the quality of R. rubescens from China, a high performance liquid chromatography method with dual-wavelength detection was developed and validated. The method was successfully applied for the simultaneous characterization and quantification of 17 main constituents from four different cultivation regions in China. Under optimal conditions, analysis was performed on a Luna C-18 column with gradient elution using a solvent system of acetonitrile and 0.5% (v/v) acetic acid–water at a flow rate of 1.0 mL/min and detection wavelengths of 220 nm and 280 nm. All standard calibration curves exhibited good linearity (r2 > 0.9992) within the test ranges. The precision was evaluated by intraday and interday tests, which revealed relative standard deviation values within the ranges of 0.57–2.35% and 0.52–3.40%, respectively. The recoveries were in the range of 96.37–101.66%. The relative standard deviation values for stability and repeatability were < 5%. The contents of some compounds were low and varied with different cultivars. The proposed method could serve as a prerequisite for quality control of R. rubescens materials and products.

  20. Collagen Quantification in Tissue Specimens.

    Science.gov (United States)

    Coentro, João Quintas; Capella-Monsonís, Héctor; Graceffa, Valeria; Wu, Zhuning; Mullen, Anne Maria; Raghunath, Michael; Zeugolis, Dimitrios I

    2017-01-01

    Collagen is the major extracellular protein in mammals. Accurate quantification of collagen is essential in the biomaterials (e.g., reproducible collagen scaffold fabrication), drug discovery (e.g., assessment of collagen in pathophysiologies, such as fibrosis), and tissue engineering (e.g., quantification of cell-synthesized collagen) fields. Although measuring hydroxyproline content is the most widely used method to quantify collagen in biological specimens, the process is very laborious. To this end, the Sircol™ Collagen Assay is widely used due to its inherent simplicity and convenience. However, this method leads to overestimation of collagen content due to the interaction of Sirius red with basic amino acids of non-collagenous proteins. Herein, we describe the addition of an ultrafiltration purification step in the process to accurately determine collagen content in tissues.

  1. Correlation between US-PSV and MDCTA in the quantification of carotid artery stenosis

    International Nuclear Information System (INIS)

    Saba, Luca; Sanfilippo, Roberto; Montisci, Roberto; Mallarini, Giorgio

    2010-01-01

    Purpose: Stroke is a major cause of death and serious long-lasting neurological disability, and the severity of carotid artery stenosis is one of the most important determinants of cerebrovascular events. The purpose of this paper is to evaluate the correlation between multi-detector-row CT angiography (MDCTA) and ultrasound peak systolic velocity (US-PSV) in the quantification of carotid artery stenosis. Methods and material: 52 patients were retrospectively studied using four-detector-row CT and ultrasound. In each patient the degree of stenosis was assessed with the NASCET method on MDCTA and with PSV on ultrasound. Statistical analysis (Pearson's method) was performed to determine the degree of correlation between MDCTA and US-PSV, and Bland-Altman analysis was applied to assess the level of inter-technique agreement. Results: Sonographic PSV measurements ranged from 70 to 589 cm/s. Distal ICA velocities ranged from 29 to 238 cm/s. Linear regression analysis showed a good correlation (r2 = 0.613) between MDCTA-NASCET linear percentage stenosis and measured PSV. The PSV value corresponding to a NASCET linear percentage stenosis of 70% was 283 cm/s; with this threshold, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were 75%, 88.6%, 90.7% and 70.5%, respectively. Conclusions: The results of our study suggest that NASCET stenosis measured on MDCTA and PSV values have a good correlation. The use of a threshold of 283 cm/s allows good values of sensitivity and specificity to be obtained.
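
    The diagnostic indices quoted above follow from a simple 2x2 comparison of the PSV cutoff against the MDCTA-NASCET >= 70% reference, as in the sketch below; the eight paired values are invented and are not the 52 study patients.

        import numpy as np

        psv = np.array([310, 120, 450, 270, 295, 90, 500, 200])   # cm/s (assumed)
        nascet = np.array([80, 40, 85, 75, 72, 30, 90, 65])       # % stenosis by MDCTA (assumed)

        test_pos = psv >= 283        # PSV threshold from the abstract
        disease = nascet >= 70       # reference standard

        tp = np.sum(test_pos & disease)
        tn = np.sum(~test_pos & ~disease)
        fp = np.sum(test_pos & ~disease)
        fn = np.sum(~test_pos & disease)

        sens, spec = tp / (tp + fn), tn / (tn + fp)
        ppv, npv = tp / (tp + fp), tn / (tn + fn)
        print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, PPV {ppv:.2f}, NPV {npv:.2f}")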

  2. A reliable method of quantification of trace copper in beverages with and without alcohol by spectrophotometry after cloud point extraction

    Directory of Open Access Journals (Sweden)

    Ramazan Gürkan

    2013-01-01

    Full Text Available A new cloud point extraction (CPE) method was developed for the separation and preconcentration of copper(II) prior to spectrophotometric analysis. For this purpose, 1-[(2,4-dimethylphenyl)azo]naphthalen-2-ol (Sudan II) was used as the chelating agent and the solution pH was adjusted to 10.0 with borate buffer. Polyethylene glycol tert-octylphenyl ether (Triton X-114) was used as the extracting agent in the presence of sodium dodecylsulphate (SDS). After phase separation, based on the cloud point of the mixture, the surfactant-rich phase was diluted with acetone, and the enriched analyte was determined spectrophotometrically at 537 nm. The variables affecting CPE efficiency were optimized. The calibration curve was linear within the range 0.285-20 µg L-1 with a detection limit of 0.085 µg L-1. The method was successfully applied to the quantification of copper in different beverage samples.

  3. Evaluation of the Coat-A-Count 125I fentanyl RIA: Comparison of 125I RIA and GC/MS-SIM for quantification of fentanyl in case urine specimens

    Energy Technology Data Exchange (ETDEWEB)

    Watts, V.W.; Caplan, Y.H. (Mesa Police Crime Laboratory, AZ (USA))

    1990-09-01

    The Coat-A-Count solid phase 125I Fentanyl Radioimmunoassay was evaluated with respect to linearity and precision using equine urine fortified with fentanyl and then compared with a gas chromatographic/mass spectrometric method for quantification of fentanyl in urine. The RIA assay was found to be linear over the urine fentanyl concentration range of 0.25 to 7.5 ng/mL and precise with coefficients of variation (CV) ranging from 9.6 to 19.3%. The RIA calibrators, ranging in fentanyl concentrations from 0.25 to 7.5 ng/mL, and controls, at mean fentanyl concentrations of 0.46 and 1.32 ng/mL, were compared by both the RIA and GC/MS methods. The cross-reactivity with the 125I RIA test was determined for the fentanyl metabolites, norfentanyl and hydroxyfentanyl, and found to be 5% and 35%, respectively. The illicit fentanyl analogs were found to show significant cross-reactivity, ranging from 20 to 100%. The 125I RIA was compared to GC/MS quantifications of fentanyl in 35 positive and 20 negative case urine specimens.

  4. Evaluation of the Coat-A-Count 125I fentanyl RIA: Comparison of 125I RIA and GC/MS-SIM for quantification of fentanyl in case urine specimens

    International Nuclear Information System (INIS)

    Watts, V.W.; Caplan, Y.H.

    1990-01-01

    The Coat-A-Count solid phase 125I Fentanyl Radioimmunoassay was evaluated with respect to linearity and precision using equine urine fortified with fentanyl and then compared with a gas chromatographic/mass spectrometric method for quantification of fentanyl in urine. The RIA assay was found to be linear over the urine fentanyl concentration range of 0.25 to 7.5 ng/mL and precise with coefficients of variation (CV) ranging from 9.6 to 19.3%. The RIA calibrators, ranging in fentanyl concentrations from 0.25 to 7.5 ng/mL, and controls, at mean fentanyl concentrations of 0.46 and 1.32 ng/mL, were compared by both the RIA and GC/MS methods. The cross-reactivity with the 125I RIA test was determined for the fentanyl metabolites, norfentanyl and hydroxyfentanyl, and found to be 5% and 35%, respectively. The illicit fentanyl analogs were found to show significant cross-reactivity, ranging from 20 to 100%. The 125I RIA was compared to GC/MS quantifications of fentanyl in 35 positive and 20 negative case urine specimens

  5. Linear shaped charge

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, David; Stofleth, Jerome H.; Saul, Venner W.

    2017-07-11

    Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated V-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.

  6. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  7. Simple quantification of phenolic compounds present in the minor fraction of virgin olive oil by LC-DAD-FLD.

    Science.gov (United States)

    Godoy-Caballero, M P; Acedo-Valenzuela, M I; Galeano-Díaz, T

    2012-11-15

    This paper presents the results of a study on the extraction, identification and quantification of a group of important phenolic compounds in virgin olive oil (VOO) samples, obtained from olives of various varieties, by liquid chromatography coupled to UV-vis and fluorescence detection. Sixteen phenolic compounds belonging to different families were identified and quantified in a total run time of 25 min. The linearity was examined by establishing external standard calibration curves. Linear ranges spanning four orders of magnitude, with limits of detection ranging from 0.02 to 0.6 μg mL(-1) and from 0.006 to 0.3 μg mL(-1), were achieved using UV-vis and fluorescence detection, respectively. Regarding the real samples, for the determination of the phenolic compounds present in higher concentrations (hydroxytyrosol and tyrosol) a simple liquid-liquid extraction with ethanol was used to make the sample compatible with the mobile phase. Recovery values close to 100% were obtained. However, a previous solid phase extraction with Diol cartridges was necessary to concentrate the minor phenolic compounds and separate them from the main interferences. The parameters affecting this step were carefully optimized and, after that, recoveries near 80-100% were obtained for the rest of the studied phenolic compounds. Also, the limits of detection were improved 15 times. Finally, the standard addition method was carried out for each of the analytes and no matrix effect was found, so the quantification of the 16 phenolic compounds from different monovarietal VOOs was carried out using the corresponding external standard calibration plot. Copyright © 2012 Elsevier B.V. All rights reserved.
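
    The matrix-effect check mentioned above amounts to comparing the slope of a standard-addition line prepared in the sample with the slope of the external-standard calibration; similar slopes justify quantifying against the external curve. A sketch with invented numbers, not the authors' data:

```python
import numpy as np

# External-standard calibration (µg/mL vs. peak area), invented values
c_std = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area_std = np.array([52.0, 101.0, 205.0, 498.0, 1003.0])
slope_ext, _ = np.polyfit(c_std, area_std, 1)

# Standard addition on the extract: known spike levels added to the sample
spike = np.array([0.0, 1.0, 2.0, 4.0])
area_spiked = np.array([160.0, 262.0, 358.0, 565.0])
slope_sa, intercept_sa = np.polyfit(spike, area_spiked, 1)

# Slopes within a few percent of each other suggest a negligible matrix effect,
# so the external curve can be used; standard addition itself would give
# c = intercept / slope for the unspiked sample.
print(f"slope ratio = {slope_sa / slope_ext:.2f}")
print(f"standard-addition estimate = {intercept_sa / slope_sa:.2f} µg/mL")
```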

  8. Linear-Algebra Programs

    Science.gov (United States)

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
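
    From a modern scripting environment the same BLAS routines are reachable through thin wrappers; the sketch below (Python with SciPy, which exposes the FORTRAN routines by name) calls the level-1 routine daxpy and the level-3 routine dgemm purely as an illustration of the library's role:

```python
import numpy as np
from scipy.linalg.blas import daxpy, dgemm

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])

# Level-1 BLAS: y <- a*x + y
print(daxpy(x, y, a=2.0))          # [12. 24. 36.]

# Level-3 BLAS: C <- alpha * A @ B
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])
print(dgemm(alpha=1.0, a=a, b=b))  # [[19. 22.] [43. 50.]]
```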

  9. HPLC Quantification of Cytotoxic Compounds from Aspergillus niger

    Directory of Open Access Journals (Sweden)

    Paula Karina S. Uchoa

    2017-01-01

    A high-performance liquid chromatography method was developed and validated for the quantification of the cytotoxic compounds produced by a marine strain of Aspergillus niger. The fungus was grown in malt peptone dextrose (MPD), potato dextrose yeast (PDY), and mannitol peptone yeast (MnPY) media during 7, 14, 21, and 28 days, and the natural products were identified using standard compounds. The validation parameters obtained were selectivity, linearity (coefficient of correlation > 0.99), precision (relative standard deviation below 5%), and accuracy (recovery > 96%).

  10. Topics in quaternion linear algebra

    CERN Document Server

    Rodman, Leiba

    2014-01-01

    Quaternions are a number system that has become increasingly useful for representing the rotations of objects in three-dimensional space and has important applications in theoretical and applied mathematics, physics, computer science, and engineering. This is the first book to provide a systematic, accessible, and self-contained exposition of quaternion linear algebra. It features previously unpublished research results with complete proofs and many open problems at various levels, as well as more than 200 exercises to facilitate use by students and instructors. Applications presented in the book include numerical ranges, invariant semidefinite subspaces, differential equations with symmetries, and matrix equations. Designed for researchers and students across a variety of disciplines, the book can be read by anyone with a background in linear algebra, rudimentary complex analysis, and some multivariable calculus. Instructors will find it useful as a complementary text for undergraduate linear algebra courses...

  11. Quantification of Andrographolide Isolated from Andrographis paniculata Nees Obtained from Traditional Market in Yogyakarta Using Validated HPLC

    Directory of Open Access Journals (Sweden)

    Yandi Syukri

    2016-08-01

    This research aimed to quantify andrographolide isolated from A. paniculata Nees obtained from a traditional market in Yogyakarta, using a validated HPLC method, in order to obtain a high content of andrographolide. The extraction of andrographolide from A. paniculata was carried out using ethanol as the solvent. Fractionation and isolation were continued using a non-polar solvent. Next, the extracts were re-crystallized to obtain isolated andrographolide. The identity of the compound was confirmed through analysis of the melting point, IR spectra, and TLC. The purity of the compound was confirmed by the validated HPLC. The data obtained were then compared using an analytical grade of andrographolide as the standard. The melting point, IR spectra and TLC analysis of the isolated andrographolide were similar to those of the standard andrographolide. The method to determine the content of isolated andrographolide showed adequate precision, with a relative standard deviation (RSD) smaller than 1%. The accuracy was good, with acceptable recovery values obtained for all concentrations used. The HPLC method in this study showed specificity and selectivity, with linearity in the working range and good precision and accuracy, making it very suitable for the quantification of andrographolide isolated from A. paniculata. When compared to the standard, the purity of the isolated andrographolide was 95.74 ± 0.29%.

  12. A validated and densitometric HPTLC method for the simultaneous quantification of reserpine and ajmalicine in Rauvolfia serpentina and Rauvolfia tetraphylla

    Directory of Open Access Journals (Sweden)

    Devendra Kumar Pandey

    A high performance thin layer chromatographic (HPTLC) method has been developed for the quantification of reserpine and ajmalicine in the root part of two different populations of Rauvolfia serpentina (L.) Benth. ex Kurz and Rauvolfia tetraphylla L., Apocynaceae, collected from Punjab and Uttarakhand. HPTLC of the methanolic root extract containing the indole alkaloids reserpine and ajmalicine was performed on TLC Silica gel 60 F254 (10 cm × 10 cm) plates with toluene:ethyl acetate:formic acid (7:2:1) as the mobile phase. Quantification of reserpine and ajmalicine was performed in the absorption-reflection mode at 268 nm. The recoveries of reserpine and ajmalicine were 99.3 and 98.7%, respectively. The calibration curves were linear for both reserpine and ajmalicine in the range of 200-1200 ng. HPTLC densitometry has been performed for the estimation of reserpine and ajmalicine in the root part of R. serpentina and R. tetraphylla for the first time. The method is simple, rapid and cost effective and can be used for routine analysis of ajmalicine and reserpine in different Rauvolfia species as well as for quality control of herbal drugs containing Rauvolfia species.

  13. Development and validation of an RP-HPLC method for quantification of trans-resveratrol in the plant extracts

    Directory of Open Access Journals (Sweden)

    Cvetković Žika S.

    2015-01-01

    A new, simple, cost effective, accurate and reproducible RP-HPLC method was developed and validated for the quantification of trans-resveratrol in extracts of grape exocarp and seeds. The method proved to be simpler and faster than available methods. Methanol was used as the mobile phase at a flow rate of 1.0 cm3 min-1, and quantification was performed at 306 nm. The separation was carried out at 35°C using a C18 column. The results showed that the peak area response was linear in the concentration range of 1-40 μg cm-3. The values of LOD and LOQ were found to be 0.125 and 0.413 μg cm-3, respectively. The antioxidant activity of the extracts was determined using the DPPH assay. The ability to inhibit DPPH radicals decreases in the following order: grape exocarp extract > trans-resveratrol standard > grape seed extract. [Project of the Ministry of Science of the Republic of Serbia, No. TRp-34012]

  14. Re-evaluation of thin layer chromatography as an alternative method for the quantification of prostaglandins from rat Kupffer cells.

    Science.gov (United States)

    Pestel, Sabine; Jungermann, Kurt; Schieferdecker, Henrike L

    2005-01-01

    In contrast to conventionally used immunoassays, thin layer chromatography (TLC)--by prelabeling of cells with radioactive arachidonic acid (AA)--allows differentiation between prostanoids built by the cells and added prostanoids, and thus allows investigation of feedback effects of prostanoids on their own release. PGD2, TXB2 and PGE2 released from zymosan-stimulated Kupffer cells were separated with distinct RF-values, corresponding to those of the pure substances. Quantification of PGD2 and PGE2 gave comparable results with TLC and immunoassays, but measurement in the presence of added prostanoids was only possible with TLC. Moreover, TLC was superior to immunoassays in having a longer linear range while being comparably sensitive. Cellularly built TXB2 in its radioactively labeled form was not detectable by TLC. Inhibition of TXB2 release by externally added AA and technical artifacts were excluded, suggesting that the cellular AA pools used for prostaglandin and thromboxane synthesis differ in their accessibility for added AA. Thus, TLC is a simple, sensitive and precise method for the quantification of cellularly built prostaglandins, but not of thromboxane, even in the presence of added prostanoids.

  15. Simultaneous quantification of caffeine and its three primary metabolites in rat plasma by liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Choi, Eu Jin; Bae, Soo Hyeon; Park, Jung Bae; Kwon, Min Jo; Jang, Su Min; Zheng, Yu Fen; Lee, Young Sun; Lee, Su-Jun; Bae, Soo Kyung

    2013-12-01

    A rapid, sensitive, simple and accurate LC-MS/MS method for the simultaneous quantitation of caffeine and its three primary metabolites, theobromine, paraxanthine, and theophylline, in rat plasma was developed and validated. Chromatographic separation was performed on an Agilent Poroshell 120 EC-C18 column using 1 μg/mL acetaminophen as an internal standard. Each sample was run at 0.5 mL/min for a total run time of 7 min/sample. Detection and quantification were performed using a mass spectrometer in selected reaction-monitoring mode with positive electrospray ionization. The lower limit of quantification was 5 ng/mL for all analytes, with linear ranges up to 5000 ng/mL for caffeine and 1000 ng/mL for its metabolites. The coefficient of variation for assay precision was less than 12.6%, with an accuracy of 93.5-114%. The assay was successfully applied to determine plasma concentrations of caffeine, theobromine, paraxanthine, and theophylline in rats administered various energy drinks containing the same caffeine content. The energy drinks exhibited considerable variability in the pharmacokinetic profiles of caffeine and its three primary metabolites, even though they contained the same amount of caffeine; different additives in the energy drinks might contribute to these results. Copyright © 2013 Elsevier Ltd. All rights reserved.
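
    Internal-standard quantification of this kind rests on a calibration of the analyte-to-internal-standard peak-area ratio against concentration. A minimal sketch with hypothetical areas and concentrations (not values from this assay):

```python
import numpy as np

# Hypothetical calibration: caffeine concentration (ng/mL) vs. peak-area ratio
# (caffeine area / internal-standard area)
conc = np.array([5.0, 50.0, 250.0, 1000.0, 2500.0, 5000.0])
ratio = np.array([0.010, 0.098, 0.495, 2.02, 5.01, 9.97])
slope, intercept = np.polyfit(conc, ratio, 1)

def quantify(area_analyte, area_internal_standard):
    """Back-calculate concentration from the analyte/IS peak-area ratio."""
    return (area_analyte / area_internal_standard - intercept) / slope

# Hypothetical plasma sample: analyte area 41200, internal-standard area 20500
print(f"{quantify(41_200, 20_500):.0f} ng/mL")
```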

  16. High-Performance Thin-Layer Chromatographic Quantification of Rosmarinic Acid and Rutin in Abnormal Savda Munziq

    Directory of Open Access Journals (Sweden)

    S. G. Tian

    2013-01-01

    A high-performance thin-layer chromatographic (HPTLC) method has been established for the simultaneous analysis of rosmarinic acid and rutin in Abnormal Savda Munziq (ASMq). A methanol extract of ASMq was used for quantification. The compounds were separated on a silica gel H thin layer plate with ethyl acetate-formic acid-acetic acid-water 15 : 1 : 1 : 1.5 (v/v) as the developing solvent and trichloroethanol as the color reagent. The plates were scanned at 365 nm. The linear calibration ranges of rosmarinic acid and rutin were 0.0508 to 0.2540 μg (r=0.9964) and 0.2707 to 1.35354 μg (r=0.9981), respectively. The recovery rate of rosmarinic acid was 99.17% (RSD = 2.92%) and that of rutin was 95.24% (RSD = 2.38%). The method enables rapid screening and precise, selective, and sensitive quantification for pharmaceutical analysis.

  17. UPLC TOF MS for sensitive quantification of naturally occurring pyrrolizidine alkaloids in Petasites hybridus extract (Ze 339).

    Science.gov (United States)

    Schenk, Alexander; Siewert, Beate; Toff, Stephan; Drewe, Jürgen

    2015-08-01

    Due to increasing regulatory awareness of their hepatotoxic, genotoxic and possibly carcinogenic potential, pyrrolizidine alkaloid (PA) content has to be thoroughly monitored in herbal medicinal preparations. Recently, new very low PA regulatory threshold concentrations have been requested by the authorities. Therefore, a highly sensitive and reproducible UPLC TOF MS method for the quantification of the PAs senkirkine, senecionine, seneciphylline, senecionine-N-oxide and seneciphylline-N-oxide in a CO2-extract of Petasites hybridus leaves (Ze 339) has been developed. The limit of quantification (LOQ) was 2 ppb for all PAs. Recovery at the LOQ was between 88.9 and 141.9%, and the repeatability precision was between 3.5 and 13.6%. Linearity of the five PAs showed correlation coefficients between 0.9995 and 0.9998 and coefficients of variation between 7.44 and 8.56%. A working range between 2 ppb and 200 ppb could be established. In the tested batches of the P. hybridus extract Ze 339, the absence of PAs could be demonstrated. In conclusion, this assay allows the determination of trace PA concentrations in P. hybridus extract Ze 339, making it suitable for analytical PA monitoring in accordance with regulatory requirements. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Quantification of 4'-geranyloxyferulic acid, a new natural colon cancer chemopreventive agent, by HPLC-DAD in grapefruit skin extract.

    Science.gov (United States)

    Genovese, S; Epifano, F; Carlucci, G; Marcotullio, M C; Curini, M; Locatelli, M

    2010-10-10

    Oxyprenylated natural products (isopentenyloxy-, geranyloxy- and the less widespread farnesyloxy-compounds and their biosynthetic derivatives) represent a family of secondary metabolites that have been considered for years merely as biosynthetic intermediates of the most abundant C-prenylated derivatives. Many of the isolated oxyprenylated natural products were shown to exert remarkable anti-cancer and anti-inflammatory effects in vitro and in vivo. 4'-Geranyloxyferulic acid [3-(4'-geranyloxy-3'-methoxyphenyl)-2-trans-propenoic acid] has been discovered as a valuable chemopreventive agent against several types of cancer. After development of a high-yield and "eco-friendly" synthetic scheme for this secondary metabolite, starting from cheap and non-toxic reagents and substrates, we developed a new HPLC-DAD method for its quantification in grapefruit skin extract. A preliminary study on a C18 column showed the separation between GOFA and boropinic acid (having the same core but with an isopentenyloxy side chain), used as internal standard. The column was thermostated at 28 +/- 1 degrees C and the separation was achieved under gradient conditions at a flow rate of 1 mL/min with a starting mobile phase of H2O:methanol (40:60, v/v, 1% formic acid). The limit of detection (LOD, S/N=3) was 0.5 microg/mL and the limit of quantification (LOQ, S/N=10) was 1 microg/mL. Matrix-matched standard curves showed linearity up to 75 microg/mL. Precision (RSD%) values within the analytical range were determined for the grapefruit skin extract. In conclusion, this method showed LOQ values suitable for the selective quantification of this analyte in grapefruit skin extract.

  19. HPLC MS/MS method for quantification of meprobamate in human plasma: application to 24/7 clinical toxicology.

    Science.gov (United States)

    Delavenne, Xavier; Gay-Montchamp, Jean Pierre; Basset, Thierry

    2011-01-15

    We describe the development and full validation of a rapid and accurate liquid chromatography method, coupled with tandem mass spectrometry detection, for quantification of meprobamate in human plasma with [(13)C-(2)H(3)]-meprobamate as internal standard. Plasma pretreatment involved a one-step protein precipitation with acetonitrile. Separation was performed by reversed-phase chromatography on a Luna MercuryMS C18 (20 mm×4 mm×3 μm) column using a gradient elution mode. The mobile phase was a mix of distilled water containing 0.1% formic acid and acetonitrile containing 0.1% formic acid. The selected reaction monitoring transitions, in electrospray positive ionization, used for quantification were 219.2→158.2 m/z and 223.1→161.1 m/z for meprobamate and the internal standard, respectively. Qualification transitions were 219.2→97.0 and 223.1→101.1 m/z for meprobamate and the internal standard, respectively. The method was linear over the concentration range of 1-300 mg/L. The intra- and inter-day precision values were below 6.4% and accuracy was within 95.3% and 103.6% for all QC levels (5, 75 and 200 mg/L). The lower limit of quantification was 1 mg/L. Total analysis time was reduced to 6 min including sample preparation. The present method has been successfully applied to 24/7 clinical toxicology and demonstrated its usefulness in detecting meprobamate poisoning. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. A direct qPCR method for residual DNA quantification in monoclonal antibody drugs produced in CHO cells.

    Science.gov (United States)

    Hussain, Musaddeq

    2015-11-10

    Chinese hamster ovary (CHO) cells are the host cell of choice for manufacturing of monoclonal antibody (mAb) drugs in the biopharmaceutical industry. Host cell DNA is an impurity of such a manufacturing process and must be controlled and monitored in order to ensure drug purity and safety. A conventional method for quantification of host residual DNA in a drug requires extraction of DNA from the mAb drug substance with subsequent quantification of the extracted DNA using real-time PCR (qPCR). Here we report a method where the DNA extraction step is eliminated prior to qPCR. In this method, which we have named 'direct resDNA qPCR', the mAb drug substance is digested with a protease called KAPA in a 96-well PCR plate, the protease in the digest is then denatured at high temperature, qPCR reagents are added to the resultant reaction wells in the plate along with standards and controls in other wells of the same plate, and the plate is subjected to qPCR for analysis of residual host DNA in the samples. This direct resDNA qPCR method for CHO is sensitive to 5.0 fg of DNA with high precision and accuracy and has a wide linear range of determination. The method has been successfully tested with four mAb drugs, two IgG1 and two IgG4. Both the purified drug substance and a number of process intermediate samples, e.g., bioreactor harvest, Protein A column eluate and ion-exchange column eluates, were tested. This method simplifies the residual DNA quantification protocol, reduces the time of analysis and leads to increased assay sensitivity and the development of automated high-throughput methods. Copyright © 2015 Elsevier B.V. All rights reserved.
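
    Whatever the sample preparation, qPCR-based residual DNA quantification comes down to a standard curve in which Cq is linear in log10 of the DNA input. A generic sketch with invented Cq values, not data from this method:

```python
import numpy as np

# Hypothetical standard curve: known host-cell DNA input (fg) vs. measured Cq
dna_fg = np.array([5.0, 50.0, 500.0, 5_000.0, 50_000.0])
cq = np.array([33.1, 29.7, 26.4, 23.0, 19.6])

# Cq is (ideally) linear in log10(input); a slope near -3.32 means ~100% efficiency
slope, intercept = np.polyfit(np.log10(dna_fg), cq, 1)
efficiency = 10 ** (-1 / slope) - 1
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

def residual_dna_fg(cq_sample):
    """Interpolate a sample Cq back to fg of residual host-cell DNA."""
    return 10 ** ((cq_sample - intercept) / slope)

print(f"{residual_dna_fg(27.5):.0f} fg")
```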

  1. Voltammetric Quantification of Paraquat and Glyphosate in Surface Waters

    Directory of Open Access Journals (Sweden)

    William Roberto Alza-Camacho

    2016-09-01

    The indiscriminate use of pesticides on crops has a negative environmental impact that affects organisms, soil and water resources essential for life. Therefore, it is necessary to evaluate the residual effect of these substances in water sources. A simple, affordable and accessible electrochemical method for Paraquat and glyphosate quantification in water was developed. The study was conducted using Britton-Robinson buffer solution as the supporting electrolyte, a glassy carbon working electrode, Ag/AgCl as the reference electrode, and platinum as the auxiliary electrode. Differential pulse voltammetry (DPV) methods for both compounds were validated. The methods showed linearity with correlation coefficients of 0.9949 and 0.9919, and the limits of detection and quantification were 130 and 190 mg/L for Paraquat and 40 and 50 mg/L for glyphosate. Comparison with the reference method showed that the electrochemical method provides superior results in the quantification of these analytes. In the samples tested, Paraquat concentrations were between 0.011 and 1.572 mg/L and glyphosate concentrations between 0.201 and 2.777 mg/L, indicating that these compounds are present in water sources and may be causing serious problems to human health.

  2. HPLC Quantification of astaxanthin and canthaxanthin in Salmonidae eggs.

    Science.gov (United States)

    Tzanova, Milena; Argirova, Mariana; Atanasov, Vasil

    2017-04-01

    Astaxanthin and canthaxanthin are naturally occurring antioxidants referred to as xanthophylls. They are used as food additives in fish farms to improve the organoleptic qualities of salmonid products and to prevent reproductive diseases. This study reports the development and single-laboratory validation of a rapid method for quantification of astaxanthin and canthaxanthin in eggs of rainbow trout (Oncorhynchus mykiss) and brook trout (Salvelinus fontinalis М.). An advantage of the proposed method is the perfect combination of selective extraction of the xanthophylls and analysis of the extract by high-performance liquid chromatography and photodiode array detection. The method validation was carried out in terms of linearity, accuracy, precision, recovery and limits of detection and quantification. The method was applied for simultaneous quantification of the two xanthophylls in eggs of rainbow trout and brook trout after their selective extraction. The results show that astaxanthin accumulations in salmonid fish eggs are larger than those of canthaxanthin. As the levels of these two xanthophylls affect fish fertility, this method can be used to improve the nutritional quality and to minimize the occurrence of the M74 syndrome in fish populations. Copyright © 2016 John Wiley & Sons, Ltd.

  3. On uncertainty quantification in hydrogeology and hydrogeophysics

    Science.gov (United States)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.

  4. A simple method of digitizing analog scintigrams for quantification and digital archiving

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, M. (Klinik fuer Nuklearmedizin, Kiel Univ. (Germany)); Kaempfer, B. (Klinik fuer Nuklearmedizin, Kiel Univ. (Germany)); Wolf, H. (Klinik fuer Nuklearmedizin, Kiel Univ. (Germany)); Clausen, M. (Klinik fuer Nuklearmedizin, Kiel Univ. (Germany)); Wendhausen, H. (Klinik fuer Nuklearmedizin, Kiel Univ. (Germany)); Henze, E. (Klinik fuer Nuklearmedizin, Kiel Univ. (Germany))

    1993-02-01

    This study was undertaken to evaluate a quick, reliable and cheap method of digitizing analog scintigrams. 40 whole-body bone scintigrams were obtained simultaneously in analog and genuine digital format. The analog scans on X-ray film were then digitized secondarily by three different methods: 300 dpi flatbed scanning, high-resolution camera scanning and camcorder recording. A simple exposure approach using a light box, a cheap camcorder, a PC and image grabber hard- and software proved to be optimal. Visual interpretation showed no differences in clinical findings when comparing the analog images with their secondarily digitized counterparts. To test the possibility of quantification, 126 equivalent ROIs were drawn both in the genuine digital and the secondarily digitized images. Comparing the ROI count to whole-body count percentage of the corresponding ROIs showed the correlation to be linear. The evaluation of phantom studies showed the linear correlation to be true within a wide activity range. Thus, secondary digitalization of analog scintigrams is an easy, cheap and reliable method of archiving images and allows secondary digital quantification. (orig.)

  5. Validated HPLC method for identification and quantification of p-hydroxy benzoic acid and agnuside in Vitex negundo and Vitex trifolia

    Directory of Open Access Journals (Sweden)

    Sonal Shah

    2013-12-01

    A high performance liquid chromatography method coupled with photodiode array detection was developed for the identification and quantification of p-hydroxy benzoic acid and agnuside in the extracts of Vitex negundo and Vitex trifolia. The separation was achieved using acetonitrile and O-phosphoric acid–water (0.5%, v/v) as the mobile phase in an isocratic elution mode. Mean retention times of standard p-hydroxy benzoic acid and agnuside were 6.14 and 11.90 min, respectively. The developed method was validated as per the ICH guidelines for limit of detection, limit of quantification, linearity, accuracy and precision. Good linearity (r2 ≥ 0.999) was observed for both compounds over a wide concentration range. Relative standard deviation values for intra-day and inter-day precision studies were less than 2%. The analytical recoveries of p-hydroxy benzoic acid and agnuside by the developed HPLC method were 93.07% and 106.11%, respectively. The two compounds were identified and quantified in leaf and bark extracts of V. negundo and V. trifolia using the developed HPLC method. Keywords: Vitex negundo, Vitex trifolia, HPLC-PDA, p-Hydroxy benzoic acid, Agnuside

  6. Non linear system become linear system

    Directory of Open Access Journals (Sweden)

    Petre Bucur

    2007-01-01

    This paper concerns the theory and practice of non-linear systems and their applications. We aim to integrate these systems in order to elaborate their response, as well as to highlight some outstanding features.

  7. Linear motor coil assembly and linear motor

    NARCIS (Netherlands)

    2009-01-01

    An ironless linear motor (5) comprising a magnet track (53) and a coil assembly (50) operating in cooperation with said magnet track (53) and having a plurality of concentrated multi-turn coils (31 a-f, 41 a-d, 51 a-k), wherein the end windings (31E) of the coils (31 a-f, 41 a-e) are substantially

  8. A homogeneous assay principle for universal substrate quantification via hydrogen peroxide producing enzymes

    International Nuclear Information System (INIS)

    Zscharnack, Kristin; Kreisig, Thomas; Prasse, Agneta A.; Zuchner, Thole

    2015-01-01

    Highlights: • Application of the TRF-based PATb system for universal oxidase substrate detection. • H2O2 generated by choline or glucose oxidase quenches the TRF signal of PATb. • The assay time is only limited by the oxidase catalysis rate. • Glucose is precisely detected in human serum consistent to a commercial assay. • A reliable quantification of choline in infant formula is shown. - Abstract: H2O2 is a widely occurring molecule which is also a byproduct of a number of enzymatic reactions. It can therefore be used to quantify the corresponding enzymatic substrates. In this study, the time-resolved fluorescence emission of a previously described complex consisting of phthalic acid and terbium (III) ions (PATb) is used for H2O2 detection. In detail, glucose oxidase and choline oxidase convert glucose and choline, respectively, to generate H2O2 which acts as a quencher for the PATb complex. The response time of the PATb complex toward H2O2 is immediate and the assay time only depends on the conversion rate of the enzymes involved. The PATb assay quantifies glucose in a linear range of 0.02–10 mmol L-1, and choline from 1.56 to 100 μmol L-1 with a detection limit of 20 μmol L-1 for glucose and 1.56 μmol L-1 for choline. Both biomolecules glucose and choline could be detected without pretreatment with good precision and reproducibility in human serum samples and infant formula, respectively. Furthermore, it is shown that the detected glucose concentrations by the PATb system agree with the results of a commercially available assay. In principle, the PATb system is a universal and versatile tool for the quantification of any substrate and enzyme reaction where H2O2 is involved.

  9. Rapid quantification of free cholesterol in tears using direct insertion/electron ionization-mass spectrometry.

    Science.gov (United States)

    Wei, Xiaojia Eric; Korth, John; Brown, Simon H J; Mitchell, Todd W; Truscott, Roger J W; Blanksby, Stephen J; Willcox, Mark D P; Zhao, Zhenjun

    2013-12-09

    To establish a simple and rapid analytical method, based on direct insertion/electron ionization-mass spectrometry (DI/EI-MS), for measuring free cholesterol in tears from humans and rabbits. A stable-isotope dilution protocol employing DI/EI-MS in selected ion monitoring mode was developed and validated. It was used to quantify the free cholesterol content in human and rabbit tear extracts. Tears were collected from adult humans (n = 15) and rabbits (n = 10) and lipids extracted. Screening, full-scan (m/z 40-600) DI/EI-MS analysis of crude tear extracts showed that diagnostic ions located in the mass range m/z 350 to 400 were those derived from free cholesterol, with no contribution from cholesterol esters. DI/EI-MS data acquired using selected ion monitoring (SIM) were analyzed for the abundance ratios of diagnostic ions relative to their stable isotope-labeled analogues arising from the D6-cholesterol internal standard. Standard curves of good linearity were produced, with an on-probe limit of detection of 3 ng (at 3:1 signal to noise) and a limit of quantification of 8 ng (at 10:1 signal to noise). The concentration of free cholesterol in human tears was 15 ± 6 μg/g, which was higher than in rabbit tears (10 ± 5 μg/g). A stable-isotope dilution DI/EI-SIM method for free cholesterol quantification without prior chromatographic separation was established. Using this method demonstrated that humans have higher free cholesterol levels in their tears than rabbits. This is in agreement with previous reports. This paper provides a rapid and reliable method to measure free cholesterol in small-volume clinical samples.
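
    Stable-isotope dilution quantification of this type reads the amount of analyte off a calibration of the unlabeled-to-labeled ion-abundance ratio obtained with a fixed internal-standard spike. A hedged sketch; the ion ratios, amounts and m/z values in the comments are assumptions for illustration only:

```python
import numpy as np

# Hypothetical calibration: known cholesterol amount (ng) spiked with a fixed
# 100 ng of D6-cholesterol, vs. measured ion-abundance ratio (e.g. m/z 386/392;
# the ions, amounts and ratios here are illustrative assumptions)
chol_ng = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
ion_ratio = np.array([0.11, 0.26, 0.52, 1.01, 2.03])
slope, intercept = np.polyfit(chol_ng, ion_ratio, 1)

def cholesterol_ng(sample_ratio):
    """Back-calculate free cholesterol (ng) in an extract carrying the same
    internal-standard spike as the calibrants."""
    return (sample_ratio - intercept) / slope

print(f"{cholesterol_ng(0.75):.0f} ng free cholesterol")
```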

  10. Development and validation of a method for detection and quantification of ochratoxin A in green coffee using liquid chromatography coupled to mass spectrometry

    Directory of Open Access Journals (Sweden)

    Raquel Duarte da Costa Cunha Bandeira

    2012-12-01

    A method using liquid chromatography tandem mass spectrometry (LC-MS/MS) with a matrix-matched calibration curve was developed and validated for determining ochratoxin A (OTA) in green coffee. Linearity was found between 3.0 and 23.0 ng.g-1. Mean recoveries ranged between 90.45% and 108.81%; the relative standard deviations under repeatability and intermediate precision conditions ranged from 5.39% to 9.94% and from 2.20% to 14.34%, respectively. The limits of detection and quantification were 1.2 ng.g-1 and 3.0 ng.g-1, respectively. The method developed was suitable and contributes to the field of mycotoxin analysis, and it will be used for the future production of a Certified Reference Material (CRM) for OTA in coffee.

  11. Quantification of ochratoxin A-producing molds in food products by SYBR Green and TaqMan real-time PCR methods

    DEFF Research Database (Denmark)

    Rodríguez, Alicia; Rodríguez, Mar; Luque, M. Isabel

    2011-01-01

    Ochratoxin A (OTA) is a mycotoxin synthesized by a variety of different fungi, most of them from the genera Penicillium and Aspergillus. Early detection and quantification of OTA-producing species is crucial to improve food safety. In the present work, two protocols of real-time qPCR, based on SYBR Green and TaqMan chemistries, were optimized. Strains usually reported in food products were used as references. All strains were tested for OTA production by micellar electrokinetic capillary electrophoresis (MECE) and high-pressure liquid chromatography-mass spectrometry (HPLC-MS). The ability of the optimized qPCR protocols to quantify OTA-producing molds was evaluated in different artificially inoculated foods. A good linear correlation was obtained over the range 1 x 10(4) to 10 conidia/g per reaction for all qPCR assays in the different food matrices (cooked and cured products and fruits). The detection limit in all inoculated foods ranged between...

  12. Sensitive liquid chromatography-tandem mass spectrometry method for quantification of hydrochlorothiazide in human plasma.

    Science.gov (United States)

    Ramakrishna, N V S; Vishwottam, K N; Manoj, S; Koteshwara, M; Wishu, S; Varma, D P

    2005-12-01

    A simple, rapid, sensitive and specific liquid chromatography-tandem mass spectrometry method was developed and validated for quantification of hydrochlorothiazide (I), a common diuretic and anti-hypertensive agent. The analyte and the internal standard, tamsulosin (II), were extracted by liquid-liquid extraction with diethyl ether-dichloromethane (70:30, v/v) using a Glas-Col Multi-Pulse Vortexer. The chromatographic separation was performed on a reversed-phase column (Waters Symmetry C18) with a mobile phase of 10 mM ammonium acetate-methanol (15:85, v/v). The deprotonated analyte was quantified in negative ionization mode by multiple reaction monitoring with a mass spectrometer. The mass transitions m/z 296.1 → 205.0 and m/z 407.2 → 184.9 were used to measure I and II, respectively. The assay exhibited a linear dynamic range of 0.5-200 ng/mL for hydrochlorothiazide in human plasma. The lower limit of quantitation was 500 pg/mL, with a relative standard deviation of less than 9%. Acceptable precision and accuracy were obtained for concentrations over the standard curve ranges. A run time of 2.5 min for each sample made it possible to analyze a throughput of more than 400 human plasma samples per day. The validated method has been successfully used to analyze human plasma samples for application in pharmacokinetic, bioavailability or bioequivalence studies. (c) 2005 John Wiley & Sons, Ltd.

  13. Modelling Loudspeaker Non-Linearities

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2007-01-01

    This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of the original data, but also for their ability to work reasonably outside that range, and it is demonstrated that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the gaussian are better suited for modelling the Bl-factor and compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time varying...
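
    The core comparison, global polynomial versus localized (e.g. Gaussian) fitting of a parameter such as Bl(x), can be reproduced in a few lines; the sketch below uses synthetic data and generic fitting routines rather than the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic Bl(x) data measured within +/-4 mm of excursion (Tm vs. m)
x = np.linspace(-4e-3, 4e-3, 41)
bl = 6.0 * np.exp(-(x / 4e-3) ** 2) + 0.05 * rng.standard_normal(x.size)

# Global 4th-order polynomial fit vs. a localized Gaussian model a*exp(-(x/b)^2)
poly = np.polynomial.Polynomial.fit(x, bl, 4)
gauss = lambda x, a, b: a * np.exp(-(x / b) ** 2)
(g_a, g_b), _ = curve_fit(gauss, x, bl, p0=[6.0, 4e-3])

# Evaluate outside the measured range: the polynomial extrapolates poorly,
# while the localized function decays toward physically plausible values
x_out = 8e-3
print(f"polynomial at 8 mm: {poly(x_out):.2f} Tm")
print(f"gaussian   at 8 mm: {gauss(x_out, g_a, g_b):.2f} Tm")
```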

  14. Linear collider: a preview

    Energy Technology Data Exchange (ETDEWEB)

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.

  15. Basic linear algebra

    CERN Document Server

    Blyth, T S

    2002-01-01

    Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...

  16. Linear collider: a preview

    International Nuclear Information System (INIS)

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center

  17. Matrices and linear transformations

    CERN Document Server

    Cullen, Charles G

    1990-01-01

    ""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first

  18. Efficient Non Linear Loudspeakers

    DEFF Research Database (Denmark)

    Petersen, Bo R.; Agerkvist, Finn T.

    2006-01-01

    Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption.

  19. Linear models with R

    CERN Document Server

    Faraway, Julian J

    2014-01-01

    A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz...

  20. Linear integrated circuits

    CERN Document Server

    Carr, Joseph

    1996-01-01

    The linear IC market is large and growing, as is the demand for well trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership from FE and first degree students, to hobbyists and professionals. Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwa...

  1. Fault tolerant linear actuator

    Science.gov (United States)

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  2. Superconducting linear accelerator cryostat

    International Nuclear Information System (INIS)

    Ben-Zvi, I.; Elkonin, B.V.; Sokolowski, J.S.

    1984-01-01

    A large vertical cryostat for a superconducting linear accelerator using quarter wave resonators has been developed. The essential technical details, operational experience and performance are described. (author)

  3. Accurate quantification of supercoiled DNA by digital PCR

    Science.gov (United States)

    Dong, Lianhua; Yoo, Hee-Bong; Wang, Jing; Park, Sang-Ryoul

    2016-01-01

    Digital PCR (dPCR) as an enumeration-based quantification method is capable of quantifying the DNA copy number without the help of standards. However, it can generate false results when the PCR conditions are not optimized. A recent international comparison (CCQM P154) showed that most laboratories significantly underestimated the concentration of supercoiled plasmid DNA by dPCR. Mostly, supercoiled DNAs are linearized before dPCR to avoid such underestimations. The present study was conducted to overcome this problem. In the bilateral comparison, the National Institute of Metrology, China (NIM) optimized and applied dPCR for supercoiled DNA determination, whereas Korea Research Institute of Standards and Science (KRISS) prepared the unknown samples and quantified them by flow cytometry. In this study, several factors like selection of the PCR master mix, the fluorescent label, and the position of the primers were evaluated for quantifying supercoiled DNA by dPCR. This work confirmed that a 16S PCR master mix avoided poor amplification of the supercoiled DNA, whereas HEX labels on dPCR probe resulted in robust amplification curves. Optimizing the dPCR assay based on these two observations resulted in accurate quantification of supercoiled DNA without preanalytical linearization. This result was validated in close agreement (101~113%) with the result from flow cytometry. PMID:27063649
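
    The enumeration step that makes dPCR calibration-free is a Poisson correction: the mean number of copies per partition is estimated from the fraction of positive partitions as lambda = -ln(1 - p). A minimal sketch (the partition volume used is a typical droplet volume, assumed for illustration):

```python
import math

def dpcr_copies_per_ul(n_positive, n_total, partition_volume_nl=0.85):
    """Estimate target concentration (copies/µL) from a digital PCR run.

    Poisson correction: mean copies per partition lambda = -ln(1 - p),
    where p is the fraction of positive partitions. The partition volume
    is an assumed, typical droplet volume.
    """
    p = n_positive / n_total
    lam = -math.log(1.0 - p)
    return lam / partition_volume_nl * 1_000.0

# Hypothetical run: 6500 positives out of 18000 partitions
print(f"{dpcr_copies_per_ul(6500, 18000):.0f} copies/µL")
```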

  4. Quantification of equine immunoglobulin A in serum and secretions by a fluorescent bead-based assay.

    Science.gov (United States)

    Schnabel, Christiane L; Babasyan, Susanna; Freer, Heather; Wagner, Bettina

    2017-06-01

    Only a few quantitative reports exist about the concentrations and induction of immunoglobulin A (IgA) in mucosal secretions of horses. Despite this, it is widely assumed that IgA is the predominant immunoglobulin on mucosal surfaces in the horse. Here, two new monoclonal antibodies (mAbs) against equine IgA, clones 84-1 and 161-1, were developed and characterized in detail. Both IgA mAbs specifically bound monomeric and dimeric equine IgA in different applications, such as Western blots and fluorescent bead-based assays. Cross-reactivity with other equine immunoglobulin isotypes was not observed. The new IgA mAb 84-1 was used in combination with the previously characterized anti-equine IgA mAb BVS2 for the development and validation of a fluorescent bead-based assay to quantify total IgA in equine serum and various secretions. The IgA assay's linear detection range extended from 64 pg/ml to 1000 ng/ml. For the quantification of IgA in serum or in secretions, an IgA standard was purified from serum or nasal wash fluid (secretory IgA), respectively. The different standards were needed for accurate IgA quantification in the respective samples, taking into account the different signal intensities of monomeric and dimeric IgA on the fluorescent bead-based assay. IgA was quantified by the bead-based assay established here in different equine samples from healthy adult individuals. In serum, the median total IgA was 0.45 mg/ml for Thoroughbred horses (TB, n=10) and 1.16 mg/ml in Icelandic horses (ICH, n=12). In nasopharyngeal secretions of TB (n=7) 0.13 mg/ml median total IgA was measured, and 0.25 mg/ml for ICH (n=12). Saliva of ICH (n=6) contained a median of 0.15 mg/ml, and colostrum of Warmbloods (n=8) a median of 1.89 mg/ml IgA. Compared to IgG1 and IgG4/7 quantified in the same samples, IgA appeared as the major immunoglobulin isotype in nasopharyngeal secretions and saliva, while it is a minor isotype in serum and colostrum. The newly developed monoclonal antibodies against equine IgA and the

  5. Validation of tumor protein marker quantification by two independent automated immunofluorescence image analysis platforms

    Science.gov (United States)

    Peck, Amy R; Girondo, Melanie A; Liu, Chengbao; Kovatich, Albert J; Hooke, Jeffrey A; Shriver, Craig D; Hu, Hai; Mitchell, Edith P; Freydin, Boris; Hyslop, Terry; Chervoneva, Inna; Rui, Hallgeir

    2016-01-01

    Protein marker levels in formalin-fixed, paraffin-embedded tissue sections traditionally have been assayed by chromogenic immunohistochemistry and evaluated visually by pathologists. Pathologist scoring of chromogen staining intensity is subjective and generates low-resolution ordinal or nominal data rather than continuous data. Emerging digital pathology platforms now allow quantification of chromogen or fluorescence signals by computer-assisted image analysis, providing continuous immunohistochemistry values. Fluorescence immunohistochemistry offers greater dynamic signal range than chromogen immunohistochemistry, and combined with image analysis holds the promise of enhanced sensitivity and analytic resolution, and consequently more robust quantification. However, commercial fluorescence scanners and image analysis software differ in features and capabilities, and claims of objective quantitative immunohistochemistry are difficult to validate as pathologist scoring is subjective and there is no accepted gold standard. Here we provide the first side-by-side validation of two technologically distinct commercial fluorescence immunohistochemistry analysis platforms. We document highly consistent results by (1) concordance analysis of fluorescence immunohistochemistry values and (2) agreement in outcome predictions both for objective, data-driven cutpoint dichotomization with Kaplan–Meier analyses or employment of continuous marker values to compute receiver-operating curves. The two platforms examined rely on distinct fluorescence immunohistochemistry imaging hardware, microscopy vs line scanning, and functionally distinct image analysis software. Fluorescence immunohistochemistry values for nuclear-localized and tyrosine-phosphorylated Stat5a/b computed by each platform on a cohort of 323 breast cancer cases revealed high concordance after linear calibration, a finding confirmed on an independent 382 case cohort, with concordance correlation coefficients >0

  6. A multiplex PCR assay for the detection and quantification of Sclerotinia sclerotiorum and Botrytis cinerea.

    Science.gov (United States)

    Reich, J D; Alexander, T W; Chatterton, S

    2016-05-01

    Traditional culture methods for identifying the plant fungal pathogens Sclerotinia sclerotiorum (Lib.) de Bary and Botrytis cinerea Pers.:Fr. are slow and laborious. The goal of this study was to develop a multiplex real-time PCR (qPCR) assay to detect and quantify DNA from S. sclerotiorum and B. cinerea. A primer set (SsIGS_5) for S. sclerotiorum was designed that targeted the intergenic spacer (IGS) regions of the ribosomal DNA. Addition of a probe to the assay increased its specificity: when the primer/probe set was tested against 21 fungal species (35 strains), amplification was detected from all S. sclerotiorum strains and no other species. For qPCR, the SsIGS_5 primer and probe set exhibited a linear range from 7·0 ng to 0·07 pg target DNA (R(2)  = 0·99). SsIGS_5 was then multiplexed with a previously published primer/probe set for B. cinerea to develop a high-throughput method for the detection and quantification of DNA from both pathogens. When multiplexed, the sensitivity and specificity of both assays were not different from individual qPCR reactions. The multiplex assay is currently being used to detect and quantify S. sclerotiorum and B. cinerea DNA from aerosol samples collected in commercial seed alfalfa fields. A primer and probe set for the quantification of Sclerotinia sclerotiorum DNA in a PCR assay was developed. The probe-based nature of this assay signifies an improvement over previous assays for this species by allowing multiplex reactions while maintaining high sensitivity. The primer/probe set was used in a multiplex real-time PCR assay for the quantification of S. sclerotiorum and Botrytis cinerea DNA, enabling rapid analysis of environmental samples. In crops susceptible to both pathogens, this multiplex assay can be used to quickly quantify the presence of each pathogen. © 2016 Her Majesty the Queen in Right of Canada © 2016 The Society for Applied Microbiology. Reproduced with the permission of the Office of the

  7. Linearity enigmas in ecology

    Energy Technology Data Exchange (ETDEWEB)

    Patten, B.C.

    1983-04-01

    Two issues concerning linearity or nonlinearity of natural systems are considered. Each is related to one of the two alternative defining properties of linear systems, superposition and decomposition. Superposition exists when a linear combination of inputs to a system results in the same linear combination of outputs that individually correspond to the original inputs. To demonstrate this property it is necessary that all initial states and inputs of the system which impinge on the output in question be included in the linear combination manipulation. As this is difficult or impossible to do with real systems of any complexity, nature appears nonlinear even though it may be linear. A linear system that displays nonlinear behavior for this reason is termed pseudononlinear. The decomposition property exists when the dynamic response of a system can be partitioned into an input-free portion due to state plus a state-free portion due to input. This is a characteristic of all linear systems, but not of nonlinear systems. Without the decomposition property, it is not possible to distinguish which portions of a system's behavior are due to innate characteristics (self) vs. outside conditions (environment), which is an important class of questions in biology and ecology. Some philosophical aspects of these findings are then considered. It is suggested that those ecologists who hold to the view that organisms and their environments are separate entities are in effect embracing a linear view of nature, even though their belief systems and mathematical models tend to be nonlinear. On the other hand, those who consider that the organism-environment complex forms a single inseparable unit are implicitly involved in non-linear thought, which may be in conflict with the linear modes and models that some of them use. The need to rectify these ambivalences on the part of both groups is indicated.
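
    The superposition property discussed here is easy to state numerically: for a linear system started from a zero state, the response to a weighted sum of inputs equals the same weighted sum of the individual responses. A small sketch with an arbitrary two-state linear model:

```python
import numpy as np

# A small linear discrete-time system: x[k+1] = A x[k] + B u[k]
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([1.0, 0.5])

def response(u_sequence):
    """Simulate the state trajectory starting from a zero initial state."""
    x = np.zeros(2)
    trajectory = []
    for u in u_sequence:
        x = A @ x + B * u
        trajectory.append(x.copy())
    return np.array(trajectory)

u1 = np.sin(np.linspace(0.0, 3.0, 20))
u2 = np.ones(20)

# Superposition: response(2*u1 + 3*u2) equals 2*response(u1) + 3*response(u2)
print(np.allclose(response(2 * u1 + 3 * u2), 2 * response(u1) + 3 * response(u2)))
```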

  8. Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process

    Science.gov (United States)

    Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas

    2018-05-01

    This note reports the development of a new method for linearizing Mössbauer spectra recorded with a sine drive velocity signal. Mössbauer spectra linearity is a critical parameter in determining Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and subsequently linearizing them increases the linearity of the spectra over a wider frequency range of the drive signal, since harmonic movement is generally natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line width parameters in comparison with those measured using a traditional triangle velocity signal.
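
    The linearization itself can be viewed as rebinning: each channel of the sine sweep corresponds to a velocity v_max*sin(phase), and the recorded counts are interpolated onto an equally spaced velocity axis. The sketch below is a simplified illustration of that idea, not the authors' procedure, and ignores dwell-time weighting:

```python
import numpy as np

def linearize_sine_spectrum(counts, v_max):
    """Map counts recorded over one half-period of a sine velocity sweep onto
    a uniformly spaced velocity axis (interpolation only; a full treatment
    would also weight for the unequal dwell time per velocity interval)."""
    n = counts.size
    phase = np.linspace(-np.pi / 2, np.pi / 2, n)   # channel -> sweep phase
    v_sine = v_max * np.sin(phase)                  # velocity of each channel
    v_linear = np.linspace(-v_max, v_max, n)        # target linear axis
    return v_linear, np.interp(v_linear, v_sine, counts)

# Hypothetical raw spectrum: one absorption dip near +1 mm/s, indexed by channel
v_max = 10.0   # mm/s
channels = np.linspace(-np.pi / 2, np.pi / 2, 256)
v_channels = v_max * np.sin(channels)
raw_counts = 1.0e5 - 8.0e3 * np.exp(-((v_channels - 1.0) ** 2) / 0.5)

v_lin, counts_lin = linearize_sine_spectrum(raw_counts, v_max)
print(v_lin[:3], counts_lin[:3])
```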

  9. Linear colliders - prospects 1985

    International Nuclear Information System (INIS)

    Rees, J.

    1985-06-01

    We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs

  10. The SLAC linear collider

    International Nuclear Information System (INIS)

    Richter, B.

    1985-01-01

    A report is given on the goals and progress of the SLAC Linear Collider. The author discusses the status of the machine and the detectors and gives an overview of the physics which can be done at this new facility. He also gives some ideas on how (and why) large linear colliders of the future should be built.

  11. Linear Programming (LP)

    International Nuclear Information System (INIS)

    Rogner, H.H.

    1989-01-01

    The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and reformulated for presentation at the Workshop. They provide a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm, for the purposes of economic planning processes. 1 fig
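
    As a concrete illustration of the kind of planning problem such an algorithm solves, the hedged Python sketch below sets up a small, invented resource-allocation LP and solves it with SciPy's linprog (whose default HiGHS backend includes a simplex-type solver); the objective and constraint numbers are made up.

        # Toy planning LP: maximize profit 3*x1 + 5*x2 subject to resource limits.
        # linprog minimizes, so the objective is negated.
        from scipy.optimize import linprog

        c = [-3.0, -5.0]                      # maximize 3*x1 + 5*x2
        A_ub = [[1.0, 0.0],                   #   x1          <= 4   (resource 1)
                [0.0, 2.0],                   #          2*x2 <= 12  (resource 2)
                [3.0, 2.0]]                   # 3*x1 + 2*x2   <= 18  (resource 3)
        b_ub = [4.0, 12.0, 18.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
        print(res.x, -res.fun)                # optimal plan (2, 6) with profit 36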

  12. Racetrack linear accelerators

    International Nuclear Information System (INIS)

    Rowe, C.H.; Wilton, M.S. de.

    1979-01-01

    An improved recirculating electron beam linear accelerator of the racetrack type is described. The system comprises a beam path of four straight legs with four Pretzel bending magnets at the end of each leg to direct the beam into the next leg of the beam path. At least one of the beam path legs includes a linear accelerator. (UK)

  13. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than...... Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...... standard curve in the Quantifiler Human DNA Quantification kit, the DNA quantification results of the human DNA preparations were 31% higher than expected based on the manufacturers' information. The results indicate a calibration problem with the Quantifiler human DNA standard for its use...

  14. Semidefinite linear complementarity problems

    International Nuclear Information System (INIS)

    Eckhardt, U.

    1978-04-01

    Semidefinite linear complementarity problems arise from the discretization of variational inequalities describing e.g. elastic contact problems, free boundary value problems etc. In the present paper linear complementarity problems are introduced and their theory as well as their numerical treatment are described. In the special case of semidefinite linear complementarity problems a numerical method is presented which combines the advantages of elimination and iteration methods without suffering from their drawbacks. This new method has very attractive properties since it has a high degree of invariance with respect to the representation of the set of all feasible solutions of a linear complementarity problem by linear inequalities. By means of some practical applications the properties of the new method are demonstrated. (orig.) [de]
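
    For readers unfamiliar with the problem class, a linear complementarity problem asks for z >= 0 with w = Mz + q >= 0 and z'w = 0. The Python sketch below implements a plain projected Gauss-Seidel iteration, a generic iterative approach for matrices with positive diagonal; it is not the combined elimination/iteration method of the paper, and the example data are invented.

        # Generic projected Gauss-Seidel iteration for the LCP(q, M):
        # find z >= 0 such that w = M z + q >= 0 and z.w = 0.
        import numpy as np

        def lcp_pgs(M, q, iters=500, tol=1e-10):
            z = np.zeros(len(q))
            for _ in range(iters):
                z_old = z.copy()
                for i in range(len(q)):
                    r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding the z_i term
                    z[i] = max(0.0, -r / M[i, i])
                if np.max(np.abs(z - z_old)) < tol:
                    break
            return z

        M = np.array([[4.0, 1.0], [1.0, 3.0]])             # invented positive definite matrix
        q = np.array([-1.0, -2.0])
        z = lcp_pgs(M, q)
        w = M @ z + q
        print(z, w, z @ w)                                 # z >= 0, w >= 0, z.w ~ 0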

  15. Linear algebra done right

    CERN Document Server

    Axler, Sheldon

    2015-01-01

    This best-selling textbook for a second course in linear algebra is aimed at undergrad math majors and graduate students. The novel approach taken here banishes determinants to the end of the book. The text focuses on the central goal of linear algebra: understanding the structure of linear operators on finite-dimensional vector spaces. The author has taken unusual care to motivate concepts and to simplify proofs. A variety of interesting exercises in each chapter helps students understand and manipulate the objects of linear algebra. The third edition contains major improvements and revisions throughout the book. More than 300 new exercises have been added since the previous edition. Many new examples have been added to illustrate the key ideas of linear algebra. New topics covered in the book include product spaces, quotient spaces, and dual spaces. Beautiful new formatting creates pages with an unusually pleasant appearance in both print and electronic versions. No prerequisites are assumed other than the ...

  16. Inverse problems and uncertainty quantification

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.

    2013-01-01

    computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example

  17. Handbook on linear motor application

    International Nuclear Information System (INIS)

    1988-10-01

    This book is a guide to linear motor applications. It covers the classification and characteristics of linear motors; the terminology and operating principle of the linear induction motor; single-sided and double-sided linear induction motors; the basics of linear DC motors, including moving-coil, moving permanent-magnet and non-utility-supply types; linear pulse motors of the variable and permanent-magnet types; linear vibration actuators, including the moving-coil type; linear synchronous motors; linear electromagnetic motors; linear electromagnetic solenoids; technical organization; and magnetic levitation, linear motors and sensors.

  18. A validated HPLC-PDA method for identification and quantification of two bioactive alkaloids, ephedrine and cryptolepine, in different Sida species.

    Science.gov (United States)

    Chatterjee, Arnab; Kumar, Satyanshu; Chattopadhyay, Sunil K

    2013-12-01

    A simple, rapid, accurate and reproducible reverse-phase HPLC method has been developed for the identification and quantification of two alkaloids, ephedrine and cryptolepine, in different extracts of Sida species using photodiode array detection. Baseline separation of the two alkaloids was achieved on a Waters RP-18 X-terra column (250 × 4.6 mm, 5 µm) using a solvent system consisting of a mixture of water containing 0.1% trifluoroacetic acid (TFA) and acetonitrile in a gradient elution mode, with detection at 210 and 280 nm for ephedrine and cryptolepine, respectively. The calibration curves were linear in a concentration range of 10-250 µg/mL for both alkaloids, with correlation coefficient values >0.99. The limits of detection and quantification were 5 and 10 µg/mL for ephedrine and 2.5 and 5 µg/mL for cryptolepine, respectively. Relative standard deviation values for intra-day and inter-day precision were 1.22 and 1.04% for ephedrine and 1.71 and 2.06% for cryptolepine, respectively. Analytical recovery ranged from 92.46 to 103.95%. The developed HPLC method was applied to identify and quantify ephedrine and cryptolepine in different extracts of Sida species. Copyright © 2013 John Wiley & Sons, Ltd.
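
    Linear-range and limit figures of this kind come from an ordinary least-squares calibration curve. The Python sketch below, with invented peak-area data, shows the routine arithmetic: fit area against concentration, report the correlation, back-calculate an unknown, and estimate LOD/LOQ from the common 3.3*sigma/S and 10*sigma/S conventions (other conventions exist; none of the numbers are from this paper).

        # Hypothetical calibration-curve workflow for an HPLC assay (invented data).
        import numpy as np

        conc = np.array([10, 25, 50, 100, 150, 250], dtype=float)    # ug/mL standards
        area = np.array([152, 371, 745, 1490, 2230, 3710], dtype=float)

        slope, intercept = np.polyfit(conc, area, 1)
        r = np.corrcoef(conc, area)[0, 1]
        print(f"y = {slope:.3f}x + {intercept:.1f}, r^2 = {r**2:.4f}")

        # Back-calculate an unknown concentration from its peak area.
        unknown_conc = (1000.0 - intercept) / slope

        # LOD/LOQ from the residual standard deviation of the fit.
        sigma = (area - (slope * conc + intercept)).std(ddof=2)
        lod, loq = 3.3 * sigma / slope, 10.0 * sigma / slope
        print(unknown_conc, lod, loq)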

  19. Ultra-sensitive LC-MS/MS method for the quantification of gemcitabine and its metabolite 2',2'-difluorodeoxyuridine in human plasma for a microdose clinical trial.

    Science.gov (United States)

    van Nuland, M; Hillebrand, M J X; Rosing, H; Burgers, J A; Schellens, J H M; Beijnen, J H

    2018-03-20

    In microdose clinical trials a maximum of 100 μg of drug substance is administered to participants in order to determine the pharmacokinetic properties of the agents. Measuring low plasma concentrations after administration of a microdose is challenging and requires the use of ultra-sensitive equipment. Novel liquid chromatography-mass spectrometry (LC-MS/MS) platforms can be used for quantification of low drug plasma levels. Here we describe the development and validation of an LC-MS/MS method for quantification of gemcitabine and its metabolite 2',2'-difluorodeoxyuridine (dFdU) in the low picogram per milliliter range to support a microdose trial. The validated assay ranges, 2.5-500 pg/mL for gemcitabine and 250-50,000 pg/mL for dFdU, were linear, with a correlation coefficient (r²) of 0.996 or better. Sample preparation with solid phase extraction provided a good and reproducible recovery. All results were within the acceptance criteria of the latest US FDA guidance and EMA guidelines. In addition, the method was successfully applied to measure plasma concentrations of gemcitabine in a patient after administration of a microdose of gemcitabine. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Linearization Technologies for Broadband Radio-Over-Fiber Transmission Systems

    Directory of Open Access Journals (Sweden)

    Xiupu Zhang

    2014-11-01

    Full Text Available Linearization technologies that can be used for linearizing RoF transmission are reviewed. Three main linearization methods, i.e. electrical analog linearization, optical linearization, and electrical digital linearization, are presented and compared. Analog linearization can be achieved using analog predistortion circuits and can be used to suppress odd-order nonlinear distortion components, such as third and fifth order. Optical linearization includes mixed-polarization, dual-wavelength, optical channelization and other schemes, implemented in the optical domain, to suppress both even- and odd-order nonlinear distortion components, such as second and third order. Digital predistortion has been a widely used linearization method for RF power amplifiers. However, digital linearization, which requires an analog-to-digital converter, is severely limited to bandwidths of a few hundred MHz, whereas analog and optical linearization provide broadband linearization up to tens of GHz. Therefore, for broadband radio-over-fiber transmission that can be used for future broadband cloud radio access networks, analog and optical linearization are more appropriate than digital linearization. Generally speaking, both analog and optical linearization are able to improve the spur-free dynamic range by more than 10 dB over tens of GHz. For current digital linearization to be used for broadband radio-over-fiber transmission, reduced linearization complexity and increased linearization bandwidth are required. Moreover, some digital linearization methods whose complexity can be reduced, such as the Hammerstein type, may be more promising and require further investigation.
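
    To make the predistortion idea concrete, the Python sketch below fits a memoryless odd-order polynomial inverse of a weakly nonlinear link from input/output samples (the indirect-learning idea) and applies it as a predistorter. The link model, its coefficients and the polynomial orders are invented, and memory effects, which matter in real RoF links, are ignored.

        # Toy memoryless digital predistortion sketch (indirect learning, invented link model).
        import numpy as np

        def link(x):
            return 1.0 * x - 0.10 * x**3 + 0.01 * x**5     # hypothetical AM/AM nonlinearity

        rng = np.random.default_rng(1)
        x = rng.uniform(-1, 1, 5000)                       # training input samples
        y = link(x)                                        # observed output (unit small-signal gain)

        # Fit a polynomial post-inverse x ~ sum_k c_k * y**k, then reuse it as a predistorter.
        orders = [1, 3, 5, 7]
        coef, *_ = np.linalg.lstsq(np.column_stack([y**k for k in orders]), x, rcond=None)

        def predistort(s):
            return sum(c * s**k for c, k in zip(coef, orders))

        s = np.linspace(-0.9, 0.9, 9)
        print(np.max(np.abs(link(predistort(s)) - s)))     # cascade is close to linear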

  1. Direct liquid chromatography method for the simultaneous quantification of hydroxytyrosol and tyrosol in red wines.

    Science.gov (United States)

    Piñeiro, Zulema; Cantos-Villar, Emma; Palma, Miguel; Puertas, Belen

    2011-11-09

    A validated HPLC method with fluorescence detection for the simultaneous quantification of hydroxytyrosol and tyrosol in red wines is described. Detection conditions for both compounds were optimized (excitation at 279 and 278 nm and emission at 631 and 598 nm for hydroxytyrosol and tyrosol, respectively). The validation of the analytical method was based on selectivity, linearity, robustness, detection and quantification limits, repeatability, and recovery. The detection and quantification limits in red wines were set at 0.023 and 0.076 mg L(-1) for hydroxytyrosol and at 0.007 and 0.024 mg L(-1) for tyrosol, respectively. Precision values, both within-day and between-day (n = 5), remained below 3% for both compounds. In addition, a fractional factorial experimental design was developed to analyze the influence of six different conditions on the analysis. The final optimized HPLC-fluorescence method allowed the analysis of 30 non-pretreated Spanish red wines to evaluate their hydroxytyrosol and tyrosol contents.

  2. Linear ubiquitination in immunity.

    Science.gov (United States)

    Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning

    2015-07-01

    Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms by which linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.

  3. Cytochrome c oxidase subunit 1-based human RNA quantification to enhance mRNA profiling in forensic biology

    Directory of Open Access Journals (Sweden)

    Dong Zhao

    2017-01-01

    Full Text Available RNA analysis offers many potential applications in forensic science, and molecular identification of body fluids by analysis of cell-specific RNA markers represents a new technique for use in forensic cases. However, because forensic materials are often admixed with nonhuman cellular components, human-specific RNA quantification is required for forensic RNA assays. A quantification assay for human RNA was developed in the present study with respect to body fluid samples in forensic biology. The quantitative assay is based on real-time reverse transcription-polymerase chain reaction of mitochondrial RNA cytochrome c oxidase subunit I and is capable of RNA quantification with high reproducibility and a wide dynamic range. The human RNA quantification improves the quality of mRNA profiling in the identification of saliva and semen because the assay can exclude the influence of nonhuman components and reduce the adverse effects of degraded RNA fragments.

  4. Fast comprehensive two-dimensional gas chromatography method for fatty acid methyl ester separation and quantification using dual ionic liquid columns.

    Science.gov (United States)

    Nosheen, Asia; Mitrevski, Blagoj; Bano, Asghari; Marriott, Philip J

    2013-10-18

    Safflower oil is a complex mixture of C18 saturated and unsaturated fatty acids amongst other fatty acids, and achieving separation between these similar structure components using one dimensional gas chromatography (GC) may be difficult. This investigation aims to obtain improved separation of fatty acid methyl esters in safflower oil, and their quantification using comprehensive two-dimensional GC (GC×GC). Here, GC×GC separation is accomplished by the coupling of two ionic liquid (IL) column phases: the combination of SLB-IL111 with IL59 column phases was finally selected since it provided excellent separation of a FAME standard mixture, as well as fatty acids in safflower and linseed oil, compared to other tested column sets. Safflower oil FAME were well separated in a short run of 16 min. FAME validation was demonstrated by method reproducibility, linearity over a range up to 500 mg L(-1), and limits of detection which ranged from 1.9 mg L(-1) to 5.2 mg L(-1) at a split ratio of 20:1. Quantification was carried out using two dilution levels of 200-fold for major components and 20-fold for trace components. The fatty acids C15:0 and C17:0 were not reported previously in safflower oil. The SLB-IL111/IL59 column set proved to be an effective and novel configuration for separation and quantification of vegetable and animal oil fatty acids. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Simultaneous quantification of monoamine neurotransmitters and their biogenic metabolites intracellularly and extracellularly in primary neuronal cell cultures and in sub-regions of guinea pig brain.

    Science.gov (United States)

    Schou-Pedersen, Anne Marie V; Hansen, Stine N; Tveden-Nyborg, Pernille; Lykkesfeldt, Jens

    2016-08-15

    In the present paper, we describe a validated chromatographic method for the simultaneous quantification of monoamine neurotransmitters and their biogenic metabolites intracellularly and extracellularly in primary neuronal cell culture and in sub-regions of the guinea pig brain. Electrochemical detection provided limits of quantification (LOQs) between 3.6 and 12 nM. Within the linear range, recoveries ranged from 90.9 ± 9.9 to 120 ± 14%, and intra-day and inter-day precisions were less than 5.5% and 12%, respectively. The analytical method was applicable for quantification of intracellular and extracellular amounts of monoamine neurotransmitters and their metabolites in guinea pig frontal cortex and hippocampal primary neuronal cell cultures. Noradrenaline, dopamine and serotonin were found in a range from 0.31 to 1.7 pmol per 2 million cells intracellularly, but only the biogenic metabolites could be detected extracellularly. Distinct differences in monoamine concentrations were observed between guinea pig frontal cortex and cerebellum tissue, with higher amounts of dopamine and its metabolites, 3,4-dihydroxyphenylacetic acid and homovanillic acid, in frontal cortex than in cerebellum. The serotonin turnover in guinea pig frontal cortex tissue was successfully predicted from the turnover observed in the frontal cortex cell culture. In conclusion, the present analytical method shows high precision, accuracy and sensitivity and is broadly applicable to monoamine measurements in cell cultures as well as brain biopsies from animal models used in preclinical neurochemistry. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Linearizing W-algebras

    International Nuclear Information System (INIS)

    Krivonos, S.O.; Sorin, A.S.

    1994-06-01

    We show that the Zamolodchikov and Polyakov-Bershadsky nonlinear algebras W_3 and W_3^(2) can be embedded as subalgebras into linear algebras with a finite set of currents. Using these linear algebras we find new field realizations of W_3^(2) and W_3 which could be a starting point for constructing new versions of W-string theories. We also reveal a number of hidden relationships between W_3 and W_3^(2). We conjecture that similar linear algebras can exist for other W-algebras as well. (author). 10 refs

  7. Matrices and linear algebra

    CERN Document Server

    Schneider, Hans

    1989-01-01

    Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it.This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related t

  8. Elements of linear space

    CERN Document Server

    Amir-Moez, A R; Sneddon, I N

    1962-01-01

    Elements of Linear Space is a detailed treatment of the elements of linear spaces, including real spaces with no more than three dimensions and complex n-dimensional spaces. The geometry of conic sections and quadric surfaces is considered, along with algebraic structures, especially vector spaces and transformations. Problems drawn from various branches of geometry are given.Comprised of 12 chapters, this volume begins with an introduction to real Euclidean space, followed by a discussion on linear transformations and matrices. The addition and multiplication of transformations and matrices a

  9. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  10. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions......We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link...... function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...

  11. Pitot-tube flowmeter for quantification of airflow during sleep

    International Nuclear Information System (INIS)

    Kirkness, J P; McGinley, B M; Schwartz, A R; Smith, P L; Patil, S P; Schneider, H; Verma, M; Wheatley, J R; Amis, T C; Erlacher, M

    2011-01-01

    The gold-standard pneumotachograph is not routinely used to quantify airflow during overnight polysomnography due to the size, weight, bulkiness and discomfort of the equipment that must be worn. To overcome these deficiencies that have precluded the use of a pneumotachograph in routine sleep studies, our group developed a lightweight, low dead space 'pitot flowmeter' (based on the pitot-tube principle) for use during sleep. We aimed to examine the characteristics and validate the flowmeter for quantifying airflow and detecting hypopneas during polysomnography by performing a head-to-head comparison with a pneumotachograph. Four experimental paradigms were utilized to determine the technical performance characteristics and the clinical usefulness of the pitot flowmeter in a head-to-head comparison with a pneumotachograph. In each study (1–4), the pitot flowmeter was connected in series with a pneumotachograph under either static flow (flow generator inline or on a face model) or dynamic flow (subject breathing via a polyester face model or on a nasal mask) conditions. The technical characteristics of the pitot flowmeter showed that (1) the airflow resistance ranged from 0.065 ± 0.002 to 0.279 ± 0.004 cm H₂O L⁻¹ s⁻¹ over airflow rates of 10 to 50 L min⁻¹, and (2) on the polyester face model there was a linear relationship between airflow as measured by the pitot flowmeter output voltage and the calibrated pneumotachograph signal (β₁ = 1.08 V L⁻¹ s⁻¹; β₀ = 2.45 V). The clinically relevant performance characteristics (hypopnea detection) showed that (3) when the pitot flowmeter was connected via a mask to the human face model, both the sensitivity and specificity for detecting a 50% decrease in peak-to-peak airflow amplitude was 99.2%. When tested in sleeping human subjects, (4) the pitot flowmeter signal displayed 94.5% sensitivity and 91.5% specificity for the detection of 50% peak-to-peak reductions in pneumotachograph
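
    For readers unfamiliar with the principle, a pitot tube recovers air velocity from the dynamic pressure via Bernoulli's relation, and the flowmeter voltage is then related to flow through a linear calibration such as the β₁/β₀ regression quoted above. The Python sketch below illustrates both steps; the pressure reading and air density are invented, only the β values are taken from the abstract, and this is not the authors' signal-processing chain.

        # Illustrative pitot-tube calculations (hypothetical numbers).
        import numpy as np

        RHO_AIR = 1.2  # kg/m^3, approximate air density at room conditions

        def velocity_from_dynamic_pressure(dp_pa):
            """Bernoulli: v = sqrt(2*dp/rho); dp in Pa, v in m/s (signed)."""
            return np.sign(dp_pa) * np.sqrt(2.0 * np.abs(dp_pa) / RHO_AIR)

        def flow_from_voltage(v_out, beta1=1.08, beta0=2.45):
            """Invert a linear calibration V = beta1*flow + beta0
            (beta1 in V per L/s and beta0 in V, values quoted in the abstract)."""
            return (v_out - beta0) / beta1

        print(velocity_from_dynamic_pressure(25.0))   # ~6.5 m/s for 25 Pa
        print(flow_from_voltage(3.0))                 # ~0.51 L/s for a 3.0 V reading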

  12. Pitot-tube flowmeter for quantification of airflow during sleep.

    Science.gov (United States)

    Kirkness, J P; Verma, M; McGinley, B M; Erlacher, M; Schwartz, A R; Smith, P L; Wheatley, J R; Patil, S P; Amis, T C; Schneider, H

    2011-02-01

    The gold-standard pneumotachograph is not routinely used to quantify airflow during overnight polysomnography due to the size, weight, bulkiness and discomfort of the equipment that must be worn. To overcome these deficiencies that have precluded the use of a pneumotachograph in routine sleep studies, our group developed a lightweight, low dead space 'pitot flowmeter' (based on the pitot-tube principle) for use during sleep. We aimed to examine the characteristics and validate the flowmeter for quantifying airflow and detecting hypopneas during polysomnography by performing a head-to-head comparison with a pneumotachograph. Four experimental paradigms were utilized to determine the technical performance characteristics and the clinical usefulness of the pitot flowmeter in a head-to-head comparison with a pneumotachograph. In each study (1-4), the pitot flowmeter was connected in series with a pneumotachograph under either static flow (flow generator inline or on a face model) or dynamic flow (subject breathing via a polyester face model or on a nasal mask) conditions. The technical characteristics of the pitot flowmeter showed that (1) the airflow resistance ranged from 0.065 ± 0.002 to 0.279 ± 0.004 cm H(2)O L(-1) s(-1) over the airflow rates of 10 to 50 L min(-1). (2) On the polyester face model there was a linear relationship between airflow as measured by the pitot flowmeter output voltage and the calibrated pneumotachograph signal (β(1) = 1.08 V L(-1) s(-1); β(0) = 2.45 V). The clinically relevant performance characteristics (hypopnea detection) showed that (3) when the pitot flowmeter was connected via a mask to the human face model, both the sensitivity and specificity for detecting a 50% decrease in peak-to-peak airflow amplitude was 99.2%. When tested in sleeping human subjects, (4) the pitot flowmeter signal displayed 94.5% sensitivity and 91.5% specificity for the detection of 50% peak-to-peak reductions in pneumotachograph-measured airflow. Our data

  13. Performance of the linear ion trap Orbitrap mass analyzer for qualitative and quantitative analysis of drugs of abuse and relevant metabolites in sewage water.

    Science.gov (United States)

    Bijlsma, Lubertus; Emke, Erik; Hernández, Félix; de Voogt, Pim

    2013-03-20

    This work illustrates the potential of liquid chromatography coupled to a hybrid linear ion trap Fourier Transform Orbitrap mass spectrometer for the simultaneous identification and quantification of 24 drugs of abuse and relevant metabolites in sewage water. The developed methodology consisted of automatic solid-phase extraction using Oasis HLB cartridges, chromatographic separation of the targeted drugs, full-scan accurate mass data acquisition under positive electrospray ionization mode over an m/z range of 50-600 Da at a resolution of 30,000 FWHM, and simultaneous MS(n) measurements to obtain information on fragment ions generated in the linear ion trap. Accurate mass of the protonated molecule, together with at least one nominal mass product ion and retention time, allowed the confident identification of the compounds detected in these complex matrices. In addition to the highly reliable qualitative analysis, the Orbitrap analyzer also proved to have satisfactory potential for quantification at sub-ppb analyte levels, a possibility that has been very little explored in the literature until now. The limits of quantification ranged from 4 to 68 ng L(-1) in influent sewage water, and from 2 to 35 ng L(-1) in effluent, with the exception of MDA, morphine and THC, which presented higher values as a consequence of the high ionization suppression in this type of sample. Satisfactory recoveries (70-120%) and precision were obtained. Several drugs of abuse could be identified and quantified, mainly MDMA, benzoylecgonine, codeine, oxazepam and temazepam. Orbitrap also showed potential for retrospective investigation of ketamine metabolites in the samples without the need of additional analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Validated electrochemical and chromatographic quantifications of some antibiotic residues in pharmaceutical industrial waste water.

    Science.gov (United States)

    Ibrahim, Heba K; Abdel-Moety, Mona M; Abdel-Gawad, Sherif A; Al-Ghobashy, Medhat A; Kawy, Mohamed Abdel

    2017-03-01

    Realistic implementation of ion selective electrodes (ISEs) into environmental monitoring programs has always been a challenging task. This can be largely attributed to difficulties in validating ISE assay results. In this study, the electrochemical response of amoxicillin trihydrate (AMX), ciprofloxacin hydrochloride (CPLX), trimethoprim (TMP), and norfloxacin (NFLX) was studied by fabricating sensitive membrane electrodes belonging to two types of ISEs, polyvinyl chloride (PVC) membrane electrodes and glassy carbon (GC) electrodes. The linear response of the membrane electrodes was in the concentration range of 10⁻⁵-10⁻² mol/L. For the PVC membrane electrodes, Nernstian slopes of 55.1, 56.5, 56.5, and 54.0 mV/decade were achieved, over pH 4-8 for AMX, CPLX, and NFLX and over pH 3-6 for TMP. For the GC electrodes, Nernstian slopes of 59.1, 58.2, 57.0, and 58.2 mV/decade were achieved, over pH 4-8 for AMX, CPLX, and NFLX and over pH 3-6 for TMP. In addition to assay validation to international industry standards, the fabricated electrodes were also cross-validated against conventional separation techniques: high performance liquid chromatography (HPLC) and thin layer chromatography (TLC)-densitometry. The HPLC assay was applied in a concentration range of 0.5-10.0 μg/mL for all target analytes. The TLC-densitometry was adopted over a concentration range of 0.3-1.0 μg/band for AMX and 0.1-0.9 μg/band for CPLX, NFLX, and TMP. The proposed techniques were successfully applied for quantification of the selected drugs either in pure form or in waste water samples obtained from pharmaceutical plants. The actual waste water samples were subjected to solid phase extraction (SPE) for pretreatment prior to the application of the chromatographic techniques (HPLC and TLC-densitometry). On the other hand, the fabricated electrodes were successfully applied for quantification of the antibiotic residues in actual
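
    The Nernstian slopes quoted here are simply the slopes of electrode potential versus the logarithm of concentration over the linear range (the ideal monovalent-ion response is about 59.2 mV/decade at 25 °C). The Python sketch below shows that regression with invented potential readings; it is not the authors' calibration data.

        # Hypothetical ISE calibration: potential (mV) versus log10(concentration).
        import numpy as np

        conc = np.array([1e-5, 1e-4, 1e-3, 1e-2])        # mol/L calibration standards
        emf = np.array([112.0, 169.5, 227.8, 284.6])     # mV, invented readings

        slope, intercept = np.polyfit(np.log10(conc), emf, 1)
        print(f"slope = {slope:.1f} mV/decade (Nernstian ideal ~59.2 mV/decade at 25 C)")

        # Back-calculate an unknown concentration from its measured potential.
        unknown_conc = 10 ** ((200.0 - intercept) / slope)
        print(f"unknown ~ {unknown_conc:.2e} mol/L")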

  15. A fast and robust hepatocyte quantification algorithm including vein processing

    Directory of Open Access Journals (Sweden)

    Homeyer André

    2010-03-01

    Full Text Available Abstract. Background: Quantification of different types of cells is often needed for analysis of histological images. In our project, we compute the relative number of proliferating hepatocytes for the evaluation of the regeneration process after partial hepatectomy in normal rat livers. Results: Our presented automatic approach for hepatocyte (HC) quantification is suitable for the analysis of an entire digitized histological section given in the form of a series of images. It is the main part of an automatic hepatocyte quantification tool that allows for the computation of the ratio between the number of proliferating HC-nuclei and the total number of all HC-nuclei for a series of images in one processing run. The processing pipeline allows us to obtain desired and valuable results for a wide range of images with different properties without additional parameter adjustment. Comparing the obtained segmentation results with a manually retrieved segmentation mask which is considered to be the ground truth, we achieve results with sensitivity above 90% and a false positive fraction below 15%. Conclusions: The proposed automatic procedure gives results with high sensitivity and low false positive fraction and can be applied to process entire stained sections.

  16. Quantification of the vocal folds’ dynamic displacements

    International Nuclear Information System (INIS)

    Hernández-Montes, María del Socorro; Muñoz, Silvino; De La Torre, Manuel; Flores, Mauricio; Pérez, Carlos; Mendoza-Santoyo, Fernando

    2016-01-01

    Fast dynamic data acquisition techniques are required to investigate the motional behavior of the vocal folds (VFs) when they are subjected to a steady air-flow through the trachea. High-speed digital holographic interferometry (DHI) is a non-invasive full-field-of-view technique that has proved its usefulness for studying rapid and non-repetitive object movements. Hence it is an ideal technique, used here to measure VF displacements and vibration patterns at 2000 fps. Analyses of a set of 200 displacement images showed that the VFs' vibration cycles are established along their width (y) and length (x). Furthermore, the maximum deformation of the right and left VF areas may be quantified from these images, which in itself represents an important result in the characterization of this structure. At a controlled air pressure, VF displacements fall within the range ∼100–1740 nm, with a calculated precision and accuracy that yield a coefficient of variation of 1.91%. High-speed acquisition of full-field images of VFs and the quantification of their displacements are in themselves significant data in the study of their functional and physiological behavior, since voice quality and production depend on how the VFs vibrate, i.e. their displacement amplitude and frequency. Additionally, the use of high-speed DHI avoids prolonged examinations and represents a significant scientific and technological alternative contribution to advancing the knowledge and working mechanisms of these tissues. (paper)

  17. Information theoretic quantification of diagnostic uncertainty.

    Science.gov (United States)

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes' theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
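
    The two ingredients discussed here, Bayes' rule for the post-test probability and an entropy measure of the residual uncertainty, fit in a few lines of code. The Python sketch below is a generic illustration with invented sensitivity, specificity and pre-test values; it is not the authors' framework.

        # Post-test probability (Bayes' rule) and diagnostic uncertainty as
        # binary Shannon entropy, in bits. All numbers are invented.
        import math

        def post_test_probability(pretest, sensitivity, specificity, positive_result):
            if positive_result:
                num, miss = sensitivity * pretest, (1 - specificity) * (1 - pretest)
            else:
                num, miss = (1 - sensitivity) * pretest, specificity * (1 - pretest)
            return num / (num + miss)

        def binary_entropy(p):
            """Remaining uncertainty (bits) about disease presence at probability p."""
            return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

        pre = 0.30
        post = post_test_probability(pre, sensitivity=0.90, specificity=0.85, positive_result=True)
        print(post, binary_entropy(pre) - binary_entropy(post))   # probability and uncertainty reduction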

  18. Quantification of the vocal folds’ dynamic displacements

    Science.gov (United States)

    del Socorro Hernández-Montes, María; Muñoz, Silvino; De La Torre, Manuel; Flores, Mauricio; Pérez, Carlos; Mendoza-Santoyo, Fernando

    2016-05-01

    Fast dynamic data acquisition techniques are required to investigate the motional behavior of the vocal folds (VFs) when they are subjected to a steady air-flow through the trachea. High-speed digital holographic interferometry (DHI) is a non-invasive full-field-of-view technique that has proved its usefulness for studying rapid and non-repetitive object movements. Hence it is an ideal technique, used here to measure VF displacements and vibration patterns at 2000 fps. Analyses of a set of 200 displacement images showed that the VFs' vibration cycles are established along their width (y) and length (x). Furthermore, the maximum deformation of the right and left VF areas may be quantified from these images, which in itself represents an important result in the characterization of this structure. At a controlled air pressure, VF displacements fall within the range ~100-1740 nm, with a calculated precision and accuracy that yield a coefficient of variation of 1.91%. High-speed acquisition of full-field images of VFs and the quantification of their displacements are in themselves significant data in the study of their functional and physiological behavior, since voice quality and production depend on how the VFs vibrate, i.e. their displacement amplitude and frequency. Additionally, the use of high-speed DHI avoids prolonged examinations and represents a significant scientific and technological alternative contribution to advancing the knowledge and working mechanisms of these tissues.

  19. Linear system theory

    Science.gov (United States)

    Callier, Frank M.; Desoer, Charles A.

    1991-01-01

    The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.

  20. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    Science.gov (United States)

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

    The study of complex proteomes places greater demands on quantification methods based on mass spectrometry. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, the MS/MS total ion count coupled with the spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant can improve the accuracy of quantification with a better dynamic range.
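
    The length-normalized spectral-count idea behind such tools can be illustrated with the standard NSAF (normalized spectral abundance factor) calculation. The counts and protein lengths below are invented, and this is not freeQuant's own algorithm, which additionally handles shared peptides and MS/MS total ion current.

        # NSAF-style label-free quantification from spectral counts (invented data).
        spectral_counts = {"ProteinA": 120, "ProteinB": 45, "ProteinC": 9}
        lengths_aa      = {"ProteinA": 600, "ProteinB": 150, "ProteinC": 300}

        # Spectral abundance factor: counts normalized by protein length.
        saf = {p: spectral_counts[p] / lengths_aa[p] for p in spectral_counts}
        total = sum(saf.values())

        # NSAF: each SAF divided by the sum over all proteins in the sample.
        nsaf = {p: v / total for p, v in saf.items()}
        for protein, value in sorted(nsaf.items(), key=lambda kv: -kv[1]):
            print(f"{protein}: NSAF = {value:.3f}")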

  1. Detection and quantification of Leveillula taurica growth in pepper leaves.

    Science.gov (United States)

    Zheng, Zheng; Nonomura, Teruo; Bóka, Károly; Matsuda, Yoshinori; Visser, Richard G F; Toyoda, Hideyoshi; Kiss, Levente; Bai, Yuling

    2013-06-01

    Leveillula taurica is an obligate fungal pathogen that causes powdery mildew disease on a broad range of plants, including important crops such as pepper, tomato, eggplant, onion, and cotton. The early stage of this disease is difficult to diagnose, and the disease can easily spread unobserved, for example in pepper and tomato production fields and greenhouses. The objective of this study was to develop a detection and quantification method for L. taurica biomass in pepper leaves, with special regard to the early stages of infection. We monitored the development of the disease to time the infection process on the leaf surface as well as inside the pepper leaves. The initial and final steps of the infection taking place on the leaf surface were consecutively observed using a dissecting microscope and a scanning electron microscope. The development of the intercellular mycelium in the mesophyll was followed by light and transmission electron microscopy. A pair of L. taurica-specific primers was designed based on the internal transcribed spacer sequence of L. taurica and used in a real-time polymerase chain reaction (PCR) assay to quantify the fungal DNA during infection. The specificity of this assay was confirmed by testing the primer pair with DNA from host plants and also from another powdery mildew species, Oidium neolycopersici, infecting tomato. A standard curve was obtained for absolute quantification of L. taurica biomass. In addition, we tested a relative quantification method using a plant gene as reference, and the obtained results were compared with visual disease index scoring. The real-time PCR assay for L. taurica provides a valuable tool for detection and quantification of this pathogen in breeding activities as well as in plant-microbe interaction studies.
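
    Both quantification modes mentioned here are short calculations once the Ct values are in hand: absolute quantification interpolates a Ct on a standard curve of log10(DNA amount), and relative quantification uses the delta-delta Ct method against a plant reference gene. The Python sketch below illustrates both with invented Ct values; it is not the authors' assay data.

        # Hypothetical qPCR quantification calculations (all Ct values invented).
        import numpy as np

        # Absolute quantification: standard curve Ct = slope*log10(amount) + intercept.
        log10_amount = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # log10(pg fungal DNA)
        ct_standards = np.array([34.1, 30.8, 27.4, 24.0, 20.7])
        slope, intercept = np.polyfit(log10_amount, ct_standards, 1)
        efficiency = 10 ** (-1.0 / slope) - 1.0                  # ~1.0 means 100% efficient

        ct_sample = 26.2
        amount_pg = 10 ** ((ct_sample - intercept) / slope)      # interpolated fungal DNA

        # Relative quantification (delta-delta Ct) against a plant reference gene,
        # comparing an infected sample with a calibrator sample.
        ddct = (26.2 - 21.0) - (31.5 - 21.2)   # (Ct_target - Ct_ref)_sample - (Ct_target - Ct_ref)_calibrator
        fold_change = 2 ** (-ddct)
        print(amount_pg, efficiency, fold_change)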

  2. Validation for The Quantification of Andrographolide Isolated from Andrographis paniculata Nees Plant Using HPLC

    Directory of Open Access Journals (Sweden)

    Yandi Syukri

    2015-12-01

    Full Text Available The aim of this study was to develop a quantitative HPLC analysis of andrographolide isolated from Andrographis paniculata and to screen different solvents in preliminary studies for the preparation of a Self Nano Emulsifying Drug Delivery System (SNEDDS). The separation was achieved on a Sunfire C18 column with an isocratic mixture of methanol and water at a ratio of 6:4, v/v, as the mobile phase. The method for determining the content of isolated andrographolide showed adequate precision, with an RSD smaller than 1%. The accuracy was analyzed by adding standard andrographolide, and good recovery values were obtained for all concentrations used. The HPLC method developed in this study showed specificity and selectivity, with linearity in the working range and good precision and accuracy, making it very suitable for the quantification of isolated andrographolide. Compared to the standard, the purity of the isolated andrographolide was 95.74 ± 0.29%. In the preliminary study, the highest solubilities of isolated andrographolide in the oil, surfactant, and co-surfactant phases for the preparation of SNEDDS were 1.226 ± 0.009 mg mL-1 in Capryol-90, 2.965 ± 0.014 mg mL-1 in Tween 20, and 6.074 ± 0.101 mg mL-1 in PEG 400, respectively. In conclusion, this method is suitable for determining the solubility of isolated andrographolide for the preparation of SNEDDS.

  3. Enzymatic biosensors based on ingá-cipó peroxidase immobilised on sepiolite for TBHQ quantification.

    Science.gov (United States)

    Regina de Oliveira, Tássia; Grawe, Gregory Ferreira; Moccelini, Sally Katiuce; Terezo, Ailton J; Castilho, Marilza

    2014-05-07

    Sepiolite clay mineral was used as a support for the immobilisation of the peroxidase enzyme from ingá-cipó (Inga edulis Mart.) and was combined with graphite powder, multi-walled carbon nanotubes (CNTs), mineral oil, and Nafion 0.5% (v/v) in the development of a new biosensor for the determination of the antioxidant tert-butylhydroquinone (TBHQ) by square-wave voltammetry (SWV). For the optimisation and application of the biosensor, several parameters were investigated to determine the optimum experimental conditions using SWV. The best performance was obtained using a 0.1 mol L(-1) phosphate buffer solution (pH 7.0), 4.0 × 10(-4) mol L(-1) hydrogen peroxide, a frequency of 50 Hz, a pulse amplitude of 60 mV, and a scan increment of 6 mV. The biosensor showed good repeatability and reproducibility and remained stable for a period of 20 weeks. The analytical curve revealed a linear response range of 1.65 to 9.82 mg L(-1) (r = 0.994), with detection and quantification limits of 0.41 and 1.25 mg L(-1), respectively. A recovery study of TBHQ in salad dressing samples yielded values from 99.6 to 104.8%. The proposed biosensor was successfully used for the determination of TBHQ in commercial salad dressing samples, giving a relative error of 5.4% relative to the comparative chromatographic method.

  4. Simultaneous quantification of stevioside and rebaudioside A in different stevia samples collected from the Indian subcontinent

    Directory of Open Access Journals (Sweden)

    Karishma Chester

    2012-01-01

    Full Text Available Background: A high performance thin layer chromatographic (HPTLC) method was developed for the simultaneous estimation of stevioside and rebaudioside A in Stevia rebaudiana samples collected from different regions of the Indian subcontinent. Materials and Methods: The separation was achieved using acetone:ethyl acetate:water (5:4:1, v/v/v) as the solvent system on precoated silica gel 60 F254 TLC plates. The densitometric quantification of the stevia glycosides was carried out at a wavelength of 360 nm in absorption mode after spraying with anisaldehyde-sulphuric acid as the detecting reagent. Results: Well resolved peaks for stevioside and rebaudioside A were observed at Rf values of 0.31 ± 0.02 and 0.21 ± 0.02, respectively. The calibration curves were linear over a wide concentration range of 100-2000 ng per spot, with good correlation coefficients of 0.996 and 0.991 for stevioside and rebaudioside A, respectively. Conclusions: The proposed method was validated as per the ICH (International Conference on Harmonization) guidelines and found to be simple, sensitive, economic, reproducible, robust and accurate for quantitative analysis of stevia glycosides, which can be applied for quality control of stevia as well as to check.

  5. Development and validation of a method for the quantification of fructooligosaccharides in a prebiotic ice cream

    Directory of Open Access Journals (Sweden)

    Claudia L. González-Aguirre

    2018-02-01

    Full Text Available Context: Fructooligosaccharides (FOS) are known as oligofructans, oligosaccharides or oligofructose, and fall within the concept of prebiotics. One of the methods most commonly used in industry for the quantification and quality control of nutraceutical substances is high performance liquid chromatography (HPLC). Aims: To develop a procedure for the determination of FOS by HPLC in raw materials and a prebiotic ice cream. Methods: For the chromatographic separation, an HPLC system with a refractive index (RI) detector was used. The separation was performed using two coupled Sugar-Pak I™ columns in an isocratic procedure with type 1 water at 0.35 mL/min. Kestose (GF2), nystose (GF3) and fructofuranosylnystose (GF4) were used as standards. Robustness was assessed by applying the Youden and Steiner test. Results: Good linear correlations were obtained (y = 14191.4470x + 285684.2, r² = 0.9904) within the concentration range of 8.0-12.0 mg/mL. The FOS recoveries were 99.5%, with intra-day and inter-day relative standard deviations (RSD) of less than 0.8%. The robustness test showed that column temperature and flow rate are critical factors in the method. Conclusions: This reliable, simple and cost-effective method could be applied to the routine monitoring of FOS (GF2, GF3, and GF4) in raw materials and prebiotic ice creams.

  6. Development and validation of an HPLC-FLD method for milbemectin quantification in dog plasma.

    Science.gov (United States)

    Xu, Qianqian; Xiang, Wensheng; Li, Jichang; Liu, Yong; Yu, Xiaolei; Zhang, Yaoteng; Qu, Mingli

    2010-07-15

    Milbemectin is a widely used veterinary antiparasitic agent. A high-performance liquid chromatography method with fluorescence detection (HPLC-FLD) is described for the determination of milbemectin in dog plasma. The derivatization procedure consisted of mixing 1-methylimidazole [MI, MI-ACN (1:1, v/v), 100 microL] and trifluoroacetic anhydride [TFAA, TFAA-ACN (1:2, v/v), 150 microL] with a subsequent incubation for 3 s at room temperature to obtain a fluorescent derivative; the derivatization was reproducible in different blood samples and the derivatives proved to be stable for at least 80 h at room temperature. The HPLC method was developed on a C18 column with FLD detection at an excitation wavelength of 365 nm and an emission wavelength of 475 nm, with a mobile phase consisting of methanol and water in the ratio of 98:2 (v/v). The lower limit of quantification of the assay was 1 ng/mL. The calibration curve was linear over the concentration range of 1-200 ng/mL. The intra- and inter-day accuracy was >94% and the precision, expressed as the coefficient of variation, was <5%. This method is specific, simple, accurate, precise and easily adaptable to measure milbemycin in the blood of other animals. Crown Copyright 2010. Published by Elsevier B.V. All rights reserved.

  7. Software tools for quantification of X-ray microtomography at the UGCT

    Energy Technology Data Exchange (ETDEWEB)

    Vlassenbroeck, J. [Department of Subatomic and Radiation Physics, Ghent University, Proeftuinstraat 86, B-9000 Gent (Belgium)], E-mail: jelle.vlassenbroeck@ugent.be; Dierick, M.; Masschaele, B. [Department of Subatomic and Radiation Physics, Ghent University, Proeftuinstraat 86, B-9000 Gent (Belgium); Cnudde, V. [Department of Geology and Soil Science, Ghent University, Krijgslaan 281/S8, B-9000 Gent (Belgium); Van Hoorebeke, L. [Department of Subatomic and Radiation Physics, Ghent University, Proeftuinstraat 86, B-9000 Gent (Belgium); Jacobs, P. [Department of Geology and Soil Science, Ghent University, Krijgslaan 281/S8, B-9000 Gent (Belgium)

    2007-09-21

    The technique of X-ray microtomography using X-ray tube radiation offers an interesting tool for the non-destructive investigation of a wide range of materials. A major challenge lies in the analysis and quantification of the resulting data, allowing for a full characterization of the sample under investigation. In this paper, we discuss the software tools for reconstruction and analysis of tomographic data that are being developed at the UGCT. The tomographic reconstruction is performed using Octopus, a high-performance and user-friendly software package. The reconstruction process transforms the raw acquisition data into a stack of 2D cross-sections through the sample, resulting in a 3D data set. A number of artifact and noise reduction algorithms are integrated to reduce ring artifacts, beam hardening artifacts, COR misalignment, detector or stage tilt, pixel non-linearities, etc. These corrections are very important to facilitate the analysis of the 3D data. The analysis of the 3D data focuses primarily on the characterization of pore structures, but will be extended to other applications. A first package for the analysis of pore structures in three dimensions was developed under Matlab. A new package, called Morpho+, is being developed in a C++ environment, with optimizations and extensions of the previously used algorithms. The current status of this project will be discussed. Examples of pore analysis can be found in pharmaceuticals, material science, geology and numerous other fields.

  8. Quantification and characterization of enzymatically produced hyaluronan with fluorophore-assisted carbohydrate electrophoresis.

    Science.gov (United States)

    Kooy, Floor K; Ma, Muyuan; Beeftink, Hendrik H; Eggink, Gerrit; Tramper, Johannes; Boeriu, Carmen G

    2009-01-15

    Hyaluronan (HA) is a polysaccharide with high-potential medical applications that depend on the chain length and the chain length distribution. Special interest goes to homogeneous HA oligosaccharides, which can be enzymatically produced using Pasteurella multocida hyaluronan synthase (PmHAS). We have developed a sensitive, simple, and fast method, based on fluorophore-assisted carbohydrate electrophoresis (FACE), for the characterization and quantification of polymerization products. A chromatographically pure fluorescent template was synthesized from HA tetrasaccharide (HA4) and 2-aminobenzoic acid. HA4-fluor and HA4 were used as templates for PmHAS-mediated polymerization of nucleotide sugars. All products, fluorescent and nonfluorescent, were analyzed with gel electrophoresis and quantified using lane densitometry. Comparison of HA4- and HA4-fluor-derived polymers showed that the fluorophore did not negatively influence the PmHAS-mediated polymerization. Only even-numbered oligosaccharide products were observed using HA4-fluor or HA4 as the template. The fluorophore intensity was linearly related to its concentration, and the limit of detection was determined to be 7.4 pmol per product band. With this assay, we can now differentiate oligosaccharides of size range DP2 (degree of polymerization 2) to approximately DP400, monitor the progress of polymerization reactions, and measure subtle differences in polymerization rate. Quantifying polymerization products enables us to study the influence of experimental conditions on HA synthesis.

  9. Capillary gel electrophoresis for the quantification and purity determination of recombinant proteins in inclusion bodies.

    Science.gov (United States)

    Espinosa-de la Garza, Carlos E; Perdomo-Abúndez, Francisco C; Campos-García, Víctor R; Pérez, Néstor O; Flores-Ortiz, Luis F; Medina-Rivero, Emilio

    2013-09-01

    In this work, a high-resolution CGE method for the quantification and purity determination of recombinant proteins was developed, involving a single-component solubilization solution for inclusion bodies (IBs). Different recombinant proteins expressed as IBs were used to show the method's capabilities, with recombinant interferon-β 1b as the model protein for method validation. Method linearity was verified in the range from 0.05 to 0.40 mg/mL and a coefficient of determination (r²) of 0.99 was obtained. The LOQ and LOD were 0.018 and 0.006 mg/mL, respectively. The RSD for the protein content repeatability test was 2.29%, and the RSD for the protein purity repeatability test was 4.24%. Method accuracy was higher than 90%. Specificity was confirmed, as the method was able to separate the recombinant interferon-β 1b monomer from other aggregates and impurities. Sample content and purity were demonstrated to be stable for up to 48 h. Overall, this method is suitable for the analysis of recombinant proteins in IBs according to the attributes established in the International Conference on Harmonization guidelines. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Comparison of Suitability of the Most Common Ancient DNA Quantification Methods.

    Science.gov (United States)

    Brzobohatá, Kristýna; Drozdová, Eva; Smutný, Jiří; Zeman, Tomáš; Beňuš, Radoslav

    2017-04-01

    Ancient DNA (aDNA) extracted from historical bones is damaged and fragmented into short segments, present in low quantity, and usually copurified with microbial DNA. A wide range of DNA quantification methods are available. The aim of this study was to compare the five most common DNA quantification methods for aDNA. Quantification methods were tested on DNA extracted from skeletal material originating from an early medieval burial site. The tested methods included ultraviolet (UV) absorbance, real-time quantitative polymerase chain reaction (qPCR) based on SYBR® Green detection, real-time qPCR based on a forensic kit, quantification via fluorescent dyes bonded to DNA, and fragment analysis. Differences between groups were tested using a paired t-test. Methods that measure total DNA present in the sample (NanoDrop™ UV spectrophotometer and Qubit® fluorometer) showed the highest concentrations. Methods based on real-time qPCR underestimated the quantity of aDNA. The most accurate method of aDNA quantification was fragment analysis, which also allows DNA quantification of the desired length and is not affected by PCR inhibitors. Methods based on the quantification of the total amount of DNA in samples are unsuitable for ancient samples as they overestimate the amount of DNA, presumably due to the presence of microbial DNA. Real-time qPCR methods give undervalued results due to DNA damage and the presence of PCR inhibitors. DNA quantification methods based on fragment analysis show not only the quantity of DNA but also fragment length.

  11. Quantification of strontium in human serum by ICP-MS using alternate analyte-free matrix and its application to a pilot bioequivalence study of two strontium ranelate oral formulations in healthy Chinese subjects.

    Science.gov (United States)

    Zhang, Dan; Wang, Xiaolin; Liu, Man; Zhang, Lina; Deng, Ming; Liu, Huichen

    2015-01-01

    A rapid, sensitive and accurate ICP-MS method using an alternate analyte-free matrix for calibration standards preparation and a rapid direct dilution procedure for sample preparation was developed and validated for the quantification of exogenous strontium (Sr) from the drug in human serum. Serum was prepared by direct dilution (1:29, v/v) in an acidic solution consisting of nitric acid (0.1%) and germanium (Ge) added as internal standard (IS), to obtain a simple and high-throughput preparation procedure with minimized matrix effect and good repeatability. ICP-MS analysis was performed using collision cell technology (CCT) mode. An alternate matrix method, using distilled water as an analyte-free matrix for the preparation of calibration standards (CS), was used to avoid the influence of endogenous Sr in serum on the quantification. The method was validated in terms of selectivity, carry-over, matrix effects, lower limit of quantification (LLOQ), linearity, precision and accuracy, and stability. Instrumental linearity was verified in the range of 1.00-500 ng/mL, corresponding to a concentration range of 0.0300-15.0 μg/mL in a 50 μL sample of serum matrix and alternate matrix. Intra- and inter-day precision as relative standard deviation (RSD) were less than 8.0% and accuracy as relative error (RE) was within ±3.0%. The method allowed a high sample throughput, and was sensitive and accurate enough for a pilot bioequivalence study in healthy male Chinese subjects following single oral administration of two strontium ranelate formulations containing 2 g strontium ranelate. Copyright © 2014 Elsevier GmbH. All rights reserved.
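    The internal-standard calibration and dilution correction described above can be illustrated with a small Python sketch; the analyte/IS ratios, the fitted line and the 30-fold dilution factor implied by the 1:29 (v/v) dilution are assumptions made for illustration only.

```python
# Hypothetical internal-standard calibration: analyte/IS intensity ratio vs.
# standard concentration prepared in an analyte-free matrix (here, water).
import numpy as np

std_conc = np.array([1.0, 5.0, 25.0, 100.0, 250.0, 500.0])    # ng/mL Sr in the standard
ratio    = np.array([0.012, 0.059, 0.300, 1.21, 3.02, 6.05])  # Sr signal / Ge (IS) signal

slope, intercept = np.polyfit(std_conc, ratio, 1)

def serum_concentration(sample_ratio, dilution_factor=30):
    """Back-calculate the serum concentration (ng/mL) from the measured
    analyte/IS ratio, correcting for the 1:29 (v/v) dilution."""
    diluted = (sample_ratio - intercept) / slope
    return diluted * dilution_factor

print(serum_concentration(2.4))   # e.g. roughly 200 ng/mL diluted -> ~6 ug/mL in serum
```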

  12. Quantification of citalopram or escitalopram and their demethylated metabolites in neonatal hair samples by liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Frison, Giampietro; Favretto, Donata; Vogliardi, Susanna; Terranova, Claudio; Ferrara, Santo Davide

    2008-08-01

    Citalopram and escitalopram are highly selective serotonin reuptake inhibitors widely used in the treatment of depression. They exhibit adverse drug reactions and side effects, however, and the development of specific methods for their determination is of great interest in clinical and forensic toxicology. A liquid chromatography-tandem mass spectrometry method has been developed and validated for the assay of citalopram, escitalopram, and their demethylated metabolites in 10-mg hair samples. The analytes were extracted by incubation in methanol and liquid/liquid extraction with diethyl ether/dichloromethane. Gradient elution on a narrow bore C18 column was realized using clomipramine-d3 as an internal standard. Positive ion electrospray ionization and tandem mass spectrometry determination by collision-induced dissociation were performed in an ion trap mass spectrometer. The method exhibited a linear range of 25 to 2000 pg/mg, a quantification limit of 25 pg/mg for all analytes, relative standard deviations in the range of 12.10 to 9.80 (intraassay), and 13.80 to 11.78 (interassay), and accuracies (as percent recovery of the spiked standards) in the range of 90% to 110%; it was applied to the determination of citalopram and escitalopram and their metabolites in hair samples of two newborns to document their in utero exposure to the drugs. The method proved suitable for neonatal hair analysis of citalopram or escitalopram and was applied to two real cases of gestational exposure.

  13. PIV Uncertainty Quantification and Beyond

    NARCIS (Netherlands)

    Wieneke, B.F.A.

    2017-01-01

    The fundamental properties of computed flow fields using particle imaging velocimetry (PIV) have been investigated, viewing PIV processing as a black box without going into algorithmic details. PIV processing can be analyzed using a linear filter model, i.e. assuming that the computed

  14. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  15. Uncertainty Quantification in Aerodynamics Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  16. Quantification of virus syndrome in chili peppers

    African Journals Online (AJOL)

    Jane

    2011-06-15

    Jun 15, 2011 ... alternative for the quantification of the disease' syndromes in regards to this crop. The result of these ..... parison of treatments such as cultivars or control measures and ..... Vascular discoloration and stem necrosis. 2.

  17. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.; Curioni, A.; Fedulova, I.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques that employ matrix factorizations incur a cubic cost

  18. HPLC determination of flavonoid glycosides in Mongolian Dianthus versicolor Fisch. (Caryophyllaceae) compared with quantification by UV spectrophotometry.

    Science.gov (United States)

    Obmann, Astrid; Purevsuren, Sodnomtseren; Zehl, Martin; Kletter, Christa; Reznicek, Gottfried; Narantuya, Samdan; Glasl, Sabine

    2012-01-01

    Dianthus versicolor is used in traditional Mongolian medicine against liver impairment. Fractions enriched in flavone-di- and triglycosides were shown to enhance bile secretion. Therefore, reliable and accurate analytical methods are needed for the determination of these flavonoids in the crude drug and extracts thereof. To provide a validated HPLC-DAD (diode array detector) method especially developed for the separation of polar flavonoids and to compare the data obtained with those evaluated by UV spectrophotometry. Separations were carried out on an Aquasil® C₁₈-column (4.6 mm × 250.0 mm, 5 µm) with a linear gradient of acetonitrile and water (adjusted to pH 2.8 with trifluoroacetic acid) as mobile phase. Rutoside was employed as internal standard with linear behavior in a concentration range of 0.007-3.5 mg/mL. Accuracy was determined by spiking the crude drug with saponarin resulting in recoveries between 92% and 102%. The method allows the quantification of highly polar flavonoid glycosides and the determination of their total content. For saponarin a linear response was evaluated within the range 0.007-3.5 mg/mL (R²  > 0.9999). It was proven that threefold sonication represents a time-saving, effective and cheap method for the extraction of the polar flavonoid glycosides. The contents determined by HPLC were shown to be in agreement with those obtained employing UV spectrophotometry. The study has indicated that the newly developed HPLC method represents a powerful technique for the quality control of D. versicolor. Ultraviolet spectrophotometry may be used alternatively provided that the less polar flavonoids are removed by purification. Copyright © 2011 John Wiley & Sons, Ltd.

  19. Direct qPCR quantification using the Quantifiler(®) Trio DNA quantification kit.

    Science.gov (United States)

    Liu, Jason Yingjie

    2014-11-01

    The effectiveness of a direct quantification assay is essential to the adoption of the combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler(®) Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler(®) Trio kit enabled the detection of less than 10pg of DNA in unprocessed touch samples and also minimizes the stochastic effect experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler(®) Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler(®) Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit(*)) for low-level touch DNA samples indicates that direct quantification using the Quantifiler(®) Trio DNA quantification kit is more reliable than the Quantifiler(®) Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  1. A novel high-performance thin layer chromatography method for quantification of long chain aliphatic hydrocarbons from Cissus quadrangularis

    Directory of Open Access Journals (Sweden)

    Vandana Jain

    2016-08-01

    Full Text Available Context: High-performance thin layer chromatography (HPTLC) is an analytical technique which can be used for the determination of constituents or marker components in various parts of plants. Earlier studies have estimated phytoconstituents from the stem and other aerial plant parts of Cissus quadrangularis Linn. Estimation of hydrocarbons can also be done successfully by HPTLC with suitable derivatization. Aims: To develop and validate a simple and rapid method for the estimation of long chain aliphatic hydrocarbons from the leaves of C. quadrangularis using the HPTLC technique. Methods: Precoated silica gel 60 F254 plates were used as the stationary phase. The mobile phase used was hexane (100%). The detection of spots was carried out using berberine sulphate as the detecting reagent. Results: The method was validated in terms of linearity, sensitivity, accuracy, and precision. The linearity range was found to be 2-10 µg/mL, the limit of detection 0.127 µg/mL, and the limit of quantification 0.384 µg/mL. Conclusions: A novel, simple, accurate, precise and sensitive HPTLC method has been developed and validated for the estimation of long chain aliphatic hydrocarbons obtained from the leaves of C. quadrangularis Linn.

  2. Quantification of trace metals in water using complexation and filter concentration.

    Science.gov (United States)

    Dolgin, Bella; Bulatov, Valery; Japarov, Julia; Elish, Eyal; Edri, Elad; Schechter, Israel

    2010-06-15

    Various metals undergo complexation with organic reagents, resulting in colored products. In practice, their molar absorptivities allow for quantification in the ppm range. However, a proper pre-concentration of the colored complex on paper filter lowers the quantification limit to the low ppb range. In this study, several pre-concentration techniques have been examined and compared: filtering the already complexed mixture, complexation on filter, and dipping of dye-covered filter in solution. The best quantification has been based on the ratio of filter reflectance at a certain wavelength to that at zero metal concentration. The studied complex formations (Ni ions with TAN and Cd ions with PAN) involve production of nanoparticle suspensions, which are associated with complicated kinetics. The kinetics of the complexation of Ni ions with TAN has been investigated and optimum timing could be found. Kinetic optimization in regard to some interferences has also been suggested.

  3. Further linear algebra

    CERN Document Server

    Blyth, T S

    2002-01-01

    Most of the introductory courses on linear algebra develop the basic theory of finite-dimensional vector spaces, and in so doing relate the notion of a linear mapping to that of a matrix. Generally speaking, such courses culminate in the diagonalisation of certain matrices and the application of this process to various situations. Such is the case, for example, in our previous SUMS volume Basic Linear Algebra. The present text is a continuation of that volume, and has the objective of introducing the reader to more advanced properties of vector spaces and linear mappings, and consequently of matrices. For readers who are not familiar with the contents of Basic Linear Algebra we provide an introductory chapter that consists of a compact summary of the prerequisites for the present volume. In order to consolidate the student's understanding we have included a large number of illustrative and worked examples, as well as many exercises that are strategically placed throughout the text. Solutions to the ex...

  4. Rapid reagent-less on-line H2O2 quantification in alkaline semiconductor etching solution, Part 2: Nephelometry application.

    Science.gov (United States)

    Zlatev, Roumen; Stoytcheva, Margarita; Valdez, Benjamin

    2018-03-01

    A simple and rapid reagent-less nephelometric method for on-line H2O2 quantification in semiconductor etching solutions was developed, optimized, characterized and validated. The intensity of the light scattered by the oxygen gas suspension resulting from H2O2 catalytic decomposition by immobilized MnO2 was registered as the analytical response. The influences of the light wavelength, the agitation rate, the temperature and the catalyst surface area on the response amplitude were studied and optimized. The achieved linear concentration range from 10 to 150 mmol L-1 with a calibration curve correlation coefficient of 0.9835, precision from 3.65% to 0.95%, response time from 35 to 20 s, sensitivity of 8.01 µA mmol-1 L and LOD of 2.9 mmol L-1 completely satisfy the semiconductor industry requirements. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Mitoxantrone Loaded Superparamagnetic Nanoparticles for Drug Targeting: A Versatile and Sensitive Method for Quantification of Drug Enrichment in Rabbit Tissues Using HPLC-UV

    Directory of Open Access Journals (Sweden)

    Rainer Tietze

    2010-01-01

    Full Text Available In medicine, superparamagnetic nanoparticles bound to chemotherapeutics are currently investigated for their feasibility in local tumor therapy. After intraarterial application, these particles can be accumulated in the targeted area by an external magnetic field to increase the drug concentration in the region of interest (Magnetic-Drug-Targeting). We here present an analytical method (HPLC-UV) to detect pure or ferrofluid-bound mitoxantrone in a complex matrix even in trace amounts in order to perform biodistribution studies. Mitoxantrone could be extracted in high yields from different tissues. Recovery of mitoxantrone in liver tissue (5000 ng/g) was 76±2%. The limit of quantification of mitoxantrone standard was 10 ng/mL ±12%. Validation criteria such as linearity, precision, and stability were evaluated in ranges achieving the FDA requirements. As shown for pilot samples, biodistribution studies can easily be performed after application of pure or ferrofluid-bound mitoxantrone.

  6. A validated method using RP-HPLC for quantification of reserpine in the Brazilian tree Rauvolfia sellowii Müll. Arg. (Apocynaceae)

    Directory of Open Access Journals (Sweden)

    Leopoldo C. Baratto

    2012-01-01

    Full Text Available This study describes a simple, fast and reproducible method using RP-HPLC-UV, in a gradient system, for quantification of reserpine in Rauvolfia sellowii stem bark. The analyses were carried out on a C18 column; the mobile phase was water and acetonitrile, and separations were carried out in 10 min at a flow rate of 1.0 mL min-1, 25 ºC and 268 nm. The validation data showed that the method was specific, accurate, precise and robust. Results were linear over a range of 0.625-40.0 μg mL-1, and the mean recovery was 95.1%. The amount of reserpine found in the dried stem bark was 0.01% (m/m).

  7. Glassy carbon electrode modified with multi-walled carbon nanotubes sensor for the quantification of antihistamine drug pheniramine in solubilized systems.

    Science.gov (United States)

    Jain, Rajeev; Sharma, Sanjay

    2012-02-01

    A sensitive electroanalytical method for quantification of pheniramine in pharmaceutical formulation has been investigated on the basis of the enhanced electrochemical response at glassy carbon electrode modified with multi-walled carbon nanotubes in the presence of sodium lauryl sulfate. The experimental results suggest that the pheniramine in anionic surfactant solution exhibits electrocatalytic effect resulting in a marked enhancement of the peak current response. Peak current response is linearly dependent on the concentration of pheniramine in the range 200-1500 μg/mL with correlation coefficient 0.9987. The limit of detection is 58.31 μg/mL. The modified electrode shows good sensitivity and repeatability.

  8. Effects of pyruvate dose on in vivo metabolism and quantification of hyperpolarized 13C spectra

    DEFF Research Database (Denmark)

    Janich, M. A.; Menzel, M. I.; Wiesinger, F.

    2012-01-01

    Real-time in vivo measurements of metabolites are performed by signal enhancement of [1-13C]pyruvate using dynamic nuclear polarization, rapid dissolution and intravenous injection, acquisition of free induction decay signals and subsequent quantification of spectra. The commonly injected dose...... uptake and metabolic conversion. The goal of this study was to examine the effects of a [1-13C]pyruvate bolus on metabolic conversion in vivo. Spectra were quantified by three different methods: frequency-domain fitting with LCModel, time-domain fitting with AMARES and simple linear least-squares fitting...... in the time domain. Because the simple linear least-squares approach showed bleeding artifacts and LCModel produced noisier time signals, AMARES performed best in the quantification of in vivo hyperpolarized pyruvate spectra. We examined pyruvate doses of 0.1–0.4 mmol/kg (body mass) in male Wistar rats
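    A minimal sketch of the simple linear least-squares idea mentioned above is given below: the measured signal is modelled as a linear combination of known basis signals and the amplitudes are solved for directly. The basis shapes, frequencies and noise level are invented for illustration; this is not the AMARES or LCModel algorithm.

```python
# Toy illustration of linear least-squares quantification in the time domain:
# the measured FID is modelled as a linear combination of known metabolite
# basis signals (here, two synthetic damped oscillations).
import numpy as np

t = np.linspace(0, 0.5, 2048)                             # s, acquisition time axis
basis_pyr = np.exp(2j*np.pi*0.0*t) * np.exp(-t/0.05)      # "pyruvate"-like basis FID
basis_lac = np.exp(2j*np.pi*390.0*t) * np.exp(-t/0.04)    # "lactate"-like basis FID
B = np.column_stack([basis_pyr, basis_lac])

# Synthetic measurement: known amplitudes plus complex noise
true_amps = np.array([1.0, 0.25])
rng = np.random.default_rng(0)
fid = B @ true_amps + 0.01*(rng.standard_normal(t.size) + 1j*rng.standard_normal(t.size))

# Linear least-squares estimate of the metabolite amplitudes
amps, *_ = np.linalg.lstsq(B, fid, rcond=None)
print(np.abs(amps))   # should be close to [1.0, 0.25]
```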

  9. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In the numerical section we compare five methods (quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions, and point collocation polynomial chaos) in their efficiency in estimating statistics of aerodynamic performance upon random perturbation of the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
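    As a toy illustration of propagating input uncertainties to an output statistic, the sketch below runs plain Monte Carlo through a made-up surrogate for the lift coefficient; the surrogate function and input distributions are assumptions and stand in for the actual CFD solver.

```python
# Minimal Monte Carlo propagation sketch for a *toy* surrogate of an
# aerodynamic quantity of interest (not the TAU solver): lift coefficient as
# a made-up function of uncertain angle of attack and Mach number.
import numpy as np

def toy_lift_coefficient(alpha_deg, mach):
    # Purely illustrative surrogate model (assumption, not a real CFD result)
    return 0.11 * alpha_deg * (1.0 + 0.3 * mach**2)

rng = np.random.default_rng(42)
n = 10_000
alpha = rng.normal(2.0, 0.3, n)      # uncertain angle of attack, degrees
mach  = rng.normal(0.73, 0.02, n)    # uncertain Mach number

cl = toy_lift_coefficient(alpha, mach)
print(f"mean CL = {cl.mean():.4f}, std CL = {cl.std(ddof=1):.4f}")
print(f"95% interval ~ [{np.percentile(cl, 2.5):.4f}, {np.percentile(cl, 97.5):.4f}]")
```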

  10. Stochastic approach for radionuclides quantification

    Science.gov (United States)

    Clement, A.; Saurel, N.; Perrin, G.

    2018-01-01

    Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard to quantify the activity of nuclear materials by determining the calibration coefficient are useless on non-reproducible, complex and one-of-a-kind nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and involve a high variability of objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition. These data are density, material, screen, geometric shape, matrix composition, and matrix and source distribution. Some of them depend strongly on package data knowledge and operator background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment and without knowledge of the internal package configuration. This method combines a global stochastic approach which uses, among others, surrogate models to simulate the gamma attenuation behaviour, a Bayesian approach which considers conditional probability densities of the problem inputs, and Markov chain Monte Carlo (MCMC) algorithms which solve the inverse problem, together with the gamma-ray emission spectrum of the radionuclides and the outside dimensions of the objects of interest. The methodology is being tested to quantify actinide activity in standards with different matrices, compositions, and source configurations in terms of actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
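    To make the MCMC ingredient concrete, the sketch below runs a basic Metropolis-Hastings chain for a one-parameter toy inverse problem (activity from an attenuated count rate). The surrogate model, prior and measurement values are hypothetical and only illustrate the Bayesian adjustment idea, not the CEA methodology itself.

```python
# Conceptual Metropolis-Hastings sketch for a one-parameter inverse problem:
# infer a source activity from a measured count rate, given a hypothetical
# surrogate attenuation model and a log-normal prior.
import numpy as np

measured_rate = 120.0   # counts/s (hypothetical measurement)
sigma_meas = 10.0       # measurement uncertainty, counts/s

def surrogate_rate(activity):
    # Hypothetical surrogate model: detection efficiency x attenuation factor
    return 0.002 * activity * np.exp(-0.5)

def log_posterior(activity):
    if activity <= 0:
        return -np.inf
    log_like = -0.5 * ((measured_rate - surrogate_rate(activity)) / sigma_meas) ** 2
    # log-normal prior on the activity (up to an additive constant)
    log_prior = -0.5 * ((np.log(activity) - np.log(1e5)) / 1.0) ** 2 - np.log(activity)
    return log_like + log_prior

rng = np.random.default_rng(1)
current = 1e5
samples = []
for _ in range(20_000):
    proposal = current + 5e3 * rng.standard_normal()   # symmetric random-walk proposal
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)

post = np.array(samples[5_000:])   # discard burn-in
print(f"posterior mean activity ~ {post.mean():.3e} (arbitrary units)")
```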

  11. Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging

    Science.gov (United States)

    Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.

    2008-03-01

    We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient among the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.
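    A much-simplified sketch of a per-voxel MAP estimate is shown below: a Gaussian likelihood around a hypothetical nonlinear forward model of the ratio, combined with a Beta prior and maximized numerically. The actual paper uses a Rician/Gaussian mixture with explicit partial-volume terms; everything in the snippet is an assumption made for illustration.

```python
# Simplified MAP sketch for a per-voxel fat fraction f in (0, 1).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta

def forward_ratio(f):
    # Hypothetical nonlinear mapping from fat fraction to observed ratio
    return f / (f + 0.6 * (1.0 - f))

def neg_log_posterior(f, observed_ratio, sigma=0.05, a=1.2, b=1.2):
    if not 0.0 < f < 1.0:
        return np.inf
    log_like = -0.5 * ((observed_ratio - forward_ratio(f)) / sigma) ** 2
    log_prior = beta.logpdf(f, a, b)
    return -(log_like + log_prior)

observed = 0.42   # ratio value of one voxel (made up)
res = minimize_scalar(neg_log_posterior, bounds=(1e-6, 1 - 1e-6),
                      args=(observed,), method="bounded")
print(f"MAP fat fraction ~ {res.x:.3f}")
```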

  12. Quantification of trace metals in infant formula premixes using laser-induced breakdown spectroscopy

    Science.gov (United States)

    Cama-Moncunill, Raquel; Casado-Gavalda, Maria P.; Cama-Moncunill, Xavier; Markiewicz-Keszycka, Maria; Dixit, Yash; Cullen, Patrick J.; Sullivan, Carl

    2017-09-01

    Infant formula is a human milk substitute generally based upon fortified cow milk components. In order to mimic the composition of breast milk, trace elements such as copper, iron and zinc are usually added in a single operation using a premix. The correct addition of premixes must be verified to ensure that the target levels in infant formulae are achieved. In this study, a laser-induced breakdown spectroscopy (LIBS) system was assessed as a fast validation tool for trace element premixes. LIBS is a promising emission spectroscopic technique for elemental analysis, which offers real-time analyses, little to no sample preparation and ease of use. LIBS was employed for copper and iron determinations of premix samples ranging approximately from 0 to 120 mg/kg Cu and 0 to 1640 mg/kg Fe. LIBS spectra are affected by several parameters, hindering subsequent quantitative analyses. This work aimed at testing three matrix-matched calibration approaches (simple linear regression, multi-linear regression and partial least squares regression (PLS)) as means for precision and accuracy enhancement of LIBS quantitative analysis. All calibration models were first developed using a training set and then validated with an independent test set. PLS yielded the best results. For instance, the PLS model for copper provided a coefficient of determination (R²) of 0.995 and a root mean square error of prediction (RMSEP) of 14 mg/kg. Furthermore, LIBS was employed to penetrate through the samples by repetitively measuring the same spot. Consequently, LIBS spectra can be obtained as a function of sample layers. This information was used to explore whether measuring deeper into the sample could reduce possible surface-contaminant effects and provide better quantification.
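    The PLS calibration step can be sketched as follows with scikit-learn; the synthetic "spectra", concentration range and number of latent variables are assumptions and do not reproduce the LIBS data.

```python
# Sketch of a matrix-matched PLS calibration, using synthetic "spectra"
# instead of real LIBS data (assumed shapes only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_channels = 60, 500
concentrations = rng.uniform(0, 120, n_samples)   # mg/kg Cu (hypothetical)
peak = np.exp(-0.5 * ((np.arange(n_channels) - 250) / 5.0) ** 2)
spectra = concentrations[:, None] * peak[None, :] + rng.normal(0, 2.0, (n_samples, n_channels))

X_train, X_test, y_train, y_test = train_test_split(spectra, concentrations,
                                                    test_size=0.3, random_state=1)
pls = PLSRegression(n_components=3).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

print(f"R2 = {r2_score(y_test, y_pred):.3f}, "
      f"RMSEP = {np.sqrt(mean_squared_error(y_test, y_pred)):.1f} mg/kg")
```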

  13. Improved Method for PD-Quantification in Power Cables

    DEFF Research Database (Denmark)

    Holbøll, Joachim T.; Villefrance, Rasmus; Henriksen, Mogens

    1999-01-01

    In this paper, a method is described for improved quantification of partial discharges (PD) in power cables. The method is suitable for PD detection and location systems in the MHz range, where pulse attenuation and distortion along the cable cannot be neglected. The system transfer function...... was calculated and measured in order to form the basis for magnitude calculation after each measurement. --- Limitations and capabilities of the method will be discussed and related to relevant field applications of high-frequency PD measurements. --- Methods for increased signal/noise ratio are easily implemented...

  14. Quantification of protein concentration using UV absorbance and Coomassie dyes.

    Science.gov (United States)

    Noble, James E

    2014-01-01

    The measurement of a solubilized protein concentration in solution is an important assay in biochemistry research and development labs for applications ranging from enzymatic studies to providing data for biopharmaceutical lot release. Spectrophotometric protein quantification assays are methods that use UV and visible spectroscopy to rapidly determine the concentration of protein, relative to a standard, or using an assigned extinction coefficient. Where multiple samples need measurement, and/or the sample volume and concentration is limited, preparations of the Coomassie dye commonly known as the Bradford assay can be used. © 2014 Elsevier Inc. All rights reserved.
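    For the direct UV-absorbance route, quantification reduces to the Beer-Lambert relation c = A / (epsilon * l); the helper below shows the back-calculation with a hypothetical extinction coefficient and dilution factor.

```python
# Beer-Lambert back-calculation of protein concentration from UV absorbance.
# The extinction coefficient here is a hypothetical value for illustration;
# in practice it is protein-specific (assigned or computed from the sequence).
def protein_concentration_mg_ml(a280, extinction_ml_mg_cm=1.0, path_cm=1.0, dilution=1.0):
    """Concentration in mg/mL from absorbance at 280 nm."""
    return dilution * a280 / (extinction_ml_mg_cm * path_cm)

print(protein_concentration_mg_ml(a280=0.56, extinction_ml_mg_cm=0.95, dilution=10))
# ~5.9 mg/mL for a sample diluted 1:10 before measurement
```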

  15. Lung involvement quantification in chest radiographs

    International Nuclear Information System (INIS)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A.; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M.

    2014-01-01

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease which remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Methods for quantification of chest abnormalities are usually applied to computed tomography (CT) scans. This quantification is important to assess TB evolution and treatment and to compare different treatments. However, precise quantification is not feasible given the number of CT scans required. The purpose of this work is to develop a methodology for quantification of lung damage caused by TB through chest radiographs. An algorithm for computational processing of the exams was developed in Matlab, which creates a 3D representation of the lungs, with the compromised dilated regions inside. The quantification of lung lesions was also made for the same patients through CT scans. The measurements from the two methods were compared, resulting in strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the method developed, providing a better risk-benefit ratio for the patient and cost-benefit ratio for the institution. (author)
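    The Bland-Altman agreement analysis mentioned above amounts to computing the mean difference (bias) and the limits of agreement, bias plus or minus 1.96 times the standard deviation of the paired differences; the sketch below does this for invented paired lesion volumes.

```python
# Bland-Altman agreement sketch between two quantification methods
# (hypothetical paired lesion volumes from radiographs and CT, in mL).
import numpy as np

radiograph = np.array([110.0, 85.0, 240.0, 60.0, 150.0, 95.0])
ct         = np.array([118.0, 80.0, 255.0, 66.0, 140.0, 101.0])

diff = radiograph - ct
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd   # 95% limits of agreement

print(f"bias = {bias:.1f} mL, limits of agreement = [{loa_low:.1f}, {loa_high:.1f}] mL")
```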

  16. Linear mass reflectron

    International Nuclear Information System (INIS)

    Mamyrin, B.A.; Shmikk, D.V.

    1979-01-01

    A description and the operating principle of a linear mass reflectron with a V-form ion trajectory - a new non-magnetic time-of-flight mass spectrometer with high resolution - are presented. The ion-optical system of the device consists of an ion source with electron-impact ionization, accelerating gaps, reflector gaps, a drift space and an ion detector. Ions move in the linear mass reflectron along trajectories parallel to the axis of the analyzer chamber. The results of investigations into the experimental device are given. With an ion drift length of 0.6 m the device resolution is 1200 with respect to the peak width at half-height. Small-sized mass spectrometric transducers with high resolution and sensitivity may be designed on the basis of the linear mass reflectron principle

  17. Applied linear algebra

    CERN Document Server

    Olver, Peter J

    2018-01-01

    This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...

  18. Theory of linear operations

    CERN Document Server

    Banach, S

    1987-01-01

    This classic work by the late Stefan Banach has been translated into English so as to reach a yet wider audience. It contains the basics of the algebra of operators, concentrating on the study of linear operators, which corresponds to that of the linear forms a1x1 + a2x2 + ... + anxn of algebra. The book gathers results concerning linear operators defined in general spaces of a certain kind, principally in Banach spaces, examples of which are: the space of continuous functions, that of the pth-power-summable functions, Hilbert space, etc. The general theorems are interpreted in various mathematical areas, such as group theory, differential equations, integral equations, equations with infinitely many unknowns, functions of a real variable, summation methods and orthogonal series. A new fifty-page section ("Some Aspects of the Present Theory of Banach Spaces") complements this important monograph.

  19. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria ... the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  20. Linear programming using Matlab

    CERN Document Server

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book  are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus.  The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...

  1. Linear Colliders TESLA

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    The aim of the TESLA (TeV Superconducting Linear Accelerator) collaboration (at present 19 institutions from seven countries) is to establish the technology for a high energy electron-positron linear collider using superconducting radiofrequency cavities to accelerate its beams. Another basic goal is to demonstrate that such a collider can meet its performance goals in a cost effective manner. For this the TESLA collaboration is preparing a 500 MeV superconducting linear test accelerator at the DESY Laboratory in Hamburg. This TTF (TESLA Test Facility) consists of four cryomodules, each approximately 12 m long and containing eight 9-cell solid niobium cavities operating at a frequency of 1.3 GHz

  2. Experimental Approaches at Linear Colliders

    International Nuclear Information System (INIS)

    Jaros, John A

    2002-01-01

    Precision measurements have played a vital role in our understanding of elementary particle physics. Experiments performed using e + e - collisions have contributed an essential part. Recently, the precision measurements at LEP and SLC have probed the standard model at the quantum level and severely constrained the mass of the Higgs boson [1]. Coupled with the limits on the Higgs mass from direct searches [2], this enables the mass to be constrained to be in the range 115-205 GeV. Developments in accelerator R and D have matured to the point where one could contemplate construction of a linear collider with initial energy in the 500 GeV range and a credible upgrade path to ∼ 1 TeV. Now is therefore the correct time to critically evaluate the case for such a facility

  3. Quantification of in vivo oxidative damage in Caenorhabditis elegans during aging by endogenous F3-isoprostane measurement

    NARCIS (Netherlands)

    Labuschagne, C.F.; Stigter, E.C.; Hendriks, M.M.; Berger, R.; Rokach, J.; Korswagen, H.C.; Brenkman, A.B.

    2013-01-01

    Oxidative damage is thought to be a major cause in development of pathologies and aging. However, quantification of oxidative damage is methodologically difficult. Here, we present a robust liquid chromatography-tandem mass spectrometry (LC-MS/MS) approach for accurate, sensitive, and linear in vivo

  4. Perspectives on large Linear Colliders

    International Nuclear Information System (INIS)

    Richter, B.

    1987-01-01

    The accelerator community now generally agrees that the Linear Collider is the most cost-effective technology for reaching much higher energies in the center-of-mass than can be attained in the largest of the e + e - storage rings, LEP. Indeed, even as the first linear collider, the SLC at SLAC, is getting ready to begin operations, groups at SLAC, Novosibirsk, CERN and KEK are doing R and D and conceptual design studies on a next generation machine in the 1 TeV energy region. In this perspectives talk I do not want to restrict my comments to any particular design, and so I will talk about a high-energy machine as the NLC, which is shorthand for the Next Linear Collider, and taken to mean a machine with a center-of-mass energy someplace in the 0.5 to 2 TeV energy range with sufficient luminosity to carry out a meaningful experimental program. I want to discuss three main items with you. The first is the interrelation of energy and luminosity requirements. These two items impose severe constraints on the accelerator builder. Next, I will give an introduction to linear collider design, concentrating on what goes on at the collision point, for still another constraint comes here from the beam-beam interaction which further restricts the choices available to the accelerator builder. Then, I want to give my impressions of the state of the technology available for building these kinds of machines within the next decade

  5. Linear positivity and virtual probability

    International Nuclear Information System (INIS)

    Hartle, James B.

    2004-01-01

    We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics

  6. Linearly Adjustable International Portfolios

    Science.gov (United States)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  7. Linearly Adjustable International Portfolios

    International Nuclear Information System (INIS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-01-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  8. Linear induction motor

    International Nuclear Information System (INIS)

    Barkman, W.E.; Adams, W.Q.; Berrier, B.R.

    1978-01-01

    A linear induction motor has been operated on a test bed with a feedback pulse resolution of 5 nm (0.2 μin). Slewing tests with this slide drive have shown positioning errors less than or equal to 33 nm (1.3 μin) at feedrates between 0 and 25.4 mm/min (0-1 ipm). A 0.86-m (34-in)-stroke linear motor is being investigated, using the SPACO machine as a test bed. Initial results were encouraging, and work is continuing to optimize the servosystem compensation

  9. Handbook of linear algebra

    CERN Document Server

    Hogben, Leslie

    2013-01-01

    With a substantial amount of new material, the Handbook of Linear Algebra, Second Edition provides comprehensive coverage of linear algebra concepts, applications, and computational software packages in an easy-to-use format. It guides you from the very elementary aspects of the subject to the frontiers of current research. Along with revisions and updates throughout, the second edition of this bestseller includes 20 new chapters.New to the Second EditionSeparate chapters on Schur complements, additional types of canonical forms, tensors, matrix polynomials, matrix equations, special types of

  10. Linear Algebra Thoroughly Explained

    CERN Document Server

    Vujičić, Milan

    2008-01-01

    Linear Algebra Thoroughly Explained provides a comprehensive introduction to the subject suitable for adoption as a self-contained text for courses at undergraduate and postgraduate level. The clear and comprehensive presentation of the basic theory is illustrated throughout with an abundance of worked examples. The book is written for teachers and students of linear algebra at all levels and across mathematics and the applied sciences, particularly physics and engineering. It will also be an invaluable addition to research libraries as a comprehensive resource book for the subject.

  11. Trace analysis of pesticides in paddy field water by direct injection using liquid chromatography-quadrupole-linear ion trap-mass spectrometry.

    Science.gov (United States)

    Pareja, Lucía; Martínez-Bueno, M J; Cesio, Verónica; Heinzen, Horacio; Fernández-Alba, A R

    2011-07-29

    A multiresidue method was developed for the quantification and confirmation of 70 pesticides in paddy field water. After its filtration, water was injected directly in a liquid chromatograph coupled to a hybrid triple quadrupole-linear ion trap-mass spectrometer (QqLIT). The list of target analytes included organophosphates, phenylureas, sulfonylureas, carbamates, conazoles, imidazolinones and others compounds widely used in different countries where rice is cropped. Detection and quantification limits achieved were in the range from 0.4 to 80 ng L(-1) and from 2 to 150 ng L(-1), respectively. Correlation coefficients for the calibration curves in the range 0.1-50 μg L(-1) were higher than 0.99 except for diazinon (0.1-25 μg L(-1)). Only 9 pesticides presented more than 20% of signal suppression/enhancement, no matrix effect was observed in the studied conditions for the rest of the target pesticides. The method developed was used to investigate the occurrence of pesticides in 59 water samples collected in paddy fields located in Spain and Uruguay. The study shows the presence of bensulfuron methyl, tricyclazole, carbendazim, imidacloprid, tebuconazole and quinclorac in a concentration range from 0.08 to 7.20 μg L(-1). Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Linearization of the titration curves for determination of buffering capacity of feed fiber in a wide pH range - DOI: 10.4025/actascianimsci.v32i1.5384

    Directory of Open Access Journals (Sweden)

    João Ricardo Dittrich

    2009-12-01

    as EXP (1 pH-1, and the accumulated HCl in each interval. The transformation presented satisfactory fitting to linear regression for all materials evaluated, with r² higher than 0.85 and residual standard error lower than 0.059. A positive correlation between TA and LB evaluated in the different pH ranges was obtained. Unlike TA, the LB method makes it possible to obtain the BC as a single linear ratio, valuable for every pH interval from 8.0 to 2.0.

  13. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their highdimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  14. America, Linearly Cyclical

    Science.gov (United States)

    2013-05-10

    C2C Jessica Adams; Dr. Brissett. ...his desires, his failings, and his aspirations follow the same general trend throughout history and throughout cultures. The founding fathers sought

  15. Stanford's linear collider

    International Nuclear Information System (INIS)

    Southworth, B.

    1985-01-01

    The peak of the construction phase of the Stanford Linear Collider, SLC, to achieve 50 GeV electron-positron collisions has now been passed. The work remains on schedule to attempt colliding beams, initially at comparatively low luminosity, early in 1987. (orig./HSI).

  16. Dosimetry of linear sources

    International Nuclear Information System (INIS)

    Mafra Neto, F.

    1992-01-01

    The dose of gamma radiation from a linear cesium-137 source is obtained, which presents two difficulties: oblique filtration of the radiation as it crosses the platinum wall in different directions, and dose correction due to scattering by the propagation medium. (C.G.C.)

  17. Resistors Improve Ramp Linearity

    Science.gov (United States)

    Kleinberg, L. L.

    1982-01-01

    Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.

  18. LINEAR COLLIDERS: 1992 workshop

    International Nuclear Information System (INIS)

    Settles, Ron; Coignet, Guy

    1992-01-01

    As work on designs for future electron-positron linear colliders pushes ahead at major Laboratories throughout the world in a major international collaboration framework, the LC92 workshop held in Garmisch Partenkirchen this summer, attended by 200 machine and particle physicists, provided a timely focus

  19. Linear genetic programming

    CERN Document Server

    Brameier, Markus

    2007-01-01

    Presents a variant of Genetic Programming that evolves imperative computer programs as linear sequences of instructions, in contrast to the more traditional functional expressions or syntax trees. This book serves as a reference for researchers, but also contains sufficient introduction for students and those who are new to the field

  20. On Solving Linear Recurrences

    Science.gov (United States)

    Dobbs, David E.

    2013-01-01

    A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
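    For context, the standard closed form of a first-order linear recurrence x_{n+1} = a x_n + b with constant coefficients and a not equal to 1 (a textbook identity, not necessarily the derivation used in the note) is:

```latex
x_n = a^n x_0 + b\,\frac{1 - a^n}{1 - a}
    = a^n\left(x_0 - \frac{b}{1-a}\right) + \frac{b}{1-a},
\qquad
\lim_{n \to \infty} x_n = \frac{b}{1-a} \quad \text{if } |a| < 1.
```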

  1. Review of linear colliders

    International Nuclear Information System (INIS)

    Takeda, Seishi

    1992-01-01

    The status of R and D of future e + e - linear colliders proposed by the institutions throughout the world is described including the JLC, NLC, VLEPP, CLIC, DESY/THD and TESLA projects. The parameters and RF sources are discussed. (G.P.) 36 refs.; 1 tab

  2. Further properties of linear prediction sufficiency and the BLUPs in the linear model with new observations

    African Journals Online (AJOL)

    properties of linear prediction sufficiency and the BLUPs in the linear model with new observations. ...... a wide range of applications, for example, plant variety trials, animal breeding, selection ..... Linear Algebra Appl., 430, 2622–2641. DOI.

  3. Development and validation of a high-performance liquid chromatography method for the quantification of talazoparib in rat plasma: Application to plasma protein binding studies.

    Science.gov (United States)

    Hidau, Mahendra Kumar; Kolluru, Srikanth; Palakurthi, Srinath

    2018-02-01

    A sensitive and selective RP-HPLC method has been developed and validated for the quantification of a highly potent poly ADP ribose polymerase inhibitor, talazoparib (TZP), in rat plasma. Chromatographic separation was performed with an isocratic elution method. Absorbance of TZP was measured with a UV detector (SPD-20A UV-vis) at a λmax of 227 nm. Protein precipitation was used to extract the drug from plasma samples using methanol-acetonitrile (65:35) as the precipitating solvent. The method proved to be sensitive and reproducible over a 100-2000 ng/mL linearity range with a lower limit of quantification (LLOQ) of 100 ng/mL. TZP recovery was found to be >85%. Following analytical method development and validation, the method was successfully employed to determine the plasma protein binding of TZP. TZP has a high level of protein binding in rat plasma (95.76 ± 0.38%) as determined by the dialysis method. Copyright © 2017 John Wiley & Sons, Ltd.
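    The protein-binding figure follows from the dialysis measurements as bound fraction = 1 - C_free/C_total; the snippet below shows the arithmetic with illustrative concentrations (not the study's values).

```python
# Percent plasma protein binding from an equilibrium-dialysis experiment:
# bound fraction = 1 - C_free / C_total. Concentrations below are hypothetical
# values within a 100-2000 ng/mL calibration, used only for illustration.
def percent_protein_binding(c_free_ng_ml, c_total_ng_ml):
    return 100.0 * (1.0 - c_free_ng_ml / c_total_ng_ml)

print(f"{percent_protein_binding(c_free_ng_ml=120.0, c_total_ng_ml=2000.0):.1f}% bound")
# -> 94.0% bound for these illustrative numbers
```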

  4. Simultaneous identification and quantification of bisphenol A and 12 bisphenol analogues in environmental samples using precolumn derivatization and ultra high performance liquid chromatography with tandem mass spectrometry.

    Science.gov (United States)

    Wang, Zhonghe; Yu, Jing; Yao, Jiaxi; Wu, Linlin; Xiao, Hang; Wang, Jun; Gao, Rong

    2018-02-10

    A method for the identification and quantification of bisphenol A and 12 bisphenol analogues in river water and sediment samples combining liquid-liquid extraction, precolumn derivatization, and ultra high-performance liquid chromatography coupled with tandem mass spectrometry was developed and validated. Analytes were extracted from the river water sample using a liquid-liquid extraction method. Dansyl chloride was selected as a derivatization reagent. Derivatization reaction conditions affecting production of the dansyl derivatives were tested and optimized. All the derivatized target compounds were well separated and eluted in 10 min. Dansyl chloride labeled compounds were analyzed using a high-resolution mass spectrometer with electrospray ionization in the positive mode, and the results were confirmed and quantified in the parallel reaction monitoring mode. The method validation results showed a satisfactory level of sensitivity. Linearity was assessed using matrix-matched standard calibration, and good correlation coefficients were obtained. The limits of quantification for the analytes ranged from 0.005 to 0.02 ng/mL in river water and from 0.15 to 0.80 ng/g in sediment. Good reproducibility of the method in terms of intra- and interday precision was achieved, yielding relative standard deviations of less than 10.1 and 11.6%, respectively. Finally, this method was successfully applied to the analysis of real samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Magnetic immunoassay coupled with inductively coupled plasma mass spectrometry for simultaneous quantification of alpha-fetoprotein and carcinoembryonic antigen in human serum

    Science.gov (United States)

    Zhang, Xing; Chen, Beibei; He, Man; Zhang, Yiwen; Xiao, Guangyang; Hu, Bin

    2015-04-01

    The absolute quantification of glycoproteins in complex biological samples is a challenge and of great significance. Herein, 4-mercaptophenylboronic acid functionalized magnetic beads were prepared to selectively capture glycoproteins, while antibody conjugated gold and silver nanoparticles were synthesized as element tags to label two different glycoproteins. Based on that, a new approach of magnetic immunoassay-inductively coupled plasma mass spectrometry (ICP-MS) was established for simultaneous quantitative analysis of glycoproteins. Taking the biomarkers alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA) as two model glycoproteins, experimental parameters involved in the immunoassay procedure were carefully optimized and the analytical performance of the proposed method was evaluated. The limits of detection (LODs) for AFP and CEA were 0.086 μg L-1 and 0.054 μg L-1 with relative standard deviations (RSDs, n = 7, c = 5 μg L-1) of 6.5% and 6.2% for AFP and CEA, respectively. The linear range for both AFP and CEA was 0.2-50 μg L-1. To validate the applicability of the proposed method, human serum samples were analyzed, and the obtained results were in good agreement with those obtained by the clinical chemiluminescence immunoassay. The developed method exhibited good selectivity and sensitivity for the simultaneous determination of AFP and CEA, and extended the applicability of metal nanoparticle tags based on ICP-MS methodology in multiple glycoprotein quantifications.

  6. Partially hydrolyzed guar gum characterization and sensitive quantification in food matrices by high performance anion exchange chromatography with pulsed amperometric detection--validation using accuracy profile.

    Science.gov (United States)

    Mercier, G; Campargue, C

    2012-11-02

    Interest in functional ingredients, and especially dietary fibres, has been growing in recent years. At the same time, the variety of ingredients accepted as dietary fibres and their mixing at low levels in complex matrices have considerably complicated their quantitative analysis by approved AOAC methods. These reasons have led to the development of an innovative analytical method based on high-performance anion-exchange chromatography (HPAEC) with pulsed amperometric detection (PAD) to detect and quantify partially hydrolyzed guar gum (PHGG) in fruit preparations and dairy matrices. The analytical methodology was divided into two steps that can be deployed separately or in conjunction. The first consists of a complete characterization of PHGG by size exclusion chromatography (SEC) with multi-angle light scattering and refractive index detection, together with HPAEC-PAD, to determine its physico-chemical properties and galactomannan content; the second step is the development of a new HPAEC-PAD method for direct quantification of PHGG in complex matrices (dairy products). Validation in terms of detection and quantification limits, linearity of the analytical range, average accuracy (recovery, trueness) and average uncertainty was statistically carried out with an accuracy profile. Overall, this new chromatographic method has considerably improved the ability to quantify, without fractionation treatment, low levels of dietary fibres derived from specific galactomannans in complex matrices and many foodstuffs. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Ultrapressure liquid chromatography-tandem mass spectrometry assay using atmospheric pressure photoionization (UPLC-APPI-MS/MS) for quantification of 4-methoxydiphenylmethane in pharmacokinetic evaluation.

    Science.gov (United States)

    Farhan, Nashid; Fitzpatrick, Sean; Shim, Yun M; Paige, Mikell; Chow, Diana Shu-Lian

    2016-09-05

    4-Methoxydiphenylmethane (4-MDM), a selective augmenter of Leukotriene A4 Hydrolase (LTA4H), is a new anti-inflammatory compound for potential treatment of chronic obstructive pulmonary disease (COPD). Currently, there is no liquid chromatography tandem mass spectrometric (LC-MS/MS) method for the quantification of 4-MDM. A major barrier to developing an LC-MS/MS method is the inability of electrospray ionization (ESI) and atmospheric pressure chemical ionization (APCI) to ionize 4-MDM, owing to its hydrophobicity and lack of any functional group for ionization. With the advent of the atmospheric pressure photoionization (APPI) technique, many hydrophobic compounds have been shown to ionize by charge transfer reactions. In this study, a highly sensitive ultrapressure liquid chromatography tandem mass spectrometry assay using atmospheric pressure photoionization (UPLC-APPI-MS/MS) for the quantification of 4-MDM in rat plasma was developed and validated. 4-MDM was extracted from plasma by solid phase extraction (SPE) and separated chromatographically using a reversed-phase C8 column. Photoionization (PI) was achieved by introducing anisole as a dopant to promote the charge transfer reaction. The assay, with a linear range of 5 (LLOQ) to 400 ng mL⁻¹, met the regulatory requirements for accuracy, precision and stability. The validated assay was employed to quantify the plasma concentrations of 4-MDM after oral dosing in Sprague Dawley (SD) rats. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Highly sensitive and simple liquid chromatography assay with ion-pairing extraction and visible detection for quantification of gold from nanoparticles.

    Science.gov (United States)

    Pallotta, Arnaud; Philippe, Valentin; Boudier, Ariane; Leroy, Pierre; Clarot, Igor

    2018-03-01

    A simple isocratic HPLC method using visible detection was developed and validated for the quantification of gold in nanoparticles (AuNP). After a first step of oxidation of the nanoparticles, an ion pair between the tetrachloroaurate anion and the cationic dye Rhodamine B was formed and extracted from the aqueous medium with an organic solvent. The corresponding Rhodamine B was then quantified by reversed-phase liquid chromatography using a Nucleosil C18 (150 mm × 4.6 mm, 3 µm) column with a mobile phase containing acetonitrile and 0.1% trifluoroacetic acid aqueous solution (25/75, V/V) at 1.0 mL min⁻¹ and detection at a wavelength of 555 nm. The method was validated using the methodology described by the International Conference on Harmonization and was shown to be specific, precise (RSD < 11%), accurate and linear in the range of 0.1-30.0 µM with a lower limit of quantification (LLOQ) of 0.1 µM. The method was first applied to AuNP quality control after synthesis. It was then used to verify the absence of gold leakage (either as AuNP or in gold salt form) from nanostructured multilayered polyelectrolyte films under shear stress. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Identification and quantification of 56 targeted phenols in wines, spirits, and vinegars by online solid-phase extraction - ultrahigh-performance liquid chromatography - quadrupole-orbitrap mass spectrometry.

    Science.gov (United States)

    Barnaba, C; Dellacassa, E; Nicolini, G; Nardin, T; Malacarne, M; Larcher, R

    2015-12-04

    Phenolic compounds strongly affect the sensory and nutritional qualities of food products, both through the positive contribution of wood transfer in barrel-aged products and as off-flavours. A new targeted analytical approach, combining on-line solid-phase extraction (SPE) clean-up to reduce matrix interference with rapid chromatographic separation by ultrahigh-performance liquid chromatography coupled to quadrupole/high-resolution mass spectrometry (Q-Orbitrap), was developed for the quantification of 56 simple phenols. Taking advantage of on-line SPE and a resolving power of 140,000, the proposed method was applied to define the phenolic content of red (N=8) and white (8) wines, spirits (8), and common (8) and balsamic (8) vinegars. The final method was linear from the limits of quantification (0.0001-0.001 μg mL⁻¹) up to 10 μg mL⁻¹ with R² of at least 0.99. Recovery, used to define method accuracy, ranged from 80 to 120% for 89% of the compounds. The method met the analytical requirements of the tested matrices, being able to analyse 46 phenols in red wines, 41 in white wines and spirits, 42 in common vinegars and 44 in balsamic vinegars. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Quantification of Coffea arabica and Coffea canephora var. robusta concentration in blends by means of synchronous fluorescence and UV-Vis spectroscopies.

    Science.gov (United States)

    Dankowska, A; Domagała, A; Kowalewski, W

    2017-09-01

    The potential of fluorescence and UV-Vis spectroscopies, as well as low- and mid-level data fusion of the two, for quantifying the concentrations of roasted Coffea arabica and Coffea canephora var. robusta in coffee blends was investigated. Principal component analysis (PCA) was used to reduce data multidimensionality. To calculate the level of undeclared addition, principal component scores were fed into multiple linear regression (PCA-MLR) models, giving a lowest root mean square error of calibration (RMSEC) of 3.6% and a root mean square error of cross-validation (RMSECV) of 7.9%. Linear discriminant analysis (LDA) was applied to fluorescence intensities and UV spectra of Coffea arabica and Coffea canephora samples and their mixtures in order to examine classification ability. The best performance of PCA-LDA was observed for data fusion of UV and fluorescence intensity measurements at a wavelength interval of 60 nm. LDA showed that low-level data fusion can achieve over 96% correct classifications (sensitivity) in the test set and 100% in the training set. The corresponding results for the individual spectroscopies ranged from 90% (UV-Vis) to 77% (synchronous fluorescence) in the test set, and from 93% to 97% in the training set. The results demonstrate that fluorescence, UV, and visible spectroscopies complement each other for the quantification of roasted Coffea arabica and Coffea canephora var. robusta concentrations in blends. Copyright © 2017 Elsevier B.V. All rights reserved.
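
    As a rough sketch of the chemometric step described above (not the authors' exact pipeline), the snippet below compresses simulated spectra with PCA and regresses the retained scores against a known robusta fraction; all spectra, component profiles and blend fractions are invented placeholders.

```python
# Sketch of a PCA-MLR calibration for blend composition, with simulated spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 40, 300
robusta_fraction = rng.uniform(0, 1, n_samples)          # known blend compositions (0-1)

# Toy "fused" spectra: two overlapping component profiles plus noise.
wl = np.linspace(0, 1, n_wavelengths)
arabica_profile = np.exp(-((wl - 0.3) ** 2) / 0.01)
robusta_profile = np.exp(-((wl - 0.6) ** 2) / 0.02)
X = (np.outer(1 - robusta_fraction, arabica_profile)
     + np.outer(robusta_fraction, robusta_profile)
     + rng.normal(0, 0.01, (n_samples, n_wavelengths)))

scores = PCA(n_components=3).fit_transform(X)            # dimensionality reduction
model = LinearRegression().fit(scores, robusta_fraction) # MLR on the PCA scores

rmsec = float(np.sqrt(np.mean((model.predict(scores) - robusta_fraction) ** 2)))
print(f"RMSEC = {100 * rmsec:.1f}% robusta")
```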

  11. Stable isotope dilution HILIC-MS/MS method for accurate quantification of glutamic acid, glutamine, pyroglutamic acid, GABA and theanine in mouse brain tissues.

    Science.gov (United States)

    Inoue, Koichi; Miyazaki, Yasuto; Unno, Keiko; Min, Jun Zhe; Todoroki, Kenichiro; Toyo'oka, Toshimasa

    2016-01-01

    In this study, we developed a stable isotope dilution hydrophilic interaction liquid chromatography with tandem mass spectrometry (HILIC-MS/MS) technique for the accurate, reasonable and simultaneous quantification of glutamic acid (Glu), glutamine (Gln), pyroglutamic acid (pGlu), γ-aminobutyric acid (GABA) and theanine in mouse brain tissues. Quantification of these analytes was accomplished using stable isotope internal standards and the HILIC separation mode to fully correct for intramolecular cyclization during electrospray ionization. Linear calibrations were obtained with high coefficients of correlation (r² > 0.999, range from 10 pmol/mL to 50 mol/mL). As an application to theanine intake, Glu, Gln, pGlu, GABA and theanine were determined in hippocampus and central cortex tissues using the developed method. In the hippocampus, the concentration levels of Glu and pGlu were significantly reduced with theanine intake, whereas the concentration level of GABA increased. This result indicates that theanine transported into the brain affects the metabolic balance of Glu analogues in the hippocampus. Copyright © 2015 John Wiley & Sons, Ltd.
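
    Stable-isotope dilution quantification of this kind is typically read out from analyte-to-internal-standard response ratios. The sketch below illustrates that arithmetic with made-up peak-area ratios and a hypothetical ratio-versus-concentration calibration; it is not the authors' data or workflow.

```python
# Illustrative stable-isotope-dilution calculation: quantify an analyte from the
# ratio of its peak area to that of a co-eluting labeled internal standard (IS).
import numpy as np

# Hypothetical calibration: ratio = slope * concentration + intercept,
# built from standards spiked with a fixed amount of labeled IS.
cal_conc  = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0])          # nmol/mL
cal_ratio = np.array([0.021, 0.101, 0.198, 1.02, 2.01, 9.95])   # area(analyte)/area(IS)
slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

# Unknown brain-tissue extract, spiked with the same amount of IS.
sample_ratio = 0.84
conc = (sample_ratio - intercept) / slope
print(f"estimated concentration ~ {conc:.3f} nmol/mL of extract")
```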

  12. A fast ultra high pressure liquid chromatographic method for qualification and quantification of pharmaceutical combination preparations containing paracetamol, acetyl salicylic acid and/or antihistaminics.

    Science.gov (United States)

    Deconinck, E; Sacré, P Y; Baudewyns, S; Courselle, P; De Beer, J

    2011-09-10

    A fully validated UHPLC method was developed for the identification and quantification of pharmaceutical preparations containing paracetamol and/or acetyl salicylic acid combined with antihistaminics (phenylephrine, pheniramine maleate, diphenhydramine, promethazine) and/or other additives such as quinine sulphate, caffeine or codeine phosphate. The proposed method uses a Waters Acquity BEH C18 column (2 mm × 100 mm, 1.7 μm) with a gradient of an ammonium acetate buffer pH 4.0 as aqueous phase and methanol as organic modifier. The method was fully validated based on its measurement uncertainty (accuracy profile) and robustness tests. Calibration lines for all components were linear within the studied ranges. The relative bias and relative standard deviations for all components were smaller than 1.5% and 2%, respectively; the β-expectation tolerance limits did not exceed the acceptance limits of 10%; and the relative expanded uncertainties were smaller than 5% for all of the considered components. The resulting UHPLC method for the identification and quantification of this kind of pharmaceutical preparation will significantly reduce analysis times and workload for the laboratories charged with their quality control. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Development and Validation of a Bioanalytical Method for Quantification of 2,6-Bis-(4-hydroxy-3-methoxybenzylidene)-cyclohexanone (BHMC) in Rat Plasma

    Directory of Open Access Journals (Sweden)

    Yu Zhao Lee

    2012-12-01

    A sensitive and accurate high performance liquid chromatography with ultraviolet/visible light detection (HPLC-UV/VIS) method for the quantification of 2,6-bis-(4-hydroxy-3-methoxybenzylidene)-cyclohexanone (BHMC) in rat plasma was developed and validated. BHMC and the internal standard, harmaline, were extracted from plasma samples by a simple liquid-liquid extraction using 95% ethyl acetate and 5% methanol. Plasma concentrations of BHMC and the internal standard were analyzed by reversed-phase chromatography using a C18 column (150 × 4.6 mm I.D., particle size 5 µm) and elution with a gradient mobile phase of water and methanol at a flow rate of 1.0 mL/min. Detection of BHMC and the internal standard was performed at a wavelength of 380 nm. The limit of quantification was 0.02 µg/mL. The calibration curve was linear (R² > 0.999) over the concentration range of 0.02-2.5 µg/mL. Intra- and inter-day precision were less than 2% coefficient of variation. The validated method was then applied to a pharmacokinetic study in rats by intravenous administration of BHMC at a single dose of 10 mg/kg. Pharmacokinetic parameters such as half-life, maximum plasma concentration, volume of distribution, clearance and elimination rate constant for BHMC were calculated.

  14. Magnetic immunoassay coupled with inductively coupled plasma mass spectrometry for simultaneous quantification of alpha-fetoprotein and carcinoembryonic antigen in human serum

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xing; Chen, Beibei; He, Man; Zhang, Yiwen; Xiao, Guangyang; Hu, Bin, E-mail: binhu@whu.edu.cn

    2015-04-01

    The absolute quantification of glycoproteins in complex biological samples is a challenge and of great significance. Herein, 4-mercaptophenylboronic acid functionalized magnetic beads were prepared to selectively capture glycoproteins, while antibody-conjugated gold and silver nanoparticles were synthesized as element tags to label two different glycoproteins. Based on that, a new approach of magnetic immunoassay-inductively coupled plasma mass spectrometry (ICP-MS) was established for simultaneous quantitative analysis of glycoproteins. Taking the biomarkers alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA) as two model glycoproteins, experimental parameters involved in the immunoassay procedure were carefully optimized and the analytical performance of the proposed method was evaluated. The limits of detection (LODs) for AFP and CEA were 0.086 μg L⁻¹ and 0.054 μg L⁻¹, with relative standard deviations (RSDs, n = 7, c = 5 μg L⁻¹) of 6.5% and 6.2% for AFP and CEA, respectively. The linear range for both AFP and CEA was 0.2–50 μg L⁻¹. To validate the applicability of the proposed method, human serum samples were analyzed, and the obtained results were in good agreement with those obtained by the clinical chemiluminescence immunoassay. The developed method exhibited good selectivity and sensitivity for the simultaneous determination of AFP and CEA, and extended the applicability of metal nanoparticle tags based on ICP-MS methodology in multiple glycoprotein quantifications. - Highlights: • 4-Mercaptophenylboronic acid functionalized magnetic beads were prepared and characterized. • ICP-MS based magnetic immunoassay approach was developed for quantification of glycoproteins. • AFP and CEA were quantified simultaneously with Au and Ag NPs as element tags. • The developed method exhibited good selectivity and sensitivity for target glycoproteins.

  15. Simultaneous Quantification of Seven Bioactive Flavonoids in Citri Reticulatae Pericarpium by Ultra-Fast Liquid Chromatography Coupled with Tandem Mass Spectrometry.

    Science.gov (United States)

    Zhao, Lian-Hua; Zhao, Hong-Zheng; Zhao, Xue; Kong, Wei-Jun; Hu, Yi-Chen; Yang, Shi-Hai; Yang, Mei-Hua

    2016-05-01

    Citri Reticulatae Pericarpium (CRP) is a commonly used traditional Chinese medicine with flavonoids as the major bioactive components. Nevertheless, the flavonoid contents of CRP from different sources may vary significantly, affecting the therapeutic effects. Setting up a reliable and comprehensive quality assessment method for flavonoids in CRP is therefore necessary. The aim was to establish a rapid and sensitive ultra-fast liquid chromatography coupled with tandem mass spectrometry (UFLC-MS/MS) method for simultaneous quantification of seven bioactive flavonoids in CRP. A UFLC-MS/MS method coupled to ultrasound-assisted extraction was developed for the simultaneous separation and quantification of seven flavonoids, including hesperidin, neohesperidin, naringin, narirutin, tangeretin, nobiletin and sinensetin, in 16 batches of CRP samples from different sources in China. The established method showed good linearity for all analytes with correlation coefficients (R) over 0.9980, together with satisfactory accuracy, precision and reproducibility. Furthermore, the recoveries at the three spiked levels were higher than 89.71% with relative standard deviations (RSDs) lower than 5.19%. The results indicated that the contents of the seven bioactive flavonoids in CRP varied significantly among sources. Hesperidin showed the highest content in all 16 samples, ranging from 27.50 mg/g (CRP-15) to 86.30 mg/g (CRP-9), while the amount of narirutin was too low to be measured in some samples. This study revealed that the developed UFLC-MS/MS method was simple, sensitive and reliable for simultaneous quantification of multiple components in CRP, with potential for quality control of complex matrices. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Development and validation of high-performance liquid chromatography and high-performance thin-layer chromatography methods for the quantification of khellin in Ammi visnaga seed

    Science.gov (United States)

    Kamal, Abid; Khan, Washim; Ahmad, Sayeed; Ahmad, F. J.; Saleem, Kishwar

    2015-01-01

    Objective: The present study aimed to develop simple, accurate and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) and high-performance thin-layer chromatography (HPTLC) methods for the quantification of khellin present in the seeds of Ammi visnaga. Materials and Methods: RP-HPLC analysis was performed on a C18 column with methanol:water (75:25, v/v) as the mobile phase. The HPTLC method involved densitometric evaluation of khellin after resolving it on a silica gel plate using ethyl acetate:toluene:formic acid (5.5:4.0:0.5, v/v/v) as the mobile phase. Results: The developed HPLC and HPTLC methods were validated for precision (interday, intraday and intersystem), robustness, accuracy, limit of detection and limit of quantification. The relationship between the concentration of the standard solutions and the peak response was linear over the range of 10–80 μg/mL in HPLC and 25–1,000 ng/spot in HPTLC for khellin. The relative standard deviation values for method precision were 0.63–1.97% in HPLC and 0.62–2.05% in HPTLC. Accuracy was checked by recovery studies conducted at three concentration levels; the average percentage recovery was 100.53% in HPLC and 100.08% in HPTLC. Conclusions: The developed HPLC and HPTLC methods for the quantification of khellin were found to be simple, precise, specific, sensitive and accurate, and can be used for routine analysis and quality control of A. visnaga and of formulations containing it as an ingredient. PMID:26681890

  17. Tonopah Test Range - Index

    Science.gov (United States)

    Tonopah is the testing range of choice for all national security missions. Tonopah Test Range (TTR) provides research and

  18. Development of heavy ion linear accelerators

    International Nuclear Information System (INIS)

    Bomko, V.A.; Khizhnyak, N.A.

    1981-01-01

    A review of the known heavy ion accelerators is given. Cyclic and linear accelerators are considered the most promising in the energy range up to 10 MeV/nucleon with respect to universality, i.e. the possibility of accelerating ions over a wide mass range. However, in terms of accelerated beam intensity for the heavier ions, linear accelerators have considerable advantages over any other type of accelerator. The review of known heavy ion linac structures leads to the conclusion that a new modification of an accelerating structure with opposite pins excited on an H-wave is the most promising one.

  19. Hepatitis B virus DNA quantification with the three-in-one (3io) method allows accurate single-step differentiation of total HBV DNA and cccDNA in biopsy-size liver samples.

    Science.gov (United States)

    Taranta, Andrzej; Tien Sy, Bui; Zacher, Behrend Johan; Rogalska-Taranta, Magdalena; Manns, Michael Peter; Bock, Claus Thomas; Wursthorn, Karsten

    2014-08-01

    Hepatitis B virus (HBV) replicates via reverse transcription, converting its partially double-stranded genome into covalently closed circular DNA (cccDNA). The long-lasting cccDNA serves as a replication intermediate in the nuclei of hepatocytes. It is an excellent, though evasive, parameter for monitoring the course of liver disease and treatment efficiency. The aim was to develop and test a new approach for HBV DNA quantification in serum and small-size liver samples. The p3io plasmid contains an HBV fragment and the human β-actin gene (hACTB) as a standard. The respective TaqMan probes were labeled with different fluorescent dyes, so that a triplex real-time PCR for simultaneous quantification of total HBV DNA, cccDNA and hACTB could be established. The three-in-one method allows simultaneous analysis of the 3 targets with a lower limit of quantification of 48 copies per 20 μl PCR reaction and a wide range of linearity (R² > 0.99). The method was applied to DNA samples from HBV-infected patients: total HBV DNA and cccDNA could be quantified in 32 and 22 of 33 FFPE-preserved liver specimens, respectively. Total HBV DNA concentrations quantified by the 3io method remained comparable with the Cobas TaqMan HBV Test v2.0. The three-in-one protocol allows single-step quantification of viral DNA in samples from different sources. Lower sample input, faster data acquisition, reduced error and significantly lower costs are therefore advantages of the method. Copyright © 2014 Elsevier B.V. All rights reserved.
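
    Results from a multiplex assay like this are often normalized to the cellular reference gene. The sketch below shows one such calculation (target copies per cell, assuming two hACTB copies per diploid genome); the copy numbers and the helper function are invented for illustration and are not taken from the paper.

```python
# Illustrative normalization of triplex qPCR results: express total HBV DNA and
# cccDNA per cell using the cellular reference gene (hACTB, 2 copies per diploid
# cell). All copy numbers below are invented.
def per_cell(target_copies: float, hactb_copies: float) -> float:
    cells = hactb_copies / 2.0            # two hACTB alleles per diploid cell
    return target_copies / cells

total_hbv = 1.8e5     # copies measured in the reaction
cccdna    = 3.2e3
hactb     = 4.0e4

print(f"total HBV DNA: {per_cell(total_hbv, hactb):.2f} copies/cell")
print(f"cccDNA:        {per_cell(cccdna, hactb):.3f} copies/cell")
```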

  20. Finite-dimensional linear algebra

    CERN Document Server

    Gockenbach, Mark S

    2010-01-01

    Contents: Some Problems Posed on Vector Spaces (linear equations; best approximation; diagonalization; summary). Fields and Vector Spaces (fields; vector spaces; subspaces; linear combinations and spanning sets; linear independence; basis and dimension; properties of bases; polynomial interpolation and the Lagrange basis; continuous piecewise polynomial functions). Linear Operators (linear operators; more properties of linear operators; isomorphic vector spaces; linear operator equations; existence and uniqueness of solutions; the fundamental theorem; inverse operators; Gaussian elimination; Newton's method; linear ordinary differential eq...)

  1. Voltage linear transformation circuit design

    Science.gov (United States)

    Sanchez, Lucas R. W.; Jin, Moon-Seob; Scott, R. Phillip; Luder, Ryan J.; Hart, Michael

    2017-09-01

    Many engineering projects require automated control of analog voltages over a specified range. We have developed a computer interface comprising custom hardware and MATLAB code to provide real-time control of a Thorlabs adaptive optics (AO) kit. The hardware interface includes an op amp cascade to linearly shift and scale a voltage range. With easy modifications, any linear transformation can be accommodated. In AO applications, the design is suitable to drive a range of different types of deformable and fast steering mirrors (FSM's). Our original motivation and application was to control an Optics in Motion (OIM) FSM which requires the customer to devise a unique interface to supply voltages to the mirror controller to set the mirror's angular deflection. The FSM is in an optical servo loop with a wave front sensor (WFS), which controls the dynamic behavior of the mirror's deflection. The code acquires wavefront data from the WFS and fits a plane, which is subsequently converted into its corresponding angular deflection. The FSM provides +/-3° optical angular deflection for a +/-10 V voltage swing. Voltages are applied to the mirror via a National Instruments digital-to-analog converter (DAC) followed by an op amp cascade circuit. This system has been integrated into our Thorlabs AO testbed which currently runs at 11 Hz, but with planned software upgrades, the system update rate is expected to improve to 500 Hz. To show that the FSM subsystem is ready for this speed, we conducted two different PID tuning runs at different step commands. Once 500 Hz is achieved, we plan to make the code and method for our interface solution freely available to the community.
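
    The shift-and-scale performed by the op amp cascade is an affine map between two ranges. The helper below sketches that mapping in software, using the +/-3° to +/-10 V figures quoted in the abstract; the function itself is a hypothetical illustration, not the authors' MATLAB code.

```python
# Sketch of the affine (linear) range mapping realized in hardware by the op amp
# cascade: map an input range [in_lo, in_hi] onto an output range [out_lo, out_hi].
def linear_map(x: float, in_lo: float, in_hi: float, out_lo: float, out_hi: float) -> float:
    gain = (out_hi - out_lo) / (in_hi - in_lo)       # scale factor
    return out_lo + gain * (x - in_lo)               # shift + scale

# Example from the abstract: +/-3 deg of optical deflection maps to +/-10 V.
for angle_deg in (-3.0, 0.0, 1.5, 3.0):
    volts = linear_map(angle_deg, -3.0, 3.0, -10.0, 10.0)
    print(f"{angle_deg:+.1f} deg -> {volts:+.2f} V")
```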

  2. High-performance Thin-layer Chromatographic-densitometric Quantification and Recovery of Bioactive Compounds for Identification of Elite Chemotypes of Gloriosa superba L. Collected from Sikkim Himalayas (India).

    Science.gov (United States)

    Misra, Ankita; Shukla, Pushpendra Kumar; Kumar, Bhanu; Chand, Jai; Kushwaha, Poonam; Khalid, Md; Singh Rawat, Ajay Kumar; Srivastava, Sharad

    2017-10-01

    Gloriosa superba L. (Colchicaceae) is used as adjuvant therapy in gout for its potential antimitotic activity due to its high content of colchicine alkaloids. This study aimed to develop an easy, cheap, precise, and accurate validated high-performance thin-layer chromatographic (HPTLC) method for simultaneous quantification of the bioactive alkaloids colchicine and gloriosine in G. superba L. and to identify its elite chemotype(s) from the Sikkim Himalayas (India). The HPTLC method was developed using a mobile phase of chloroform:acetone:diethylamine (5:4:1) at a λmax of 350 nm. Five germplasms were collected from the targeted region, and on morpho-anatomical inspection no significant variation was observed among them. Quantification data reveal that the contents of colchicine (Rf 0.72) and gloriosine (Rf 0.61) vary from 0.035% to 0.150% and from 0.006% to 0.032% (dry wt. basis), respectively. Linearity was obtained in the concentration range of 100-400 ng/spot of the marker(s), with regression coefficients of 0.9987 (colchicine) and 0.9983 (gloriosine) and recoveries of 97.79 ± 3.86% and 100.023 ± 0.01%, respectively. The limits of detection and quantification were 6.245 and 18.926 ng for colchicine and 8.024 and 24.316 ng for gloriosine. Two germplasms, NBG-27 and NBG-26, were found to be elite chemotypes for both markers. The developed method is validated in terms of accuracy, recovery, and precision as per the ICH guidelines (2005) and can be adopted for the simultaneous quantification of colchicine and gloriosine in phytopharmaceuticals. In addition, this study is relevant for exploring chemotypic variability in metabolite content for commercial and medicinal purposes.
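
    For reference, the detection and quantification limits reported above are conventionally obtained, under the ICH approach the abstract cites, from the calibration slope S and a standard deviation σ of the response (residual or blank); the relations below state that standard convention and are not recomputed from the paper's data.

```latex
% Standard ICH-style estimates of detection and quantification limits from a
% linear calibration with slope $S$ and response standard deviation $\sigma$:
\[
  \mathrm{LOD} = \frac{3.3\,\sigma}{S},
  \qquad
  \mathrm{LOQ} = \frac{10\,\sigma}{S}.
\]
```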

  3. Linearity and Non-linearity of Photorefractive effect in Materials ...

    African Journals Online (AJOL)

    In this paper we have studied the linearity and non-linearity of the photorefractive effect in materials using the band transport model. For low light beam intensities the change in the refractive index is proportional to the electric field (linear optics), while for non-linear optics the change in refractive index is directly proportional ...

  4. A sensitive high performance liquid chromatography assay for the quantification of doxorubicin associated with DNA in tumor and tissues.

    Science.gov (United States)

    Lucas, Andrew T; O'Neal, Sara K; Santos, Charlene M; White, Taylor F; Zamboni, William C

    2016-02-05

    Doxorubicin, a widely used anticancer agent, exhibits antitumor activity against a wide variety of malignancies. The drug exerts its cytotoxic effects by binding to and intercalating within the DNA of tumor and tissue cells. However, current assays are unable to accurately determine the concentration of the intracellular active form of doxorubicin. Thus, a sample processing method and a high-performance liquid chromatography (HPLC) methodology were developed to quantify doxorubicin associated with DNA in tumors and tissues, providing an intracellular cytotoxic measure of doxorubicin exposure after administration of small molecule and nanoparticle formulations of doxorubicin. The assay uses daunorubicin as an internal standard; liquid-liquid phase extraction to isolate drug associated with DNA; and a Shimadzu HPLC with fluorescence detection equipped with a Phenomenex Luna C18 (2 μm, 2.0 × 100 mm) analytical column and a gradient mobile phase of 0.1% formic acid in water or acetonitrile for separation and quantification. The assay has a lower limit of quantification (LLOQ) of 10 ng/mL and is shown to be linear up to 3000 ng/mL. The intra- and inter-day precision of the assay, expressed as a coefficient of variation (CV%), ranged from 4.01 to 8.81%. Furthermore, the suitability of this assay for measuring doxorubicin associated with DNA in vivo was demonstrated by using it to quantify the doxorubicin concentration within tumor samples from SKOV3 and HEC1A mice obtained 72 h after administration of PEGylated liposomal doxorubicin (Doxil®; PLD) at 6 mg/kg IV × 1. This HPLC assay allows sensitive intracellular quantification of doxorubicin and will be an important tool for future studies evaluating the intracellular pharmacokinetics of doxorubicin and various nanoparticle formulations of doxorubicin. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Quantification of 15 bile acids in lake charr feces by ultra-high performance liquid chromatography–tandem mass spectrometry

    Science.gov (United States)

    Li, Ke; Buchinger, Tyler J.; Bussy, Ugo; Fissette, Skye D.; Johnson, Nicholas; Li, Weiming

    2015-01-01

    Many fishes are hypothesized to use bile acids (BAs) as chemical cues, yet quantification of BAs in biological samples and the required methods remain limited. Here, we present a UHPLC-MS/MS method for simultaneous, sensitive, and rapid quantification of 15 BAs, including free, taurine-conjugated, and glycine-conjugated BAs, and its application to fecal samples from lake charr (Salvelinus namaycush). The analytes were separated on a C18 column with acetonitrile-water (containing 7.5 mM ammonium acetate and 0.1% formic acid) as the mobile phase at a flow rate of 0.25 mL/min over 12 min. BAs were monitored with a negative electrospray triple quadrupole mass spectrometer (Xevo TQ-S™). Calibration curves of the 15 BAs were linear over the concentration range of 1.00–5,000 ng/mL. Validation revealed that the method was specific, accurate, and precise. The method was applied to quantitative analysis of fecal extracts of lake charr fry and of the food they were eating. The concentrations of CA, TCDCA, TCA, and CDCA were 242.3, 81.2, 60.7, and 36.2 ng/mg, respectively, whereas the other taurine-conjugated BAs, TUDCA, TDCA, and THDCA, were not detected in the feces. Interestingly, TCA and TCDCA were detected at high concentrations in the food pellets, at 71.9 and 38.2 ng/mg, respectively. Application of the method to feces samples from lake charr supported a role of BAs as chemical cues, and will enhance further investigation of BAs as chemical cues in other fish species.

  6. A dilute-and-shoot flow-injection tandem mass spectrometry method for quantification of phenobarbital in urine.

    Science.gov (United States)

    Alagandula, Ravali; Zhou, Xiang; Guo, Baochuan

    2017-01-15

    Liquid chromatography/tandem mass spectrometry (LC/MS/MS) is the gold standard of urine drug testing. However, current LC-based methods are time consuming, limiting the throughput of MS-based testing and increasing its cost. This is particularly problematic for quantification of drugs such as phenobarbital, which is often analyzed in a separate run because it must be negatively ionized. This study examined the feasibility of using a dilute-and-shoot flow-injection method without LC separation to quantify drugs, with phenobarbital as a model system. Briefly, a urine sample containing phenobarbital was first diluted 10-fold, followed by flow injection of the diluted sample into the mass spectrometer. Quantification and detection of phenobarbital were achieved by an electrospray negative-ionization MS/MS system operated in the multiple reaction monitoring (MRM) mode with the stable-isotope-labeled drug as internal standard. The dilute-and-shoot flow-injection method was linear over a dynamic range of 50-2000 ng/mL of phenobarbital with a correlation coefficient > 0.9996. The coefficients of variation and relative errors for intra- and inter-assays at four quality control (QC) levels (50, 125, 445 and 1600 ng/mL) were 3.0% and 5.0%, respectively. The total run time to quantify one sample was 2 min, and the sensitivity and specificity of the method did not deteriorate even after 1200 consecutive injections. Our method can accurately and robustly quantify phenobarbital in urine without LC separation. Because of its 2 min run time, the method can process 720 samples per day. This feasibility study shows that the dilute-and-shoot flow-injection approach can be a general way to achieve fast analysis of drugs in urine. Copyright © 2016 John Wiley & Sons, Ltd.
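
    The QC-level precision (CV%) and accuracy (relative error) quoted above are simple summary statistics over replicate measurements; the sketch below computes them for one invented QC level purely to make the definitions concrete.

```python
# Illustrative intra-assay precision (CV%) and accuracy (relative error, %) for
# one QC level; the replicate measurements below are invented.
import numpy as np

nominal = 125.0                                       # ng/mL, QC nominal concentration
measured = np.array([121.4, 127.9, 124.2, 129.1, 122.8, 126.5])  # replicate results

cv_percent = 100 * measured.std(ddof=1) / measured.mean()
rel_error  = 100 * (measured.mean() - nominal) / nominal

print(f"CV = {cv_percent:.1f}%, relative error = {rel_error:+.1f}%")
```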

  7. Quantification of trace-level DNA by real-time whole genome amplification.

    Science.gov (United States)

    Kang, Min-Jung; Yu, Hannah; Kim, Sook-Kyung; Park, Sang-Ryoul; Yang, Inchul

    2011-01-01

    Quantification of trace amounts of DNA is a challenge in analytical applications where the concentration of a target DNA is very low or only limited amounts of sample are available for analysis. PCR-based methods, including real-time PCR, are highly sensitive and widely used for quantification of low-level DNA samples. However, ordinary PCR methods require at least one copy of a specific gene sequence for amplification and may not work for a sub-genomic amount of DNA. We suggest a real-time whole genome amplification method adopting degenerate oligonucleotide primed PCR (DOP-PCR) for quantification of sub-genomic amounts of DNA. This approach enabled quantification of sub-picogram amounts of DNA independently of their sequences. When the method was applied to human placental DNA whose amount had been accurately determined by inductively coupled plasma-optical emission spectroscopy (ICP-OES), accurate and stable quantification was obtained for DNA samples ranging from 80 fg to 8 ng. In blind tests of laboratory-prepared DNA samples, measurement accuracies of 7.4%, -2.1%, and -13.9% with analytical precisions around 15% were achieved for 400-pg, 4-pg, and 400-fg DNA samples, respectively. A similar quantification capability was also observed for other DNA species from calf, E. coli, and lambda phage. Therefore, when provided with an appropriate standard DNA, the suggested real-time DOP-PCR method can be used as a universal method for quantification of trace amounts of DNA.
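
    Real-time amplification assays of this type are usually read out against a standard curve of threshold cycle (Ct) versus the logarithm of input DNA. The sketch below fits such a curve and inverts it for an unknown; the Ct values and the efficiency estimate are invented illustrations, not the paper's data.

```python
# Sketch of standard-curve quantification for a real-time amplification assay:
# fit Ct against log10(input DNA mass) for standards, then invert for unknowns.
# All Ct values below are invented.
import numpy as np

std_mass_pg = np.array([0.08, 0.8, 8.0, 80.0, 800.0, 8000.0])   # 80 fg .. 8 ng
std_ct      = np.array([30.1, 26.8, 23.4, 20.0, 16.7, 13.3])

slope, intercept = np.polyfit(np.log10(std_mass_pg), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency estimate

unknown_ct = 24.6
log_mass = (unknown_ct - intercept) / slope
print(f"slope = {slope:.2f} cycles per log10, efficiency ~ {100 * efficiency:.0f}%")
print(f"estimated input ~ {10 ** log_mass:.2f} pg")
```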

  8. Linearly Refined Session Types

    Directory of Open Access Journals (Sweden)

    Pedro Baltazar

    2012-11-01

    Session types capture precise protocol structure in concurrent programming, but do not specify properties of the exchanged values beyond their basic type. Refinement types are a form of dependent types that can address this limitation, combining types with logical formulae that may refer to program values and can constrain types using arbitrary predicates. We present a pi calculus with assume and assert operations, typed using a session discipline that incorporates refinement formulae written in a fragment of Multiplicative Linear Logic. Our original combination of session and refinement types, together with the well established benefits of linearity, allows very fine-grained specifications of communication protocols in which refinement formulae are treated as logical resources rather than persistent truths.

  9. Linear Water Waves

    Science.gov (United States)

    Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

    2002-08-01

    This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'

  10. The International Linear Collider

    Directory of Open Access Journals (Sweden)

    List Benno

    2014-04-01

    The International Linear Collider (ILC) is a proposed e+e− linear collider with a centre-of-mass energy of 200–500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.

  11. The International Linear Collider

    Science.gov (United States)

    List, Benno

    2014-04-01

    The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.

  12. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We briefly review the basic problems in determining the dimension of linear models, then treat each of the eight measures. The results are illustrated by examples.

  13. Reciprocating linear motor

    Science.gov (United States)

    Goldowsky, Michael P. (Inventor)

    1987-01-01

    A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.

  14. Duality in linearized gravity

    International Nuclear Information System (INIS)

    Henneaux, Marc; Teitelboim, Claudio

    2005-01-01

    We show that duality transformations of linearized gravity in four dimensions, i.e., rotations of the linearized Riemann tensor and its dual into each other, can be extended to the dynamical fields of the theory so as to be symmetries of the action and not just symmetries of the equations of motion. Our approach relies on the introduction of two superpotentials, one for the spatial components of the spin-2 field and the other for their canonically conjugate momenta. These superpotentials are two-index, symmetric tensors. They can be taken to be the basic dynamical fields and appear locally in the action. They are simply rotated into each other under duality. In terms of the superpotentials, the canonical generator of duality rotations is found to have a Chern-Simons-like structure, as in the Maxwell case

  15. The SLAC linear collider

    International Nuclear Information System (INIS)

    Phinney, N.

    1992-01-01

    The SLAC Linear Collider has begun a new era of operation with the SLD detector. During 1991 there was a first engineering run for the SLD in parallel with machine improvements to increase luminosity and reliability. For the 1992 run, a polarized electron source was added and more than 10,000 Zs with an average of 23% polarization have been logged by the SLD. This paper discusses the performance of the SLC in 1991 and 1992 and the technical advances that have produced higher luminosity. Emphasis will be placed on issues relevant to future linear colliders such as producing and maintaining high current, low emittance beams and focusing the beams to the micron scale for collisions. (Author) tab., 2 figs., 18 refs

  16. Linear waves and instabilities

    International Nuclear Information System (INIS)

    Bers, A.

    1975-01-01

    The electrodynamic equations for small-amplitude waves and their dispersion relation in a homogeneous plasma are outlined. For such waves, energy and momentum, and their flow and transformation, are described. Perturbation theory of waves is treated and applied to linear coupling of waves, and the resulting instabilities from such interactions between active and passive waves. Linear stability analysis in time and space is described where the time-asymptotic, time-space Green's function for an arbitrary dispersion relation is developed. The perturbation theory of waves is applied to nonlinear coupling, with particular emphasis on pump-driven interactions of waves. Details of the time--space evolution of instabilities due to coupling are given. (U.S.)

  17. The quantification of risk and tourism

    Directory of Open Access Journals (Sweden)

    Piet Croucamp

    2014-01-01

    Tourism in South Africa comprises 9.5% of Gross Domestic Product (GDP), but remains an under-researched industry, especially regarding the quantification of the risks prevailing in the social, political and economic environment in which the industry operates. Risk prediction and extrapolation forecasting are conducted largely in the context of a qualitative methodology. This article reflects on the quantification of social constructs as variables of risk in the tourism industry with reference to South Africa. The theory and methodology of quantification is briefly reviewed and the indicators of risk are conceptualized and operationalized. The identified indicators are scaled in indices for purposes of quantification. Risk assessments and the quantification of constructs rely heavily on the experience - often personal - of the researcher, and this scholarly endeavour is therefore not inclusive of all possible identified indicators of risk. It is accepted that tourism in South Africa is an industry comprising a large diversity of sectors, each with a different set of risk indicators and risk profiles. The emphasis of this article is thus on the methodology to be applied to a risk profile. A secondary endeavour is to provide clarity about the conceptual and operational confines of risk in general, as well as how quantified risk relates to the tourism industry. The indices provided include both domestic and international risk indicators. The motivation for the article is to encourage a greater emphasis on quantitative research in our efforts to understand and manage a risk profile for the tourist industry.

  18. Fundamentals of linear algebra

    CERN Document Server

    Dash, Rajani Ballav

    2008-01-01

    FUNDAMENTALS OF LINEAR ALGEBRA is a comprehensive textbook that can be used by students and teachers of all Indian universities. The text has an easy, understandable form and covers all topics of the UGC curriculum. There are many worked-out examples, which help students solve problems without anybody's help. The problem sets have been designed keeping in view the questions asked in different examinations.

  19. Linear network theory

    CERN Document Server

    Sander, K F

    1964-01-01

    Linear Network Theory covers the significant algebraic aspect of network theory, with minimal reference to practical circuits. The book begins the presentation of network analysis with the exposition of networks containing resistances only, and follows it up with a discussion of networks involving inductance and capacity by way of the differential equations. Classification and description of certain networks, equivalent networks, filter circuits, and network functions are also covered. Electrical engineers, technicians, electronics engineers, electricians, and students learning the intricacies

  20. Non linear viscoelastic models

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2011-01-01

    Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness, while the Kelvin-Voigt version does not.

  1. Relativistic Linear Restoring Force

    Science.gov (United States)

    Clark, D.; Franklin, J.; Mann, N.

    2012-01-01

    We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
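
    Written out explicitly, and using standard notation rather than anything quoted from the paper, the two candidate relativistic forms of the linear restoring force are:

```latex
% Two candidate relativistic forms of Hooke's law for a particle of rest mass m,
% displacement x, momentum p, proper time \tau and coordinate time t:
\[
  \frac{dp}{dt} = -kx
  \qquad\text{or}\qquad
  \frac{dp}{d\tau} = -kx ,
  \qquad
  p = \gamma m \frac{dx}{dt},
  \quad
  \gamma = \frac{1}{\sqrt{1 - \dot{x}^{2}/c^{2}}} .
\]
% Both reduce to m\ddot{x} = -kx in the limit v \ll c.
```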

  2. Superconducting linear colliders

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    The advantages of superconducting radiofrequency (SRF) for particle accelerators have been demonstrated by successful operation of systems in the TRISTAN and LEP electron-positron collider rings respectively at the Japanese KEK Laboratory and at CERN. If performance continues to improve and costs can be lowered, this would open an attractive option for a high luminosity TeV (1000 GeV) linear collider

  3. Perturbed asymptotically linear problems

    OpenAIRE

    Bartolo, R.; Candela, A. M.; Salvatore, A.

    2012-01-01

    The aim of this paper is investigating the existence of solutions of some semilinear elliptic problems on open bounded domains when the nonlinearity is subcritical and asymptotically linear at infinity and there is a perturbation term which is just continuous. Also in the case when the problem has not a variational structure, suitable procedures and estimates allow us to prove that the number of distinct crtitical levels of the functional associated to the unperturbed problem is "stable" unde...

  4. Linear pneumatic actuator

    Directory of Open Access Journals (Sweden)

    Avram Mihai

    2017-01-01

    The paper presents a linear pneumatic actuator with short working stroke. It consists of a pneumatic motor (a simple stroke cylinder or a membrane chamber), two 2/2 pneumatic distributors “all or nothing” electrically commanded for controlling the intake/outtake flow to/from the active chamber of the motor, a position transducer and a microcontroller. There is also presented the theoretical analysis (mathematical modelling and numerical simulation) accomplished.

  5. Linear pneumatic actuator

    OpenAIRE

    Avram Mihai; Niţu Constantin; Bucşan Constantin; Grămescu Bogdan

    2017-01-01

    The paper presents a linear pneumatic actuator with short working stroke. It consists of a pneumatic motor (a simple stroke cylinder or a membrane chamber), two 2/2 pneumatic distributors “all or nothing” electrically commanded for controlling the intake/outtake flow to/from the active chamber of the motor, a position transducer and a microcontroller. There is also presented the theoretical analysis (mathematical modelling and numerical simulation) accomplished.

  6. Linear MHD equilibria

    International Nuclear Information System (INIS)

    Scheffel, J.

    1984-03-01

    The linear Grad-Shafranov equation for a toroidal, axisymmetric plasma is solved analytically. Exact solutions are given in terms of confluent hyper-geometric functions. As an alternative, simple and accurate WKBJ solutions are presented. With parabolic pressure profiles, both hollow and peaked toroidal current density profiles are obtained. As an example the equilibrium of a z-pinch with a square-shaped cross section is derived.(author)

  7. Linear induction accelerator

    Science.gov (United States)

    Buttram, M.T.; Ginn, J.W.

    1988-06-21

    A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.

  8. Stability-indicating liquid chromatographic method for quantification of new anti-epileptic drug lacosamide in bulk and pharmaceutical formulation

    Directory of Open Access Journals (Sweden)

    Chhalotiya Usmangani K.

    2012-01-01

    An isocratic stability-indicating reversed-phase liquid chromatographic method was developed for the quantitative determination of lacosamide in the pharmaceutical dosage form. A Hypersil C-18, 4.5 μm column with a mobile phase containing acetonitrile-water (20:80, v/v) was used. The flow rate was 1.0 mL min⁻¹ and effluents were monitored at 258 nm. The retention time of lacosamide was 8.9 min. The method was found to be linear in the concentration range of 5-100 μg/mL and the recovery was found to be in the range of 99.15-100.09%. The limit of detection and limit of quantification were found to be 2 μg/mL and 5 μg/mL, respectively. Lacosamide stock solutions were subjected to acid and alkali hydrolysis, chemical oxidation and dry heat degradation. The drug was found to be stable under the dry heat and acidic conditions attempted. The proposed method was validated and successfully applied to the estimation of lacosamide in tablet dosage forms.

  9. Linear algebraic groups

    CERN Document Server

    Springer, T A

    1998-01-01

    "[The first] ten chapters...are an efficient, accessible, and self-contained introduction to affine algebraic groups over an algebraically closed field. The author includes exercises and the book is certainly usable by graduate students as a text or for self-study...the author [has a] student-friendly style… [The following] seven chapters... would also be a good introduction to rationality issues for algebraic groups. A number of results from the literature…appear for the first time in a text." –Mathematical Reviews (Review of the Second Edition) "This book is a completely new version of the first edition. The aim of the old book was to present the theory of linear algebraic groups over an algebraically closed field. Reading that book, many people entered the research field of linear algebraic groups. The present book has a wider scope. Its aim is to treat the theory of linear algebraic groups over arbitrary fields. Again, the author keeps the treatment of prerequisites self-contained. The material of t...

  10. Parametric Linear Dynamic Logic

    Directory of Open Access Journals (Sweden)

    Peter Faymonville

    2014-08-01

    We introduce Parametric Linear Dynamic Logic (PLDL), which extends Linear Dynamic Logic (LDL) by temporal operators equipped with parameters that bound their scope. LDL was proposed as an extension of Linear Temporal Logic (LTL) that is able to express all ω-regular specifications while still maintaining many of LTL's desirable properties like an intuitive syntax and a translation into non-deterministic Büchi automata of exponential size. But LDL lacks capabilities to express timing constraints. By adding parameterized operators to LDL, we obtain a logic that is able to express all ω-regular properties and that subsumes parameterized extensions of LTL like Parametric LTL and PROMPT-LTL. Our main technical contribution is a translation of PLDL formulas into non-deterministic Büchi word automata of exponential size via alternating automata. This yields a PSPACE model checking algorithm and a realizability algorithm with doubly-exponential running time. Furthermore, we give tight upper and lower bounds on optimal parameter values for both problems. These results show that PLDL model checking and realizability are not harder than LTL model checking and realizability.

  11. Quantum linear Boltzmann equation

    International Nuclear Information System (INIS)

    Vacchini, Bassano; Hornberger, Klaus

    2009-01-01

    We review the quantum version of the linear Boltzmann equation, which describes in a non-perturbative fashion, by means of scattering theory, how the quantum motion of a single test particle is affected by collisions with an ideal background gas. A heuristic derivation of this Lindblad master equation is presented, based on the requirement of translation-covariance and on the relation to the classical linear Boltzmann equation. After analyzing its general symmetry properties and the associated relaxation dynamics, we discuss a quantum Monte Carlo method for its numerical solution. We then review important limiting forms of the quantum linear Boltzmann equation, such as the case of quantum Brownian motion and pure collisional decoherence, as well as the application to matter wave optics. Finally, we point to the incorporation of quantum degeneracies and self-interactions in the gas by relating the equation to the dynamic structure factor of the ambient medium, and we provide an extension of the equation to include internal degrees of freedom.

  12. The Stanford Linear Collider

    International Nuclear Information System (INIS)

    Emma, P.

    1995-01-01

    The Stanford Linear Collider (SLC) is the first and only high-energy e⁺e⁻ linear collider in the world. Its most remarkable features are high intensity, submicron sized, polarized (e⁻) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z⁰ boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted with a marked improvement in performance. The most significant improvements for the 1994-95 run include new low impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year with peaks approaching 10³⁰ cm⁻² s⁻¹ and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed.

  13. Quantification of mutant SPOP proteins in prostate cancer using mass spectrometry-based targeted proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hui; Barbieri, Christopher E.; He, Jintang; Gao, Yuqian; Shi, Tujin; Wu, Chaochao; Schepmoes, Athena A.; Fillmore, Thomas L.; Chae, Sung-Suk; Huang, Dennis; Mosquera, Juan Miguel; Qian, Wei-Jun; Smith, Richard D.; Srivastava, Sudhir; Kagan, Jacob; Camp, David G.; Rodland, Karin D.; Rubin, Mark A.; Liu, Tao

    2017-08-15

    Speckle-type POZ protein (SPOP) is an E3 ubiquitin ligase adaptor protein that functions as a potential tumor suppressor, and SPOP mutations have been identified in ~10% of human prostate cancers. However, it remains unclear if mutant SPOP proteins can be utilized as biomarkers for early detection, diagnosis, prognosis or targeted therapy of prostate cancer. Moreover, the SPOP mutation sites are distributed in a relatively short region containing multiple lysine residues, posing significant challenges for bottom-up proteomics analysis of the SPOP mutations. To address this issue, PRISM (high-pressure, high-resolution separations coupled with intelligent selection and multiplexing)-SRM (selected reaction monitoring) mass spectrometry assays have been developed for quantifying wild-type SPOP protein and 11 prostate cancer-derived SPOP mutations. Despite inherent limitations due to amino acid sequence constraints, all the PRISM-SRM assays developed using Arg-C digestion showed a linear dynamic range of at least two orders of magnitude, with limits of quantification ranging from 0.1 to 1 fmol/μg of total protein in the cell lysate. Applying these SRM assays to analyze HEK293T cells with and without expression of the three most frequent SPOP mutations in prostate cancer (Y87N, F102C or F133V) led to confident detection of all three SPOP mutations in corresponding positive cell lines but not in the negative cell lines. Expression of the F133V mutation and wild-type SPOP was at much lower levels compared to that of F102C and Y87N mutations; however, at present it is unknown if this also affects the activity of the SPOP protein. In summary, PRISM-SRM enables multiplexed, isoform-specific detection of mutant SPOP proteins in cell lysates, which holds great potential in biomarker development for prostate cancer.
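
    As an illustration of the kind of linear calibration that underlies such SRM assays, the sketch below fits a log-log calibration line to hypothetical light/heavy peak-area ratios and back-calculates an unknown. The spike levels, ratios and the quantified value are invented for illustration and are not data from this study.

    ```python
    import numpy as np

    # Hypothetical SRM calibration points: spiked mutant-peptide amount
    # (fmol/ug total protein) vs measured light/heavy peak-area ratio.
    # Values are illustrative only; the PRISM-SRM assays in the study span
    # >= 2 orders of magnitude with LOQs of 0.1-1 fmol/ug.
    spike = np.array([0.1, 0.3, 1.0, 3.0, 10.0])           # fmol/ug
    ratio = np.array([0.021, 0.063, 0.205, 0.610, 2.04])   # area ratio

    # Fit the calibration line in log-log space, a common choice for wide
    # dynamic ranges, and report the slope (ideally ~1) and R^2.
    slope, intercept = np.polyfit(np.log10(spike), np.log10(ratio), 1)
    pred = slope * np.log10(spike) + intercept
    resid = np.log10(ratio) - pred
    r2 = 1 - np.sum(resid ** 2) / np.sum(
        (np.log10(ratio) - np.log10(ratio).mean()) ** 2)

    def quantify(measured_ratio):
        """Back-calculate peptide amount (fmol/ug) from a measured ratio."""
        return 10 ** ((np.log10(measured_ratio) - intercept) / slope)

    print(f"slope={slope:.3f}, R^2={r2:.4f}, "
          f"unknown={quantify(0.35):.2f} fmol/ug")
    ```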

  14. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a concentration range from 10⁶ to 10⁰ leptospires/mL, with low limits of detection. Based on these results, we propose a protocol for the quantification of pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.

  15. Characterization and quantification of suspended sediment sources to the Manawatu River, New Zealand.

    Science.gov (United States)

    Vale, S S; Fuller, I C; Procter, J N; Basher, L R; Smith, I E

    2016-02-01

    Knowledge of sediment movement throughout a catchment environment is essential due to its influence on the character and form of our landscape relating to agricultural productivity and ecological health. Sediment fingerprinting is a well-used tool for evaluating sediment sources within a fluvial catchment but still faces areas of uncertainty for applications to large catchments that have a complex arrangement of sources. Sediment fingerprinting was applied to the Manawatu River Catchment to differentiate 8 geological and geomorphological sources. The source categories were Mudstone, Hill Subsurface, Hill Surface, Channel Bank, Mountain Range, Gravel Terrace, Loess and Limestone. Geochemical analysis was conducted using XRF and LA-ICP-MS. Geochemical concentrations were analysed using Discriminant Function Analysis and sediment un-mixing models. Two mixing models were used in conjunction with GRG non-linear and Evolutionary optimization methods for comparison. Discriminant Function Analysis required 16 variables to correctly classify 92.6% of sediment sources. Geological explanations were achieved for some of the variables selected, although there is a need for mineralogical information to confirm causes for the geochemical signatures. Consistent source estimates were achieved between models with optimization techniques providing globally optimal solutions for sediment quantification. Sediment sources were attributed primarily to Mudstone, ≈38-46%; followed by the Mountain Range, ≈15-18%; Hill Surface, ≈12-16%; Hill Subsurface, ≈9-11%; Loess, ≈9-15%; Gravel Terrace, ≈0-4%; Channel Bank, ≈0-5%; and Limestone, ≈0%. Sediment source apportionment fits with the conceptual understanding of the catchment which has recognized soft sedimentary mudstone to be highly susceptible to erosion. Inference of the processes responsible for sediment generation can be made for processes where there is a clear relationship with the geomorphology, but is problematic for
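
    The un-mixing step can be illustrated with a simple constrained least-squares sketch: estimate non-negative source proportions that sum to one from tracer concentrations. This is only a schematic alternative to the GRG non-linear and Evolutionary optimization used in the study; the tracer matrix, true proportions and source ordering below are synthetic assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Illustrative un-mixing sketch (not the study's models): solve
    # mixture = sum_i p_i * source_i with p_i >= 0 and sum(p_i) = 1,
    # via non-negative least squares with a weighted sum-to-one row.
    rng = np.random.default_rng(0)
    n_tracers, n_sources = 16, 8                      # 16 variables, 8 sources
    A = rng.uniform(1, 100, (n_tracers, n_sources))   # hypothetical source means
    p_true = np.array([0.43, 0.17, 0.14, 0.10, 0.11, 0.02, 0.03, 0.00])
    b = A @ p_true + rng.normal(0, 0.5, n_tracers)    # synthetic mixture sample

    w = 1e3                                           # weight on sum-to-one row
    A_aug = np.vstack([A, w * np.ones(n_sources)])
    b_aug = np.append(b, w * 1.0)
    p_est, _ = nnls(A_aug, b_aug)

    names = ["Mudstone", "Mountain", "HillSurf", "HillSub",
             "Loess", "Terrace", "Channel", "Limestone"]
    for name, frac in zip(names, p_est):
        print(f"{name:9s} {frac:.2%}")
    ```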

  16. An Integrated Circuit for Chip-Based Analysis of Enzyme Kinetics and Metabolite Quantification.

    Science.gov (United States)

    Cheah, Boon Chong; Macdonald, Alasdair Iain; Martin, Christopher; Streklas, Angelos J; Campbell, Gordon; Al-Rawhani, Mohammed A; Nemeth, Balazs; Grant, James P; Barrett, Michael P; Cumming, David R S

    2016-06-01

    We have created a novel chip-based diagnostic tool based upon quantification of metabolites using enzymes specific for their chemical conversion. Using this device we show for the first time that a solid-state circuit can be used to measure enzyme kinetics and calculate the Michaelis-Menten constant. Substrate concentration dependency of enzyme reaction rates is central to this aim. Ion-sensitive field effect transistors (ISFET) are excellent transducers for biosensing applications that are reliant upon enzyme assays, especially since they can be fabricated using mainstream microelectronics technology to ensure low unit cost, mass-manufacture, scaling to make many sensors and straightforward miniaturisation for use in point-of-care devices. Here, we describe an integrated ISFET array comprising 2¹⁶ sensors. The device was fabricated with a complementary metal oxide semiconductor (CMOS) process. Unlike traditional CMOS ISFET sensors that use the Si3N4 passivation of the foundry for ion detection, the device reported here was processed with a layer of Ta2O5 that increased the detection sensitivity to 45 mV/pH unit at the sensor readout. The drift was reduced to 0.8 mV/hour with a linear pH response between pH 2-12. A high-speed instrumentation system capable of acquiring nearly 500 fps was developed to stream out the data. The device was then used to measure glucose concentration through the activity of hexokinase in the range of 0.05 mM-231 mM, encompassing glucose's physiological range in blood. The localised and temporal enzyme kinetics of hexokinase were studied in detail. These results present a roadmap towards a viable personal metabolome machine.
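
    The Michaelis-Menten analysis described here reduces to a non-linear fit of initial rate against substrate concentration. A minimal sketch with hypothetical rate data (not measurements from the ISFET device) is shown below.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical hexokinase-style data: glucose concentration (mM) and an
    # initial reaction rate derived from a pH-drift readout (arbitrary units).
    S = np.array([0.05, 0.2, 0.5, 1, 2, 5, 10, 25, 50, 100, 231])
    v = np.array([0.9, 3.2, 7.1, 11.8, 17.5, 25.0, 29.0, 32.5, 33.8, 34.6, 35.0])

    def michaelis_menten(s, vmax, km):
        # v = Vmax * [S] / (Km + [S])
        return vmax * s / (km + s)

    (vmax, km), cov = curve_fit(michaelis_menten, S, v, p0=[30.0, 1.0])
    vmax_sd, km_sd = np.sqrt(np.diag(cov))
    print(f"Vmax = {vmax:.1f} +/- {vmax_sd:.1f} (a.u.), "
          f"Km = {km:.2f} +/- {km_sd:.2f} mM")
    ```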

  17. Digital histology quantification of intra-hepatic fat in patients undergoing liver resection.

    Science.gov (United States)

    Parkin, E; O'Reilly, D A; Plumb, A A; Manoharan, P; Rao, M; Coe, P; Frystyk, J; Ammori, B; de Liguori Carino, N; Deshpande, R; Sherlock, D J; Renehan, A G

    2015-08-01

    High intra-hepatic fat (IHF) content is associated with insulin resistance, visceral adiposity, and increased morbidity and mortality following liver resection. However, in clinical practice, IHF is assessed indirectly by pre-operative imaging [for example, chemical-shift magnetic resonance (CS-MR)]. We used the opportunity in patients undergoing liver resection to quantify IHF by digital histology (D-IHF) and relate this to CT-derived anthropometrics, insulin-related serum biomarkers, and IHF estimated by CS-MR. A reproducible method for quantification of D-IHF using 7 histology slides (inter- and intra-rater concordance: 0.97 and 0.98) was developed. In 35 patients undergoing resection for colorectal cancer metastases, we measured: CT-derived subcutaneous and visceral adipose tissue volumes, Homeostasis Model Assessment of Insulin Resistance (HOMA-IR), fasting serum adiponectin, leptin and fetuin-A. We estimated relative IHF using CS-MR and developed prediction models for IHF using a factor-clustered approach. The multivariate linear regression models showed that D-IHF was best predicted by HOMA-IR (β per doubling: 2.410, 95% CI: 1.093, 5.313) and adiponectin (β per doubling: 0.197, 95% CI: 0.058, 0.667), but not by anthropometrics. MR-derived IHF correlated with D-IHF (rho: 0.626; p = 0.0001), but levels of agreement deviated in upper range values (CS-MR over-estimated IHF: regression versus zero, p = 0.009); this could be adjusted for by a correction factor (CF: 0.7816). Our findings show IHF is associated with measures of insulin resistance, but not measures of visceral adiposity. CS-MR over-estimated IHF in the upper range. Larger studies are indicated to test whether a correction of imaging-derived IHF estimates is valid. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Dynamic Planar Range Maxima Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Tsakalidis, Konstantinos

    2011-01-01

    We consider the dynamic two-dimensional maxima query problem. Let P be a set of n points in the plane. A point is maximal if it is not dominated by any other point in P. We describe two data structures that support the reporting of the t maximal points that dominate a given query point, and allow for insertions and deletions of points in P. In the pointer machine model we present a linear space data structure with O(log n + t) worst case query time and O(log n) worst case update time. This is the first dynamic data structure for the planar maxima dominance query problem that achieves these bounds. We also consider the word-RAM model with word size w, where the coordinates of the points are integers in the range U = {0, …, 2^w − 1}. We present a linear space data structure that supports 3-sided range maxima queries in O(log n/log log n + t) worst case time and updates in O(log n/log log n) worst case time. These are the first sublogarithmic worst case bounds for all operations in the RAM model.

  19. Simultaneous digital quantification and fluorescence-based size characterization of massively parallel sequencing libraries.

    Science.gov (United States)

    Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H

    2013-08-01

    Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.

  20. Application of the third theory of quantification in coal and gas outburst forecast

    Energy Technology Data Exchange (ETDEWEB)

    Wu, C.; Qin, Y.; Zhang, X. [China University of Mining and Technology, Xuzhou (China). School of Resource and Geoscience Engineering

    2004-12-01

    The essential principles of the third theory of quantification are discussed. The concept and calculation method of the reaction degree are put forward, which extend the applicable range and scientific basis of the primary reaction. Taking the Zhongmacun mine as an example, and based on a synthetic analysis of the rules of gas geology and a survey of the geological factors affecting coal and gas outburst, the paper combines statistical units with the third theory of quantification, screens out 8 sensitive geological factors from 11 geological indexes, and carries out gas geology regionalization of the mined area of Zhongmacun according to the research results. The practice shows that it is feasible to apply the third theory of quantification to gas geology, which offers a new approach to screening the sensitive geological factors for coal and gas outburst forecasting. 3 refs., 3 figs., 3 tabs.

  1. Contrast enhanced CT-scans are not comparable to non-enhanced scans in emphysema quantification

    International Nuclear Information System (INIS)

    Heussel, C.P.; Kappes, J.; Hantusch, R.; Hartlieb, S.; Weinheimer, O.; Kauczor, H.-U.; Eberhardt, R.

    2010-01-01

    Systemic, interventional and surgical treatments have opened new avenues in the treatment of emphysema. For longitudinal therapy monitoring and as end-points for clinical trials, quantification of the disease is necessary; sensitive, easy-to-measure, stable and reproducible parameters have to be characterized. One factor that might affect emphysema quantification is IV contrast enhancement, which may itself be clinically indicated. To determine whether a contrast-enhanced scan is also suited for emphysema quantification or whether an additional non-enhanced scan is necessary, a retrospective analysis was performed of 12 adult patients who underwent both a clinically indicated non-enhanced and a contrast-enhanced thin-section MSCT within one week (median 0 days, range 0-4 days). The in-house YACTA software was used for automatic quantification of lung and emphysema volume, emphysema index, mean lung density, and the 5th, 10th and 15th percentiles. After IV contrast administration, the median CT-derived lung volume decreased mildly by 1.1%, while the median emphysema volume decreased by a relevant 11%, resulting in a 9% decrease of the median emphysema index. The median lung density increased after contrast application by 18 HU, and the 15th percentile by 9 HU. CT quantification therefore delivers emphysema values that are clearly affected by IV contrast application: the changes reflect the higher density of the lung parenchyma after contrast administration, which reduces the amount of quantified emphysema and increases the measured lung density. In longitudinal analyses, non-enhanced scans should serve as the reference, while enhanced scans cannot be used.
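
    Density-based emphysema quantification of the kind performed by YACTA can be sketched as simple statistics over the segmented lung voxels. The -950 HU threshold and the toy data below are common conventions and assumptions for illustration, not settings reported in the abstract.

    ```python
    import numpy as np

    def emphysema_metrics(lung_hu, threshold_hu=-950.0, voxel_volume_ml=1e-3):
        """Density-based emphysema metrics from segmented lung voxels (HU).

        threshold_hu = -950 HU is a widely used convention and an assumption
        here; the YACTA software's exact settings are not given in the abstract.
        """
        lung_hu = np.asarray(lung_hu, dtype=float)
        emph_mask = lung_hu < threshold_hu
        return {
            "lung_volume_ml": lung_hu.size * voxel_volume_ml,
            "emphysema_volume_ml": emph_mask.sum() * voxel_volume_ml,
            "emphysema_index_pct": 100.0 * emph_mask.mean(),
            "mean_lung_density_hu": lung_hu.mean(),
            "p15_hu": np.percentile(lung_hu, 15),
        }

    # Toy example: contrast enhancement shifts densities upward, so the
    # emphysema index drops and the 15th percentile rises.
    rng = np.random.default_rng(1)
    native = rng.normal(-870, 60, 200_000)
    enhanced = native + 18
    for label, vals in [("native", native), ("enhanced", enhanced)]:
        m = emphysema_metrics(vals)
        print(label, f"EI={m['emphysema_index_pct']:.1f}%",
              f"P15={m['p15_hu']:.0f} HU")
    ```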

  2. Photoacoustic bio-quantification of graphene based nanomaterials at a single cell level (Conference Presentation)

    Science.gov (United States)

    Nedosekin, Dmitry A.; Nolan, Jacqueline; Biris, Alexandru S.; Zharov, Vladimir P.

    2017-03-01

    Arkansas Nanomedicine Center at the University of Arkansas for Medical Sciences, in collaboration with other Arkansas universities and the FDA-based National Center for Toxicological Research in Jefferson, AR, is developing novel techniques for rapid quantification of graphene-based nanomaterials (GBNs) in various biological samples. All-carbon GBNs have a wide range of potential applications in industry, agriculture, food processing and medicine; however, quantification of GBNs is difficult in carbon-rich biological tissues. The accurate quantification of GBNs is essential for research on material toxicity and the development of GBN-based drug delivery platforms. We have developed microscopy and cytometry platforms for detection and quantification of GBNs in single cells, tissue and blood samples using the photoacoustic (PA) contrast of GBNs. We demonstrated PA quantification of individual graphene uptake by single cells. High-resolution PA microscopy provided mapping of GBN distribution within live cells to establish correlation with intracellular toxic phenomena using apoptotic and necrotic assays. This new methodology and the corresponding technical platform provide insight into the possible toxicological risks of GBNs at the single-cell level. In addition, in vivo PA image flow cytometry demonstrated the capability to monitor GBN pharmacokinetics in a mouse model and to map the resulting biodistribution of GBNs in mouse tissues. The integrated PA platform provided an unprecedented sensitivity toward GBNs and allowed conventional toxicology research to be enhanced by providing a direct correlation between uptake of GBNs at a single-cell level and cell viability status.

  3. Development of Quantitative Real-Time PCR Assays for Detection and Quantification of Surrogate Biological Warfare Agents in Building Debris and Leachate

    Science.gov (United States)

    Saikaly, Pascal E.; Barlaz, Morton A.; de los Reyes, Francis L.

    2007-01-01

    Evaluation of the fate and transport of biological warfare (BW) agents in landfills requires the development of specific and sensitive detection assays. The objective of the current study was to develop and validate SYBR green quantitative real-time PCR (Q-PCR) assays for the specific detection and quantification of surrogate BW agents in synthetic building debris (SBD) and leachate. Bacillus atrophaeus (vegetative cells and spores) and Serratia marcescens were used as surrogates for Bacillus anthracis (anthrax) and Yersinia pestis (plague), respectively. The targets for SYBR green Q-PCR assays were the 16S-23S rRNA intergenic transcribed spacer (ITS) region and recA gene for B. atrophaeus and the gyrB, wzm, and recA genes for S. marcescens. All assays showed high specificity when tested against 5 ng of closely related Bacillus and Serratia nontarget DNA from 21 organisms. Several spore lysis methods that include a combination of one or more of freeze-thaw cycles, chemical lysis, hot detergent treatment, bead beat homogenization, and sonication were evaluated. All methods tested showed similar threshold cycle values. The limit of detection of the developed Q-PCR assays was determined using DNA extracted from a pure bacterial culture and DNA extracted from sterile water, leachate, and SBD samples spiked with increasing quantities of surrogates. The limit of detection for B. atrophaeus genomic DNA using the ITS and B. atrophaeus recA Q-PCR assays was 7.5 fg per PCR. The limits of detection of S. marcescens genomic DNA using the gyrB, wzm, and S. marcescens recA Q-PCR assays were 7.5 fg, 75 fg, and 7.5 fg per PCR, respectively. Quantification of B. atrophaeus vegetative cells and spores was linear (R² > 0.98) over a 7-log-unit dynamic range down to 10¹ B. atrophaeus cells or spores. Quantification of S. marcescens (R² > 0.98) was linear over a 6-log-unit dynamic range down to 10² S. marcescens cells. The developed Q-PCR assays are highly specific and sensitive and can

  4. Rapid identification and quantification of Campylobacter coli and Campylobacter jejuni by real-time PCR in pure cultures and in complex samples

    Directory of Open Access Journals (Sweden)

    Denis Martine

    2011-05-01

    Full Text Available Abstract Background Campylobacter spp., especially Campylobacter jejuni (C. jejuni) and Campylobacter coli (C. coli), are recognized as the leading human foodborne pathogens in developed countries. Livestock animals carrying Campylobacter pose an important risk for human contamination. Pigs are known to be frequently colonized with Campylobacter, especially C. coli, and to excrete high numbers of this pathogen in their faeces. Molecular tools, notably real-time PCR, provide an effective, rapid, and sensitive alternative to culture-based methods for the detection of C. coli and C. jejuni in various substrates. In order to serve as a diagnostic tool supporting Campylobacter epidemiology, we developed a quantitative real-time PCR method for species-specific detection and quantification of C. coli and C. jejuni directly in faecal, feed, and environmental samples. Results With a sensitivity of 10 genome copies and a linear range of seven to eight orders of magnitude, the C. coli and C. jejuni real-time PCR assays allowed a precise quantification of purified DNA from C. coli and C. jejuni. The assays were highly specific and showed a 6-log-linear dynamic range of quantification with a quantitative detection limit of approximately 2.5 × 10² CFU/g of faeces, 1.3 × 10² CFU/g of feed, and 1.0 × 10³ CFU/m² for the environmental samples. Compared to the results obtained by culture, both C. coli and C. jejuni real-time PCR assays exhibited a specificity of 96.2% with a kappa of 0.94 and 0.89 respectively. For faecal samples of experimentally infected pigs, the coefficients of correlation between the C. coli or C. jejuni real-time PCR assay and culture enumeration were R² = 0.90 and R² = 0.93 respectively. Conclusion The C. coli and C. jejuni real-time quantitative PCR assays developed in this study provide a method capable of directly detecting and quantifying C. coli and C. jejuni in faeces, feed, and environmental samples. These assays represent a new

  5. Morphological quantification of hierarchical geomaterials by X-ray nano-CT bridges the gap from nano to micro length scales

    KAUST Repository

    Brisard, S.; Chae, R. S.; Bihannic, I.; Michot, L.; Guttmann, P.; Thieme, J.; Schneider, G.; Monteiro, P. J. M.; Levitz, P.

    2012-01-01

    Morphological quantification of the complex structure of hierarchical geomaterials is of great relevance for Earth science and environmental engineering, among others. To date, methods that quantify the 3D morphology on length scales ranging from a

  6. Decomposable log-linear models

    DEFF Research Database (Denmark)

    Eriksen, Poul Svante

    The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non-hierarchical models, models with structural zeroes, models described by quasi independence and models for level merging. Also, they have a very natural interpretation as they may be formulated by a structured set of conditional independencies between two events given some other event. In relation to contingency...

  7. A homogeneous assay principle for universal substrate quantification via hydrogen peroxide producing enzymes

    Energy Technology Data Exchange (ETDEWEB)

    Zscharnack, Kristin; Kreisig, Thomas; Prasse, Agneta A. [Institute of Bioanalytical Chemistry, Faculty of Chemistry and Mineralogy, Universität Leipzig, Deutscher Platz 5, 04103 Leipzig (Germany); Zuchner, Thole, E-mail: Thole.Zuechner@octapharma.com [Institute of Bioanalytical Chemistry, Faculty of Chemistry and Mineralogy, Universität Leipzig, Deutscher Platz 5, 04103 Leipzig (Germany); Center for Biotechnology and Biomedicine, Universität Leipzig, Deutscher Platz 5, 04103 Leipzig (Germany)

    2015-01-07

    Highlights: • Application of the TRF-based PATb system for universal oxidase substrate detection. • H₂O₂ generated by choline or glucose oxidase quenches the TRF signal of PATb. • The assay time is only limited by the oxidase catalysis rate. • Glucose is precisely detected in human serum, consistent with a commercial assay. • A reliable quantification of choline in infant formula is shown. - Abstract: H₂O₂ is a widely occurring molecule which is also a byproduct of a number of enzymatic reactions. It can therefore be used to quantify the corresponding enzymatic substrates. In this study, the time-resolved fluorescence emission of a previously described complex consisting of phthalic acid and terbium (III) ions (PATb) is used for H₂O₂ detection. In detail, glucose oxidase and choline oxidase convert glucose and choline, respectively, to generate H₂O₂, which acts as a quencher for the PATb complex. The response time of the PATb complex toward H₂O₂ is immediate and the assay time only depends on the conversion rate of the enzymes involved. The PATb assay quantifies glucose in a linear range of 0.02–10 mmol L⁻¹, and choline from 1.56 to 100 μmol L⁻¹, with a detection limit of 20 μmol L⁻¹ for glucose and 1.56 μmol L⁻¹ for choline. Both biomolecules glucose and choline could be detected without pretreatment with good precision and reproducibility in human serum samples and infant formula, respectively. Furthermore, it is shown that the detected glucose concentrations by the PATb system agree with the results of a commercially available assay. In principle, the PATb system is a universal and versatile tool for the quantification of any substrate and enzyme reaction where H₂O₂ is involved.

  8. Highly sensitive real-time PCR for specific detection and quantification of Coxiella burnetii

    Directory of Open Access Journals (Sweden)

    Linke Sonja

    2006-01-01

    Full Text Available Abstract Background Coxiella burnetii, the bacterium causing Q fever, is an obligate intracellular biosafety level 3 agent. Detection and quantification of these bacteria with conventional methods is time consuming and dangerous. During the last years, several PCR based diagnostic assays were developed to detect C. burnetii DNA in cell cultures and clinical samples. We developed and evaluated TaqMan-based real-time PCR assays that targeted the singular icd (isocitrate dehydrogenase) gene and the transposase of the IS1111a element present in multiple copies in the C. burnetii genome. Results To evaluate the precision of the icd and IS1111 real-time PCR assays, we performed different PCR runs with independent DNA dilutions of the C. burnetii Nine Mile RSA493 strain. The results showed very low variability, indicating good reproducibility of both assays. Using probit analysis, we determined that the minimal number of genome equivalents per reaction that could be detected with a 95% probability was 10 for the icd marker and 6.5 for the IS marker. Plasmid standards with cloned icd and IS1111 fragments were used to establish standard curves which were linear over a range from 10 to 10⁷ starting plasmid copy numbers. We were able to quantify cell numbers of a diluted, heat-inactivated Coxiella isolate with a detection limit of 17 C. burnetii particles per reaction. Real-time PCR targeting both markers was performed with DNA of 75 different C. burnetii isolates originating from all over the world. Using this approach, the number of IS1111 elements in the genome of the Nine Mile strain was determined to be 23, close to 20, the number revealed by genome sequencing. In other isolates, the number of IS1111 elements varied widely (between seven and 110) and seemed to be very high in some isolates. Conclusion We validated TaqMan-based real-time PCR assays targeting the icd and IS1111 markers of C. burnetii. The assays were shown to be specific, highly
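
    The plasmid standard curve described here is a linear fit of Ct against the log of the starting copy number, from which amplification efficiency and unknown copy numbers follow. The sketch below uses hypothetical Ct values, not the study's measurements.

    ```python
    import numpy as np

    # Hypothetical Ct values for a 10-fold plasmid dilution series
    # (10^1 .. 10^7 copies/reaction); the numbers are illustrative only.
    copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
    ct = np.array([34.8, 31.5, 28.1, 24.7, 21.3, 17.9, 14.6])

    slope, intercept = np.polyfit(np.log10(copies), ct, 1)
    efficiency = 10 ** (-1.0 / slope) - 1      # ~1.0 means 100% efficiency

    def copies_from_ct(ct_value):
        """Interpolate the starting copy number of an unknown from its Ct."""
        return 10 ** ((ct_value - intercept) / slope)

    print(f"slope={slope:.2f}, efficiency={efficiency:.1%}, "
          f"unknown Ct 26.0 -> {copies_from_ct(26.0):,.0f} copies")
    ```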

  9. Visualization and quantification of evolving datasets. Final report: 8-1-93 - 4-30-97

    International Nuclear Information System (INIS)

    Zabusky, N.; Silver, D.

    1999-01-01

    The material below is the final technical/progress report of the Laboratory for Visiometrics and Modeling (Vizlab) for the grant entitled Visualization and Quantification of Evolving Phenomena. This includes coordination with DOE supported scientists at Los Alamos National Laboratory (LANL) and Princeton Plasma Physics Laboratory (PPPL), and with theoretical and computational physicists at the National Institute for Fusion Science (NIFS) in Nagoya, Japan and the Institute of Laser Engineering (ILE) in Osaka, Japan. The authors' research areas included: Enhancement and distribution of the DAVID environment, a 2D visualization environment incorporating many advanced quantifications and diagnostics useful for prediction, understanding, and reduced model formation; Feature extraction, tracking and quantification of 3D time-dependent datasets of non-linear and turbulent simulations, both compressible and incompressible (this work is applicable to all 3D time-varying simulations); Visiometrics in shock-interface interactions and mixing for the Richtmyer-Meshkov (RM) environment, which highlights reduced models for nonlinear evolutions and the role of density-stratified interfaces (contact discontinuities) and has application to supernova physics, laser fusion and supersonic combustion. The collaborative projects included the areas of (1) feature extraction, tracking and quantification in 3D turbulence, compressible and incompressible; (2) the Numerical Tokamak Project (NTP); and (3) data projection and reduced modeling for shock-interface interactions and mixing (the Richtmyer-Meshkov (RM) environment relevant to laser fusion and combustion).

  10. Terahertz identification and quantification of penicillamine enantiomers

    International Nuclear Information System (INIS)

    Ji Te; Zhao Hongwei; Chen Min; Xiao Tiqiao; Han Pengyu

    2013-01-01

    Identification and characterization of L-, D- and DL-penicillamine were demonstrated by terahertz time-domain spectroscopy (THz-TDS). To understand the physical origins of the low frequency resonant modes, density functional theory (DFT) was adopted for theoretical calculation. It was found that the collective THz frequency motions were determined by the intramolecular and intermolecular hydrogen bond interactions. Moreover, the quantification of a penicillamine enantiomer mixture was demonstrated by a THz spectra fitting method with a relative error of less than 3.5%. This technique can be a valuable tool for the discrimination and quantification of chiral drugs in the pharmaceutical industry. (authors)

  11. Small diameter symmetric networks from linear groups

    Science.gov (United States)

    Campbell, Lowell; Carlsson, Gunnar E.; Dinneen, Michael J.; Faber, Vance; Fellows, Michael R.; Langston, Michael A.; Moore, James W.; Multihaupt, Andrew P.; Sexton, Harlan B.

    1992-01-01

    In this note is reported a collection of constructions of symmetric networks that provide the largest known values for the number of nodes that can be placed in a network of a given degree and diameter. Some of the constructions are in the range of current potential engineering significance. The constructions are Cayley graphs of linear groups obtained by experimental computation.

  12. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy laboratories.
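
    As a much-reduced illustration of Bayesian uncertainty quantification for radiographic data, the sketch below runs a random-walk Metropolis sampler on a one-parameter attenuation model with a Poisson likelihood. It is a toy stand-in, not the hierarchical models or MCMC schemes developed in this work, and all numbers are synthetic.

    ```python
    import numpy as np

    # Toy problem: photon counts y ~ Poisson(I0 * exp(-a * x)) for an unknown
    # areal-density-like parameter x, with a weak half-normal prior on x.
    rng = np.random.default_rng(42)
    I0, a, x_true = 5_000.0, 0.8, 1.7
    y = rng.poisson(I0 * np.exp(-a * x_true), size=20)   # repeated exposures

    def log_post(x):
        # Poisson log-likelihood (constants dropped) + half-normal prior.
        if x < 0:
            return -np.inf
        lam = I0 * np.exp(-a * x)
        return np.sum(y * np.log(lam) - lam) - 0.5 * (x / 10.0) ** 2

    samples, x = [], 1.0
    for _ in range(20_000):
        prop = x + rng.normal(0, 0.02)        # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        samples.append(x)

    post = np.array(samples[5_000:])          # discard burn-in
    print(f"posterior mean {post.mean():.3f}, 95% CI "
          f"[{np.percentile(post, 2.5):.3f}, {np.percentile(post, 97.5):.3f}]")
    ```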

  13. Facile preparation of poly(methylene blue) modified carbon paste electrode for the detection and quantification of catechin

    Energy Technology Data Exchange (ETDEWEB)

    Manasa, G [Electrochemistry Research Group, Department of Chemistry, St. Joseph's College, Lalbagh Road, Bangalore, 560027, Karnataka (India); Mascarenhas, Ronald J, E-mail: ronaldmasc2311@yahoo.co.in [Electrochemistry Research Group, Department of Chemistry, St. Joseph's College, Lalbagh Road, Bangalore, 560027, Karnataka (India); Satpati, Ashis K [Analytical Chemistry Division, Bhabha Atomic Research Centre, Anushakthi Nagar, Trombay, Mumbai 400094, Maharashtra (India); D'Souza, Ozma J [Electrochemistry Research Group, Department of Chemistry, St. Joseph's College, Lalbagh Road, Bangalore, 560027, Karnataka (India); Dhason, A [Soft Condensed Matter, Raman Research Institute, Sadashivnagar, Bangalore 560080, Karnataka (India)

    2017-04-01

    Free radicals are formed as byproducts of metabolism and are highly unstable due to the presence of unpaired electrons. They readily react with other important cellular components such as DNA, causing them damage. Antioxidants such as (+)-catechin (CAT) neutralize free radicals in the blood stream. Hence there is a need for detection and quantification of catechin concentration in various food sources and beverages. The electro-oxidative properties of catechin were investigated using cyclic voltammetry (CV) and differential pulse voltammetry (DPV). A carbon paste working electrode modified by electropolymerizing methylene blue (MB) was fabricated. Field emission scanning electron microscopy (FESEM) and atomic force microscopy (AFM) techniques were used to study the surface morphology of the electrode. A quasi-reversible electron transfer reaction occurred at +0.260 V through a diffusion controlled process. In comparison to the bare carbon paste electrode (CPE), there was a significant 5.3-fold increase in anodic current sensitivity at the modified electrode at physiological pH. Our findings indicate that for the electro-oxidation of CAT, CPE is a better base material for electropolymerization of MB compared to glassy carbon electrode (GCE). The Nyquist plot followed the theoretical shape, indicating a low interfacial charge transfer resistance of 0.095 kΩ at the modified electrode. Calibration plots obtained by DPV were linear in two ranges, 1.0 × 10⁻³ to 1.0 × 10⁻⁶ M and 1.0 × 10⁻⁷ to 0.1 × 10⁻⁸ M. The limit of detection (LOD) and limit of quantification (LOQ) were 4.9 nM and 14 nM, respectively. Application of the developed electrode was demonstrated by detecting catechin in green tea and spiked fruit juice with satisfactory recoveries. The sensor was stable, sensitive, selective and reproducible. - Highlights: • Remarkable electrocatalytic oxidation of Catechin at poly(methylene blue) modified CPE • Complete elimination of signal

  14. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    Science.gov (United States)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. Particularly, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames, and temporal information from plume motion between frames. We build large leak datasets for training and evaluating purposes by collecting about 20 videos (i.e. 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, which has eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, reaches an accuracy of 58.3%. The integration of the two-stream ConvNets gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm for different leak sources. Several analytic metrics, including confusion matrix and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help to find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair
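
    A two-stream, late-fusion architecture of the general kind described can be sketched in PyTorch as below; the layer sizes, input shapes and fusion strategy are illustrative assumptions, not the authors' network.

    ```python
    import torch
    import torch.nn as nn

    def small_cnn(in_channels):
        """Tiny convolutional backbone; layer sizes are illustrative assumptions."""
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    class TwoStreamLeakNet(nn.Module):
        """Late-fusion two-stream network: one stream sees a still plume frame,
        the other a stack of optical-flow fields; output is one of 8 leak-size bins."""
        def __init__(self, flow_frames=10, n_classes=8):
            super().__init__()
            self.spatial = small_cnn(1)                  # single-channel IR still
            self.temporal = small_cnn(2 * flow_frames)   # (dx, dy) per flow frame
            self.classifier = nn.Linear(128 + 128, n_classes)

        def forward(self, frame, flow_stack):
            fused = torch.cat([self.spatial(frame), self.temporal(flow_stack)], dim=1)
            return self.classifier(fused)

    model = TwoStreamLeakNet()
    frame = torch.randn(4, 1, 120, 160)     # batch of still frames
    flow = torch.randn(4, 20, 120, 160)     # 10 flow fields x (dx, dy) channels
    print(model(frame, flow).shape)         # torch.Size([4, 8])
    ```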

  15. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  16. Hydrophilic interaction liquid chromatography/positive ion electrospray ionization mass spectrometry method for the quantification of alprazolam and α-hydroxy-alprazolam in human plasma.

    Science.gov (United States)

    Kalogria, Eleni; Pistos, Constantinos; Panderi, Irene

    2013-12-30

    A hydrophilic interaction liquid chromatography/positive ion electrospray ionization mass spectrometry (HILIC-ESI/MS) method has been developed and fully validated for the quantification of alprazolam and its main metabolite, α-hydroxy-alprazolam, in human plasma. The assay is based on 50 μL plasma samples, following liquid-liquid extraction. All analytes and the internal standard (tiamulin) were separated by hydrophilic interaction liquid chromatography using an X-Bridge-HILIC analytical column (150.0 mm × 2.1 mm i.d., particle size 3.5 μm) under isocratic elution. The mobile phase was composed of 7% of a 10 mM aqueous ammonium formate solution in acetonitrile and was pumped at a flow rate of 0.20 mL min⁻¹. Running in positive electrospray ionization and selected ion monitoring (SIM) mode, the mass spectrometer was set to analyze the protonated molecules [M+H]⁺ at m/z 309, 325 and 494 for alprazolam, α-hydroxy-alprazolam and tiamulin (ISTD), respectively. The assay was linear over the concentration range of 2.5-250 ng mL⁻¹ for alprazolam and 2.5-50 ng mL⁻¹ for α-hydroxy-alprazolam. Intermediate precision was less than 4.1% over the tested concentration ranges. The method is the first reported application of HILIC to the analysis of benzodiazepines in human plasma. With a small sample size (50 μL human plasma) and a run time of less than 10.0 min per sample, the method can be used to support a wide range of clinical studies concerning alprazolam quantification. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Non linear microtearing modes

    International Nuclear Information System (INIS)

    Garbet, X.; Mourgues, F.; Samain, A.

    1987-01-01

    Among the various instabilities which could explain the anomalous electron heat transport observed in tokamaks during additional heating, microtearing turbulence is a reasonable candidate since it directly affects the magnetic topology. This turbulence may be described, in a proper frame rotating around the major axis, by a static vector potential. In strong non-linear regimes, the flow of electrons along the stochastic field lines induces a current. The point is to know whether this current can sustain the turbulence. The mechanisms of this self-consistency, involving the combined effects of the thermal diamagnetism and of the electric drift, are presented here.

  18. RF linear accelerators

    CERN Document Server

    Wangler, Thomas P

    2008-01-01

    Thomas P. Wangler received his B.S. degree in physics from Michigan State University, and his Ph.D. degree in physics and astronomy from the University of Wisconsin. After postdoctoral appointments at the University of Wisconsin and Brookhaven National Laboratory, he joined the staff of Argonne National Laboratory in 1966, working in the fields of experimental high-energy physics and accelerator physics. He joined the Accelerator Technology Division at Los Alamos National Laboratory in 1979, where he specialized in high-current beam physics and linear accelerator design and technology. In 2007

  19. SLAC linear collider

    International Nuclear Information System (INIS)

    Richter, B.; Bell, R.A.; Brown, K.L.

    1980-06-01

    The SLAC LINEAR COLLIDER is designed to achieve an energy of 100 GeV in the electron-positron center-of-mass system by accelerating intense bunches of particles in the SLAC linac and transporting the electron and positron bunches in a special magnet system to a point where they are focused to a radius of about 2 microns and made to collide head on. The rationale for this new type of colliding beam system is discussed, the project is described, some of the novel accelerator physics issues involved are discussed, and some of the critical technical components are described

  20. Matlab linear algebra

    CERN Document Server

    Lopez, Cesar

    2014-01-01

    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Linear Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. In addition to giving an introduction to

  1. Characterization of 3D PET systems for accurate quantification of myocardial blood flow

    OpenAIRE

    Renaud, Jennifer M.; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Éric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C.; Turkington, Timothy G

    2016-01-01

    Three-dimensional (3D) mode imaging is the current standard for positron emission tomography-computed tomography (PET-CT) systems. Dynamic imaging for quantification of myocardial blood flow (MBF) with short-lived tracers, such as Rb-82-chloride (Rb-82), requires accuracy to be maintained over a wide range of isotope activities and scanner count-rates. We propose new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative...

  2. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    Science.gov (United States)

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of AGM alkaline and acidic degradants (DG1 and DG2). The relative mean squared error of prediction (RMSEP) for the proposed models in the determination of AGM was 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, linSVR and PC-linANN, respectively. The results showed the superiority of the supervised learning machine methods over the principal component-based methods. Besides, the results suggested that linANN is the method of choice for determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.
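
    The PCR-versus-PLS comparison can be reproduced in outline with scikit-learn on simulated overlapped spectra; the simulated bands, component counts and train/test split below are assumptions for illustration, not the study's data or settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error

    # Synthetic stand-in for overlapped emission spectra of AGM + two
    # degradants: X holds spectra (samples x wavelengths), y the AGM level.
    rng = np.random.default_rng(3)
    wl = np.linspace(300, 500, 200)

    def band(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    conc = rng.uniform(0.5, 10.0, (60, 3))            # AGM, DG1, DG2 levels
    X = (conc @ np.vstack([band(380, 25), band(395, 30), band(410, 28)])
         + rng.normal(0, 0.02, (60, wl.size)))
    y = conc[:, 0]

    pcr = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X[:40], y[:40])
    pls = PLSRegression(n_components=5).fit(X[:40], y[:40])
    for name, model in [("PCR", pcr), ("PLS", pls)]:
        rmsep = mean_squared_error(y[40:], np.ravel(model.predict(X[40:]))) ** 0.5
        print(f"{name}: RMSEP = {rmsep:.3f}")
    ```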

  3. Quantification of live Lactobacillus acidophilus in mixed populations of live and killed by application of attenuated reflection Fourier transform infrared spectroscopy combined with chemometrics.

    Science.gov (United States)

    Toziou, Peristera-Maria; Barmpalexis, Panagiotis; Boukouvala, Paraskevi; Verghese, Susan; Nikolakakis, Ioannis

    2018-05-30

    Since culture-based methods are costly and time consuming, alternative methods are being investigated for the quantification of probiotics in commercial products. In this work, ATR-FTIR vibration spectroscopy was applied for the differentiation and quantification of live Lactobacillus (La 5) in mixed populations of live and killed La 5, in the absence and in the presence of the enteric polymer Eudragit® L 100-55. Suspensions of live (La 5_L) and acid-killed (La 5_K) bacilli were prepared, and binary mixtures of different percentages were used to grow cell cultures for colony counting and spectral analysis. The increase in the number of colonies with added %La 5_L to the mixture was log-linear (r² = 0.926). Differentiation of La 5_L from La 5_K was possible directly from the peak area at 1635 cm⁻¹ (amides of proteins and peptides), and a linear relationship between %La 5_L and peak area in the range 0-95% was obtained. Application of partial least squares regression (PLSR) gave reasonable prediction of %La 5_L (RMSEp = 6.48) in binary mixtures of live and killed La 5 but poor prediction (RMSEp = 11.75) when polymer was added to the La 5 mixture. Application of artificial neural networks (ANNs) greatly improved the predictive ability for %La 5_L both in the absence and in the presence of polymer (RMSEp = 8.11 × 10⁻⁸ for La 5-only mixtures and RMSEp = 8.77 × 10⁻⁸ with added polymer) due to their ability to express in the calibration models more hidden spectral information than PLSR. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Validation of an enzyme-linked immunosorbent assay for the quantification of citrullinated histone H3 as a marker for neutrophil extracellular traps in human plasma.

    Science.gov (United States)

    Thålin, Charlotte; Daleskog, Maud; Göransson, Sophie Paues; Schatzberg, Daphne; Lasselin, Julie; Laska, Ann-Charlotte; Kallner, Anders; Helleday, Thomas; Wallén, Håkan; Demers, Mélanie

    2017-06-01

    There is an emerging interest in the diverse functions of neutrophil extracellular traps (NETs) in a variety of disease settings. However, data on circulating NETs rely largely upon surrogate NET markers such as cell-free DNA, nucleosomes, and NET-associated enzymes. Citrullination of histone H3 by peptidyl arginine deiminase 4 (PAD4) is central to NET formation, and citrullinated histone H3 (H3Cit) is considered a NET-specific biomarker. We therefore aimed to optimize and validate a new enzyme-linked immunosorbent assay (ELISA) to quantify the levels of H3Cit in human plasma. A standard curve made of in vitro PAD4-citrullinated histone H3 allows for the quantification of H3Cit in plasma using an anti-histone antibody as capture antibody and an anti-histone H3 citrulline antibody for detection. The assay was evaluated for linearity, stability, specificity, and precision on plasma samples obtained from a human model of inflammation before and after lipopolysaccharide injection. The results revealed linearity and high specificity, demonstrated by the inability to detect non-citrullinated histone H3. Coefficients of variation for intra- and inter-assay variability ranged from 2.1 to 5.1% and from 5.8 to 13.5%, respectively, indicating high precision. Furthermore, our results support an inflammatory induction of a systemic NET burden by showing, for the first time, clear intra-individual elevations of plasma H3Cit in a human model of lipopolysaccharide-induced inflammation. Taken together, our work demonstrates the development of a new method for the quantification of H3Cit by ELISA that can reliably be used for the detection of NETs in human plasma.
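
    Quantification against the in vitro citrullinated-H3 standard curve is typically done with a sigmoidal calibration fit; the four-parameter logistic model and the standard-curve values below are common ELISA conventions and assumptions, not details taken from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical H3Cit ELISA standard curve: standard concentration (ng/mL)
    # vs optical density. A four-parameter logistic (4PL) fit is a common
    # ELISA convention and an assumption here; values are illustrative only.
    conc = np.array([0.0, 1.6, 3.1, 6.3, 12.5, 25.0, 50.0, 100.0])
    od = np.array([0.05, 0.11, 0.19, 0.34, 0.61, 1.02, 1.55, 2.05])

    def four_pl(x, bottom, top, ec50, hill):
        return bottom + (top - bottom) / (1.0 + (ec50 / np.maximum(x, 1e-9)) ** hill)

    params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 2.2, 20.0, 1.0], maxfev=10_000)

    def od_to_conc(od_value):
        """Invert the 4PL curve to back-calculate a sample concentration."""
        bottom, top, ec50, hill = params
        return ec50 * ((top - bottom) / (od_value - bottom) - 1.0) ** (-1.0 / hill)

    print(f"sample OD 0.80 -> {od_to_conc(0.80):.1f} ng/mL H3Cit")
    ```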

  5. Quantification of triacontanol and its PEGylated prodrug in rat plasma by GC-MS/MS: Application to a pre-clinical pharmacokinetic study.

    Science.gov (United States)

    Lu, Xiaoyu; Fang, Min; Dai, Yu; Yang, Yue; Fan, Ali; Xu, Jiaqiu; Qin, Zhiying; Lu, Yang; Zhao, Di; Chen, Xijing; Li, Ning

    2018-04-24

    PEGylation techniques have been increasingly employed in drug delivery systems and in the chemical modification of compounds with low aqueous solubility. Triacontanol (TA) is a natural product with several pharmacological activities, but its low aqueous solubility has significantly limited its application. PEGylated triacontanol (PEG-TA) was designed as a prodrug to improve the aqueous solubility and pharmacokinetic properties of TA. On the basis of a salting-out assisted liquid-liquid extraction (SALLE) and saponification sample preparation procedure, a reliable gas chromatography tandem mass spectrometric (GC-MS/MS) method was developed and validated for the quantification of PEG-TA and its metabolite TA in rat plasma after separation and transformation. Acetonitrile-methanol (9:1, v/v) and ammonium acetate (10 M) were utilized to separate PEG-TA and TA (including TA conjugated with fatty acid). Saponification facilitated the complete conversion of PEG-TA into TA, so PEG-TA could be indirectly quantified. The results revealed that the GC-MS/MS method had excellent selectivity, accuracy and linearity. Calibration curves were linear (R² > 0.99) within the range of 20.0-1000.0 ng/mL for TA and 100.0-10,000.0 ng/mL for PEG-TA. The intra- and inter-day precision of quality control samples were within 15%, and the accuracy values varied from 93.54% to 113.38%. This analytical method has been successfully applied to a pharmacokinetic study of PEG-TA. This study can facilitate the further exploration and quantification of PEGylated prodrugs. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Benchmarking common quantification strategies for large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Hogrebe, Alexander; von Stechow, Louise; Bekker-Jensen, Dorte B

    2018-01-01

    Comprehensive mass spectrometry (MS)-based proteomics is now feasible, but reproducible quantification remains challenging, especially for post-translational modifications such as phosphorylation. Here, we compare the most popular quantification techniques for global phosphoproteomics: label-free...

  7. Special set linear algebra and special set fuzzy linear algebra

    OpenAIRE

    Kandasamy, W. B. Vasantha; Smarandache, Florentin; Ilanthenral, K.

    2009-01-01

    The authors introduce the notions of special set linear algebra and special set fuzzy linear algebra, which extend the notions of set linear algebra and set fuzzy linear algebra. These concepts are best suited to applications in multi-expert models and cryptology. This book has five chapters. In chapter one the basic concepts of set linear algebra are given in order to make the book self-contained. The notion of special set linear algebra and their fuzzy analog...

  8. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...
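
    As a rough, language-swapped illustration of one of the extensions listed above (the book itself works in R), the sketch below fits a Poisson generalized linear model with Python's statsmodels on synthetic count data; the variable names and data are assumptions, not examples from the book.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        exposure = rng.uniform(1, 10, 200)                   # hypothetical covariate
        counts = rng.poisson(np.exp(0.3 + 0.2 * exposure))   # synthetic Poisson outcome

        X = sm.add_constant(exposure)                        # intercept + covariate
        model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        print(model.params)   # should recover roughly [0.3, 0.2]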

  9. Competitive inhibition can linearize dose-response and generate a linear rectifier.

    Science.gov (United States)

    Savir, Yonatan; Tu, Benjamin P; Springer, Michael

    2015-09-23

    Many biological responses require a dynamic range that is larger than standard bimolecular interactions allow, yet also the ability to remain off at low input. Here we show mathematically that an enzyme reaction system involving a combination of competitive inhibition, conservation of the total level of substrate and inhibitor, and positive feedback can behave like a linear rectifier, that is, a network motif with an input-output relationship that is linearly sensitive to substrate above a threshold but unresponsive below the threshold. We propose that the evolutionarily conserved yeast SAGA histone acetylation complex may possess the proper physiological response characteristics and molecular interactions needed to perform as a linear rectifier, and we suggest potential experiments to test this hypothesis. One implication of this work is that linear responses and linear rectifiers might be easier to evolve or synthetically construct than is currently appreciated.
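
    The input-output behaviour described here, unresponsive below a threshold and linearly sensitive above it, is summarised by a threshold-linear function. The sketch below only illustrates that shape with assumed gain and threshold values; it does not reproduce the paper's enzyme-kinetic model.

        import numpy as np

        def linear_rectifier(substrate, threshold, gain):
            """Zero output below the threshold, linear in the excess above it."""
            return gain * np.maximum(substrate - threshold, 0.0)

        s = np.linspace(0, 10, 11)   # substrate input (arbitrary units)
        out = linear_rectifier(s, threshold=3.0, gain=2.0)
        for x, y in zip(s, out):
            print(f"input {x:4.1f} -> output {y:5.1f}")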

  10. Electrodynamic linear motor

    Energy Technology Data Exchange (ETDEWEB)

    Munehiro, H

    1980-05-29

    When driving the carriage of a printer through a rotating motor, there are problems regarding the limited accuracy of the carriage position due to rotation or contraction and ageing of the cable. In order to solve the problem, a direct drive system was proposed, in which the printer carriage is driven by a linear motor. If one wants to keep the motor circuit of such a motor compact, then the magnetic flux density in the air gap must be reduced or the motor travel must be reduced. It is the purpose of this invention to create an electrodynamic linear motor, which on the one hand is compact and light and on the other hand has a relatively high constant force over a large travel. The invention is characterised by the fact that magnetic fields of alternating polarity are generated at equal intervals in the magnetic field, and that the coil arrangement has 2 adjacent coils, whose size corresponds to half the length of each magnetic pole. A logic circuit is provided to select one of the two coils and to determine the direction of the current depending on the signals of a magnetic field sensor on the coil arrangement.

  11. Linear wind generator

    International Nuclear Information System (INIS)

    Kozarov, A.; Petrov, O.; Antonov, J.; Sotirova, S.; Petrova, B.

    2006-01-01

    The purpose of the linear wind-power generator described in this article is to overcome the following disadvantages of the common wind turbine: 1) large bending and twisting moments on the blades and the shaft, especially in strong winds and turbulence; 2) long natural oscillation periods of the construction, which make destructive resonance oscillations possible; 3) high velocity of the peripheral parts of the rotor, creating a danger for birds; 4) difficulties connected with installation and operation on mountain ridges and passes, where the wind energy potential is largest. The working surfaces of the generator in question, driven by the wind, are not connected to a common shaft; instead, each moves along a railway track with little oscillation. Each component is therefore small, and their number can be rather large. The mechanical trajectory is not a circle but a closed outline in a vertical plane, consisting of two rectilinear sections, one above the other, connected at their ends by semicircles. The mechanical energy of each component is converted into electrical energy on the principle of the linear electrical generator. Regulation is provided for when the direction of the wind is perpendicular to the track. A possible gain in effectiveness is shown by directing additional quantities of air onto the moving components with static barriers.

  12. Ultrasonography findings and tumour quantification in patients with pseudomyxoma peritonei

    International Nuclear Information System (INIS)

    Krause, J.; Bergman, A.; Graf, W.; Nilsson, A.; Mahteme, H.

    2012-01-01

    Pseudomyxoma peritonei (PMP) is a disease with varied clinical presentations, and the diagnostic value of ultrasonography (US) is under-investigated. The purpose of this study was to identify the most common US findings in PMP and to investigate the sensitivity, specificity, and positive and negative predictive values of US in quantifying tumour burden in different abdomino-pelvic regions in PMP patients. Between February 2006 and December 2008, 54 patients were treated with cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) due to PMP. The results from preoperative US examination with and without intravenously administered contrast (SonoVue) were compared to surgical findings. The mean US peritoneal cancer index (PCI) was 6 (range 0–25) and the surgical PCI was 18 (range 3–27), p < 0.0001. The histopathological subtypes did not influence the US findings. Ascites, bowel loop adhesions and omental cake were mostly visualised correctly by US. The sensitivity of US in the quantification of tumour nodules was 91.5% (range 74–100%) and the specificity was 33.8% (range 18–55%). The positive predictive value of US examination in PMP was 22% (range 11–44%) and the negative predictive value was 93% (range 77–100%). US can detect the most common PMP findings (ascites and omental cake). The sensitivity of US to quantify PMP tumour burden in different abdomino-pelvic regions was relatively high; however, this imaging tool had low specificity.
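
    The sensitivity, specificity and predictive values reported above follow directly from a 2x2 comparison of the US findings against the surgical reference. The counts in the sketch below are invented for illustration only and are not the study data.

        # Hypothetical 2x2 table: US finding vs. surgical (reference) finding per region.
        tp, fp, fn, tn = 86, 45, 8, 23

        sensitivity = tp / (tp + fn)   # true-positive rate
        specificity = tn / (tn + fp)   # true-negative rate
        ppv = tp / (tp + fp)           # positive predictive value
        npv = tn / (tn + fn)           # negative predictive value

        print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
        print(f"PPV {ppv:.1%}, NPV {npv:.1%}")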

  13. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
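
    At its core, compressive ranging recovers a sparse range profile x from a reduced set of modulated measurements y = Ax. The sketch below reconstructs such a profile with a basic orthogonal matching pursuit and a pseudorandom binary sensing matrix; it illustrates the sparse-recovery idea only, not the optical hardware or the authors' processing chain, and all sizes and values are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_bins, n_meas, n_targets = 200, 60, 3        # range bins, measurements, targets
        A = rng.choice([-1.0, 1.0], size=(n_meas, n_bins)) / np.sqrt(n_meas)

        x_true = np.zeros(n_bins)                     # sparse range profile
        x_true[[20, 75, 140]] = [1.0, 0.6, 0.8]
        y = A @ x_true                                # compressive measurements

        # Orthogonal matching pursuit: greedily pick the bin best correlated with
        # the residual, then refit the selected support by least squares.
        support, residual = [], y.copy()
        for _ in range(n_targets):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef

        x_hat = np.zeros(n_bins)
        x_hat[support] = coef
        print("recovered range bins:", sorted(support))
        print("max reconstruction error:", float(np.max(np.abs(x_hat - x_true))))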

  14. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    OpenAIRE

    Zuidwijk, Rob

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...
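
    A blunt way to see the effect of perturbing one entry of the constraint coefficient matrix is simply to re-solve the program over a range of values for that entry and track the optimal value. The toy problem below, solved with scipy.optimize.linprog, only illustrates that idea; it is not the critical-range machinery developed in the paper, and the coefficients are arbitrary assumptions.

        import numpy as np
        from scipy.optimize import linprog

        # Toy LP: minimize c @ x  subject to  A_ub @ x <= b_ub,  x >= 0.
        c = [-3.0, -2.0]
        b_ub = [10.0, 8.0]

        for a11 in np.linspace(1.0, 3.0, 5):    # perturb one constraint coefficient
            A_ub = [[a11, 2.0],
                    [3.0, 1.0]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
            print(f"A[0,0] = {a11:.2f} -> optimal value {res.fun:.3f}")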

  15. Micromethod for quantification of SH groups generated after reduction of monoclonal antibodies

    International Nuclear Information System (INIS)

    Escobar, Normando Iznaga; Morales, Alejo; Nunez, Gilda

    1996-01-01

    A simple, rapid, and reproducible micromethod for the quantification of sulfhydryl (SH) groups generated after reduction of monoclonal antibody (MAb) disulfide bonds with 2-mercaptoethanol (2-ME) is described. The number of SH groups per molecule of antibody generated by 2-ME and by the other reducing agents was calculated from a cysteine standard curve, using Ellman's reagent to develop the yellow color. Results were plotted as absorbance at 405 nm vs. cysteine concentration (μg/mL). After subtraction of the background due to Ellman's reagent, a straight-line relationship passing through the origin was obtained. The absorption spectrum of the yellow products was checked, and no significant differences were found between the optical densities at 412 nm and 405 nm. Using a small quantity of antibody, on the order of 37 μg, the lowest detection limit for cysteine quantification was 0.03 μg. An excellent linear correlation was found between cysteine concentration and absorbance (r = 0.999), and the mean relative error in the quantification of cysteine from samples was 2.8%. A statistical Student t-test showed excellent linearity and parallelism between the cysteine standard and the samples.
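
    The arithmetic behind the assay, a cysteine standard line forced through the origin and a conversion of the sample absorbance into SH groups per antibody molecule, is sketched below. The absorbances, assay volume and the IgG molar mass are invented assumptions used only to show the calculation; they are not the authors' data.

        import numpy as np

        # Hypothetical cysteine standards (ug/mL) and background-corrected absorbances
        # at 405 nm; the calibration line is forced through the origin.
        cys_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
        absorb = np.array([0.00, 0.11, 0.22, 0.45, 0.89])
        slope = np.sum(absorb * cys_conc) / np.sum(cys_conc ** 2)   # zero-intercept fit

        a_sample = 0.05        # measured absorbance of the reduced-MAb sample
        volume_ml = 1.0        # assay volume (mL), assumed
        mab_mass_ug = 37.0     # antibody in the assay (ug)
        mw_cysteine = 121.16   # g/mol
        mw_igg = 150_000.0     # g/mol, assumed for an IgG

        sh_umol = (a_sample / slope) * volume_ml / mw_cysteine   # umol SH groups
        mab_umol = mab_mass_ug / mw_igg                          # umol antibody
        print(f"SH groups per antibody molecule: {sh_umol / mab_umol:.2f}")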

  16. Micromethod for quantification of SH groups generated after reduction of monoclonal antibodies

    Energy Technology Data Exchange (ETDEWEB)

    Escobar, Normando Iznaga; Morales, Alejo; Nunez, Gilda

    1996-07-01

    A simple, rapid, and reproducible micromethod for the quantification of sulfhydryl (SH) groups generated after reduction of monoclonal antibody (MAb) disulfide bonds with 2-mercaptoethanol (2-ME) is described. The number of SH groups per molecule of antibody generated by 2-ME and by the other reducing agents was calculated from a cysteine standard curve, using Ellman's reagent to develop the yellow color. Results were plotted as absorbance at 405 nm vs. cysteine concentration (μg/mL). After subtraction of the background due to Ellman's reagent, a straight-line relationship passing through the origin was obtained. The absorption spectrum of the yellow products was checked, and no significant differences were found between the optical densities at 412 nm and 405 nm. Using a small quantity of antibody, on the order of 37 μg, the lowest detection limit for cysteine quantification was 0.03 μg. An excellent linear correlation was found between cysteine concentration and absorbance (r = 0.999), and the mean relative error in the quantification of cysteine from samples was 2.8%. A statistical Student t-test showed excellent linearity and parallelism between the cysteine standard and the samples.

  17. Mappings with closed range and compactness

    International Nuclear Information System (INIS)

    Iyahen, S.O.; Umweni, I.

    1985-12-01

    The motivation for this note is the result of E.O. Thorp that a normed linear space E is finite dimensional if and only if every continuous linear map from E into any normed linear space has a closed range. Here, a class of Hausdorff topological groups is introduced; called r-compactifiable topological groups, they include compact groups, locally compact Abelian groups and locally convex linear topological spaces. It is proved that a group in this class which is separable, complete metrizable or locally compact is necessarily compact if its image under every continuous group homomorphism is closed. It is then deduced that a Hausdorff locally convex space is zero if its image under every continuous additive map is closed. (author)

  18. Linearization of the Lorenz system

    International Nuclear Information System (INIS)

    Li, Chunbiao; Sprott, Julien Clinton; Thio, Wesley

    2015-01-01

    Partial and complete piecewise-linearized versions of the Lorenz system are proposed. The linearized versions have an independent total amplitude control parameter. Further linearization leads naturally to a piecewise-linear version of the diffusionless Lorenz system. A chaotic circuit with a single amplitude controller is then implemented using a new switch element, producing a chaotic oscillation that agrees with the numerical calculation for the piecewise-linear diffusionless Lorenz system. - Highlights: • Partial and complete piecewise-linearized versions of the Lorenz system are addressed. • The linearized versions have an independent total amplitude control parameter. • A piecewise-linear version of the diffusionless Lorenz system is derived by further linearization. • A corresponding chaotic circuit without any multiplier is implemented for the chaotic oscillation.
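
    For reference, the smooth Lorenz system that is being piecewise-linearized can be integrated in a few lines. The sketch below uses the classic chaotic parameter values and scipy's solve_ivp; it does not reproduce the paper's piecewise-linear variants, the amplitude-control parameter, or the circuit implementation.

        import numpy as np
        from scipy.integrate import solve_ivp

        SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # classic chaotic parameter values

        def lorenz(t, state):
            x, y, z = state
            return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

        sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], max_step=0.01)
        print("final state:", np.round(sol.y[:, -1], 3))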

  19. Topics in computational linear optimization

    DEFF Research Database (Denmark)

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques available for solving linear optimization problems...... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric...... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based...

  20. Linearization of the Lorenz system

    Energy Technology Data Exchange (ETDEWEB)

    Li, Chunbiao, E-mail: goontry@126.com [School of Electronic & Information Engineering, Nanjing University of Information Science & Technology, Nanjing 210044 (China); Engineering Technology Research and Development Center of Jiangsu Circulation Modernization Sensor Network, Jiangsu Institute of Commerce, Nanjing 211168 (China); Sprott, Julien Clinton [Department of Physics, University of Wisconsin–Madison, Madison, WI 53706 (United States); Thio, Wesley [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210 (United States)

    2015-05-08

    Partial and complete piecewise-linearized versions of the Lorenz system are proposed. The linearized versions have an independent total amplitude control parameter. Further linearization leads naturally to a piecewise-linear version of the diffusionless Lorenz system. A chaotic circuit with a single amplitude controller is then implemented using a new switch element, producing a chaotic oscillation that agrees with the numerical calculation for the piecewise-linear diffusionless Lorenz system. - Highlights: • Partial and complete piecewise-linearized versions of the Lorenz system are addressed. • The linearized versions have an independent total amplitude control parameter. • A piecewise-linear version of the diffusionless Lorenz system is derived by further linearization. • A corresponding chaotic circuit without any multiplier is implemented for the chaotic oscillation.

  1. Dryden Aeronautical Test Range

    Data.gov (United States)

    Federal Laboratory Consortium — Recently redesignated to honor Dr. Hugh L. Dryden, NASA's Dryden Aeronautical Test Range (DATR) supports aerospace flight research and technology integration, space...

  2. Compact Antenna Range

    Data.gov (United States)

    Federal Laboratory Consortium — Facility consists of a folded compact antenna range including a computer controlled three axis position table, parabolic reflector and RF sources for the measurement...

  3. Supercritical fluid extraction (SFE) of ketamine metabolites from dried urine and on-line quantification by supercritical fluid chromatography and single mass detection (on-line SFE-SFC-MS).

    Science.gov (United States)

    Hofstetter, Robert; Fassauer, Georg M; Link, Andreas

    2018-02-15

    On-line solid-phase supercritical fluid extraction (SFE) and chromatography (SFC) coupled to mass spectrometry (MS) has been evaluated for its usefulness with respect to metabolic profiling and pharmacological investigations of ketamine in humans. The aim of this study was to develop and validate a rapid, highly selective and sensitive SFE-SFC-MS method for the quantification of ketamine and its metabolites in miniature amounts in human urine excluding liquid-liquid extraction (LLE). Several conditions were optimized systematically following the requirements of the European Medicines Agency: selectivity, carry-over, calibration curve parameters (LLOQ, range and linearity), within- and between-run accuracy and precision, dilution integrity, matrix effect, and stability. The method, which required a relatively small volume of human urine (20 μL per sample), was validated for pharmacologically and toxicologically relevant concentrations ranging from 25.0 to 1000 ng/mL (r² > 0.995). The lower limit of quantification (LLOQ) for all compounds was found to be as low as 0.5 ng. In addition, stability of analytes during removal of water from the urine samples using different conditions (filter paper or ISOLUTE® HM-N) was studied. In conclusion, the method developed in this study can be successfully applied to studies of ketamine metabolites in humans, and may pave the way for routine application of on-line SFE-SFC-MS in clinical investigations. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Colour thresholding and objective quantification in bioimaging

    Science.gov (United States)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (0-255 intensity levels), the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
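
    Colour thresholding of the kind described, keeping only pixels whose hue falls inside a chosen window and quantifying the selected area, can be sketched in a few lines. The synthetic image and the HSV bounds below are arbitrary assumptions and do not correspond to any particular histochemical stain.

        import numpy as np
        from matplotlib.colors import rgb_to_hsv

        # Synthetic 64x64 RGB "section": a brownish square on a bluish background.
        img = np.zeros((64, 64, 3))
        img[...] = (0.55, 0.60, 0.85)            # background
        img[20:44, 20:44] = (0.55, 0.30, 0.15)   # "stained" region

        hsv = rgb_to_hsv(img)
        hue, sat = hsv[..., 0], hsv[..., 1]

        # Keep pixels whose hue lies in an (assumed) brown window and are saturated.
        mask = (hue > 0.02) & (hue < 0.12) & (sat > 0.3)
        print(f"stained area: {100 * mask.mean():.1f}% of the field")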

  5. Recurrence quantification analysis in Liu's attractor

    International Nuclear Information System (INIS)

    Balibrea, Francisco; Caballero, M. Victoria; Molera, Lourdes

    2008-01-01

    Recurrence Quantification Analysis is used to detect transitions from chaos to periodic states, or from chaos to chaos, in a new dynamical system proposed by Liu et al. This system contains a control parameter in the second equation and was originally introduced to investigate the forming mechanism of the compound structure of the chaotic attractor, which exists when the control parameter is zero.
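
    Recurrence quantification analysis starts from a recurrence matrix R[i, j] = 1 whenever two points of the trajectory lie closer than a threshold, from which measures such as the recurrence rate and determinism are derived. The sketch below computes these two measures for an arbitrary scalar series without embedding; it is a generic illustration and does not reproduce the Liu-system analysis of the paper.

        import numpy as np

        def recurrence_matrix(series, eps):
            """R[i, j] = 1 when |x_i - x_j| < eps (no embedding, for brevity)."""
            x = np.asarray(series, dtype=float)
            return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

        def determinism(R, lmin=2):
            """Fraction of recurrent points lying on diagonal lines of length >= lmin."""
            n, on_lines = R.shape[0], 0
            for k in range(-(n - 1), n):
                run = 0
                for v in list(np.diagonal(R, offset=k)) + [0]:   # trailing 0 flushes the run
                    if v:
                        run += 1
                    else:
                        if run >= lmin:
                            on_lines += run
                        run = 0
            return on_lines / R.sum()

        t = np.linspace(0, 20 * np.pi, 500)
        signal = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
        R = recurrence_matrix(signal, eps=0.2)
        print(f"recurrence rate: {R.mean():.3f}")
        print(f"determinism:     {determinism(R):.3f}")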

  6. Quantification of coating aging using impedance measurements

    NARCIS (Netherlands)

    Westing, E.P.M. van; Weijde, D.H. van der; Vreijling, M.P.W.; Ferrari, G.M.; Wit, J.H.W. de

    1998-01-01

    This chapter shows the application results of a novel approach to quantify the ageing of organic coatings using impedance measurements. The ageing quantification is based on the typical impedance behaviour of barrier coatings in immersion. This immersion behaviour is used to determine the limiting

  7. Quantification analysis of CT for aphasic patients

    International Nuclear Information System (INIS)

    Watanabe, Shunzo; Ooyama, Hiroshi; Hojo, Kei; Tasaki, Hiroichi; Hanazono, Toshihide; Sato, Tokijiro; Metoki, Hirobumi; Totsuka, Motokichi; Oosumi, Noboru.

    1987-01-01

    Using a microcomputer, the locus and extent of the lesions demonstrated by computed tomography in 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and the types of aphasia were investigated on slices 3, 4, 5, and 6 using quantification theory Type 3 (pattern analysis). Some regularities were observed on slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated along the 1st and 2nd components of quantification theory Type 3. The group with global aphasia, on the other hand, lay between the groups with Broca's and Wernicke's aphasia. The group of patients with amnestic aphasia showed no specific findings, and the group with conduction aphasia lay near the group with Wernicke's aphasia. The above results serve to establish quantification theory Type 2 (discrimination analysis) and quantification theory Type 1 (regression analysis). (author)

  8. Quantification analysis of CT for aphasic patients

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, S.; Ooyama, H.; Hojo, K.; Tasaki, H.; Hanazono, T.; Sato, T.; Metoki, H.; Totsuka, M.; Oosumi, N.

    1987-02-01

    Using a microcomputer, the locus and extent of the lesions demonstrated by computed tomography in 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and the types of aphasia were investigated on slices 3, 4, 5, and 6 using quantification theory Type 3 (pattern analysis). Some regularities were observed on slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated along the 1st and 2nd components of quantification theory Type 3. The group with global aphasia, on the other hand, lay between the groups with Broca's and Wernicke's aphasia. The group of patients with amnestic aphasia showed no specific findings, and the group with conduction aphasia lay near the group with Wernicke's aphasia. The above results serve to establish quantification theory Type 2 (discrimination analysis) and quantification theory Type 1 (regression analysis).

  9. Quantification of Cannabinoid Content in Cannabis

    Science.gov (United States)

    Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.

    2015-09-01

    Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.
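
    Selecting an "optimal waveband" by correlation amounts to scanning the reflectance spectrum for the band whose values correlate most strongly with the measured constituent. The toy scan below uses synthetic data in which one band is made to respond to THC content; the band grid, the noise model, and the single-band choice are assumptions and do not reproduce the study's measurements or its stepwise regression.

        import numpy as np

        rng = np.random.default_rng(2)
        wavelengths = np.arange(400, 1001, 5)          # nm
        n_samples = 40
        thc = rng.uniform(0.1, 5.0, n_samples)         # synthetic THC content (%)

        # Synthetic reflectance: one band responds to THC, the rest is noise.
        reflectance = rng.normal(0.4, 0.02, (n_samples, wavelengths.size))
        target_band = np.argmin(np.abs(wavelengths - 695))
        reflectance[:, target_band] -= 0.02 * thc

        # Pearson correlation of every band with THC content.
        r = np.array([np.corrcoef(reflectance[:, j], thc)[0, 1]
                      for j in range(wavelengths.size)])
        best = int(np.argmax(np.abs(r)))
        print(f"best band: {wavelengths[best]} nm (r = {r[best]:.2f})")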

  10. Quantification of glycyrrhizin biomarker in Glycyrrhiza glabra ...

    African Journals Online (AJOL)

    Background: A simple, sensitive and validated reversed-phase high-performance thin-layer chromatographic (RP-HPTLC) method has been established for the quantification of glycyrrhizin in Glycyrrhiza glabra rhizome and baby herbal formulations. Materials and Methods: The RP-HPTLC method was carried out using glass coated with RP-18 ...

  11. Cues, quantification, and agreement in language comprehension.

    Science.gov (United States)

    Tanner, Darren; Bulkes, Nyssa Z

    2015-12-01

    We investigated factors that affect the comprehension of subject-verb agreement in English, using quantification as a window into the relationship between morphosyntactic processes in language production and comprehension. Event-related brain potentials (ERPs) were recorded while participants read sentences with grammatical and ungrammatical verbs, in which the plurality of the subject noun phrase was either doubly marked (via overt plural quantification and morphological marking on the noun) or singly marked (via only plural morphology on the noun). Both acceptability judgments and the ERP data showed heightened sensitivity to agreement violations when quantification provided an additional cue to the grammatical number of the subject noun phrase, over and above plural morphology. This is consistent with models of grammatical comprehension that emphasize feature prediction in tandem with cue-based memory retrieval. Our results additionally contrast with those of prior studies that showed no effects of plural quantification on agreement in language production. These findings therefore highlight some nontrivial divergences in the cues and mechanisms supporting morphosyntactic processing in language production and comprehension.

  12. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...
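
    In DSC-MRI the measured tissue curve is, approximately, the arterial input function convolved with the scaled residue function, so recovering the IRF is a deconvolution problem. The sketch below solves a discretised version with plain Tikhonov regularisation as a stand-in; the curves, sampling and regularisation weight are assumptions, and the Gaussian-process deconvolution proposed in the paper is not reproduced here.

        import numpy as np

        dt = 1.0                                   # s, sampling interval (assumed)
        t = np.arange(0, 60, dt)

        # Synthetic arterial input function (AIF) and an exponential residue function.
        aif = (t / 6.0) * np.exp(-t / 6.0)
        irf_true = np.exp(-t / 8.0)
        tissue = dt * np.convolve(aif, irf_true)[: t.size]   # forward model

        # Lower-triangular convolution matrix A so that tissue ~ A @ irf.
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(t.size)]
                           for i in range(t.size)])

        lam = 0.1                                  # Tikhonov regularisation weight
        irf_est = np.linalg.solve(A.T @ A + lam * np.eye(t.size), A.T @ tissue)
        print("peak of estimated IRF:", round(float(irf_est.max()), 3))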

  13. On the linear programming bound for linear Lee codes.

    Science.gov (United States)

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced into the linear programming problem for linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.

  14. A critical view on microplastic quantification in aquatic organisms

    International Nuclear Information System (INIS)

    Vandermeersch, Griet; Van Cauwenberghe, Lisbeth; Janssen, Colin R.; Marques, Antonio; Granby, Kit; Fait, Gabriella; Kotterman, Michiel J.J.; Diogène, Jorge; Bekaert, Karen; Robbens, Johan; Devriese, Lisa

    2015-01-01

    Microplastics, plastic particles and fragments smaller than 5 mm, are ubiquitous in the marine environment. Ingestion and accumulation of microplastics have previously been demonstrated for diverse marine species ranging from zooplankton to bivalves and fish, implying the potential for microplastics to accumulate in the marine food web. In this way, microplastics can potentially impact food safety and human health. Although a few methods to quantify microplastics in biota have been described, no comparison and/or intercalibration of these techniques have been performed. Here we conducted a literature review on all available extraction and quantification methods. Two of these methods, involving wet acid destruction, were used to evaluate the presence of microplastics in field-collected mussels (Mytilus galloprovincialis) from three different “hotspot” locations in Europe (Po estuary, Italy; Tagus estuary, Portugal; Ebro estuary, Spain). An average of 0.18±0.14 total microplastics g⁻¹ w.w. for the Acid mix Method and 0.12±0.04 total microplastics g⁻¹ w.w. for the Nitric acid Method was established. Additionally, in a pilot study an average load of 0.13±0.14 total microplastics g⁻¹ w.w. was recorded in commercial mussels (Mytilus edulis and M. galloprovincialis) from five European countries (France, Italy, Denmark, Spain and The Netherlands). A detailed analysis and comparison of methods indicated the need for further research to develop a standardised operating protocol for microplastic quantification and monitoring.

  15. Introduction to linear elasticity

    CERN Document Server

    Gould, Phillip L

    2013-01-01

    Introduction to Linear Elasticity, 3rd Edition, provides an applications-oriented grounding in the tensor-based theory of elasticity for students in mechanical, civil, aeronautical, and biomedical engineering, as well as materials and earth science. The book is distinct from the traditional text aimed at graduate students in solid mechanics by introducing the subject at a level appropriate for advanced undergraduate and beginning graduate students. The author's presentation allows students to apply the basic notions of stress analysis and move on to advanced work in continuum mechanics, plasticity, plate and shell theory, composite materials, viscoelasticity and finite element analysis. This book also: • Emphasizes a tensor-based approach while still distilling down to explicit notation • Provides an introduction to the theory of plates, theory of shells, wave propagation, viscoelasticity and plasticity accessible to advanced undergraduate students • Is appropriate for courses following the emerging trend of teaching solid mechan...

  16. Linear step drive

    International Nuclear Information System (INIS)

    Haniger, L.; Elger, R.; Kocandrle, L.; Zdebor, J.

    1986-01-01

    A linear step drive developed in Czechoslovak-Soviet cooperation and intended for driving WWER-1000 control rods is described. The functional principle of the motor is explained, and the mechanical and electrical parts of the drive, the power control, and the position indicator are described. The motor has latches situated in the reactor at a distance of 3 m from the magnetic armatures, and it has a low structural height above the reactor cover, which suggests its suitability for seismic localities. Its magnetic circuits use counterpoles; the mechanical shocks at the completion of each step are damped using special design features. The position indicator is of a special design and evaluates motor position to within ±1% of total travel. A drive diagram and the flow charts of both the control electronics and the position indicator are presented. (author) 4 figs

  17. Linear pulse amplifier

    International Nuclear Information System (INIS)

    Tjutju, R.L.

    1977-01-01

    The pulse amplifier is a standard and significant part of a spectrometer. Unlike other types of amplifier, it combines amplification with pulse shaping. Because of its special purpose, the device should fulfil the following requirements: high resolution is desired, to gain a high yield comparable to its actual state; a high signal-to-noise ratio is desired, to enhance resolution; high linearity is needed, to facilitate calibration; and good overload recovery is needed, so that the device is capable of analysing low-energy radiation which appears jointly with high-energy radiation. Other expectations of the device are its economical and practical use and its extensive applicability. For that reason it is built on the standard NIM principle, taking the above-mentioned considerations into account. High-quality component parts are used throughout, while their availability on the domestic market is secured. (author)

  18. Linear Accelerator Laboratory

    International Nuclear Information System (INIS)

    1976-01-01

    This report covers the activity of the Linear Accelerator Laboratory during the period June 1974-June 1976. The activity of the Laboratory is essentially centered on high energy physics. The main activities were: experiments performed with the colliding rings (ACO), construction of the new colliding rings and beginning of the work at higher energy (DCI), bubble chamber experiments with the CERN PS neutrino beam, counter experiments with CERN's PS and setting-up of equipment for new experiments with CERN's SPS. During this period a project has also been prepared for an experiment with the new PETRA colliding ring at Hamburg. On the other hand, intense collaboration with the LURE Laboratory, using the electron synchrotron radiation emitted by ACO and DCI, has been developed [fr

  19. HEAVY ION LINEAR ACCELERATOR

    Science.gov (United States)

    Van Atta, C.M.; Beringer, R.; Smith, L.

    1959-01-01

    A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one Mev per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten Mev per nucleon.

  20. Linear absorptive dielectrics

    Science.gov (United States)

    Tip, A.

    1998-06-01

    Starting from Maxwell's equations for a linear, nonconducting, absorptive, and dispersive medium, characterized by the constitutive equations D(x,t) = ε₁(x)E(x,t) + ∫_{-∞}^{t} ds χ(x,t−s)E(x,s) and H(x,t) = B(x,t), a unitary time evolution and canonical formalism is obtained. Given the complex, coordinate- and frequency-dependent electric permeability ε(x,ω), no further assumptions are made. The procedure leads to a proper definition of band gaps in the periodic case and a new continuity equation for energy flow. An S-matrix formalism for scattering from lossy objects is presented in full detail. A quantized version of the formalism is derived and applied to the generation of Čerenkov and transition radiation as well as atomic decay. The last case suggests a useful generalization of the density of states to the absorptive situation.