WorldWideScience

Sample records for average mass approach

  1. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  2. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
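
    The ray construction above lends itself to a compact numerical sketch. The following fragment is a minimal illustration (hypothetical function names; it assumes the outlines have already been aligned to the axis described above and, for brevity, uses a single ray centre rather than the paper's two overlapping arcs):

    ```python
    import numpy as np

    def radial_profile(outline, centre, n_rays=180):
        """Resample a closed digitized outline (N x 2 array of x, y points)
        as radial distances along n_rays equiangular rays from `centre`."""
        dx = outline[:, 0] - centre[0]
        dy = outline[:, 1] - centre[1]
        angles, radii = np.arctan2(dy, dx), np.hypot(dx, dy)
        order = np.argsort(angles)
        ray_angles = np.linspace(-np.pi, np.pi, n_rays, endpoint=False)
        # Radius at which each equiangular ray meets the digitized curve
        return np.interp(ray_angles, angles[order], radii[order], period=2 * np.pi)

    def average_outline(outlines, centre, n_rays=180):
        """Ray-by-ray average of aligned outlines from one foot-length group."""
        return np.mean([radial_profile(o, centre, n_rays) for o in outlines], axis=0)
    ```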

  3. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  4. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  5. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method is given for estimating the average level spacing from a set of resolved resonance parameters using a Bayesian approach. By using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing is obtained. The calculation has been carried out for s-wave resonances and compared with other work

  6. Partial Averaged Navier-Stokes approach for cavitating flow

    International Nuclear Information System (INIS)

    Zhang, L; Zhang, Y N

    2015-01-01

    Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) with a reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physics of the parent RANS model but resolves more scales of motion in greater detail, making PANS superior to RANS. An important step in the PANS approach is to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are reviewed, with a focus on the influence of the filter-width control parameters on the simulation results
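
    As a concrete illustration of the filter-width control parameters mentioned above, the sketch below evaluates the modified dissipation-equation coefficient used in Girimaji-style PANS formulations. The constants are the standard k-epsilon values, and the formula is quoted from the general PANS literature, not from this particular paper:

    ```python
    # Illustrative PANS bridging calculation (assumed standard k-epsilon constants).
    C_E1, C_E2 = 1.44, 1.92

    def pans_ce2_star(f_k, f_eps=1.0):
        """Modified destruction coefficient C*_e2 = C_e1 + (f_k/f_eps) * (C_e2 - C_e1).

        f_k   : ratio of unresolved-to-total turbulent kinetic energy, (0, 1]
        f_eps : ratio of unresolved-to-total dissipation (~1 at high Reynolds number)
        """
        return C_E1 + (f_k / f_eps) * (C_E2 - C_E1)

    # f_k = 1 recovers the parent RANS model; lowering f_k resolves more scales.
    for f_k in (1.0, 0.7, 0.4):
        print(f"f_k = {f_k:.1f}  ->  C*_e2 = {pans_ce2_star(f_k):.3f}")
    ```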

  7. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  8. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the course of the research, the following methods were used: analytical, statistical, computational-mathematical and graphical. The main result of the research is a proposed supplement to the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly those engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions in a wide range of organizations, where an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, a frequent situation arises in which the average salary at the enterprise is difficult to assess objectively, because it is calculated over multiple rates per staff member. In other words, the average salary of
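
    The abstract stops short of giving the formula itself, so the following is only a hypothetical numerical illustration of the correction-factor idea: when staff members also occupy internal secondary positions, dividing the wage fund by the number of occupied rates understates what a physical employee actually earns.

    ```python
    # Hypothetical figures for one organization (all values are assumptions).
    payroll = 4_500_000.0   # monthly wage fund
    rates_occupied = 120    # staff positions (rates) being paid
    employees = 100         # physical persons, some holding secondary jobs

    naive_average = payroll / rates_occupied        # average per rate
    correction = rates_occupied / employees         # proposed correction factor
    corrected_average = naive_average * correction  # average per actual employee

    print(naive_average, corrected_average)  # 37500.0 45000.0
    ```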

  9. Time averaging procedure for calculating the mass and energy transfer rates in adiabatic two phase flow

    International Nuclear Information System (INIS)

    Boccaccini, L.V.

    1986-07-01

    To take advantage of semi-implicit computer models used to solve the two-phase flow differential system, a proper averaging procedure is also needed for the source terms. In fact, in some cases the correlations normally used for the source terms - not time averaged - fail when using the theoretical time step that arises from the linear stability analysis applied to the right-hand side. Such a time averaging procedure is developed here with reference to the bubbly flow regime. Moreover, the concept of the mass that must be exchanged to reach equilibrium from a non-equilibrium state is introduced to limit the mass transfer during a time step. Finally, some practical calculations are performed to compare the different correlations for the average mass transfer rate developed in this work. (orig.)
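
    A minimal sketch of the limiting concept, assuming the (non-averaged) correlation value and the equilibrium mass are already known for the step; all names are hypothetical:

    ```python
    def limited_mass_transfer(gamma_inst, m_to_equilibrium, dt):
        """Average interfacial mass transfer rate for one time step [kg/s].

        gamma_inst       : instantaneous, non-time-averaged correlation value [kg/s]
        m_to_equilibrium : mass exchange needed to reach equilibrium [kg]

        The rate is capped so that no more mass is exchanged during dt than
        is needed to bring the phases back to the equilibrium state.
        """
        gamma_cap = abs(m_to_equilibrium) / dt
        sign = 1.0 if gamma_inst >= 0.0 else -1.0
        return sign * min(abs(gamma_inst), gamma_cap)
    ```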

  10. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    …of a UV attractive non-Gaussian fixed-point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016.

  11. Matrix product approach for the asymmetric random average process

    International Nuclear Information System (INIS)

    Zielen, F; Schadschneider, A

    2003-01-01

    We consider the asymmetric random average process which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly
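
    The process is easy to simulate directly, which makes the product-measure statement concrete. The sketch below performs parallel updates on a periodic chain, with each site passing a random fraction of its mass to its right neighbour; the Beta(2, 2) density is just one example of the beta densities referred to above:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def arap_step(w, a=2.0, b=2.0):
        """One parallel update of the asymmetric random average process:
        every site passes a fraction r ~ Beta(a, b) of its (continuous,
        unbounded) mass to its right neighbour on a periodic chain."""
        r = rng.beta(a, b, size=w.size)
        outflow = r * w
        return w - outflow + np.roll(outflow, 1)  # total mass is conserved

    w = np.ones(1000)            # unit-density initial condition
    for _ in range(5000):
        w = arap_step(w)
    print(w.mean(), w.var())     # mean stays 1; single-site fluctuations settle
    ```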

  12. The partially averaged field approach to cosmic ray diffusion

    International Nuclear Information System (INIS)

    Jones, F.C.; Birmingham, T.J.; Kaiser, T.B.

    1976-08-01

    The kinetic equation for particles interacting with turbulent fluctuations is derived by a new nonlinear technique which successfully corrects the difficulties associated with quasilinear theory. In this new method the effects of the fluctuations are evaluated along particle orbits which themselves include the effects of a statistically averaged subset of the possible configurations of the turbulence. The new method is illustrated by calculating the pitch angle diffusion coefficient $D_{\mu\mu}$ for particles interacting with slab-model magnetic turbulence, i.e., magnetic fluctuations linearly polarized transverse to a mean magnetic field. Results are compared with those of quasilinear theory and also with those of Monte Carlo calculations. The major effect of the nonlinear treatment in this illustration is the determination of $D_{\mu\mu}$ in the vicinity of 90 deg pitch angles, where quasilinear theory breaks down. The spatial diffusion coefficient parallel to a mean magnetic field is evaluated using $D_{\mu\mu}$ as calculated by this technique. It is argued that the partially averaged field method is not limited to small amplitude fluctuating fields and is, hence, not a perturbation theory

  13. An Approach to Predict Debris Flow Average Velocity

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2017-03-01

    Full Text Available Debris flow is one of the major threats to the sustainability of environmental and social development. Its velocity directly determines its impact on vulnerable elements. This study focuses on an approach using a radial basis function (RBF) neural network and the gravitational search algorithm (GSA) for predicting debris flow velocity. A total of 50 debris flow events were investigated in the Jiangjia gully. These data were used for building the GSA-based RBF approach (GSA-RBF). Eighty percent (40 groups) of the measured data were selected randomly as the training database. The other 20% (10 groups) of data were used as testing data. Finally, the approach was applied to predict the velocities of six debris flow gullies in the Wudongde Dam site area, where environmental conditions were similar to the Jiangjia gully. The modified Dongchuan empirical equation (MDEE) and the pulled particle analysis of debris flow (PPA) approach were used for comparison and validation. The results showed that: (i) the GSA-RBF predicted debris flow velocity values are very close to the measured values, and perform better than those using the RBF neural network alone; (ii) the GSA-RBF results and the MDEE results are similar in predicting the Jiangjia gully debris flow velocities, with GSA-RBF performing better; (iii) in the study area, the GSA-RBF results were validated as reliable; and (iv) by using GSA-RBF on the basis of measured data from other areas, more variables could be considered in predicting debris flow velocity, making the approach more widely applicable. Because the GSA-RBF approach was more accurate, both the numerical simulation and the empirical equation can be taken into consideration for constructing debris flow mitigation works. They could be complementary and verified against each other.

  14. Relationship between 18-month mating mass and average lifetime reproduction

    African Journals Online (AJOL)

    1976; Elliot, Rae & Wickham, 1979; Napier et al., 1980). Although in general agreement with results in the literature, it is evident that the present phenotypic correlations between 18-month mating mass and average lifetime lambing and weaning rate tended to be equal to the highest comparable estimates in the ...

  15. Vertically averaged approaches for CO2 migration with solubility trapping

    KAUST Repository

    Gasda, S. E.

    2011-05-20

    The long-term storage security of injected carbon dioxide (CO2) is an essential component of geological carbon sequestration operations. In the postinjection phase, the mobile CO2 plume migrates in large part because of buoyancy forces, following the natural topography of the geological formation. The primary trapping mechanisms are capillary and solubility trapping, which evolve over hundreds to thousands of years and can immobilize a significant portion of the mobile CO2 plume. However, both the migration and trapping processes are inherently complex, spanning multiple spatial and temporal scales. Using an appropriate model that can capture both large- and small-scale effects is essential for understanding the role of these processes on the long-term storage security of CO2 sequestration operations. Traditional numerical models quickly become prohibitively expensive for the type of large-scale, long-term modeling that is necessary for characterizing the migration and immobilization of CO2 during the postinjection period. We present an alternative modeling option that combines vertically integrated governing equations with an upscaled representation of the dissolution-convection process. With this approach, we demonstrate the effect of different modeling choices for typical large-scale geological systems and show that practical calculations can be performed at the temporal and spatial scales of interest. Copyright 2011 by the American Geophysical Union.

  16. A self-consistent semiclassical sum rule approach to the average properties of giant resonances

    International Nuclear Information System (INIS)

    Li Guoqiang; Xu Gongou

    1990-01-01

    The average energies of isovector giant resonances and the widths of isoscalar giant resonances are evaluated with the help of a self-consistent semiclassical sum rule approach. The comparison of the present results with the experimental ones justifies the self-consistent semiclassical sum rule approach to the average properties of giant resonances

  17. Examination of segmental average mass spectra from liquid chromatography-tandem mass spectrometric (LC-MS/MS) data enables screening of multiple types of protein modifications.

    Science.gov (United States)

    Liu, Nai-Yu; Lee, Hsiao-Hui; Chang, Zee-Fen; Tsay, Yeou-Guang

    2015-09-10

    It has been observed that a modified peptide and its non-modified counterpart, when analyzed with reverse phase liquid chromatography, usually share a very similar elution property [1-3]. Inasmuch as this property is common to many different types of protein modifications, we propose an informatics-based approach, featuring the generation of segmental average mass spectra ((sa)MS), that is capable of locating different types of modified peptides in two-dimensional liquid chromatography-mass spectrometric (LC-MS) data collected for regular protease digests from proteins in gels or solutions. To enable the localization of these peptides in the LC-MS map, we have implemented a set of computer programs, or the (sa)MS package, that perform the needed functions, including generating a complete set of segmental average mass spectra, compiling the peptide inventory from the Sequest/TurboSequest results, searching modified peptide candidates and annotating a tandem mass spectrum for final verification. Using ROCK2 as an example, our programs were applied to identify multiple types of modified peptides, such as phosphorylated and hexosylated ones, which particularly include those peptides that could have been ignored due to their peculiar fragmentation patterns and consequent low search scores. Hence, we demonstrate that, when complemented with peptide search algorithms, our approach and the entailed computer programs can add the sequence information needed for bolstering the confidence of data interpretation by the present analytical platforms and facilitate the mining of protein modification information out of complicated LC-MS/MS data. Copyright © 2015 Elsevier B.V. All rights reserved.
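
    The core averaging step (not the authors' full (sa)MS package) can be sketched as follows, assuming profile-mode spectra have already been interpolated onto a common m/z grid; the inputs and the segment width are assumptions for illustration:

    ```python
    import numpy as np

    def segmental_average_spectra(scan_times, spectra, segment_width=60.0):
        """Average spectra within fixed retention-time segments.

        scan_times    : (n_scans,) retention times in seconds
        spectra       : (n_scans, n_mz) intensities on a common m/z grid
        segment_width : segment length in seconds (an assumed setting)

        Returns {segment index: averaged spectrum}; a modified peptide and its
        unmodified counterpart co-eluting in one segment appear side by side.
        """
        segments = (np.asarray(scan_times) // segment_width).astype(int)
        spectra = np.asarray(spectra, dtype=float)
        return {s: spectra[segments == s].mean(axis=0) for s in np.unique(segments)}
    ```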

  18. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time dependency. Nonhealthy sensors can badly influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly in sensor readings within the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, called trend consistency (TC), to take into account the preservation of any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second approach proposes replacing the error bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvements of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify faulty sensors due to a long and continuous missing data range, and (3) identify a healthy sensor. Keywords: Nuclear Reactors
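
    As a toy illustration of the Euclidean-distance weighting (Wd) proposed in the second approach, not the article's full parity space implementation, the snippet below down-weights each redundant sensor in proportion to its summed distance from the other readings, so a drifted transmitter contributes little to the estimate:

    ```python
    import numpy as np

    def distance_weighted_average(readings, consistency=None):
        """Average redundant sensor readings with inverse-distance weights.

        readings    : snapshot of the redundant sensors' values
        consistency : optional extra per-sensor factors (the C or TC idea)
        """
        x = np.asarray(readings, dtype=float)
        d = np.abs(x[:, None] - x[None, :]).sum(axis=1)  # distance to the others
        w = 1.0 / (d + 1e-12)                            # Wd-style weights
        if consistency is not None:
            w = w * np.asarray(consistency, dtype=float)
        return float(np.sum(w * x) / np.sum(w))

    print(distance_weighted_average([10.1, 10.0, 10.2, 12.5]))  # ~10.1, not ~10.7
    ```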

  19. Global Practical Stabilization and Tracking for an Underactuated Ship - A Combined Averaging and Backstepping Approach

    Directory of Open Access Journals (Sweden)

    Kristin Y. Pettersen

    1999-10-01

    Full Text Available We solve both the global practical stabilization and tracking problem for an underactuated ship, using a combined integrator backstepping and averaging approach. Exponential convergence to an arbitrarily small neighbourhood of the origin and of the reference trajectory, respectively, is proved. Simulation results are included.

  20. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Science.gov (United States)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Hervouet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. Volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
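
    The failure rule can be sketched in one dimension as follows; this is a schematic of the idea only (the actual algorithm operates on the 2-D Telemac-Mascaret mesh and rotates unstable elements about an axis), with all names and parameters assumed:

    ```python
    import numpy as np

    def relax_bank(z, dx=1.0, phi_deg=30.0, n_sweeps=50):
        """Relax a 1-D bed profile until no slope exceeds the friction angle."""
        z = z.astype(float).copy()
        s_max = np.tan(np.radians(phi_deg))   # maximum stable slope
        for _ in range(n_sweeps):
            s = np.diff(z) / dx
            steep = np.abs(s) > s_max
            if not steep.any():
                break
            # Move half of the excess height difference downslope; adding to
            # one cell exactly what is removed from the other conserves mass.
            idx = np.flatnonzero(steep)
            dz = 0.5 * (np.abs(s[idx]) - s_max) * dx * np.sign(s[idx])
            np.subtract.at(z, idx + 1, dz)
            np.add.at(z, idx, dz)
        return z
    ```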

  1. Comparison of mass transport using average and transient rainfall boundary conditions

    International Nuclear Information System (INIS)

    Duguid, J.O.; Reeves, M.

    1976-01-01

    A general two-dimensional model for simulation of saturated-unsaturated transport of radionuclides in ground water has been developed and is currently being tested. The model is being applied to study the transport of radionuclides from a waste-disposal site where field investigations are currently under way to obtain the necessary model parameters. A comparison of the amount of tritium transported is made using both average and transient rainfall boundary conditions. The simulations indicate that there is no substantial difference in the transport for the two conditions tested. However, the values of dispersivity used in the unsaturated zone caused more transport above the water table than has been observed under actual conditions. This deficiency should be corrected and further comparisons should be made before average rainfall boundary conditions are used for long-term transport simulations

  2. Risk-informed Analytical Approaches to Concentration Averaging for the Purpose of Waste Classification

    International Nuclear Information System (INIS)

    Esh, D.W.; Pinkston, K.E.; Barr, C.S.; Bradford, A.H.; Ridge, A.Ch.

    2009-01-01

    Nuclear Regulatory Commission (NRC) staff has developed a concentration averaging approach and guidance for the review of Department of Energy (DOE) non-HLW determinations. Although the approach was focused on this specific application, concentration averaging is generally applicable to waste classification and thus has implications for waste management decisions as discussed in more detail in this paper. In the United States, radioactive waste has historically been classified into various categories for the purpose of ensuring that the disposal system selected is commensurate with the hazard of the waste such that public health and safety will be protected. However, the risk from the near-surface disposal of radioactive waste is not solely a function of waste concentration but is also a function of the volume (quantity) of waste and its accessibility. A risk-informed approach to waste classification for near-surface disposal of low-level waste would consider the specific characteristics of the waste, the quantity of material, and the disposal system features that limit accessibility to the waste. NRC staff has developed example analytical approaches to estimate waste concentration, and therefore waste classification, for waste disposed in facilities or with configurations that were not anticipated when the regulation for the disposal of commercial low-level waste (i.e. 10 CFR Part 61) was developed. (authors)

  3. Calculation of weighted averages approach for the estimation of Ping tolerance values

    Science.gov (United States)

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values for the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents: conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution, while macroinvertebrates assigned a 10 are highly tolerant of pollution.
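
    The weighted-averages step is compact enough to sketch; the fragment below handles a single chemical constituent (the study combined six) and rescales the resulting optima onto the 0-10 tolerance scale, with all variable names hypothetical:

    ```python
    import numpy as np

    def weighted_average_optima(abundance, gradient):
        """Abundance-weighted optimum of each taxon along one gradient.

        abundance : (n_sites, n_taxa) macroinvertebrate counts
        gradient  : (n_sites,) constituent values (e.g. conductivity)
        """
        A = np.asarray(abundance, dtype=float)
        x = np.asarray(gradient, dtype=float)
        return (A * x[:, None]).sum(axis=0) / A.sum(axis=0)

    def tolerance_values(optima):
        """Rescale optima to 0-10 (0 = most sensitive, 10 = most tolerant)."""
        lo, hi = optima.min(), optima.max()
        return 10.0 * (optima - lo) / (hi - lo)
    ```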

  4. Investigation of the energy-averaged double transition density of isoscalar monopole excitations in medium-heavy mass spherical nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Gorelik, M.L.; Shlomo, S. [National Research Nuclear University “MEPhI”, Moscow 115409 (Russian Federation); Cyclotron Institute, Texas A&M University, College Station, TX 77843 (United States); Tulupov, B.A. [National Research Nuclear University “MEPhI”, Moscow 115409 (Russian Federation); Institute for Nuclear Research, RAS, Moscow 117312 (Russian Federation); Urin, M.H., E-mail: urin@theor.mephi.ru [National Research Nuclear University “MEPhI”, Moscow 115409 (Russian Federation)

    2016-11-15

    The particle–hole dispersive optical model, developed recently, is applied to study properties of high-energy isoscalar monopole excitations in medium-heavy mass spherical nuclei. The energy-averaged strength functions of the isoscalar giant monopole resonance and its overtone in $^{208}$Pb are analyzed. In particular, we analyze the energy-averaged isoscalar monopole double transition density, the key quantity in the description of hadron–nucleus inelastic scattering, and study the validity of the factorization approximation, using semiclassical and microscopic one-body transition densities, respectively, in calculating the cross sections for the excitation of isoscalar giant resonances by inelastic alpha scattering.

  5. Vibrations in force-and-mass disordered alloys in the average local-information transfer approximation. Application to Al-Ag

    International Nuclear Information System (INIS)

    Czachor, A.

    1979-01-01

    The configuration-averaged displacement-displacement Green's function, derived in a locator-based approximation accounting for the average transfer of information on local coupling and mass, has been applied to study the force- and mass-disorder induced modifications of phonon dispersion relations in substitutional alloys of cubic structures. In this approach the translational invariance condition is obeyed, whereas damping is neglected. Force disorder was found to lead to additional splitting of the phonon curves besides that due to mass disorder, even in the small impurity-concentration case; at larger concentrations the number of splits (frequency gaps) should be even greater. The use of a quasi-locator in the Green's function derivation allows one to partly reconcile the present results with those of the average t-matrix approximation. The experimentally observed splitting in the [100]T phonon dispersion curve for Al-Ag alloys has been interpreted in terms of the above theory and of a quasi-mass of heavy impurity atoms. (Author)

  6. A new method for the measurement of two-phase mass flow rate using average bi-directional flow tube

    International Nuclear Information System (INIS)

    Yoon, B. J.; Uh, D. J.; Kang, K. H.; Song, C. H.; Paek, W. P.

    2004-01-01

    An average bi-directional flow tube was proposed for application under air/steam-water flow conditions. Its working principle is similar to that of a Pitot tube; however, it makes it possible to eliminate the cooling system that is normally needed to prevent flashing in the pressure impulse line of a Pitot tube used under depressurization conditions. The suggested flow tube was tested in a vertical air-water test section with an inner diameter of 80 mm and a length of 10 m. The flow tube was installed at an L/D of 120 from the inlet of the test section. In the test, the pressure drop across the average bi-directional flow tube, the system pressure and the average void fraction were measured on the measuring plane. The fluid temperature and the injected mass flow rates of the air and water phases were also measured, by an RTD and two Coriolis flow meters, respectively. To calculate the phasic mass flow rates from the measured differential pressure and void fraction, the Chexal drift-flux correlation was used. A new correlation for the momentum exchange factor is also suggested. The test results show that the suggested instrumentation, using the measured void fraction and the Chexal drift-flux correlation, can predict the mass flow rates within 10% of the measured data
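
    A simplified sketch of the working principle, using a homogeneous mixture density in place of the full Chexal drift-flux treatment; K (the momentum exchange factor) is flow dependent and the value below is only a placeholder, while the pipe bore matches the 80 mm test section:

    ```python
    import math

    def mixture_mass_flow(dp, alpha, rho_g, rho_l, K=1.0, diameter=0.08):
        """Total mass flow rate from an averaging Pitot-type probe reading.

        dp    : differential pressure across the flow tube [Pa]
        alpha : average void fraction on the measuring plane
        K     : momentum exchange factor (assumed; calibrated in practice)
        """
        rho_m = alpha * rho_g + (1.0 - alpha) * rho_l   # mixture density
        v_m = math.sqrt(2.0 * dp / (K * rho_m))         # mixture velocity
        area = math.pi * (diameter / 2.0) ** 2
        return rho_m * v_m * area                       # [kg/s]
    ```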

  7. An adaptive mesh refinement approach for average current nodal expansion method in 2-D rectangular geometry

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.

    2013-01-01

    Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse mesh strategy that progressively refines the nodes in appropriate regions of the domain to solve the neutron balance equation by the zeroth order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as an assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine the gradients of node surface fluxes and to explore the possibility of node refinement in appropriate regions and directions of the problem. The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. For this purpose, a computer program ANRNE-2D, Adaptive Node Refinement Nodal Expansion, has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain and without increasing the number of unknowns. Some well-known benchmarks have been investigated and improvements are reported

  8. Systematic approach to peak-to-average power ratio in OFDM

    Science.gov (United States)

    Schurgers, Curt

    2001-11-01

    OFDM multicarrier systems support high data rate wireless transmission using orthogonal frequency channels, and require no extensive equalization, yet offer excellent immunity against fading and inter-symbol interference. The major drawback of these systems is the large Peak-to-Average power Ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exists to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as a hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways in order to improve the performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip effect transformation and probabilistic.
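
    To make the PAR definition and the clipping tradeoff concrete, the sketch below generates a random QPSK OFDM symbol, measures its PAR, and hard-limits it at twice the RMS amplitude; every parameter choice here is illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ofdm_symbol(n_carriers=256, oversample=4):
        """Random QPSK OFDM symbol via IFFT (oversampled to expose true peaks)."""
        X = (rng.choice([-1.0, 1.0], n_carriers)
             + 1j * rng.choice([-1.0, 1.0], n_carriers)) / np.sqrt(2)
        Xp = np.zeros(n_carriers * oversample, dtype=complex)
        Xp[:n_carriers] = X
        return np.fft.ifft(Xp)

    def par_db(x):
        p = np.abs(x) ** 2
        return 10.0 * np.log10(p.max() / p.mean())

    x = ofdm_symbol()
    print(f"PAR before clipping: {par_db(x):.1f} dB")

    # Hard limiting ("clipping"): the range reduction discussed above, traded
    # against the extra distortion that becomes part of the total noise budget.
    A = 2.0 * np.sqrt(np.mean(np.abs(x) ** 2))           # clip at twice the RMS
    scale = np.minimum(1.0, A / np.maximum(np.abs(x), 1e-12))
    print(f"PAR after clipping:  {par_db(x * scale):.1f} dB")
    ```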

  9. Evaluation of Average Life Expectancy of Exposed Individuals and their offspring: Population Genetic Approach

    International Nuclear Information System (INIS)

    Telnov, V. I.; Sotnik, N. V.

    2004-01-01

    Average life expectancy (ALE) is a significant integrating indicator of population health. It can be affected by many factors, both radiation-related and hereditary. A population-genetic analysis of ALE was performed for nuclear workers at the Mayak Production Association, exposed to external and internal radiation over a wide dose range, and for their offspring. A methodical approach was proposed to determine ALE for individuals with different genotypes and to estimate ALE in the population based on the genotype distribution. The analysis of a number of genetic markers revealed significant changes in the age-specific pattern of the Hp types in workers over 60 years. Such changes were caused by both radiation and non-radiation (cardiovascular pathology) factors. In the first case ALE decreased as Hp 1-1 > Hp 2-2 > Hp 2-1 (radiation). In the second case, it decreased as Hp 1-1 > Hp 2-1 > Hp 2-2 (non-radiation). Analysis of genetic markers in the workers' offspring indicated significant shifts in the distribution of the Hp types, especially an increase in the proportion of Hp 2-2 at parental doses from external γ-rays above 200 cGy by the time of conception. Based on the non-radiation genotype differences in ALE in this group of offspring, a preliminary calculation of ALE was carried out, which indicated a reduction of 0.52 years in comparison with the control. (Author) 21 refs

  10. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

    Full Text Available This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants, using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of the oil trade balance results in worsening of other components (probably the non-oil trade balance) of the CA and (iii) that the positive influence of terms of trade reveals the functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in the case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth, which most likely reveals citizens' high expectations of future income growth, which has a negative impact on the CA.

  11. Measurement of the average mass of proteins adsorbed to a nanoparticle by using a suspended microchannel resonator.

    Science.gov (United States)

    Nejadnik, M Reza; Jiskoot, Wim

    2015-02-01

    We assessed the potential of a suspended microchannel resonator (SMR) to measure the adsorption of proteins to nanoparticles. Standard polystyrene beads suspended in buffer were weighed by an SMR system. Particle suspensions were mixed with solutions of bovine serum albumin (BSA) or a monoclonal human antibody (IgG), incubated at room temperature for 3 h and weighed again with the SMR. The difference in buoyant mass between the bare and protein-coated polystyrene beads was converted into the real mass of adsorbed protein. The average surface area occupied per protein molecule was calculated, assuming a monolayer of adsorbed protein. In parallel, dynamic light scattering (DLS), nanoparticle tracking analysis (NTA), and zeta potential measurements were performed. SMR revealed a statistically significant increase in the mass of the beads because of adsorption of proteins (for both BSA and IgG), whereas DLS and NTA did not show a difference between the sizes of bare and protein-coated beads. The change in the zeta potential of the beads was also measurable. The surface area occupied per protein molecule was in line with their known size. The presented results show that SMR can be used to measure the mass of protein adsorbed to nanoparticles with high precision in the presence of free protein. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
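
    The buoyant-to-real-mass conversion at the heart of the measurement is a one-liner. In the sketch below the buffer and protein densities, and the measured buoyant-mass difference, are assumed example values rather than the study's numbers:

    ```python
    def real_mass(m_buoyant, rho_material, rho_fluid=1005.0):
        """Invert m_buoyant = m * (1 - rho_fluid / rho_material); SI units."""
        return m_buoyant / (1.0 - rho_fluid / rho_material)

    # The buoyant-mass difference between coated and bare beads is attributed
    # to the adsorbed protein, so it is converted with a typical protein
    # density (~1350 kg/m^3, an assumption).
    delta_buoyant = 5.0e-18                    # kg, hypothetical SMR difference
    print(real_mass(delta_buoyant, 1350.0))    # ~2.0e-17 kg of adsorbed protein
    ```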

  12. Assessing the optimized precision of the aircraft mass balance method for measurement of urban greenhouse gas emission rates through averaging

    Directory of Open Access Journals (Sweden)

    Alexie M. F. Heimburger

    2017-06-01

    Full Text Available To effectively address climate change, aggressive mitigation policies need to be implemented to reduce greenhouse gas emissions. Anthropogenic carbon emissions are mostly generated from urban environments, where human activities are spatially concentrated. Improvements in uncertainty determinations and the precision of measurement techniques are critical to permit accurate and precise tracking of emissions changes relative to the reduction targets. As part of the INFLUX project, we quantified carbon dioxide (CO2), carbon monoxide (CO) and methane (CH4) emission rates for the city of Indianapolis by averaging results from nine aircraft-based mass balance experiments performed in November-December 2014. Our goal was to assess the achievable precision of the aircraft-based mass balance method through averaging, assuming constant CO2, CH4 and CO emissions during a three-week field campaign in late fall. The averaging method leads to an emission rate of 14,600 mol/s for CO2, assumed to be largely fossil-derived for this period of the year, and 108 mol/s for CO. The relative standard error of the mean is 17% and 16%, for CO2 and CO, respectively, at the 95% confidence level (CL), i.e. a more than 2-fold improvement over the previous estimate of ~40% for single-flight measurements for Indianapolis. For CH4, the averaged emission rate is 67 mol/s, while the standard error of the mean at 95% CL is large, i.e. ±60%. Given the results for CO2 and CO for the same flight data, we conclude that this much larger scatter in the observed CH4 emission rate is most likely due to variability of CH4 emissions, suggesting that the assumption of constant daily emissions is not correct for CH4 sources. This work shows that repeated measurements using aircraft-based mass balance methods can yield sufficient precision of the mean to inform emissions reduction efforts by detecting changes over time in urban emissions.
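
    The averaging itself reduces to a campaign mean with a t-based standard error; the sketch below uses hypothetical single-flight values, not the campaign's data, to show how the relative error of the mean at the 95% CL is obtained:

    ```python
    import numpy as np
    from scipy import stats

    def mean_with_ci(rates, cl=0.95):
        """Mean emission rate and confidence half-width from repeated flights."""
        r = np.asarray(rates, dtype=float)
        sem = r.std(ddof=1) / np.sqrt(r.size)          # standard error of mean
        half = stats.t.ppf(0.5 + cl / 2.0, df=r.size - 1) * sem
        return r.mean(), half

    # Nine hypothetical single-flight CO2 emission rates [mol/s]
    flights = [12100, 16800, 13900, 15700, 14200, 17500, 12800, 15100, 13300]
    mean, half = mean_with_ci(flights)
    print(f"{mean:.0f} +/- {half:.0f} mol/s ({100 * half / mean:.0f}% of mean)")
    ```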

  13. A new approach on seismic mortality estimations based on average population density

    Science.gov (United States)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established the association between mortality estimation and seismic intensity without considering the population density. In China, however, such data are not always available, especially in the very urgent relief situation immediately after a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to provide a path to analyzing the death tolls of earthquakes. The present paper employs the average population density to predict the final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case. A typical earthquake case that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door to conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  14. Average transverse momentum vs. dNc/dη for mass-identified particles at Tevatron energies

    International Nuclear Information System (INIS)

    Cole, P.; Allen, C.; Bujak, A.; Carmony, D.D.; Choi, Y.; Debonte, R.; Gutay, L.J.; Hirsch, A.S.; McMahon, T.; Morgan, N.K.; Porile, N.T.; Rimai, A.; Scharenberg, R.P.; Stringfellow, B.C.; Alexopoulos, T.; Erwin, A.R.; Findeisen, C.; Jennings, J.R.; Nelson, K.; Thompson, M.A.; Anderson, E.W.; Lindsey, C.S.; Wang, C.H.; Areti, H.; Hojvat, C.; Reeves, D.; Turkot, F.; Banerjee, S.; Beery, P.D.; Bishop, J.; Biswas, N.N.; Kenney, V.P.; LoSecco, J.M.; McManus, A.P.; Piekarz, J.; Stampke, S.R.; Zuong, H.; Bhat, P.; Carter, T.; Goshaw, A.T.; Loomis, C.; Oh, S.H.; Robertson, W.R.; Walker, W.D.; Wesson, D.K.; DeCarlo, V.

    1992-01-01

    The transverse momentum of charged mesons and $\bar{p}$'s produced within the pseudorapidity range of η=-0.36 to η=+1.0 and azimuthal range of φ=+2° to φ=+18° has been measured in $\bar{p}p$ collisions at $\sqrt{s}=1.8$ TeV. The charged multiplicity of each event was measured by either the 240-element cylindrical hodoscope covering the range -3.25<η<+3.25 or the central drift chamber, which spans a pseudorapidity range of 3.2 units. The average transverse momentum as a function of the pseudorapidity density for mass-identified particles is presented. We have observed pseudorapidity densities as high as 30 particles per unit pseudorapidity. (orig.)

  15. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
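
    For the Granger-Ramanathan family, the weights come from a least-squares fit of the observed hydrograph on the member simulations over the calibration period. Variant A (unconstrained, no intercept) is sketched below, with the caveat that the variant definitions here follow the general model-averaging literature rather than this paper's text:

    ```python
    import numpy as np

    def gra_weights(sim, obs):
        """Granger-Ramanathan variant A: unconstrained least-squares weights.

        sim : (n_time, n_models) member simulations, calibration period
        obs : (n_time,) observed streamflow
        """
        w, *_ = np.linalg.lstsq(sim, obs, rcond=None)
        return w

    def averaged_hydrograph(sim, w):
        return sim @ w   # apply the same weights in validation mode
    ```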

  16. New approaches for metabolomics by mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Vertes, Akos [George Washington Univ., Washington, DC (United States)

    2017-07-10

    Small molecules constitute a large part of the world around us, including fossil and some renewable energy sources. Solar energy harvested by plants and bacteria is converted into energy-rich small molecules on a massive scale. Some of the worst contaminants of the environment and compounds of interest for national security also fall into the category of small molecules. The development of large-scale metabolomic analysis methods lags behind the state of the art established for genomics and proteomics. This is commonly attributed to the diversity of molecular classes included in a metabolome. Unlike nucleic acids and proteins, metabolites do not have standard building blocks, and, as a result, their molecular properties exhibit a wide spectrum. This impedes the development of dedicated separation and spectroscopic methods. Mass spectrometry (MS) is a strong contender in the quest for a quantitative analytical tool with extensive metabolite coverage. Although various MS-based techniques are emerging for metabolomics, many of these approaches include extensive sample preparation that makes large-scale studies resource intensive and slow. New ionization methods are redefining the range of analytical problems that can be solved using MS. This project developed new approaches for the direct analysis of small molecules in unprocessed samples, as well as pushed the limits of ultratrace analysis in volume-limited complex samples. The project resulted in techniques that enabled metabolomics investigations with enhanced molecular coverage, as well as the study of cellular response to stimuli at the single-cell level. Effectively, individual cells became reaction vessels, where we followed the response of a complex biological system to external perturbation. We established two new analytical platforms for the direct study of metabolic changes in cells and tissues following external perturbation. For this purpose we developed a novel technique, laser ablation electrospray

  17. Robust Determinants of Growth in Asian Developing Economies: A Bayesian Panel Data Model Averaging Approach

    OpenAIRE

    LEON-GONZALEZ, Roberto; VINAYAGATHASAN, Thanabalasingam

    2013-01-01

    This paper investigates the determinants of growth in the Asian developing economies. We use Bayesian model averaging (BMA) in the context of a dynamic panel data growth regression to overcome the uncertainty over the choice of control variables. In addition, we use a Bayesian algorithm to analyze a large number of competing models. Among the explanatory variables, we include a non-linear function of inflation that allows for threshold effects. We use an unbalanced panel data set of 27 Asian ...

  18. An average-based accounting approach to capital asset investments: The case of project finance

    OpenAIRE

    Carlo Alberto Magni

    2014-01-01

    Literature and textbooks on capital budgeting endorse Net Present Value (NPV) and generally treat accounting rates of return as not being reliable tools. This paper shows that accounting numbers can be reconciled with NPV and fruitfully employed in real-life applications. Focusing on project finance transactions, an Average Return On Investment (AROI) is drawn from the pro forma financial statements, obtained as the ratio of aggregate income to aggregate book value. It is shown that such a me...

  19. Determination of mean pressure from PIV in compressible flows using the Reynolds-averaging approach

    Science.gov (United States)

    van Gent, Paul L.; van Oudheusden, Bas W.; Schrijer, Ferry F. J.

    2018-03-01

    The feasibility of computing the flow pressure on the basis of PIV velocity data has been demonstrated abundantly for low-speed conditions. The added complications occurring in high-speed compressible flows have, however, so far proved to be largely inhibitive for the accurate experimental determination of instantaneous pressure. Obtaining the mean pressure may remain a worthwhile and realistic goal to pursue. In a previous study, a Reynolds-averaging procedure was developed for this, under the moderate-Mach-number assumption that density fluctuations can be neglected. The present communication addresses the accuracy of this assumption, and the consistency of its implementation, by evaluating the relevance of the different contributions resulting from the Reynolds-averaging. The methodology involves a theoretical order-of-magnitude analysis, complemented with a quantitative assessment based on a simulated and a real PIV experiment. The assessments show that it is sufficient to account for spatial variations in the mean velocity and the Reynolds stresses, and that temporal and spatial density variations (fluctuations and gradients) are of secondary importance and of comparable order of magnitude. This result permits simplification of the calculation of mean pressure from PIV velocity data and validates the approximation of neglecting temporal and spatial density variations without having access to reference pressure data.
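
    Under the constant-density assumption assessed above, the mean-pressure gradient follows from the Reynolds-averaged momentum equation evaluated on the PIV grid. The 2-D sketch below neglects viscous terms and is a schematic of the procedure, not the paper's exact implementation:

    ```python
    import numpy as np

    def mean_pressure_gradient(u, v, uu, uv, vv, rho, dx, dy):
        """dp/dx and dp/dy from mean velocities and Reynolds stresses.

        u, v       : mean velocity components, arrays of shape (ny, nx)
        uu, uv, vv : Reynolds stresses <u'u'>, <u'v'>, <v'v'>
        rho        : density, treated as constant (the key assumption)
        """
        dudy, dudx = np.gradient(u, dy, dx)
        dvdy, dvdx = np.gradient(v, dy, dx)
        duudy, duudx = np.gradient(uu, dy, dx)
        duvdy, duvdx = np.gradient(uv, dy, dx)
        dvvdy, dvvdx = np.gradient(vv, dy, dx)
        dpdx = -rho * (u * dudx + v * dudy + duudx + duvdy)
        dpdy = -rho * (u * dvdx + v * dvdy + duvdx + dvvdy)
        return dpdx, dpdy   # mean pressure follows by spatial integration
    ```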

  20. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    Full Text Available The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to prevent their occurrence. It is in this same vein that our study takes its inspiration. In particular, we have developed a warning model of banking crises based on a Bayesian approach. The results of this approach have allowed us to identify the involvement of declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration and higher real interest rates in triggering banking crises.

  1. Multiple diagnostic approaches to palpable breast mass

    Energy Technology Data Exchange (ETDEWEB)

    Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June [Korea Cancer Center Hospital, Seoul (Korea, Republic of)

    1985-12-15

    The combination of various diagnostic methods for a palpable breast mass has improved diagnostic accuracy. From September 1983 to August 1985, 85 pathologically proven patients with palpable breast masses examined with x-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital were analyzed. The diagnostic accuracies of the methods were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted a more clearly delineated mass with various pathognomonic findings: air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phylloides (100%), air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phylloides (100%), air-cystogram in the cystic type of fibrocystic disease (100%), and vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma.

  2. Multiple diagnostic approaches to palpable breast mass

    International Nuclear Information System (INIS)

    Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June

    1985-01-01

    The combination of various diagnostic methods for a palpable breast mass has improved diagnostic accuracy. From September 1983 to August 1985, 85 pathologically proven patients with palpable breast masses examined with x-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital were analyzed. The diagnostic accuracies of the methods were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted a more clearly delineated mass with various pathognomonic findings: air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phylloides (100%), air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phylloides (100%), air-cystogram in the cystic type of fibrocystic disease (100%), and vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma.

  3. The average carbon-stock approach for small-scale CDM AR projects

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Quijano, J.F.; Muys, B. [Katholieke Universiteit Leuven, Laboratory for Forest, Nature and Landscape Research, Leuven (Belgium); Schlamadinger, B. [Joanneum Research Forschungsgesellschaft mbH, Institute for Energy Research, Graz (Austria); Emmer, I. [Face Foundation, Arnhem (Netherlands); Somogyi, Z. [Forest Research Institute, Budapest (Hungary); Bird, D.N. [Woodrising Consulting Inc., Belfountain, Ontario (Canada)

    2004-06-15

    In many afforestation and reforestation (AR) projects, harvesting with stand regeneration forms an integral part of the silvicultural system and satisfies local timber and/or fuelwood demand. Clear-cut harvesting in particular will lead to an abrupt and significant reduction of carbon stocks. The smaller the project, the more significant the fluctuations of the carbon stocks may be. In the extreme case, a small-scale project could consist of a single forest stand. In such a case, all accounted carbon may be removed during a harvesting operation and the time-path of carbon stocks will typically look as in the hypothetical example presented in the report. For the aggregate of many such small-scale projects there will be a constant benefit to the atmosphere during the projects, due to averaging effects.

  4. Improving the Grade Point Average of Our At-Risk Students: A Collaborative Group Action Research Approach.

    Science.gov (United States)

    Saurino, Dan R.; Hinson, Kenneth; Bouma, Amy

    This paper focuses on the use of a group action research approach to help student teachers develop strategies to improve the grade point average of at-risk students. Teaching interventions such as group work and group and individual tutoring were compared to teaching strategies already used in the field. Results indicated an improvement in the…

  5. Computer-aided detection of masses in digital tomosynthesis mammography: Comparison of three approaches

    International Nuclear Information System (INIS)

    Chan Heangping; Wei Jun; Zhang Yiheng; Helvie, Mark A.; Moore, Richard H.; Sahiner, Berkman; Hadjiiski, Lubomir; Kopans, Daniel B.

    2008-01-01

    The authors are developing a computer-aided detection (CAD) system for masses on digital breast tomosynthesis mammograms (DBT). Three approaches were evaluated in this study. In the first approach, mass candidate identification and feature analysis are performed in the reconstructed three-dimensional (3D) DBT volume. A mass likelihood score is estimated for each mass candidate using a linear discriminant analysis (LDA) classifier. Mass detection is determined by a decision threshold applied to the mass likelihood score. A free response receiver operating characteristic (FROC) curve that describes the detection sensitivity as a function of the number of false positives (FPs) per breast is generated by varying the decision threshold over a range. In the second approach, prescreening of mass candidates and feature analysis are first performed on the individual two-dimensional (2D) projection view (PV) images. A mass likelihood score is estimated for each mass candidate using an LDA classifier trained on the 2D features. The mass likelihood images derived from the PVs are backprojected to the breast volume to estimate the 3D spatial distribution of the mass likelihood scores. The FROC curve for mass detection can again be generated by varying the decision threshold on the 3D mass likelihood scores merged by backprojection. In the third approach, the mass likelihood scores estimated by the 3D and 2D approaches, described above, at the corresponding 3D location are combined and evaluated using FROC analysis. A data set of 100 DBT cases acquired with a GE prototype system at the Breast Imaging Laboratory in the Massachusetts General Hospital was used for comparison of the three approaches. The LDA classifiers with stepwise feature selection were designed with leave-one-case-out resampling. In FROC analysis, the CAD system for detection in the DBT volume alone achieved test sensitivities of 80% and 90% at average FP rates of 1.94 and 3.40 per breast, respectively. With the…
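
    To make the threshold sweep concrete, the snippet below generates FROC operating points from per-candidate mass likelihood scores as described above. It is a minimal sketch of the procedure, not the authors' code; the scores, labels and thresholds are invented for illustration.

```python
import numpy as np

def froc_points(scores, labels, n_breasts, thresholds):
    """(FPs per breast, sensitivity) pairs for a sweep of decision
    thresholds over per-candidate mass likelihood scores.
    labels: 1 if a candidate is a true mass, else 0."""
    n_true = labels.sum()
    points = []
    for thr in thresholds:
        detected = scores >= thr
        tp = np.sum(detected & (labels == 1))   # true masses found
        fp = np.sum(detected & (labels == 0))   # false marks
        points.append((fp / n_breasts, tp / n_true))
    return points

# Toy example: six mass candidates across two breasts
scores = np.array([0.9, 0.8, 0.55, 0.4, 0.3, 0.1])
labels = np.array([1, 0, 1, 0, 0, 1])
print(froc_points(scores, labels, n_breasts=2, thresholds=[0.2, 0.5, 0.85]))
```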

  6. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM), as they consider a Weibull distribution for the baseline hazard function and account for model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure.
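
    The survival model at the core of the BWPHM can be sketched as a Weibull baseline hazard scaled by a covariate factor exp(x·β). The snippet below evaluates the resulting survival curve; the shape/scale values, covariates and coefficients are illustrative placeholders, not the Calgary calibration.

```python
import numpy as np

def weibull_ph_survival(t, shape, scale, covariates, coeffs):
    """Survival S(t) of a Weibull proportional hazards model:
    baseline hazard h0(t) = (shape/scale)*(t/scale)**(shape-1),
    multiplied by the covariate risk factor exp(x . beta)."""
    risk = np.exp(np.dot(covariates, coeffs))
    cumulative_baseline = (t / scale) ** shape      # H0(t)
    return np.exp(-cumulative_baseline * risk)

# Illustrative pipe covariates: [age in decades, diameter in m]
t = np.linspace(1, 100, 5)                          # years in service
print(weibull_ph_survival(t, shape=1.8, scale=80.0,
                          covariates=[4.0, 0.3], coeffs=[0.25, -0.4]))
```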

  7. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as a function of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generates better deterministic discharges than traditional approaches (weighted-average methods with equal and varying weights and the maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in the training and validation periods respectively, at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic application to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
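
    At its simplest, the averaging step reduces to the law of total probability: the expected discharge is the posterior-weighted sum of the member simulations. The sketch below uses fixed weights for brevity, whereas e-Bay lets the posterior vary with discharge magnitude and timing; all numbers are invented.

```python
import numpy as np

def bma_discharge(simulations, weights):
    """Expected discharge E[Q] = sum_k w_k * Q_k, with one simulated
    series per precipitation-product/hydrological-model combination.

    simulations : (n_members, n_timesteps) array
    weights     : (n_members,) posterior probabilities
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # normalize defensively
    return weights @ np.asarray(simulations)

# Three illustrative ensemble members over four time steps (m^3/s)
q = [[10, 12, 15, 11],
     [ 9, 13, 14, 12],
     [11, 11, 16, 10]]
print(bma_discharge(q, weights=[0.5, 0.3, 0.2]))
```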

  8. Review: Management of adnexal masses: An age-guided approach ...

    African Journals Online (AJOL)

    Adnexal masses in different age groups may need different management approaches. By eliminating the types of mass that are less likely in a specific age group, one can identify the most prevalent group, which can guide management. Most important of all, the probability of malignancy must either be identified or ruled out.

  9. An effective approach using blended learning to assist the average students to catch up with the talented ones

    Directory of Open Access Journals (Sweden)

    Baijie Yang

    2013-03-01

    Because average students make up the prevailing part of the student population, it is important but difficult for educators to help them improve their learning efficiency and learning outcomes in school tests. We conducted a quasi-experiment with two English classes taught by one teacher in the second term of the first year of a junior high school. The experimental class was composed of average students (N=37), while the control class comprised talented students (N=34). The two classes therefore performed differently in the English subject, with a mean difference of 13.48 that is statistically significant based on an independent-sample T-test analysis. We tailored the web-based intelligent English instruction system, called Computer Simulation in Educational Communication (CSIEC) and featured with instant feedback, to the learning content of the experiment term, and the experimental class used it one school hour per week throughout the term. This blended learning setting, focused on vocabulary and dialogue acquisition, helped the students in the experimental class improve their learning performance gradually. The mean difference between the two classes on the final test decreased to 3.78, while the mean difference on the test designed for the specially drilled vocabulary knowledge decreased to 2.38 and was statistically not significant. The student interviews and survey also demonstrated the students' favorable attitude toward the blended learning system. We conclude that the long-term integration of this content-oriented blended learning system featured with instant feedback into ordinary classes is an effective approach to assist average students to catch up with talented ones.

  10. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  11. Amputations in natural disasters and mass casualties: staged approach.

    Science.gov (United States)

    Wolfson, Nikolaj

    2012-10-01

    Amputation is a commonly performed procedure during natural disasters and mass casualties related to industrial accidents and military conflicts, where large civilian populations are subjected to severe musculoskeletal trauma. Crush injuries and crush syndrome, an often-overwhelming number of casualties, delayed presentations, and regional cultural and other factors can all mandate a surgical approach to amputation that differs from that typically used under non-disaster conditions. The following article will review the subject of amputation during natural disasters and mass casualties, with emphasis on a staged approach to minimise post-surgical complications, especially infection.

  12. The role of sodium-poly(acrylates) with different weight-average molar mass in phosphate-free laundry detergent builder systems

    OpenAIRE

    Milojević, Vladimir S.; Ilić-Stojanović, Snežana; Nikolić, Ljubiša; Nikolić, Vesna; Stamenković, Jakov; Stojiljković, Dragan

    2013-01-01

    In this study, the synthesis of sodium-poly(acrylate) was performed by polymerization of acrylic acid in water solution with three different contents of potassium persulphate as an initiator. The obtained polymers were characterized by HPLC and GPC analyses in order to determine the purity and average molar mass of the poly(acrylic acid). In order to investigate the influence of sodium-poly(acrylate) as a part of a carbonate/zeolite detergent builder system, secondary washing characteristics...

  13. An approach to the neck mass | Thandar | Continuing Medical ...

    African Journals Online (AJOL)

    An approach to the neck mass. MA Thandar, NE Jonas. No abstract available.

  14. Model-Based Systems Engineering Approach to Managing Mass Margin

    Science.gov (United States)

    Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris

    2012-01-01

    When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single source of truth. In this paper we describe the modeling patterns used to capture the single source of truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).

  15. The average kinetic energy of the heavy quark in Λb in the Bethe-Salpeter equation approach

    International Nuclear Information System (INIS)

    Guo, X.-H.; Wu, H.-K.

    2007-01-01

    In the previous paper, based on the SU(2)_f × SU(2)_s heavy quark symmetries of the QCD Lagrangian in the heavy quark limit, the Bethe-Salpeter equation for the heavy baryon Λ_b was established with the picture that Λ_b is composed of a heavy quark and a scalar light diquark. In the present work, we apply this model to calculate μ_π², the average kinetic energy of the heavy quark inside Λ_b. This quantity is particularly interesting since it can be measured in experiments and since it contributes to the inclusive semileptonic decays of Λ_b when contributions from higher order terms in 1/M_b expansions are taken into account, and consequently influences the determination of the Cabibbo-Kobayashi-Maskawa matrix elements V_ub and V_cb. We find that μ_π² for Λ_b is 0.25 GeV² ∼ 0.95 GeV², depending on the parameters in the model, including the light diquark mass and the interaction strength between the heavy quark and the light diquark in the kernel of the BS equation. We also find that this result is consistent with the value of μ_π² for Λ_b derived from the experimental value of μ_π² for the B meson with the aid of heavy quark effective theory

  16. Endoscopic endonasal approach for mass resection of the pterygopalatine fossa

    Directory of Open Access Journals (Sweden)

    Jan Plzák

    OBJECTIVES: Access to the pterygopalatine fossa is very difficult due to its complex anatomy. Therefore, an open approach is traditionally used, but morbidity is unavoidable. To overcome this problem, an endoscopic endonasal approach was developed as a minimally invasive procedure. The surgical aim of the present study was to evaluate the utility of the endoscopic endonasal approach for the management of both benign and malignant tumors of the pterygopalatine fossa. METHOD: We report our experience with the endoscopic endonasal approach for the management of both benign and malignant tumors and summarize recent recommendations. A total of 13 patients underwent surgery via the endoscopic endonasal approach for pterygopalatine fossa masses from 2014 to 2016. This case group consisted of 12 benign tumors (10 juvenile nasopharyngeal angiofibromas and two schwannomas) and one malignant tumor. RESULTS: No recurrent tumor developed during the follow-up period. One residual tumor (juvenile nasopharyngeal angiofibroma) that remained in the cavernous sinus was stable. There were no significant complications. Typical sequelae included hypesthesia of the maxillary nerve, trismus, and dry eye syndrome. CONCLUSION: The low frequency of complications together with the high efficacy of resection support the use of the endoscopic endonasal approach as a feasible, safe, and beneficial technique for the management of masses in the pterygopalatine fossa.

  17. Endoscopic endonasal approach for mass resection of the pterygopalatine fossa

    Science.gov (United States)

    Plzák, Jan; Kratochvil, Vít; Kešner, Adam; Šurda, Pavol; Vlasák, Aleš; Zvěřina, Eduard

    2017-01-01

    OBJECTIVES: Access to the pterygopalatine fossa is very difficult due to its complex anatomy. Therefore, an open approach is traditionally used, but morbidity is unavoidable. To overcome this problem, an endoscopic endonasal approach was developed as a minimally invasive procedure. The surgical aim of the present study was to evaluate the utility of the endoscopic endonasal approach for the management of both benign and malignant tumors of the pterygopalatine fossa. METHOD: We report our experience with the endoscopic endonasal approach for the management of both benign and malignant tumors and summarize recent recommendations. A total of 13 patients underwent surgery via the endoscopic endonasal approach for pterygopalatine fossa masses from 2014 to 2016. This case group consisted of 12 benign tumors (10 juvenile nasopharyngeal angiofibromas and two schwannomas) and one malignant tumor. RESULTS: No recurrent tumor developed during the follow-up period. One residual tumor (juvenile nasopharyngeal angiofibroma) that remained in the cavernous sinus was stable. There were no significant complications. Typical sequelae included hypesthesia of the maxillary nerve, trismus, and dry eye syndrome. CONCLUSION: The low frequency of complications together with the high efficacy of resection support the use of the endoscopic endonasal approach as a feasible, safe, and beneficial technique for the management of masses in the pterygopalatine fossa. PMID:29069259

  18. Thomas-Fermi approach to nuclear mass formula. Pt. 1

    International Nuclear Information System (INIS)

    Dutta, A.K.; Arcoragi, J.P.; Pearson, J.M.; Tondeur, F.

    1986-01-01

    With a view to having a more secure basis for the nuclear mass formula than is provided by the drop(let) model, we make a preliminary study of the possibilities offered by the Skyrme-ETF method. Two ways of incorporating shell effects are considered: the "Strutinsky-integral" method of Chu et al., and the "expectation-value" method of Brack et al. Each of these methods is compared with the HF method in an attempt to see how reliably they extrapolate from the known region of the nuclear chart out to the neutron-drip line. The Strutinsky-integral method is shown to perform particularly well, and to offer a promising approach to a more reliable mass formula. (orig.)

  19. Variational approach to thermal masses in compactified models

    Energy Technology Data Exchange (ETDEWEB)

    Dominici, Daniele [Dipartimento di Fisica e Astronomia Università di Firenze and INFN - Sezione di Firenze,Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Roditi, Itzhak [Centro Brasileiro de Pesquisas Físicas - CBPF/MCT,Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro, RJ (Brazil)

    2015-08-20

    We investigate by means of a variational approach the effective potential of a 5D U(1) scalar model at finite temperature and compactified on S¹ and S¹/Z₂, as well as the corresponding 4D model obtained through a trivial dimensional reduction. We are particularly interested in the behavior of the thermal masses of the scalar field with respect to the Wilson line phase, and the results obtained are compared with those coming from a one-loop effective potential calculation. We also explore the nature of the phase transition.

  20. Elucidating fluctuating diffusivity in center-of-mass motion of polymer models with time-averaged mean-square-displacement tensor

    Science.gov (United States)

    Miyaguchi, Tomoshige

    2017-10-01

    There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.
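
    The generalized observable is straightforward to compute from trajectory data: for a lag Δ, average the outer product of displacements over the trajectory. A minimal sketch, with a toy random walk standing in for SPT data:

```python
import numpy as np

def tamsd_tensor(traj, lag):
    """Time-averaged MSD tensor of a single trajectory:
    M_ij(lag) = <(r_i(t+lag) - r_i(t)) (r_j(t+lag) - r_j(t))>_t

    traj : (T, d) array of positions; lag : integer time lag.
    The trace recovers the ordinary TA-MSD, while the off-diagonal
    elements carry the orientation information of the diffusivity."""
    disp = traj[lag:] - traj[:-lag]          # (T-lag, d) displacements
    return disp.T @ disp / disp.shape[0]     # (d, d) tensor

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(10_000, 2)), axis=0)  # toy 2D random walk
print(tamsd_tensor(walk, lag=10))   # ~10 on the diagonal, ~0 off-diagonal
```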

  1. Metallurgical source-contribution analysis of PM10 annual average concentration: A dispersion modeling approach in moravian-silesian region

    Directory of Open Access Journals (Sweden)

    P. Jančík

    2013-10-01

    The goal of this article is to present an analysis of the metallurgical industry's contribution to annual average PM10 concentrations in the Moravian-Silesian region, based on air pollution modelling in accordance with the Czech reference methodology SYMOS´97.

  2. Test of Axel-Brink predictions by a discrete approach to resonance-averaged (n,γ) spectroscopy

    International Nuclear Information System (INIS)

    Raman, S.; Shahal, O.; Slaughter, G.G.

    1981-01-01

    The limitations imposed by Porter-Thomas fluctuations in the study of primary γ rays following neutron capture have been partly overcome by obtaining individual γ-ray spectra from 48 resonances in the ¹⁷³Yb(n,γ) reaction and summing them after appropriate normalizations. The resulting average radiation widths (and hence the γ-ray strength function) are in good agreement with the Axel-Brink predictions based on a giant dipole resonance model.

  3. More controlling child-feeding practices are found among parents of boys with an average body mass index compared with parents of boys with a high body mass index.

    Science.gov (United States)

    Brann, Lynn S; Skinner, Jean D

    2005-09-01

    To determine if differences existed in mothers' and fathers' perceptions of their sons' weight, controlling child-feeding practices (ie, restriction, monitoring, and pressure to eat), and parenting styles (ie, authoritarian, authoritative, and permissive) by their sons' body mass index (BMI). One person (L.S.B.) interviewed mothers and boys using validated questionnaires and measured boys' weight and height; fathers completed questionnaires independently. Subjects were white, preadolescent boys and their parents. Boys were grouped by their BMI into an average BMI group (n=25; BMI percentile between 33rd and 68th) and a high BMI group (n=24; BMI percentile ≥85th). Multivariate analyses of variance and analyses of variance were used. Mothers and fathers of boys with a high BMI saw their sons as more overweight (mothers P=.03, fathers P=.01), were more concerned about their sons' weight, and pressured their sons to eat less often than parents of boys with an average BMI; in addition, fathers of boys with a high BMI monitored their sons' eating less often than fathers of boys with an average BMI (P=.006). No differences were found in parenting by boys' BMI groups for either mothers or fathers. More controlling child-feeding practices were found among mothers (pressure to eat) and fathers (pressure to eat and monitoring) of boys with an average BMI compared with parents of boys with a high BMI. A better understanding of the relationships between feeding practices and boys' weight is necessary. However, longitudinal research is needed to provide evidence of causal association.

  4. Differential population synthesis approach to mass segregation in M92

    International Nuclear Information System (INIS)

    Tobin, W.J.

    1979-01-01

    Spectra are presented of 26 low-metal stars and of the center and one-quarter-intensity positions of M92. Spectral coverage is from 390 to 870 nm, with resolution better than 1 nm in the blue and 2 nm in the red. Individual pixel signal-to-noise is about 100. Dwarf features are notably absent from the M92 spectra. Numerical estimates of 36 absorption features are extracted from every spectrum, as are two continuum indices. Mathematical models are constructed describing each feature's dependence on stellar color, luminosity, and metal content, and are then used to estimate the metal content of 6 of the stars for which it is not known. For 10 features reliably measured in M92's center and edge, a mass segregation sensitivity parameter is derived from each feature's deduced luminosity dependence. The ratio of feature equivalent widths at cluster edge and center is compared to this sensitivity: no convincing evidence of mass segregation is seen. The only possible edge-to-center difference seen is in the Mg b 517.4 nm feature. Three of the 10 cluster features can be of interstellar origin, at least in part; in particular, the luminosity-sensitive Na D line cannot be used as a segregation indicator. The experience gained suggests that an integrated-spectrum approach to globular cluster mass segregation is very difficult. An appendix describes in detail the capabilities of the Pine Bluff Observatory 0.91 m telescope, Cassegrain grating spectrograph, and intensified Reticon dual diode-array detector. It is possible to determine a highly consistent wavelength calibration…

  5. Generic features of the dynamics of complex open quantum systems: statistical approach based on averages over the unitary group.

    Science.gov (United States)

    Gessner, Manuel; Breuer, Heinz-Peter

    2013-04-01

    We obtain exact analytic expressions for a class of functions expressed as integrals over the Haar measure of the unitary group in d dimensions. Based on these general mathematical results, we investigate generic dynamical properties of complex open quantum systems, employing arguments from ensemble theory. We further generalize these results to arbitrary eigenvalue distributions, allowing a detailed comparison of typical regular and chaotic systems with the help of concepts from random matrix theory. To illustrate the physical relevance and the general applicability of our results we present a series of examples related to the fields of open quantum systems and nonequilibrium quantum thermodynamics. These include the effect of initial correlations, the average quantum dynamical maps, the generic dynamics of system-environment pure state entanglement and, finally, the equilibration of generic open and closed quantum systems.

  6. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction

    Energy Technology Data Exchange (ETDEWEB)

    Granovsky, Alexander A., E-mail: alex.granovsky@gmail.com [Firefly project, Moscow, 117593 Moscow (Russian Federation)

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  7. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction

    International Nuclear Information System (INIS)

    Granovsky, Alexander A.

    2015-01-01

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation

  8. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction.

    Science.gov (United States)

    Granovsky, Alexander A

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  9. Quantum wavepacket ab initio molecular dynamics: an approach for computing dynamically averaged vibrational spectra including critical nuclear quantum effects.

    Science.gov (United States)

    Sumner, Isaiah; Iyengar, Srinivasan S

    2007-10-18

    We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method that combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond-length, quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, where the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and quantum wavepacket ab initio dynamics, to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicity.

  10. Quantification of benzene, toluene, ethylbenzene and o-xylene in internal combustion engine exhaust with time-weighted average solid phase microextraction and gas chromatography mass spectrometry.

    Science.gov (United States)

    Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat

    2015-05-11

    A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with an exposed fiber (outside of the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. Effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without SPME coating was studied, as was the effect of sample storage time on n loss. Retracted TWA-SPME extractions followed the theoretical model. Extracted n of BTEX was proportional to Cgas, t, Dg, T and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m⁻³ (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas direct-injection method. Copyright © 2015 Elsevier B.V. All rights reserved.
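
    The quantitation rests on diffusion through the needle gap: under Fick's first law the extracted mass grows as n ≈ Dg·A·Cgas·t/Z, so Cgas can be recovered by inversion. A hedged sketch follows; all numerical values are illustrative placeholders, not the paper's calibration.

```python
def twa_spme_concentration(n, D_g, area, Z, t):
    """Invert the Fick's-first-law uptake model n = D_g*area*C*t/Z
    to estimate the time-weighted average gas concentration C.

    n    : extracted mass (ng)
    D_g  : gas-phase diffusion coefficient (cm^2/s)
    area : cross-section of the needle opening (cm^2)
    Z    : fiber retraction depth (cm)
    t    : sampling time (s)
    Returns C in ng/cm^3, numerically equal to mg/m^3.
    """
    return n * Z / (D_g * area * t)

# Illustrative numbers only
print(twa_spme_concentration(n=5.0, D_g=0.088, area=7.8e-4, Z=0.5, t=600))
```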

  11. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the common approaches can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
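
    The "common approach" the article examines is easy to state in code: sign-align the unit quaternions, take their barycenter, and re-normalize. A minimal sketch under that reading (the Riemannian mean would instead iterate through the manifold's log/exp maps):

```python
import numpy as np

def barycenter_quaternion_mean(quats):
    """Naive rotation average: normalized barycenter of unit quaternions.
    Quaternions are sign-aligned first, since q and -q encode the same
    rotation; the final re-normalization is the 'correction' step."""
    quats = np.array([q if np.dot(q, quats[0]) >= 0 else -q for q in quats])
    mean = quats.mean(axis=0)
    return mean / np.linalg.norm(mean)

# Rotations of 0.2 and 0.4 rad about z, as unit quaternions (w, x, y, z)
a = np.array([np.cos(0.1), 0, 0, np.sin(0.1)])
b = np.array([np.cos(0.2), 0, 0, np.sin(0.2)])
print(barycenter_quaternion_mean([a, b]))  # ~ rotation of 0.3 rad about z
```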

  12. Research of connection between mass audience and new media. Approaches to new model of mass communication measurement

    OpenAIRE

    Sibiriakova Olena Oleksandrivna

    2015-01-01

    In this research the author examines changing approaches to the observation of mass communication. Through a systematization of key theoretical models of communication, the author traces the evolution of ideas about measuring mass communication from linear models to multisided, multiple ones.

  13. Geotail observations of plasma sheet ion composition over 16 years: On variations of average plasma ion mass and O+ triggering substorm model

    Science.gov (United States)

    Nosé, M.; Ieda, A.; Christon, S. P.

    2009-07-01

    We examined long-term variations of ion composition in the plasma sheet, using energetic (9.4-212.1 keV/e) ion flux data obtained by the suprathermal ion composition spectrometer (STICS) sensor of the energetic particle and ion composition (EPIC) instrument on board the Geotail spacecraft. EPIC/STICS observations are available from 17 October 1992 for more than 16 years, covering the declining phase of solar cycle 22, all of solar cycle 23, and the early phase of solar cycle 24. This unprecedented long-term data set revealed that (1) the He+/H+ and O+/H+ flux ratios in the plasma sheet were dependent on the F10.7 index; (2) the F10.7 index dependence is stronger for O+/H+ than He+/H+; (3) the O+/H+ flux ratio is also weakly correlated with the ΣKp index; and (4) the He2+/H+ flux ratio in the plasma sheet appeared to show no long-term trend. From these results, we derived empirical equations related to plasma sheet ion composition and the F10.7 index and estimated that the average plasma ion mass changes from ˜1.1 amu during solar minimum to ˜2.8 amu during solar maximum. In such a case, the Alfvén velocity during solar maximum decreases to ˜60% of the solar minimum value. Thus, physical processes in the plasma sheet are considered to be much different between solar minimum and solar maximum. We also compared long-term variation of the plasma sheet ion composition with that of the substorm occurrence rate, which is evaluated by the number of Pi2 pulsations. No correlation or negative correlation was found between them. This result contradicts the O+ triggering substorm model, in which heavy ions in the plasma sheet increase the growth rate of the linear ion tearing mode and play an important role in localization and initiation of substorms. In contrast, O+ ions in the plasma sheet may prevent occurrence of substorms.
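
    The quoted change in average plasma ion mass follows from composition-weighted arithmetic, and the ~60% Alfvén-speed figure from v_A ∝ 1/√⟨m⟩ at fixed field strength and ion number density. A short sketch; the He+/H+ and O+/H+ ratios below are placeholders, not the measured values.

```python
import numpy as np

def average_ion_mass(he_h, o_h, masses=(1.0, 4.0, 16.0)):
    """Number-density-weighted mean ion mass (amu) of an H+/He+/O+
    plasma, given the He+/H+ and O+/H+ density ratios."""
    n = np.array([1.0, he_h, o_h])      # densities relative to H+
    return np.dot(n, masses) / n.sum()

print(average_ion_mass(he_h=0.05, o_h=0.02))   # illustrative ratios

# Alfven speed scales as 1/sqrt(mean ion mass) at fixed B and density;
# with the abstract's 1.1 amu (solar min) vs 2.8 amu (solar max):
print(np.sqrt(1.1 / 2.8))                      # ~0.63, i.e. ~60%
```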

  14. Mass Society/Culture/Media: An Eclectic Approach.

    Science.gov (United States)

    Clavner, Jerry B.

    Instructors of courses in mass society, culture, and communication start out facing three types of difficulties: the historical orientation of learning, the parochialism of various disciplines, and negative intellectually elitist attitudes toward mass culture/media. Added to these problems is the fact that many instructors have little or no…

  15. [Advances in mass spectrometry-based approaches for neuropeptide analysis].

    Science.gov (United States)

    Ji, Qianyue; Ma, Min; Peng, Xin; Jia, Chenxi

    2017-07-25

    Neuropeptides are an important class of endogenous bioactive substances involved in the function of the nervous system, connecting the brain with other neural and peripheral organs. Mass spectrometry-based neuropeptidomics is designed to study neuropeptides in a large-scale manner and obtain important molecular information to further understand the mechanism of nervous system regulation and the pathogenesis of neurological diseases. This review summarizes the basic strategies for the study of neuropeptides using mass spectrometry, including sample preparation and processing, qualitative and quantitative methods, and mass spectrometry imaging.

  16. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the common approaches can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  17. Neutrino Mass Matrix Textures: A Data-driven Approach

    CERN Document Server

    Bertuzzo, E; Machado, P A N

    2013-01-01

    We analyze the neutrino mass matrix entries and their correlations in a probabilistic fashion, constructing probability distribution functions using the latest results from neutrino oscillation fits. Two cases are considered: the standard three neutrino scenario as well as the inclusion of a new sterile neutrino that potentially explains the reactor and gallium anomalies. We discuss the current limits and future perspectives on the mass matrix elements that can be useful for model building.

  18. Searching for intermediate-mass black holes in galaxies with low-luminosity AGN: a multiple-method approach

    Science.gov (United States)

    Koliopanos, F.; Ciambur, B.; Graham, A.; Webb, N.; Coriat, M.; Mutlu-Pakdil, B.; Davis, B.; Godet, O.; Barret, D.; Seigar, M.

    2017-10-01

    Intermediate Mass Black Holes (IMBHs) are predicted by a variety of models and are the likely seeds for supermassive BHs (SMBHs). However, we have yet to establish their existence. One method by which we can discover IMBHs is measuring the mass of an accreting BH using X-ray and radio observations, drawing on the correlation between radio luminosity, X-ray luminosity and BH mass known as the fundamental plane of BH activity (FP-BH). Furthermore, the mass of BHs in the centers of galaxies can be estimated using scaling relations between BH mass and galactic properties. We are initiating a campaign to search for IMBH candidates in dwarf galaxies with low-luminosity AGN, using - for the first time - three different scaling relations and the FP-BH simultaneously. In this first stage of our campaign, we measure the mass of seven LLAGN that have previously been suggested to host central IMBHs, investigate the consistency between the predictions of the BH scaling relations and the FP-BH in the low-mass regime, and demonstrate that this multiple-method approach provides a robust average mass prediction. In my talk, I will discuss our methodology, results and the next steps of this campaign.

  19. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi Model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  20. The Constant Average Relationship Between Dust-obscured Star Formation and Stellar Mass from z=0 to z=2.5

    Science.gov (United States)

    Whitaker, Katherine E.; Pope, Alexandra; Cybulski, Ryan; Casey, Caitlin M.; Popping, Gergo; Yun, Min; 3D-HST Collaboration

    2018-01-01

    The total star formation budget of galaxies consists of the sum of the unobscured star formation, as observed in the rest-frame ultraviolet (UV), together with the obscured component that is absorbed and re-radiated by dust grains in the infrared. We explore how the fraction of obscured star formation depends on star formation rate (SFR) and stellar mass for mass-complete samples of galaxies at 0 < z < 2.5, using MIPS 24μm photometry in the five well-studied extragalactic CANDELS fields. We find a strong dependence of the fraction of obscured star formation (f_obscured = SFR_IR/SFR_UV+IR) on stellar mass, with remarkably little evolution in this fraction with redshift out to z=2.5. 50% of star formation is obscured for galaxies with log(M/M⊙)=9.4; although unobscured star formation dominates the budget at lower masses, there exists a tail of low-mass, extremely obscured star-forming galaxies at z > 1. For log(M/M⊙)>10.5, >90% of star formation is obscured at all redshifts. We also show that at fixed total SFR, f_obscured is lower at higher redshift. At fixed mass, high-redshift galaxies are observed to have more compact sizes and much higher star formation rates, gas fractions and hence surface densities (implying higher dust obscuration), yet we observe no redshift evolution in f_obscured with stellar mass. This poses a challenge for theoretical models to reproduce, where the observed compact sizes at high redshift seem in tension with lower dust obscuration.
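
    The key quantity is just the ratio of obscured to total star formation; a two-line sketch with invented SFRs:

```python
def f_obscured(sfr_ir, sfr_uv):
    """Fraction of star formation that is dust-obscured:
    f = SFR_IR / (SFR_UV + SFR_IR)."""
    return sfr_ir / (sfr_uv + sfr_ir)

# A galaxy forming 9 Msun/yr seen in the IR and 1 Msun/yr in the UV
print(f_obscured(9.0, 1.0))  # 0.9, typical of log(M/Msun) > 10.5
```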

  1. On a mass independent approach leading to planetary orbit discretization

    International Nuclear Information System (INIS)

    Oliveira Neto, Marcal de

    2007-01-01

    The present article discusses a possible fractal approach for understanding orbit configurations around a central force field in well-known systems of our infinitely small and infinitely large universes, based on quantum atomic models. This approach is supported by recent important theoretical investigations reported in the literature. An application presents a study involving the triple star system HD 188753 Cygni in an approach similar to that employed in molecular quantum mechanics investigations.

  2. Mass spectrometry imaging enriches biomarker discovery approaches with candidate mapping.

    Science.gov (United States)

    Scott, Alison J; Jones, Jace W; Orschell, Christie M; MacVittie, Thomas J; Kane, Maureen A; Ernst, Robert K

    2014-01-01

    Integral to the characterization of radiation-induced tissue damage is the identification of unique biomarkers. Biomarker discovery is a challenging and complex endeavor requiring both sophisticated experimental design and accessible technology. The resources within the National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Consortium, Medical Countermeasures Against Radiological Threats (MCART), allow for leveraging robust animal models with novel molecular imaging techniques. One such imaging technique, MALDI (matrix-assisted laser desorption ionization) mass spectrometry imaging (MSI), allows for the direct spatial visualization of lipids, proteins, small molecules, and drugs/drug metabolites-or biomarkers-in an unbiased manner. MALDI-MSI acquires mass spectra directly from an intact tissue slice in discrete locations across an x, y grid that are then rendered into a spatial distribution map composed of ion mass and intensity. The unique mass signals can be plotted to generate a spatial map of biomarkers that reflects pathology and molecular events. The crucial unanswered questions that can be addressed with MALDI-MSI include identification of biomarkers for radiation damage that reflect the response to radiation dose over time and the efficacy of therapeutic interventions. Techniques in MALDI-MSI also enable integration of biomarker identification among diverse animal models. Analysis of early, sublethally irradiated tissue injury samples from diverse mouse tissues (lung and ileum) shows membrane phospholipid signatures correlated with histological features of these unique tissues. This paper will discuss the application of MALDI-MSI for use in a larger biomarker discovery pipeline.
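
    The rendering step described above (one spectrum per x, y position, mapped to ion intensity) can be sketched compactly. The data layout below, a list of peak arrays per pixel, is an assumption for illustration and not an instrument file format.

```python
import numpy as np

def ion_image(spectra, coords, shape, target_mz, tol=0.25):
    """Render a MALDI-MSI ion image: for each pixel, sum the intensity
    of peaks within `tol` of `target_mz`.

    spectra : list of (mz_array, intensity_array) pairs, one per pixel
    coords  : list of (x, y) grid positions, same order as `spectra`
    shape   : (nx, ny) of the output image
    """
    img = np.zeros(shape)
    for (mz, inten), (x, y) in zip(spectra, coords):
        sel = np.abs(mz - target_mz) < tol   # peaks matching the target ion
        img[x, y] = inten[sel].sum()
    return img

# Two-pixel toy slide with one lipid-like peak at m/z 760.6
s = [(np.array([760.6, 780.0]), np.array([5.0, 1.0])),
     (np.array([760.5, 790.1]), np.array([2.0, 3.0]))]
print(ion_image(s, coords=[(0, 0), (0, 1)], shape=(1, 2), target_mz=760.6))
```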

  3. MERRA Chem 3D IAU, Precip Mass Flux, Time average 3-hourly (eta coord edges, 1.25X1L73) V5.2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3FECHM or tavg3_3d_chm_Fe data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers edges that is time averaged, 3D model...

  4. Constraining East Antarctic mass trends using a Bayesian inference approach

    Science.gov (United States)

    Martin-Español, Alba; Bamber, Jonathan L.

    2016-04-01

    East Antarctica is an order of magnitude larger than its western neighbour and the Greenland ice sheet. It has the greatest potential to contribute to sea level rise of any source, including non-glacial contributors. It is, however, the most challenging ice mass to constrain because of a range of factors, including the relative paucity of in-situ observations and the poor signal-to-noise ratio of Earth Observation data such as satellite altimetry and gravimetry. A recent study using satellite radar and laser altimetry (Zwally et al. 2015) concluded that the East Antarctic Ice Sheet (EAIS) had been accumulating mass at a rate of 136±28 Gt/yr for the period 2003-08. Here, we use a Bayesian hierarchical model, which has been tested on, and applied to, the whole of Antarctica, to investigate the impact of different assumptions regarding the origin of elevation changes of the EAIS. We combined GRACE, satellite laser and radar altimeter data and GPS measurements to solve simultaneously for surface processes (primarily surface mass balance, SMB), ice dynamics and glacio-isostatic adjustment over the period 2003-13. The hierarchical model partitions mass trends between SMB and ice dynamics based on physical principles and measures of statistical likelihood. Without imposing the division between these processes, the model apportions about a third of the mass trend to ice dynamics, +18 Gt/yr, and two thirds, +39 Gt/yr, to SMB. The total mass trend for that period for the EAIS was 57±20 Gt/yr. Over the period 2003-08, we obtain an ice dynamic trend of 12 Gt/yr and a SMB trend of 15 Gt/yr, with a total mass trend of 27 Gt/yr. We then imposed the condition that the surface mass balance is tightly constrained by the regional climate model RACMO2.3 and allowed height changes due to ice dynamics to occur only in areas of low surface velocities, seeking the solution that satisfies all the input data, given these constraints. By imposing these conditions, over the period 2003-13 we obtained a mass…

  5. Automatic individual arterial input functions calculated from PCA outperform manual and population-averaged approaches for the pharmacokinetic modeling of DCE-MR images.

    Science.gov (United States)

    Sanz-Requena, Roberto; Prats-Montalbán, José Manuel; Martí-Bonmatí, Luis; Alberich-Bayarri, Ángel; García-Martí, Gracián; Pérez, Rosario; Ferrer, Alberto

    2015-08-01

    To introduce a segmentation method to calculate an automatic arterial input function (AIF) based on principal component analysis (PCA) of dynamic contrast enhanced MR (DCE-MR) imaging and compare it with individual manually selected and population-averaged AIFs using calculated pharmacokinetic parameters. The study included 65 individuals with prostate examinations (27 tumors and 38 controls). Manual AIFs were individually extracted and also averaged to obtain a population AIF. Automatic AIFs were individually obtained by applying PCA to volumetric DCE-MR imaging data and finding the highest correlation of the PCs with a reference AIF. Variability was assessed using coefficients of variation and repeated measures tests. The different AIFs were used as inputs to the pharmacokinetic model and correlation coefficients, Bland-Altman plots and analysis of variance tests were obtained to compare the results. Automatic PCA-based AIFs were successfully extracted in all cases. The manual and PCA-based AIFs showed good correlation (r between pharmacokinetic parameters ranging from 0.74 to 0.95), with differences below the manual individual variability (RMSCV up to 27.3%). The population-averaged AIF showed larger differences (r from 0.30 to 0.61). The automatic PCA-based approach minimizes the variability associated to obtaining individual volume-based AIFs in DCE-MR studies of the prostate. © 2014 Wiley Periodicals, Inc.
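
    The selection step can be sketched compactly: fit PCA to the voxel time curves and keep the component whose time course best correlates with a reference AIF shape. The snippet below is an illustration on invented toy data, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_based_aif(dce_curves, reference_aif, n_components=8):
    """Return the principal-component time course that correlates best
    with a reference AIF shape (sign-corrected, since the sign of a PC
    is arbitrary). dce_curves: (n_voxels, n_timepoints)."""
    pca = PCA(n_components=n_components).fit(dce_curves)
    corrs = np.array([np.corrcoef(c, reference_aif)[0, 1]
                      for c in pca.components_])
    k = int(np.argmax(np.abs(corrs)))
    return pca.components_[k] * np.sign(corrs[k])

# Toy data: 200 voxel curves around a gamma-variate-like reference shape
t = np.linspace(0, 1, 25)
ref = t * np.exp(-4.0 * t)
curves = ref + 0.05 * np.random.default_rng(1).normal(size=(200, 25))
print(pca_based_aif(curves, ref).shape)  # (25,)
```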

  6. Effective Lagrangian approach to the fermion mass problem

    International Nuclear Information System (INIS)

    Shaw, D.S.; Volkas, R.R.

    1994-01-01

    An effective theory is proposed, combining the standard gauge group SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y with a horizontal discrete symmetry. By assigning appropriate charges under this discrete symmetry to the various fermion fields and to (at least) two Higgs doublets, the broad spread of the fermion mass and mixing angle spectrum can be explained as a result of suppressed, non-renormalizable terms. A particular model is constructed which achieves the above while simultaneously suppressing neutral Higgs-induced flavour-changing processes. 9 refs., 3 tabs., 1 fig

  7. (S)fermion masses and lepton flavor violation. A democratic approach

    International Nuclear Information System (INIS)

    Hamaguchi, K.; Kakizaki, Mitsuru; Yamaguchi, Masahiro

    2004-01-01

    It is well-known that flavor mixing among the sfermion masses must be quite suppressed to survive various FCNC experimental bounds. One of the solutions to this supersymmetric FCNC problem is an alignment mechanism in which sfermion masses and fermion masses have some common origin and thus they are somehow aligned to each other. We propose a democratic approach to realize this idea, and illustrate how it has different predictions in slepton masses as well as lepton flavor violation from a more conventional minimal supergravity approach. This talk is based on our work in Ref. 1. (author)

  8. Mass spectrometric based approaches in urine metabolomics and biomarker discovery.

    Science.gov (United States)

    Khamis, Mona M; Adamko, Darryl J; El-Aneed, Anas

    2017-03-01

    Urine metabolomics has recently emerged as a prominent field for the discovery of non-invasive biomarkers that can detect subtle metabolic discrepancies in response to a specific disease or therapeutic intervention. Urine, compared to other biofluids, is characterized by its ease of collection, richness in metabolites and its ability to reflect imbalances of all biochemical pathways within the body. Following urine collection for metabolomic analysis, samples must be immediately frozen to quench any biogenic and/or non-biogenic chemical reactions. According to the aim of the experiment, sample preparation can vary from simple procedures such as filtration to more specific extraction protocols such as liquid-liquid extraction. Due to the lack of comprehensive studies on urine metabolome stability, higher storage temperatures (i.e. 4°C) and repetitive freeze-thaw cycles should be avoided. To date, among all analytical techniques, mass spectrometry (MS) provides the best sensitivity, selectivity and identification capabilities to analyze the majority of the metabolite composition in the urine. Combined with the qualitative and quantitative capabilities of MS, and due to the continuous improvements in its related technologies (i.e. ultra high-performance liquid chromatography [UPLC] and hydrophilic interaction liquid chromatography [HILIC]), liquid chromatography (LC)-MS is unequivocally the most utilized and the most informative analytical tool employed in urine metabolomics. Furthermore, differential isotope tagging techniques have provided a solution to ion suppression from the urine matrix, thus allowing for quantitative analysis. In addition to LC-MS, other MS-based technologies have been utilized in urine metabolomics. These include direct injection (infusion)-MS, capillary electrophoresis-MS and gas chromatography-MS. In this article, the current progress of different MS-based techniques in exploring the urine metabolome as well as the recent findings in providing…

  9. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    Science.gov (United States)

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
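
    One standard way to produce the averaging weights over a class of candidate models is the BIC approximation to the posterior model probability, w_k ∝ exp(-BIC_k/2). This is a common shortcut rather than necessarily the authors' exact computation, and the BIC values below are invented.

```python
import numpy as np

def bma_weights_from_bic(bics):
    """Approximate posterior model probabilities from BIC scores,
    w_k proportional to exp(-BIC_k / 2), assuming equal model priors."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-(b - b.min()) / 2.0)   # shift by the minimum for stability
    return w / w.sum()

# Three candidate time-series models with invented BICs
print(bma_weights_from_bic([1002.3, 1000.1, 1005.9]))
```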

  10. Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach

    Science.gov (United States)

    Fleury, Benoit; Labbe, Julien

    2014-08-01

    The objective of this paper is to show that, for a specific launcher-type mission profile, a 40% mass gain is expected when using an active battery/supercapacitor hybridization instead of a single-battery solution. This result is based on a linear programming optimization approach to perform the mass optimization of the hybrid power supply.
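
    The optimization itself can be posed as a small linear program: minimize total mass subject to the hybrid meeting the mission's energy and peak-power needs. A hedged sketch with scipy; the specific energy/power figures and mission numbers are invented, not the paper's data.

```python
from scipy.optimize import linprog

# Decision variables: battery mass mb and supercapacitor mass ms (kg).
# Toy specific figures (illustrative only):
e_b, e_s = 150.0, 5.0        # usable energy density, Wh/kg
p_b, p_s = 300.0, 4000.0     # usable power density, W/kg
E_need, P_need = 2000.0, 30000.0   # mission energy (Wh), peak power (W)

# minimize mb + ms  subject to  e_b*mb + e_s*ms >= E_need
#                               p_b*mb + p_s*ms >= P_need
res = linprog(c=[1.0, 1.0],
              A_ub=[[-e_b, -e_s], [-p_b, -p_s]],   # flip signs for >=
              b_ub=[-E_need, -P_need],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimal (mb, ms) split and total mass
# A battery-only design would need 100 kg here; the hybrid needs ~20 kg.
```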

  11. A multifunctional design approach for sustainable concrete : with application to concrete mass products

    NARCIS (Netherlands)

    Hüsken, G.

    2010-01-01

    This thesis provides a multifunctional design approach for sustainable concrete, particularly earth-moist concrete (EMC), with application to concrete mass products. EMC is a concrete with low water content and stiff consistency that is used for the production of concrete mass products, such as…

  12. Deference, Denial, and Beyond: A Repertoire Approach to Mass Media and Schooling

    Science.gov (United States)

    Rymes, Betsy

    2011-01-01

    In this article, the author outlines two general research approaches, within the education world, to these mass-mediated formations: "Deference" and "Denial." Researchers who recognize the social practices that give local meaning to mass media formations and ways of speaking do not attempt to recontextualize youth media in their own social…

  13. Mass

    International Nuclear Information System (INIS)

    Quigg, Chris

    2007-01-01

    In the classical physics we inherited from Isaac Newton, mass does not arise, it simply is. The mass of a classical object is the sum of the masses of its parts. Albert Einstein showed that the mass of a body is a measure of its energy content, inviting us to consider the origins of mass. The protons we accelerate at Fermilab are prime examples of Einsteinian matter: nearly all of their mass arises from stored energy. Missing mass led to the discovery of the noble gases, and a new form of missing mass leads us to the notion of dark matter. Starting with a brief guided tour of the meanings of mass, the colloquium will explore the multiple origins of mass. We will see how far we have come toward understanding mass, and survey the issues that guide our research today.

  14. Cell-Averaged discretization for incompressible Navier-Stokes with embedded boundaries and locally refined Cartesian meshes: a high-order finite volume approach

    Science.gov (United States)

    Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team

    2017-11-01

    We present a consistent cell-averaged discretization for incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, gradient, etc., by solving a least-squares system locally. We also construct the inter-level data-transfer operators like prolongation and restriction for multigrid solvers using the same least-squares system approach. This allows us to retain high order of accuracy near coarse-fine interfaces and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).

  15. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is the heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How to achieve intensification? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of the intensification by highlighting the potential role of the multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  16. Optimal Skin-to-Stone Distance Is a Positive Predictor for Successful Outcomes in Upper Ureter Calculi following Extracorporeal Shock Wave Lithotripsy: A Bayesian Model Averaging Approach.

    Directory of Open Access Journals (Sweden)

    Kang Su Cho

    To investigate whether skin-to-stone distance (SSD), which remains controversial in patients with ureter stones, can be a predicting factor for one-session success following extracorporeal shock wave lithotripsy (ESWL) in patients with upper ureter stones. We retrospectively reviewed the medical records of 1,519 patients who underwent their first ESWL between January 2005 and December 2013. Among these patients, 492 had upper ureter stones that measured 4-20 mm and were eligible for our analyses. Maximal stone length, mean stone density (HU), and SSD were determined on pretreatment non-contrast computed tomography (NCCT). For subgroup analyses, patients were divided into four groups: group 1 consisted of patients with SSD<25th percentile, group 2 of patients with SSD in the 25th to 50th percentile, group 3 of patients with SSD in the 50th to 75th percentile, and group 4 of patients with SSD≥75th percentile. In analyses of group 2 patients versus the others, there were no statistical differences in mean age, stone length or density. However, the one-session success rate in group 2 was higher than in the other groups (77.9% vs. 67.0%; P = 0.032). The multivariate logistic regression model revealed that shorter stone length, lower stone density, and the group 2 SSD were positive predictors for successful outcomes in ESWL. Using the Bayesian model-averaging approach, shorter stone length, lower stone density, and group 2 SSD were also positive predictors for successful outcomes following ESWL. Our data indicate that a group 2 SSD of approximately 10 cm is a positive predictor for success following ESWL.

  17. Possible impact of multi-electron loss events on the average beam charge state in an HIF target chamber and a neutral beam approach

    Science.gov (United States)

    Grisham, L. R.

    2001-05-01

    Experiments were carried out during the early 1980s to assess the obtainable atomic neutralization of energetic beams of negative ions ranging from lithium to silicon. The experiments found (Grisham et al. Rev. Sci. Instrum. 53 (1982) 281; Princeton Plasma Physics Laboratory Report PPPL-1857, 1981) that, for elements of higher atomic number than lithium, it appeared that a substantial fraction of the time more than one electron was being lost in a single collision. This result was inferred from the existence of more than one ionization state in the product beam for even the thinnest line densities at which any electron removal took place. Because of accelerator limitations, these experiments were limited to maximum energies of 7 MeV. However, based upon these results, it is possible that multi-electron loss events may also play a significant role in determining the average ion charge state of the much higher Z and more energetic beams traversing the medium in a heavy ion fusion chamber. This could result in the beam charge state being considerably higher than previously anticipated, and might require designers to consider harder vacuum ballistic focusing approaches, or the development of additional space charge neutralization schemes. This paper discusses the measurements that gave rise to these concerns, as well as a description of further measurements that are proposed to be carried out for atomic numbers and energies per amu which would be closer to those required for heavy ion fusion drivers. With a very low current beam of a massive, but low charge state energetic ion, the charge state distribution emerging from a target gas cell could be measured as a function of line density and medium composition. Varying the line density would allow one to simulate the charge state evolution of the beam as a function of distance into the target chamber. This paper also briefly discusses a possible alternative driver approach using photodetachment-neutralized atomic beams.

  18. Mass energy-absorption coefficients and average atomic energy-absorption cross-sections for amino acids in the energy range 0.122-1.330 MeV

    Energy Technology Data Exchange (ETDEWEB)

    More, Chaitali V., E-mail: chaitalimore89@gmail.com; Lokhande, Rajkumar M.; Pawar, Pravina P., E-mail: pravinapawar4@gmail.com [Department of physics, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004 (India)

    2016-05-06

    Mass attenuation coefficients of amino acids such as n-acetyl-l-tryptophan, n-acetyl-l-tyrosine and d-tryptophan were measured in the energy range 0.122-1.330 MeV. A NaI(Tl) scintillation detection system was used to detect gamma rays, with a resolution of 8.2% at 0.662 MeV. The measured attenuation coefficient values were then used to determine the mass energy-absorption coefficients (μ{sub en}/ρ) and average atomic energy-absorption cross sections (σ{sub a,en}) of the amino acids. Theoretical values were calculated based on XCOM data. Theoretical and experimental values are found to be in good agreement.
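
    For reference, the reduction from measured transmission to these quantities follows the narrow-beam Beer-Lambert law. The sketch below is illustrative only: the counts and areal density are placeholders (the molar mass is roughly that of d-tryptophan), and the conversion shown yields an effective cross section per formula unit.

    ```python
    # Sketch of the narrow-beam (Beer-Lambert) reduction used to obtain mass
    # attenuation coefficients; numbers are placeholders, not the paper's data.
    import numpy as np

    N_A = 6.02214076e23          # Avogadro constant, 1/mol

    def mass_attenuation(I0, I, areal_density):
        """mu/rho in cm^2/g from incident/transmitted counts and rho*t in g/cm^2."""
        return np.log(I0 / I) / areal_density

    def cross_section_per_formula_unit(mu_over_rho, molar_mass):
        """Effective cross section per formula unit, in cm^2."""
        return mu_over_rho * molar_mass / N_A

    mu_rho = mass_attenuation(I0=120000.0, I=87000.0, areal_density=0.85)
    sigma = cross_section_per_formula_unit(mu_rho, molar_mass=204.23)  # d-tryptophan
    print(f"mu/rho = {mu_rho:.4f} cm^2/g, sigma = {sigma:.3e} cm^2")
    ```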

  19. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  20. GIS-based Approaches to Catchment Area Analyses of Mass Transit

    DEFF Research Database (Denmark)

    Andersen, Jonas Lohmann Elkjær; Landex, Alex

    2009-01-01

    Catchment area analyses of stops or stations are used to investigate the potential number of travelers to public transportation. These analyses are considered a strong decision tool in the planning process of mass transit, especially railroads. Catchment area analyses are GIS-based buffer and overlay analyses with different approaches depending on the desired level of detail. A simple but straightforward approach to implement is the Circular Buffer Approach, where catchment areas are circular. A more detailed approach is the Service Area Approach, where catchment areas are determined by a street network search to simulate the actual walking distances. A refinement of the Service Area Approach is to implement additional time resistance in the network search to simulate obstacles in the walking environment. This paper reviews and compares the different GIS-based catchment area approaches, their level
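
    A minimal version of the Circular Buffer Approach is a one-liner in standard GIS libraries; the sketch below uses shapely with invented coordinates and an area-weighted population overlay. The Service Area Approach would instead compute walking distances over a street network graph.

    ```python
    # Minimal sketch of the Circular Buffer Approach with shapely; coordinates
    # and the population block are invented for illustration.
    from shapely.geometry import Point, Polygon

    station = Point(725000, 6175000)          # projected coordinates (metres)
    catchment = station.buffer(600)           # 600 m walking-distance circle

    # Hypothetical residential block with a known population count.
    block = Polygon([(724800, 6174800), (725400, 6174800),
                     (725400, 6175300), (724800, 6175300)])
    population = 950

    # Area-weighted overlay: assign population proportional to the overlap.
    share = catchment.intersection(block).area / block.area
    print(f"estimated catchment population from this block: {share * population:.0f}")
    ```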

  1. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    International Nuclear Information System (INIS)

    Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C.J.

    2016-01-01

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has

  2. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C.J.

    2016-11-01

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach
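
    The assimilation step common to both records above can be illustrated with a toy perturbed-observation ensemble Kalman update. The sketch below is schematic: a random linear observation operator stands in for the RANS-based mapping from Reynolds-stress perturbations to observed velocities, and all dimensions and statistics are invented.

    ```python
    # Schematic iterative ensemble Kalman update in the spirit of the framework
    # above; a toy linear observation operator H, not the actual RANS setup.
    import numpy as np

    rng = np.random.default_rng(0)
    n_state, n_obs, n_ens = 50, 5, 100

    X = rng.normal(size=(n_state, n_ens))        # prior ensemble of state vectors
    H = rng.normal(size=(n_obs, n_state)) / n_state
    R = 0.01 * np.eye(n_obs)                     # observation-error covariance
    d = rng.normal(size=n_obs)                   # sparse observations

    for _ in range(5):                           # a few Kalman iterations
        Xm = X - X.mean(axis=1, keepdims=True)
        HX = H @ X
        HXm = HX - HX.mean(axis=1, keepdims=True)
        P_xy = Xm @ HXm.T / (n_ens - 1)          # state-observation covariance
        P_yy = HXm @ HXm.T / (n_ens - 1) + R
        K = P_xy @ np.linalg.solve(P_yy, np.eye(n_obs))   # Kalman gain
        D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
        X = X + K @ (D - HX)                     # perturbed-observation update

    print("posterior mean norm:", np.linalg.norm(X.mean(axis=1)))
    ```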

  3. Lattice Hamiltonian approach to the massless Schwinger model. Precise extraction of the mass gap

    International Nuclear Information System (INIS)

    Cichy, Krzysztof; Poznan Univ.; Kujawa-Cichy, Agnieszka; Szyniszewski, Marcin; Manchester Univ.

    2012-12-01

    We present results of applying the Hamiltonian approach to the massless Schwinger model. A finite basis is constructed using the strong coupling expansion to a very high order. Using exact diagonalization, the continuum limit can be reliably approached. This allows us to reproduce the analytical results for the ground state energy, as well as the vector and scalar mass gaps, to an outstanding precision better than 10^-6 %.

  4. Lattice Hamiltonian approach to the massless Schwinger model. Precise extraction of the mass gap

    Energy Technology Data Exchange (ETDEWEB)

    Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Kujawa-Cichy, Agnieszka [Poznan Univ. (Poland). Faculty of Physics; Szyniszewski, Marcin [Poznan Univ. (Poland). Faculty of Physics; Manchester Univ. (United Kingdom). NOWNano DTC

    2012-12-15

    We present results of applying the Hamiltonian approach to the massless Schwinger model. A finite basis is constructed using the strong coupling expansion to a very high order. Using exact diagonalization, the continuum limit can be reliably approached. This allows us to reproduce the analytical results for the ground state energy, as well as the vector and scalar mass gaps, to an outstanding precision better than 10{sup -6} %.

  5. Sliding Mode Control for Mass Moment Aerospace Vehicles Using Dynamic Inversion Approach

    Directory of Open Access Journals (Sweden)

    Xiao-Yu Zhang

    2013-01-01

    The moving mass actuation technique offers significant advantages over conventional aerodynamic control surfaces and reaction control systems, because the actuators are contained entirely within the airframe's geometrical envelope. Modeling, control, and simulation of Mass Moment Aerospace Vehicles (MMAV) utilizing moving mass actuators are discussed. Dynamics of the MMAV are separated into two parts on the basis of two-time-scale separation theory: the dynamics of the fast states and the dynamics of the slow states. Then, in order to suppress chattering and maintain tracking performance under aerodynamic parameter perturbations, the flight control system is designed for the two subsystems, respectively, utilizing a fuzzy sliding mode control approach. The simulation results demonstrate the effectiveness of the proposed autopilot design approach. Meanwhile, the chattering phenomenon that frequently appears in conventional variable structure systems is also eliminated without deteriorating the system's robustness.
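
    For orientation, a generic boundary-layer sliding-mode control law of the kind referred to above looks as follows; the fuzzy adaptation of the switching gain is omitted, and the surface slope, gain, and boundary-layer width are illustrative assumptions.

    ```python
    # Generic boundary-layer sliding-mode control law (sketch); the smooth
    # saturation inside the boundary layer is what suppresses chattering.
    import numpy as np

    def smc_force(e, e_dot, c=2.0, k=5.0, phi=0.05):
        """u = -k * sat(s/phi) with sliding surface s = c*e + e_dot."""
        s = c * e + e_dot
        sat = np.clip(s / phi, -1.0, 1.0)    # linear inside the boundary layer
        return -k * sat

    print(smc_force(e=0.02, e_dot=-0.1))     # control command for one state sample
    ```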

  6. Refining mass formulas for astrophysical applications: A Bayesian neural network approach

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2017-10-01

    Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role in both informing theoretical models as well as in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r -process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
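
    The two-pronged paradigm can be caricatured in a few lines: learn the residual between experiment and a base mass formula, and attach an uncertainty to the learned correction. In the sketch below, a bootstrap ensemble of small networks stands in for a full Bayesian neural network, and the "residuals" are synthetic, not nuclear data.

    ```python
    # Toy version of the two-pronged refinement: learn the residual between
    # experimental masses and a base mass formula. A bootstrap ensemble of
    # networks approximates the Bayesian posterior spread. Data are invented.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    Z = rng.integers(20, 100, size=300)
    N = rng.integers(20, 150, size=300)
    X = np.column_stack([Z, N]).astype(float)
    residual = 0.5 * np.sin(Z / 7.0) + 0.3 * np.cos(N / 11.0)  # fake M_exp - M_model

    ensemble = []
    for seed in range(10):                        # ensemble ~ posterior samples
        idx = rng.integers(0, len(X), len(X))     # bootstrap resample
        net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                           random_state=seed).fit(X[idx], residual[idx])
        ensemble.append(net)

    preds = np.array([net.predict([[50.0, 70.0]])[0] for net in ensemble])
    print(f"refined correction: {preds.mean():.3f} +/- {preds.std():.3f} MeV")
    ```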

  7. The development of an efficient mass balance approach for the purity assignment of organic calibration standards.

    Science.gov (United States)

    Davies, Stephen R; Alamgir, Mahiuddin; Chan, Benjamin K H; Dang, Thao; Jones, Kai; Krishnaswami, Maya; Luo, Yawen; Mitchell, Peter S R; Moawad, Michael; Swan, Hilton; Tarrant, Greg J

    2015-10-01

    The purity determination of organic calibration standards using the traditional mass balance approach is described. Demonstrated examples highlight the potential for bias in each measurement and the need to implement an approach that provides a cross-check for each result, affording fit-for-purpose purity values in a timely and cost-effective manner. Chromatographic techniques such as gas chromatography with flame ionisation detection (GC-FID) and high-performance liquid chromatography with UV detection (HPLC-UV), combined with mass and NMR spectroscopy, provide a detailed impurity profile allowing an efficient conversion of chromatographic peak areas into relative mass fractions, generally avoiding the need to calibrate each impurity present. For samples analysed by GC-FID, a conservative measurement uncertainty budget is described, including a component to cover potential variations in the response of each unidentified impurity. An alternative approach is also detailed in which extensive purification eliminates the detector response factor issue, facilitating the certification of a super-pure calibration standard which can be used to quantify the main component in less-pure candidate materials. This latter approach is particularly useful when applying HPLC analysis with UV detection. Key to the success of this approach is the application of both qualitative and quantitative (1)H NMR spectroscopy.

  8. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of the application of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  9. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of the mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. Clinical laboratories must therefore provide results that are as accurate as possible, and measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with intermediate imprecision (using long-term internal quality control data) and bias (using a certified reference material). Next, we combined them with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but considering data from internal and external quality control schemes to estimate the uncertainty related to bias. The estimated expanded uncertainties for the single laboratory validation approach and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This would confirm that either approach could be used to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
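
    The combination step of the single laboratory validation approach reduces to a root-sum-of-squares; the sketch below shows the arithmetic with invented component values and a coverage factor of k = 2 (~95% coverage).

    ```python
    # Worked example of the top-down uncertainty combination described above;
    # the component values are illustrative, not the paper's.
    import math

    u_imprecision = 4.8   # % CV from long-term internal QC
    u_bias        = 2.9   # % from certified reference material comparison
    u_cal         = 1.5   # % from calibrator-assigned values

    u_combined = math.sqrt(u_imprecision**2 + u_bias**2 + u_cal**2)
    U_expanded = 2.0 * u_combined          # coverage factor k = 2
    print(f"u_c = {u_combined:.1f} %, U = {U_expanded:.1f} %")
    ```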

  10. Evaluation of a mass-balance approach to determine consumptive water use in northeastern Illinois

    Science.gov (United States)

    Mills, Patrick C.; Duncker, James J.; Over, Thomas M.; Marian Domanski,; ,; Engel, Frank

    2014-01-01

    A principal component of evaluating and managing water use is consumptive use. This is the portion of water withdrawn for a particular use, such as residential, which is evaporated, transpired, incorporated into products or crops, consumed by humans or livestock, or otherwise removed from the immediate water environment. The amount of consumptive use may be estimated by a water (mass)-balance approach; however, because of the difficulty of obtaining necessary data, its application typically is restricted to the facility scale. The general governing mass-balance equation is: Consumptive use = Water supplied - Return flows.
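
    As a trivial worked example of the governing equation (figures invented):

    ```python
    # Direct transcription of the governing mass balance; figures are made up.
    def consumptive_use(water_supplied, return_flows):
        """Consumptive use = water supplied - return flows (same units)."""
        return water_supplied - return_flows

    # e.g. a facility supplied 1.20 Mgal/d that returns 0.95 Mgal/d to the sewer:
    print(consumptive_use(1.20, 0.95), "Mgal/d")
    ```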

  11. A Clifford algebra approach to chiral symmetry breaking and fermion mass hierarchies

    Science.gov (United States)

    Lu, Wei

    2017-09-01

    We propose a Clifford algebra approach to chiral symmetry breaking and fermion mass hierarchies in the context of composite Higgs bosons. Standard model fermions are represented by algebraic spinors of six-dimensional binary Clifford algebra, while ternary Clifford algebra-related flavor projection operators control the allowable flavor-mixing interactions. There are three composite electroweak Higgs bosons resulting from top quark, tau neutrino, and tau lepton condensations. Each of the three condensations gives rise to the masses of four different fermions. The fermion mass hierarchies within these three groups are determined by four-fermion condensations, which break two global chiral symmetries. The four-fermion condensations induce axion-like pseudo-Nambu-Goldstone bosons, which can be dark matter candidates. In addition to the 125 GeV Higgs boson observed at the Large Hadron Collider, we anticipate detection of the tau neutrino composite Higgs boson via the charm quark decay channel.

  12. A data base approach for prediction of deforestation-induced mass wasting events

    Science.gov (United States)

    Logan, T. L.

    1981-01-01

    A major topic of concern in timber management is determining the impact of clear-cutting on slope stability. Deforestation treatments on steep mountain slopes have often resulted in a high frequency of major mass wasting events. The Geographic Information System (GIS) is a potentially useful tool for predicting the location of mass wasting sites. With a raster-based GIS, digitally encoded maps of slide hazard parameters can be overlaid and modeled to produce new maps depicting high-probability slide areas. The objective of the present investigation is to examine the raster-based information system as a tool for predicting the location of the clear-cut mountain slopes that are most likely to experience shallow soil debris avalanches. A literature overview is conducted, taking into account vegetation, roads, precipitation, soil type, slope angle and aspect, and models predicting mass soil movements. Attention is given to a data base approach and aspects of slide prediction.

  13. Determination of the binding sites for oxaliplatin on insulin using mass spectrometry-based approaches

    DEFF Research Database (Denmark)

    Møller, Charlotte; Sprenger, Richard R.; Stürup, Stefan

    2011-01-01

    Using insulin as a model protein for binding of oxaliplatin to proteins, various mass spectrometric approaches and techniques were compared. Several different platinum adducts were observed, e.g. addition of one or two diaminocyclohexane platinum(II) (Pt(dach)) molecules. By top-down analysis...... and fragmentation of the intact insulin-oxaliplatin adduct using nano-electrospray ionisation quadrupole time-of-flight mass spectrometry (nESI-Q-ToF-MS), the major binding site was assigned to histidine5 on the insulin B chain. In order to simplify the interpretation of the mass spectrum, the disulphide bridges...... were reduced. This led to the additional identification of cysteine6 on the A chain as a binding site along with histidine5 on the B chain. Digestion of insulin-oxaliplatin with endoproteinase Glu-C (GluC) followed by reduction led to the formation of five peptides with Pt(dach) attached...

  14. A new approach for accurate mass assignment on a multi-turn time-of-flight mass spectrometer.

    Science.gov (United States)

    Hondo, Toshinobu; Jensen, Kirk R; Aoki, Jun; Toyoda, Michisato

    2017-12-01

    A simple, effective accurate mass assignment procedure for a time-of-flight mass spectrometer is desirable. External mass calibration using a mass calibration standard together with an internal mass reference (lock mass) is a common technique for mass assignment; however, using polynomial fitting can result in mass-dependent errors. By using the multi-turn time-of-flight mass spectrometer infiTOF-UHV, we were able to obtain multiple time-of-flight data from an ion monitored under several different numbers of laps, which were then used to calculate a mass calibration equation. We have developed a data acquisition system that simultaneously monitors spectra at several different lap conditions with on-the-fly centroid determination and scan law estimation, the scan law being a function of acceleration voltage, flight path, and instrumental time delay. Mass errors of less than 0.9 mDa were observed for assigned mass-to-charge ratios (m/z) ranging between 4 and 134 using only (40)Ar(+) as a reference. It was also observed that estimating the scan law on the fly provides excellent mass drift compensation.
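
    The core idea can be sketched numerically: flight times of one reference ion at several lap counts determine the per-lap time and the instrumental delay by linear least squares, and the per-lap time of any other ion then scales as the square root of its m/z. All numbers below are invented placeholders.

    ```python
    # Sketch of the idea behind multi-turn calibration: times of one reference
    # ion recorded at several lap counts give the scan law (per-lap time and
    # instrumental delay) by linear least squares. All numbers are invented.
    import numpy as np

    laps = np.array([10.0, 20.0, 40.0, 80.0])        # lap counts monitored
    t_ref = np.array([131.2, 257.9, 511.3, 1018.1])  # reference flight times (us)

    A = np.column_stack([laps, np.ones_like(laps)])
    (t_lap_ref, t0), *_ = np.linalg.lstsq(A, t_ref, rcond=None)

    # For an ion measured under the same conditions, the per-lap time scales
    # as sqrt(m/z):  m/z = (t_lap_x / t_lap_ref)^2 * (m/z)_ref
    t_lap_x = 208.5   # hypothetical per-lap time of the unknown (us)
    mz = (t_lap_x / t_lap_ref) ** 2 * 39.962         # (m/z)_ref for 40Ar+
    print(f"delay t0 = {t0:.2f} us, assigned m/z = {mz:.3f}")
    ```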

  15. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    Science.gov (United States)

    Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
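
    Conceptually, each realization yields one mass discharge value by summing concentration times Darcy flux over the control-plane cells, and the ensemble of realizations yields the uncertainty. The sketch below is a stripped-down caricature: it draws uncorrelated fields rather than conditional geostatistical realizations, and all statistics and units are invented.

    ```python
    # Conceptual Monte Carlo estimate of mass discharge across a control plane:
    # sum of concentration * Darcy flux * cell area over equally likely
    # realizations. Spatial correlation and data conditioning are omitted.
    import numpy as np

    rng = np.random.default_rng(42)
    ny, nz, n_real = 20, 10, 500
    cell_area = 0.5 * 0.5                       # m^2

    md = np.empty(n_real)
    for i in range(n_real):
        logK = rng.normal(-4.0, 0.5, size=(ny, nz))      # hydraulic conductivity
        q = 10 ** logK * 0.005                           # Darcy flux, gradient 0.005
        c = np.exp(rng.normal(1.0, 1.2, size=(ny, nz)))  # concentration field
        md[i] = np.sum(c * q * cell_area)                # mass/time, realization i

    print(f"mass discharge: median {np.median(md):.3g}, "
          f"90% interval [{np.quantile(md, 0.05):.3g}, {np.quantile(md, 0.95):.3g}]")
    ```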

  16. A novel featureless approach to mass detection in digital mammograms based on support vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Campanini, Renato [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Dongiovanni, Danilo [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Iampieri, Emiro [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Lanconelli, Nico [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Masotti, Matteo [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Palermo, Giuseppe [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Riccardi, Alessandro [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Roffilli, Matteo [Department of Computer Science, University of Bologna, Bologna (Italy)

    2004-03-21

    In this work, we present a novel approach to mass detection in digital mammograms. The great variability of the appearance of masses is the main obstacle to building a mass detection method. It is indeed demanding to characterize all the varieties of masses with a reduced set of features. Hence, in our approach we have chosen not to extract any features for the detection of the regions of interest; instead, we exploit all the information available in the image. A multiresolution overcomplete wavelet representation is performed, in order to codify the image with redundancy of information. The vectors of the resulting very large space are then provided to a first support vector machine (SVM) classifier. The detection task is considered here as a two-class pattern recognition problem: crops are classified as suspect or not by using this SVM classifier. False candidates are eliminated with a second, cascaded SVM. To further reduce the number of false positives, an ensemble of experts is applied: the final suspect regions are obtained by using a voting strategy. The sensitivity of the presented system is nearly 80% with a false-positive rate of 1.1 marks per image, estimated on images from the USF DDSM database.
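
    Schematically, the cascade reduces to two classifiers applied in sequence to image crops (or their overcomplete wavelet coefficients). The sketch below uses random arrays in place of mammogram data and sklearn's SVC; the kernels and parameters are illustrative, not the paper's.

    ```python
    # Schematic two-stage SVM cascade for the "featureless" detection idea:
    # classify crops directly, then pass positives to a second SVM.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)
    X_train = rng.normal(size=(400, 1024))      # stand-in for wavelet coefficients
    y_train = rng.integers(0, 2, size=400)      # 1 = mass, 0 = normal tissue

    svm_stage1 = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
    svm_stage2 = SVC(kernel="poly", degree=3).fit(X_train, y_train)

    crops = rng.normal(size=(50, 1024))         # candidate crops from one image
    stage1_pos = crops[svm_stage1.predict(crops) == 1]
    final = (stage1_pos[svm_stage2.predict(stage1_pos) == 1]
             if len(stage1_pos) else stage1_pos)
    print(f"{len(final)} suspect regions after the cascade")
    ```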

  17. A Real-Time Temperature Data Transmission Approach for Intelligent Cooling Control of Mass Concrete

    Directory of Open Access Journals (Sweden)

    Peng Lin

    2014-01-01

    The primary aim of the study presented in this paper is to propose a real-time temperature data transmission approach for intelligent cooling control of mass concrete. A mathematical description of a digital temperature control model is introduced in detail. Based on pipe-mounted and electrically linked temperature sensors, together with data-handling hardware and software, a stable, real-time, highly effective temperature data transmission solution is developed and utilized within the intelligent mass concrete cooling control system. Once the user has issued the relevant command, the proposed programmable logic controller (PLC) code performs all necessary steps without further interaction: it controls the hardware, acquires and processes the measurements, and displays the data accurately. Hardening concrete is an aggregate of complex physicochemical processes, including the liberation of heat. In an application case study, the proposed control system prevented unwanted structural change within the massive concrete blocks caused by these exothermic processes. In conclusion, the proposed temperature data transmission approach has proved very useful for the temperature monitoring of a high arch dam and is able to control thermal stresses in mass concrete for similar projects involving mass concrete.

  18. How mass spectrometric approaches applied to bacterial identification have revolutionized the study of human gut microbiota.

    Science.gov (United States)

    Grégory, Dubourg; Chaudet, Hervé; Lagier, Jean-Christophe; Raoult, Didier

    2018-03-01

    Describing the human gut microbiota is one of the most exciting challenges of the 21st century. Currently, high-throughput sequencing methods are considered the gold standard for this purpose; however, they suffer from several drawbacks, including their inability to detect minority populations. The advent of mass-spectrometric (MS) approaches to identify cultured bacteria in clinical microbiology enabled the creation of the culturomics approach, which aims to establish a comprehensive repertoire of cultured prokaryotes from human specimens using extensive culture conditions. Areas covered: This review first underlines how mass spectrometric approaches have revolutionized clinical microbiology. It then highlights the contribution of MS-based methods to culturomics studies, paying particular attention to the extension of the human gut microbiota repertoire through the discovery of new bacterial species. Expert commentary: MS-based approaches have enabled cultivation methods to be resuscitated to study the human gut microbiota and thus to fill in the blanks left by high-throughput sequencing methods in terms of culturing minority populations. Continued efforts to recover new taxa using culture methods, combined with their rapid implementation in genomic databases, would allow for an exhaustive analysis of the gut microbiota through the use of a comprehensive approach.

  19. A survey of existing and proposed classical and quantum approaches to the photon mass

    Science.gov (United States)

    Spavieri, G.; Quintero, J.; Gillies, G. T.; Rodríguez, M.

    2011-02-01

    Over the past twenty years, there have been several careful experimental, observational and phenomenological investigations aimed at searching for and establishing ever tighter bounds on the possible mass of the photon. There are many fascinating and paradoxical physical implications that would arise from the presence of even a very small value for it, and thus such searches have always been well motivated in terms of the new physics that would result. We provide a brief overview of the theoretical background and classical motivations for this work and the early tests of the exactness of Coulomb's law that underlie it. We then go on to address the modern situation, in which quantum physics approaches come to attention. Among them we focus especially on the implications that the Aharonov-Bohm and Aharonov-Casher class of effects have on searches for a photon mass. These arise in several different ways and can lead to experiments that might involve the interaction of magnetic dipoles, electric dipoles, or charged particles with suitable potentials. Still other quantum-based approaches employ measurements of the g-factor of the electron. Plausible target sensitivities for limits on the photon mass as sought by the various quantum approaches are in the range of 10^-53 to 10^-54 g. Possible experimental arrangements for the associated experiments are discussed. We close with an assessment of the state of the art and a prognosis for future work.

  20. A survey of existing and proposed classical and quantum approaches to the photon mass

    International Nuclear Information System (INIS)

    Spavieri, G.; Quintero, J.; Gillies, G.T.; Rodriguez, M.

    2011-01-01

    Over the past twenty years, there have been several careful experimental, observational and phenomenological investigations aimed at searching for and establishing ever tighter bounds on the possible mass of the photon. There are many fascinating and paradoxical physical implications that would arise from the presence of even a very small value for it, and thus such searches have always been well motivated in terms of the new physics that would result. We provide a brief overview of the theoretical background and classical motivations for this work and the early tests of the exactness of Coulomb's law that underlie it. We then go on to address the modern situation, in which quantum physics approaches come to attention. Among them we focus especially on the implications that the Aharonov-Bohm and Aharonov-Casher class of effects have on searches for a photon mass. These arise in several different ways and can lead to experiments that might involve the interaction of magnetic dipoles, electric dipoles, or charged particles with suitable potentials. Still other quantum-based approaches employ measurements of the g-factor of the electron. Plausible target sensitivities for limits on the photon mass as sought by the various quantum approaches are in the range of 10^-53 to 10^-54 g. Possible experimental arrangements for the associated experiments are discussed. We close with an assessment of the state of the art and a prognosis for future work. (authors)

  1. An enhanced nonlinear damping approach accounting for system constraints in active mass dampers

    Science.gov (United States)

    Venanzi, Ilaria; Ierimonti, Laura; Ubertini, Filippo

    2015-11-01

    Active mass dampers are a viable solution for mitigating wind-induced vibrations in high-rise buildings and improving occupants' comfort. Such devices suffer particularly when they reach force saturation of the actuators and the maximum extension of their stroke, which may occur under severe loading conditions (e.g. wind gusts and earthquakes). Exceeding the actuators' physical limits can impair the control performance of the system or even lead to device damage, with a consequent need for repair or substitution of part of the control system. Controllers for active mass dampers should therefore account for these technological limits. Prior work of the authors was devoted to stroke issues and led to the definition of a nonlinear damping approach that is very easy to implement in practice. It consisted of a modified skyhook algorithm complemented with a nonlinear braking force to reverse the direction of the mass before it reaches the stroke limit. This paper presents an enhanced version of this approach that also accounts for force saturation of the actuator while keeping the simplicity of implementation. This is achieved by modulating the control force by a nonlinear smooth function depending on the ratio between the actuator's force and its saturation limit. Results of a numerical investigation show that the proposed approach provides results similar to the method of the State Dependent Riccati Equation, a well-established technique for designing optimal controllers for constrained systems, yet one very difficult to apply in practice.
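
    One plausible form of such a modulation, shown below, scales the skyhook command smoothly toward zero as it approaches the saturation limit; the specific shape function and exponent are assumptions for illustration, not the authors' law.

    ```python
    # One possible smooth modulation of a commanded force by the ratio of
    # command to saturation limit; the shape function is an assumption.
    import numpy as np

    def modulated_force(u_cmd, u_max, p=4):
        """Scale the command down smoothly as it approaches saturation."""
        r = np.clip(np.abs(u_cmd) / u_max, 0.0, 1.0)
        return u_cmd * (1.0 - r**p)          # ~unchanged when small, -> 0 at limit

    u = np.linspace(-150.0, 150.0, 7)        # commanded force (kN), limit 100 kN
    print(np.round(modulated_force(u, 100.0), 1))
    ```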

  2. Mass Media as Actor in Political Process: Evolution of the Western Approaches since the 1950s. (Part 1)

    OpenAIRE

    Гуторов, Владимир Александрович

    2013-01-01

    The author analyses the evolution of Western political science's approaches to mass media and the media's role in the political process in liberal democracies. The author focuses on theories of "Minimal Effect," "Mediocracy," "Effects Research," "Text Analysis," and the "Use and Gratification Approach," and discusses A. Giddens's structuration theory as the most elaborate approach to interpreting the role of mass media in Western political and social science. Key words: mass and political communic...

  3. Desempenho produtivo e massa média de frutos de morangueiro obtidos de diferentes sistemas de cultivo / Performance and average mass of strawberry fruit obtained from different cropping systems

    Directory of Open Access Journals (Sweden)

    Letícia Kurchaidt Pinheiro Camargo

    2010-11-01

    The goal of this work was to evaluate the yield and average fruit mass of eight strawberry (Fragaria x ananassa) cultivars (Aromas, Camino Real, Campinas, Dover, Oso Grande, Toyonoka, Tudla-Milsei and Ventana) grown in different cropping systems. The experimental design was randomized blocks with four replications. The fruits were harvested from October 2007 to February 2008. The results indicate that, with respect to yield, the organic system was more effective for the cultivars Oso Grande and Tudla-Milsei, and the conventional system for Dover and Toyonoka. The highest average fruit masses were found for the cultivars Tudla-Milsei and Ventana in both cropping systems. The cultivar that stood out in both the organic and the conventional system was Tudla-Milsei, with the highest yields and the fruits of highest average mass. The cultivars responded differently to the cultural management employed in each cropping system, which indicates that there is variability among the commercial cultivars most widely planted today. Therefore, the choice of cultivar, with yield in view, should be made according to its performance within each cropping system.

  4. Mass Spectrometry Imaging of Biological Tissue: An Approach for Multicenter Studies

    Energy Technology Data Exchange (ETDEWEB)

    Rompp, Andreas; Both, Jean-Pierre; Brunelle, Alain; Heeren, Ronald M.; Laprevote, Olivier; Prideaux, Brendan; Seyer, Alexandre; Spengler, Bernhard; Stoeckli, Markus; Smith, Donald F.

    2015-03-01

    Mass spectrometry imaging has become a popular tool for probing the chemical complexity of biological surfaces. This led to the development of a wide range of instrumentation and preparation protocols. It is thus desirable to evaluate and compare the data output from different methodologies and mass spectrometers. Here, we present an approach for the comparison of mass spectrometry imaging data from different laboratories (often referred to as multicenter studies). This is exemplified by the analysis of mouse brain sections in five laboratories in Europe and the USA. The instrumentation includes matrix-assisted laser desorption/ionization (MALDI)-time-of-flight (TOF), MALDI-QTOF, MALDI-Fourier transform ion cyclotron resonance (FTICR), atmospheric-pressure (AP)-MALDI-Orbitrap, and cluster TOF-secondary ion mass spectrometry (SIMS). Experimental parameters such as measurement speed, imaging bin width, and mass spectrometric parameters are discussed. All datasets were converted to the standard data format imzML and displayed in a common open-source software with identical parameters for visualization, which facilitates direct comparison of MS images. The imzML conversion also allowed exchange of fully functional MS imaging datasets between the different laboratories. The experiments ranged from overview measurements of the full mouse brain to detailed analysis of smaller features (depending on spatial resolution settings), but common histological features such as the corpus callosum were visible in all measurements. High spatial resolution measurements of AP-MALDI-Orbitrap and TOF-SIMS showed comparable structures in the low-micrometer range. We discuss general considerations for planning and performing multicenter studies in mass spectrometry imaging. This includes details on the selection, distribution, and preparation of tissue samples as well as on data handling. Such multicenter studies in combination with ongoing activities for reporting guidelines, a common
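
    As a sketch of what exchanging imzML datasets enables, the snippet below builds a single-ion image from an imzML file, assuming the open-source pyimzml package and its ImzMLParser; the file name and target m/z are placeholders.

    ```python
    # Sketch: build one ion image from an imzML dataset, assuming the pyimzml
    # package (ImzMLParser). File name and target m/z are placeholders.
    import numpy as np
    from pyimzml.ImzMLParser import ImzMLParser

    parser = ImzMLParser("mouse_brain_lab1.imzML")
    target_mz, tol = 826.57, 0.05            # hypothetical lipid ion, +/- window

    image = {}
    for i, (x, y, z) in enumerate(parser.coordinates):
        mzs, intensities = parser.getspectrum(i)
        mzs, intensities = np.asarray(mzs), np.asarray(intensities)
        mask = np.abs(mzs - target_mz) <= tol
        image[(x, y)] = intensities[mask].sum()   # ion-image pixel value

    print(f"read {len(image)} pixels for m/z {target_mz}")
    ```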

  5. Non-invasive Estimation of Temperature during Physiotherapeutic Ultrasound Application Using the Average Gray-Level Content of B-Mode Images: A Metrological Approach.

    Science.gov (United States)

    Alvarenga, André V; Wilkens, Volker; Georg, Olga; Costa-Félix, Rodrigo P B

    2017-09-01

    Healing therapies that make use of ultrasound are based on raising the temperature in biological tissue. However, it is not possible to heal impaired tissue simply by applying a high dose of ultrasound. The temperature of the tissue is ultimately the physical quantity that has to be assessed to minimize the risk of undesired injury. Invasive temperature measurement techniques are easy to use, despite being detrimental to human well-being. Another approach to assessing a rise in tissue temperature is to derive the material's general response to temperature variations from ultrasonic parameters. In this article, a method for evaluating temperature variations is described. The method is based on the analytical study of an ultrasonic image, in which gray-level variations are correlated with the temperature variations in a tissue-mimicking material. The physical assumption is that temperature variations induce wave propagation changes that modify the backscattered ultrasound signal and are expressed in the ultrasonographic images. For a temperature variation of about 15°C, the expanded uncertainty for a coverage probability of 0.95 was found to be 2.5°C in the heating regime and 1.9°C in the cooling regime. The model proposed in this article can be used in a straightforward manner to monitor temperature variation during a physiotherapeutic ultrasound application, provided the tissue-mimicking material approach is transferred to actual biological tissue. The novelty of this approach resides in the metrology-based investigation outlined here, as well as in its ease of reproducibility. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
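
    The image-side quantity is simply the mean gray level of a region of interest tracked across B-mode frames, mapped to temperature through a previously fitted calibration. The sketch below uses random arrays in place of B-mode frames and an invented linear calibration; it only illustrates the bookkeeping.

    ```python
    # Sketch: track the mean gray level of a B-mode region of interest over
    # frames and map it to a temperature change via an invented calibration.
    import numpy as np

    def mean_gray_level(frame, roi):
        r0, r1, c0, c1 = roi
        return frame[r0:r1, c0:c1].mean()

    rng = np.random.default_rng(3)
    frames = rng.integers(0, 256, size=(50, 120, 160)).astype(float)  # fake B-mode
    gl = np.array([mean_gray_level(f, (40, 80, 60, 120)) for f in frames])

    a = 0.8                            # hypothetical slope, deg C per gray level
    delta_T = a * (gl - gl[0])         # temperature change vs. the first frame
    print(f"estimated change after {len(frames)} frames: {delta_T[-1]:.2f} deg C")
    ```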

  6. Crack initiation life in notched Ti-6Al-4V titanium bars under uniaxial and multiaxial fatigue: synthesis based on the averaged strain energy density approach

    Directory of Open Access Journals (Sweden)

    Giovanni Meneghetti

    2017-07-01

    The fatigue behaviour of circumferentially notched specimens made of the titanium alloy Ti-6Al-4V has been analysed. To investigate the notch effect on the fatigue strength, pure bending, pure torsion and multiaxial bending-torsion fatigue tests have been carried out on specimens characterized by two different root radii, namely 0.1 and 4 mm. Crack nucleation and subsequent propagation have been accurately monitored by using the direct current potential drop (DCPD) technique. Based on the results obtained from the potential drop technique, the crack initiation life has been defined as corresponding to a relative potential drop increase ΔV/V0 equal to 1%, and it has been used as the failure criterion. In doing so, the effect of extrinsic mechanisms operating during the crack propagation phase, such as sliding contact, friction and meshing between fracture surfaces, is expected to be reduced. The experimental fatigue test results have been re-analysed by using the local strain energy density (SED) averaged over a structural volume having radius R0 and surrounding the notch tip. Finally, the use of the local strain energy density parameter allowed us to properly correlate the crack initiation life of Ti-6Al-4V notched specimens, despite the different notch geometries and loading conditions involved in the tests.
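
    For reference, the averaged SED criterion has the standard general form below (a sketch: the control radius R0 and the critical value Wc are material properties, and in fatigue the criterion is applied to the range of the averaged SED over load cycles):

    ```latex
    \bar{W} \;=\; \frac{1}{V(R_0)} \int_{V(R_0)} W \,\mathrm{d}V ,
    \qquad
    \text{failure predicted when } \bar{W} \,\ge\, W_c = \frac{\sigma_t^{2}}{2E}
    ```

    where W is the pointwise strain energy density, σt the material tensile strength, and E Young's modulus.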

  7. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of the R project, followed by some experiments using a Monte Carlo method.
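
    For context, the averaging model that R-Average fits has the standard Information Integration Theory form below, with scale values s_k, weights w_k, and initial-state parameters w_0 and s_0; interactions are absorbed by differential weighting rather than extra terms.

    ```latex
    R \;=\; \frac{w_0 s_0 + \sum_{k=1}^{n} w_k s_k}{w_0 + \sum_{k=1}^{n} w_k}
    ```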

  8. Superconductivity. Quasiparticle mass enhancement approaching optimal doping in a high-T(c) superconductor.

    Science.gov (United States)

    Ramshaw, B J; Sebastian, S E; McDonald, R D; Day, James; Tan, B S; Zhu, Z; Betts, J B; Liang, Ruixing; Bonn, D A; Hardy, W N; Harrison, N

    2015-04-17

    In the quest for superconductors with higher transition temperatures (T(c)), one emerging motif is that electronic interactions favorable for superconductivity can be enhanced by fluctuations of a broken-symmetry phase. Recent experiments have suggested the existence of the requisite broken-symmetry phase in the high-T(c) cuprates, but the impact of such a phase on the ground-state electronic interactions has remained unclear. We used magnetic fields exceeding 90 tesla to access the underlying metallic state of the cuprate YBa2Cu3O(6+δ) over a wide range of doping, and observed magnetic quantum oscillations that reveal a strong enhancement of the quasiparticle effective mass toward optimal doping. This mass enhancement results from increasing electronic interactions approaching optimal doping, and suggests a quantum critical point at a hole doping of p(crit) ≈ 0.18. Copyright © 2015, American Association for the Advancement of Science.

  9. Topographical change caused by moderate and small floods in a gravel bed ephemeral river – a depth-averaged morphodynamic simulation approach

    Directory of Open Access Journals (Sweden)

    E. S. Lotsari

    2018-03-01

    In ephemeral rivers, channel morphology represents a snapshot at the end of a succession of geomorphic changes caused by floods. In most cases, the channel shape and bedform migration during different phases of a flood hydrograph cannot be identified from field evidence. This paper analyses the timing of riverbed erosion and deposition of a gravel bed ephemeral river channel (Rambla de la Viuda, Spain) during consecutive moderate-magnitude (March 2013) and low-magnitude (May 2013) discharge events, by applying a morphodynamic model (Delft3D) calibrated with pre- and post-event surveys by RTK-GPS points and mobile laser scanning. The study reach is mainly depositional and all bedload sediment supplied from adjacent upstream areas is trapped in the study segment, forming gravel lobes. Therefore, estimates of the total bedload sediment mass balance can be obtained from pre- and post-event field surveys for each flood event. The spatially varying grain size data and transport equations were the most important factors for model calibration, in addition to flow discharge. The channel acted as a braided channel during the lower flows of the two discharge events, but when bars were submerged in the high discharges of May 2013, the high fluid forces followed a meandering river planform. The model results showed that erosion and deposition were in total greater during the long-lasting receding phase than during the rising phase of the flood hydrographs. In the case of the moderate-magnitude discharge event, deposition and erosion peaks were predicted to occur at the beginning of the hydrograph, whereas deposition dominated throughout the event. Conversely, the low-magnitude discharge event only experienced the peak of channel changes after the discharge peak. Thus, both types of discharge events highlight the importance of the receding phase for this type of gravel bed ephemeral river channel.

  10. Topographical change caused by moderate and small floods in a gravel bed ephemeral river - a depth-averaged morphodynamic simulation approach

    Science.gov (United States)

    Lotsari, Eliisa S.; Calle, Mikel; Benito, Gerardo; Kukko, Antero; Kaartinen, Harri; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri

    2018-03-01

    In ephemeral rivers, channel morphology represents a snapshot at the end of a succession of geomorphic changes caused by floods. In most cases, the channel shape and bedform migration during different phases of a flood hydrograph cannot be identified from field evidence. This paper analyses the timing of riverbed erosion and deposition of a gravel bed ephemeral river channel (Rambla de la Viuda, Spain) during consecutive and moderate- (March 2013) and low-magnitude (May 2013) discharge events, by applying a morphodynamic model (Delft3D) calibrated with pre- and post-event surveys by RTK-GPS points and mobile laser scanning. The study reach is mainly depositional and all bedload sediment supplied from adjacent upstream areas is trapped in the study segment forming gravel lobes. Therefore, estimates of total bedload sediment mass balance can be obtained from pre- and post-field survey for each flood event. The spatially varying grain size data and transport equations were the most important factors for model calibration, in addition to flow discharge. The channel acted as a braided channel during the lower flows of the two discharge events, but when bars were submerged in the high discharges of May 2013, the high fluid forces followed a meandering river planform. The model results showed that erosion and deposition were in total greater during the long-lasting receding phase than during the rising phase of the flood hydrographs. In the case of the moderate-magnitude discharge event, deposition and erosion peaks were predicted to occur at the beginning of the hydrograph, whereas deposition dominated throughout the event. Conversely, the low-magnitude discharge event only experienced the peak of channel changes after the discharge peak. Thus, both types of discharge events highlight the importance of the receding phase for this type of gravel bed ephemeral river channel.

  11. A Ligand-observed Mass Spectrometry Approach Integrated into the Fragment Based Lead Discovery Pipeline

    Science.gov (United States)

    Chen, Xin; Qin, Shanshan; Chen, Shuai; Li, Jinlong; Li, Lixin; Wang, Zhongling; Wang, Quan; Lin, Jianping; Yang, Cheng; Shui, Wenqing

    2015-01-01

    In fragment-based lead discovery (FBLD), a cascade combining multiple orthogonal technologies is required for reliable detection and characterization of fragment binding to the target. Given the limitations of the mainstream screening techniques, we presented a ligand-observed mass spectrometry approach to expand the toolkits and increase the flexibility of building a FBLD pipeline especially for tough targets. In this study, this approach was integrated into a FBLD program targeting the HCV RNA polymerase NS5B. Our ligand-observed mass spectrometry analysis resulted in the discovery of 10 hits from a 384-member fragment library through two independent screens of complex cocktails and a follow-up validation assay. Moreover, this MS-based approach enabled quantitative measurement of weak binding affinities of fragments which was in general consistent with SPR analysis. Five out of the ten hits were then successfully translated to X-ray structures of fragment-bound complexes to lay a foundation for structure-based inhibitor design. With distinctive strengths in terms of high capacity and speed, minimal method development, easy sample preparation, low material consumption and quantitative capability, this MS-based assay is anticipated to be a valuable addition to the repertoire of current fragment screening techniques. PMID:25666181

  12. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Through this spatial curvature correction term, inhomogeneities and spatial averaging can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  13. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B-bar obey Ω_m + Ω_R + Ω_Λ + Ω_Q = 1, where Ω_m, Ω_R and Ω_Λ correspond to the standard Friedmannian parameters, while Ω_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  14. Risk-oriented approach application at planning and organizing antiepidemic provision of mass events

    Directory of Open Access Journals (Sweden)

    D.V. Efremenko

    2017-03-01

    Full Text Available Mass events tend to become more and more dangerous for population health, as they cause various health risks, including risks of infectious pathologies. Our research goal was to work out scientifically grounded approaches to assessing and managing epidemiologic risks, and to analyze how they were applied during preparation for the Olympics-2014, the Games themselves, and other mass events which took place in 2014-2016. We assessed the risks of epidemiologic complications with the use of diagnostic test-systems, applying a new technique which allows for the peculiarities of mass events. The technique is based on ranking infections into 3 potential danger categories in accordance with criteria representing quantitative and qualitative predictive parameters (predictors). Application of a risk-oriented approach and multi-factor analysis allowed us to determine the maximum possible requirements for providing sanitary-epidemiologic welfare for each separate nosologic form. Enhancing our laboratory base with test-systems for specific indication, in line with these calculations, enabled us, on the one hand, to secure the required preparations and, on the other hand, to avoid unnecessary expenditures. To facilitate decision-making during the Olympics-2014 we used an innovative product, namely a computer program based on a geoinformation system (GIS). It helped us to simplify and accelerate information exchange within the frameworks of intra- and interdepartmental interaction. A "dynamic epidemiologic threshold" was calculated daily for measles, chickenpox, acute enteric infections and acute respiratory viral infections of various etiology. If it was exceeded, or if an "epidemiologic spot" for one or several nosologies became possible, an automatic warning appeared in the GIS. Planning prevention activities regarding feral herd infections and zoogenous extremely dangerous infections which were endemic

  15. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John

    and the hydraulic gradient across the control plane and are consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox...... transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. Tests show that the decoupled approach is both efficient and able to provide accurate uncertainty...

  16. A Data-Driven Approach to Develop Physically Sound Predictors: Application to Depth-Averaged Velocities and Drag Coefficients on Vegetated Flows

    Science.gov (United States)

    Tinoco, R. O.; Goldstein, E. B.; Coco, G.

    2016-12-01

    We use a machine learning approach to seek accurate, physically sound predictors, to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use data published from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations, and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical, but "hybrid" models, coupling results from machine learning methodologies into physics-based models.
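
    A minimal sketch of this kind of symbolic-regression workflow, assuming the third-party gplearn package and fully synthetic data; the input variables, the assumed "true" relation and all settings are placeholders, not the study's predictors.

```python
# Genetic-programming symbolic regression on synthetic vegetated-flow data.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs: vegetation frontal area a (1/m) and water depth h (m).
X = rng.uniform([0.1, 0.2], [10.0, 2.0], size=(200, 2))
y = 1.0 / np.sqrt(X[:, 0] * X[:, 1])   # assumed "true" relation U ~ (a*h)^(-1/2)

# Restricting the function set and penalizing length favors compact,
# dimensionally simple expressions, as advocated in the abstract.
est = SymbolicRegressor(population_size=500, generations=10,
                        function_set=('mul', 'div', 'sqrt'),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print(est._program)   # best evolved expression
```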

  17. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  18. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  19. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  20. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...
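
    One classical averaging operation on positive definite matrices in settings like those listed above is the geometric mean A # B; a minimal NumPy/SciPy sketch (the test matrices are arbitrary examples):

```python
# Geometric mean of two positive definite matrices:
# A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean(A, B):
    rA = sqrtm(A)
    rAinv = inv(rA)
    return rA @ sqrtm(rAinv @ B @ rAinv) @ rA

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
G = geometric_mean(A, B)
print(np.allclose(G, G.T.conj(), atol=1e-8))  # numerically Hermitian, as expected
```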

  1. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
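
    As a hedged illustration of the objective (not the paper's algorithms): for a play that ultimately loops on a cycle of total weight zero, the average-energy reduces to the mean of the running energy levels over one cycle.

```python
# Toy average-energy computation for an ultimately periodic play.

def average_energy(cycle, start_level=0):
    """Long-run average accumulated energy of a play looping on `cycle`;
    finite only when the cycle's total weight is zero."""
    assert sum(cycle) == 0, "average-energy diverges on non-zero-weight cycles"
    level, levels = start_level, []
    for delta in cycle:
        level += delta
        levels.append(level)
    return sum(levels) / len(levels)

print(average_energy([+2, -1, -1]))  # levels 2, 1, 0 -> average 1.0
```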

  2. In silico approaches to study mass and energy flows in microbial consortia: a syntrophic case study

    Directory of Open Access Journals (Sweden)

    Mallette Natasha

    2009-12-01

    Full Text Available Abstract Background Three methods were developed for the application of stoichiometry-based network analysis approaches, including elementary mode analysis, to the study of mass and energy flows in microbial communities. Each has distinct advantages and disadvantages suitable for analyzing systems with different degrees of complexity and a priori knowledge. These approaches were tested and compared using data from the thermophilic, phototrophic mat communities from Octopus and Mushroom Springs in Yellowstone National Park (USA). The models were based on three distinct microbial guilds: oxygenic phototrophs, filamentous anoxygenic phototrophs, and sulfate-reducing bacteria. Two phases, day and night, were modeled to account for differences in the sources of mass and energy and the routes available for their exchange. Results The in silico models were used to explore fundamental questions in ecology, including the prediction of and explanation for measured relative abundances of primary producers in the mat, theoretical tradeoffs between overall productivity and the generation of toxic by-products, and the relative robustness of various guild interactions. Conclusion The three modeling approaches represent a flexible toolbox for creating cellular metabolic networks to study microbial communities on scales ranging from cells to ecosystems. A comparison of the three methods highlights considerations for selecting the one most appropriate for a given microbial system. For instance, communities represented only by metagenomic data can be modeled using the pooled method, which analyzes a community's total metabolic potential without attempting to partition enzymes to different organisms. Systems with extensive a priori information on microbial guilds can be represented using the compartmentalized technique, employing distinct control volumes to separate guild-appropriate enzymes and metabolites. If the complexity of a compartmentalized network creates an
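
    A minimal stoichiometry-based sketch in the spirit of these methods: steady-state flux vectors lie in the null space of the stoichiometric matrix (S v = 0). The toy network is invented; real elementary mode analysis additionally enforces irreversibility and support-minimality constraints.

```python
# Null-space computation for a toy metabolic network with SymPy.
import sympy as sp

# Rows: internal metabolites A, B; columns: reactions R1..R4.
S = sp.Matrix([[1, -1,  0, -1],
               [0,  1, -1,  0]])

# Every steady-state flux distribution is a combination of these vectors.
for basis_vector in S.nullspace():
    sp.pprint(basis_vector.T)
```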

  3. Complexified quantum field theory and 'mass without mass' from multidimensional fractional actionlike variational approach with dynamical fractional exponents

    International Nuclear Information System (INIS)

    El-Nabulsi, Ahmad Rami

    2009-01-01

    A multidimensional fractional actionlike variational problem with time-dependent dynamical fractional exponents is constructed. Fractional Euler-Lagrange equations are derived and discussed in some detail. The results obtained are used to explore some novel aspects of fractional quantum field theory, where many interesting consequences are revealed, in particular the complexification of quantum field theory, including Dirac operators, and the novel notion of 'mass without mass'.

  4. A mass graph-based approach for the identification of modified proteoforms using top-down tandem mass spectra.

    Science.gov (United States)

    Kou, Qiang; Wu, Si; Tolic, Nikola; Paša-Tolic, Ljiljana; Liu, Yunlong; Liu, Xiaowen

    2017-05-01

    Although proteomics has rapidly developed in the past decade, researchers are still in the early stage of exploring the world of complex proteoforms, which are protein products with various primary structure alterations resulting from gene mutations, alternative splicing, post-translational modifications, and other biological processes. Proteoform identification is essential to mapping proteoforms to their biological functions as well as discovering novel proteoforms and new protein functions. Top-down mass spectrometry is the method of choice for identifying complex proteoforms because it provides a 'bird's eye view' of intact proteoforms. The combinatorial explosion of various alterations on a protein may result in billions of possible proteoforms, making proteoform identification a challenging computational problem. We propose a new data structure, called the mass graph, for efficient representation of proteoforms and design mass graph alignment algorithms. We developed TopMG, a mass graph-based software tool for proteoform identification by top-down mass spectrometry. Experiments on top-down mass spectrometry datasets showed that TopMG outperformed existing methods in identifying complex proteoforms. http://proteomics.informatics.iupui.edu/software/topmg/. xwliu@iupui.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  5. Doping control analysis of trimetazidine and characterization of major metabolites using mass spectrometric approaches.

    Science.gov (United States)

    Sigmund, Gerd; Koch, Anja; Orlovius, Anne-Katrin; Guddat, Sven; Thomas, Andreas; Schänzer, Wilhelm; Thevis, Mario

    2014-01-01

    Since January 2014, the anti-anginal drug trimetazidine [1-(2,3,4-trimethoxybenzyl)-piperazine] has been classified as a prohibited substance by the World Anti-Doping Agency (WADA), necessitating specific and robust detection methods in sports drug testing laboratories. In the present study, the implementation of the intact therapeutic agent into two different initial testing procedures based on gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) is reported, along with the characterization of urinary metabolites by electrospray ionization-high resolution/high accuracy (tandem) mass spectrometry. For GC-MS analyses, urine samples were subjected to liquid-liquid extraction sample preparation, while LC-MS/MS analyses were conducted by established 'dilute-and-inject' approaches. Both screening methods were validated for trimetazidine concerning specificity, limits of detection (0.5-50 ng/mL), and intra-day and inter-day imprecision. Doping control samples were used to complement the LC-MS/MS-based assay, although intact trimetazidine was found to be the most abundant of the relevant trimetazidine-related analytes in all tested sports drug testing samples. Retrospective data mining regarding doping control analyses conducted between 1999 and 2013 at the Cologne Doping Control Laboratory concerning trimetazidine revealed a considerable prevalence of the drug, particularly in endurance and strength sports, accounting for up to 39 findings per year. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Coupled sulfur isotopic and chemical mass transfer modeling: Approach and application to dynamic hydrothermal processes

    International Nuclear Information System (INIS)

    Janecky, D.R.

    1988-01-01

    A computational modeling code (EQPS⇌S) has been developed to examine sulfur isotopic distribution pathways coupled with calculations of chemical mass transfer pathways. A post-processor approach to EQ6 calculations was chosen so that a variety of isotopic pathways could be examined for each reaction pathway. Two types of major bounding conditions were implemented: (1) equilibrium isotopic exchange between sulfate and sulfide species or exchange only accompanying chemical reduction and oxidation events, and (2) existence or lack of isotopic exchange between solution species and precipitated minerals, parallel to the open and closed chemical system formulations of chemical mass transfer modeling codes. All of the chemical data necessary to explicitly calculate isotopic distribution pathways is generated by most mass transfer modeling codes and can be input to the EQPS code. Routines are built in to directly handle EQ6 tabular files. Chemical reaction models of seafloor hydrothermal vent processes and accompanying sulfur isotopic distribution pathways illustrate the capabilities of coupling EQPS⇌S with EQ6 calculations, including the extent of differences that can exist due to the isotopic bounding condition assumptions described above. 11 refs., 2 figs
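
    As a hedged sketch of the isotopic bookkeeping such a post-processor performs (not the EQ6/EQPS code itself): at equilibrium, the sulfate and sulfide delta-values must honor both the bulk isotope mass balance and the fractionation between the two pools. All numbers below are illustrative.

```python
# Equilibrium partitioning of a bulk delta34S between sulfate and sulfide.

def partition_delta34S(delta_bulk, f_sulfate, eps_sulfate_sulfide):
    """Solve delta_bulk = f*d_so4 + (1-f)*d_h2s  and  d_so4 - d_h2s = eps.

    f_sulfate: mole fraction of sulfur present as sulfate.
    eps_sulfate_sulfide: equilibrium enrichment (permil) of sulfate over sulfide.
    """
    d_h2s = delta_bulk - f_sulfate * eps_sulfate_sulfide
    d_so4 = d_h2s + eps_sulfate_sulfide
    return d_so4, d_h2s

print(partition_delta34S(delta_bulk=5.0, f_sulfate=0.7,
                         eps_sulfate_sulfide=25.0))  # -> (12.5, -12.5)
```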

  7. A High Throughput Ambient Mass Spectrometric Approach to Species Identification and Classification from Chemical Fingerprint Signatures

    Science.gov (United States)

    Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.

    2015-01-01

    A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes are instantaneously detected. The result is identification and species level classification based on the entire DART-MS spectrum. Here, we illustrate how the method can be used to: (1) distinguish between endangered woods regulated by the Convention for the International Trade of Endangered Flora and Fauna (CITES) treaty; (2) assess the origin and by extension the properties of biodiesel feedstocks; (3) determine insect species from analysis of puparial casings; (4) distinguish between psychoactive plants products; and (5) differentiate between Eucalyptus species. An advantage of the hierarchical clustering approach to processing of the DART-MS derived fingerprint is that it shows both similarities and differences between species based on their chemotypes. Furthermore, full knowledge of the identities of the constituents contained within the small molecule profile of analyzed samples is not required. PMID:26156000
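
    A minimal sketch of the unsupervised step described above, assuming spectra already binned to a common m/z axis; the random fingerprints below stand in for real DART-MS data.

```python
# Hierarchical clustering of mass-spectral fingerprints.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# 6 samples x 500 m/z bins: two synthetic "species" with distinct base patterns.
base_a, base_b = rng.random(500), rng.random(500)
spectra = np.vstack([base_a + 0.05 * rng.random(500) for _ in range(3)] +
                    [base_b + 0.05 * rng.random(500) for _ in range(3)])

# Normalize each fingerprint, then cluster on correlation distance.
spectra /= spectra.sum(axis=1, keepdims=True)
Z = linkage(spectra, method='average', metric='correlation')
print(fcluster(Z, t=2, criterion='maxclust'))  # -> two clusters, e.g. [1 1 1 2 2 2]
```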

  8. The charged Higgs boson mass of the MSSM in the Feynman-diagrammatic approach

    Energy Technology Data Exchange (ETDEWEB)

    Frank, M. [Karlsruhe Univ. (Germany). Inst. fuer Theoretische Physik; Galeta, L.; Heinemeyer, S. [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Hahn, T.; Hollik, W. [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Muenchen (Germany); Rzehak, H. [CERN, Geneva (Switzerland); Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-06-15

    The interpretation of the Higgs signal at ~126 GeV within the Minimal Supersymmetric Standard Model (MSSM) depends crucially on the predicted properties of the other Higgs states of the model, such as the mass of the charged Higgs boson, M_H±. This mass is calculated in the Feynman-diagrammatic approach within the MSSM with real parameters. The result includes the complete one-loop contributions and the two-loop contributions of O(α_t α_s). The one-loop contributions lead to sizable shifts in the M_H± prediction, reaching up to ~8 GeV for relatively small values of M_A. Even larger effects can occur depending on the sign and size of the μ parameter that enters the corrections affecting the relation between the bottom-quark mass and the bottom Yukawa coupling. The two-loop O(α_t α_s) terms can shift M_H± by more than 2 GeV. The two-loop contributions amount to typically about 30% of the one-loop corrections for the examples that we have studied. These effects can be relevant for precision analyses of the charged MSSM Higgs boson.
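
    For orientation, the relation these corrections enter is, schematically, the textbook tree-level MSSM expression plus loop shifts (this is a sketch of where the terms act, not the paper's full formula):

```latex
% Tree-level MSSM charged Higgs mass plus the Feynman-diagrammatic corrections
% discussed above, written schematically as shift terms.
M_{H^\pm}^2 \;=\; M_A^2 + M_W^2
  \;+\; \Delta M_{H^\pm}^2\big|_{\text{1-loop}}
  \;+\; \Delta M_{H^\pm}^2\big|_{\mathcal{O}(\alpha_t\alpha_s)}
```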

  9. A mass balance approach to investigate arsenic cycling in a petroleum plume.

    Science.gov (United States)

    Ziegler, Brady A; Schreiber, Madeline E; Cozzarelli, Isabelle M; Crystal Ng, G-H

    2017-12-01

    Natural attenuation of organic contaminants in groundwater can give rise to a series of complex biogeochemical reactions that release secondary contaminants to groundwater. In a crude oil contaminated aquifer, biodegradation of petroleum hydrocarbons is coupled with the reduction of ferric iron (Fe(III)) hydroxides in aquifer sediments. As a result, naturally occurring arsenic (As) adsorbed to Fe(III) hydroxides in the aquifer sediment is mobilized from sediment into groundwater. However, Fe(III) in sediment of other zones of the aquifer has the capacity to attenuate dissolved As via resorption. In order to better evaluate how long-term biodegradation coupled with Fe-reduction and As mobilization can redistribute As mass in a contaminated aquifer, we quantified mass partitioning of Fe and As in the aquifer based on field observation data. Results show that Fe and As are spatially correlated in both groundwater and aquifer sediments. Mass partitioning calculations demonstrate that 99.9% of Fe and 99.5% of As are associated with aquifer sediment. The sediments act as both sources and sinks for As, depending on the redox conditions in the aquifer. Calculations reveal that at least 78% of the original As in sediment near the oil has been mobilized into groundwater over the 35-year lifespan of the plume. However, the calculations also show that only a small percentage of As (∼0.5%) remains in groundwater, due to resorption onto sediment. At the leading edge of the plume, where groundwater is suboxic, sediments sequester Fe and As, causing As to accumulate to concentrations 5.6 times greater than background concentrations. Current As sinks can serve as future sources of As as the plume evolves over time. The mass balance approach used in this study can be applied to As cycling in other aquifers where groundwater As results from biodegradation of an organic carbon point source coupled with Fe reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.
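
    A back-of-envelope version of the mass-partitioning calculation described above; all numbers are placeholders rather than the study's field data, but they reproduce the qualitative conclusion that sediment dominates the As inventory.

```python
# Per-cubic-metre-of-aquifer partitioning of arsenic between water and sediment.
porosity = 0.3          # aquifer porosity (-), assumed
rho_bulk = 1.8e3        # bulk sediment density (kg/m3), assumed
c_water = 20e-6         # dissolved As (kg/m3, i.e. ~20 ug/L), assumed
c_sed = 5e-6            # sorbed As per kg sediment (kg/kg, ~5 mg/kg), assumed

mass_water = porosity * c_water      # As in groundwater per m3 of aquifer
mass_sed = rho_bulk * c_sed          # As on sediment per m3 of aquifer
frac_sed = mass_sed / (mass_sed + mass_water)
print(f"fraction of As on sediment: {frac_sed:.4f}")  # ~0.999, sediment dominates
```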

  10. FIRST DETERMINATION OF THE TRUE MASS OF CORONAL MASS EJECTIONS: A NOVEL APPROACH TO USING THE TWO STEREO VIEWPOINTS

    International Nuclear Information System (INIS)

    Colaninno, Robin C.; Vourlidas, Angelos

    2009-01-01

    The twin Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) COR2 coronagraphs of the Solar Terrestrial Relations Observatory (STEREO) provide images of the solar corona from two viewpoints in the solar system. Since their launch in late 2006, the STEREO Ahead (A) and Behind (B) spacecraft have been slowly separating from Earth at a rate of 22.5° per year. By the end of 2007, the two spacecraft were separated by more than 40° from each other. At that time, we began to see large-scale differences in the morphology and total intensity between coronal mass ejections (CMEs) observed with SECCHI-COR2 on STEREO-A and B. Due to the effects of the Thomson scattering geometry, the intensity of an observed CME is dependent on the angle it makes with the observed plane of the sky. From the intensity images, we can calculate the integrated line-of-sight electron density and mass. We demonstrate that it is possible to simultaneously derive the direction and true total mass of the CME if we make the simple assumption that the same mass should be observed in COR2-A and B.

  11. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four different models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data for the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
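
    A minimal sketch of predictive-likelihood weighting with invented log predictive scores: each model's combination weight is proportional to its predictive likelihood over an evaluation window.

```python
# Combine point forecasts with predictive-likelihood weights.
import numpy as np

log_pred_lik = np.array([-102.3, -104.1, -101.8, -107.0])  # four models, invented
w = np.exp(log_pred_lik - log_pred_lik.max())  # subtract max for stability
w /= w.sum()                                   # normalized weights

model_means = np.array([2.1, 1.8, 2.4, 1.2])   # point forecasts (% GDP growth)
print(w, float(w @ model_means))               # weights and combined forecast
```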

  12. Improved EDELWEISS-III sensitivity for low-mass WIMPs using a profile likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    Hehn, L. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Armengaud, E.; Boissiere, T. de; Gros, M.; Navick, X.F.; Nones, C.; Paul, B. [CEA Saclay, DSM/IRFU, Gif-sur-Yvette Cedex (France); Arnaud, Q. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Queen's University, Kingston (Canada); Augier, C.; Billard, J.; Cazes, A.; Charlieux, F.; Jesus, M. de; Gascon, J.; Juillard, A.; Queguiner, E.; Sanglard, V.; Vagneron, L. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Benoit, A.; Camus, P. [Institut Neel, CNRS/UJF, Grenoble (France); Berge, L.; Chapellier, M.; Dumoulin, L.; Giuliani, A.; Le-Sueur, H.; Marnieros, S.; Olivieri, E.; Poda, D. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Bluemer, J. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Broniatowski, A. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Eitel, K.; Kozlov, V.; Siebenborn, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Foerster, N.; Heuermann, G.; Scorza, S. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Jin, Y. [Laboratoire de Photonique et de Nanostructures, CNRS, Route de Nozay, Marcoussis (France); Kefelian, C. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Kleifges, M.; Tcherniakhovski, D.; Weber, M. [Karlsruher Institut fuer Technologie, Institut fuer Prozessdatenverarbeitung und Elektronik, Karlsruhe (Germany); Kraus, H. [University of Oxford, Department of Physics, Oxford (United Kingdom); Kudryavtsev, V.A. [University of Sheffield, Department of Physics and Astronomy, Sheffield (United Kingdom); Pari, P. [CEA Saclay, DSM/IRAMIS, Gif-sur-Yvette (France); Piro, M.C. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Rensselaer Polytechnic Institute, Troy, NY (United States); Rozov, S.; Yakushev, E. [JINR, Laboratory of Nuclear Problems, Dubna, Moscow Region (Russian Federation); Schmidt, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2016-10-15

    We report on a dark matter search for a Weakly Interacting Massive Particle (WIMP) in the mass range m_χ ∈ [4, 30] GeV/c² with the EDELWEISS-III experiment. A 2D profile likelihood analysis is performed on data from eight selected detectors with the lowest energy thresholds, leading to a combined fiducial exposure of 496 kg-days. External backgrounds from γ- and β-radiation, recoils from ²⁰⁶Pb and neutrons, as well as detector intrinsic backgrounds were modelled from data outside the region of interest and constrained in the analysis. The basic data selection and most of the background models are the same as those used in a previously published analysis based on boosted decision trees (BDT) [1]. For the likelihood approach applied in the analysis presented here, a larger signal efficiency and a subtraction of the expected background lead to a higher sensitivity, especially for the lowest WIMP masses probed. No statistically significant signal was found, and upper limits on the spin-independent WIMP-nucleon scattering cross section can be set with a hypothesis test based on the profile likelihood test statistic. The 90% C.L. exclusion limit set for WIMPs with m_χ = 4 GeV/c² is 1.6 × 10⁻³⁹ cm², which is an improvement of a factor of seven with respect to the BDT-based analysis. For WIMP masses above 15 GeV/c² the exclusion limits found with both analyses are in good agreement. (orig.)
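
    A toy one-bin illustration of a profile-likelihood test (Poisson counts with one Gaussian-constrained background nuisance parameter); a real analysis such as the one above profiles many nuisance parameters across multiple detectors.

```python
# Profile likelihood ratio for a counting experiment with a constrained background.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson, norm

n_obs, b_est, b_sigma = 12, 10.0, 1.5   # invented observation and constraint

def nll(mu, b):
    # Negative log-likelihood: Poisson(n | mu + b) x Gaussian constraint on b.
    return -(poisson.logpmf(n_obs, mu + b) + norm.logpdf(b_est, b, b_sigma))

def profiled_nll(mu):
    # Profile out the nuisance parameter b at fixed signal strength mu.
    res = minimize_scalar(lambda b: nll(mu, b), bounds=(1e-6, 30.0), method='bounded')
    return res.fun

q = 2.0 * (profiled_nll(0.0) - min(profiled_nll(m) for m in np.linspace(0, 10, 101)))
print(f"profile likelihood ratio test statistic: {q:.2f}")
```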

  13. Simple, empirical approach to predict neutron capture cross sections from nuclear masses

    Science.gov (United States)

    Couture, A.; Casten, R. F.; Cakirli, R. B.

    2017-12-01

    Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40 % , and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections are compactly correlated in medium and heavy mass nuclei with the two-neutron separation energy. These correlations are easily amenable to predict unknown cross sections, often converting the usual extrapolations to more reliable interpolations. It almost always reproduces existing data to within 25 % and estimated uncertainties are below about 40 % up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of
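
    A hedged sketch of the kind of regional correlation described: a linear fit of the log cross section against the two-neutron separation energy S_2n within one structural region, then used for interpolation. The (S_2n, σ) pairs below are fabricated stand-ins, not evaluated data.

```python
# Regional log(sigma) vs S_2n correlation used for interpolation.
import numpy as np

s2n = np.array([12.1, 12.8, 13.5, 14.2, 15.0])         # MeV, hypothetical isotopes
sigma = np.array([310.0, 240.0, 180.0, 140.0, 100.0])  # mb at ~30 keV, hypothetical

slope, intercept = np.polyfit(s2n, np.log10(sigma), 1)
predict = lambda s: 10 ** (slope * s + intercept)
print(f"interpolated sigma at S_2n = 13.0 MeV: {predict(13.0):.0f} mb")
```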

  14. "Polymeromics": Mass spectrometry based strategies in polymer science toward complete sequencing approaches: a review.

    Science.gov (United States)

    Altuntaş, Esra; Schubert, Ulrich S

    2014-01-15

    Mass spectrometry (MS) is the most versatile and comprehensive method in the "OMICS" sciences (i.e. in proteomics, genomics, metabolomics and lipidomics). Applications of MS and tandem MS (MS/MS or MS(n)) provide sequence information for the full complement of biological samples in order to understand the importance of the sequences for their precise and specific functions. Nowadays, the control of polymer sequences and their accurate characterization is one of the significant challenges of current polymer science. Therefore, a similar approach can be very beneficial for characterizing and understanding the complex structures of synthetic macromolecules. MS-based strategies allow a relatively precise examination of polymeric structures (e.g. their molar mass distributions, monomer units, side chain substituents, end-group functionalities, and copolymer compositions). Moreover, tandem MS offers accurate structural information on intricate macromolecular structures; however, it produces vast amounts of data to interpret. In the "OMICS" sciences, software for interpreting the acquired data has developed satisfactorily (e.g. in proteomics), because the amount of data acquired via (tandem) MS studies of biological samples cannot be handled manually. It can be expected that specialized software tools will likewise improve the interpretation of (tandem) MS output from investigations of synthetic polymers. Eventually, the MS/MS field will also open up for polymer scientists who are not MS specialists. In this review, we dissect the overall framework of MS and MS/MS analysis of synthetic polymers into its key components. We discuss the fundamentals of polymer analyses as well as recent advances in the areas of tandem mass spectrometry, software developments, and the overall future perspectives on the way to polymer sequencing, one of the last Holy Grails of polymer science. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Mass Spectrometry-based Approaches to Understand the Molecular Basis of Memory

    Directory of Open Access Journals (Sweden)

    Arthur Henriques Pontes

    2016-10-01

    Full Text Available The central nervous system is responsible for an array of cognitive functions such as memory, learning, language and attention. These processes tend to take place in distinct brain regions; yet, they need to be integrated to give rise to adaptive or meaningful behavior. Since cognitive processes result from underlying cellular and molecular changes, genomics and transcriptomics assays have been applied to human and animal models to understand such events. Nevertheless, genes and RNAs are not the end products of most biological functions. In order to gain further insights into the understanding of brain processes, the field of proteomics has been of increasing importance in the past years. Advancements in liquid chromatography-tandem mass spectrometry (LC-MS/MS) have enabled the identification and quantification of thousands of proteins with high accuracy and sensitivity, fostering a revolution in the neurosciences. Herein, we review the molecular bases of explicit memory in the hippocampus. We outline the principles of mass spectrometry (MS)-based proteomics, highlighting the use of this analytical tool to study memory formation. In addition, we discuss MS-based targeted approaches as the future of protein analysis.

  16. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  17. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With data of this dimensionality and sample size, the risk of over-fitting and selection bias is pervasive. Therefore the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools which can be applied to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded on the theory of multi-scale spaces for high resolution spectra derived from analysis of serum. We then use support vector machines for classification. In particular we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improved previously known results on the problem using the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.
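
    A minimal two-stage sketch in the spirit of the pipeline above: scale-space smoothing to pick stable peaks as unsupervised features, then an SVM. The data are random surrogates with one injected class-specific peak, not serum spectra.

```python
# Scale-space peak features followed by SVM classification.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks
from sklearn.svm import SVC

rng = np.random.default_rng(2)
spectra = rng.random((40, 1000))
labels = np.repeat([0, 1], 20)
spectra[labels == 1, 300] += 3.0          # class-specific synthetic peak

# Peaks of the heavily smoothed mean spectrum survive across scales.
mean_smoothed = gaussian_filter1d(spectra.mean(axis=0), sigma=8)
peaks, _ = find_peaks(mean_smoothed, prominence=0.01)
X = spectra[:, peaks]                     # intensities at stable peaks only

clf = SVC(kernel='linear').fit(X[::2], labels[::2])   # even rows train
print("held-out accuracy:", clf.score(X[1::2], labels[1::2]))
```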

  18. Tribocorrosion in pressurized high temperature water: a mass flow model based on the third body approach

    Energy Technology Data Exchange (ETDEWEB)

    Guadalupe Maldonado, S.

    2014-07-01

    Pressurized water reactors (PWR) used for power generation are operated at elevated temperatures (280-300 °C) and under high pressure (120-150 bar). In addition to these harsh environmental conditions, some components of the PWR assemblies are subject to mechanical loading (sliding, vibration and impacts) leading to undesirable and hardly controllable material degradation phenomena. In such situations wear is determined by the complex interplay (tribocorrosion) between mechanical, material, and physical-chemical phenomena. Tribocorrosion in PWR conditions is at present little understood, and models need to be developed in order to predict component lifetime over several decades. The goal of this project, carried out in collaboration with the French company AREVA NP, is to develop a predictive model based on the mechanistic understanding of tribocorrosion of specific PWR components (stainless steel control assemblies, stellite grippers). The approach taken here is to describe degradation in terms of electrochemical and mechanical material flows (third body concept of tribology) from the metal into the friction film (i.e. the oxidized film forming during rubbing on the metal surface) and from the friction film into the environment, instead of simple mass loss considerations. The project involves the establishment of mechanistic models describing the single flows, based on ad-hoc tribocorrosion measurements at low temperature. The overall behaviour at high temperature and pressure is investigated using a dedicated tribometer (Aurore) including electrochemical control of the contact during rubbing. Physical laws describing the individual flows according to defined mechanisms and as a function of defined physical parameters were identified based on the obtained experimental results and on literature data. The physical laws were converted into mass flow rates and solved as a differential equation system by considering the mass balance in compartments
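
    A two-compartment caricature of the third-body mass balance described above, solved as an ODE system; the rate constants are invented for illustration, whereas the thesis derives the flow laws from electrochemical and mechanical measurements.

```python
# Toy mass-flow model: metal -> friction film -> environment.
import numpy as np
from scipy.integrate import solve_ivp

k_wear, k_diss = 2.0e-3, 5.0e-4     # 1/s, hypothetical rate constants

def flows(t, m):
    m_metal, m_film = m
    return [-k_wear * m_metal,                       # metal lost to the film
            k_wear * m_metal - k_diss * m_film]      # film gained minus dissolved

sol = solve_ivp(flows, (0.0, 3600.0), [1.0, 0.0])    # 1 h, unit initial metal mass
m_metal, m_film = sol.y[:, -1]
print(f"after 1 h: metal={m_metal:.3f}, film={m_film:.3f}, "
      f"released={1.0 - m_metal - m_film:.3f}")
```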

  19. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
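
    A minimal sketch of the refinement idea: random single-atom moves accepted when they lower a harmonic pseudo-energy pulling the model toward the averaged coordinates. The published method's bond terms, clash terms and Metropolis acceptance details are omitted here.

```python
# Greedy Monte Carlo refinement toward an "averaged" structure.
import numpy as np

rng = np.random.default_rng(3)
target = rng.random((50, 3)) * 10.0                  # averaged C-alpha coordinates
model = target + rng.normal(0, 2.0, target.shape)    # distorted starting structure

def pseudo_energy(xyz):
    return np.sum((xyz - target) ** 2)   # harmonic restraint to the average

e = pseudo_energy(model)
for step in range(20000):
    i = rng.integers(len(model))         # perturb one atom at a time
    trial = model.copy()
    trial[i] += rng.normal(0, 0.1, 3)
    e_trial = pseudo_energy(trial)
    if e_trial < e:                      # greedy acceptance for brevity
        model, e = trial, e_trial
print(f"final RMSD to average: {np.sqrt(e / len(model)):.3f} A")
```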

  20. Extensive characterization of Tupaia belangeri neuropeptidome using an integrated mass spectrometric approach.

    Science.gov (United States)

    Petruzziello, Filomena; Fouillen, Laetitia; Wadensten, Henrik; Kretz, Robert; Andren, Per E; Rainer, Gregor; Zhang, Xiaozhe

    2012-02-03

    Neuropeptidomics is used to characterize endogenous peptides in the brain of tree shrews (Tupaia belangeri). Tree shrews are small animals similar to rodents in size but close relatives of primates, and are excellent models for brain research. Currently, tree shrews have no complete proteome information available on which a direct database search for neuropeptide identification can be performed. To increase our capability to identify neuropeptides in tree shrews, we developed an integrated mass spectrometry (MS)-based approach that combines data-dependent, directed, and targeted liquid chromatography (LC)-Fourier transform (FT)-tandem MS (MS/MS) analysis, database construction, de novo sequencing, precursor protein search, and homology analysis. Using this integrated approach, we identified 107 endogenous peptides that have sequences identical or similar to those from other mammalian species. High accuracy MS and tandem MS information, together with BLAST analysis and chromatographic characteristics, were used to confirm the sequences of all the identified peptides. Interestingly, further sequence homology analysis demonstrated that tree shrew peptides have a significantly higher degree of homology to equivalent sequences in humans than to those in mice or rats, consistent with the close phylogenetic relationship between tree shrews and primates. Our results provide the first extensive characterization of the peptidome of tree shrews, which now permits characterization of their function in the nervous and endocrine systems. As the approach developed makes full use of the evolutionary conservation of neuropeptides and the advantages of high accuracy MS, it can be ported to the identification of neuropeptides in other species for which fully sequenced genomes or proteomes are not available.

  1. MERRA Chem 3D IAU C-Grid Wind and Mass Flux, Time Average 3-Hourly (eta coord, 2/3x1/2L72) V5.2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3NVCHM or tavg3_3d_chm_Nv data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers that is time averaged, 3D model...

  2. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Hogg, David W.; Foreman-Mackey, Daniel T. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Rix, Hans-Walter; Gouliermis, Dimitrios [Max Planck Institute for Astronomy, Koenigstuhl 17, D-69117 Heidelberg (Germany); Dolphin, Andrew E. [Raytheon Company, 1151 East Hermans Road, Tucson, AZ 85756 (United States); Lang, Dustin [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Bell, Eric F. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Gordon, Karl D.; Kalirai, Jason S. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Skillman, Evan D., E-mail: dweisz@astro.washington.edu [Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)

    2013-01-10

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M☉). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the
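
    As a complement to the full probabilistic treatment (which models mass uncertainties and completeness), the slope of a pure power law with a known lower cutoff has a closed-form maximum-likelihood estimate; a hedged sketch on simulated masses:

```python
# Closed-form MLE for a power-law slope: alpha = 1 + N / sum(ln(m_i / m_min)),
# with approximate uncertainty (alpha - 1)/sqrt(N). Completeness and
# per-star mass errors, central to the paper, are ignored in this toy.
import numpy as np

rng = np.random.default_rng(4)
alpha_true, m_min, n_stars = 2.35, 1.0, 500
# Inverse-transform sampling from p(m) ~ m^(-alpha) above m_min.
masses = m_min * (1 - rng.random(n_stars)) ** (-1.0 / (alpha_true - 1.0))

alpha_hat = 1.0 + n_stars / np.log(masses / m_min).sum()
print(f"alpha = {alpha_hat:.2f} +/- {(alpha_hat - 1) / np.sqrt(n_stars):.2f}")
```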

  3. The Panchromatic Hubble Andromeda Treasury. IV. A Probabilistic Approach to Inferring the High-mass Stellar Initial Mass Function and Other Power-law Functions

    Science.gov (United States)

    Weisz, Daniel R.; Fouesneau, Morgan; Hogg, David W.; Rix, Hans-Walter; Dolphin, Andrew E.; Dalcanton, Julianne J.; Foreman-Mackey, Daniel T.; Lang, Dustin; Johnson, L. Clifton; Beerman, Lori C.; Bell, Eric F.; Gordon, Karl D.; Gouliermis, Dimitrios; Kalirai, Jason S.; Skillman, Evan D.; Williams, Benjamin F.

    2013-01-01

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precision on MF

  4. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    International Nuclear Information System (INIS)

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F.; Hogg, David W.; Foreman-Mackey, Daniel T.; Rix, Hans-Walter; Gouliermis, Dimitrios; Dolphin, Andrew E.; Lang, Dustin; Bell, Eric F.; Gordon, Karl D.; Kalirai, Jason S.; Skillman, Evan D.

    2013-01-01

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ∼> 1 M ☉). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ∼3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precision on MF

  5. Depression, body mass index, and chronic obstructive pulmonary disease – a holistic approach

    Directory of Open Access Journals (Sweden)

    Catalfo G

    2016-02-01

    Full Text Available Giuseppe Catalfo,1 Luciana Crea,1 Tiziana Lo Castro,1 Francesca Magnano San Lio,1 Giuseppe Minutolo,1 Gherardo Siscaro,2 Noemi Vaccino,1 Nunzio Crimi,3 Eugenio Aguglia1 1Department of Psychiatry, Policlinico “G. Rodolico” University Hospital, University of Catania, Catania, Italy; 2Operative Unit Neurorehabilitation, IRCCS Fondazione Salvatore Maugeri, Sciacca, Italy; 3Department of Pneumology, Policlinico “G. Rodolico” University Hospital, University of Catania, Catania, Italy Background: Several clinical studies suggest common underlying pathogenetic mechanisms of COPD and depressive/anxiety disorders. We aim to evaluate the psychopathological and physical effects of aerobic exercise, proposed in the context of pulmonary rehabilitation, in a sample of COPD patients, through the correlation of some psychopathological variables and physical/pneumological parameters. Methods: Fifty-two consecutive subjects were enrolled. At baseline, the sample was divided into two subgroups consisting of 38 depression-positive and 14 depression-negative subjects according to the Hamilton Depression Rating Scale (HAM-D). After the rehabilitation treatment, we compared psychometric and physical examinations between the two groups. Results: The differences after the rehabilitation program in all assessed parameters demonstrated a significant improvement in psychiatric and pneumological conditions. The reduction of BMI was significantly correlated with fat mass, but only in the depression-positive patients. Conclusion: Our results suggest that pulmonary rehabilitation improves depressive and anxiety symptoms in COPD. This improvement is significantly related to the reduction of fat mass and BMI only in depressed COPD patients, in whom these parameters were related at baseline. These findings suggest that depressed COPD patients could benefit from a rehabilitation program in the context of a multidisciplinary approach. Keywords: COPD, depression, aerobic exercise

  6. Integrative Mass Spectrometry Approaches to Monitor Protein Structures, Modifications, and Interactions

    NARCIS (Netherlands)

    Lössl, P.

    2017-01-01

    This thesis illustrates the current standing of mass spectrometry (MS) in molecular and structural biology. The primary aim of the research described herein is to facilitate protein characterization by combining mass spectrometric methods with one another and with complementary analytical

  7. Numerical probabilistic analysis for slope stability in fractured rock masses using DFN-DEM approach

    Directory of Open Access Journals (Sweden)

    Alireza Baghbanan

    2017-06-01

    Full Text Available Due to the existence of uncertainties in the input geometrical properties of fractures, there is no unique solution for assessing the stability of slopes in jointed rock masses, and probabilistic analysis is therefore indispensable in such cases. In this study, a probabilistic analysis procedure together with relevant algorithms is developed using the Discrete Fracture Network-Distinct Element Method (DFN-DEM) approach. In the right abutment of Karun 4 dam and downstream of the dam body, five joint sets and one major joint have been identified. According to the geometrical properties of fractures in the Karun river valley, instability situations are probable in this abutment. In order to evaluate the stability of the rock slope, different combinations of joint set geometrical parameters are selected, and a series of numerical DEM simulations are performed on generated and validated DFN models in the DFN-DEM approach to measure the minimum required support patterns in dry and saturated conditions. Results indicate that the distribution of required bolt length is well fitted by a lognormal distribution in both circumstances. In dry conditions, the calculated mean value is 1125.3 m, and more than 80 percent of models need only 1614.99 m of bolts, which corresponds to a bolt pattern with 2 m spacing and 12 m length. For slopes in the saturated condition, the calculated mean value is 1821.8 m, and more than 80 percent of models need only 2653.49 m of bolts, which is equivalent to a bolt pattern with 15 m length and 1.5 m spacing. Comparison of the obtained numerical results with the empirical method shows that investigating slope stability with different DFN realizations, conducted in different block patterns, is more efficient than the empirical method.

  8. Predicting polycyclic aromatic hydrocarbons using a mass fraction approach in a geostatistical framework across North Carolina.

    Science.gov (United States)

    Reyes, Jeanette M; Hubbard, Heidi F; Stiegel, Matthew A; Pleil, Joachim D; Serre, Marc L

    2018-01-09

    Currently in the United States there are no regulatory standards for ambient concentrations of polycyclic aromatic hydrocarbons (PAHs), a class of organic compounds with known carcinogenic species. As such, monitoring data are not routinely collected resulting in limited exposure mapping and epidemiologic studies. This work develops the log-mass fraction (LMF) Bayesian maximum entropy (BME) geostatistical prediction method used to predict the concentration of nine particle-bound PAHs across the US state of North Carolina. The LMF method develops a relationship between a relatively small number of collocated PAH and fine Particulate Matter (PM2.5) samples collected in 2005 and applies that relationship to a larger number of locations where PM2.5 is routinely monitored to more broadly estimate PAH concentrations across the state. Cross validation and mapping results indicate that by incorporating both PAH and PM2.5 data, the LMF BME method reduces mean squared error by 28.4% and produces more realistic spatial gradients compared to the traditional kriging approach based solely on observed PAH data. The LMF BME method efficiently creates PAH predictions in a PAH data sparse and PM2.5 data rich setting, opening the door for more expansive epidemiologic exposure assessments of ambient PAH.
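
    The core of the log-mass fraction idea can be written compactly. The notation below is ours rather than the paper's, and the choice of natural logarithm is an assumption:

      \[
      \mathrm{LMF}(s) \;=\; \ln\frac{C_{\mathrm{PAH}}(s)}{C_{\mathrm{PM}_{2.5}}(s)},
      \qquad
      \hat{C}_{\mathrm{PAH}}(s) \;=\; C_{\mathrm{PM}_{2.5}}(s)\,
      \exp\!\big(\widehat{\mathrm{LMF}}(s)\big),
      \]

    where the LMF field is estimated from the collocated PAH/PM2.5 samples and interpolated geostatistically (via BME) to the locations s where only PM2.5 is monitored.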

  9. Quantifying in-stream retention of nitrate at catchment scales using a practical mass balance approach.

    Science.gov (United States)

    Schwientek, Marc; Selle, Benny

    2016-02-01

    As field data on in-stream nitrate retention is scarce at catchment scales, this study aimed at quantifying net retention of nitrate within the entire river network of a fourth-order stream. For this purpose, a practical mass balance approach combined with a Lagrangian sampling scheme was applied and seasonally repeated to estimate daily in-stream net retention of nitrate for a 17.4 km long, agriculturally influenced, segment of the Steinlach River in southwestern Germany. This river segment represents approximately 70% of the length of the main stem and about 32% of the streambed area of the entire river network. Sampling days in spring and summer were biogeochemically more active than in autumn and winter. Results obtained for the main stem of Steinlach River were subsequently extrapolated to the stream network in the catchment. It was demonstrated that, for baseflow conditions in spring and summer, in-stream nitrate retention could sum up to a relevant term of the catchment's nitrogen balance if the entire stream network was considered.
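
    In schematic form, a practical mass balance for a river segment reads as follows; the symbols are ours, not the authors':

      \[
      R_{\mathrm{net}}
      \;=\;
      \Big( Q_{\mathrm{up}}\,C_{\mathrm{up}}
      + \sum_{i} Q_{\mathrm{lat},i}\,C_{\mathrm{lat},i} \Big)
      \;-\; Q_{\mathrm{down}}\,C_{\mathrm{down}},
      \]

    where Q denotes discharge, C nitrate concentration, the sum runs over lateral inflows to the segment, and a positive R_net indicates net in-stream retention. The Lagrangian sampling scheme ensures that the upstream and downstream samples follow the same parcel of water.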

  10. Capillary-HPLC with tandem mass spectrometry in analysis of alkaloid dyestuffs - a new approach.

    Science.gov (United States)

    Dąbrowski, Damian; Lech, Katarzyna; Jarosz, Maciej

    2018-05-01

    The development of an identification method for alkaloid compounds in the Amur cork tree, as well as in the so far unexamined Oregon grape and European barberry shrubs, is presented. A novel approach to the separation of alkaloids was applied, using a capillary high-performance liquid chromatography (capillary-HPLC) system that had not previously been reported for the analysis of alkaloid-based dyestuffs. Its optimization was conducted with three different stationary phases (unmodified octadecylsilane-bonded silica, octadecylsilane modified with polar groups, and silica-bonded pentafluorophenyls) as well as with different solvent buffers. Detection of the isolated compounds was carried out using a diode-array detector (DAD) and a tandem mass spectrometer with electrospray ionization (ESI MS/MS). The working parameters of ESI were optimized, whereas the multiple reaction monitoring (MRM) parameters of MS/MS detection were chosen based on the product ion spectra of the quasi-molecular ions. A calibration curve for berberine was established (y = 1712091x + 4785.03, with a correlation coefficient of 0.9999). The limit of detection and limit of quantification were calculated to be 3.2 and 9.7 ng/mL, respectively. Numerous alkaloids (berberine, jatrorrhizine and magnoflorine, as well as phellodendrine, menisperine and berbamine) were identified in the extracts from alkaloid plants and from silk and wool fibers dyed with these dyestuffs, among them their marker compounds. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. tavg3_3d_chm_Fe: MERRA Chem 3D IAU, Precip Mass Flux, Time average 3-hourly 1.25 x 1 degree V5.2.0 (MAT3FECHM) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3FECHM or tavg3_3d_chm_Fe data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers edges that is time averaged, 3D model...

  12. MERRA Chem 3D IAU C-Grid Edge Mass Flux, Time Average 3-Hourly (eta coord, 2/3x1/2L73) V5.2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3NECHM or tavg3_3d_chm_Ne data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layer Edges that is time averaged, 3D model...

  13. Time-dependent mass of cosmological perturbations in the hybrid and dressed metric approaches to loop quantum cosmology

    Science.gov (United States)

    Elizaga Navascués, Beatriz; Martín de Blas, Daniel; Mena Marugán, Guillermo A.

    2018-02-01

    Loop quantum cosmology has recently been applied in order to extend the analysis of primordial perturbations to the Planck era and discuss the possible effects of quantum geometry on the cosmic microwave background. Two approaches to loop quantum cosmology with admissible ultraviolet behavior leading to predictions that are compatible with observations are the so-called hybrid and dressed metric approaches. In spite of their similarities and relations, we show in this work that the effective equations that they provide for the evolution of the tensor and scalar perturbations are somewhat different. When backreaction is neglected, the discrepancy appears only in the time-dependent mass term of the corresponding field equations. We explain the origin of this difference, arising from the distinct quantization procedures. Besides, given the privileged role that the big bounce plays in loop quantum cosmology, e.g. as a natural instant of time to set initial conditions for the perturbations, we also analyze the positivity of the time-dependent mass when this bounce occurs. We prove that the mass of the tensor perturbations is positive in the hybrid approach when the kinetic contribution to the energy density of the inflaton dominates over its potential, as well as for a considerably large sector of backgrounds around that situation, while this mass is always nonpositive in the dressed metric approach. Similar results are demonstrated for the scalar perturbations in a sector of background solutions that includes the kinetically dominated ones; namely, the mass then is positive for the hybrid approach, whereas it typically becomes negative in the dressed metric case. More precisely, this last statement is strictly valid when the potential is quadratic for values of the inflaton mass that are phenomenologically favored.

  14. A High Throughput Ambient Mass Spectrometric Approach to Species Identification and Classification from Chemical Fingerprint Signatures

    OpenAIRE

    Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.

    2015-01-01

    A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes a...

  15. Targeted metabolite profile of food bioactive compounds by Orbitrap high resolution mass spectrometry: The 'FancyTiles' approach

    NARCIS (Netherlands)

    Troise, A.D.; Ferracane, R.; Palermo, M.; Fogliano, V.

    2014-01-01

    In this paper a new targeted metabolic profiling approach using Orbitrap high resolution mass spectrometry is described. For each food matrix, various classes of bioactive compounds and some specific metabolites of interest were selected on the basis of the existing knowledge, creating an easy-to-read

  16. Developing a discrimination rule between breast cancer patients and controls using proteomics mass spectrometric data: A three-step approach

    NARCIS (Netherlands)

    Heidema, A.G.; Nagelkerke, N.

    2008-01-01

    To discriminate between breast cancer patients and controls, we used a three-step approach to obtain our decision rule. First, we ranked the mass/charge values using random forests, because it generates importance indices that take possible interactions into account. We observed that the top ranked

  17. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    NARCIS (Netherlands)

    Ran, J.; Ditmar, P.G.; Klees, R.; Farahani, H.

    2017-01-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted

  18. A deep learning approach for the analysis of masses in mammograms with minimal user intervention.

    Science.gov (United States)

    Dhungel, Neeraj; Carneiro, Gustavo; Bradley, Andrew P

    2017-04-01

    We present an integrated methodology for detecting, segmenting and classifying breast masses from mammograms with minimal user intervention. This is a long-standing problem due to the low signal-to-noise ratio in the visualisation of breast masses, combined with their large variability in terms of shape, size, appearance and location. We break the problem down into three stages: mass detection, mass segmentation, and mass classification. For the detection, we propose a cascade of deep learning methods to select hypotheses that are refined based on Bayesian optimisation. For the segmentation, we propose the use of deep structured output learning that is subsequently refined by a level set method. Finally, for the classification, we propose the use of a deep learning classifier, which is pre-trained with a regression to hand-crafted feature values and fine-tuned based on the annotations of the breast mass classification dataset. We test our proposed system on the publicly available INbreast dataset and compare the results with the current state-of-the-art methodologies. This evaluation shows that our system detects 90% of masses at 1 false positive per image, has a segmentation accuracy of around 0.85 (Dice index) on the correctly detected masses, and overall classifies masses as malignant or benign with sensitivity (Se) of 0.98 and specificity (Sp) of 0.7. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. A population balance approach considering heat and mass transfer-Experiments and CFD simulations

    International Nuclear Information System (INIS)

    Krepper, Eckhard; Beyer, Matthias; Lucas, Dirk; Schmidtke, Martin

    2011-01-01

    Highlights: → The MUSIG approach was extended by mass transfer between the size groups to describe condensation or re-evaporation. → Experiments on steam bubble condensation in vertical co-current steam/water flows have been carried out. The cross-sectional gas fraction distribution, the bubble size distribution and the gas velocity profiles were measured. → The following phenomena could be reproduced with good agreement to the experiments: (a) dependence of the condensation rate on the initial bubble size distribution and (b) re-evaporation over the height in tests with low inlet temperature subcooling. - Abstract: Bubble condensation in sub-cooled water is a complex process, to which various phenomena contribute. Since the condensation rate depends on the interfacial area density, bubble size distribution changes caused by breakup and coalescence play a crucial role. Experiments on steam bubble condensation in vertical co-current steam/water flows have been carried out in an 8 m long vertical DN200 pipe. Steam is injected into the pipe and the development of the bubbly flow is measured at different distances to the injection using a pair of wire mesh sensors. By varying the steam nozzle diameter the initial bubble size can be influenced. Larger bubbles come along with a lower interfacial area density and therefore condense more slowly. Steam pressures between 1 and 6.5 MPa and sub-cooling temperatures from 2 to 12 K were applied. Due to the pressure drop along the pipe, the saturation temperature falls towards the upper pipe end. This affects the sub-cooling temperature and can even cause re-evaporation in the upper part of the test section. The experimental configurations are simulated with the CFD code CFX using an extended MUSIG approach, which includes bubble shrinking or growth due to condensation or re-evaporation. The development of the vapour phase along the pipe with respect to vapour void fractions and bubble sizes is qualitatively well reproduced

  20. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry.

    Science.gov (United States)

    Barcaru, A; Mol, H G J; Tienstra, M; Vivó-Truyols, G

    2017-08-29

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number of compounds for each mass channel are estimated. The statistical dependency between m/z channels is implied by including penalties in the model objective function. Second, the Bayesian Information Criterion (BIC) is used as Occam's razor for the probabilistic assessment of the number of components. Third, a probabilistic set of resolved spectra and their associated retention times are estimated. Finally, a probabilistic library search is proposed, computing the spectral match with a high resolution library. More specifically, a correlative measure is used that includes the uncertainties in the least-squares fitting, as well as the probability of different proposals for the number of compounds in the mixture. The method was tested on simulated high resolution data, as well as on a set of pesticides injected in a GC-Orbitrap with high coelution. The proposed pipeline was able to accurately detect the retention times and the spectra of the peaks. In our case, an extremely high coelution situation, 5 out of the 7 compounds present in the selected region of interest were correctly assessed. Finally, comparison with the classical methods of deconvolution (i.e., MCR and AMDIS) indicates a better performance of the proposed algorithm in terms of the number of correctly resolved compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
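
    As an illustration of the BIC step, the criterion penalizes each additional component by its extra parameters. The snippet below is a generic sketch, not the paper's code; fit_k is a hypothetical fitting routine, and three parameters per component (retention time, width, height) is our assumption.

      import numpy as np

      def bic(max_log_likelihood, n_params, n_obs):
          # Bayesian Information Criterion; lower values are preferred.
          return n_params * np.log(n_obs) - 2.0 * max_log_likelihood

      def choose_n_components(data, fit_k, k_max=5):
          # fit_k(data, k) is assumed to return the maximised log-likelihood
          # of a k-component peak model fitted to the data.
          scores = {k: bic(fit_k(data, k), 3 * k, len(data))
                    for k in range(1, k_max + 1)}
          return min(scores, key=scores.get)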

  1. Assembly of a Vacuum Chamber: A Hands-On Approach to Introduce Mass Spectrometry

    Science.gov (United States)

    Bussie`re, Guillaume; Stoodley, Robin; Yajima, Kano; Bagai, Abhimanyu; Popowich, Aleksandra K.; Matthews, Nicholas E.

    2014-01-01

    Although vacuum technology is essential to many aspects of modern physical and analytical chemistry, vacuum experiments are rarely the focus of undergraduate laboratories. We describe an experiment that introduces students to vacuum science and mass spectrometry. The students first assemble a vacuum system, including a mass spectrometer. While…

  2. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  3. Total mass difference statistics algorithm: a new approach to identification of high-mass building blocks in electrospray ionization Fourier transform ion cyclotron mass spectrometry data of natural organic matter.

    Science.gov (United States)

    Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N

    2009-12-15

    The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks with dozens of molecules matching the same nominal mass. Such complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for automatic treatment of FTICR mass spectra of NOM and humic substances based on total mass difference statistics (TMDS) has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra. In this respect, it differs from all previously developed approaches. This algorithm was implemented in the FIRAN software for fully automated analysis of mass data with high peak density. The specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using a "virtual elements" approach. To verify the approach, it was used for processing mass spectra of sodium polystyrene sulfonate (PSS, M(w) = 2200 Da) and polymethacrylate (PMA, M(w) = 3290 Da), which produce heavy multielement and multiply-charged ions. Application of TMDS unambiguously identified the monomers present in the polymers, consistent with their structure: C(8)H(7)SO(3)Na for PSS and C(4)H(6)O(2) for PMA. It also allowed unambiguous formula assignment to all multiply-charged peaks, including the heaviest peak in the PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS algorithm to processing data on the Suwannee River FA has proven its unique capacities in the analysis of spectra with high peak density: it not only identified the known small building blocks in the structure of FA, such as CH(2), H(2), C(2)H(2)O and O, but also the heavier unit at 154.027 amu. The latter was

  4. A dried blood spot mass spectrometry metabolomic approach for rapid breast cancer detection

    Directory of Open Access Journals (Sweden)

    Wang Q

    2016-03-01

    Full Text Available Qingjun Wang,1,2,* Tao Sun,3,* Yunfeng Cao,1,2,4,5 Peng Gao,2,4,6 Jun Dong,2,4 Yanhua Fang,2 Zhongze Fang,2 Xiaoyu Sun,2 Zhitu Zhu1,2 1Oncology Department 2, The First Affiliated Hospital of Liaoning Medical University, 2Personalized Treatment and Diagnosis Research Center, The First Affiliated Hospital of Liaoning Medical University and Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Jinzhou, 3Department of Internal Medicine 1, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Insititute, Shenyang, 4CAS Key Laboratory of Separation Science for Analytical Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian, 5Key Laboratory of Contraceptives and Devices Research (NPFPC), Shanghai Engineer and Technology Research Center of Reproductive Health Drug and Devices, Shanghai Institute of Planned Parenthood Research, Shanghai, 6Clinical Laboratory, Dalian Sixth People’s Hospital, Dalian, People’s Republic of China *These authors contributed equally to this work Objective: Breast cancer (BC) is still a lethal threat to women worldwide. An accurate screening and diagnosis strategy performed in an easy-to-operate manner is highly warranted in the clinical perspective. Besides the routinely focused protein markers, blood is full of small molecular metabolites with diverse structures and properties. This study aimed to screen metabolite markers with BC diagnostic potential.Methods: A dried blood spot-based direct infusion mass spectrometry (MS) metabolomic analysis was conducted for BC and non-BC differentiation. The targeted analytes included 23 amino acids and 26 acylcarnitines.Results: Multivariate analysis screened out 21 BC-related metabolites in the blood. Regression analysis generated a diagnosis model consisting of parameters Pip, Asn, Pro, C14:1/C16, Phe/Tyr, and Gly/Ala. Tested with another set of BC and non-BC samples, this model showed a sensitivity of 92.2% and a specificity

  5. Multiplatform Mass Spectrometry-Based Approach Identifies Extracellular Glycolipids of the Yeast Rhodotorula babjevae UCDFST 04-877.

    Science.gov (United States)

    Cajka, Tomas; Garay, Luis A; Sitepu, Irnayuli R; Boundy-Mills, Kyria L; Fiehn, Oliver

    2016-10-28

    A multiplatform mass spectrometry-based approach was used for elucidating extracellular lipids with biosurfactant properties produced by the oleaginous yeast Rhodotorula babjevae UCDFST 04-877. This strain secreted 8.6 ± 0.1 g/L extracellular lipids when grown in a benchtop bioreactor fed with 100 g/L glucose in medium without addition of hydrophobic substrate, such as oleic acid. Untargeted reversed-phase liquid chromatography-quadrupole/time-of-flight mass spectrometry (QTOFMS) detected native glycolipid molecules with masses of 574-716 Da. After hydrolysis into the fatty acid and sugar components and hydrophilic interaction chromatography-QTOFMS analysis, the extracellular lipids were found to consist of hydroxy fatty acids and sugar alcohols. Derivatization and chiral separation gas chromatography-mass spectrometry (GC-MS) identified these components as d-arabitol, d-mannitol, (R)-3-hydroxymyristate, (R)-3-hydroxypalmitate, and (R)-3-hydroxystearate. In order to assemble these substructures back into intact glycolipids that were detected in the initial screen, potential structures were in-silico acetylated to match the observed molar masses and subsequently characterized by matching predicted and observed MS/MS fragmentation using the Mass Frontier software program. Eleven species of acetylated sugar alcohol esters of hydroxy fatty acids were characterized for this yeast strain.

  6. A hybrid approach to protein differential expression in mass spectrometry-based proteomics

    KAUST Repository

    Wang, X.; Anderson, G. A.; Smith, R. D.; Dabney, A. R.

    2012-01-01

    MOTIVATION: Quantitative mass spectrometry-based proteomics involves statistical inference on protein abundance, based on the intensities of each protein's associated spectral peaks. However, typical MS-based proteomics datasets have substantial

  7. A Case Investigation of Product Structure Complexity in Mass Customization Using a Data Mining Approach

    DEFF Research Database (Denmark)

    Nielsen, Peter; Brunø, Thomas Ditlev; Nielsen, Kjeld

    2014-01-01

    This paper presents a data mining method for analyzing historical configuration data providing a number of opportunities for improving mass customization capabilities. The overall objective of this paper is to investigate how specific quantitative analyses, more specifically the association rule...

  8. Quantitation of multisite EGF receptor phosphorylation using mass spectrometry and a novel normalization approach

    DEFF Research Database (Denmark)

    Erba, Elisabetta Boeri; Matthiesen, Rune; Bunkenborg, Jakob

    2007-01-01

    Using stable isotope labeling and mass spectrometry, we performed a sensitive, quantitative analysis of multiple phosphorylation sites of the epidermal growth factor (EGF) receptor. Phosphopeptide detection efficiency was significantly improved by using the tyrosine phosphatase inhibitor sodium p...

  9. Mass Media and Religious Culture of the Audiences; Suggesting a Useful Approach to Media Productions for Children

    Directory of Open Access Journals (Sweden)

    Nasser Bahonar

    2008-10-01

    Full Text Available Religious mass media programs produced exclusively for children have grown significantly in recent years. The artistic retelling of stories about the lives of the great prophets and the history of Islam, as well as the use of theatrical literature on religious occasions, can herald success in this neglected field. What remains questionable in national religious policies, however, is why those involved in the religious education of children, whether in traditional media (family, mosques, religious communities, etc.) or in modern media (textbooks, press, radio and television), do not follow an integrated and coherent policy based on a proven theoretical view of religious communications. In fact, this question stems from the long-standing opposition between audience-oriented and media-oriented approaches in communications, as well as the opposition between cognitivism and other approaches in psychology. The findings of the field study conducted by the author, along with the psychological achievements of cognitivism in human communications and culturally audience-oriented approaches, especially reception theory in mass communications, can resolve some existing difficulties in the formulation of religious messages. Drawing upon the above-mentioned theoretical schools, this article introduces a useful approach to producing religious programs for children and describes the main tasks of mass media in this field accordingly.

  10. Demonstration of isotope-mass balance approach for water budget analyses of El-Burulus Lake, Nile Delta, Egypt

    International Nuclear Information System (INIS)

    Sadek, M.A.

    2006-01-01

    The major elements of the El-Burulus lake water system are rainfall, agricultural drainage discharge, groundwater, human activities, evaporation, and water exchange between the lake and the Mediterranean Sea. The principal input sources are agricultural drainage (8 drains at the southern border of the lake) and sea water, as well as some contribution from precipitation, groundwater and human activities. Water is lost from the lake through evaporation and surface outflow. The present study was conducted using an isotopic mass balance approach to investigate the water balance of the El-Burulus lake and to emphasize the relative contribution of the different input/output components which affect the environmental and hydrological terms of the system. An isotopic evaporation pan experiment was performed to estimate the parameters relevant to the water balance (isotopic composition of free air moisture and evaporating flux) and to simulate the isotopic enrichment of evaporation under atmospheric and hydraulic control. The isotopic mass balance approach employed herein facilitated the estimation of groundwater inflow to the lake, the evaporated fraction of total lake inflow (E/I) and of outflow (E/O), the ratio of surface inflow to surface outflow (I/O), as well as the residence time of lake water. The isotopic mass balance approach was validated by comparing the values of the estimated parameters with previous hydrological investigations; a good match was found. The relevance of this approach lies in its integrative scale and its simpler implementation

  11. Mass transfer and slag-metal reaction in ladle refining : a CFD approach

    OpenAIRE

    Ramström, Eva

    2009-01-01

    In order to optimise the ladle treatment, mass transfer modelling of aluminium addition and homogenisation time was carried out. It was stressed that incorporating slag-metal reactions into the mass transfer modelling would strongly enhance the reliability and the amount of information to be analyzed from the CFD calculations. In the present work, a thermodynamic model taking all the involved slag-metal reactions into consideration was incorporated into a 2-D fluid flow model of an argon stirr

  12. Approaches for the analysis of low molecular weight compounds with laser desorption/ionization techniques and mass spectrometry.

    Science.gov (United States)

    Bergman, Nina; Shevchenko, Denys; Bergquist, Jonas

    2014-01-01

    This review summarizes various approaches for the analysis of low molecular weight (LMW) compounds by different laser desorption/ionization mass spectrometry techniques (LDI-MS). It is common to use an agent to assist the ionization, and small molecules are normally difficult to analyze by, e.g., matrix assisted laser desorption/ionization mass spectrometry (MALDI-MS) using the common matrices available today, because the latter are generally small organic compounds themselves. This often results in severe suppression of analyte peaks, or interference of the matrix and analyte signals in the low mass region. However, intrinsic properties of several LDI techniques such as high sensitivity, low sample consumption, high tolerance towards salts and solid particles, and rapid analysis have stimulated scientists to develop methods to circumvent matrix-related issues in the analysis of LMW molecules. Recent developments within this field as well as historical considerations and future prospects are presented in this review.

  13. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
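
    Stated as a formula (our notation): for weighting functions w and v, with weighted averages defined by \bar{x}_w = \sum_i w_i x_i / \sum_i w_i and \bar{x}_v defined analogously,

      \[
      \bar{x}_w - \bar{x}_v
      \;=\;
      \frac{\operatorname{Cov}_v\!\big(x,\; w/v\big)}{\operatorname{E}_v\!\big[w/v\big]},
      \]

    where E_v and Cov_v denote the mean and covariance taken with v as the weighting function. The identity follows by writing \bar{x}_w = \operatorname{E}_v[rx]/\operatorname{E}_v[r] with r = w/v and subtracting \bar{x}_v = \operatorname{E}_v[x].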

  14. Slovenian National Landslide DataBase – A promising approach to slope mass movement prevention plan

    Directory of Open Access Journals (Sweden)

    Mihael Ribičič

    2007-12-01

    Full Text Available The Slovenian territory is, geologically speaking, very diverse and mainly composed of sediments or sedimentary rocks. Slope mass movements occur in almost all parts of the country. In the Alpine carbonate areas of northern Slovenia, rock falls, rock slides and even debris flows can be triggered. In the mountainous regions of central Slovenia, composed of different clastic rocks, large soil landslides are quite common, and in the young soil sediments of eastern Slovenia there is a high density of small soil landslides. The damage caused by slope mass movements is high, but no common strategy and regulations to tackle this unwanted phenomenon, especially from the aspect of prevention, have yet been developed. One of the first steps towards an effective strategy against landslides and other slope mass movements is a central landslide database, in which (ideally) all known landslide occurrences would be reported and described in as much detail as possible. At the end of the National Landslide Database construction project, which concluded in May 2005, more than 6600 landslides were registered, of which almost half occurred at a known location and were accompanied by descriptions of their main characteristics. The established database is an opportunity for Slovenia to finally launch a solid slope mass movement prevention plan. The most important missing element is a legal act making the reporting of slope mass movement events to the database obligatory.

  15. New approach to 3-D, high sensitivity, high mass resolution space plasma composition measurements

    International Nuclear Information System (INIS)

    McComas, D.J.; Nordholt, J.E.

    1990-01-01

    This paper describes a new type of 3-D space plasma composition analyzer. The design combines high sensitivity, high mass resolution measurements with somewhat lower mass resolution but even higher sensitivity measurements in a single compact and robust design. While the lower resolution plasma measurements are achieved using conventional straight-through time-of-flight mass spectrometry, the high mass resolution measurements are made by timing ions reflected in a linear electric field (LEF), where the restoring force that an ion experiences is proportional to the depth it travels into the LEF region. Consequently, the ion's equation of motion in that dimension is that of a simple harmonic oscillator and its travel time is simply proportional to the square root of the ion's mass/charge (m/q). While in an ideal LEF, the m/q resolution can be arbitrarily high, in a real device the resolution is limited by the field linearity which can be achieved. In this paper we describe how a nearly linear field can be produced and discuss how the design can be optimized for various different plasma regimes and spacecraft configurations
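
    The timing property follows in a few lines; the field constant k below is our notation. In an ideal LEF the axial field is E_z(z) = -kz, so an ion of mass m and charge q entering at z = 0 obeys

      \[
      m\,\ddot{z} = -q k z
      \quad\Longrightarrow\quad
      \omega = \sqrt{qk/m},
      \qquad
      t_{\mathrm{return}} = \frac{\pi}{\omega} = \pi\sqrt{\frac{m}{qk}}
      \;\propto\; \sqrt{m/q},
      \]

    a half-period that is independent of the ion's entry energy, which is why all ions of a given m/q return to the entrance plane in phase.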

  16. Comparison of a Mass Balance and an Ecosystem Model Approach when Evaluating the Carbon Cycling in a Lake Ecosystem

    International Nuclear Information System (INIS)

    Andersson, Eva; Sobek, Sebastian

    2006-01-01

    Carbon budgets are frequently used in order to understand the pathways of organic matter in ecosystems, and they also have an important function in the risk assessment of harmful substances. We compared two approaches, mass balance calculations and an ecosystem budget, to describe carbon processing in a shallow, oligotrophic hardwater lake. Both approaches come to the same main conclusion, namely that the lake is a net autotrophic ecosystem, in spite of its high dissolved organic carbon and low total phosphorus concentrations. However, there were several differences between the carbon budgets, e.g. in the rate of sedimentation and the air-water flux of CO₂. The largest uncertainty in the mass balance is the contribution of emergent macrophytes to the carbon cycling of the lake, while the ecosystem budget is very sensitive towards the choice of conversion factors and literature values. While the mass balance calculations produced more robust results, the ecosystem budget gave valuable insights into the pathways of organic matter transfer in the ecosystem. We recommend that when using an ecosystem budget for the risk assessment of harmful substances, mass balance calculations should be performed in parallel in order to increase the robustness of the conclusions

  17. EXPOSURE TO MASS MEDIA AS A DOMINANT FACTOR INFLUENCING PUBLIC STIGMA TOWARD MENTAL ILLNESS BASED ON SUNRISE MODEL APPROACH

    Directory of Open Access Journals (Sweden)

    Ni Made Sintha Pratiwi

    2018-05-01

    Full Text Available Background: A person suffering from a mental disorder is burdened not only by the condition itself but also by the associated stigma. The impact of stigma on society is so strong that it is considered an obstacle to the treatment of mental disorders. Stigma, as society's adverse view of severe mental disorders, is related to cultural aspects. The interactions among the components of the sunrise model, a nursing model developed by Madeleine Leininger, are connected with wider societal views of severe mental disorders. Objective: The aim of this study was to analyze the factors related to public stigma and to identify the dominant factors related to public stigma about severe mental illness through a sunrise model approach in Sukonolo Village, Malang Regency. Methods: This study used an observational analytical design with a cross-sectional approach. A total of 150 respondents contributed to this study, obtained using a purposive sampling technique. Results: The results showed a significant relationship of mass media exposure, spiritual well-being, interpersonal contact, attitude, and knowledge with public stigma about mental illness. Multiple logistic regression showed that low exposure to mass media had the highest OR value, at 26.744. Conclusion: There were significant correlations between mass media exposure, spiritual well-being, interpersonal contact, attitude, and knowledge and public stigma toward mental illness. Mass media exposure was the dominant factor influencing public stigma toward mental illness.

  18. A genome-wide approach accounting for body mass index identifies genetic variants influencing fasting glycemic traits and insulin resistance

    DEFF Research Database (Denmark)

    Manning, Alisa K; Hivert, Marie-France; Scott, Robert A

    2012-01-01

    pathways might be uncovered by accounting for differences in body mass index (BMI) and potential interactions between BMI and genetic variants. We applied a joint meta-analysis approach to test associations with fasting insulin and glucose on a genome-wide scale. We present six previously unknown loci...... associated with fasting insulin at P triglyceride and lower high-density lipoprotein (HDL) cholesterol levels, suggesting a role for these loci...

  19. Mass dispersions in a time-dependent mean-field approach

    International Nuclear Information System (INIS)

    Balian, R.; Bonche, P.; Flocard, H.; Veneroni, M.

    1984-05-01

    Characteristic functions for single-particle (s.p.) observables are evaluated by means of a time-dependent variational principle, which involves a state and an observable as conjugate variables. This provides a mean-field expression for fluctuations of s.p. observables, such as mass dispersions. The result differs from TDHF; it requires only the use of existing codes, and it presents attractive theoretical features. First numerical tests are encouraging. In particular, a calculation for ¹⁶O + ¹⁶O provides a significant increase of the predicted mass dispersion

  20. Reconnaissance Estimates of Recharge Based on an Elevation-dependent Chloride Mass-balance Approach

    Energy Technology Data Exchange (ETDEWEB)

    Charles E. Russell; Tim Minor

    2002-08-31

    Significant uncertainty is associated with efforts to quantify recharge in arid regions such as southern Nevada. However, accurate estimates of groundwater recharge are necessary for understanding the long-term sustainability of groundwater resources and for predicting groundwater flow rates and directions. Currently, the most widely accepted method for estimating recharge in southern Nevada is the Maxey and Eakin method. This method has been applied to most basins within Nevada and has been independently verified as a reconnaissance-level estimate of recharge through several studies. Recharge estimates derived from the Maxey and Eakin and other recharge methodologies ultimately based upon measures or estimates of groundwater discharge (outflow methods) should be augmented by a tracer-based aquifer-response method. The objective of this study was to improve an existing aquifer-response method based on the chloride mass-balance approach. Improvements were designed to incorporate spatial variability within recharge areas (rather than treating recharge as a lumped parameter), to develop a more defensible lower limit of recharge, and to differentiate local recharge from recharge emanating as interbasin flux. Seventeen springs, located in the Sheep Range, Spring Mountains, and on the Nevada Test Site, were sampled during the course of this study and their discharge was measured. The chloride and bromide concentrations of the springs were determined. Discharge and chloride concentrations from these springs were compared to estimates provided by previously published reports. A literature search yielded previously published estimates of chloride flux to the land surface. ³⁶Cl/Cl ratios and discharge rates of the three largest springs in the Amargosa Springs discharge area were compiled from various sources. This information was utilized to determine an effective chloride concentration for recharging precipitation and its associated uncertainty via Monte Carlo simulations
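
    A minimal Monte Carlo sketch of the chloride mass-balance (CMB) estimate is given below. The recharge formula is the standard textbook form of the CMB approach, and all distribution parameters are illustrative placeholders, not values from this study.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Illustrative input distributions -- placeholder values, not study data:
      precip = rng.normal(250.0, 30.0, n)              # precipitation, mm/yr
      cl_precip = rng.lognormal(np.log(0.4), 0.3, n)   # effective Cl in precip, mg/L
      cl_spring = rng.normal(15.0, 2.0, n)             # Cl in spring water, mg/L

      # Classic chloride mass balance: recharge = P * Cl_precip / Cl_groundwater.
      recharge = precip * cl_precip / cl_spring        # mm/yr

      print(np.percentile(recharge, [2.5, 50.0, 97.5]))  # uncertainty bounds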

  1. Bilarge neutrino mixing and mass of the lightest neutrino from third generation dominance in a democratic approach

    International Nuclear Information System (INIS)

    Dermisek, Radovan

    2004-01-01

    We show that both small mixing in the quark sector and large mixing in the lepton sector can be obtained from a simple assumption of universality of Yukawa couplings and the right-handed neutrino Majorana mass matrix in leading order. We discuss conditions under which bilarge mixing in the lepton sector is achieved with a minimal amount of fine-tuning requirements for possible models. From knowledge of the solar and atmospheric mixing angles we determine the allowed values of sin θ₁₃. If embedded into grand unified theories, third generation Yukawa coupling unification is a generic feature, while the masses of the first two generations of charged fermions depend on small perturbations. In the neutrino sector, the heavier two neutrinos are model dependent, while the mass of the lightest neutrino in this approach does not depend on perturbations in leading order. The right-handed neutrino mass scale can be identified with the GUT scale, in which case the mass of the lightest neutrino is given as (m²_top/M_GUT) sin²θ₂₃ sin²θ₁₂ in the limit sin θ₁₃ ≅ 0. Discussing symmetries, we make a connection with hierarchical models and show that the basis-independent characteristic of this scenario is a strong dominance of the third generation right-handed neutrino, M₁, M₂ ≲ 10⁻⁴ M₃, with M₃ = M_GUT

  2. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    Science.gov (United States)

    Ran, J.; Ditmar, P.; Klees, R.; Farahani, H. H.

    2018-03-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted into mass anomalies per mascon. The limited spectral content of the gravity anomalies is properly accounted for by applying a low-pass filter as part of the inversion procedure to make the functional model spectrally consistent with the data. The full error covariance matrices of the monthly GRACE solutions are properly propagated using the law of covariance propagation. Using numerical experiments, we demonstrate the importance of a proper data weighting and of the spectral consistency between functional model and data. The developed methodology is applied to process real GRACE level-2 data (CSR RL05). The obtained mass anomaly estimates are integrated over five drainage systems, as well as over entire Greenland. We find that the statistically optimal data weighting reduces random noise by 35-69%, depending on the drainage system. The obtained mass anomaly time-series are de-trended to eliminate the contribution of ice discharge and are compared with de-trended surface mass balance (SMB) time-series computed with the Regional Atmospheric Climate Model (RACMO 2.3). We show that when using a statistically optimal data weighting in GRACE data processing, the discrepancies between GRACE-based estimates of SMB and modelled SMB are reduced by 24-47%.
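
    The two key linear-algebra steps can be summarized as follows; the matrix symbols are ours, not the authors':

      \[
      \mathbf{d} = \mathbf{A}\,\mathbf{x},
      \qquad
      \mathbf{C}_d = \mathbf{A}\,\mathbf{C}_x\,\mathbf{A}^{\mathsf{T}},
      \qquad
      \hat{\mathbf{m}} =
      \big(\mathbf{H}^{\mathsf{T}}\mathbf{C}_d^{-1}\mathbf{H}\big)^{-1}
      \mathbf{H}^{\mathsf{T}}\mathbf{C}_d^{-1}\,\mathbf{d},
      \]

    where x are the monthly spherical-harmonic coefficients with full covariance C_x, A synthesizes the gravity anomalies d at satellite altitude (so that C_d follows by covariance propagation), and H maps per-mascon mass anomalies to those gravity anomalies, including the low-pass filter that keeps the functional model spectrally consistent with the data. The generalized least-squares estimate \hat{m} implements the statistically optimal weighting described in the abstract.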

  3. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The gross properties of nuclei were investigated with a statistical model, separately for systems with equal and with different numbers of protons and neutrons, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated from the energy density of nuclear matter, based on the Weizsäcker-Bethe semiempirical mass formula generalized for compressible nuclei. In the study of the surface energy coefficient, the strong influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.)

  4. The mass transfer approach to multivariate discrete first order stochastic dominance

    DEFF Research Database (Denmark)

    Østerdal, Lars Peter Raahave

    2010-01-01

    A fundamental result in the theory of stochastic dominance states that first order dominance between two finite multivariate distributions is equivalent to the property that one can be obtained from the other by shifting probability mass from one outcome to another that is worse a finite numbe

  5. Novel approach to determine ghrelin analogs by liquid chromatography with mass spectrometry using a monolithic column

    Czech Academy of Sciences Publication Activity Database

    Zemenová, Jana; Sýkora, D.; Adámková, H.; Maletínská, Lenka; Elbert, Tomáš; Marek, Aleš; Blechová, Miroslava

    2017-01-01

    Vol. 40, No. 5 (2017), pp. 1032-1039. ISSN 1615-9306. Institutional support: RVO:61388963. Keywords: enzyme-linked immunosorbent assay * ghrelin * lipopeptides * liquid chromatography mass spectrometry * monolithic columns. Subject RIV: CB - Analytical Chemistry, Separation. OBOR OECD: Analytical chemistry. Impact factor: 2.557, year: 2016

  6. Integrated genomic approaches implicate osteoglycin (Ogn) in the regulation of left ventricular mass

    NARCIS (Netherlands)

    Petretto, Enrico; Sarwar, Rizwan; Grieve, Ian; Lu, Han; Kumaran, Mande K.; Muckett, Phillip J.; Mangion, Jonathan; Schroen, Blanche; Benson, Matthew; Punjabi, Prakash P.; Prasad, Sanjay K.; Pennell, Dudley J.; Kiesewetter, Chris; Tasheva, Elena S.; Corpuz, Lolita M.; Webb, Megan D.; Conrad, Gary W.; Kurtz, Theodore W.; Kren, Vladimir; Fischer, Judith; Hubner, Norbert; Pinto, Yigal M.; Pravenec, Michal; Aitman, Timothy J.; Cook, Stuart A.

    2008-01-01

    Left ventricular mass (LVM) and cardiac gene expression are complex traits regulated by factors both intrinsic and extrinsic to the heart. To dissect the major determinants of LVM, we combined expression quantitative trait locus and quantitative trait transcript (QTT) analyses of the cardiac

  7. Constraint specification in architecture : a user-oriented approach for mass customization

    NARCIS (Netherlands)

    Niemeijer, R.A.

    2011-01-01

    The last several decades, particularly those after the end of World War II, have seen an increasing industrialization of the housing industry. This was partially driven by the large demand for new houses that resulted from the baby boom and the destruction caused by the war. By adopting mass

  8. A rapid approach for characterization of thiol-conjugated antibody-drug conjugates and calculation of drug-antibody ratio by liquid chromatography mass spectrometry.

    Science.gov (United States)

    Firth, David; Bell, Leonard; Squires, Martin; Estdale, Sian; McKee, Colin

    2015-09-15

    We present the demonstration of a rapid "middle-up" liquid chromatography mass spectrometry (LC-MS)-based workflow for use in the characterization of thiol-conjugated maleimidocaproyl-monomethyl auristatin F (mcMMAF) and valine-citrulline-monomethyl auristatin E (vcMMAE) antibody-drug conjugates. Deconvoluted spectra were generated following a combination of deglycosylation, IdeS (immunoglobulin-degrading enzyme from Streptococcus pyogenes) digestion, and reduction steps, providing a visual representation of the product for rapid lot-to-lot comparison, a means to quickly assess the integrity of the antibody structure and the applied conjugation chemistry by mass. The relative abundance of the detected ions also offers information regarding differences in drug conjugation levels between samples, and the average drug-antibody ratio can be calculated. The approach requires little material (<100 μg) and, thus, is amenable to small-scale process development testing or as an early component of a complete characterization project, facilitating informed decision making regarding which aspects of a molecule might need to be examined in more detail by orthogonal methodologies. Copyright © 2015 Elsevier Inc. All rights reserved.
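
    For the drug-antibody ratio (DAR) calculation, a common way to summarize deconvoluted "middle-up" data is an intensity-weighted mean over the observed drug loads. The snippet below is a generic sketch under that assumption, not the authors' software, and the example abundances are hypothetical.

      def average_dar(intensities):
          # Intensity-weighted mean drug load over the deconvoluted species.
          # `intensities` maps drug load k (drugs per antibody) to the relative
          # abundance of that species in the deconvoluted spectrum.
          total = sum(intensities.values())
          return sum(k * i for k, i in intensities.items()) / total

      # Hypothetical deconvoluted peak areas for drug loads 0, 2, 4, 6 and 8:
      print(average_dar({0: 5.0, 2: 20.0, 4: 40.0, 6: 25.0, 8: 10.0}))  # 4.3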

  9. Birth weight differences between those offered financial voucher incentives for verified smoking cessation and control participants enrolled in the Cessation in Pregnancy Incentives Trial (CPIT), employing an intuitive approach and a Complier Average Causal Effects (CACE) analysis.

    Science.gov (United States)

    McConnachie, Alex; Haig, Caroline; Sinclair, Lesley; Bauld, Linda; Tappin, David M

    2017-07-20

    The Cessation in Pregnancy Incentives Trial (CPIT), which offered financial incentives for smoking cessation during pregnancy, showed a clinically and statistically significant improvement in cessation. However, infant birth weight was not seen to be affected. This study re-examines birth weight using an intuitive and a complier average causal effects (CACE) method to uncover important information missed by intention-to-treat analysis. CPIT offered financial incentives up to £400 to pregnant smokers to quit. With incentives, 68 women (23.1%) were confirmed non-smokers at primary outcome, compared to 25 (8.7%) without incentives, a difference of 14.3% (Fisher test, p financial incentives to quit. Viewed in this way, the overall birth weight gain with incentives is attributable only to potential quitters. We compared an intuitive approach to a CACE analysis. Mean birth weight of potential quitters in the incentives intervention group (who therefore quit) was 3338 g compared with potential quitters in the control group (who did not quit) 3193 g. The difference attributable to incentives was 3338 - 3193 = 145 g (95% CI -617, +803). The mean difference in birth weight between the intervention and control groups was 21 g, and the difference in the proportion who managed to quit was 14.3%. Since the intervention consisted of the offer of incentives to quit smoking, the intervention was received by all women in the intervention group. However, "compliance" was successfully quitting with incentives, and the CACE analysis yielded an identical result, causal birth weight increase 21 g ÷ 0.143 = 145 g. Policy makers have great difficulty giving pregnant women money to stop smoking. This study indicates that a small, clinically insignificant improvement in average birth weight is likely to hide an important clinically significant increase in infants born to pregnant smokers who want to stop but cannot achieve smoking cessation without the addition of financial
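
    The CACE arithmetic described in the abstract is simply the intention-to-treat effect scaled by the compliance difference; a minimal sketch, using the rounded figures quoted above:

      # CACE as quoted in the abstract: the intention-to-treat (ITT) effect
      # divided by the difference in verified quit rates between the arms.
      itt_effect_g = 21.0      # ITT birth weight difference, grams
      compliance_diff = 0.143  # 23.1% - 8.7% quit-rate difference, as quoted

      cace = itt_effect_g / compliance_diff
      print(round(cace))  # ~147 g with these rounded inputs; the paper quotes
                          # 145 g, presumably from unrounded values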

  10. Re-assessing Present Day Global Mass Transport and Glacial Isostatic Adjustment From a Data Driven Approach

    Science.gov (United States)

    Wu, X.; Jiang, Y.; Simonsen, S.; van den Broeke, M. R.; Ligtenberg, S.; Kuipers Munneke, P.; van der Wal, W.; Vermeersen, B. L. A.

    2017-12-01

    Determining present-day mass transport (PDMT) is complicated by the fact that most observations contain signals from both present-day ice melting and Glacial Isostatic Adjustment (GIA). Despite decades of progress in geodynamic modeling and new observations, significant uncertainties remain in both. The key to separating present-day ice mass change from GIA signals is to include data with different physical characteristics. We designed an approach to separate PDMT and GIA signatures by estimating them simultaneously using globally distributed interdisciplinary data with distinct physical information and a dynamically constructed a priori GIA model. We conducted a high-resolution global reappraisal of present-day ice mass balance, with a focus on Earth's polar regions and their contribution to global sea-level rise, using a combination of ICESat altimetry, GRACE gravity, surface geodetic velocity data, and an ocean bottom pressure model. Adding ice altimetry supplies critically needed dual data types over the interiors of ice-covered regions, enhancing the separation of PDMT and GIA signatures and yielding an expected half-order-of-magnitude improvement in accuracy for GIA and, consequently, for ice mass balance estimates. The global, data-driven approach can adequately address PDMT- and GIA-induced geocenter motion and long-wavelength signatures important for large areas such as Antarctica and for global mean sea level. In conjunction with the dense altimetry data, we solved for PDMT coefficients up to degree and order 180 by using a higher-resolution GRACE data set and a high-resolution a priori PDMT model that includes detailed geographic boundaries. The high-resolution approach solves the problem of multiple resolutions in the various data types, greatly reduces aliasing errors from low-degree truncation, and at the same time enhances the separation of signatures from adjacent regions such as Greenland and the Canadian Arctic territories.

  11. Approach for domestic preparation of standard material (LSD spike) for isotope dilution mass spectrometry

    International Nuclear Information System (INIS)

    Ishikawa, Fumitaka; Sumi, Mika; Chiba, Masahiko; Suzuki, Toru; Abe, Tomoyuki; Kuno, Yusuke

    2008-01-01

    Accountancy analysis of nuclear fuel material at the Plutonium Fuel Development Center of JAEA is performed by isotope dilution mass spectrometry (IDMS). IDMS requires a standard material called the LSD spike (Large Size Dried spike), which is indispensable for accountancy in facilities where nuclear fuel materials are handled. Although the LSD spike and the Pu source material have been supplied from foreign countries, transportation of such materials has recently become more difficult. This difficulty may affect the operation of nuclear facilities in the future. Therefore, research and development of a domestic LSD spike and base material has been performed at JAEA. Certification of such standard nuclear materials, including spikes produced in Japan, is being studied. This report presents the current status and the future plan for the technological development. (author)

  12. Real estate market and building energy performance: Data for a mass appraisal approach.

    Science.gov (United States)

    Bonifaci, Pietro; Copiello, Sergio

    2015-12-01

    Mass appraisal is widely considered an advanced frontier in the real estate valuation field. Performing mass appraisal entails access to the base information conveyed by a large number of transactions, such as prices and property features. Due to the lack of transparency of many Italian real estate market segments, our survey gathered data from residential property advertisements. The dataset specifically focuses on property offer prices and dwelling energy efficiency, the latter referring to the label expressed and exhibited by the energy performance certificate. Moreover, the data are georeferenced with the highest possible accuracy: at the neighborhood level for 76.8% of cases and at the street or building-number level for the remaining 23.2%. The data relate to the analysis performed in Bonifaci and Copiello [1] on the relationship between house prices and building energy performance, that is to say, the willingness to pay to benefit from more efficient dwellings.
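
    A mass appraisal model of the kind this dataset supports is often a hedonic regression of price on property features and the energy label. The sketch below is illustrative only, with simulated data and hypothetical variable names; it is not the model of Bonifaci and Copiello [1]:

```python
import numpy as np

# Illustrative hedonic price model: log(price) regressed on floor area and
# dummy-coded energy labels (all variable names and data are hypothetical).
rng = np.random.default_rng(1)
n = 500
area = rng.uniform(40, 160, n)                       # floor area, m^2
label = rng.integers(0, 4, n)                        # 0 = worst ... 3 = best (coarse)
log_price = 10.0 + 0.008 * area + 0.05 * label + rng.normal(0, 0.1, n)

# Design matrix: intercept, area, label dummies (label 0 as baseline)
X = np.column_stack([np.ones(n), area] + [(label == k).astype(float) for k in (1, 2, 3)])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# beta[2:] estimate the log-price premium of each label over the baseline,
# i.e. the willingness to pay for better energy performance.
print(np.round(beta, 4))
```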

  13. Westgate Shootings: An Emergency Department Approach to a Mass-casualty Incident.

    Science.gov (United States)

    Wachira, Benjamin W; Abdalla, Ramadhani O; Wallis, Lee A

    2014-10-01

    At approximately 12:30 pm on Saturday, September 21, 2013, armed assailants attacked the upscale Westgate shopping mall in the Westlands area of Nairobi, Kenya. Using the seven key Major Incident Medical Management and Support (MIMMS) principles (command, safety, communication, assessment, triage, treatment, and transport), the Aga Khan University Hospital, Nairobi (AKUH,N) emergency department (ED) successfully coordinated the reception and care of all the casualties brought to the hospital. This report describes the AKUH,N ED response to the first civilian mass-casualty shooting incident in Kenya, with the hope of informing the development and implementation of mass-casualty emergency preparedness plans by other EDs and hospitals in Kenya, appropriate for the local health care system.

  14. Mathematical modelling of the mass-spring-damper system - A fractional calculus approach

    Directory of Open Access Journals (Sweden)

    Jesus Bernal Alvarado

    2012-08-01

    In this paper the fractional differential equation for the mass-spring-damper system in terms of fractional time derivatives of the Caputo type is considered. In order to be consistent with the physical equation, a new parameter is introduced. This parameter characterizes the existence of fractional components in the system. A relation between the fractional-order time derivative and the new parameter is found. Different particular cases are analyzed.
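
    A common numerical route to such fractional oscillators is the Grünwald-Letnikov discretization of the fractional derivative. The sketch below integrates a generic oscillator with fractional damping, m·x'' + c·D^α x + k·x = 0; all parameter values are illustrative, and this is not the paper's specific auxiliary-parameter model:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * binom(alpha, j), via the
    recurrence w_j = w_{j-1} * (1 - (alpha + 1)/j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_oscillator(m=1.0, c=0.4, k=2.0, alpha=0.8,
                          T=20.0, h=0.01, x0=1.0, v0=0.0):
    """Integrate m*x'' + c*D^alpha x + k*x = 0 with an explicit scheme.

    The fractional damping term is approximated by the Grunwald-Letnikov
    convolution over the stored displacement history.
    """
    n = int(T / h)
    w = gl_weights(alpha, n)
    x = np.empty(n + 1)
    v = np.empty(n + 1)
    x[0], v[0] = x0, v0
    for i in range(n):
        d_alpha = np.dot(w[:i + 1], x[i::-1]) / h**alpha  # D^alpha x at t_i
        a = -(c * d_alpha + k * x[i]) / m
        v[i + 1] = v[i] + h * a
        x[i + 1] = x[i] + h * v[i + 1]  # semi-implicit Euler update
    return x, v

x, _ = fractional_oscillator()
print(x[::500][:5])  # a damped oscillation for these illustrative parameters
```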

  15. Mass lesions in chronic pancreatitis: benign or malignant? An "evidence-based practice" approach.

    LENUS (Irish Health Repository)

    Gerstenmaier, Jan F

    2012-02-01

    The diagnosis of a pancreatic mass lesion in the presence of chronic pancreatitis can be extremely challenging. At the same time, a high level of certainty about the diagnosis is necessary for appropriate management planning. The aim of this study was to establish current best evidence about which imaging methods reliably differentiate a benign from a malignant lesion, and show how that evidence is best applied. A diagnostic algorithm based on Bayesian analysis is proposed.
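
    Bayesian updating of a diagnosis from an imaging test result can be written in odds form with likelihood ratios; a minimal sketch with invented test characteristics (this is not the paper's algorithm):

```python
def post_test_probability(pretest, sensitivity, specificity):
    """Positive-test posterior via Bayes' rule in odds form."""
    lr_positive = sensitivity / (1.0 - specificity)  # positive likelihood ratio
    pre_odds = pretest / (1.0 - pretest)
    post_odds = pre_odds * lr_positive
    return post_odds / (1.0 + post_odds)

# Invented numbers: 30% pre-test probability of malignancy, a test with
# 85% sensitivity and 90% specificity.
print(round(post_test_probability(0.30, 0.85, 0.90), 3))  # -> 0.785
```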

  16. Trip time prediction in mass transit companies. A machine learning approach

    OpenAIRE

    João M. Moreira; Alípio Jorge; Jorge Freire de Sousa; Carlos Soares

    2005-01-01

    In this paper we discuss how trip time prediction can be useful for operational optimization in mass transit companies and which machine learning techniques can be used to improve results. Firstly, we analyze which departments need trip time prediction and when. Secondly, we review related work, and thirdly we present the analysis of trip time over a particular path. We proceed by presenting experimental results conducted on real data with the forecasting techniques we found most adequate, and concl...

  17. Brute-Force Approach for Mass Spectrometry-Based Variant Peptide Identification in Proteogenomics without Personalized Genomic Data

    Science.gov (United States)

    Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.

    2018-02-01

    In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such a variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for the exclusion of false variant identifications from the search results, involving analysis of modifications that mimic single amino acid substitutions. We also propose a de novo-based scoring scheme for assessing identified point mutations, in which the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.
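
    The core of excluding modifications that mimic substitutions is comparing an observed precursor mass shift against the table of residue mass differences. A minimal sketch using standard monoisotopic residue masses (the tolerance and example shift are illustrative):

```python
import itertools

# Monoisotopic residue masses (Da); standard values, double-check before use.
RESIDUE_MASS = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}

def candidate_substitutions(mass_shift, tol=0.01):
    """Return (from, to) residue pairs whose mass difference matches an
    observed precursor mass shift within `tol` Da."""
    hits = []
    for a, b in itertools.permutations(RESIDUE_MASS, 2):
        if abs((RESIDUE_MASS[b] - RESIDUE_MASS[a]) - mass_shift) <= tol:
            hits.append((a, b))
    return hits

# Example: a +0.98402 Da shift matches N->D (and Q->E), i.e. a deamidation-like
# modification that can mimic a single amino acid substitution.
print(candidate_substitutions(0.98402))
```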

  18. Mass Transfer and Chemical Reaction Approach of the Kinetics of the Acetylation of Gadung Flour using Glacial Acetic Acid

    Directory of Open Access Journals (Sweden)

    Andri Cahyo Kumoro

    2015-03-01

    Acetylation is a common method of modifying starch properties by introducing acetyl (CH3CO) groups onto starch molecules at low temperatures. While most acetylation is conducted using starch as the anhydroglucose source and acetic anhydride or vinyl acetate as the nucleophilic agent, this work employs different reactants, namely gadung flour (GF) and glacial acetic acid (GAA). The purposes of this work are to study the effect of reaction pH and GAA/GF mass ratio on the rate of the acetylation reaction and to determine its rate constants. The acetylation of gadung flour with glacial acetic acid in the presence of sodium hydroxide as a homogeneous catalyst was studied at ambient temperature, with pH ranging from 8 to 10 and different mass ratios of acetic acid to gadung flour (1:3, 1:4, and 1:5). It was found that increasing pH led to an increase in the degree of substitution, while increasing the GAA/GF mass ratio caused a decrease in the degree of substitution, due to hydrolysis of the acetylated starch. The desired starch acetylation reaction is accompanied by an undesirable hydrolysis reaction of the acetylated starch after 40-50 minutes of reaction time. Investigation of the reaction kinetics showed that the mass transfer rate constant (Kcs) is smaller than the surface reaction rate constant (k). Thus, it can be concluded that the rate-controlling step is mass transfer. © 2015 BCREC UNDIP. All rights reserved.

  19. Windbreak effect on biomass and grain mass accumulation of corn: a modeling approach

    International Nuclear Information System (INIS)

    Zhang, H.; Brandle, J.R.

    1996-01-01

    While numerous studies have indicated that field windbreaks both improve crop growing conditions and generally enhance crop growth and yield, especially under less favorable conditions, the relationship between the two is not clearly understood. A simple model is proposed to simulate biomass and grain mass accumulation of corn (Zea mays L.) with windbreak shelter or without (exposed conditions). The model is based on the positive relationship between intercepted solar radiation and biomass accumulation and requires plant population and hourly inputs of solar radiation and air temperature. Using published data, radiation use efficiency (RUE) was related to plant population, and a temperature function was established between relative corn growth and temperature for pre-silking stages. Biomass and grain mass simulated by the model agreed well with those measured for both sheltered and unsheltered plants from 1990 to 1992. Windbreaks did not significantly increase biomass or grain mass of corn in this study, even though air temperature was greater with shelter than without, probably indicating that the microclimatic changes induced by windbreaks were not physiologically significant during the 3-yr period studied. The model has potential use in future studies to relate windbreak effects to crop yield and to evaluate windbreak designs for maximum benefits.
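
    The model structure described, biomass increments proportional to intercepted radiation and scaled by a temperature response, can be sketched in a few lines; the RUE value and temperature function below are illustrative stand-ins, not the paper's fitted relationships:

```python
import numpy as np

def temp_factor(t_air, t_opt=28.0, t_base=8.0):
    """Relative growth response to hourly air temperature (0..1); a simple
    stand-in for the pre-silking temperature function fitted in the paper."""
    return np.clip((t_air - t_base) / (t_opt - t_base), 0.0, 1.0)

def accumulate_biomass(par_intercepted, t_air, rue=3.5):
    """Sum hourly biomass increments: dW = RUE * iPAR * f(T).

    par_intercepted : hourly intercepted PAR (MJ m^-2)
    t_air           : hourly air temperature (deg C)
    rue             : g biomass per MJ intercepted PAR (illustrative value)
    """
    par_intercepted = np.asarray(par_intercepted)
    t_air = np.asarray(t_air)
    return float(np.sum(rue * par_intercepted * temp_factor(t_air)))

# Example: three daylight hours of intercepted PAR at moderate temperatures
print(accumulate_biomass([0.4, 0.6, 0.5], [18.0, 24.0, 29.0]))
```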

  20. The Impact of Microstructure Geometry on the Mass Transport in Artificial Pores: A Numerical Approach

    Directory of Open Access Journals (Sweden)

    Matthias Galinsky

    2014-01-01

    The microstructure of porous materials used in heterogeneous catalysis determines the mass transport inside networks, which may vary over many length scales. The theoretical prediction of mass transport phenomena in porous materials, however, is incomplete and still not completely understood. Therefore, experimental data for every specific porous system are needed. One possible experimental technique for characterizing the mass transport in such pore networks is pulse experiments. The general evaluation of the experimental outcomes of these techniques follows the solution of Fick's second law, from which an integral, effective diffusion coefficient is obtained. However, a detailed local understanding of diffusion and sorption processes remains a challenge. As there is a lack of proven models covering different length scales, existing classical concepts need to be evaluated with respect to their ability to reflect local geometries at the nanometer level. In this study, DSMC (Direct Simulation Monte Carlo) models were used to investigate the impact of pore microstructures on the diffusion behaviour of gases. It can be understood as a virtual pulse experiment within a single pore or a combination of different pore geometries.
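
    As a point of reference for such simulations: in the free-molecular regime that DSMC can resolve, a straight cylindrical pore has the classical Knudsen diffusivity D_K = (d/3)·sqrt(8RT/(πM)). A small helper for that reference value (a sanity check, not the DSMC model itself):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def knudsen_diffusivity(d_pore, temperature, molar_mass):
    """Knudsen diffusivity D_K = (d/3) * mean molecular speed.

    d_pore in m, temperature in K, molar_mass in kg/mol; returns m^2/s.
    """
    v_mean = np.sqrt(8.0 * R * temperature / (np.pi * molar_mass))
    return d_pore * v_mean / 3.0

# Example: N2 at 300 K in a 10 nm pore
print(knudsen_diffusivity(10e-9, 300.0, 28.0e-3))  # ~1.6e-6 m^2/s
```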

  1. A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays

    Science.gov (United States)

    Guo, Y. J.; Lee, K. J.; Caballero, R. N.

    2018-04-01

    The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals and construct a Bayesian data-analysis framework to detect unknown masses in the Solar system and to measure their orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments to detect unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10^-11 to 10^-12 M⊙, or measure the mass of the Jovian system to a fractional precision of 10^-8 to 10^-9.

  2. Interrogating the Venom of the Viperid Snake Sistrurus catenatus edwardsii by a Combined Approach of Electrospray and MALDI Mass Spectrometry.

    Directory of Open Access Journals (Sweden)

    Alex Chapeaurouge

    The complete sequence characterization of snake venom proteins by mass spectrometry is rather challenging due to the presence of multiple isoforms from different protein families. In the present study, we investigated the tryptic digest of the venom of the viperid snake Sistrurus catenatus edwardsii by a combined approach of liquid chromatography coupled to either electrospray (online) or MALDI (offline) mass spectrometry. These different ionization techniques proved to be complementary, allowing the identification of a great variety of isoforms of diverse snake venom protein families, as evidenced by the detection of the corresponding unique peptides. For example, ten out of eleven predicted isoforms of serine proteinases of the venom of S. c. edwardsii were distinguished using this approach. Moreover, snake venom protein families not encountered in a previous transcriptome study of the venom gland of this snake were identified. In essence, our results support the notion that complementary ionization techniques in mass spectrometry allow for the detection of even subtle sequence differences in snake venom proteins, which is fundamental for future structure-function relationship and possible drug design studies.

  3. Novel approaches in analysis of Fusarium mycotoxins in cereals employing ultra performance liquid chromatography coupled with high resolution mass spectrometry

    International Nuclear Information System (INIS)

    Zachariasova, M.; Lacina, O.; Malachova, A.; Kostelanska, M.; Poustka, J.; Godula, M.; Hajslova, J.

    2010-01-01

    Rapid, simple and cost-effective analytical methods with performance characteristics matching regulatory requirements are needed for effective control of the occurrence of Fusarium toxins in cereals and the cereal-based products to which they might be transferred during processing. Within this study, two alternative approaches enabling retrospective data analysis and identification of unknown signals in sample extracts have been implemented and validated for the determination of 11 major Fusarium toxins. In both cases, ultra-high performance liquid chromatography (U-HPLC) coupled with high resolution mass spectrometry (HR MS) was employed. 13C-isotopically labeled surrogates as well as matrix-matched standards were used for quantification. When a time-of-flight mass analyzer (TOF-MS) was used as the detection tool, a modified QuEChERS (quick, easy, cheap, effective, rugged and safe) sample preparation procedure, widely employed in multi-pesticide residue analysis, was shown to be an optimal approach to obtain low detection limits. The second, challenging alternative, enabling direct analysis of crude extract, was the use of a mass analyzer based on Orbitrap technology. In addition to demonstrating full compliance of the new methods with Commission Regulation (EC) No. 401/2006, their potential to be used for confirmatory purposes according to Commission Decision 2002/657/EC has also been critically assessed.

  4. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, the biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found as to which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps associated with averaging mixing ratios obtained from logarithmic retrievals.
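
    The underlying bias mechanism is Jensen's inequality: averaging log-abundances yields the geometric mean, which falls below the arithmetic mean whenever variability is large. A toy numerical illustration with lognormal "abundances" (not the authors' retrieval simulator):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "true" trace-gas abundances with large natural variability
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

linear_mean = x.mean()                 # average of the abundances
log_mean = np.exp(np.log(x).mean())    # exp of averaged log-abundances (geometric mean)

print(f"linear averaging:      {linear_mean:.3f}")   # ~ exp(sigma^2 / 2) = 1.649
print(f"logarithmic averaging: {log_mean:.3f}")      # ~ 1.0
```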

  5. A novel approach to finely tuned supersymmetric standard models: The case of the non-universal Higgs mass model

    Science.gov (United States)

    Yamaguchi, Masahiro; Yin, Wen

    2018-02-01

    Discarding the prejudice about fine-tuning, we propose a novel and efficient approach to identify relevant regions of the fundamental parameter space in supersymmetric models with some amount of fine-tuning. The essential idea is the mapping of experimental constraints at a low-energy scale, rather than of the parameter sets, to the fundamental parameter space. Applying this method to the non-universal Higgs mass model, we identify a new interesting superparticle mass pattern in which some of the first-two-generation squarks are light while the stops are kept as heavy as 6 TeV. Furthermore, as another application of this method, we show that the discrepancy in the muon anomalous magnetic dipole moment can be accommodated by a supersymmetric contribution within the 1σ level of the experimental and theoretical errors, a possibility overlooked by previous studies due to the extreme fine-tuning required.

  6. An Artificial Gravity Spacecraft Approach which Minimizes Mass, Fuel and Orbital Assembly Requirements

    Science.gov (United States)

    Bell, L.

    2002-01-01

    The Sasakawa International Center for Space Architecture (SICSA) is undertaking a multi-year research and design study that is exploring near and long-term commercial space development opportunities. Space tourism in low-Earth orbit (LEO), and possibly beyond LEO, comprises one business element of this plan. Supported by a financial gift from the owner of a national U.S. hotel chain, SICSA has examined opportunities, requirements and facility concepts to accommodate up to 100 private citizens and crewmembers in LEO, as well as on lunar/planetary rendezvous voyages. SICSA's artificial gravity Science Excursion Vehicle ("AGSEV") design which is featured in this presentation was conceived as an option for consideration to enable round-trip travel to Moon and Mars orbits and back from LEO. During the course of its development, the AGSEV would also serve other important purposes. An early assembly stage would provide an orbital science and technology testbed for artificial gravity demonstration experiments. An ultimate mature stage application would carry crews of up to 12 people on Mars rendezvous missions, consuming approximately the same propellant mass required for lunar excursions. Since artificial gravity spacecraft that rotate to create centripetal accelerations must have long spin radii to limit adverse effects of Coriolis forces upon inhabitants, SICSA's AGSEV design embodies a unique tethered body concept which is highly efficient in terms of structural mass and on-orbit assembly requirements. The design also incorporates "inflatable" as well as "hard" habitat modules to optimize internal volume/mass relationships. Other important considerations and features include: maximizing safety through element and system redundancy; means to avoid destabilizing mass imbalances throughout all construction and operational stages; optimizing ease of on-orbit servicing between missions; and maximizing comfort and performance through careful attention to human needs. A

  7. A simple theoretical approach to determine relative ion yield (RIY) in glow discharge mass spectrometry (GDMS)

    Energy Technology Data Exchange (ETDEWEB)

    Born, Sabine [Degussa AG, Hanau (Germany); Matsunami, Noriaki [Nagoya Univ. (Japan). Faculty of Engineering; Tawara, Hiroyuki [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2000-01-01

    Direct current glow discharge mass spectrometry (dc-GDMS) has been applied to detect impurities in metals. The aim of this study is to understand quantitatively the processes taking place in GDMS and establish a model to calculate the relative ion yield (RIY), which is inversely proportional to the relative sensitivity factor (RSF), in order to achieve better agreement between the calculated and the experimental RIYs. A comparison is made between the calculated RIY of the present model and the experimental RIY, and also with other models. (author)

  8. A perturbative approach to mass-generation - the non-linear sigma model

    International Nuclear Information System (INIS)

    Davis, A.C.; Nahm, W.

    1985-01-01

    A calculational scheme is presented to include non-perturbative effects into the perturbation expansion. As an example we use the O(N + 1) sigma model. The scheme uses a natural parametrisation such that the lagrangian can be written in a form normal-ordered with respect to the O(N + 1) symmetric vacuum plus vacuum expectation values, the latter calculated by symmetry alone. Including such expectation values automatically leads to the inclusion of a mass-gap in the perturbation series. (orig.)

  9. Symplectic no-core shell-model approach to intermediate-mass nuclei

    Science.gov (United States)

    Tobin, G. K.; Ferriss, M. C.; Launey, K. D.; Dytrych, T.; Draayer, J. P.; Dreyfuss, A. C.; Bahri, C.

    2014-03-01

    We present a microscopic description of nuclei in the intermediate-mass region, including the proximity to the proton drip line, based on a no-core shell model with a schematic many-nucleon long-range interaction with no parameter adjustments. The outcome confirms the essential role played by the symplectic symmetry to inform the interaction and the winnowing of shell-model spaces. We show that it is imperative that model spaces be expanded well beyond the current limits up through 15 major shells to accommodate particle excitations, which appear critical to highly deformed spatial structures and the convergence of associated observables.

  10. A comparison of labeling and label-free mass spectrometry-based proteomics approaches.

    Science.gov (United States)

    Patel, Vibhuti J; Thalassinos, Konstantinos; Slade, Susan E; Connolly, Joanne B; Crombie, Andrew; Murrell, J Colin; Scrivens, James H

    2009-07-01

    The proteome of the recently discovered bacterium Methylocella silvestris has been characterized using three profiling and comparative proteomics approaches. The organism was grown on two different substrates, enabling variations in protein expression to be identified. The results obtained using the experimental approaches have been compared with respect to the number of proteins identified, confidence in identification, sequence coverage and agreement of regulated proteins. The sample preparation, instrument time and sample loading requirements of the differing experiments are compared and discussed. A preliminary screen of the protein regulation results for biological significance has also been performed.

  11. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  12. Modeling the effect of levothyroxine therapy on bone mass density in postmenopausal women: a different approach leads to new inference

    Directory of Open Access Journals (Sweden)

    Tavangar Seyed

    2007-06-01

    Background: The diagnosis, treatment and prevention of osteoporosis is a national health emergency. Osteoporosis quietly progresses without symptoms until late-stage complications occur. Older patients are more commonly at risk of fractures due to osteoporosis. The fracture risk increases when suppressive doses of levothyroxine are administered, especially in postmenopausal women. The question is: "When should bone mass density be tested in postmenopausal women after the initiation of suppressive levothyroxine therapy?" Standard guidelines for the prevention of osteoporosis suggest that follow-up be done in 1 to 2 years. We were interested in predicting the level of bone mass density in postmenopausal women after the initiation of suppressive levothyroxine therapy with a novel approach. Methods: The study used data from the literature on the influence of exogenous thyroid hormones on bone mass density. Four cubic polynomial equations were obtained by curve fitting for Ward's triangle, trochanter, spine and femoral neck. The behaviors of the models were investigated by statistical and mathematical analyses. Results: There are four points of inflexion on the graphs of the first derivatives of the equations with respect to time, at about 6, 5, 7 and 5 months. In other words, there is a maximum speed of bone loss around the 6th month after the start of suppressive L-thyroxine therapy in postmenopausal women. Conclusion: It seems reasonable to check bone mass density at the 6th month of therapy. More research is needed to explain the cause and to confirm the clinical application of this phenomenon for osteoporosis, but such an approach can be used as a guide to future experimentation. The investigation of change over time may lead to more sophisticated decision making in a wide variety of clinical problems.
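
    The "maximum speed of bone loss" is the extremum of the fitted cubic's first derivative, i.e. the root of its second derivative. A minimal sketch with hypothetical coefficients (not the paper's fitted values) chosen so that the extremum lands at 6 months:

```python
import numpy as np

# Hypothetical cubic fit of bone mass density vs. months of therapy:
# bmd(t) = a*t^3 + b*t^2 + c*t + d  (coefficients are illustrative only)
coeffs = np.array([0.002, -0.036, 0.06, 100.0])

p = np.poly1d(coeffs)
rate = p.deriv(1)    # speed of BMD change
accel = p.deriv(2)   # its derivative; zero at the inflection point

# The fastest bone loss occurs where the rate is extremal, i.e. accel == 0.
t_star = accel.roots
print(t_star, rate(t_star))  # -> 6.0 months, negative rate (bone loss)
```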

  13. Derivation of mass relations for composite W* and Z* from effective Lagrangian approach

    International Nuclear Information System (INIS)

    Yasue, Masaki; Oneda, Sadao.

    1985-04-01

    In an effective-Lagrangian model with gauge bosons (W, Z, γ) and their neighboring spin J=1 composites (W*, Z*), we find relations among their masses m_W, m_Z, m_W* and m_Z*: m_W m_W* = cos θ · m_Z m_Z* (as a generalization of m_W = cos θ · m_Z) and m_W^2 + m_W*^2 + tan^2 θ · m_W0^2 = m_Z^2 + m_Z*^2, with m_W0 being the mass of W in the standard model, provided that the system respects the SU(2)_L × U(1)_Y symmetry. W* and Z* are taken as the lowest-lying excited states belonging to an SU(2)_L triplet in the symmetric limit. The existence of W* coupling to the V-A current modifies the relation between G_F and m_W, and that of Z* generates a new interaction of the (J^em)^2 type as well as the deviation of sin θ_W observed at low energies from the mixing angle sin θ in neutral-current interactions. (author)

  14. New approach to the characterization of pyrolysis coal products by gas chromatography mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Cappiello, A.; Mangani, F.; Bruner, F.; Bonfanti, L. [University of Urbino, Urbino (Italy)

    1996-06-07

    A method for the characterization of coal thermal behaviour, based on gas chromatographic-mass spectrometric analysis of the pyrolysate, is presented. Twelve different coal samples, representative of the entire coal rank, were selected. The pyrolysis products, obtained at 800°C, were first collected and then analysed in two GC-MS systems. The sampling apparatus consisted of three different traps in order to separate the products into three fractions on the basis of their volatility. The GC-MS analysis was also arranged according to this criterion. A packed column, coupled to a double-focusing magnetic mass spectrometer, was used for the volatile fractions of the pyrolysate, and a capillary column, coupled to a quadrupole analyser, was employed for the analysis of the condensed fraction. Sampling and analysis procedures were carried out separately, thus allowing careful optimization of the strategy for the characterization of the pyrolysate. The condensate was analysed in selected-ion monitoring mode for the determination of different classes of compounds. Some evaluations and comparisons, extrapolated from the results obtained, are presented.

  15. Mass Spectrometric Approaches to the Identification of Potential Ingredients in Cigarette Smoke Causing Cytotoxicity.

    Science.gov (United States)

    Horiyama, Shizuyo; Kunitomo, Masaru; Yoshikawa, Noriko; Nakamura, Kazuki

    2016-01-01

    Cigarette smoke contains many harmful chemicals that contribute to the pathogenesis of smoking-related diseases such as chronic obstructive pulmonary disease, cancer, and cardiovascular disease. Many studies have been done to identify cytotoxic chemicals in cigarette smoke and to elucidate the onset of the above-mentioned diseases caused by smoking. However, definitive mechanisms for cigarette smoke toxicity remain unknown. As candidate cytotoxic chemicals, we have recently found methyl vinyl ketone (MVK) and acetic anhydride in nicotine/tar-free cigarette smoke extract (CSE) using L-tyrosine (Tyr), an amino acid with a highly reactive hydroxyl group. The presence of MVK and acetic anhydride in CSE was confirmed by gas chromatography-mass spectrometry (GC/MS). We also found, using LC/MS, new reaction products formed in B16-BL6 mouse melanoma (B16-BL6) cells treated with CSE. These were identified as glutathione (GSH) conjugates of the α,β-unsaturated carbonyl compounds MVK, crotonaldehyde (CA), and acrolein (ACR) from the mass values and product-ion spectra of these new products. ACR and MVK are type-2 alkenes, well known as electron acceptors that form Michael-type adducts with the nucleophilic side chains of amino acids on peptides. These α,β-unsaturated carbonyl compounds may play a key role in CSE-induced cell death.

  16. Estimating Regional Mass Balance of Himalayan Glaciers Using Hexagon Imagery: An Automated Approach

    Science.gov (United States)

    Maurer, J. M.; Rupper, S.

    2013-12-01

    Currently there is much uncertainty regarding the present and future state of Himalayan glaciers, which supply meltwater for river systems vital to more than 1.4 billion people living throughout Asia. Previous assessments of regional glacier mass balance in the Himalayas using various remote sensing and field-based methods give inconsistent results, and most assessments cover relatively short (e.g., single decade) timescales. This study aims to quantify multi-decadal changes in volume and extent of Himalayan glaciers through efficient use of the large database of declassified 1970-80s era Hexagon stereo imagery. Automation of the DEM extraction process provides an effective workflow in which many images can be processed and glacier elevation changes quantified with minimal user input. The tedious procedure of manual ground control point selection necessary for block-bundle adjustment (as ephemeris data are not available for the declassified images) is automated using the Maximally Stable Extremal Regions algorithm, which matches image elements between raw Hexagon images and georeferenced 15-m panchromatic Landsat images. Additional automated Hexagon DEM processing, co-registration, and bias correction allow for direct comparison with modern ASTER and SRTM elevation data, thus quantifying glacier elevation and area changes over several decades across largely inaccessible mountainous regions. As consistent methodology is used for all glaciers, the results will likely reveal significant spatial and temporal patterns in regional ice mass balance. Ultimately, these findings could have important implications for future water resource management in light of environmental change.

  17. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry

    NARCIS (Netherlands)

    Barcaru, A.; Mol, H.G.J.; Tienstra, M.; Vivó-Truyols, G.

    2017-01-01

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number

  18. Approaches towards the automated interpretation and prediction of electrospray tandem mass spectra of non-peptidic combinatorial compounds.

    Science.gov (United States)

    Klagkou, Katerina; Pullen, Frank; Harrison, Mark; Organ, Andy; Firth, Alistair; Langley, G John

    2003-01-01

    Combinatorial chemistry is widely used within the pharmaceutical industry as a means of rapid identification of potential drugs. With the growth of combinatorial libraries, mass spectrometry (MS) became the key analytical technique because of its speed of analysis, sensitivity, accuracy and ability to be coupled with other analytical techniques. In the majority of cases, electrospray mass spectrometry (ES-MS) has become the default ionisation technique. However, due to the absence of fragment ions in the resulting spectra, tandem mass spectrometry (MS/MS) is required to provide structural information for the identification of an unknown analyte. This work discusses the first steps of an investigation into the fragmentation pathways taking place in electrospray tandem mass spectrometry. The ultimate goal for this project is to set general fragmentation rules for non-peptidic, pharmaceutical, combinatorial compounds. As an aid, an artificial intelligence (AI) software package is used to facilitate interpretation of the spectra. This initial study has focused on determining the fragmentation rules for some classes of compound types that fit the remit as outlined above. Based on studies carried out on several combinatorial libraries of these compounds, it was established that different classes of drug molecules follow unique fragmentation pathways. In addition to these general observations, the specific ionisation processes and the fragmentation pathways involved in the electrospray mass spectra of these systems were explored. The ultimate goal will be to incorporate our findings into the computer program and allow identification of an unknown, non-peptidic compound following insertion of its ES-MS/MS spectrum into the AI package. The work herein demonstrates the potential benefit of such an approach in addressing the issue of high-throughput, automated MS/MS data interpretation. Copyright 2003 John Wiley & Sons, Ltd.

  19. Chromatographic, Spectroscopic and Mass Spectrometric Approaches for Exploring the Habitability of Mars in 2012 and Beyond with the Curiosity Rover

    Science.gov (United States)

    Mahaffy, Paul

    2012-01-01

    The Sample Analysis at Mars (SAM) suite of instruments on the Curiosity rover of the Mars Science Laboratory mission is designed to provide chemical and isotopic analysis of organic and inorganic volatiles for both atmospheric and solid samples. The goals of the science investigation enabled by the gas chromatograph mass spectrometer and tunable laser spectrometer instruments of SAM are to work together with the other MSL investigations to quantitatively assess habitability through a series of chemical and geological measurements. We describe the multi-column gas chromatograph system employed on SAM and the approach to extraction and analysis of organic compounds that might be preserved in ancient martian rocks.

  20. Tandem mass spectrometry: a convenient approach in the dosage of steviol glycosides in Stevia sweetened commercial food beverages.

    Science.gov (United States)

    Di Donna, L; Mazzotti, F; Santoro, I; Sindona, G

    2017-05-01

    The use of sweeteners extracted from leaves of the plant species Stevia rebaudiana is increasing worldwide. They are designated as generally recognized as safe (GRAS) by the US FDA and approved by the European Food Safety Authority (EFSA), with some recommendations on the daily dosage, which should not interfere with glucose metabolism. The results presented here introduce an easy analytical approach for the identification and assay of Stevia sweeteners in commercially available soft drinks, based on liquid chromatography coupled to tandem mass spectrometry, using a natural statin-like molecule, brutieridin, as internal standard. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Geochemical modelling of Na-SO4 type groundwater at Palmottu using a mass balance approach

    International Nuclear Information System (INIS)

    Pitkaenen, P.

    1993-01-01

    The mass balance chemical modelling technique has been applied to the groundwaters at the Palmottu analogue study site (in southwestern Finland) for radioactive waste disposal. The geochemical modelling concentrates on the evolution of the Na-SO4 type groundwater, which is spatially connected to the uranium mineralization. The results calculated along an assumed flow path are consistent with available field data and thermodynamic constraints. The results show that essential production of sulphides is unrealistic in the prevailing conditions. The increasing concentrations of Na, SO4 and Cl along the evolution trend seem to have the same source, and they could originate mainly from the leakage of fluid inclusions. Some mixing of relict sea water is also possible.

  2. Global black p-brane world: a new approach to stable mass hierarchy

    International Nuclear Information System (INIS)

    Moon, Sei-Hoon; Rey, Soo-Jong; Kim, Yoonbai

    2001-01-01

    We find a class of extremal black hole-like global p-branes in higher-dimensional gravity with a negative cosmological constant. The region inside the p-brane horizon possesses all essential features required for the Randall-Sundrum type brane world scenario. The set-up allows one to interpret the horizon size as the compactification size, in that the Planck scale M_Pl is determined by the fundamental scale M_* and the horizon size r_H via the familiar relation M_Pl^2 ~ M_*^(2+n) r_H^n, and gravity behaves as expected in a world with n extra dimensions compactified with size r_H. Most importantly, a stable mass hierarchy between M_Pl and M_* can be generated from the topological charge of the p-brane and the horizon size r_H therein. We also offer a new perspective on various issues associated with brane world scenarios, including the cosmological constant problem.

  3. Current mass spectrometry approaches and challenges for the bioanalysis of traditional Chinese medicines.

    Science.gov (United States)

    Dong, Xin; Wang, Rui; Zhou, Xu; Li, Ping; Yang, Hua

    2016-07-15

    Traditional Chinese medicines (TCMs) are gaining more and more attention all over the world. The focus of TCM research has gradually shifted from chemical research to the combination of chemical and life sciences. However, obtaining precise information on the processing of TCMs in vivo or in vitro is still a bottleneck in the bioanalysis of TCMs because of their compositional complexity. This paper reviews recent analytical methods, especially mass spectrometry technology, in the bioanalysis of TCMs, and data processing techniques in the qualitative and quantitative analyses of TCM metabolites. Additionally, the difficulties encountered in analyzing biological samples in TCM research and the solutions to these problems are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. An Interprofessional Approach to Continuing Education With Mass Casualty Simulation: Planning and Execution.

    Science.gov (United States)

    Saber, Deborah A; Strout, Kelley; Caruso, Lisa Swanson; Ingwell-Spolan, Charlene; Koplovsky, Aiden

    2017-10-01

    Many natural and man-made disasters require the assistance from teams of health care professionals. Knowing that continuing education about disaster simulation training is essential to nursing students, nurses, and emergency first responders (e.g., emergency medical technicians, firefighters, police officers), a university in the northeastern United States planned and implemented an interprofessional mass casualty incident (MCI) disaster simulation using the Project Management Body of Knowledge (PMBOK) management framework. The school of nursing and University Volunteer Ambulance Corps (UVAC) worked together to simulate a bus crash with disaster victim actors to provide continued education for community first responders and train nursing students on the MCI process. This article explains the simulation activity, planning process, and achieved outcomes. J Contin Educ Nurs. 2017;48(10):447-453. Copyright 2017, SLACK Incorporated.

  5. Honeybee venom proteome profile of queens and winter bees as determined by a mass spectrometric approach.

    Science.gov (United States)

    Danneels, Ellen L; Van Vaerenbergh, Matthias; Debyser, Griet; Devreese, Bart; de Graaf, Dirk C

    2015-10-30

    Venoms of invertebrates contain an enormous diversity of proteins, peptides, and other classes of substances. Insect venoms are characterized by a large interspecific variation resulting in extended lists of venom compounds. The venom composition of several hymenopterans also shows intraspecific variation. For instance, venom from different honeybee castes, more specifically queens and workers, shows quantitative and qualitative variation, while the environment, such as seasonal changes, also proves to be an important factor. The present study aimed at an in-depth analysis of the intraspecific variation in the honeybee venom proteome. For summer workers, the recent list of venom proteins resulted from merging combinatorial peptide ligand library sample pretreatment and targeted tandem mass spectrometry realized with a Fourier transform ion cyclotron resonance mass spectrometer (FT-ICR MS/MS). Here, the same technique was used to determine the venom proteome of queens and winter bees, enabling a comparison with that of summer bees. In total, 34 putative venom toxins were found, of which two had never been described in honeybee venoms before. Venom from winter workers did not contain toxins that were absent in queens or summer workers, while winter worker venom lacked the allergen Api m 12, also known as vitellogenin. Venom from queen bees, on the other hand, lacked six of the 34 venom toxins compared to worker bees, while it contained two new venom toxins, in particular serine proteinase stubble and antithrombin-III. Although people are hardly ever stung by honeybees during winter or by queen bees, these newly identified toxins should be taken into account in the characterization of a putative allergic response against Apis mellifera stings.

  7. Mass spectrometric approaches for the identification of anthracycline analogs produced by actinobacteria.

    Science.gov (United States)

    Bauermeister, Anelize; Zucchi, Tiago Domingues; Moraes, Luiz Alberto Beraldo

    2016-06-01

    Anthracyclines are a well-known chemical class produced by actinobacteria and used effectively in cancer treatment; however, these compounds are usually produced in small amounts because they are toxic to their producers. In this work, we successfully exploited the versatility of mass spectrometry to detect 18 anthracyclines in a microbial crude extract. From collision-induced dissociation and nuclear magnetic resonance spectra, we proposed structures for five new anthracyclines and identified three more already described in the literature, nocardicyclins A and B and nothramicin. One new compound, 8 (4-[4-(dimethylamino)-5-hydroxy-4,6-dimethyloxan-2-yl]oxy-2,5,7,12-tetrahydroxy-3,10-dimethoxy-2-methyl-3,4-dihydrotetracene-1,6,11-trione), was isolated and had its structure confirmed by 1H nuclear magnetic resonance. The anthracyclines identified in this work contain an interesting aminoglycoside rarely found in natural products, 3-methyl-rhodosamine, and derivatives. This encouraged us to develop a focused method to identify compounds bearing these aminoglycosides (rhodosamine, m/z 158; 3-methyl-rhodosamine, m/z 172; 4'-O-acetyl-3-C-methyl-rhodosamine, m/z 214). This method allowed the detection of four more anthracyclines and can also be applied in the search for these aminoglycosides in other microbial crude extracts. Additionally, a DNA-binding study by mass spectrometry showed that nocardicyclin A, nothramicin and compound 8 are able to interact with DNA, indicating their potential as anticancer drugs. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach

    Science.gov (United States)

    Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan

    2017-05-01

    The regular dynamical response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type, governed by a set of differential-algebraic equations (DAEs), is studied. The classical multiple scales method (MSM) in the time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e. with one and two degrees of freedom. The approximate analytical solutions have been verified by numerical simulations.

  9. Challenging posterior mediastinal mass resection via a minimally invasive approach with neurological monitoring.

    Science.gov (United States)

    Smail, Hassiba; Baste, Jean Marc; Melki, Jean; Peillon, Christophe

    2013-02-01

    We report a novel surgical strategy for the resection of a rare type of posterior mediastinal tumour in a young patient. A melanotic schwannoma arose from the left thoracic sympathetic chain, adjacent to the origin of the artery of Adamkiewicz. Successful excision of this tumour via a minimally invasive approach without arterial or spinal cord injury was possible with the aid of neurological monitoring using spinal-evoked potentials.

  10. The influence of parent's body mass index on peer selection: an experimental approach using virtual reality.

    Science.gov (United States)

    Martarelli, Corinna S; Borter, Natalie; Bryjova, Jana; Mast, Fred W; Munsch, Simone

    2015-11-30

    Relatively little is known about the influence of psychosocial factors, such as familial role modeling and social networks, on the development and maintenance of childhood obesity. We investigated peer selection using an immersive virtual reality environment. In a virtual schoolyard, children were confronted with normal-weight and overweight avatars that were either eating or playing. Fifty-seven children aged 7-13 participated. Interpersonal distance to the avatars, the child's BMI, self-perception, eating behavior and parental BMI were assessed. Parental BMI was the strongest predictor of the children's minimal distance to the avatars. Specifically, a higher maternal BMI was associated with greater interpersonal distance, and these children approached overweight eating avatars more closely. A higher paternal BMI was associated with a lower interpersonal distance to the avatars; these children approached normal-weight playing and overweight eating avatar peers most closely. The importance of parental BMI for the child's social approach/avoidance behavior can be explained through social modeling mechanisms. The differential effects of paternal and maternal BMI might be due to gender-specific beauty ideals. Interventions to promote social interaction with peer groups could foster weight stabilization or weight loss in children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Vitroprocines, new antibiotics against Acinetobacter baumannii, discovered from marine Vibrio sp. QWI-06 using mass-spectrometry-based metabolomics approach

    Science.gov (United States)

    Liaw, Chih-Chuang; Chen, Pei-Chin; Shih, Chao-Jen; Tseng, Sung-Pin; Lai, Ying-Mi; Hsu, Chi-Hsin; Dorrestein, Pieter C.; Yang, Yu-Liang

    2015-08-01

    A robust and convenient research strategy integrating state-of-the-art analytical techniques is needed to efficiently discover novel compounds from marine microbial resources. In this study, we identified a series of amino-polyketide derivatives, vitroprocines A-J, from the marine bacterium Vibrio sp. QWI-06 by an integrated approach using imaging mass spectrometry and molecular networking, as well as conventional bioactivity-guided fractionation and isolation. The structure-activity relationship of vitroprocines against Acinetobacter baumannii is proposed. In addition, feeding experiments with 13C-labeled precursors indicated that a pyridoxal 5'-phosphate-dependent mechanism is involved in the biosynthesis of vitroprocines. Elucidation of amino-polyketide derivatives from a species of marine bacteria for the first time demonstrates the potential of this integrated metabolomics approach to uncover marine bacterial biodiversity.

  12. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  13. Estimating the ice thickness of mountain glaciers with an inverse approach using surface topography and mass-balance

    International Nuclear Information System (INIS)

    Michel, Laurent; Picasso, Marco; Farinotti, Daniel; Bauder, Andreas; Funk, Martin; Blatter, Heinz

    2013-01-01

    We present a numerical method to estimate the ice thickness distribution within a two-dimensional, non-sliding mountain glacier, given a transient surface geometry and a mass-balance distribution, both of which are relatively easy to obtain for a large number of glaciers. The inverse approach is based on the shallow ice approximation (SIA) of ice flow and requires neither filtering of the surface topography with a lower slope limit nor the approximation of constant basal shear stress. We first address this problem for a steady-state surface geometry. Next, we use an apparent surface mass-balance description that makes the transient evolution quasi-stationary. Then, we employ a more elaborate fixed-point method in which the bedrock solution is iteratively obtained by adding the difference between the computed and known surface geometries at the end of the considered time interval. In a sensitivity study, we show that the procedure is much more susceptible to small perturbations in surface geometry than in mass balance. Finally, we present preliminary results for bed elevations in three space dimensions. (paper)
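
    The fixed-point method described, shifting the bed by the mismatch between computed and observed surfaces, has a compact structure. In the sketch below the SIA ice-flow solver is replaced by a contrived stand-in chosen so that the iteration contracts; in the real method the contraction comes from the ice-flow dynamics, not from this toy formula:

```python
import numpy as np

def invert_bedrock(surface_obs, forward_model, bed_init, n_iter=25):
    """Fixed-point iteration sketched from the description above: shift the
    bed by the difference between the modelled and observed surfaces."""
    bed = bed_init.copy()
    for _ in range(n_iter):
        bed += forward_model(bed) - surface_obs
    return bed

# Contrived stand-in for the SIA forward model: thickness responds more than
# one-for-one to bed changes, so the modelled surface drops when the bed rises.
def toy_forward(bed, h0=200.0, b0=1000.0):
    thickness = np.maximum(0.0, h0 - 1.5 * (bed - b0))
    return bed + thickness

x = np.linspace(0.0, 10_000.0, 101)
bed_true = 1000.0 + 50.0 * np.sin(x / 1500.0)
surface_obs = toy_forward(bed_true)

bed_est = invert_bedrock(surface_obs, toy_forward, bed_init=np.full_like(x, 1000.0))
print(np.max(np.abs(bed_est - bed_true)))  # -> ~0 (converged)
```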

  14. The mass-action law based algorithm for cost-effective approach for cancer drug discovery and development.

    Science.gov (United States)

    Chou, Ting-Chao

    2011-01-01

    The mass-action law based system analysis via mathematical induction and deduction leads to a generalized theory and algorithm that allows computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. After using the median-effect principle as the common denominator, its applications are mechanism-independent, drug unit-independent, and dynamic order-independent, and can be used generally for single drug analysis or for multiple drug combinations in constant or non-constant ratios. Since the "median" is the common link and universal reference point in biological systems, this generality enables computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect on and improve the conventional divergent approach, and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development.
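
    The median-effect equation at the heart of this algorithm, fa/(1-fa) = (D/Dm)^m, becomes a straight line after a logit/log transform, so its two parameters can be estimated by simple regression. A minimal sketch with hypothetical dose-effect data:

    ```python
    import numpy as np

    dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # hypothetical doses
    fa   = np.array([0.12, 0.28, 0.52, 0.75, 0.90])   # fraction affected

    # Median-effect equation: fa/(1-fa) = (D/Dm)^m
    # Linear form: log(fa/(1-fa)) = m*log(D) - m*log(Dm)
    y = np.log10(fa / (1.0 - fa))
    x = np.log10(dose)
    m, intercept = np.polyfit(x, y, 1)
    Dm = 10 ** (-intercept / m)          # dose giving a 50% effect

    # Dose required for any effect level, e.g. 90%:
    D90 = Dm * (0.9 / 0.1) ** (1.0 / m)
    print(f"m = {m:.2f}, Dm = {Dm:.2f}, D90 = {D90:.2f}")
    ```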

  15. Body-mass or sex-biased tick parasitism in roe deer (Capreolus capreolus)? A GAMLSS approach.

    Science.gov (United States)

    Kiffner, C; Lödige, C; Alings, M; Vor, T; Rühe, F

    2011-03-01

    Macroparasites feeding on wildlife hosts follow skewed distributions for which basic statistical approaches are of limited use. To predict Ixodes spp. tick burden on roe deer, we applied Generalized Additive Models for Location, Scale and Shape (GAMLSS), which allow a variable dispersion to be incorporated. We analysed the tick burdens of 78 roe deer, sampled in a forest region of Germany over a period of 20 months. Assuming a negative binomial error distribution and controlling for ambient temperature, we analysed whether host sex and body mass affected individual tick burdens. Models for larval and nymphal tick burden included host sex, with male hosts being more heavily infested than female ones. However, the influence of host sex on immature tick burden was associated with wide standard errors (nymphs), or the factor was only marginally significant (larvae). Adult tick burden was positively correlated with host body mass. Thus, controlling for host body mass and ambient temperature, there is weak support for sex-biased parasitism in this system. Compared with models which assume linear relationships, GAMLSS provided a better fit. Adding a variable dispersion term improved only one of the four models. Yet, the potential of modelling dispersion as a function of variables appears promising for larger datasets. © 2010 The Authors. Medical and Veterinary Entomology © 2010 The Royal Entomological Society.
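
    As a rough Python analogue of such a model, the sketch below fits a negative binomial GLM for the mean burden with statsmodels; unlike a full GAMLSS (usually fitted with the R gamlss package), it keeps the dispersion fixed instead of modelling it as a function of covariates. All data values are invented:

    ```python
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical tick counts per deer with host and weather covariates
    df = pd.DataFrame({
        "ticks":     [3, 15, 0, 42, 7, 1, 23, 5],
        "body_mass": [14.1, 17.3, 12.8, 19.0, 15.5, 13.2, 18.1, 14.9],  # kg
        "sex":       ["f", "m", "f", "m", "m", "f", "m", "f"],
        "temp":      [8.2, 12.5, 5.1, 15.0, 9.9, 4.3, 13.7, 7.6],       # deg C
    })

    # Negative binomial error distribution for the skewed counts
    model = smf.glm("ticks ~ body_mass + sex + temp", data=df,
                    family=sm.families.NegativeBinomial(alpha=1.0))
    print(model.fit().summary())
    ```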

  16. Analytical Approach for Estimating Preliminary Mass of ARES I Crew Launch Vehicle Upper Stage Structural Components

    Science.gov (United States)

    Aggarwal, Pravin

    2007-01-01

    electrical power functions to other Elements of the CLV, is included as secondary structure. MSFC has overall responsibility for the integrated US element as well as the structural design and thermal control of the fuel tanks, intertank, interstage, avionics, main propulsion system, and Reaction Control System (RCS) for both the Upper Stage and the First Stage. MSFC's Spacecraft and Vehicle Department, Structural and Analysis Design Division is developing a set of predicted masses for these elements. This paper details the methodology, criteria and tools used for the preliminary mass predictions of the upper stage structural assembly components. In general, the weights of the cylindrical barrel sections are estimated using the commercial code HyperSizer, whereas the weights of the domes are developed using classical solutions. HyperSizer is software that performs automated structural analysis and sizing optimization based on aerospace methods for strength, stability, and stiffness. Analysis methods range from closed-form, traditional hand calculations repeated every day in industry to more advanced panel buckling algorithms. Margin-of-safety reporting for every potential failure provides the engineer with powerful insight into the structural problem. Optimization capabilities include finding minimum-weight panel or beam concepts, material selections, cross-sectional dimensions, thicknesses, and lay-ups from a library of 40 different stiffened and sandwich designs and a database of composite, metallic, honeycomb, and foam materials. Multiple concepts (orthogrid, isogrid, and skin-stiffener) were run for multiple loading combinations of ascent design load with and without tank pressure, as well as the proof pressure condition. Subsequently, the selected optimized concept obtained from the HyperSizer runs was translated into a computer-aided design (CAD) model to account for wall thickness tolerances, weld lands, etc., in developing the most probable weight of the components. The flow diagram

  17. Mass transfer simulation of nanofiltration membranes for electrolyte solutions through generalized Maxwell-Stefan approach

    International Nuclear Information System (INIS)

    Hoshyargar, Vahid; Fadaei, Farzad; Ashrafizadeh, Seyed Nezameddin

    2015-01-01

    A comprehensive mathematical model is developed for the simulation of ion transport through nanofiltration membranes. The model is based on the Maxwell-Stefan approach and takes into account steric, Donnan, and dielectric effects in the transport of mono- and divalent ions. Theoretical ion rejection for multi-electrolyte mixtures was obtained by numerically solving the 'hindered transport' equations based on the generalized Maxwell-Stefan equation for the flux of ions. A computer simulation was developed to predict transport in the nanofiltration range; a numerical procedure was developed for the linearized and discretized form of the governing equations, and the finite volume method was employed for their numerical solution. The developed numerical method is capable of solving the equations for multicomponent systems of n species regardless of the stiffness of the system. The model findings were compared and verified against experimental data from the literature for the two systems Na2SO4 + NaCl and MgCl2 + NaCl. The comparison showed good agreement at different concentrations. As such, the model is capable of predicting the rejection of different ions at various concentrations. The advantage of such a model is saving costs by minimizing the number of required experiments, while being closer to a realistic situation since the adsorption of ions has been taken into account. Using this model, the permeate flux and rejections of multi-component liquid feeds can be calculated as a function of membrane properties. This simulation tool attempts to fill the gap in methods used for predicting nanofiltration and optimizing the performance of charged nanofilters through the generalized Maxwell-Stefan (GMS) approach. The application of the current model may narrow this gap, which has arisen due to the complexity of the fundamentals of ion transport processes via this approach, and may further facilitate the industrial development of
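
    A heavily reduced sketch of the Maxwell-Stefan flux calculation is given below; it covers only the neutral, isothermal case and omits the steric, Donnan, and dielectric terms of the full nanofiltration model. The closure choice (zero solvent flux) and all values are illustrative:

    ```python
    import numpy as np

    def ms_fluxes(x, grad_x, D, c_t):
        """Solve the Maxwell-Stefan relations for molar fluxes N (sketch).

        grad_x_i = sum_j (x_i*N_j - x_j*N_i) / (c_t*D_ij); the last species
        (solvent) is taken as reference with N[-1] = 0.
        """
        n = len(x)
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    A[i, j] += x[i] / (c_t * D[i, j])
                    A[i, i] -= x[j] / (c_t * D[i, j])
        A[-1, :] = 0.0            # replace the dependent equation...
        A[-1, -1] = 1.0           # ...by the closure N_solvent = 0
        b = np.append(grad_x[:-1], 0.0)
        return np.linalg.solve(A, b)

    # Ternary example (two ions + water); numbers are purely illustrative
    x = np.array([0.02, 0.02, 0.96])
    grad_x = np.array([-0.5, -0.5, 1.0])        # mole-fraction gradients, 1/m
    D = np.full((3, 3), 1.0e-9)                 # MS diffusivities, m^2/s
    print(ms_fluxes(x, grad_x, D, c_t=55.5e3))  # fluxes in mol/(m^2 s)
    ```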

  18. A novel approach to signal normalisation in atmospheric pressure ionisation mass spectrometry.

    Science.gov (United States)

    Vogeser, Michael; Kirchhoff, Fabian; Geyer, Roland

    2012-07-01

    The aim of our study was to test an alternative principle of signal normalisation in LC-MS/MS. During analyses, post-column infusion of the target analyte is done via a T-piece, generating an "area under the analyte peak" (AUP). The ratio of peak area to AUP is assessed as the assay response. Acceptable analytical performance of this principle was found for an exemplary analyte. Post-column infusion may allow normalisation of ion suppression without requiring any additional standard compound. This approach can be useful in situations where no appropriate compound is available for classical internal standardisation. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Optimization of information content in a mass spectrometry based flow-chemistry system by investigating different ionization approaches.

    Science.gov (United States)

    Martha, Cornelius T; Hoogendoorn, Jan-Carel; Irth, Hubertus; Niessen, Wilfried M A

    2011-05-15

    Current development in catalyst discovery includes combinatorial synthesis methods for the rapid generation of compound libraries combined with high-throughput performance-screening methods to determine the associated activities. Of these novel methodologies, mass spectrometry (MS) based flow-chemistry methods are especially attractive due to the ability to combine sensitive detection of the formed reaction product with identification of the introduced catalyst complexes. Recently, such a mass spectrometry based continuous-flow reaction detection system was utilized to screen silver-adducted ferrocenyl bidentate catalyst complexes for activity in a multicomponent synthesis of a substituted 2-imidazoline. Here, we determine the merits of different ionization approaches by studying the combination of sensitive detection of product formation in the continuous-flow system with the ability to simultaneously characterize the introduced [ferrocenyl bidentate+Ag](+) catalyst complexes. To this end, we study the ionization characteristics of electrospray ionization (ESI), atmospheric-pressure chemical ionization (APCI), no-discharge APCI, dual ESI/APCI, and dual APCI/no-discharge APCI. Finally, we investigated the application potential of the different ionization approaches by examining the responses of the ferrocenyl bidentate catalyst complexes in different solvents. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacing by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example using 168Er data. 19 figures, 2 tables
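
    The Porter-Thomas correction for missed weak levels can be illustrated in a few lines: normalized reduced widths follow a chi-squared law with one degree of freedom, so a detection threshold implies a computable observed fraction. This is a simplified, hypothetical illustration; practical evaluations iterate, since the mean width itself must be estimated from the truncated sample:

    ```python
    import numpy as np
    from scipy import stats

    E_span = 500.0                     # energy interval analysed (eV), invented
    n_obs = 8                          # resonances actually detected
    d_obs = E_span / n_obs             # naive average spacing

    # Porter-Thomas: y = Gamma/<Gamma> ~ chi2(1). With a detection
    # threshold y_min (in units of the mean width), the fraction of
    # levels observed is the chi2 survival function at y_min.
    y_min = 0.05
    frac_seen = stats.chi2.sf(y_min, df=1)     # ~0.82 here

    d_true = d_obs * frac_seen                 # correct for the missed levels
    print(f"observed D = {d_obs:.1f} eV, corrected D = {d_true:.1f} eV")
    ```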

  1. Comparative mass spectrometry & nuclear magnetic resonance metabolomic approaches for nutraceuticals quality control analysis: a brief review.

    Science.gov (United States)

    Farag, Mohamed A

    2014-01-01

    The number of botanical dietary supplements in the market has recently increased, primarily due to increased health awareness. Standardization and quality control of the constituents of these plant extracts is an important topic, particularly when such ingredients are used long term as dietary supplements, or in cases where higher doses are marketed as drugs. The development of fast, comprehensive, and effective untargeted analytical methods for plant extracts is of high interest. Nuclear magnetic resonance spectroscopy and mass spectrometry are the most informative tools, each of which enables high-throughput and global analysis of hundreds of metabolites in a single step. Although only one of the two techniques is utilized in the majority of plant metabolomics applications, there is a growing interest in combining the data from both platforms to effectively unravel the complexity of plant samples. The application of combined MS and NMR in the quality control of nutraceuticals forms the major part of this review. Finally, I look at the future developments and perspectives of these two technologies for the quality control of herbal materials.

  2. Identification of okadaic acid-induced phosphorylation events by a mass spectrometry approach

    International Nuclear Information System (INIS)

    Hill, Jennifer J.; Callaghan, Deborah A.; Ding Wen; Kelly, John F.; Chakravarthy, Balu R.

    2006-01-01

    Okadaic acid (OA) is a widely used small-molecule phosphatase inhibitor that is thought to selectively inhibit protein phosphatase 2A (PP2A). Multiple studies have demonstrated that PP2A activity is compromised in the brains of Alzheimer's disease patients. Thus, we set out to determine the changes in phosphorylation that occur upon OA treatment of neuronal cells. Utilizing isotope-coded affinity tags and mass spectrometry analysis, we determined the relative abundance of proteins in a phosphoprotein-enriched fraction from control and OA-treated primary cortical neurons. We identified many proteins whose phosphorylation state is regulated by OA, including glycogen synthase kinase 3β, collapsin-response mediator proteins (DRP-2, DPYSL-5, and CRMP-4), and the B subunit of PP2A itself. Most interestingly, we found that complexin 2, an important regulator of neurotransmitter release and synaptic plasticity, is phosphorylated at serine 93 upon OA treatment of neurons. This is the first report of a phosphorylation site on complexin 2.

  3. Normalization Approaches for Removing Systematic Biases Associated with Mass Spectrometry and Label-Free Proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Callister, Stephen J.; Barry, Richard C.; Adkins, Joshua N.; Johnson, Ethan T.; Qian, Weijun; Webb-Robertson, Bobbie-Jo M.; Smith, Richard D.; Lipton, Mary S.

    2006-02-01

    Central tendency, linear regression, locally weighted regression, and quantile techniques were investigated for normalization of peptide abundance measurements obtained from high-throughput liquid chromatography-Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR MS). Arbitrary abundances of peptides were obtained from three sample sets, including a standard protein sample, two Deinococcus radiodurans samples taken from different growth phases, and two mouse striatum samples from control and methamphetamine-stressed mice (strain C57BL/6). The selected normalization techniques were evaluated in both the absence and presence of biological variability by estimating extraneous variability prior to and following normalization. Prior to normalization, replicate runs from each sample set were observed to be statistically different, while following normalization they were no longer statistically different. Although all techniques reduced systematic bias, the ranks assigned among the techniques revealed significant trends. For most LC-FTICR MS analyses, linear regression normalization ranked either first or second among the four techniques, suggesting that this technique is more generally suitable for reducing systematic biases.
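
    A minimal sketch of the linear regression variant: each run's log abundances are regressed against a reference run over their shared peptides, and the fitted systematic bias is removed. The array layout, log transform, and reference choice are mine:

    ```python
    import numpy as np

    def linreg_normalize(log_abund, ref_idx=0):
        """log_abund: (n_peptides, n_runs) log2 abundances, NaN = missing."""
        ref = log_abund[:, ref_idx]
        out = log_abund.copy()
        for j in range(log_abund.shape[1]):
            if j == ref_idx:
                continue
            ok = ~np.isnan(ref) & ~np.isnan(log_abund[:, j])  # shared peptides
            slope, icpt = np.polyfit(ref[ok], log_abund[ok, j], 1)
            # Subtract the fitted bias so run j matches the reference on average
            out[ok, j] = log_abund[ok, j] - (slope * ref[ok] + icpt) + ref[ok]
        return out
    ```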

  4. A Stable-Isotope Mass Spectrometry-Based Metabolic Footprinting Approach to Analyze Exudates from Phytoplankton

    Directory of Open Access Journals (Sweden)

    Mark R. Viant

    2013-10-01

    Phytoplankton exudates play an important role in pelagic ecology and the biogeochemical cycles of elements. Exuded compounds fuel the microbial food web and often encompass bioactive secondary metabolites like sex pheromones, allelochemicals, antibiotics, or feeding attractants that mediate biological interactions. Despite this importance, little is known about the bioactive compounds present in phytoplankton exudates. We report a stable-isotope metabolic footprinting method to characterise exudates from aquatic autotrophs. Exudates from a 13C-enriched alga were concentrated by solid phase extraction and analysed by high-resolution Fourier transform ion cyclotron resonance mass spectrometry. We used the harmful-algal-bloom-forming dinoflagellate Alexandrium tamarense to demonstrate the method. An algorithm was developed to automatically pinpoint just those metabolites with highly 13C-enriched isotope signatures, allowing us to distinguish algal exudates from the complex seawater background. The stable-isotope pattern (SIP) of the detected metabolites then allowed for more accurate assignment to an empirical formula, a critical first step in their identification. This automated workflow provides an effective way to explore the chemical nature of the solutes exuded from phytoplankton cells and will facilitate the discovery of novel dissolved bioactive compounds.
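
    The core of such a pinpointing algorithm can be caricatured as an intensity-ratio test against natural 13C abundance; the function, threshold, and values below are hypothetical simplifications of the published workflow:

    ```python
    NAT_13C = 0.011   # natural 13C abundance per carbon atom (approx.)

    def is_enriched(i_m0, i_m1, n_carbons, factor=10.0):
        """Flag a feature whose M+1 (13C) isotopologue is far more intense
        than natural abundance predicts -- a crude enrichment criterion."""
        expected = n_carbons * NAT_13C     # expected M+1/M intensity ratio
        observed = i_m1 / i_m0
        return observed > factor * expected

    # A labelled metabolite: M+1 at 60% of M for a 12-carbon compound
    print(is_enriched(i_m0=1.0e6, i_m1=6.0e5, n_carbons=12))   # True
    ```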

  5. Stationarity and periodicities of linear speed of coronal mass ejection: a statistical signal processing approach

    Science.gov (United States)

    Chattopadhyay, Anirban; Khondekar, Mofazzal Hossain; Bhattacharjee, Anup Kumar

    2017-09-01

    In this paper we search for periodicities in the linear speed of coronal mass ejections (CMEs) in solar cycle 23. Double exponential smoothing and the discrete wavelet transform are used for detrending and filtering the CME linear-speed time series. To choose an appropriate statistical methodology, the smoothed pseudo Wigner-Ville distribution (SPWVD) is applied beforehand to confirm the non-stationarity of the time series. Time-frequency representation tools, namely the Hilbert-Huang transform and empirical mode decomposition, are then used to unearth the periodicities underlying the non-stationary time series of CME linear speed. Of all the periodicities with more than 95% confidence level, the relevant ones are segregated using an integral peak detection algorithm. The observed periodicities are short, ranging from 2 to 159 days, with relevant periods at 4, 10, 11, 12, 13.7, 14.5 and 21.6 days. These short-range periodicities indicate that the probable origin of CMEs lies in the active longitudes and the magnetic flux network of the Sun. The results also hint at a probable mutual influence and causality with other solar activities (such as solar radio emission, Ap index, and solar wind speed), given the similarity between their periods and the CME linear-speed periods. The periodicities of 4 and 10 days indicate the possible existence of Rossby-type (planetary) waves in the Sun.
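
    A rough sketch of the EMD-plus-Hilbert step is shown below, assuming the third-party PyEMD package (published on PyPI as EMD-signal) and a synthetic stand-in series; the actual analysis additionally involves SPWVD checks, detrending, wavelet filtering, and significance testing:

    ```python
    import numpy as np
    from PyEMD import EMD              # assumes the EMD-signal package
    from scipy.signal import hilbert

    # Synthetic daily CME-speed stand-in with a ~14-day modulation
    t = np.arange(1024.0)                                    # days
    speed = 400 + 50*np.sin(2*np.pi*t/14) + 30*np.random.randn(t.size)

    imfs = EMD()(speed)                # empirical mode decomposition

    # Hilbert spectral analysis of one mid-scale IMF -> dominant period
    imf = imfs[min(2, len(imfs) - 1)]
    phase = np.unwrap(np.angle(hilbert(imf)))
    inst_freq = np.diff(phase) / (2*np.pi)                   # cycles/day
    print("mean period (days):", 1.0 / np.mean(inst_freq))
    ```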

  6. Postmortem interval estimation: a novel approach utilizing gas chromatography/mass spectrometry-based biochemical profiling.

    Science.gov (United States)

    Kaszynski, Richard H; Nishiumi, Shin; Azuma, Takeshi; Yoshida, Masaru; Kondo, Takeshi; Takahashi, Motonori; Asano, Migiwa; Ueno, Yasuhiro

    2016-05-01

    While the molecular mechanisms underlying postmortem change have been exhaustively investigated, the establishment of an objective and reliable means of estimating postmortem interval (PMI) remains an elusive feat. In the present study, we exploit low molecular weight metabolites to estimate postmortem interval in mice. After sacrifice, serum and muscle samples were procured from C57BL/6J mice (n = 52) at seven predetermined postmortem intervals (0, 1, 3, 6, 12, 24, and 48 h). After extraction and isolation, low molecular weight metabolites were measured via gas chromatography/mass spectrometry (GC/MS) and examined via semi-quantification studies. PMI prediction models were then generated for each of the 175 and 163 metabolites identified in muscle and serum, respectively, using a nonlinear least squares curve fitting program. PMI estimation panels for muscle and serum were then constructed, consisting of 17 (9.7%) and 14 (8.5%) of the best PMI biomarkers identified in the muscle and serum profiles, respectively, demonstrating statistically significant correlations between metabolite quantity and PMI. Using a single-blinded assessment, we carried out validation studies on the PMI estimation panels. Mean ± standard deviation for the accuracy of the muscle and serum PMI prediction panels was -0.27 ± 2.88 and -0.89 ± 2.31 h, respectively. Ultimately, these studies elucidate the utility of metabolomic profiling in PMI estimation and pave the way toward biochemical profiling studies involving human samples.
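
    Each single-metabolite model amounts to a nonlinear least-squares fit followed by inversion of the fitted curve to read off a PMI. A hedged sketch using scipy and a hypothetical saturating-exponential form (the study fits its own model shapes):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Semi-quantified abundance of one metabolite at the sampled PMIs (h)
    t = np.array([0, 1, 3, 6, 12, 24, 48.0])
    y = np.array([1.0, 1.3, 1.9, 2.8, 4.1, 5.6, 6.4])   # hypothetical values

    def model(t, a, k, c):
        """One plausible monotone postmortem accumulation curve."""
        return a * (1.0 - np.exp(-k * t)) + c

    (a, k, c), _ = curve_fit(model, t, y, p0=(5.0, 0.1, 1.0))

    # Invert the fitted curve to predict PMI from a new measurement
    y_new = 3.5
    pmi = -np.log(1.0 - (y_new - c) / a) / k
    print(f"estimated PMI: {pmi:.1f} h")
    ```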

  7. A hybrid approach to protein differential expression in mass spectrometry-based proteomics

    KAUST Repository

    Wang, X.

    2012-04-19

    MOTIVATION: Quantitative mass spectrometry-based proteomics involves statistical inference on protein abundance, based on the intensities of each protein's associated spectral peaks. However, typical MS-based proteomics datasets have substantial proportions of missing observations, due at least in part to censoring of low intensities. This complicates intensity-based differential expression analysis. RESULTS: We outline a statistical method for protein differential expression, based on a simple binomial likelihood. By modeling peak intensities as binary, in terms of 'presence/absence', we enable the selection of proteins not typically amenable to quantitative analysis; e.g. 'one-state' proteins that are present in one condition but absent in another. In addition, we present an analysis protocol that combines quantitative and presence/absence analysis of a given dataset in a principled way, resulting in a single list of selected proteins with a single associated false discovery rate. AVAILABILITY: All R code available here: http://www.stat.tamu.edu/~adabney/share/xuan_code.zip.
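
    The presence/absence idea can be illustrated with a per-protein binomial likelihood-ratio test; this is a sketch of the concept only, not the authors' R implementation linked above:

    ```python
    from scipy.stats import binom, chi2

    def presence_lrt(k1, n1, k2, n2):
        """p-value for unequal presence rates of a protein's peaks.

        k1 of n1 runs show the protein in condition 1; k2 of n2 in
        condition 2. Null hypothesis: one common presence probability.
        """
        p1, p2 = k1 / n1, k2 / n2
        p0 = (k1 + k2) / (n1 + n2)                 # pooled rate under the null
        ll_alt = binom.logpmf(k1, n1, p1) + binom.logpmf(k2, n2, p2)
        ll_null = binom.logpmf(k1, n1, p0) + binom.logpmf(k2, n2, p0)
        stat = 2.0 * (ll_alt - ll_null)
        return chi2.sf(stat, df=1)                 # asymptotic p-value

    # A near 'one-state' protein: present in 9/10 runs vs. 1/10 runs
    print(presence_lrt(9, 10, 1, 10))
    ```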

  8. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists of the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale, by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
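
    The distinction between the two averages can be written compactly; a minimal rendering, with the notation mine:

    ```latex
    % Phasic (ensemble) average vs. mass-weighted average of a grain
    % quantity f over N realizations, with n_k grains in the control
    % volume of realization k:
    \begin{align*}
      \langle f \rangle_{\text{phasic}} &= \frac{1}{N}\sum_{k=1}^{N} f_k,
      &
      \langle f \rangle_{\text{mass}} &=
        \frac{\sum_{k=1}^{N} n_k\, f_k}{\sum_{k=1}^{N} n_k}.
    \end{align*}
    % The two coincide when n_k is the same in every realization (the
    % kinetic-theory limit) and diverge once n_k fluctuates.
    ```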

  9. A CD45-based barcoding approach to multiplex mass-cytometry (CyTOF).

    Science.gov (United States)

    Lai, Liyun; Ong, Raymond; Li, Juntao; Albani, Salvatore

    2015-04-01

    CyTOF enables the study of the immune system with a complexity, depth, and multidimensionality never achieved before. However, the full potential of CyTOF can be limited by scarce cell samples. Barcoding strategies based on direct labeling of cells with maleimido-monoamide-DOTA (m-DOTA) provide a very useful tool. However, m-DOTA has some inherent problems, mainly associated with signal intensity, which may be a source of uncertainty when samples are multiplexed. As an alternative or complementary approach to m-DOTA, conjugating an antibody specific for a membrane protein present on most immune cells with different isotopes could address the issues of stability and signal intensity needed for effective barcoding. We chose CD45 for this purpose, and designed experiments to address different types of cultures and the ability to detect extra- and intracellular targets. We show here that our approach provides a useful alternative to m-DOTA in terms of sensitivity, specificity, flexibility, and user-friendliness. Our manuscript provides details to effectively barcode immune cells, overcoming limitations in current technology and enabling the use of CyTOF with scarce samples (for instance, precious clinical samples). © 2015 The Authors. Published by Wiley Periodicals, Inc.

  10. Mass balance modelling of contaminants in river basins: a flexible matrix approach.

    Science.gov (United States)

    Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay

    2005-12-01

    A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n × n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model considers only advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single-segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
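
    The matrix formulation reduces to a small linear solve. Below is a hypothetical sketch in the spirit of the simpler QMX-R model: first-order loss in each reach, connectivity held in a matrix, and all reach loads obtained at once. Every number is invented:

    ```python
    import numpy as np

    # C[i, j] = 1 if reach j discharges into reach i (4-reach example:
    # headwaters 0 and 1 join into reach 2, which feeds reach 3)
    C = np.array([[0, 0, 0, 0],
                  [0, 0, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 0]], dtype=float)

    E   = np.array([5.0, 2.0, 0.0, 1.0])   # direct emissions (kg/day)
    tau = np.array([0.2, 0.3, 0.5, 0.4])   # residence times (days)
    k   = 0.8                              # overall first-order loss (1/day)

    # Load leaving each reach: L = T (C L + E), with T = diag(exp(-k*tau)),
    # i.e. upstream inputs plus local emissions, attenuated in transit.
    T = np.diag(np.exp(-k * tau))
    L = np.linalg.solve(np.eye(4) - T @ C, T @ E)
    print(L)
    ```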

  11. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of determining an appropriate burn-in. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  12. Impacts of invasive earthworms on soil mercury cycling: Two mass balance approaches to an earthworm invasion in a northern Minnesota forest

    Science.gov (United States)

    Sona Psarska; Edward A. Nater; Randy Kolka

    2016-01-01

    Invasive earthworms perturb natural forest ecosystems that initially developed without them, mainly by consuming the forest floor (an organic-rich surficial soil horizon) and by mixing the upper parts of the soil. The fate of mercury (Hg) formerly contained in the forest floor is largely unknown. We used two mass balance approaches (simple mass balance and geochemical...

  13. Meeting Radiation Protection Requirements and Reducing Spacecraft Mass - A Multifunctional Materials Approach

    Science.gov (United States)

    Atwell, William; Koontz, Steve; Reddell, Brandon; Rojdev, Kristina; Franklin, Jennifer

    2010-01-01

    Both crew and radiation-sensitive systems, especially electronics, must be protected from the effects of the space radiation environment. One method of mitigating this radiation exposure is to use passive shielding materials. In previous vehicle designs such as the International Space Station (ISS), materials such as aluminum and polyethylene have been used as parasitic shielding to protect crew and electronics from exposure, but these designs add mass and decrease the amount of usable volume inside the vehicle. Thus, it is of interest to understand whether structural materials can also be designed to provide the radiation shielding capability needed for crew and electronics, while still providing weight savings and increased usable volume when compared against previous vehicle shielding designs. In this paper, we present calculations and analysis using the HZETRN (deterministic) and FLUKA (Monte Carlo) codes to investigate the radiation mitigation properties of these structural shielding materials, which include graded-Z and composite materials. This work is also a follow-on to an earlier paper that compared computational results for three radiation transport codes, HZETRN, HETC, and FLUKA, using the February 1956 solar particle event (SPE) spectrum. In the following analysis, we consider the October 1989 ground level enhancement (GLE) SPE as the input source term, based on the Band function fitting method. Using HZETRN and FLUKA, parametric absorbed doses at the center of a hemispherical structure on the lunar surface are calculated for various thicknesses of graded-Z layups and an all-aluminum structure. The HZETRN and FLUKA calculations are compared and are in reasonable (18% to 27%) agreement, and both codes agree with respect to the predicted shielding material performance trends. The results from both codes are analyzed, and the radiation protection properties and potential weight savings of the various materials and material lay-ups are compared.

  14. Mature forms of the major seed storage albumins in sunflower: A mass spectrometric approach.

    Science.gov (United States)

    Franke, Bastian; Colgrave, Michelle L; Mylne, Joshua S; Rosengren, K Johan

    2016-09-16

    Seed storage albumins are abundant, water-soluble proteins that are degraded to provide critical nutrients for the germinating seedling. It has been established that the sunflower albumins encoded by SEED STORAGE ALBUMIN 2 (SESA2), SESA20 and SESA3 are the major components of the albumin-rich fraction of the common sunflower Helianthus annuus. To determine the structure of sunflower's most important albumins, we performed a detailed chromatographic and mass spectrometric characterization to assess what post-translational processing they receive prior to deposition in the protein storage vacuole. We found that SESA2 and SESA20 each encode two albumins. The first of the two SESA2 albumins (SESA2-1) exists as a monomer of 116 or 117 residues, differing by a threonine at the C-terminus. The second of the two SESA2 albumins (SESA2-2) is a monomer of 128 residues. SESA20 encodes the albumin SESA20-2, which is a 127-residue monomer, whereas SESA20-1 was not abundant enough to be structurally described. SESA3, which has been partly characterized previously, was found in several forms with methylation of its asparagine residues. In contrast to other dicot albumins, which are generally matured into a heterodimer, all the dominant mature sunflower albumins (SESA2, SESA20-2, SESA3 and its post-translationally modified analogue SESA3-a) are monomeric. Sunflower plants have been bred to thrive in various climate zones, making them favored crops to meet the growing worldwide demand for protein. The abundance of seed storage proteins makes them an important source of protein for animal and human nutrition. This study explores the structures of the dominant sunflower napin-type seed storage albumins to understand what structures evolution has favored in the most abundant proteins in the sunflower seed. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  15. Evolution of a Lowland Karst Landscape; A Mass-Balance Approach

    Science.gov (United States)

    Chamberlin, C.; Heffernan, J. B.; Cohen, M. J.; Quintero, C.; Pain, A.

    2016-12-01

    Karst landscapes are highly soluble and are vulnerable to biological acid production as a major driving factor in their evolution. Big Cypress National Park (BICY) is a low-lying karst landscape in southern Florida displaying a distinctive morphology of isolated depressions likely influenced by biology. The goal of this study is to constrain the timescales of landform development in BICY. This question was addressed through the construction of landscape-scale elemental budgets for both calcium and phosphorus. Precipitation and export fluxes were calculated using available chemistry and hydrology data, and stocks were calculated from a combination of existing data, field measurements, and laboratory chemical analysis. Estimates of expected mass export given no biological acid production, and given an equivalent production of 100% of GPP, were compared with observed rates. Current standing stocks of phosphorus are dominated by a large soil pool and contain 500 Gg P. Inputs are largely dominated by precipitation, and 8000 years are necessary to accumulate the standing stocks of phosphorus given modern fluxes. Calcium flux is vastly dominated by dissolution of the limestone bedrock, and though some calcium is retained in the soil, most is exported. Using LiDAR-generated estimates of volume loss across the landscape and current export rates, an estimated 15,000 years would be necessary to create the modern landscape. Both of these estimates indicate that the BICY landscape is geologically very young. The different behaviors of these elements (calcium is largely exported, while phosphorus is largely retained) lend additional confidence to estimates of denudation rates of the landscape. These estimates can be reconciled even more closely if calcium redistribution over the landscape is allowed for. This estimate is compared to the two bounding conditions for biological weathering to indicate the likely importance of biology to landscape development in this system.

  16. Mass renormalization and unconventional pairing in multi-band Fe-based superconductors- a phenomenological approach

    Energy Technology Data Exchange (ETDEWEB)

    Drechsler, S.L.; Efremov, D.; Grinenko, V. [IFW-Dresden (Germany); Johnston, S. [Inst. of Quantum Matter, University of British Coulumbia, Vancouver (Canada); Rosner, H. [MPI-cPfS, Dresden, (Germany); Kikoin, K. [Tel Aviv University (Israel)

    2015-07-01

    Combining DFT calculations of the density of states and plasma frequencies with experimental thermodynamic, optical, ARPES, and dHvA data taken from the literature, we estimate both the high-energy (Coulomb, Hund's rule coupling) and the low-energy (el-boson coupling) electronic mass renormalization [H(L)EMR] for typical Fe-pnictides with Tc < 40 K, focusing on (K,Rb,Cs)Fe2As2, (Ca,Na)122, (Ba,K)122, LiFeAs, and LaFeO1-xFxAs with and without As-vacancies. Using Eliashberg theory we show that these systems cannot be described by a very strong el-boson coupling constant λ ≳ 2, which would be in conflict with the HEMR as seen by DMFT, ARPES and optics. Instead, an intermediate s± coupling regime is realized, based mainly on interband spin fluctuations from one predominant pair of bands. For (Ca,Na)122, there is also a non-negligible intraband el-phonon/orbital-fluctuation contribution. The coexistence of magnetic As-vacancies and a high Tc = 28 K for LaFeO1-xFxAs1-δ excludes an orbital-fluctuation-dominated s++ scenario, at least for that system. In contrast, the line-nodal BaFe2(As,P)2 near the quantum critical point is found to be a superstrongly coupled system. The role of a pseudo-gap is briefly discussed for some of these systems.

  17. Thirty years of precise gravity measurements at Mt. Vesuvius: an approach to detect underground mass movements

    Directory of Open Access Journals (Sweden)

    Giovanna Berrino

    2013-11-01

    Since 1982, high precision gravity measurements have been routinely carried out on Mt. Vesuvius. The gravity network consists of selected sites, most of them coinciding with, or very close to, leveling benchmarks so that the effect of elevation changes can be removed from the gravity variations. The reference station is located in Napoli, outside the volcanic area. Since 1986, absolute gravity measurements have been made periodically at a station on Mt. Vesuvius, close to a permanent gravity station established in 1987, and at the reference in Napoli. The results of the gravity measurements since 1982 are presented and discussed. Moderate short-term gravity changes were generally observed. Over the long term, significant gravity changes occurred and the overall fields displayed well-defined patterns. Several periods of evolution may be recognized. Gravity changes revealed by the relative surveys have been confirmed by repeated absolute measurements, which also confirmed the long-term stability of the reference site. The gravity changes over the recognized periods appear correlated with the seismic crises and with changes of the tidal parameters obtained by continuous measurements. The absence of significant ground deformation implies mass redistribution, essentially density changes without significant volume changes, such as fluid migration at the depth of the seismic foci, i.e. at a few kilometers. The fluid migration may occur through pre-existing geological structures, as also suggested by hydrological studies, and/or through new fractures generated by seismic activity. This interpretation is supported by the analyses of the spatial gravity changes overlapping the most significant and recent seismic crises.

  18. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  19. Migration of antioxidants from polylactic acid films: A parameter estimation approach and an overview of the current mass transfer models.

    Science.gov (United States)

    Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda

    2018-01-01

    Migration studies of chemicals from contact materials have been widely conducted due to their importance in determining the safety and shelf life of a food product in its package. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials, so migration experiments are theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and determine the mass transfer partition and diffusion coefficients governing the migration of eight antioxidants from poly(lactic acid), PLA, based films into water/ethanol solutions at temperatures between 20 and 50°C. Scaled sensitivity coefficients were calculated to assess whether several mass transfer parameters could be estimated simultaneously. An optimal experimental design approach was used to show the importance of properly designing a migration experiment. Additional parameters also provided better insight into the migration of the antioxidants; for example, the partition coefficients could be better estimated using data from the early part of the experiment rather than at the end, so experiments could be conducted for shorter periods of time, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2 and 19 × 10⁻¹⁴ m²/s at ~40°C. The use of a parameter estimation approach provided additional and useful insights about the migration of antioxidants from PLA films. Copyright © 2017 Elsevier Ltd. All rights reserved.
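
    The forward model behind such estimates is typically a Crank-type series for a plane sheet, with D recovered by nonlinear least squares. A hedged sketch with hypothetical data and film thickness, using the infinite-bath form (the paper's models also treat partitioning and finite-bath effects):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    HALF_THICK = 50e-6    # film half-thickness in m (assumed)

    def released_fraction(t, D, n_terms=50):
        """Crank series: fraction migrated from a sheet of thickness 2l."""
        n = np.arange(n_terms)[:, None]
        series = (8.0 / ((2*n + 1)**2 * np.pi**2)
                  * np.exp(-D * (2*n + 1)**2 * np.pi**2 * t
                           / (4 * HALF_THICK**2)))
        return 1.0 - series.sum(axis=0)

    # Hypothetical migration data: time (s) vs. fraction released
    t_data = np.array([0.5, 1, 2, 4, 8, 16, 32]) * 3600.0
    f_data = np.array([0.08, 0.12, 0.17, 0.24, 0.34, 0.47, 0.64])

    (D_fit,), _ = curve_fit(released_fraction, t_data, f_data, p0=[1e-14])
    print(f"D = {D_fit:.2e} m^2/s")
    ```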

  20. Reconciling ocean mass content change based on direct and inverse approaches by utilizing data from GRACE, altimetry and Swarm

    Science.gov (United States)

    Rietbroek, R.; Uebbing, B.; Lück, C.; Kusche, J.

    2017-12-01

    Ocean mass content (OMC) change due to the melting of the ice sheets in Greenland and Antarctica, melting of glaciers, and changes in terrestrial hydrology is a major contributor to present-day sea level rise. Since 2002, the GRACE satellite mission has served as a valuable tool for directly measuring the variations in OMC. As GRACE has almost reached the end of its lifetime, efforts are being made to utilize the Swarm mission for the recovery of low-degree time-variable gravity fields, to bridge a possible gap until the GRACE-FO mission and to fill periods where GRACE data are missing. To this end we compute Swarm monthly normal equations and spherical harmonics that are found to be competitive with other solutions. In addition to directly measuring the OMC, combining GRACE gravity data with altimetry data in a global inversion approach allows the total sea level change to be separated into individual mass-driven and steric contributions. However, published estimates of OMC from the direct and inverse methods differ depending not only on the time window but also on numerous post-processing choices. Here, we look into the sources of such differences between the direct and inverse approaches and evaluate the capabilities of Swarm to derive OMC. Deriving time series of OMC requires several processing steps: choosing a GRACE (and altimetry) product, data coverage, masks and filters to be applied in either the spatial or spectral domain, and corrections related to spatial leakage, GIA and geocenter motion. In this study, we compare and quantify the effects of the different processing choices of the direct and inverse methods. Our preliminary results point to the GIA correction as the major source of difference between the two approaches.

  1. Surrogate analyte approach for quantitation of endogenous NAD+ in human acidified blood samples using liquid chromatography coupled with electrospray ionization tandem mass spectrometry.

    Science.gov (United States)

    Liu, Liling; Cui, Zhiyi; Deng, Yuzhong; Dean, Brian; Hop, Cornelis E C A; Liang, Xiaorong

    2016-02-01

    A high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) assay for the quantitative determination of NAD+ in human whole blood using a surrogate analyte approach was developed and validated. Human whole blood was acidified using 0.5 N perchloric acid at a ratio of 1:3 (v:v, blood:perchloric acid) during sample collection. 25 μL of acidified blood was extracted using a protein precipitation method and the resulting extracts were analyzed using reverse-phase chromatography and positive electrospray ionization mass spectrometry. 13C5-NAD+ was used as the surrogate analyte for the authentic analyte, NAD+. The standard curve, ranging from 0.250 to 25.0 μg/mL in acidified human blood for 13C5-NAD+, was fitted to a 1/x² weighted linear regression model. The LC-MS/MS response between the surrogate analyte and the authentic analyte at the same concentration was obtained before and after the batch run. This response factor was not applied when determining the NAD+ concentration from the 13C5-NAD+ standard curve since the percent difference was less than 5%. The precision and accuracy of the LC-MS/MS assay based on the five analytical QC levels were well within the acceptance criteria of both the FDA and EMA guidance for bioanalytical method validation. The average extraction recovery of 13C5-NAD+ was 94.6% across the curve range. The matrix factor was 0.99 for both high and low QC, indicating minimal ion suppression or enhancement. The validated assay was used to measure the baseline level of NAD+ in 29 male and 21 female human subjects. This assay was also used to study the circadian variation of the endogenous level of NAD+ in 10 human subjects. Copyright © 2015 Elsevier B.V. All rights reserved.
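
    The 1/x² weighted calibration is easy to reproduce. Note that numpy's polyfit applies weights to the residuals before squaring, so a 1/x² weight on the squared residuals corresponds to w = 1/x. All values are hypothetical:

    ```python
    import numpy as np

    conc  = np.array([0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0])   # ug/mL
    ratio = np.array([0.021, 0.043, 0.081, 0.22, 0.41, 0.85, 2.1])

    # 1/x^2 weighted linear regression (w multiplies residuals pre-squaring)
    slope, intercept = np.polyfit(conc, ratio, 1, w=1.0 / conc)

    # Back-calculate an unknown sample from its measured peak-area ratio
    print((0.30 - intercept) / slope, "ug/mL")
    ```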

  2. An integrated approach for determining plutonium mass in spent fuel assemblies with nondestructive assay

    International Nuclear Information System (INIS)

    Swinhoe, Martyn T.; Tobin, Stephen J.; Fensin, Mike L.; Menlove, Howard O.

    2009-01-01

    There are a variety of reasons for quantifying plutonium (Pu) in spent fuel. Below, five motivations are listed: (1) To verify the Pu content of spent fuel without depending on unverified information from the facility, as requested by the IAEA ('independent verification'). New spent fuel measurement techniques have the potential to allow the IAEA to recover continuity of knowledge and to better detect diversion. (2) To assure regulators that all of the nuclear material of interest leaving a nuclear facility actually arrives at another nuclear facility ('shipper/receiver'). Given the large stockpile of nuclear fuel at reactor sites around the world, it is clear that in the coming decades, spent fuel will need to be moved to either reprocessing facilities or storage sites. Safeguarding this transportation is of significant interest. (3) To quantify the Pu in spent fuel that is not considered 'self-protecting.' Fuel is considered self-protecting by some regulatory bodies when the dose that the fuel emits is above a given level. If the fuel is not self-protecting, then the Pu content of the fuel needs to be determined and the Pu mass recorded in the facility's accounting system. This subject area is of particular interest to facilities that have research-reactor spent fuel or old light-water reactor (LWR) fuel. It is also of interest to regulators considering changing the level at which fuel is considered self-protecting. (4) To determine the input accountability value at an electrochemical processing facility. It is not expected that an electrochemical reprocessing facility will have an input accountability tank, as is typical in an aqueous reprocessing facility. As such, one possible means of determining the input accountability value is to measure the Pu content in the spent fuel that arrives at the facility. (5) To fully understand the composition of the fuel in order to efficiently and safely pack spent fuel into a long-term repository. The NDA of spent fuel can

  3. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time-varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high-density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large- or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large-area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large- and small-area phase defects. It identifies and rejects phase maps containing large-area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters that tune the rejection criteria for bad data; however, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
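
    A toy version of defect-tolerant averaging can be written as median-referenced sigma clipping with whole-map rejection; this illustrates the idea only (the published algorithm also removes alignment drift, which is omitted here), and the thresholds are arbitrary:

    ```python
    import numpy as np

    def robust_phase_average(maps, n_sigma=3.0, bad_frac=0.05):
        """maps: (n_maps, ny, nx) phase stack; NaN marks voids."""
        median_map = np.nanmedian(maps, axis=0)
        resid = maps - median_map                    # remove the common shape
        outlier = np.abs(resid) > n_sigma * np.nanstd(resid)
        # Drop whole maps dominated by defects (e.g. unwrapping artifacts)
        frac_bad = np.mean(outlier | np.isnan(maps), axis=(1, 2))
        good = np.where(outlier, np.nan, maps)[frac_bad < bad_frac]
        # Pixel-wise mean and variability from the surviving data
        return np.nanmean(good, axis=0), np.nanstd(good, axis=0)
    ```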

  4. A Multi-Step Approach to Assessing LIGO Test Mass Coatings

    Science.gov (United States)

    Glover, Lamar; Goff, Michael; Linker, Seth; Neilson, Joshua; Patel, Jignesh; Pinto, Innocenzo; Principe, Maria; Villarama, Ethan; Arriaga, Eddy; Barragan, Erik; Chao, Shiuh; Daneshgaran, Lara; DeSalvo, Riccardo; Do, Eric; Fajardo, Cameron

    2018-02-01

    Photographs of the LIGO Gravitational Wave detector mirrors illuminated by the standing beam were analyzed with an astronomical software tool designed to identify stars within images, which extracted hundreds of thousands of point-like scatterers uniformly distributed across the mirror surface, likely distributed through the depth of the coating layers. The sheer number of the observed scatterers implies a fundamental, thermodynamic origin during deposition or processing. If identified as crystallites, these scatterers would be a possible source of the mirror dissipation and thermal noise, which limit the sensitivity of observatories to Gravitational Waves. In order to learn more about the composition and location of the detected scatterers, a feasibility study is underway to develop a method that determines the location of the scatterers by producing a complete mapping of scatterers within test samples, including their depth distribution, optical amplitude distribution, and lateral distribution. Also, research is underway to accurately identify future materials and/or coating methods that possess the largest possible mechanical quality factor (Q). Current efforts propose a new experimental approach that will more precisely measure the Q of coatings by depositing them onto 100 nm Silicon Nitride membranes.

  5. A review of research on cytological approach in salivary gland masses

    Directory of Open Access Journals (Sweden)

    Arvind Babu Rajendra Santosh

    2018-01-01

    To evaluate the diagnostic accuracy of fine-needle aspiration (FNA) in salivary gland pathologies, a comprehensive literature search was conducted in the PubMed database using the related Medical Subject Heading terms "sensitivity and specificity of FNA in salivary gland" and "diagnostic accuracy of FNA in salivary gland" for the period 1980-2016; we found that 414 research studies had been published. The PRISMA methodology was used to prepare a flow chart displaying the data search strategy. A total of 385 articles were excluded based on the established inclusion and exclusion criteria of the study. Twenty-nine research studies were included. Those twenty-nine studies on the sensitivity and specificity of FNA in salivary gland pathology comprised 5274 cases of benign, malignant and inflammatory salivary gland lesions. The present study identified a range of 87%-100% sensitivity and 90%-100% specificity for the usefulness of FNA in distinguishing benign and malignant salivary gland lesions. Although a considerable number of studies have reported on the sensitivity and specificity of FNA in salivary gland pathologies, each study took a different approach to reporting them. We emphasize that standardized reporting protocols for sensitivity and specificity, supported with checklists, would help future researchers interpret this cytological method and make more accurate reports on its clinical utility and usefulness in salivary gland pathologies.

  6. A new insert sample approach to paper spray mass spectrometry: a paper substrate with paraffin barriers.

    Science.gov (United States)

    Colletes, T C; Garcia, P T; Campanha, R B; Abdelnur, P V; Romão, W; Coltro, W K T; Vaz, B G

    2016-03-07

    The analytical performance of paper spray (PS) using a new insert sample approach based on paper with paraffin barriers (PS-PB) is presented. The paraffin barrier is made using a simple, fast and cheap method based on the stamping of paraffin onto a paper surface. Typical operating conditions of paper spray, such as the solvent volume applied to the paper surface and the paper substrate type, are evaluated. A paper substrate with paraffin barriers shows better performance in the analysis of a range of typical analytes when compared to conventional PS-MS using normal paper (PS-NP) and PS-MS using paper with two rounded corners (PS-RC). PS-PB was applied to detect sugars and their inhibitors in sugarcane bagasse liquors from a second-generation ethanol process. Moreover, PS-PB proved excellent for the quantification of glucose in hydrolysis liquors, with excellent linearity (R² = 0.99) and limits of detection (2.77 mmol L⁻¹) and quantification (9.27 mmol L⁻¹). The results are better than for PS-NP and PS-RC. PS-PB also performed well when compared with the HPLC-UV method for glucose quantification in hydrolysis liquor samples.
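
    The reported detection and quantification limits follow the usual calibration-curve estimates (3.3 and 10 times the residual standard deviation divided by the slope). A sketch with hypothetical calibration points:

    ```python
    import numpy as np

    c = np.array([1.0, 5.0, 10.0, 25.0, 50.0])             # mmol/L standards
    s = np.array([210.0, 980.0, 2030.0, 5100.0, 10150.0])  # signal (a.u.)

    slope, intercept = np.polyfit(c, s, 1)
    resid = s - (slope * c + intercept)
    sd = np.sqrt(np.sum(resid**2) / (len(c) - 2))   # residual std. deviation

    lod = 3.3 * sd / slope
    loq = 10.0 * sd / slope
    print(f"LOD = {lod:.2f} mmol/L, LOQ = {loq:.2f} mmol/L")
    ```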

  7. Comparison of surface mass balance of ice sheets simulated by positive-degree-day method and energy balance approach

    Directory of Open Access Journals (Sweden)

    E. Bauer

    2017-07-01

    Glacial cycles of the late Quaternary are controlled by the asymmetrically varying mass balance of continental ice sheets in the Northern Hemisphere. Surface mass balance is governed by processes of ablation and accumulation. Here two ablation schemes, the positive-degree-day (PDD) method and the surface energy balance (SEB) approach, are compared in transient simulations of the last glacial cycle with the Earth system model of intermediate complexity CLIMBER-2. The standard version of the CLIMBER-2 model incorporates the SEB approach and simulates ice volume variations in reasonable agreement with paleoclimate reconstructions during the entire last glacial cycle. Using results from the standard CLIMBER-2 model version, we simulated ablation with the PDD method in offline mode by applying different combinations of three empirical parameters of the PDD scheme. We found that none of the parameter combinations allow us to simulate a surface mass balance of the American and European ice sheets that is similar to that obtained with the standard SEB method. The use of constant values for the empirical PDD parameters led either to too much ablation during the first phase of the last glacial cycle or too little ablation during the final phase. We then substituted the standard SEB scheme in CLIMBER-2 with the PDD scheme and performed a suite of fully interactive (online) simulations of the last glacial cycle with different combinations of PDD parameters. The results of these simulations confirmed the results of the offline simulations: no combination of PDD parameters realistically simulates the evolution of the ice sheets during the entire glacial cycle. The use of constant parameter values in the online simulations leads either to a buildup of too much ice volume at the end of the glacial cycle or too little ice volume at the beginning. Even when the model correctly simulates global ice volume at the last glacial maximum (21 ka), it is unable to simulate
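
    For orientation, the PDD method reduces ablation to a sum of positive daily temperatures multiplied by empirical degree-day factors (the scheme's empirical parameters are typically the snow and ice factors plus an assumed temperature variability). A generic sketch with textbook-magnitude values, not CLIMBER-2's calibration:

    ```python
    import numpy as np

    DDF_SNOW = 3.0e-3   # m water equivalent per positive degree-day
    DDF_ICE  = 8.0e-3   # m w.e. per positive degree-day

    def pdd_melt(daily_temp_c, snow_fraction):
        """Annual melt from daily near-surface temperatures (deg C)."""
        pdd = np.sum(np.maximum(daily_temp_c, 0.0))      # degree-day sum
        ddf = snow_fraction * DDF_SNOW + (1 - snow_fraction) * DDF_ICE
        return pdd * ddf

    # Synthetic year: sinusoidal seasonal cycle around -2 deg C
    temps = 5.0 * np.sin(2 * np.pi * np.arange(365) / 365) - 2.0
    print(f"melt: {pdd_melt(temps, snow_fraction=0.6):.2f} m w.e.")
    ```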

  8. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  9. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is the graph filter, a direct analogue of classical filters but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
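
    The ARMA recursion itself is compact enough to sketch. The snippet below implements a first-order member of the family on an assumed graph shift operator (here a small path-graph Laplacian) with coefficients chosen inside the stability region; the published design procedure for matching arbitrary frequency responses is more general than this sketch.

```python
import numpy as np

def arma1_graph_filter(L, x, psi, phi, iters=200):
    """Distributed ARMA(1) recursion y <- psi * L @ y + phi * x.
    For |psi| * ||L|| < 1 it converges to phi * (I - psi L)^(-1) x,
    i.e. a rational graph frequency response phi / (1 - psi * lam)."""
    y = np.zeros_like(x, dtype=float)
    for _ in range(iters):
        y = psi * (L @ y) + phi * x
    return y

# Laplacian of a 4-node path graph (illustrative graph shift operator)
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
x = np.array([1.0, 0.0, 0.0, 0.0])               # input graph signal
y = arma1_graph_filter(L, x, psi=0.2, phi=1.0)
exact = np.linalg.solve(np.eye(4) - 0.2 * L, x)  # closed-form steady state
print(np.allclose(y, exact))                     # True
```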

  10. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  11. Protein biomarkers on tissue as imaged via MALDI mass spectrometry: A systematic approach to study the limits of detection.

    Science.gov (United States)

    van de Ven, Stephanie M W Y; Bemis, Kyle D; Lau, Kenneth; Adusumilli, Ravali; Kota, Uma; Stolowitz, Mark; Vitek, Olga; Mallick, Parag; Gambhir, Sanjiv S

    2016-06-01

    MALDI mass spectrometry imaging (MSI) is emerging as a tool for protein and peptide imaging across tissue sections. Despite extensive study, there does not yet exist a baseline study evaluating the potential capabilities of this technique to detect diverse proteins in tissue sections. In this study, we developed a systematic approach for characterizing MALDI-MSI workflows in terms of limits of detection, coefficients of variation, spatial resolution, and the identification of endogenous tissue proteins. Our goal was to quantify these figures of merit for a number of different proteins and peptides, in order to gain more insight into the feasibility of protein biomarker discovery efforts using this technique. Control proteins and peptides were deposited in serial dilutions on thinly sectioned mouse xenograft tissue. Using our experimental setup, coefficients of variation were […] biomarkers and a new benchmarking strategy that can be used for comparing diverse MALDI-MSI workflows. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
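
    The figures of merit named above can be computed from a serial-dilution calibration in a few lines. The sketch below uses hypothetical intensities and the common ICH-style estimators (LOD = 3.3 σ / slope, LOQ = 10 σ / slope, with σ from the calibration residuals); the study's exact estimators may differ.

```python
import numpy as np

# serial dilution: amount deposited (pmol) vs. replicate MALDI intensities (a.u.)
amount = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
signal = np.array([[12, 14, 11], [55, 60, 52], [118, 110, 125],
                   [540, 560, 530], [1100, 1060, 1150]], dtype=float)

cv = signal.std(axis=1, ddof=1) / signal.mean(axis=1) * 100   # % CV per level
slope, intercept = np.polyfit(amount, signal.mean(axis=1), 1)
resid = signal.mean(axis=1) - (slope * amount + intercept)
sigma = resid.std(ddof=2)                                     # n - 2 fit dof
lod, loq = 3.3 * sigma / slope, 10.0 * sigma / slope
print(f"CVs: {cv.round(1)} %  LOD ≈ {lod:.2f} pmol  LOQ ≈ {loq:.2f} pmol")
```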

  12. Migration of antioxidants from polylactic acid films, a parameter estimation approach: Part I - A model including convective mass transfer coefficient.

    Science.gov (United States)

    Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda

    2018-03-01

    A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion (D), partition (K_p,f) and convective mass transfer (h) coefficients, govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches such as sequential and bootstrap were also performed to acquire a better knowledge of the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
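
    A sketch of the two-step estimation follows. The migration model below is a deliberately simplified lumped stand-in for Crank's series solution (internal diffusive and external convective resistances in series), and the film half-thickness, parameter grids and synthetic data are all hypothetical; only the two-step structure (grid search for initial guesses, then least-squares refinement) mirrors the approach described above.

```python
import numpy as np
from scipy.optimize import least_squares

L_f = 50e-6  # film half-thickness in m (hypothetical)

def migrated_fraction(t, D, K, h):
    # lumped stand-in for Crank's solution: diffusive and convective
    # resistances in series (a simplifying assumption, not the paper's model)
    k = 1.0 / (L_f**2 / (3.0 * D) + K * L_f / h)
    return 1.0 - np.exp(-k * np.asarray(t))

# synthetic "experiment" generated with known parameters plus noise
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 3e5, 20)  # s
m_obs = migrated_fraction(t_obs, 1e-13, 50.0, 1e-7) + 0.01 * rng.normal(size=t_obs.size)

# Step 1: coarse grid search for the (D, K, h) combination minimizing the SSE
candidates = [(D, K, h) for D in np.logspace(-14, -12, 5)
                        for K in np.logspace(0, 3, 5)
                        for h in np.logspace(-8, -6, 5)]
sse = lambda p: float(np.sum((m_obs - migrated_fraction(t_obs, *p)) ** 2))
p0 = min(candidates, key=sse)

# Step 2: ordinary least squares refinement starting from the best grid point
fit = least_squares(lambda p: m_obs - migrated_fraction(t_obs, *p), x0=p0,
                    bounds=([1e-15, 1e-1, 1e-9], [1e-11, 1e4, 1e-5]))
print("estimated D, K_p,f, h:", fit.x)
```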

  13. Determination of Selected Polycyclic Aromatic Compounds in Particulate Matter Samples with Low Mass Loading: An Approach to Test Method Accuracy

    Directory of Open Access Journals (Sweden)

    Susana García-Alonso

    2017-01-01

    A miniaturized analytical procedure to determine selected polycyclic aromatic compounds (PACs) in low mass loadings (<10 mg) of particulate matter (PM) is evaluated. The proposed method is based on a simple sonication/agitation method using small amounts of solvent for extraction. A reduced sample size of particulate matter often limits the quantification of analytes, which creates the need to adapt analytical procedures and to evaluate their performance. The trueness and precision of the proposed method were tested using ambient air samples. Analytical results from the proposed method were compared with those of pressurized liquid and microwave extractions. Selected PACs (polycyclic aromatic hydrocarbons (PAHs) and nitro polycyclic aromatic hydrocarbons (NPAHs)) were determined by liquid chromatography with fluorescence detection (HPLC/FD). Taking results from pressurized liquid extractions as reference values, recovery rates of the sonication/agitation method were over 80% for the most abundant PAHs. Recovery rates of selected NPAHs were lower; enhanced rates were obtained when methanol was used as a modifier. Intermediate precision was estimated by data comparison from two mathematical approaches: normalized difference data and pooled relative deviations. Intermediate precision was in the range of 10–20%. The effectiveness of the proposed method was evaluated in PM aerosol samples collected with very low mass loadings (<0.2 mg) during characterization studies of turbofan engine exhausts.
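
    Of the two precision estimators mentioned, the pooled relative deviation is simple to state in code. The sketch below assumes duplicate determinations per analyte and the standard pooled-from-duplicates formula; the study's exact normalization may differ.

```python
import numpy as np

def pooled_relative_sd(pairs):
    """Pooled relative standard deviation from duplicate determinations:
    s_rel = sqrt(sum(d_i**2) / (2 n)), with d_i the relative difference
    of each pair (one common convention for duplicate analyses)."""
    pairs = np.asarray(pairs, dtype=float)
    d = (pairs[:, 0] - pairs[:, 1]) / pairs.mean(axis=1)
    return float(np.sqrt(np.sum(d**2) / (2 * len(d))))

# duplicate PAH amounts per sample (ng; hypothetical values)
print(pooled_relative_sd([[1.02, 0.95], [0.48, 0.55], [2.10, 1.85]]))
```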

  14. Mass-spectrometry analysis of histone post-translational modifications in pathology tissue using the PAT-H-MS approach

    Directory of Open Access Journals (Sweden)

    Roberta Noberini

    2016-06-01

    Aberrant histone post-translational modifications (hPTMs) have been implicated in various pathologies, including cancer, and may represent useful epigenetic biomarkers. The data described here provide a mass spectrometry-based quantitative analysis of hPTMs from formalin-fixed paraffin-embedded (FFPE) tissues, from which histones were extracted through the recently developed PAT-H-MS method. First, we analyzed FFPE samples from mouse spleen and liver or human breast cancer up to six years old, together with their corresponding fresh frozen tissue. We then combined the PAT-H-MS approach with a histone-focused version of the super-SILAC strategy (using a mix of histones from four breast cancer cell lines as a spike-in standard) to accurately quantify hPTMs from breast cancer specimens belonging to different subtypes. The data, which are associated with a recent publication (Pathology tissue-quantitative mass spectrometry analysis to profile histone post-translational modification patterns in patient samples; Noberini, 2015 [1]), are deposited at the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier http://www.ebi.ac.uk/pride/archive/projects/PXD002669.

  15. Eddy covariance measurements with high-resolution time-of-flight aerosol mass spectrometry: a new approach to chemically resolved aerosol fluxes

    Directory of Open Access Journals (Sweden)

    D. K. Farmer

    2011-06-01

    Although laboratory studies show that biogenic volatile organic compounds (VOCs) yield substantial secondary organic aerosol (SOA), production of biogenic SOA as indicated by upward fluxes has not been conclusively observed over forests. Further, while aerosols are known to deposit to surfaces, few techniques exist to provide chemically-resolved particle deposition fluxes. To better constrain aerosol sources and sinks, we have developed a new technique to directly measure fluxes of chemically-resolved submicron aerosols using the high-resolution time-of-flight aerosol mass spectrometer (HR-AMS) in a new, fast eddy covariance mode. This approach takes advantage of the instrument's ability to quantitatively identify both organic and inorganic components, including ammonium, sulphate and nitrate, at a temporal resolution of several Hz. The new approach has been successfully deployed over a temperate ponderosa pine plantation in California during the BEARPEX-2007 campaign, providing both total and chemically resolved non-refractory (NR) PM1 fluxes. Average deposition velocities for total NR-PM1 aerosol at noon were 2.05 ± 0.04 mm s⁻¹. Using a high resolution measurement of the NH2+ and NH3+ fragments, we demonstrate the first eddy covariance flux measurements of particulate ammonium, which show a noon-time deposition velocity of 1.9 ± 0.7 mm s⁻¹ and are dominated by deposition of ammonium sulphate.
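
    At its core, the eddy covariance flux is the covariance of the fluctuating parts of vertical wind speed and species concentration. A minimal sketch, assuming synchronized, high-rate time series (the synthetic numbers below are only for illustration):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent flux F = <w'c'>: mean product of the fluctuations of
    vertical wind w (m s^-1) and concentration c (e.g. ug m^-3).
    Negative F indicates deposition; v_d = -F / mean(c)."""
    return float(np.mean((w - w.mean()) * (c - c.mean())))

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 36000)                  # 1 h at 10 Hz (synthetic)
c = 2.0 - 0.5 * w + rng.normal(0.0, 0.2, 36000)  # anticorrelated: deposition
F = eddy_covariance_flux(w, c)
print(F, -F / c.mean())                          # flux and deposition velocity
```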

  16. Targeted and untargeted high resolution mass approach for a putative profiling of glycosylated simple phenols in hybrid grapes.

    Science.gov (United States)

    Barnaba, Chiara; Dellacassa, Eduardo; Nicolini, Giorgio; Giacomelli, Mattia; Roman Villegas, Tomas; Nardin, Tiziana; Larcher, Roberto

    2017-08-01

    Vitis vinifera is one of the most widespread grapevines around the world, representing the raw material for high-quality wine production. The availability of more resistant interspecific hybrid vine varieties, developed from crosses between Vitis vinifera and other Vitis species, has generated much interest, also due to the lower environmental impact of their production. However, hybrid grape wine composition and varietal differences between interspecific hybrids have not been well defined, particularly for the simple phenols profile. The dynamics of these phenols in wines, where the glycosylated forms can be transformed into the free ones during winemaking, also attract increasing health interest because of their role as antioxidants for wine consumers. In this work an on-line SPE clean-up device, to reduce matrix interference, was combined with ultra-high performance liquid chromatography coupled to high-resolution mass spectrometry in order to increase understanding of the phenolic composition of hybrid grape varieties. Specifically, the phenolic composition of 4 hybrid grape varieties (red, Cabernet Cantor and Prior; white, Muscaris and Solaris) and 2 European grape varieties (red, Merlot; white, Chardonnay) was investigated, focusing on free and glycosidically bound simple phenols and considering compound distribution in pulp, skin, seeds and wine. Using a targeted approach, 53 free simple phenols and 7 glycosidic precursors were quantified, with quantification limits ranging from 0.001 to 2 mg kg⁻¹ and calibration R² of 0.99 for over 86% of compounds. The untargeted approach made it possible to tentatively identify 79 glycosylated precursors of selected free simple phenols in the form of -hexoside (N=30), -pentoside (21), -hexoside-hexoside (17), -hexoside-pentoside (4), -pentoside-hexoside (5) and -pentoside-pentoside (2) derivatives on the basis of accurate mass, isotopic pattern and MS/MS fragmentation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale-dependent scalar effective potential. In a phase with spontaneous symmetry breaking, its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential, which determines the scale of spontaneous symmetry breaking. (orig.)

  18. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  19. Recent trends in application of multivariate curve resolution approaches for improving gas chromatography-mass spectrometry analysis of essential oils.

    Science.gov (United States)

    Jalali-Heravi, Mehdi; Parastar, Hadi

    2011-08-15

    Essential oils (EOs) are valuable natural products that are popular nowadays in the world due to their effects on the health conditions of human beings and their role in preventing and curing diseases. In addition, EOs have a broad range of applications in foods, perfumes, cosmetics and human nutrition. Among different techniques for analysis of EOs, gas chromatography-mass spectrometry (GC-MS) has been the most important one in recent years. However, there are some fundamental problems in GC-MS analysis, including baseline drift, spectral background, noise, a low signal-to-noise (S/N) ratio, changes in peak shapes and co-elution. Multivariate curve resolution (MCR) approaches cope with these ongoing challenges and are able to handle these problems. This review focuses on the application of MCR techniques for improving GC-MS analysis of EOs, covering work published between January 2000 and December 2010. In the first part, the importance of EOs in human life and their relevance to analytical chemistry is discussed. In the second part, an insight into some basics needed to understand the prospects and limitations of MCR techniques is given. In the third part, the significance of combining MCR approaches with GC-MS analysis of EOs is highlighted. Furthermore, the commonly used algorithms for preprocessing, chemical rank determination, local rank analysis and multivariate resolution in the field of EO analysis are reviewed. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. A Rational Approach for Discovering and Validating Cancer Markers in Very Small Samples Using Mass Spectrometry and ELISA Microarrays

    Directory of Open Access Journals (Sweden)

    Richard C. Zangar

    2004-01-01

    Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples such as nipple aspirate fluid (NAF) or early-stage tumors are inherently small. Other samples such as serum are collected in larger volumes, but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS) to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.

  1. A problem-solving approach to effective insulin injection for patients at either end of the body mass index.

    Science.gov (United States)

    Juip, Micki; Fitzner, Karen

    2012-06-01

    People with diabetes require skills and knowledge to adhere to medication regimens and self-manage this complex disease. Effective self-management is contingent upon effective problem solving and decision making. Gaps existed regarding useful approaches to problem solving by individuals with very low and very high body mass index (BMI) who self-administer insulin injections. This article addresses those gaps by presenting findings from a patient survey, a symposium on the topic of problem solving, and recent interviews with diabetes educators to facilitate problem-solving approaches for people with diabetes with high and low BMI who inject insulin and/or other medications. In practice, problem solving involves problem identification, definition, and specification; goal and barrier identification are a prelude to generating a set of potential strategies for problem resolution and applying these strategies to implement a solution. Teaching techniques, such as site rotation and ensuring that people with diabetes use the appropriate equipment, increase confidence with medication adherence. Medication taking is more effective when people with diabetes are equipped with the knowledge, skills, and problem-solving behaviors to effectively self-manage their injections.

  2. An Alternative Humans to Mars Approach: Reducing Mission Mass with Multiple Mars Flyby Trajectories and Minimal Capability Investments

    Science.gov (United States)

    Whitley, Ryan J.; Jedrey, Richard; Landau, Damon; Ocampo, Cesar

    2015-01-01

    Mars flyby trajectories and Earth return trajectories have the potential to enable lower-cost and sustainable human exploration of Mars. Flyby and return trajectories are true minimum energy paths with low to zero post-Earth departure maneuvers. By emplacing the large crew vehicles required for human transit on these paths, the total fuel cost can be reduced. The traditional full-up repeating Earth-Mars-Earth cycler concept requires significant infrastructure, but a Mars-only flyby approach minimizes mission mass and maximizes opportunities to build up missions in a stepwise manner. In this paper multiple strategies for sending a crew of 4 to Mars orbit and back are examined. With pre-emplaced assets in Mars orbit, a transit habitat and a minimally functional Mars taxi, a complete Mars mission can be accomplished in 3 SLS launches and 2 Mars flybys, including Orion. While some years are better than others, ample opportunities exist within a given 15-year Earth-Mars alignment cycle. Building up a mission cadence over time, this approach can translate to Mars surface access. Risk, which is always a concern for human missions, is mitigated by the use of flybys with Earth return capability (some of which are true free returns).

  3. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  4. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, such as those caused by faults like gear eccentricity. Moreover, TDA always suffers from period-cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, the FTDA is free of PCE. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed. The simulation results show that the FTDA recovers the periodic components from the background noise effectively and improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. The FTDA identifies the direction and severity of the gear eccentricity, and further enhances the amplitudes of the impulses by 35%. The proposed technique not only solves the problem of PCE but also provides a useful tool for fault symptom extraction in rotating machinery.
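
    For contrast with the FTDA, classical TDA is just a slice-and-average over whole periods, as sketched below; the period-cutting error arises because the record rarely contains an integer number of periods, which is exactly the defect the FTDA removes by reconstructing in the continuous time domain.

```python
import numpy as np

def time_domain_average(signal, period_samples):
    """Classical TDA: slice the record into complete periods and average
    them (a comb filter). Samples beyond the last complete period are
    discarded, the origin of the period-cutting error discussed above."""
    n = len(signal) // period_samples
    return np.asarray(signal[:n * period_samples]).reshape(n, period_samples).mean(axis=0)

# synthetic periodic component (period 128 samples) buried in noise
rng = np.random.default_rng(0)
t = np.arange(100 * 128)
x = np.sin(2 * np.pi * t / 128) + 0.8 * rng.normal(size=t.size)
avg = time_domain_average(x, 128)   # noise power reduced by a factor ~100
```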

  5. Identification of Serum Biomarkers for Biliary Tract Cancers by a Proteomic Approach Based on Time-of-Flight Mass Spectrometry

    International Nuclear Information System (INIS)

    Wang, Wen-Jing; Xu, Wang-Hong; Liu, Cha-Zhen; Rashid, Asif; Cheng, Jia-Rong; Liao, Ping; Hu, Heng; Chu, Lisa W.; Gao, Yu-Tang; Yu, Kai; Hsing, Ann W.

    2010-01-01

    Biliary tract cancers (BTCs) are lethal malignancies currently lacking satisfactory methods for early detection and accurate diagnosis. Surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF-MS) is a promising diagnostic tool for this disease. In this pilot study, serum samples from 50 BTC and 30 cholelithiasis patients as well as 30 healthy subjects from a population-based case-control study were randomly grouped into a training set (30 BTCs, 20 cholelithiasis and 20 controls), a duplicate of the training set, and a blind set (20 BTCs, 10 cholelithiasis and 10 controls); all sets were analyzed on Immobilized Metal Affinity Capture ProteinChips via SELDI-TOF-MS. A decision tree classifier was built using the training set and applied to all test sets. The classification tree constructed with the 3,400, 4,502, 5,680, 7,598, and 11,242 mass-to-charge ratio (m/z) protein peaks had a sensitivity of 96.7% and a specificity of 85.0% when comparing BTCs with non-cancers. When applied to the duplicate set, sensitivity was 66.7% and specificity was 70.0%, while in the blind set, sensitivity was 95.0% and specificity was 75.0%. Positive predictive values of the training, duplicate, and blind sets were 82.9%, 62.5% and 79.2%, respectively. The agreement of the training and duplicate sets was 71.4% (Kappa = 0.43, u = 3.98, P < 0.01). The coefficients of variation, based on 10 replicates of one sample, were 15.8–68.8% for intensity and 0–0.05% for m/z for the five differential peaks. These pilot results suggest that serum protein profiling by SELDI-TOF-MS may be a promising approach for identifying BTCs, but low assay reproducibility may limit its application in clinical practice.
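
    The classifier construction can be reproduced in outline with scikit-learn. In the sketch below the intensities of the five discriminatory peaks are simulated, since the original spectra are not available; only the m/z values named in the abstract are taken from the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
# rows: subjects; columns: intensities at m/z 3400, 4502, 5680, 7598, 11242
X_cancer = rng.normal(loc=[5, 3, 7, 2, 4], scale=1.0, size=(30, 5))  # simulated
X_other = rng.normal(loc=[4, 4, 5, 3, 3], scale=1.0, size=(40, 5))   # simulated
X = np.vstack([X_cancer, X_other])
y = np.array([1] * 30 + [0] * 40)    # 1 = BTC, 0 = non-cancer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)
print("sensitivity:", recall_score(y_te, y_hat))               # true positive rate
print("specificity:", recall_score(y_te, y_hat, pos_label=0))  # true negative rate
```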

  6. MALDI-ISD Mass Spectrometry Analysis of Hemoglobin Variants: a Top-Down Approach to the Characterization of Hemoglobinopathies

    Science.gov (United States)

    Théberge, Roger; Dikler, Sergei; Heckendorf, Christian; Chui, David H. K.; Costello, Catherine E.; McComb, Mark E.

    2015-08-01

    Hemoglobinopathies are the most common inherited disorders in humans and are thus the target of screening programs worldwide. Over the past decade, mass spectrometry (MS) has gained a more important role as a clinical means to diagnose variants, and a number of approaches have been proposed for characterization. Here we investigate the use of matrix-assisted laser desorption/ionization time-of-flight MS (MALDI-TOF MS) with sequencing using in-source decay (MALDI-ISD) for the characterization of Hb variants. We explored the effect of matrix selection using super DHB or 1,5-diaminonaphthalene on ISD fragment ion yield and distribution. MALDI-ISD MS of whole blood using super DHB simultaneously provided molecular weights for the alpha and beta chains, as well as extensive fragmentation in the form of sequence defining c-, (z + 2)-, and y-ion series. We observed sequence coverage on the first 70 amino acids positions from the N- and C-termini of the alpha and beta chains in a single experiment. An abundant beta chain N-terminal fragment ion corresponding to βc34 was determined to be a diagnostic marker ion for Hb S (β6 Glu→Val, sickle cell), Hb C (β6 Glu→Lys), and potentially for Hb E (β26 Glu→Lys). The MALDI-ISD analysis of Hb S and HbSC yielded mass shifts corresponding to the variants, demonstrating the potential for high-throughput screening. Characterization of an alpha chain variant, Hb Westmead (α122 His→Gln), generated fragments that established the location of the variant. This study is the first clinical application of MALDI-ISD MS for the determination and characterization of hemoglobin variants.

  7. Diagnosis of breast masses from dynamic contrast-enhanced and diffusion-weighted MR: a machine learning approach.

    Directory of Open Access Journals (Sweden)

    Hongmin Cai

    PURPOSE: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) of morphology and kinetic features from DCE-MRI to improve the discrimination power of malignant from benign breast masses is rarely reported. MATERIALS AND METHODS: The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning scheme including feature subset selection and various classification schemes was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features for predicting the types of lesions. Various measurements including cross validation and receiver operating characteristics were used to quantify the diagnostic performances of each feature as well as their combination. RESULTS: Seven features were all found to be statistically different between the malignant and the benign groups and their combination achieved the highest classification accuracy. The seven features include one pathological variable of age, one morphological variable of slope, three texture features of entropy, inverse difference and information correlation, one kinetic feature of SER and one DWI feature of apparent diffusion coefficient (ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through a cross validation scheme. The averaged measurements of sensitivity, specificity, AUC and accuracy are 0.85, 0.89, 90.9% and 0.93, respectively. CONCLUSION: Multi-sided variables which characterize the morphological, kinetic, pathological

  8. Diagnosis of breast masses from dynamic contrast-enhanced and diffusion-weighted MR: a machine learning approach.

    Science.gov (United States)

    Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li

    2014-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) of morphology and kinetic features from DCE-MRI to improve the discrimination power of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning scheme including feature subset selection and various classification schemes was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features for predicting the types of lesions. Various measurements including cross validation and receiver operating characteristics were used to quantify the diagnostic performances of each feature as well as their combination. Seven features were all found to be statistically different between the malignant and the benign groups and their combination achieved the highest classification accuracy. The seven features include one pathological variable of age, one morphological variable of slope, three texture features of entropy, inverse difference and information correlation, one kinetic feature of SER and one DWI feature of apparent diffusion coefficient (ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through a cross validation scheme. The averaged measurements of sensitivity, specificity, AUC and accuracy are 0.85, 0.89, 90.9% and 0.93, respectively. Multi-sided variables which characterize the morphological, kinetic, pathological properties and DWI measurement of ADC can dramatically improve the

  9. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  10. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity: a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
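
    The property doing the work here is that a weighted harmonic average is dominated by its smallest terms, so a single strong (close) connection outweighs many weak ones, and averaging over each node's own incident weights naturally produces asymmetric closeness. A minimal illustration with hypothetical numbers follows (the GEN recursion in the paper is more elaborate than this plain harmonic mean).

```python
import numpy as np

def weighted_harmonic_mean(values, weights):
    """H = sum(w) / sum(w / v): dominated by the smallest values, so one
    close connection outweighs many distant ones."""
    v = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(w.sum() / np.sum(w / v))

# per-neighbor "distances" and weights as seen from each endpoint; the two
# directions average over different incident edges, hence the asymmetry
print(weighted_harmonic_mean([1.0, 10.0], [3.0, 1.0]))           # a's view of b
print(weighted_harmonic_mean([1.0, 2.0, 4.0], [1.0, 1.0, 1.0]))  # b's view of a
```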

  11. THE AVERAGE STAR FORMATION HISTORIES OF GALAXIES IN DARK MATTER HALOS FROM z = 0-8

    International Nuclear Information System (INIS)

    Behroozi, Peter S.; Wechsler, Risa H.; Conroy, Charlie

    2013-01-01

    We present a robust method to constrain average galaxy star formation rates (SFRs), star formation histories (SFHs), and the intracluster light (ICL) as a function of halo mass. Our results are consistent with observed galaxy stellar mass functions, specific star formation rates (SSFRs), and cosmic star formation rates (CSFRs) from z = 0 to z = 8. We consider the effects of a wide range of uncertainties on our results, including those affecting stellar masses, SFRs, and the halo mass function at the heart of our analysis. As they are relevant to our method, we also present new calibrations of the dark matter halo mass function, halo mass accretion histories, and halo-subhalo merger rates out to z = 8. We also provide new compilations of CSFRs and SSFRs; more recent measurements are now consistent with the buildup of the cosmic stellar mass density at all redshifts. Implications of our work include: halos near 10¹² M☉ are the most efficient at forming stars at all redshifts, the baryon conversion efficiency of massive halos drops markedly after z ∼ 2.5 (consistent with theories of cold-mode accretion), the ICL for massive galaxies is expected to be significant out to at least z ∼ 1-1.5, and dwarf galaxies at low redshifts have higher stellar mass to halo mass ratios than previous expectations and form later than in most theoretical models. Finally, we provide new fitting formulae for SFHs that are more accurate than the standard declining tau model. Our approach places a wide variety of observations relating to the SFH of galaxies into a self-consistent framework based on the modern understanding of structure formation in ΛCDM. Constraints on the stellar mass-halo mass relationship and SFRs are available for download online.

  12. The Average Star Formation Histories of Galaxies in Dark Matter Halos from z = 0-8

    Science.gov (United States)

    Behroozi, Peter S.; Wechsler, Risa H.; Conroy, Charlie

    2013-06-01

    We present a robust method to constrain average galaxy star formation rates (SFRs), star formation histories (SFHs), and the intracluster light (ICL) as a function of halo mass. Our results are consistent with observed galaxy stellar mass functions, specific star formation rates (SSFRs), and cosmic star formation rates (CSFRs) from z = 0 to z = 8. We consider the effects of a wide range of uncertainties on our results, including those affecting stellar masses, SFRs, and the halo mass function at the heart of our analysis. As they are relevant to our method, we also present new calibrations of the dark matter halo mass function, halo mass accretion histories, and halo-subhalo merger rates out to z = 8. We also provide new compilations of CSFRs and SSFRs; more recent measurements are now consistent with the buildup of the cosmic stellar mass density at all redshifts. Implications of our work include: halos near 10¹² M⊙ are the most efficient at forming stars at all redshifts, the baryon conversion efficiency of massive halos drops markedly after z ~ 2.5 (consistent with theories of cold-mode accretion), the ICL for massive galaxies is expected to be significant out to at least z ~ 1-1.5, and dwarf galaxies at low redshifts have higher stellar mass to halo mass ratios than previous expectations and form later than in most theoretical models. Finally, we provide new fitting formulae for SFHs that are more accurate than the standard declining tau model. Our approach places a wide variety of observations relating to the SFH of galaxies into a self-consistent framework based on the modern understanding of structure formation in ΛCDM. Constraints on the stellar mass-halo mass relationship and SFRs are available for download online.

  13. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  14. A genome-wide approach accounting for body mass index identifies genetic variants influencing fasting glycemic traits and insulin resistance

    Science.gov (United States)

    Manning, Alisa K.; Hivert, Marie-France; Scott, Robert A.; Grimsby, Jonna L.; Bouatia-Naji, Nabila; Chen, Han; Rybin, Denis; Liu, Ching-Ti; Bielak, Lawrence F.; Prokopenko, Inga; Amin, Najaf; Barnes, Daniel; Cadby, Gemma; Hottenga, Jouke-Jan; Ingelsson, Erik; Jackson, Anne U.; Johnson, Toby; Kanoni, Stavroula; Ladenvall, Claes; Lagou, Vasiliki; Lahti, Jari; Lecoeur, Cecile; Liu, Yongmei; Martinez-Larrad, Maria Teresa; Montasser, May E.; Navarro, Pau; Perry, John R. B.; Rasmussen-Torvik, Laura J.; Salo, Perttu; Sattar, Naveed; Shungin, Dmitry; Strawbridge, Rona J.; Tanaka, Toshiko; van Duijn, Cornelia M.; An, Ping; de Andrade, Mariza; Andrews, Jeanette S.; Aspelund, Thor; Atalay, Mustafa; Aulchenko, Yurii; Balkau, Beverley; Bandinelli, Stefania; Beckmann, Jacques S.; Beilby, John P.; Bellis, Claire; Bergman, Richard N.; Blangero, John; Boban, Mladen; Boehnke, Michael; Boerwinkle, Eric; Bonnycastle, Lori L.; Boomsma, Dorret I.; Borecki, Ingrid B.; Böttcher, Yvonne; Bouchard, Claude; Brunner, Eric; Budimir, Danijela; Campbell, Harry; Carlson, Olga; Chines, Peter S.; Clarke, Robert; Collins, Francis S.; Corbatón-Anchuelo, Arturo; Couper, David; de Faire, Ulf; Dedoussis, George V; Deloukas, Panos; Dimitriou, Maria; Egan, Josephine M; Eiriksdottir, Gudny; Erdos, Michael R.; Eriksson, Johan G.; Eury, Elodie; Ferrucci, Luigi; Ford, Ian; Forouhi, Nita G.; Fox, Caroline S; Franzosi, Maria Grazia; Franks, Paul W; Frayling, Timothy M; Froguel, Philippe; Galan, Pilar; de Geus, Eco; Gigante, Bruna; Glazer, Nicole L.; Goel, Anuj; Groop, Leif; Gudnason, Vilmundur; Hallmans, Göran; Hamsten, Anders; Hansson, Ola; Harris, Tamara B.; Hayward, Caroline; Heath, Simon; Hercberg, Serge; Hicks, Andrew A.; Hingorani, Aroon; Hofman, Albert; Hui, Jennie; Hung, Joseph; Jarvelin, Marjo Riitta; Jhun, Min A.; Johnson, Paul C.D.; Jukema, J Wouter; Jula, Antti; Kao, W.H.; Kaprio, Jaakko; Kardia, Sharon L. R.; Keinanen-Kiukaanniemi, Sirkka; Kivimaki, Mika; Kolcic, Ivana; Kovacs, Peter; Kumari, Meena; Kuusisto, Johanna; Kyvik, Kirsten Ohm; Laakso, Markku; Lakka, Timo; Lannfelt, Lars; Lathrop, G Mark; Launer, Lenore J.; Leander, Karin; Li, Guo; Lind, Lars; Lindstrom, Jaana; Lobbens, Stéphane; Loos, Ruth J. F.; Luan, Jian’an; Lyssenko, Valeriya; Mägi, Reedik; Magnusson, Patrik K. 
E.; Marmot, Michael; Meneton, Pierre; Mohlke, Karen L.; Mooser, Vincent; Morken, Mario A.; Miljkovic, Iva; Narisu, Narisu; O’Connell, Jeff; Ong, Ken K.; Oostra, Ben A.; Palmer, Lyle J.; Palotie, Aarno; Pankow, James S.; Peden, John F.; Pedersen, Nancy L.; Pehlic, Marina; Peltonen, Leena; Penninx, Brenda; Pericic, Marijana; Perola, Markus; Perusse, Louis; Peyser, Patricia A; Polasek, Ozren; Pramstaller, Peter P.; Province, Michael A.; Räikkönen, Katri; Rauramaa, Rainer; Rehnberg, Emil; Rice, Ken; Rotter, Jerome I.; Rudan, Igor; Ruokonen, Aimo; Saaristo, Timo; Sabater-Lleal, Maria; Salomaa, Veikko; Savage, David B.; Saxena, Richa; Schwarz, Peter; Seedorf, Udo; Sennblad, Bengt; Serrano-Rios, Manuel; Shuldiner, Alan R.; Sijbrands, Eric J.G.; Siscovick, David S.; Smit, Johannes H.; Small, Kerrin S.; Smith, Nicholas L.; Smith, Albert Vernon; Stančáková, Alena; Stirrups, Kathleen; Stumvoll, Michael; Sun, Yan V.; Swift, Amy J.; Tönjes, Anke; Tuomilehto, Jaakko; Trompet, Stella; Uitterlinden, Andre G.; Uusitupa, Matti; Vikström, Max; Vitart, Veronique; Vohl, Marie-Claude; Voight, Benjamin F.; Vollenweider, Peter; Waeber, Gerard; Waterworth, Dawn M; Watkins, Hugh; Wheeler, Eleanor; Widen, Elisabeth; Wild, Sarah H.; Willems, Sara M.; Willemsen, Gonneke; Wilson, James F.; Witteman, Jacqueline C.M.; Wright, Alan F.; Yaghootkar, Hanieh; Zelenika, Diana; Zemunik, Tatijana; Zgaga, Lina; Wareham, Nicholas J.; McCarthy, Mark I.; Barroso, Ines; Watanabe, Richard M.; Florez, Jose C.; Dupuis, Josée; Meigs, James B.; Langenberg, Claudia

    2013-01-01

    Recent genome-wide association studies have described many loci implicated in type 2 diabetes (T2D) pathophysiology and beta-cell dysfunction, but contributed little to our understanding of the genetic basis of insulin resistance. We hypothesized that genes implicated in insulin resistance pathways may be uncovered by accounting for differences in body mass index (BMI) and potential interaction between BMI and genetic variants. We applied a novel joint meta-analytical approach to test associations with fasting insulin (FI) and glucose (FG) on a genome-wide scale. We present six previously unknown FI loci at P < 5 × 10⁻⁸ in combined discovery and follow-up analyses of 52 studies comprising up to 96,496 non-diabetic individuals. Risk variants were associated with higher triglyceride and lower HDL cholesterol levels, suggestive of a role for these FI loci in insulin resistance pathways. The localization of these additional loci will aid further characterization of the role of insulin resistance in T2D pathophysiology. PMID:22581228

  15. Identification of clinically relevant Corynebacterium strains by Api Coryne, MALDI-TOF-mass spectrometry and molecular approaches.

    Science.gov (United States)

    Alibi, S; Ferjani, A; Gaillot, O; Marzouk, M; Courcol, R; Boukadida, J

    2015-09-01

    We evaluated the Bruker Biotyper matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry (MS) system for the identification of 97 Corynebacterium clinical strains, in comparison with identification by Api Coryne strips, using sequencing of the 16S rRNA gene and of the hypervariable region of the rpoB gene as the reference method. C. striatum was the predominant species isolated, followed by C. amycolatum. Api Coryne strip and MALDI-TOF-MS identifications agreed in 88.65% of cases. MALDI-TOF-MS was unable to differentiate C. aurimucosum from C. minutissimum, or C. minutissimum from C. singulare, but reliably identified 92 of 97 (94.84%) strains. Two strains remained incompletely identified to the species level by both MALDI-TOF-MS and molecular approaches; they belonged to the genera Cellulomonas and Pseudoclavibacter. In conclusion, MALDI-TOF-MS is a rapid and reliable method for the identification of Corynebacterium species. However, some limits have been noted and have to be resolved by the application of molecular methods. Copyright © 2015. Published by Elsevier SAS.

  16. A novel approach to quantifying the sensitivity of current and future cosmological datasets to the neutrino mass ordering through Bayesian hierarchical modeling

    Science.gov (United States)

    Gerbino, Martina; Lattanzi, Massimiliano; Mena, Olga; Freese, Katherine

    2017-12-01

    We present a novel approach to derive constraints on neutrino masses, as well as on other cosmological parameters, from cosmological data, while taking into account our ignorance of the neutrino mass ordering. We derive constraints from a combination of current as well as future cosmological datasets on the total neutrino mass M_ν and on the mass fractions f_ν,i = m_i/M_ν (where the index i = 1, 2, 3 labels the three mass eigenstates) carried by each of the mass eigenstates m_i, after marginalizing over the (unknown) neutrino mass ordering, either normal ordering (NH) or inverted ordering (IH). The bounds on all the cosmological parameters, including those on the total neutrino mass, therefore take into account the uncertainty related to our ignorance of the mass hierarchy that is actually realized in nature. This novel approach is carried out in the framework of Bayesian analysis of a typical hierarchical problem, where the distribution of the parameters of the model depends on further parameters, the hyperparameters. In this context, the choice of the neutrino mass ordering is modeled via the discrete hyperparameter h_type, which we introduce in the usual Markov chain analysis. The preference from cosmological data for either the NH or the IH scenario is then simply encoded in the posterior distribution of the hyperparameter itself. Current cosmic microwave background (CMB) measurements assign equal odds to the two hierarchies, and are thus unable to distinguish between them. However, after the addition of baryon acoustic oscillation (BAO) measurements, a weak preference for the normal hierarchical scenario appears, with odds of 4 : 3 from Planck temperature and large-scale polarization in combination with BAO (3 : 2 if small-scale polarization is also included). Concerning next-generation cosmological experiments, forecasts suggest that the combination of upcoming CMB (COrE) and BAO surveys (DESI) may determine the neutrino mass hierarchy at a high statistical
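
    The hierarchical construction can be mimicked with a toy sampler: one continuous parameter (the total mass) and one discrete hyperparameter selecting the ordering, updated jointly so that the posterior odds of the orderings are read off as visit frequencies. Everything numerical below (the pseudo-measurement, its noise, and the minimal-mass bounds standing in for the oscillation constraints) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
M_obs, sigma = 0.08, 0.03          # toy "measured" total mass and error (eV)
M_min = {"NH": 0.06, "IH": 0.10}   # minimal total mass allowed per ordering

def log_like(M, h):
    if M < M_min[h]:
        return -np.inf             # the ordering imposes a lower bound on M
    return -0.5 * ((M - M_obs) / sigma) ** 2

M, h = 0.12, "NH"
visits = []
for _ in range(20000):
    # Metropolis update of the continuous parameter M
    M_prop = M + 0.02 * rng.normal()
    if np.log(rng.random()) < log_like(M_prop, h) - log_like(M, h):
        M = M_prop
    # Metropolis flip of the discrete hyperparameter h_type
    h_prop = "IH" if h == "NH" else "NH"
    if np.log(rng.random()) < log_like(M, h_prop) - log_like(M, h):
        h = h_prop
    visits.append(h)
print("posterior odds NH:IH ≈", visits.count("NH") / visits.count("IH"))
```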

  17. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process {X(t), t ≥ 0} (in a stochastic setting, a fixed realization, i.e., sample path, of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results offer the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
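
    In discrete time the asserted equality is easy to visualize: the long-run time average of f(X(t)) coincides with the expectation of f under the empirical occupation (frequency) distribution of the sample path. A small numerical illustration with an AR(1) path (here the two quantities agree by construction, which is precisely the identity in question):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.empty(100_000)              # one sample path of a stationary AR(1)
x[0] = 0.0
for t in range(1, x.size):
    x[t] = 0.7 * x[t - 1] + rng.normal()

f = np.cos                         # any bounded measurable function
time_avg = f(x).mean()             # long-run time average of f(X(t))

# expectation of f under the empirical (long-run frequency) distribution
dens, edges = np.histogram(x, bins=400, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
freq_avg = float(np.sum(f(mids) * dens * np.diff(edges)))
print(time_avg, freq_avg)          # agree up to binning error
```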

  18. A comprehensive high-resolution mass spectrometry approach for characterization of metabolites by combination of ambient ionization, chromatography and imaging methods.

    Science.gov (United States)

    Berisha, Arton; Dold, Sebastian; Guenther, Sabine; Desbenoit, Nicolas; Takats, Zoltan; Spengler, Bernhard; Römpp, Andreas

    2014-08-30

    An ideal method for bioanalytical applications would deliver spatially resolved quantitative information in real time and without sample preparation. In reality these requirements can typically not be met by a single analytical technique. Therefore, we combine different mass spectrometry approaches: chromatographic separation, ambient ionization and imaging techniques, in order to obtain comprehensive information about metabolites in complex biological samples. Samples were analyzed by laser desorption followed by electrospray ionization (LD-ESI) as an ambient ionization technique, by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging for spatial distribution analysis and by high-performance liquid chromatography/electrospray ionization mass spectrometry (HPLC/ESI-MS) for quantitation and validation of compound identification. All MS data were acquired with high mass resolution and accurate mass (using orbital trapping and ion cyclotron resonance mass spectrometers). Grape berries were analyzed and evaluated in detail, whereas wheat seeds and mouse brain tissue were analyzed in proof-of-concept experiments. In situ measurements by LD-ESI without any sample preparation allowed for fast screening of plant metabolites on the grape surface. MALDI imaging of grape cross sections at 20 µm pixel size revealed the detailed distribution of metabolites which were in accordance with their biological function. HPLC/ESI-MS was used to quantify 13 anthocyanin species as well as to separate and identify isomeric compounds. A total of 41 metabolites (amino acids, carbohydrates, anthocyanins) were identified with all three approaches. Mass accuracy for all MS measurements was better than 2 ppm (root mean square error). The combined approach provides fast screening capabilities, spatial distribution information and the possibility to quantify metabolites. Accurate mass measurements proved to be critical in order to reliably combine data from different MS
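
    The quoted 2 ppm figure is a relative mass error; for reference, the conversion is a one-liner (the m/z values below are illustrative, not taken from the study):

```python
def ppm_error(measured_mz, theoretical_mz):
    """Relative mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

print(ppm_error(449.1086, 449.1078))  # ≈ 1.8 ppm, within a 2 ppm tolerance
```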

  19. Testing compound-specific δ13C of amino acids in mussels as a new approach to determine the average 13C values of primary production in littoral ecosystems

    Science.gov (United States)

    Vokhshoori, N. L.; Larsen, T.; McCarthy, M.

    2012-12-01

    Compound-specific isotope analysis of amino acids (CSI-AA) is a technique used to decouple trophic enrichment patterns from source changes at the base of the food web. With this new emerging tool, it is possible to precisely determine both trophic position and δ15N or δ13C source values in higher feeding organisms. While most work to date has focused on nitrogen (N) isotopic values, early work has suggested that δ13C CSI-AA has great potential as a new tracer, both to record δ13C values of primary production (unaltered by trophic transfers) and to "fingerprint" specific carbon source organisms. Since essential amino acids (EAA) cannot be made de novo in metazoans but must be obtained from diet, the δ13C value of the primary producer is preserved through the food web. Therefore, the δ13C values of EAAs act as a unique signature of different primary producers and can be used to fingerprint the dominant carbon (C) source driving primary production at the base of the food web. In littoral ecosystems, such as the California Upwelling System (CUS), the likely dominant C sources of the suspended particulate organic matter (POM) pool are kelp, upwelling phytoplankton or estuarine phytoplankton. While bulk isotopes of C and N are used extensively to resolve relative consumer hierarchy or shifting diet in a food web, we found that the bulk δ13C values in mussels cannot distinguish the exact source in littoral ecosystems. Here we present results from 15 sites within the CUS, between Cape Blanco, OR, and La Jolla, CA, where mussels were sampled and analyzed for both bulk δ13C and CSI-AA. We found no latitudinal trends; rather, average bulk δ13C values for the entire coastal record were highly consistent (-15.7 ± 0.9‰). The bulk record would suggest either nutrient provisioning from kelp or upwelled phytoplankton, but 13C-AA fingerprinting narrows these two candidates to upwelled phytoplankton. This suggests that mussels are recording integrated coastal phytoplankton values, with the enriched

  20. Implications of elevated CO2 on pelagic carbon fluxes in an Arctic mesocosm study – an elemental mass balance approach

    Directory of Open Access Journals (Sweden)

    J. Czerny

    2013-05-01

    Recent studies on the impacts of ocean acidification on pelagic communities have identified changes in carbon to nutrient dynamics with related shifts in elemental stoichiometry. In principle, mesocosm experiments provide the opportunity of determining temporal dynamics of all relevant carbon and nutrient pools and, thus, calculating elemental budgets. In practice, attempts to budget mesocosm enclosures are often hampered by uncertainties in some of the measured pools and fluxes, in particular due to uncertainties in constraining air–sea gas exchange, particle sinking, and wall growth. In an Arctic mesocosm study on ocean acidification applying KOSMOS (Kiel Off-Shore Mesocosms for future Ocean Simulation), all relevant element pools and fluxes of carbon, nitrogen and phosphorus were measured, using an improved experimental design intended to narrow down the mentioned uncertainties. Water-column concentrations of particulate and dissolved organic and inorganic matter were determined daily. New approaches for quantitative estimates of material sinking to the bottom of the mesocosms and of gas exchange at 48 h temporal resolution, as well as estimates of wall growth, were developed to close the gaps in element budgets. However, losses of elements from the budgets into a sum of insufficiently determined pools were detected; such losses are principally unavoidable in mesocosm investigations. The comparison of variability patterns of all individually measured datasets revealed analytical precision to be the main issue in the determination of budgets. Uncertainties in dissolved organic carbon (DOC), nitrogen (DON) and particulate organic phosphorus (POP) were much higher than the summed error in the determination of the same elements in all other pools. With estimates provided for all other major elemental pools, mass balance calculations could be used to infer the temporal development of the DOC, DON and POP pools. Future elevated pCO2 was found to enhance net autotrophic community carbon
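
    The residual-closure logic described above (inferring a poorly constrained pool such as DOC from the better-measured pools and fluxes) can be sketched as follows; all numbers and sign conventions are hypothetical.

```python
import numpy as np

# daily carbon pools and cumulative fluxes in one mesocosm
# (umol C L^-1; every value below is hypothetical)
dic    = np.array([2100.0, 2095.0, 2088.0])  # dissolved inorganic carbon
poc    = np.array([  18.0,   21.0,   25.0])  # particulate organic carbon
sunk   = np.array([   0.0,    1.0,    2.5])  # cumulative C in the sediment trap
gas_in = np.array([   0.0,    0.5,    1.2])  # cumulative air-sea CO2 influx

# closure: DIC + POC + DOC + sunk = initial total + gas influx, so the
# poorly constrained DOC pool follows as the residual of the budget
total0 = dic[0] + poc[0]                     # DOC at day 0 as zero reference
doc_anomaly = total0 + gas_in - dic - poc - sunk
print(doc_anomaly)                           # DOC change relative to day 0
```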

  1. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  2. An efficient algorithmic approach for mass spectrometry-based disulfide connectivity determination using multi-ion analysis

    Directory of Open Access Journals (Sweden)

    Yen Ten-Yang

    2011-02-01

    Full Text Available Abstract Background Determining the disulfide (S-S) bond pattern in a protein is often crucial for understanding its structure and function. In recent research, mass spectrometry (MS) based analysis has been applied to this problem following protein digestion under both partial reduction and non-reduction conditions. However, this paradigm still awaits solutions to certain algorithmic problems, fundamental amongst which is the efficient matching of an exponentially growing set of putative S-S bonded structural alternatives to the large amounts of experimental spectrometric data. Current methods circumvent this challenge primarily through simplifications, such as by assuming only the occurrence of certain ion types (b-ions and y-ions) that predominate in the more popular dissociation methods, such as collision-induced dissociation (CID). Unfortunately, this can adversely impact the quality of results. Method We present an algorithmic approach to this problem that can, with high computational efficiency, analyze multiple ion types (a, b, b°, b*, c, x, y, y°, y*, and z) and deal with complex bonding topologies, such as inter/intra bonding involving more than two peptides. The proposed approach combines an approximation algorithm-based search formulation with data driven parameter estimation. This formulation considers only those regions of the search space where the correct solution resides with a high likelihood. Putative disulfide bonds thus obtained are finally combined in a globally consistent pattern to yield the overall disulfide bonding topology of the molecule. Additionally, each bond is associated with a confidence score, which aids in interpretation and assimilation of the results. Results The method was tested on nine different eukaryotic Glycosyltransferases possessing disulfide bonding topologies of varying complexity. Its performance was found to be characterized by high efficiency (in terms of time and the fraction of search space
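
    As an illustration of what multi-ion-type matching involves (a generic sketch, not the paper's algorithm; the residue and ion-offset constants are standard monoisotopic values), singly charged fragment masses for several ion types can be generated from a peptide sequence and compared against observed peaks:

    ```python
    # Monoisotopic residue masses (Da) for a few amino acids, plus the standard
    # proton and neutral-loss masses used to derive the different ion types.
    RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "C": 103.00919,
               "K": 128.09496, "R": 156.10111}
    PROTON, H2O, NH3 = 1.00728, 18.01056, 17.02655
    CO = 27.99491

    def fragment_mz(peptide, site, kind):
        """Singly charged fragment m/z for a backbone cleavage after `site`."""
        n_sum = sum(RESIDUE[aa] for aa in peptide[:site])
        c_sum = sum(RESIDUE[aa] for aa in peptide[site:])
        b = n_sum + PROTON           # b ion: N-terminal fragment
        y = c_sum + H2O + PROTON     # y ion: C-terminal fragment
        table = {"b": b, "a": b - CO, "b0": b - H2O, "b*": b - NH3,
                 "y": y, "y0": y - H2O, "y*": y - NH3}
        return table[kind]

    for kind in ("a", "b", "b0", "b*", "y", "y0", "y*"):
        print(kind, round(fragment_mz("GASCK", 2, kind), 4))
    ```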

  3. Recent advances in mass spectrometry-based approaches for proteomics and biologics: Great contribution for developing therapeutic antibodies.

    Science.gov (United States)

    Iwamoto, Noriko; Shimada, Takashi

    2018-05-01

    Since the turn of the century, mass spectrometry (MS) technologies have continued to improve dramatically, and advanced strategies that were impossible a decade ago are increasingly becoming available. The basic characteristics behind these advancements are MS resolution, quantitative accuracy, and information science for appropriate data processing. The spectral data from MS contain various types of information. The benefits of improving the resolution of MS data include accurate molecular structural-derived information, and as a result, we can obtain a refined biomolecular structure determination in a sequential and large-scale manner. Moreover, in MS data, not only accurate structural information but also the generated ion amount plays an important role. This progress has greatly contributed to a research field that captures biological events as a system by comprehensively tracing the various changes in biomolecular dynamics. The sequential changes of proteome expression in biological pathways are very essential, and the amounts of the changes often directly become the targets of drug discovery or indicators of clinical efficacy. To take this proteomic approach, it is necessary to separate the individual MS spectra derived from each biomolecule in complex biological samples. MS alone cannot achieve complete peak separation, and we should consider improving the methods for sample processing and purification to make samples suitable for injection into MS. Among analytical instruments, the above-described characteristics can only be achieved using MS. Moreover, MS is expected to be applied and expand into many fields, not only basic life sciences but also forensic medicine, plant sciences, materials, and natural products. In this review, we focus on the technical fundamentals and future aspects of the strategies for accurate structural identification, structure-indicated quantitation, and on the challenges for pharmacokinetics of high

  4. Design of pulsed perforated-plate columns for industrial scale mass transfer applications - present experience and the need for a model based approach

    International Nuclear Information System (INIS)

    Roy, Amitava

    2010-01-01

    Mass transfer is a vital unit operation in the processing of spent nuclear fuel in the backend of the closed fuel cycle, and pulsed perforated-plate extraction columns have been used as mass transfer devices for more than five decades. The pulsed perforated-plate column is an agitated differential contactor with wide applicability due to its simplicity, high mass transfer efficiency, high throughput, suitability for maintenance-free remote operation, ease of cleaning/decontamination and cost effectiveness. Design of pulsed columns is based on a model proposed to describe the hydrodynamics and mass transfer. In the equilibrium stage model, the HETS values are obtained from pilot plant experiments and then scaled empirically to design columns for industrial application. The dispersion model accounts for mass transfer kinetics and back-mixing. The drop population balance model can describe the complex hydrodynamics of the dispersed phase, that is, drop formation, break-up and drop-to-drop interactions. In recent years, significant progress has been made in modelling pulsed columns using CFD, which provides a complete mathematical description of hydrodynamics in terms of spatial distribution of flow fields and 3D visualization. Under the condition of pulsation, the poly-dispersed nature of the turbulent droplet swarm renders modelling difficult. In the absence of industry acceptance of the proposed models, the conventional chemical engineering practice is to use the HETS-NTS concept or the HTU-NTU approach to design extraction columns. The practicability of the HTU-NTU approach has some limitations due to the lack of experimental data on individual film mass transfer coefficients. Presently, the HETS-NTS concept has been used for designing the columns, which has given satisfactory performance. The design objective is mainly to arrive at the diameter and height of the mass transfer section for a specific plate geometry, fluid properties and pulsing condition to meet the intended throughput (capacity) and mass
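
    For orientation, both sizing concepts mentioned here reduce the height of the mass transfer section to the product of a unit height and a unit count; in standard textbook form (not specific to this paper):

    ```latex
    H = \mathrm{HETS} \times N_{\mathrm{TS}}, \qquad
    H = \mathrm{HTU} \times N_{\mathrm{TU}}
    ```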

  5. Some mass measurement problems

    International Nuclear Information System (INIS)

    Merritt, J.S.

    1976-01-01

    Concerning the problem of determining the thickness of a target, an uncomplicated approach is to measure its mass and area and take the quotient. This paper examines the mass measurement aspect of such an approach. (author)
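
    The quotient referred to is the areal density; where a linear thickness is needed, the bulk density enters as well (standard relations, stated here only for orientation):

    ```latex
    \sigma = \frac{m}{A}\ \left[\mathrm{mg\,cm^{-2}}\right], \qquad
    t = \frac{m}{\rho A}
    ```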

  6. Combined effects of Mass and Velocity on forward displacement and phenomenological ratings: a functional measurement approach to the Momentum metaphor

    Directory of Open Access Journals (Sweden)

    Michel-Ange Amorim

    2010-01-01

    Full Text Available Representational Momentum (RepMo) refers to the phenomenon that the vanishing position of a moving target is perceived as displaced ahead in the direction of movement. Originally taken to reflect a strict internalization of physical momentum, the finding that the target's implied mass did not have an effect led to its subsequent reinterpretation as a second-order isomorphism between mental representations and principles of the physical world. However, very few studies have addressed the effects of mass on RepMo, and consistent replications of the null effect are lacking. The extent of motor engagement of observers in RepMo tasks has, on the other hand, been suggested to determine the occurrence of the phenomenon; however, no systematic investigations have been made of the degree to which it might modulate the effect of target mass. In the present work, we use Information Integration Theory to study the joint effects of different motor responses, target velocity and target mass on RepMo, and also of velocity and target mass on rating responses. Outcomes point not only to an effect of mass on RepMo, but also to a differential effect of response modality on kinematic (e.g., velocity) and dynamic (e.g., mass) variables. Comparisons of patterns of mislocalisation with phenomenological ratings suggest that simplification of physical principles, rather than strict internalization or isomorphism per se, might underlie RepMo.

  7. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
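
    For reference, a commonly studied form of the nonlinearity-managed NLS (normalizations vary; this is a generic form, not necessarily the paper's) has a periodic, rapidly varying nonlinearity coefficient γ(t):

    ```latex
    i\,\frac{\partial u}{\partial t} + \frac{\partial^2 u}{\partial x^2}
    + \gamma(t)\,\lvert u\rvert^2 u = 0
    ```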

  8. A Proteomic Approach for Identification of Bacteria Using Tandem Mass Spectrometry Combined With a Translatome Database and Statistical Scoring

    National Research Council Canada - National Science Library

    Dworzanski, Jacek P; Snyder, A. P; Zhang, Haiyan; Wishart, David; Chen, Rui; Li, Liang

    2005-01-01

    ... mass spectra against a database translated from fully sequenced bacterial genomes. An in-house developed algorithm for filtering of search results has been tested with Bacillus subtilis and Escherichia coli microorganism...

  9. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  10. Vertically averaged approaches for CO 2 migration with solubility trapping

    KAUST Repository

    Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.

    2011-01-01

    The long-term storage security of injected carbon dioxide (CO2) is an essential component of geological carbon sequestration operations. In the postinjection phase, the mobile CO2 plume migrates in large part because of buoyancy forces, following

  11. Sequence protein identification by randomized sequence database and transcriptome mass spectrometry (SPIDER-TMS): from manual to automatic application of a 'de novo sequencing' approach.

    Science.gov (United States)

    Pascale, Raffaella; Grossi, Gerarda; Cruciani, Gabriele; Mecca, Giansalvatore; Santoro, Donatello; Sarli Calace, Renzo; Falabella, Patrizia; Bianco, Giuliana

    Sequence protein identification by a randomized sequence database and transcriptome mass spectrometry software package has been developed at the University of Basilicata in Potenza (Italy), designed to facilitate the determination of the amino acid sequence of a peptide as well as the unequivocal identification of proteins in a high-throughput manner, with enormous savings of time, economic resources and expertise. The software package is a valid tool for the automation of a de novo sequencing approach, overcoming its main limits, and a versatile platform useful in the proteomic field for the unequivocal identification of proteins starting from tandem mass spectrometry data. The strength of this software is that it is a user-friendly and non-statistical approach, so protein identification can be considered unambiguous.

  12. Profiling monoterpenol glycoconjugation in Vitis vinifera L. cv. Muscat of Alexandria using a novel putative compound database approach, high resolution mass spectrometry and collision induced dissociation fragmentation analysis.

    Science.gov (United States)

    Hjelmeland, Anna K; Zweigenbaum, Jerry; Ebeler, Susan E

    2015-08-05

    In this work we present a novel approach for the identification of plant metabolites using ultrahigh performance liquid chromatography coupled to accurate mass time-of-flight mass spectrometry. The workflow involves developing an in-house compound database consisting of exact masses of previously identified as well as putative compounds. The database is used to screen accurate mass spectrometry (MS) data to identify possible compound matches. Subsequent tandem MS data are acquired for possible matches and used for structural elucidation. The methodology is applied to profile monoterpene glycosides in Vitis vinifera cv. Muscat of Alexandria grape berries over three developmental stages. Monoterpenes are a subclass of terpenes, the largest class of plant secondary metabolites, and are found in two major forms in the plant, "bound" to one or more sugar moieties or "free" of said sugar moieties. In the free form, monoterpenes are noted for their fragrance and play important roles in plant defense and as attractants for pollinators. However, glycoconjugation renders these compounds odorless, and it is this form that the plant uses for monoterpene storage. In order to gain insight into monoterpene biochemistry and their fate in the plant, an analysis of intact glycosides is essential. Eighteen monoterpene glycosides were identified, including a monoterpene trisaccharide glycoside, which is tentatively identified here for the first time in any plant. Additionally, while previous studies have identified monoterpene malonylated glucosides in other grapevine tissue, we tentatively identify them for the first time in grape berries. This analytical approach can be readily applied to other plants, and the workflow can also be used for other classes of compounds. This approach, in general, provides researchers with data to support the identification of putative compounds, which is especially useful when no standard is available. Copyright © 2015 Elsevier B.V. All rights reserved.
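
    To make the screening step concrete, a minimal sketch of matching accurate-mass features against an in-house exact-mass database with a ppm tolerance is shown below; the database entries, adduct assumption ([M-H]-) and tolerance are illustrative, not the paper's values:

    ```python
    # Hypothetical database of neutral monoisotopic masses (Da).
    DATABASE = {"linalool hexoside": 316.1886, "geraniol diglycoside": 478.2414}
    PROTON = 1.00728  # proton mass; [M-H]- observed m/z = M - PROTON

    def match_features(features_mz, tol_ppm=5.0):
        """Match deprotonated ([M-H]-) features to neutral exact masses."""
        hits = []
        for mz in features_mz:
            neutral = mz + PROTON  # back-calculate the neutral mass
            for name, exact in DATABASE.items():
                ppm = (neutral - exact) / exact * 1e6
                if abs(ppm) <= tol_ppm:
                    hits.append((mz, name, round(ppm, 2)))
        return hits

    print(match_features([315.1815, 477.2339]))  # invented feature masses
    ```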

  13. Phenomenological approach to the modelling of elliptical galaxies: The problem of the mass-to-light ratio

    Directory of Open Access Journals (Sweden)

    Samurović S.

    2007-01-01

    Full Text Available In this paper the problem of the phenomenological modelling of elliptical galaxies using various available observational data is presented. Recently, Tortora, Cardona and Piedipalumbo (2007) suggested a double power law expression for the global cumulative mass-to-light ratio of elliptical galaxies. We tested their expression on a sample of ellipticals for which we have estimates of the mass-to-light ratio beyond ~ 3 effective radii, a region where dark matter is expected to play an important dynamical role. We found that, for all the galaxies in our sample, α + β > 0, but that this does not necessarily mean a high dark matter content. The galaxies with higher mass (and higher dark matter content) also have a higher value of α + β. It was also shown that there is an indication that galaxies with a higher value of the effective radius also have a higher dark matter content.

  14. A novel approach to quantifying the sensitivity of current and future cosmological datasets to the neutrino mass ordering through Bayesian hierarchical modeling

    Directory of Open Access Journals (Sweden)

    Martina Gerbino

    2017-12-01

    Full Text Available We present a novel approach to derive constraints on neutrino masses, as well as on other cosmological parameters, from cosmological data, while taking into account our ignorance of the neutrino mass ordering. We derive constraints from a combination of current as well as future cosmological datasets on the total neutrino mass Mν and on the mass fractions fν,i = mi/Mν (where the index i = 1, 2, 3 indicates the three mass eigenstates) carried by each of the mass eigenstates mi, after marginalizing over the (unknown) neutrino mass ordering, either normal ordering (NH) or inverted ordering (IH). The bounds on all the cosmological parameters, including those on the total neutrino mass, therefore take into account the uncertainty related to our ignorance of the mass hierarchy that is actually realized in nature. This novel approach is carried out in the framework of Bayesian analysis of a typical hierarchical problem, where the distribution of the parameters of the model depends on further parameters, the hyperparameters. In this context, the choice of the neutrino mass ordering is modeled via the discrete hyperparameter htype, which we introduce in the usual Markov chain analysis. The preference from cosmological data for either the NH or the IH scenario is then simply encoded in the posterior distribution of the hyperparameter itself. Current cosmic microwave background (CMB) measurements assign equal odds to the two hierarchies, and are thus unable to distinguish between them. However, after the addition of baryon acoustic oscillation (BAO) measurements, a weak preference for the normal hierarchical scenario appears, with odds of 4:3 from Planck temperature and large-scale polarization in combination with BAO (3:2 if small-scale polarization is also included). Concerning next-generation cosmological experiments, forecasts suggest that the combination of upcoming CMB (COrE) and BAO surveys (DESI) may determine the neutrino mass hierarchy at a high
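
    The discrete-hyperparameter idea can be illustrated with a toy Metropolis sampler in which an ordering indicator h is proposed alongside the continuous mass parameter and the posterior odds are read off the chain; the likelihood and mass bounds below are invented stand-ins, not the paper's cosmological likelihood:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_like(m_nu, h):
        # Stand-in likelihood: data mildly prefer small total mass; the two
        # orderings differ only in the minimum mass they allow (rough values).
        m_min = 0.06 if h == 0 else 0.10   # eV; h=0 plays the role of NH, h=1 of IH
        return -np.inf if m_nu < m_min else -0.5 * (m_nu / 0.08) ** 2

    m, h = 0.2, 0
    orderings = []
    for _ in range(20000):
        m_new = abs(m + rng.normal(0.0, 0.02))  # propose a total mass
        h_new = rng.integers(0, 2)              # propose an ordering (equal prior odds)
        if np.log(rng.random()) < log_like(m_new, h_new) - log_like(m, h):
            m, h = m_new, h_new                 # Metropolis accept
        orderings.append(h)

    p_nh = 1.0 - np.mean(orderings)             # posterior probability of h=NH
    print(f"posterior odds NH:IH ~ {p_nh / (1 - p_nh):.2f}:1")
    ```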

  15. Next generation offline approaches to trace organic compound speciation: Approaching comprehensive speciation with soft ionization and very high resolution tandem mass spectrometry

    Science.gov (United States)

    Khare, P.; Marcotte, A.; Sheu, R.; Ditto, J.; Gentner, D. R.

    2017-12-01

    Intermediate- and semi-volatile organic compounds (IVOCs and SVOCs) have high secondary organic aerosol (SOA) yields, as well as significant ozone formation potentials. Yet, their emission sources and oxidation pathways remain largely understudied due to limitations in current analytical capabilities. Online mass spectrometers are able to collect real time data but their limited mass resolving power renders molecular level characterization of IVOCs and SVOCs from the unresolved complex mixture unfeasible. With proper sampling techniques and powerful analytical instrumentation, our offline tandem mass spectrometry (i.e. MS×MS) techniques provide molecular-level and structural identification over wide polarity and volatility ranges. We have designed a novel analytical system for offline analysis of gas-phase SOA precursors collected on custom-made multi-bed adsorbent tubes. Samples are desorbed into helium via a gradual temperature ramp and sample flow is split equally for direct-MS×MS analysis and separation via gas chromatography (GC). The effluent from GC separation is split again for analysis via atmospheric pressure chemical ionization quadrupole time-of-flight mass spectrometry (APCI-Q×TOF) and traditional electron ionization mass spectrometry (EI-MS). The compounds for direct-MS×MS analysis are delivered via a transfer line maintained at 70 °C directly to APCI-Q×TOF, thus preserving the molecular integrity of thermally-labile, or other highly-reactive, organic compounds. Both our GC-MS×MS and direct-MS×MS analyses report high accuracy parent ion masses as well as information on molecular structure via MS×MS, which together increase the resolution of unidentified complex mixtures. We demonstrate instrument performance and present preliminary results from urban atmospheric samples collected from New York City with a wide range of compounds including highly-functionalized organic compounds previously understudied in outdoor air. Our work offers new

  16. Payload Mass Identification of a Single-Link Flexible Arm Moving under Gravity: An Algebraic Identification Approach

    Directory of Open Access Journals (Sweden)

    Juan Carlos Cambera

    2015-01-01

    Full Text Available We deal with the online identification of the payload mass carried by a single-link flexible arm that moves on a vertical plane and therefore is affected by the gravity force. Specifically, we follow a frequency domain design methodology to develop an algebraic identifier. This identifier is capable of achieving robust and efficient mass estimates even in the presence of sensor noise. In order to highlight its performance, the proposed estimator is experimentally tested and compared with other classical methods in several situations that resemble the most typical operation of a manipulator.

  17. The use of difference spectra with a filtered rolling average background in mobile gamma spectrometry measurements

    International Nuclear Information System (INIS)

    Cresswell, A.J.; Sanderson, D.C.W.

    2009-01-01

    The use of difference spectra, with filtering of a rolling average background, as a variation of the more common rainbow plots to aid in the visual identification of radiation anomalies in mobile gamma spectrometry systems is presented. This method requires minimal assumptions about the radiation environment, and is not computationally intensive. Some case studies are presented to illustrate the method. It is shown that difference spectra produced in this manner can improve the signal-to-background ratio, estimate shielding or mass depth using scattered spectral components, and locate point sources. This approach could be a useful addition to the methods available for locating point sources and mapping dispersed activity in real time. Further possible developments of the procedure, utilising more intelligent filters and spatial averaging of the background, are identified.
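
    A minimal sketch of the difference-spectrum idea, assuming a plain (unfiltered) rolling mean as the background estimate and synthetic Poisson spectra; the paper's filtered background would replace the simple mean:

    ```python
    import numpy as np

    def difference_spectra(spectra, window=30):
        """spectra: (n_records, n_channels) counts; returns background-subtracted."""
        out = np.zeros_like(spectra, dtype=float)
        for i in range(len(spectra)):
            lo = max(0, i - window)
            background = spectra[lo:i].mean(axis=0) if i > lo else 0.0
            out[i] = spectra[i] - background   # difference spectrum for record i
        return out

    rng = np.random.default_rng(1)
    demo = rng.poisson(50, size=(100, 512)).astype(float)
    demo[60, 200:210] += 400                   # injected anomaly (a "point source" pass)
    diff = difference_spectra(demo)
    print(int(diff[60, 200:210].sum()))        # anomaly stands out above ~0 background
    ```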

  18. Large-scale retrospective evaluation of regulated liquid chromatography-mass spectrometry bioanalysis projects using different total error approaches.

    Science.gov (United States)

    Tan, Aimin; Saffaj, Taoufiq; Musuku, Adrien; Awaiye, Kayode; Ihssane, Bouchaib; Jhilal, Fayçal; Sosse, Saad Alaoui; Trabelsi, Fethi

    2015-03-01

    The current approach in regulated LC-MS bioanalysis, which evaluates the precision and trueness of an assay separately, has long been criticized for inadequate balancing of lab-customer risks. Accordingly, different total error approaches have been proposed. The aims of this research were to evaluate the aforementioned risks in reality and the difference among four common total error approaches (β-expectation, β-content, uncertainty, and risk profile) through retrospective analysis of regulated LC-MS projects. Twenty-eight projects (14 validations and 14 productions) were randomly selected from two GLP bioanalytical laboratories, which represent a wide variety of assays. The results show that the risk of accepting unacceptable batches did exist with the current approach (9% and 4% of the evaluated QC levels failed for validation and production, respectively). The fact that the risk was not wide-spread was only because the precision and bias of modern LC-MS assays are usually much better than the minimum regulatory requirements. Despite minor differences in magnitude, very similar accuracy profiles and/or conclusions were obtained from the four different total error approaches. High correlation was even observed in the width of bias intervals. For example, the mean width of SFSTP's β-expectation is 1.10-fold (CV=7.6%) of that of Saffaj-Ihssane's uncertainty approach, while the latter is 1.13-fold (CV=6.0%) of that of Hoffman-Kringle's β-content approach. To conclude, the risk of accepting unacceptable batches was real with the current approach, suggesting that total error approaches should be used instead. Moreover, any of the four total error approaches may be used because of their overall similarity. Lastly, the difficulties/obstacles associated with the application of total error approaches in routine analysis and their desirable future improvements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
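
    For concreteness, the normal-theory β-expectation tolerance interval that underlies SFSTP-style accuracy profiles can be sketched as follows; the QC recovery values are invented, and real applications compute the interval per concentration level:

    ```python
    import numpy as np
    from scipy import stats

    def beta_expectation_interval(x, beta=0.80):
        """Interval expected to contain a proportion `beta` of future results."""
        x = np.asarray(x, dtype=float)
        n, mean, s = len(x), x.mean(), x.std(ddof=1)
        k = stats.t.ppf((1 + beta) / 2, n - 1) * np.sqrt(1 + 1 / n)
        return mean - k * s, mean + k * s

    recoveries = [98.2, 101.5, 99.7, 103.1, 97.8, 100.4]   # % of nominal (invented)
    lo, hi = beta_expectation_interval(recoveries)
    print(f"80% beta-expectation interval: [{lo:.1f}%, {hi:.1f}%]")
    # Accept the method if the interval lies within, e.g., +/-15% acceptance limits.
    ```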

  19. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  20. Modelling the surface mass balance of the Greenland ice sheet and neighbouring ice caps : A dynamical and statistical downscaling approach

    NARCIS (Netherlands)

    Noël, B.P.Y.

    2018-01-01

    The Greenland ice sheet (GrIS) is the world’s second largest ice mass, storing about one tenth of the Earth’s freshwater. If totally melted, global sea level would rise by 7.4 m, affecting low-lying regions worldwide. Since the mid-1990s, increased atmospheric and oceanic temperatures have

  1. Heat and mass exchange within the soil - plant canopy-atmosphere system: a theoretical approach and its validation

    NARCIS (Netherlands)

    El-Kilani, R.M.M.

    1997-01-01

    Heat, mass and momentum transfer between the canopy air layer and the layer of air above has a very intermittent nature. This intermittent nature is due to the passage at the canopy top of coherent structures which have a length scale at least as large as the canopy height. The periodic

  2. Desorption atmospheric pressure photoionization high-resolution mass spectrometry: a complementary approach for the chemical analysis of atmospheric aerosols

    Czech Academy of Sciences Publication Activity Database

    Parshintsev, J.; Vaikkinen, A.; Lipponen, K.; Vrkoslav, Vladimír; Cvačka, Josef; Kostiainen, R.; Kotiaho, T.; Hartonen, K.; Riekkola, M. L.; Kauppila, T. J.

    2015-01-01

    Vol. 29, No. 13 (2015), p. 1233-1241 ISSN 0951-4198 Grant - others:GA AV ČR(CZ) M200551204 Institutional support: RVO:61388963 Keywords: atmospheric aerosols * mass spectrometry * ambient ionization Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 2.226, year: 2015

  3. A New Approach to Determine the Density of Liquids and Solids without Measuring Mass and Volume: Introducing the "Solidensimeter"

    Science.gov (United States)

    Kiriktas, Halit; Sahin, Mehmet; Eslek, Sinan; Kiriktas, Irem

    2018-01-01

    This study aims to design a mechanism with which the density of any solid or liquid can be determined without measuring its mass and volume in order to help students comprehend the concept of density more easily. The "solidensimeter" comprises two scaled and nested glass containers (graduated cylinder or beaker) and sufficient water.…

  4. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
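
    A minimal sketch of the ARL estimation loop, here with independent exponential observations; the copula-dependent case in the paper would replace the sampler, and the smoothing constant and control limit below are illustrative, not the paper's settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def run_length(lam=0.1, h=1.4, mu0=1.0, n_max=100_000):
        """Steps until the EWMA statistic first exceeds the control limit h."""
        z = mu0                              # start the EWMA at the in-control mean
        for t in range(1, n_max + 1):
            x = rng.exponential(mu0)         # replace with copula-generated data
            z = lam * x + (1 - lam) * z      # EWMA recursion
            if z > h:                        # upper control limit only, for brevity
                return t
        return n_max

    arl = np.mean([run_length() for _ in range(2000)])
    print(f"estimated in-control ARL: {arl:.0f}")
    ```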

  5. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure

  6. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~ 1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  7. Construction of average adult Japanese voxel phantoms for dose assessment

    International Nuclear Information System (INIS)

    Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira

    2011-12-01

    The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasians in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in the radiation protection field. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which anatomical characteristics such as body size, organ mass and posture of subjects influence the organ doses in dose assessment for medical treatments and radiation accidents. Therefore, it is necessary to use human phantoms with the average anatomical characteristics of Japanese. The authors constructed averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms. The phantoms were modified in the following three aspects: (1) the heights and weights were brought into agreement with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues that were newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms of adult Japanese. (author)

  8. Identification of urinary biomarkers of exposure to di-(2-propylheptyl) phthalate using high-resolution mass spectrometry and two data-screening approaches.

    Science.gov (United States)

    Shih, Chia-Lung; Liao, Pao-Mei; Hsu, Jen-Yi; Chung, Yi-Ning; Zgoda, Victor G; Liao, Pao-Chi

    2018-02-01

    Di-(2-propylheptyl) phthalate (DPHP) is a plasticizer used in polyvinyl chloride and vinyl chloride copolymer that has been suggested to be a toxicant in rats and may affect human health. Because the use of DPHP is increasing, the general German population is being exposed to DPHP. Toxicant metabolism is important for human toxicant exposure assessments. To date, knowledge regarding DPHP metabolism has been limited, and only four metabolites have been identified in human urine. Ultra-performance liquid chromatography was coupled with Orbitrap high-resolution mass spectrometry (MS) and two data-screening approaches, the signal mining algorithm with isotope tracing (SMAIT) and the mass defect filter (MDF), for DPHP metabolite candidate discovery. In total, 13 and 104 metabolite candidates were identified by the two approaches, respectively, in in vitro DPHP incubation samples. Of these candidates, 17 were validated as tentative exposure biomarkers using a rat model, 13 of which have not been reported in the literature. The two approaches generated rather different tentative DPHP exposure biomarkers, indicating that they are complementary for discovering exposure biomarkers. Compared with the four previously reported DPHP metabolites, the three tentative novel biomarkers had higher peak intensity ratios, and two were confirmed as DPHP hydroxyl metabolites based on their MS/MS product ion profiles. These three tentative novel biomarkers should be further investigated for potential application in human exposure assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
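
    A minimal sketch of the MDF idea: features are kept only if their fractional (defect) mass lies near that of the parent compound, the rationale being that metabolic transformations shift the defect only slightly. The parent mass, window and feature list below are illustrative assumptions, not the paper's settings:

    ```python
    PARENT_MASS = 446.3396      # assumed monoisotopic mass for DPHP (C28H46O4)
    DEFECT_WINDOW = 0.05        # +/- Da window around the parent's mass defect

    def mass_defect(m):
        return m - int(m)       # fractional part of the monoisotopic mass

    def mdf(features):
        target = mass_defect(PARENT_MASS)
        return [m for m in features
                if abs(mass_defect(m) - target) <= DEFECT_WINDOW]

    candidates = [462.3345, 478.3294, 301.1402, 446.1002]  # invented feature masses
    print(mdf(candidates))      # oxidation-like masses pass; unrelated ions do not
    ```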

  9. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
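
    As background for the model-averaging step, a common sketch uses Akaike weights and the Burnham-Anderson "unconditional" standard error for a derived parameter; the paper's contribution is to replace such approximations with asymptotically correct simultaneous inference. All numbers below are invented:

    ```python
    import numpy as np

    est = np.array([2.10, 2.45, 2.30])   # derived-parameter estimate per model
    se = np.array([0.20, 0.25, 0.22])    # its standard error per model
    aic = np.array([102.3, 103.1, 104.8])

    delta = aic - aic.min()
    w = np.exp(-delta / 2)
    w /= w.sum()                         # Akaike weights

    theta_bar = np.sum(w * est)          # model-averaged estimate
    # "Unconditional" SE: adds between-model spread to within-model variance.
    se_bar = np.sum(w * np.sqrt(se**2 + (est - theta_bar)**2))
    print(f"model-averaged estimate: {theta_bar:.2f} +/- {se_bar:.2f}")
    ```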

  10. Current Direct Neutrino Mass Experiments

    Directory of Open Access Journals (Sweden)

    G. Drexlin

    2013-01-01

    Full Text Available In this contribution, we review the status and perspectives of direct neutrino mass experiments, which investigate the kinematics of β-decays of specific isotopes (3H, 187Re, 163Ho) to derive model-independent information on the averaged electron (anti)neutrino mass. After discussing the kinematics of β-decay and the determination of the neutrino mass, we give a brief overview of past neutrino mass measurements (SN1987a ToF studies, Mainz and Troitsk experiments for 3H, cryobolometers for 187Re). We then describe the Karlsruhe Tritium Neutrino (KATRIN) experiment, currently under construction at Karlsruhe Institute of Technology, which will use the MAC-E-Filter principle to push the sensitivity down to a value of 200 meV (90% C.L.). To do so, many technological challenges have to be solved related to source intensity and stability, as well as precision energy analysis and low background rate close to the kinematic endpoint of tritium β-decay at 18.6 keV. We then review new approaches such as the MARE, ECHO, and Project8 experiments, which offer the promise to perform an independent measurement of the neutrino mass in the sub-eV region. Altogether, the novel methods developed in direct neutrino mass experiments will provide vital information on the absolute mass scale of neutrinos.

  11. Sequential injection approach for simultaneous determination of ultratrace plutonium and neptunium in urine with accelerator mass spectrometry

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2013-01-01

    An analytical method was developed for simultaneous determination of ultratrace level plutonium (Pu) and neptunium (Np) using iron hydroxide coprecipitation in combination with automated sequential injection extraction chromatography separation and accelerator mass spectrometry (AMS) measurement. The results show that preboiling and aging are important for obtaining high chemical yields for both Pu and Np, which is possibly related to the aggregation and adsorption behavior of organic substances contained in urine. Although the optimal condition for Np and Pu simultaneous determination requires 5-day aging

  12. Desorption atmospheric pressure photoionization high-resolution mass spectrometry: a complementary approach for the chemical analysis of atmospheric aerosols.

    Science.gov (United States)

    Parshintsev, Jevgeni; Vaikkinen, Anu; Lipponen, Katriina; Vrkoslav, Vladimir; Cvačka, Josef; Kostiainen, Risto; Kotiaho, Tapio; Hartonen, Kari; Riekkola, Marja-Liisa; Kauppila, Tiina J

    2015-07-15

    On-line chemical characterization methods of atmospheric aerosols are essential to increase our understanding of physicochemical processes in the atmosphere, and to study biosphere-atmosphere interactions. Several techniques, including aerosol mass spectrometry, are nowadays available, but they all suffer from some disadvantages. In this research, desorption atmospheric pressure photoionization high-resolution (Orbitrap) mass spectrometry (DAPPI-HRMS) is introduced as a complementary technique for the fast analysis of aerosol chemical composition without the need for sample preparation. Atmospheric aerosols from city air were collected on a filter, desorbed in a DAPPI source with a hot stream of toluene and nitrogen, and ionized using a vacuum ultraviolet lamp at atmospheric pressure. To study the applicability of the technique for ambient aerosol analysis, several samples were collected onto filters and analyzed, with the focus being on selected organic acids. To compare the DAPPI-HRMS data with results obtained by an established method, each filter sample was divided into two equal parts, and the second half of the filter was extracted and analyzed by liquid chromatography/mass spectrometry (LC/MS). The DAPPI results agreed with the measured aerosol particle number. In addition to the targeted acids, the LC/MS and DAPPI-HRMS methods were found to detect different compounds, thus providing complementary information about the aerosol samples. DAPPI-HRMS showed several important oxidation products of terpenes, and numerous compounds were tentatively identified. Thanks to the soft ionization, high mass resolution, fast analysis, simplicity and on-line applicability, the proposed methodology has high potential in the field of atmospheric research. Copyright © 2015 John Wiley & Sons, Ltd.

  13. New approach to the determination of phosphorothioate oligonucleotides by ultra high performance liquid chromatography coupled with inductively coupled plasma mass spectrometry.

    Science.gov (United States)

    Studzińska, Sylwia; Mounicou, Sandra; Szpunar, Joanna; Łobiński, Ryszard; Buszewski, Bogusław

    2015-01-15

    This text presents a novel method for the separation and detection of phosphorothioate oligonucleotides with the use of ion pair ultra high performance liquid chromatography coupled with inductively coupled plasma mass spectrometry (ICP-MS). The research showed that hexafluoroisopropanol (HFIP)/triethylamine (TEA) based mobile phases may be successfully used when liquid chromatography is coupled with such elemental detection. However, the concentration of both HFIP and TEA influences the final result. The lower the concentration of HFIP, the lower the background in ICP-MS and the greater the sensitivity. The method applied for the analysis of serum samples was based on high resolution inductively coupled plasma mass spectrometry. Utilization of this method allows determination of fifty-fold lower quantities of phosphorothioate oligonucleotides than with a quadrupole mass analyzer. Monitoring of (31)P may be used to quantify these compounds at the level of 80 μg L(-1), while simultaneous determination of sulfur is very useful for qualitative analysis. Moreover, the results presented in this paper demonstrate the practical applicability of coupling LC with ICP-MS in determining phosphorothioate oligonucleotides and their metabolites in serum within 7 min with very good sensitivity. The method was linear in the concentration range between 0.2 and 3 mg L(-1). The limit of detection was in the range of 0.07 to 0.13 mg L(-1). Accuracy varied with concentration, but was in the range of 3%. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. A new approach to determine the density of liquids and solids without measuring mass and volume: introducing the solidensimeter

    Science.gov (United States)

    Kiriktaş, Halit; Şahin, Mehmet; Eslek, Sinan; Kiriktaş, İrem

    2018-05-01

    This study aims to design a mechanism with which the density of any solid or liquid can be determined without measuring its mass and volume in order to help students comprehend the concept of density more easily. The solidensimeter comprises two scaled and nested glass containers (graduated cylinder or beaker) and sufficient water. In this method, the density measurement was made using the Archimedes' principle stating that an object fully submerged in a liquid displaces the same amount of liquid as its volume, while an object partially submerged or floating displaces the same amount of liquid as its mass. Using this method, the density of any solid or liquid can be determined using a simple mathematical ratio. At the end of the process a mechanism that helps students to comprehend the density topic more easily was designed. The system is easy-to-design, uses low-cost equipment and enables one to determine the density of any solid or liquid without measuring its mass and volume.
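
    The ratio involved can be written out explicitly: if the floating object displaces volume V_float (weighing as much as the object) and, fully submerged, displaces V_sub (its own volume), Archimedes' principle gives:

    ```latex
    m = \rho_{\mathrm{w}} V_{\mathrm{float}}, \quad V = V_{\mathrm{sub}}
    \;\Rightarrow\;
    \rho = \frac{m}{V} = \rho_{\mathrm{w}}\,\frac{V_{\mathrm{float}}}{V_{\mathrm{sub}}}
    ```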

  15. A semi-automated approach to derive elevation time-series and calculate glacier mass balance from historical aerial imagery

    Science.gov (United States)

    Whorton, E.; Headman, A.; Shean, D. E.; McCann, E.

    2017-12-01

    Understanding the implications of glacier recession on water resources in the western U.S. requires quantifying glacier mass change across large regions over several decades. Very few glaciers in North America have long-term continuous field measurements of glacier mass balance. However, systematic aerial photography campaigns began in 1957 on many glaciers in the western U.S. and Alaska. These historical, vertical aerial stereo-photographs documenting glacier evolution have recently become publicly available. Digital elevation models (DEM) of the transient glacier surface preserved in each imagery timestamp can be derived, then differenced to calculate glacier volume and mass change to improve regional geodetic solutions of glacier mass balance. In order to batch process these data, we use Python-based algorithms and Agisoft Photoscan structure from motion (SfM) photogrammetry software to semi-automate DEM creation, and orthorectify and co-register historical aerial imagery in a high-performance computing environment. Scanned photographs are rotated to reduce scaling issues, cropped to the same size to remove fiducials, and batch histogram equalization is applied to improve image quality and aid pixel-matching algorithms using the Python library OpenCV. Processed photographs are then passed to Photoscan through the Photoscan Python library to create DEMs and orthoimagery. To extend the period of record, the elevation products are co-registered to each other, airborne LiDAR data, and DEMs derived from sub-meter commercial satellite imagery. With the exception of the placement of ground control points, the process is entirely automated with Python. Current research is focused on: (1) applying these algorithms to create geodetic mass balance time series for the 90 photographed glaciers in Washington State and (2) evaluating the minimal amount of positional information required in Photoscan to prevent distortion effects that cannot be addressed during co
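
    A minimal sketch of the batch preprocessing step described above (the file layout, rotation direction and crop size are assumptions; the real pipeline also handles fiducial placement and hands results to the Photoscan Python API):

    ```python
    import glob
    import cv2

    CROP = 9000  # square crop size in pixels (assumed; scans must be larger)

    for path in glob.glob("scans/*.tif"):  # hypothetical input directory
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)   # align frame orientation
        h, w = img.shape
        y0, x0 = (h - CROP) // 2, (w - CROP) // 2
        img = img[y0:y0 + CROP, x0:x0 + CROP]            # central crop drops fiducials
        img = cv2.equalizeHist(img)                      # aid pixel-matching algorithms
        # output directory ("preprocessed/") is assumed to exist
        cv2.imwrite(path.replace("scans/", "preprocessed/"), img)
    ```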

  16. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  17. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  18. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  19. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  20. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  1. Discovery of safety biomarkers for atorvastatin in rat urine using mass spectrometry based metabolomics combined with global and targeted approach

    International Nuclear Information System (INIS)

    Kumar, Bhowmik Salil; Lee, Young-Joo; Yi, Hong Jae; Chung, Bong Chul; Jung, Byung Hwa

    2010-01-01

    In order to develop a safety biomarker for atorvastatin, this drug was orally administered to hyperlipidemic rats, and a metabolomic study was performed. Atorvastatin was given in doses of either 70 mg kg⁻¹ day⁻¹ or 250 mg kg⁻¹ day⁻¹ for a period of 7 days (n = 4 for each group). To evaluate any abnormal effects of the drug, physiological and plasma biochemical parameters were measured and histopathological tests were carried out. Safety biomarkers were derived by comparing these parameters and using both global and targeted metabolic profiling. Global metabolic profiling was performed using liquid chromatography/time of flight/mass spectrometry (LC/TOF/MS) with multivariate data analysis. Several safety biomarker candidates that included various steroids and amino acids were discovered as a result of global metabolic profiling, and they were also confirmed by targeted metabolic profiling using gas chromatography/mass spectrometry (GC/MS) and capillary electrophoresis/mass spectrometry (CE/MS). Serum biochemical and histopathological tests were used to detect abnormal drug reactions in the liver after repeated oral administration of atorvastatin. The metabolic differences between control and the drug-treated groups were compared using PLS-DA score plots. These results were compared with the physiological and plasma biochemical parameters and the results of a histopathological test. Estrone, cortisone, proline, cystine, 3-ureidopropionic acid and histidine were proposed as potential safety biomarkers related with the liver toxicity of atorvastatin. These results indicate that the combined application of global and targeted metabolic profiling could be a useful tool for the discovery of drug safety biomarkers.

  2. Discovery of safety biomarkers for atorvastatin in rat urine using mass spectrometry based metabolomics combined with global and targeted approach

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Bhowmik Salil [Bioanalysis and Biotransformation Research Center, Korea Institute of Science and Technology, P.O. Box 131, Cheongryang, Seoul 130-650 (Korea, Republic of); University of Science and Technology, (305-333) 113 Gwahangno, Yuseong-gu, Daejeon (Korea, Republic of); Lee, Young-Joo; Yi, Hong Jae [College of Pharmacy, Kyung Hee University, Hoegi-dong, Dongdaemun-gu, Seoul 130-791 (Korea, Republic of); Chung, Bong Chul [Bioanalysis and Biotransformation Research Center, Korea Institute of Science and Technology, P.O. Box 131, Cheongryang, Seoul 130-650 (Korea, Republic of); Jung, Byung Hwa, E-mail: jbhluck@kist.re.kr [Bioanalysis and Biotransformation Research Center, Korea Institute of Science and Technology, P.O. Box 131, Cheongryang, Seoul 130-650 (Korea, Republic of); University of Science and Technology, (305-333) 113 Gwahangno, Yuseong-gu, Daejeon (Korea, Republic of)

    2010-02-19

    In order to develop a safety biomarker for atorvastatin, this drug was orally administered to hyperlipidemic rats, and a metabolomic study was performed. Atorvastatin was given in doses of either 70 mg kg⁻¹ day⁻¹ or 250 mg kg⁻¹ day⁻¹ for a period of 7 days (n = 4 for each group). To evaluate any abnormal effects of the drug, physiological and plasma biochemical parameters were measured and histopathological tests were carried out. Safety biomarkers were derived by comparing these parameters and using both global and targeted metabolic profiling. Global metabolic profiling was performed using liquid chromatography/time of flight/mass spectrometry (LC/TOF/MS) with multivariate data analysis. Several safety biomarker candidates that included various steroids and amino acids were discovered as a result of global metabolic profiling, and they were also confirmed by targeted metabolic profiling using gas chromatography/mass spectrometry (GC/MS) and capillary electrophoresis/mass spectrometry (CE/MS). Serum biochemical and histopathological tests were used to detect abnormal drug reactions in the liver after repeated oral administration of atorvastatin. The metabolic differences between control and the drug-treated groups were compared using PLS-DA score plots. These results were compared with the physiological and plasma biochemical parameters and the results of a histopathological test. Estrone, cortisone, proline, cystine, 3-ureidopropionic acid and histidine were proposed as potential safety biomarkers related with the liver toxicity of atorvastatin. These results indicate that the combined application of global and targeted metabolic profiling could be a useful tool for the discovery of drug safety biomarkers.

  3. Mass carbon monoxide poisoning at an ice-hockey game: initial approach and long-term follow-up.

    Science.gov (United States)

    Mortelmans, Luc J M; Populaire, Jacques; Desruelles, Didier; Sabbe, Marc B

    2013-12-01

    A mass carbon monoxide (CO) intoxication during an ice-hockey game is described. Two hundred and thirty-five patients were seen in different hospitals, 88 of them the same night at the nearby emergency department. To evaluate long-term implications and to identify relevant indicators, a follow-up study was organized 1 year after the incident. Apart from the file data from the emergency departments, a 1-year follow-up mailing was sent to all patients. One hundred and ninety-one patients returned their questionnaire (86%). The mean age of the patients was 28 years, with 61% men. The mean carboxyhaemoglobin (COHb) level was 9.9%. COHb levels were significantly higher for individuals on the ice (referee, players and maintenance personnel). There was a significant relationship between the COHb level and the initial presence of dizziness and fatigue. Headache, abdominal pain, nausea and vomiting were not significantly related to the COHb levels. The relationship between symptoms and CO level, however, should be interpreted with caution, as there was a wide range of delays between exposure and blood tests. 5.2% of patients had residual complaints, all including headache, with a significantly higher incidence at high COHb levels. Only two patients had an abnormal neurological control (one slightly disturbed electroencephalography and one persistent encephalopathic complaint). Work incapacity was also significantly related to COHb levels. CO mass poisonings remain a risk in indoor sporting events. Although such an event causes an acute mass casualty incident, it is limited in time and delayed problems are scarce. Symptomatology is a poor tool for triage. The best prevention is the use of non-mineral energy sources, such as electricity.

  4. A new approach to develop computer-aided diagnosis scheme of breast mass classification using deep learning technology.

    Science.gov (United States)

    Qiu, Yuchen; Yan, Shiju; Gundreddy, Rohith Reddy; Wang, Yunzhi; Cheng, Samuel; Liu, Hong; Zheng, Bin

    2017-01-01

    To develop and test a deep learning based computer-aided diagnosis (CAD) scheme of mammograms for classifying between malignant and benign masses. An image dataset involving 560 regions of interest (ROIs) extracted from digital mammograms was used. After down-sampling each ROI from 512×512 to 64×64 pixel size, we applied an 8-layer deep learning network that involves 3 pairs of convolution-max-pooling layers for automatic feature extraction and a multiple layer perceptron (MLP) classifier for feature categorization to process ROIs. The 3 pairs of convolution layers contain 20, 10, and 5 feature maps, respectively. Each convolution layer is connected with a max-pooling layer to improve the feature robustness. The output of the sixth layer is fully connected with an MLP classifier, which is composed of one hidden layer and one logistic regression layer. The network then generates a classification score to predict the likelihood of the ROI depicting a malignant mass. A four-fold cross validation method was applied to train and test this deep learning network. The results revealed that this CAD scheme yields an area under the receiver operating characteristic curve (AUC) of 0.696±0.044, 0.802±0.037, 0.836±0.036, and 0.822±0.035 for fold 1 to 4 testing datasets, respectively. The overall AUC of the entire dataset is 0.790±0.019. This study demonstrates the feasibility of applying a deep learning based CAD scheme to classify between malignant and benign breast masses without a lesion segmentation, image feature computation and selection process.
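
    A minimal sketch of the described network in PyTorch (the abstract does not give kernel sizes or the hidden-layer width, so the 5×5 kernels and 64 hidden units below are assumptions):

    ```python
    import torch
    import torch.nn as nn

    class MassCNN(nn.Module):
        def __init__(self):
            super().__init__()
            # Three convolution-max-pooling pairs with 20, 10, 5 feature maps.
            self.features = nn.Sequential(
                nn.Conv2d(1, 20, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
                nn.Conv2d(20, 10, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2), # 32 -> 16
                nn.Conv2d(10, 5, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            )
            # MLP classifier: one hidden layer plus a logistic output layer.
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(5 * 8 * 8, 64), nn.ReLU(),
                nn.Linear(64, 1), nn.Sigmoid(),  # malignancy score in [0, 1]
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    roi = torch.randn(4, 1, 64, 64)   # a batch of down-sampled ROIs
    print(MassCNN()(roi).shape)       # torch.Size([4, 1])
    ```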

  6. Development of a new certified reference material of diosgenin using mass balance approach and Coulometric titration method.

    Science.gov (United States)

    Gong, Ningbo; Zhang, Baoxi; Hu, Fan; Du, Hui; Du, Guanhua; Gao, Zhaolin; Lu, Yang

    2014-12-01

    Certified reference materials (CRMs) can be used as a valuable tool to validate the trueness of measurement methods and to establish metrological traceability of analytical results. Diosgenin has been selected as a candidate reference material. Characterization of the material relied on two different methods, mass balance method and Coulometric titration method (CT). The certified value of diosgenin CRM is 99.80% with an expanded uncertainty of 0.37% (k=2). The new CRM of diosgenin can be used to validate analytical methods, improve the accuracy of measurement data and control the quality of diosgenin in relevant pharmaceutical formulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. MOOC Design – Dissemination to the Masses or Facilitation of Social Learning and a Deep Approach to Learning?

    DEFF Research Database (Denmark)

    Christensen, Inger-Marie F.; Dam Laursen, Mette; Bøggild, Jacob

    2016-01-01

    This article accounts for the design of the massive open online course (MOOC) Hans Christian Andersen's Fairy Tales on FutureLearn and reports on the effectiveness of this design in terms of engaging learners in social learning and encouraging a deep approach to learning. A learning pathway...... and increased educator feedback. Course data show that some learners use the space provided for social interaction and mutual support. A learning pathway that engages learners in discussion and progression from week to week facilitates a deep approach to learning. However, this requires more support from...

  8. A robust combination approach for short-term wind speed forecasting and analysis – Combination of the ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM) forecasts using a GPR (Gaussian Process Regression) model

    International Nuclear Information System (INIS)

    Wang, Jianzhou; Hu, Jianming

    2015-01-01

    With the increasing importance of wind power as a component of power systems, the problems induced by the stochastic and intermittent nature of wind speed have compelled system operators and researchers to search for more reliable techniques to forecast wind speed. This paper proposes a combination model for probabilistic short-term wind speed forecasting. In this proposed hybrid approach, EWT (Empirical Wavelet Transform) is employed to extract meaningful information from a wind speed series by designing an appropriate wavelet filter bank. The GPR (Gaussian Process Regression) model is utilized to combine independent forecasts generated by various forecasting engines (ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM)) in a nonlinear way rather than the commonly used linear way. The proposed approach provides more probabilistic information for wind speed predictions besides improving the forecasting accuracy for single-value predictions. The effectiveness of the proposed approach is demonstrated with wind speed data from two wind farms in China. The results indicate that the individual forecasting engines do not consistently forecast short-term wind speed for the two sites, and the proposed combination method can generate a more reliable and accurate forecast. - Highlights: • The proposed approach can make probabilistic modeling for wind speed series. • The proposed approach adapts to the time-varying characteristic of the wind speed. • The hybrid approach can extract the meaningful components from the wind speed series. • The proposed method can generate adaptive, reliable and more accurate forecasting results. • The proposed model combines four independent forecasting engines in a nonlinear way.
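
    As a rough illustration of the combination step, the sketch below fits a Gaussian Process Regression model that maps the four base forecasts to the observed wind speed and returns both a point forecast and its uncertainty. The kernel and the placeholder data are assumptions; the paper's EWT preprocessing and model configuration are not reproduced here.

    ```python
    # Hedged sketch of the combination step: a Gaussian Process Regression
    # model maps the four base forecasts (ARIMA, ELM, SVM, LSSVM) to the
    # observed wind speed, giving a probabilistic combined forecast.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 15, size=(200, 4))               # base-model forecasts
    y_train = X_train.mean(axis=1) + rng.normal(0, 0.3, 200)  # observed speeds

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True)
    gpr.fit(X_train, y_train)

    X_new = rng.uniform(0, 15, size=(5, 4))
    mean, std = gpr.predict(X_new, return_std=True)  # point forecast + spread
    ```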

  9. Approaching the CDF Top Quark Mass Legacy Measurement in the Lepton+Jets channel with the Matrix Element Method

    Energy Technology Data Exchange (ETDEWEB)

    Tosciri, Cecilia [Univ. of Pisa (Italy)

    2016-01-01

    The discovery of the bottom quark in 1977 at the Tevatron Collider triggered the search for its partner in the third fermion isospin doublet, the top quark, which was discovered 18 years later, in 1995, by the CDF and D0 experiments during Tevatron Run I. By 1990, intensive efforts by many groups at several accelerators had raised the lower mass limit to over 90 GeV/c2, so that from then on the Tevatron was the only accelerator with high enough energy to possibly discover this amazingly massive quark. After its discovery, the determination of top quark properties has been one of the main goals of the Fermilab Tevatron Collider and, more recently, also of the Large Hadron Collider (LHC) at CERN. Since the mass value plays an important role in a large number of theoretical calculations of fundamental processes, improving the accuracy of its measurement has always been a goal of utmost importance. The present thesis describes in detail the contributions made by the candidate to the extensive preparation work needed to make the new analysis possible, during her eight-month stay at Fermilab.

  10. Research Resource: A Dual Proteomic Approach Identifies Regulated Islet Proteins During β-Cell Mass Expansion In Vivo

    DEFF Research Database (Denmark)

    Horn, Signe; Kirkegaard, Jeannette S.; Hoelper, Soraya

    2016-01-01

    ...to be upregulated as a response to pregnancy. These included several proteins not previously associated with pregnancy-induced islet expansion, such as CLIC1, STMN1, MCM6, PPIB, NEDD4, and HLTF. Confirming the validity of our approach, we also identified proteins encoded by genes known to be associated...

  11. An integrated approach for estimating global glacio isostatic adjustment, land ice, hydrology and ocean mass trends within a complete coupled Earth system framework

    Science.gov (United States)

    Schumacher, M.; Bamber, J. L.; Martin, A.

    2016-12-01

    Future sea level rise (SLR) is one of the most serious consequences of climate change. Therefore, understanding the drivers of past sea level change is crucial for improving predictions. SLR integrates many Earth system components, including oceans, land ice and terrestrial water storage, as well as solid Earth effects. Traditionally, each component has been tackled separately, which has often led to inconsistencies between discipline-specific estimates of each part of the sea level budget. To address these issues, the European Research Council has funded a five-year project aimed at producing a physically-based, data-driven solution for the complete coupled land-ocean-solid Earth system that is consistent with the full suite of observations, prior knowledge and fundamental geophysical constraints. The project is called "GlobalMass" and is based at the University of Bristol. Observed mass movement from the GRACE mission plus vertical land motion from a global network of permanent GPS stations will be utilized in a data-driven approach to estimate glacial isostatic adjustment (GIA) without introducing any assumptions about the Earth structure or ice loading history. A Bayesian Hierarchical Model (BHM) will be used as the framework to combine the satellite and in-situ observations alongside prior information that incorporates the physics of the coupled system, such as conservation of mass and characteristic length scales of different processes in both space and time. The BHM is used to implement a simultaneous solution at a global scale. It will produce a consistent partitioning of the integrated SLR signal into its steric (thermal) and barystatic (mass) components for the satellite era. The latter component is induced by hydrological mass trends and melting of land ice. The BHM was developed and tested on Antarctica, where it has been used to separate surface, ice dynamic and GIA signals simultaneously. We illustrate the approach and concepts with examples from this test case.

  12. Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.

    Science.gov (United States)

    Cambridge Conference on School Mathematics, Newton, MA.

    Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of average, and this concept is utilized throughout the report. In the beginning the average (arithmetic mean) of a set of numbers is considered and two properties of the average which often simplify…

  13. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  14. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  15. Tandem Affinity Purification Approach Coupled to Mass Spectrometry to Identify Post-translational Modifications of Histones Associated with Chromatin-Binding Proteins.

    Science.gov (United States)

    Beyer, Sophie; Robin, Philippe; Ait-Si-Ali, Slimane

    2017-01-01

    Protein purification by tandem affinity purification (TAP)-tag coupled to mass spectrometry analysis is commonly used to reveal protein complex composition. Here we describe a TAP-tag purification of chromatin-bound proteins along with associated nucleosomes, which allows exhaustive identification of protein partners. Moreover, this method allows exhaustive identification of the post-translational modifications (PTMs) of the associated histones. Thus, in addition to partner characterization, this approach reveals the associated epigenetic landscape, which can shed light on the function and properties of the studied chromatin-bound protein.

  16. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in the detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.

  17. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
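
    A minimal sketch of the estimation chain described above, assuming the standard Hargreaves formulation with extraterrestrial radiation Ra expressed as equivalent evaporation (mm/day); the map-level regression modeling is omitted.

    ```python
    # Sketch of the estimation chain: Hargreaves evaporative demand from
    # monthly temperatures and clear-sky radiation, then the simple water
    # balance (precipitation minus demand). Input values are placeholders.
    import numpy as np

    def hargreaves_et0(tmax, tmin, ra):
        """Hargreaves reference evapotranspiration (mm/day)."""
        tmean = (tmax + tmin) / 2.0
        return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

    def monthly_water_balance(precip_mm, tmax, tmin, ra, days_in_month=30):
        """Monthly water balance: precipitation minus evaporative demand."""
        return precip_mm - hargreaves_et0(tmax, tmin, ra) * days_in_month

    monthly_water_balance(precip_mm=80.0, tmax=24.0, tmin=9.0, ra=12.5)
    ```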

  18. Analysis of two-phase flow inter-subchannel mass and momentum exchanges by the two-fluid model approach

    Energy Technology Data Exchange (ETDEWEB)

    Ninokata, H. [Tokyo Institute of Technology (Japan); Deguchi, A. [ENO Mathematical Analysis, Tokyo (Japan); Kawahara, A. [Kumamoto Univ., Kumamoto (Japan)

    1995-09-01

    A new void drift model for the subchannel analysis method is presented for the thermohydraulics calculation of two-phase flows in rod bundles, where the flow model uses a two-fluid formulation for the conservation of mass, momentum and energy. The void drift model is constructed based on experimental data obtained in geometrically simple test sections of two interconnected circular channels, using air-water as the working fluids. The void drift force is assumed to be the origin of the void drift velocity components of the two-phase cross-flow in the gap area between two adjacent rods and to overcome the momentum exchanges at the phase interface and the wall-fluid interface. This void drift force is implemented in the cross-flow momentum equations. Computational results have been successfully compared to the available experimental data, including 3x3 rod bundle data.

  19. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequentially dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
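
    A compact sketch of the three Box-Jenkins stages using statsmodels, with a placeholder series and an assumed ARIMA(1, 1, 1) order; in practice the order is chosen from the identification step.

    ```python
    # Compact sketch of the three Box-Jenkins stages with statsmodels;
    # the series and the ARIMA(1, 1, 1) order are placeholders.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

    gpa = 3.0 + 0.05 * np.cumsum(np.random.randn(120))  # placeholder GPA series

    plot_acf(gpa)    # 1. identification: inspect autocorrelation structure
    plot_pacf(gpa)

    fit = ARIMA(gpa, order=(1, 1, 1)).fit()  # 2. estimation

    print(fit.summary())                     # 3. diagnosis: residual checks
    ```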

  20. Mass movements in the Rio Grande Valley (Quebrada de Humahuaca, Northwestern Argentina): a methodological approach to reduce the risk

    Science.gov (United States)

    Marcato, G.; Pasuto, A.; Rivelli, F. R.

    2009-10-01

    Slope processes such as slides and debris flows are among the main events that affect the Rio Grande's sediment transport capacity. The slides mainly affect the slopes of the Rio Grande river basin, while debris and mud flow phenomena take place in the tributary valleys. In the past decades several mass movements occurred, causing victims and great damage to roads and villages, and therefore hazard assessment and risk mitigation are of paramount importance for a correct development of the area. This is also an urgent need since the Quebrada de Humahuaca was recently included in the UNESCO World Cultural Heritage. The growing tourism business may lead to an uncontrolled urbanization of the valley, with the consequent enlargement of threatened areas. In this framework, mitigation measures have to take into account not only technical aspects related to the physical behaviour of the moving masses but also environmental and sociological factors that could influence the effectiveness of the countermeasures. Mitigation of landslide effects is indeed rather complex because of the large extension of the territory and the particular geological and geomorphological setting. Moreover, the necessity of maintaining the natural condition of the area, as prescribed by UNESCO, makes this task even more difficult. Nowadays no in-depth study of the entire area exists; therefore an integrated and multidisciplinary investigation plan is going to be set up, including geological and geomorphological investigations as well as archaeological and historical surveys. A better understanding of the geomorphological evolution processes of the Quebrada de Humahuaca will bridge the gap between the necessity of preservation and the demand for safety, in keeping with the recommendations of UNESCO.

  2. Sensitive and specific peak detection for SELDI-TOF mass spectrometry using a wavelet/neural-network based approach.

    Directory of Open Access Journals (Sweden)

    Vincent A Emanuele

    Full Text Available The SELDI-TOF mass spectrometer's compact size and automated, high-throughput design have been attractive to clinical researchers, and the platform has seen steady use in biomarker studies. Despite new algorithms and preprocessing pipelines that have been developed to address reproducibility issues, visual inspection of the results of SELDI spectra preprocessing by the best algorithms still shows miscalled peaks and systematic sources of error. This suggests that there continue to be problems with SELDI preprocessing. In this work, we study the preprocessing of SELDI in detail and introduce improvements. While many algorithms, including the vendor-supplied software, can identify peak clusters of specific mass (or m/z) in groups of spectra with high specificity and low false discovery rate (FDR), the algorithms tend to underperform in estimating the exact prevalence and intensity of peaks in those clusters. Thus group differences that at first appear very strong are shown, after careful and laborious hand inspection of the spectra, to be less than significant. Here we introduce a wavelet/neural-network based algorithm which mimics what a team of expert human users would call for peaks in each of several hundred spectra in a typical SELDI clinical study. The wavelet denoising part of the algorithm optimally smoothes the signal in each spectrum according to an improved suite of signal processing algorithms previously reported (the LibSELDI toolbox under development). The neural network part of the algorithm combines those results with the raw signal and a training dataset of expertly called peaks to call peaks in a test set of spectra with approximately 95% accuracy. The new method was applied to data collected from a study of cervical mucus for the early detection of cervical cancer in HPV-infected women. The method shows promise in addressing the ongoing SELDI reproducibility issues.
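
    The sketch below illustrates the two-stage idea only: universal-threshold wavelet denoising of a spectrum followed by a small neural network trained on features of expert-called peaks. It is not the LibSELDI implementation, and the feature set shown is hypothetical.

    ```python
    # Illustration of the two-stage idea (not the LibSELDI code):
    # wavelet denoising of one spectrum, then a small neural network
    # trained on features of expert-called peaks.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def wavelet_denoise(spectrum, wavelet="sym8", level=5):
        coeffs = pywt.wavedec(spectrum, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale
        thresh = sigma * np.sqrt(2.0 * np.log(len(spectrum)))   # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft")
                      for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

    # Placeholder training data: rows are candidate peaks, columns are
    # hypothetical features such as local SNR, width, denoised intensity.
    features = np.random.rand(500, 8)
    labels = (features[:, 0] > 0.5).astype(int)   # stand-in expert calls
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    clf.fit(features, labels)
    ```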

  3. Accurate quantification of endogenous androgenic steroids in cattle's meat by gas chromatography mass spectrometry using a surrogate analyte approach

    International Nuclear Information System (INIS)

    Ahmadkhaniha, Reza; Shafiee, Abbas; Rastkari, Noushin; Kobarfard, Farzad

    2009-01-01

    Determination of endogenous steroids in complex matrices such as cattle meat is a challenging task. Since endogenous steroids always exist in animal tissues, no analyte-free matrix is available for constructing the standard calibration line, which is crucial for accurate quantification, especially at trace levels. Although some methods have been proposed to solve the problem, none has offered a complete solution. To this aim, a new quantification strategy was developed in this study, named the 'surrogate analyte approach', which is based on using isotope-labeled standards instead of the natural form of the endogenous steroids for preparing the calibration line. In comparison with the other methods currently in use for the quantitation of endogenous steroids, this approach provides improved simplicity and speed for analysis on a routine basis. The accuracy of this method is better than that of other methods at low concentrations and comparable to standard addition at medium and high concentrations. The method was also found to be valid according to the ICH criteria for bioanalytical methods. The developed method could be a promising approach in the field of compound residue analysis.
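
    A minimal numeric sketch of the surrogate analyte idea: the calibration line is built from the isotope-labeled analogue (absent from real matrix), and the endogenous analyte is read off that line. All numbers are placeholders, and the unit response factor is an assumption.

    ```python
    # Numeric sketch of the surrogate analyte idea: calibrate on the
    # isotope-labeled analogue, then quantify the endogenous steroid from
    # that curve. Numbers are placeholders; a unit response factor is assumed.
    import numpy as np

    labeled_conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # spiked levels
    labeled_resp = np.array([0.9, 4.8, 10.3, 49.1, 101.5])   # instrument response

    slope, intercept = np.polyfit(labeled_conc, labeled_resp, 1)
    response_factor = 1.0   # labeled vs natural response, assumed equal

    def quantify(sample_response):
        """Concentration of the endogenous steroid from the labeled curve."""
        return (sample_response / response_factor - intercept) / slope

    quantify(25.0)
    ```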

  4. A quantitative approach for pesticide analysis in grape juice by direct interfacing of a matrix compatible SPME phase to dielectric barrier discharge ionization-mass spectrometry.

    Science.gov (United States)

    Mirabelli, Mario F; Gionfriddo, Emanuela; Pawliszyn, Janusz; Zenobi, Renato

    2018-02-12

    We evaluated the performance of a dielectric barrier discharge ionization (DBDI) source for pesticide analysis in grape juice, a fairly complex matrix due to the high content of sugars (≈20% w/w) and pigments. A fast sample preparation method based on direct immersion solid-phase microextraction (SPME) was developed, and novel matrix-compatible SPME fibers were used to reduce in-source matrix suppression effects. A high resolution LTQ Orbitrap mass spectrometer allowed for rapid quantification in full scan mode. This direct SPME-DBDI-MS approach was proven to be effective for the rapid and direct analysis of complex sample matrices, with limits of detection in the parts-per-trillion (ppt) range and inter- and intra-day precision below 30% relative standard deviation (RSD) for samples spiked at 1, 10 and 10 ng ml-1, with overall performance comparable or even superior to existing chromatographic approaches.

  6. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  7. Constraints on the nuclear equation of state from nuclear masses and radii in a Thomas-Fermi meta-modeling approach

    Science.gov (United States)

    Chatterjee, D.; Gulminelli, F.; Raduta, Ad. R.; Margueron, J.

    2017-12-01

    The question of correlations among empirical equation of state (EoS) parameters constrained by nuclear observables is addressed in a Thomas-Fermi meta-modeling approach. A recently proposed meta-modeling for the nuclear EoS in nuclear matter is augmented with a single finite size term to produce a minimal unified EoS functional able to describe the smooth part of the nuclear ground state properties. This meta-model can reproduce the predictions of a large variety of models, and interpolate continuously between them. An analytical approximation to the full Thomas-Fermi integrals is further proposed giving a fully analytical meta-model for nuclear masses. The parameter space is sampled and filtered through the constraint of nuclear mass reproduction with Bayesian statistical tools. We show that this simple analytical meta-modeling has a predictive power on masses, radii, and skins comparable to full Hartree-Fock or extended Thomas-Fermi calculations with realistic energy functionals. The covariance analysis on the posterior distribution shows that no physical correlation is present between the different EoS parameters. Concerning nuclear observables, a strong correlation between the slope of the symmetry energy and the neutron skin is observed, in agreement with previous studies.

  8. Molecular imaging of myocardial infarction with Gadofluorine P – A combined magnetic resonance and mass spectrometry imaging approach

    Directory of Open Access Journals (Sweden)

    Fabian Lohöfer

    2018-04-01

    Full Text Available Background: Molecular MRI is becoming increasingly important for preclinical research. Validation of targeted gadolinium probes in tissue has, however, been cumbersome up to now. Novel methodology to assess gadolinium distribution in tissue after in vivo application is therefore needed. Purpose: To establish combined Magnetic Resonance Imaging (MRI) and Mass Spectrometry Imaging (MSI) for improved detection and quantification of Gadofluorine P deposition in scar formation and myocardial remodeling. Materials and methods: Animal studies were performed according to institutionally approved protocols. Myocardial infarction was induced by permanent ligation of the left anterior descending artery (LAD) in C57BL/6J mice. MRI was performed at 7T at 1 week and 6 weeks after myocardial infarction. Gadofluorine P was used for dynamic T1 mapping of extracellular matrix synthesis during myocardial healing and compared to Gd-DTPA. After in vivo imaging, contrast agent concentration as well as distribution in tissue were validated and quantified by spatially resolved Matrix-Assisted Laser Desorption Ionization (MALDI) MSI and Laser Ablation – Inductively Coupled Plasma – Mass Spectrometry (LA-ICP-MS) imaging. Results: Both Gadofluorine P enhancement and local tissue content in the myocardial scar were highest at 15 minutes post injection. R1 values increased from 1 to 6 weeks after MI (1.62 s−1 vs 2.68 s−1, p = 0.059), paralleled by an increase in Gadofluorine P concentration in the infarct from 0.019 mM at 1 week to 0.028 mM at 6 weeks (p = 0.048), whereas Gd-DTPA enhancement showed no differences (3.95 s−1 vs 3.47 s−1, p = 0.701). MALDI-MSI results were corroborated by elemental LA-ICP-MS imaging of gadolinium in healthy and infarcted myocardium. Histology confirmed increased extracellular matrix synthesis at 6 weeks compared to 1 week. Conclusion: Adding quantitative MSI to MR imaging enables a quantitative validation of Gadofluorine P distribution in the heart.

  9. Cryo-sectioning of mice for whole-body imaging of drugs and metabolites with desorption electrospray ionization mass spectrometry imaging - a simplified approach.

    Science.gov (United States)

    Okutan, Seda; Hansen, Harald S; Janfelt, Christian

    2016-06-01

    A method is presented for whole-body imaging of drugs and metabolites in mice with desorption electrospray ionization mass spectrometry imaging (DESI-MSI). Unlike most previous approaches to whole-body imaging, which are based on cryo-sectioning using a cryo-macrotome, the presented approach is based on use of the cryo-microtome found in any histology lab. The tissue sections are collected on tape, which is analyzed directly by DESI-MSI. The method is demonstrated on mice which have been dosed intraperitoneally with the antidepressive drug amitriptyline. By combining full-scan detection with the more selective and sensitive MS/MS detection, a number of endogenous compounds (lipids) were imaged simultaneously with the drug and one of its metabolites. The sensitivity of this approach allowed for imaging of the drug and the metabolite in a mouse dosed with 2.7 mg amitriptyline per kg bodyweight, which is comparable to the normally prescribed human dose. The simultaneous imaging of endogenous and exogenous compounds facilitates registration of the drug images to specific organs in the body by colored overlay of the two types of images. The method represents a relatively low-cost approach to simple, sensitive and highly selective whole-body imaging in drug distribution and metabolism studies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
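
    For orientation, the two-photon special case of such a rotational average is often quoted in the McClain form below. This is standard literature background rather than the paper's n-photon result, and the coefficients shown assume two parallel, linearly polarized photons.

    ```latex
    % Two-photon special case (McClain): rotational average of the
    % absorption strength for two parallel, linearly polarized photons,
    % with S the two-photon transition tensor. Coefficients change with
    % polarization; the n-photon generalizations follow the same pattern.
    \langle \delta_{\mathrm{TPA}} \rangle
      = \frac{1}{30} \sum_{a,b}
        \left( 2\, S_{aa} S^{*}_{bb}
             + 2\, S_{ab} S^{*}_{ab}
             + 2\, S_{ab} S^{*}_{ba} \right)
    ```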

  11. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  12. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  13. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  14. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
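
    As a small concrete example of one family treated in the book, ordered weighted averaging (OWA) applies the weights to the sorted inputs rather than to fixed arguments:

    ```python
    # Ordered weighted averaging (OWA): weights address order statistics,
    # not fixed arguments, so OWA interpolates between min, mean and max.
    import numpy as np

    def owa(values, weights):
        ordered = np.sort(np.asarray(values, float))[::-1]  # descending
        w = np.asarray(weights, float)
        return float(ordered @ (w / w.sum()))

    owa([0.3, 0.9, 0.5], [0.5, 0.3, 0.2])   # biased toward the larger inputs
    ```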

  15. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  16. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  17. Oligosaccharide substrate preferences of human extracellular sulfatase Sulf2 using liquid chromatography-mass spectrometry based glycomics approaches.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Sulfs are extracellular endosulfatases that selectively remove the 6-O-sulfate groups from the cell surface heparan sulfate (HS) chain. By altering the sulfation at these particular sites, Sulfs function to remodel HS chains. As a result of this remodeling activity, HSulf2 regulates a multitude of cell-signaling events that depend on interactions between proteins and HS. Previous efforts to characterize the substrate specificity of human Sulfs (HSulfs) focused on the analysis of HS disaccharides and synthetic repeating units. In this study, we characterized the substrate preferences of human HSulf2 using HS oligosaccharides with various lengths and sulfation degrees from several naturally occurring HS sources by applying liquid chromatography mass spectrometry based glycomics methods. The results showed that HSulf2 preferentially digests highly sulfated HS oligosaccharides with zero acetyl groups, and this preference is length dependent. In terms of oligosaccharide length, HSulf2 digestion induced a greater sulfation decrease on DP6 (DP: degree of polymerization) compared to DP2, DP4 and DP8. In addition, HSulf2 preferentially digests the oligosaccharide domain located at the non-reducing end (NRE) of the HS and heparin chain. The HSulf2 digestion products were also altered only for specific isomers. HSulf2-treated NRE oligosaccharides showed a greater decrease in cell proliferation than those from internal domains of the HS chain. After further chromatographic separation, we identified the three most preferred unsaturated hexasaccharides for HSulf2.

  18. Identification of specific bovine blood biomarkers with a non-targeted approach using HPLC ESI tandem mass spectrometry.

    Science.gov (United States)

    Lecrenier, M C; Marbaix, H; Dieu, M; Veys, P; Saegerman, C; Raes, M; Baeten, V

    2016-12-15

    Animal by-products are valuable protein sources in animal nutrition. Among them are blood products and blood meal, which are used as high-quality material for their beneficial effects on growth and health. Within the framework of the feed ban relaxation, the development of complementary methods in order to refine the identification of processed animal proteins remains challenging. The aim of this study was to identify specific biomarkers that would allow the detection of bovine blood products and processed animal proteins using tandem mass spectrometry. Seventeen biomarkers were identified: nine peptides for bovine plasma powder; seven peptides for bovine haemoglobin powder, including six peptides for bovine blood meal; and one peptide for porcine blood. They were not detected in several commercial compound feed or feed materials, such as blood by-products of other animal origins, milk-derived products and fish meal. These biomarkers could be used for developing a species-specific and blood-specific detection method. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. GasBench/isotope ratio mass spectrometry: a carbon isotope approach to detect exogenous CO(2) in sparkling drinks.

    Science.gov (United States)

    Cabañero, Ana I; San-Hipólito, Tamar; Rupérez, Mercedes

    2007-01-01

    A new procedure for the determination of carbon dioxide (CO(2)) (13)C/(12)C isotope ratios, using direct injection into a GasBench/isotope ratio mass spectrometry (GasBench/IRMS) system, has been developed to improve isotopic methods devoted to the study of the authenticity of sparkling drinks. Thirty-nine commercial sparkling drink samples from various origins were analyzed. Values of delta(13)C(cava) ranged from -20.30 per thousand to -23.63 per thousand, when C3 sugar addition was performed for a second alcoholic fermentation. Values of delta(13)C(water) ranged from -5.59 per thousand to -6.87 per thousand in the case of naturally carbonated water or water fortified with gas from the spring, and delta(13)C(water) ranged from -29.36 per thousand to -42.09 per thousand when industrial CO(2) was added. It has been demonstrated that the addition of C4 sugar to semi-sparkling wine (aguja) and industrial CO(2) addition to sparkling wine (cava) or water can be detected. The new procedure has advantages over existing methods in terms of analysis time and sample treatment. In addition, it is the first isotopic method developed that allows (13)C/(12)C determination directly from a liquid sample without previous CO(2) extraction. No significant isotopic fractionation was observed nor any influence by secondary compounds present in the liquid phase. Copyright (c) 2007 John Wiley & Sons, Ltd.
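
    For reference, the delta notation behind these (13)C/(12)C values is the standard one below, with VPDB as the international reference; this is textbook background rather than anything specific to the GasBench method.

    ```latex
    % Standard delta notation for carbon isotope ratios, R = 13C/12C,
    % reported in per mil relative to the VPDB standard.
    \delta^{13}\mathrm{C}
      = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{VPDB}}} - 1 \right)
        \times 1000
    ```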

  20. Liquid chromatography-mass spectrometry in occupational toxicology: a novel approach to the study of biotransformation of industrial chemicals.

    Science.gov (United States)

    Manini, Paola; Andreoli, Roberta; Niessen, Wilfried

    2004-11-26

    Biological monitoring and biomarkers are used in occupational toxicology for a more accurate risk assessment of occupationally exposed people. Appropriate and validated biomarkers of internal dose, such as urinary metabolites, besides being positively correlated with external exposure, have predictive value for the risk of adverse effects. The application of liquid chromatography-mass spectrometry (LC-MS) in occupational and environmental toxicology, although relatively recent, has proved valuable in the determination of traditional biomarkers of exposure, as well as in metabolism studies aimed at investigating minor metabolic routes and new, more specific biomarkers. This review presents selected applications of LC-MS to the study of the metabolism of industrial chemicals, like n-hexane, benzene and other aromatic hydrocarbons, and styrene and other monomers employed in the plastics industry, as well as to other chemicals used in working environments, like pesticides used by farmers and antineoplastic agents prepared by hospital personnel. Analytical and pre-analytical factors which affect the quantitative determination of urinary metabolites, i.e. sample preparation, matrix effect, ion suppression, use of internal standards, and calibration, are emphasized.

  1. A Liquid Chromatography - Tandem Mass Spectrometry Approach for the Identification of Mebendazole Residue in Pork, Chicken, and Horse.

    Directory of Open Access Journals (Sweden)

    Ji Sun Lee

    Full Text Available A confirmatory and quantitative liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the determination of mebendazole and its hydrolyzed and reduced metabolites in pork, chicken, and horse muscle was developed and validated in this study. Anthelmintic compounds were extracted with ethyl acetate after the sample mixture was made alkaline, followed by liquid chromatographic separation using a reversed-phase C18 column. Gradient elution was performed with a mobile phase consisting of water containing 10 mM ammonium formate and methanol. This confirmatory method was validated according to EU requirements. Evaluated validation parameters included specificity, accuracy, precision (repeatability and within-laboratory reproducibility), analytical limits (decision limit and detection limit), and applicability. Most parameters were proved to conform to the EU requirements. The decision limit (CCα) and detection capability (CCβ) for all analytes ranged from 15.84 to 17.96 μg kg-1. The limit of detection (LOD) and the limit of quantification (LOQ) for all analytes were 0.07 μg kg-1 and 0.2 μg kg-1, respectively. The developed method was successfully applied to monitoring samples collected from markets in major cities and proved to have great potential as a regulatory tool to determine mebendazole residues in animal-based foods.

  2. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averages are considered the most reliable basis for simulating both present-day and future climates, and have been a primary reference for conclusions in major coordinated studies, i.e. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-averaged Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
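
    The core of the ERF method is an element-wise ensemble mean of the driving fields. A hedged sketch, assuming netCDF IBC files that align across models (file names and variable layout are hypothetical):

    ```python
    # Hedged sketch of the ERF idea: element-wise ensemble mean of the
    # initial/boundary-condition fields of several GCMs, producing one IBC
    # set for a single RCM run. File names are hypothetical; real IBCs
    # need consistent grids and calendars before averaging.
    import xarray as xr

    gcm_files = ["gcm1_ibc.nc", "gcm2_ibc.nc", "gcm3_ibc.nc"]  # hypothetical
    datasets = [xr.open_dataset(f) for f in gcm_files]

    stacked = xr.concat(datasets, dim="model")   # new "model" dimension
    erf_ibc = stacked.mean(dim="model")          # ensemble-averaged forcings
    erf_ibc.to_netcdf("erf_ibc.nc")              # drives one RCM simulation
    ```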

  3. Simultaneous quantification of protein phosphorylation sites using liquid chromatography-tandem mass spectrometry-based targeted proteomics: a linear algebra approach for isobaric phosphopeptides.

    Science.gov (United States)

    Xu, Feifei; Yang, Ting; Sheng, Yuan; Zhong, Ting; Yang, Mi; Chen, Yun

    2014-12-05

    As one of the most studied post-translational modifications (PTM), protein phosphorylation plays an essential role in almost all cellular processes. Current methods are able to predict and determine thousands of phosphorylation sites, whereas stoichiometric quantification of these sites is still challenging. Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS)-based targeted proteomics is emerging as a promising technique for site-specific quantification of protein phosphorylation using proteolytic peptides as surrogates of proteins. However, several issues may limit its application, one of which relates to the phosphopeptides with different phosphorylation sites and the same mass (i.e., isobaric phosphopeptides). While employment of site-specific product ions allows for these isobaric phosphopeptides to be distinguished and quantified, site-specific product ions are often absent or weak in tandem mass spectra. In this study, linear algebra algorithms were employed as an add-on to targeted proteomics to retrieve information on individual phosphopeptides from their common spectra. To achieve this simultaneous quantification, a LC-MS/MS-based targeted proteomics assay was first developed and validated for each phosphopeptide. Given the slope and intercept of calibration curves of phosphopeptides in each transition, linear algebraic equations were developed. Using a series of mock mixtures prepared with varying concentrations of each phosphopeptide, the reliability of the approach to quantify isobaric phosphopeptides containing multiple phosphorylation sites (≥ 2) was discussed. Finally, we applied this approach to determine the phosphorylation stoichiometry of heat shock protein 27 (HSP27) at Ser78 and Ser82 in breast cancer cells and tissue samples.
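
    A toy sketch of the linear-algebra step: with the slope and intercept of each transition's calibration line known, the measured signals of the mixed sample define a linear system whose least-squares solution gives the concentrations of the isobaric phosphopeptides. The numbers below are placeholders.

    ```python
    # Toy sketch of the linear-algebra step: each transition t obeys
    # signal[t] = intercept[t] + sum_i slope[t, i] * conc[i] over the
    # isobaric phosphopeptides i; least squares recovers conc.
    import numpy as np

    slopes = np.array([[2.1, 0.4],    # transition 1 response to peptides A, B
                       [0.3, 1.8],    # transition 2
                       [1.0, 1.1]])   # transition 3 (shared fragment)
    intercepts = np.array([0.05, 0.02, 0.04])
    signals = np.array([1.50, 1.97, 1.64])   # measured in the mixture

    conc, *_ = np.linalg.lstsq(slopes, signals - intercepts, rcond=None)
    print(conc)   # -> approximately [0.5, 1.0]
    ```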

  4. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results using the NS2 simulator.
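
    A hedged sketch of one plausible reading of the iterative calculation: start from the weighted fair shares of the link, cap each flow at its input rate, and redistribute the unused capacity over the remaining flows until the allocation settles. This follows the quantities named in the abstract (link speed, weights, input rates) but is not the authors' exact algorithm.

    ```python
    # Illustrative weighted max-min allocation consistent with the inputs
    # named above; an assumption, not the paper's exact iterative method.
    def wfq_average_bandwidth(link_speed, weights, input_rates, tol=1e-9):
        alloc = [0.0] * len(weights)
        active = set(range(len(weights)))
        remaining = link_speed
        while active and remaining > tol:
            total_w = sum(weights[i] for i in active)
            # flows whose demand fits inside their weighted fair share
            satisfied = {i for i in active
                         if input_rates[i] <= remaining * weights[i] / total_w}
            if not satisfied:                    # all flows are bottlenecked
                for i in active:
                    alloc[i] = remaining * weights[i] / total_w
                break
            for i in satisfied:                  # cap at the input rate and
                alloc[i] = input_rates[i]        # free the unused capacity
                remaining -= input_rates[i]
            active -= satisfied
        return alloc

    wfq_average_bandwidth(100.0, [3, 2, 1], [20.0, 80.0, 50.0])
    # -> [20.0, 53.33..., 26.66...]
    ```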

  5. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  6. Comparison of approaches for measuring the mass accommodation coefficient for the condensation of water and sensitivities to uncertainties in thermophysical properties.

    Science.gov (United States)

    Miles, Rachael E H; Reid, Jonathan P; Riipinen, Ilona

    2012-11-08

    We compare and contrast measurements of the mass accommodation coefficient of water on a water surface made using ensemble and single particle techniques under conditions of supersaturation and subsaturation, respectively. In particular, we consider measurements made using an expansion chamber, a continuous flow streamwise thermal gradient cloud condensation nuclei chamber, the Leipzig Aerosol Cloud Interaction Simulator, aerosol optical tweezers, and electrodynamic balances. Although this assessment is not intended to be comprehensive, these five techniques are complementary in their approach and give values that span the range from near 0.1 to 1.0 for the mass accommodation coefficient. We use the same semianalytical treatment to assess the sensitivities of the measurements made by the various techniques to thermophysical quantities (diffusion constants, thermal conductivities, saturation pressure of water, latent heat, and solution density) and experimental parameters (saturation value and temperature). This represents the first effort to assess and compare measurements made by different techniques to attempt to reduce the uncertainty in the value of the mass accommodation coefficient. Broadly, we show that the measurements are consistent within the uncertainties inherent to the thermophysical and experimental parameters and that the value of the mass accommodation coefficient should be considered to be larger than 0.5. Accurate control and measurement of the saturation ratio is shown to be critical for a successful investigation of the surface transport kinetics during condensation/evaporation. This invariably requires accurate knowledge of the partial pressure of water, the system temperature, the droplet curvature and the saturation pressure of water. Further, the importance of including and quantifying the transport of heat in interpreting droplet measurements is highlighted; the particular issues associated with interpreting measurements of condensation

  7. Optimization of Search Engines and Postprocessing Approaches to Maximize Peptide and Protein Identification for High-Resolution Mass Data.

    Science.gov (United States)

    Tu, Chengjian; Sheng, Quanhu; Li, Jun; Ma, Danjun; Shen, Xiaomeng; Wang, Xue; Shyr, Yu; Yi, Zhengping; Qu, Jun

    2015-11-06

    The two key steps for analyzing proteomic data generated by high-resolution MS are database searching and postprocessing. While the two steps are interrelated, studies on their combinatory effects and the optimization of these procedures have not been adequately conducted. Here, we investigated the performance of three popular search engines (SEQUEST, Mascot, and MS Amanda) in conjunction with five filtering approaches: respective score-based filtering, a group-based approach, local false discovery rate (LFDR), PeptideProphet, and Percolator. A total of eight data sets from various proteomes (e.g., E. coli, yeast, and human) produced by various instruments with high-accuracy survey scans (MS1) and high- or low-accuracy fragment ion scans (MS2) (LTQ-Orbitrap, Orbitrap-Velos, Orbitrap-Elite, Q-Exactive, Orbitrap-Fusion, and Q-TOF) were analyzed. It was found that combinations involving Percolator achieved markedly more peptide and protein identifications at the same FDR level than the other 12 combinations for all data sets. Among these, the SEQUEST-Percolator and MS Amanda-Percolator combinations provided slightly better performance than the other methods for data sets with low-accuracy MS2 (ion trap or IT) and high-accuracy MS2 (Orbitrap or TOF), respectively. For approaches without Percolator, SEQUEST-group performs best for data sets with MS2 produced by collision-induced dissociation (CID) and IT analysis; Mascot-LFDR gives more identifications for data sets generated by higher-energy collisional dissociation (HCD) and analyzed in an Orbitrap (HCD-OT) or an Orbitrap Fusion (HCD-IT); MS Amanda-group excels for the Q-TOF data set and the Orbitrap Velos HCD-OT data set. Therefore, if Percolator is not used, a specific combination should be applied to each type of data set. Moreover, a higher percentage of multiple-peptide proteins and lower variation of protein spectral counts were observed when analyzing technical replicates using Percolator

  8. Protocol and baseline data for a multi-year cohort study of the effects of different mass drug treatment approaches on functional morbidities from schistosomiasis in four African countries

    DEFF Research Database (Denmark)

    Shen, Ye; King, Charles H.; Binder, Sue

    2017-01-01

    Background: The Schistosomiasis Consortium for Operational Research and Evaluation (SCORE) focuses on randomized trials of different approaches to mass drug administration (MDA) in endemic countries in Africa. Because their studies provided an opportunity to evaluate the effects of mass treatment...

  9. Analysis of bovine milk caseins on organic monolithic columns: an integrated capillary liquid chromatography-high resolution mass spectrometry approach for the study of time-dependent casein degradation.

    Science.gov (United States)

    Pierri, Giuseppe; Kotoni, Dorina; Simone, Patrizia; Villani, Claudio; Pepe, Giacomo; Campiglia, Pietro; Dugo, Paola; Gasparrini, Francesco

    2013-10-25

    Casein proteins constitute approximately 80% of the proteins present in bovine milk and account for many of its nutritional and technological properties. The analysis of the casein fraction in commercially available pasteurized milk and the study of its time-dependent degradation are of considerable interest in the agro-food industry. Here we present new analytical methods for the study of caseins in fresh and expired bovine milk, based on the use of lab-made capillary organic monolithic columns. An integrated capillary high performance liquid chromatography and high-resolution mass spectrometry (Cap-LC-HRMS) approach was developed, exploiting the excellent resolution, permeability and biocompatibility of organic monoliths, which is easily adaptable to the analysis of intact proteins. The resolution obtained on the lab-made Protein-Cap-RP-Lauryl-γ-Monolithic column (270 mm × 0.250 mm, L × I.D.) in the analysis of commercial standard caseins (αS-CN, β-CN and κ-CN) by Cap-HPLC-UV was compared to that observed using two packed capillary C4 columns, the ACE C4 (3 μm, 150 mm × 0.300 mm, L × I.D.) and the Jupiter C4 (5 μm, 150 mm × 0.300 mm, L × I.D.). Owing to its higher resolution, the monolithic capillary column was chosen for the subsequent degradation studies of casein fractions extracted from bovine milk 1-4 weeks after the expiry date. The comparison of the UV chromatographic profiles of skim, semi-skim and whole milk showed a greater stability of whole milk against time-dependent degradation of caseins, which was further supported by high-resolution analysis on a 50-cm long monolithic column using a 120-min gradient. In parallel, the exact monoisotopic and average molecular masses of intact αS-CN and β-CN protein standards were obtained by high resolution mass spectrometry and used for casein identification in Cap-LC-HRMS analysis. Finally, the proteolytic degradation of β-CN in skim milk

  10. Sampling and mass spectrometry approaches for the detection of drugs and foreign contaminants in breath for homeland security applications

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Audrey Noreen [Michigan State Univ., East Lansing, MI (United States)

    2009-01-01

    Homeland security relies heavily on analytical chemistry to identify suspicious materials and persons. Traditionally this role has focused on attribution, determining the type and origin of an explosive, for example. But as technology advances, analytical chemistry can and will play an important role in the prevention and preemption of terrorist attacks. More sensitive and selective detection techniques can allow suspicious materials and persons to be identified even before a final destructive product is made. The work presented herein focuses on the use of commercial and novel detection techniques for application to the prevention of terrorist activities. Although drugs are not commonly thought of when discussing terrorism, narcoterrorism has become a significant threat in the 21st century. The role of the drug trade in the funding of terrorist groups is prevalent; thus, reducing the trafficking of illegal drugs can play a role in the prevention of terrorism by cutting off much needed funding. To do so, sensitive, specific, and robust analytical equipment is needed to quickly identify a suspected drug sample no matter what matrix it is in. Single Particle Aerosol Mass Spectrometry (SPAMS) is a novel technique that has previously been applied to biological and chemical detection. The current work applies SPAMS to drug analysis, identifying the active ingredients in single component, multi-component, and multi-tablet drug samples in a relatively non-destructive manner. In order to do so, a sampling apparatus was created to allow particle generation from drug tablets with on-line introduction to the SPAMS instrument. Rule trees were developed to automate the identification of drug samples on a single particle basis. A novel analytical scheme was also developed to identify suspect individuals based on chemical signatures in human breath. Human breath was sampled using an RTube™ and the trace volatile organic compounds (VOCs) were preconcentrated using solid...

  11. Comprehensive Proteoform Characterization of Plasma Complement Component C8αβγ by Hybrid Mass Spectrometry Approaches

    Science.gov (United States)

    Franc, Vojtech; Zhu, Jing; Heck, Albert J. R.

    2018-03-01

    The human complement hetero-trimeric C8αβγ (C8) protein assembly (~150 kDa) is an important component of the membrane attack complex (MAC). C8 initiates membrane penetration and coordinates MAC pore formation. Here, we charted in detail the structural micro-heterogeneity within C8, purified from human plasma, combining high-resolution native mass spectrometry and (glyco)peptide-centric proteomics. The intact C8 proteoform profile revealed at least 20 co-occurring MS signals. Additionally, we employed ion exchange chromatography to separate purified C8 into four distinct fractions. Their native MS analysis revealed even more detailed structural micro-heterogeneity within C8. Subsequent peptide-centric analysis, by proteolytic digestion of C8 and LC-MS/MS, provided site-specific quantitative profiles of different types of C8 glycosylation. Combining all this data provides a detailed specification of co-occurring C8 proteoforms, including experimental evidence on N-glycosylation, C-mannosylation, and O-glycosylation. In addition to the known N-glycosylation sites, two more N-glycosylation sites were detected on C8. Additionally, we elucidated the stoichiometry of all C-mannosylation sites in all the thrombospondin-like (TSP) domains of C8α and C8β. Lastly, our data contain the first experimental evidence of O-linked glycans located on C8γ. Albeit low-abundant, these O-glycans are the first PTMs ever detected on this subunit. By placing the observed PTMs in structural models of free C8 and C8 embedded in the MAC, it may be speculated that some of the newly identified modifications may play a role in MAC formation.

  12. High-throughput screening for various classes of doping agents using a new 'dilute-and-shoot' liquid chromatography-tandem mass spectrometry multi-target approach.

    Science.gov (United States)

    Guddat, S; Solymos, E; Orlovius, A; Thomas, A; Sigmund, G; Geyer, H; Thevis, M; Schänzer, W

    2011-01-01

    A new multi-target approach based on liquid chromatography-electrospray ionization tandem mass spectrometry (LC-(ESI)-MS/MS) is presented to screen for various classes of prohibited substances using direct injection of urine specimens. With a highly sensitive new-generation hybrid mass spectrometer, classic groups of drugs (for example, diuretics, beta2-agonists, stimulants, and narcotics) are detectable at concentration levels far below the required limits. Additionally, more challenging and various new target compounds could be implemented. Model compounds of stimulant conjugates were studied to investigate a possible screening without complex sample preparation. As a main achievement, the integration of the plasma volume expanders dextran and hydroxyethyl starch (HES), commonly analyzed in time-consuming, stand-alone procedures, is accomplished. To screen for relatively new prohibited compounds, a common metabolite of the selective androgen receptor modulator (SARM) andarine, a metabolite of growth hormone releasing peptide (GHRP-2), and 5-amino-4-imidazolecarboxyamide ribonucleoside (AICAR) are analyzed. Following a completely new approach, conjugates of di(2-ethylhexyl) phthalate (DEHP) metabolites are monitored to detect abnormally high levels of plasticizers indicating illicit blood transfusion. The assay was fully validated for qualitative purposes considering the parameters specificity, intra- (3.2-16.6%) and inter-day precision (0.4-19.9%) at low, medium and high concentration, robustness, limit of detection (1-70 ng/ml, dextran: 30 µg/ml, HES: 10 µg/ml) and ion suppression/enhancement effects. The analyses of post-administration and routine doping control samples demonstrate the applicability of the method for sports drug testing. This straightforward and reliable approach accomplishes the combination of different screening procedures, resulting in a high-throughput method that increases the efficiency of the labs' daily work. Copyright © 2011 John Wiley & Sons, Ltd.

  13. A Bayesian approach to quantifying the effects of mass poultry vaccination upon the spatial and temporal dynamics of H5N1 in Northern Vietnam.

    Directory of Open Access Journals (Sweden)

    Patrick G T Walker

    2010-02-01

    Outbreaks of H5N1 in poultry in Vietnam continue to threaten the livelihoods of those reliant on poultry production whilst simultaneously posing a severe public health risk given the high mortality associated with human infection. Authorities have invested significant resources in order to control these outbreaks. Of particular interest is the decision, following a second wave of outbreaks, to move from a "stamping out" approach to the implementation of a nationwide mass vaccination campaign. Outbreaks which occurred around this shift in policy provide a unique opportunity to evaluate the relative effectiveness of these approaches and to help other countries make informed judgements when developing control strategies. Here we use Bayesian Markov Chain Monte Carlo (MCMC) data augmentation techniques to derive the first quantitative estimates of the impact of the vaccination campaign on the spread of outbreaks of H5N1 in northern Vietnam. We find a substantial decrease in the transmissibility of infection between communes following vaccination. This was coupled with a significant increase in the time from infection to detection of the outbreak. Using a cladistic approach we estimated that, according to the posterior mean effect of pruning the reconstructed epidemic tree, two thirds of the outbreaks in 2007 could be attributed to this decrease in the rate of reporting. The net impact of these two effects was a less intense but longer-lasting wave and, whilst not sufficient to prevent the sustained spread of outbreaks, an overall reduction in the likelihood of the transmission of infection between communes. These findings highlight the need for more effectively targeted surveillance in order to help ensure that the effective coverage achieved by mass vaccination is converted into a reduction in the likelihood of outbreaks occurring which is sufficient to control the spread of H5N1 in Vietnam.
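
    The paper's MCMC augments unobserved infection and reporting times within a full commune-level transmission model; as a heavily reduced illustration of the Metropolis step at its core, the sketch below infers a single transmission-rate parameter from toy exponentially distributed waiting times. The likelihood, the flat prior on beta > 0, and all names are illustrative stand-ins, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(beta, waiting_times):
    # Toy exponential waiting-time likelihood: a hypothetical stand-in
    # for the paper's commune-to-commune transmission model.
    t = np.asarray(waiting_times, dtype=float)
    return len(t) * np.log(beta) - beta * t.sum()

def metropolis(waiting_times, n_iter=10000, step=0.05):
    beta = 1.0
    ll = log_likelihood(beta, waiting_times)
    samples = []
    for _ in range(n_iter):
        prop = abs(beta + step * rng.normal())        # reflect at zero
        ll_prop = log_likelihood(prop, waiting_times)
        if np.log(rng.random()) < ll_prop - ll:       # flat prior on beta > 0
            beta, ll = prop, ll_prop
        samples.append(beta)
    return np.array(samples)

posterior = metropolis(rng.exponential(scale=2.0, size=50))
print(posterior[2000:].mean())    # posterior mean after burn-in
```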

  14. tavg3_3d_chm_Ne: MERRA Chem 3D IAU C-Grid Edge Mass Flux, Time Average 3-Hourly 0.667 x 0.5 degree V5.2.0 (MAT3NECHM) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3NECHM or tavg3_3d_chm_Ne data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layer Edges that is time averaged, 3D model...

  15. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
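
    As background for the first variant, the classical randomized pairwise gossip step can be sketched in a few lines: a randomly activated pair of nodes replaces both values by their mutual average, which preserves the global sum and hence drives all values to the initial average under mild connectivity assumptions. The sketch below is this classical scheme, not the authors' reinforcement-learning variant; as the abstract notes, truly asynchronous versions of such schemes may fail to converge to the desired average.

```python
import random

def pairwise_gossip(values, n_rounds=20000, seed=0):
    """Classical randomized gossip: each round, one random pair (i, j)
    activates and both nodes adopt their mutual average. The global sum
    is invariant, so all values converge to the initial average."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(n_rounds):
        i, j = rng.sample(range(n), 2)
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

print(pairwise_gossip([1.0, 5.0, 9.0, 13.0]))   # all entries near 7.0
```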

  16. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  17. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
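
    The core computation, correlating one series against a trailing moving average of another and scanning the window length for the best fit, can be sketched as follows; the data here are synthetic stand-ins for the literary and economic misery indices, not the study's corpus measurements.

```python
import numpy as np

def lagged_ma_correlation(literary, economic, window):
    """Correlate this year's literary misery with the moving average
    of the previous `window` years of economic misery."""
    lit = np.asarray(literary, dtype=float)
    econ = np.asarray(economic, dtype=float)
    ma = np.convolve(econ, np.ones(window) / window, mode="valid")
    # ma[k] averages years k .. k+window-1, i.e. the years preceding k+window
    aligned_lit = lit[window:]
    return np.corrcoef(aligned_lit, ma[:len(aligned_lit)])[0, 1]

rng = np.random.default_rng(1)
years = 90
economic = rng.normal(size=years).cumsum()
# synthetic literary index built with an 11-year trailing dependence
literary = np.array([economic[max(0, t - 11):t].mean() if t >= 11 else 0.0
                     for t in range(years)])
best = max(range(2, 21), key=lambda w: lagged_ma_correlation(literary, economic, w))
print(best)   # recovers a window near the built-in 11-year lag
```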

  19. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaître–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.

  20. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  1. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented.

  2. Efficacy of an artificial neural network-based approach to endoscopic ultrasound elastography in diagnosis of focal pancreatic masses.

    Science.gov (United States)

    Săftoiu, Adrian; Vilmann, Peter; Gorunescu, Florin; Janssen, Jan; Hocke, Michael; Larsen, Michael; Iglesias-Garcia, Julio; Arcidiacono, Paolo; Will, Uwe; Giovannini, Marc; Dietrich, Cristoph F; Havre, Roald; Gheorghe, Cristian; McKay, Colin; Gheonea, Dan Ionuţ; Ciurea, Tudorel

    2012-01-01

    By using strain assessment, real-time endoscopic ultrasound (EUS) elastography provides additional information about a lesion's characteristics in the pancreas. We assessed the accuracy of real-time EUS elastography in focal pancreatic lesions using computer-aided diagnosis by artificial neural network analysis. We performed a prospective, blinded, multicentric study of 258 patients (774 recordings from EUS elastography) who were diagnosed with chronic pancreatitis (n = 47) or pancreatic adenocarcinoma (n = 211) at 13 tertiary academic medical centers in Europe (the European EUS Elastography Multicentric Study Group). We used postprocessing software analysis to compute individual frames of elastography movies recorded by retrieving hue histogram data from a dynamic sequence of EUS elastography into a numeric matrix. The data then were analyzed in an extended neural network analysis, to automatically differentiate benign from malignant patterns. The neural computing approach had 91.14% training accuracy (95% confidence interval [CI], 89.87%-92.42%) and 84.27% testing accuracy (95% CI, 83.09%-85.44%). These results were obtained using the 10-fold cross-validation technique. The statistical analysis of the classification process showed a sensitivity of 87.59%, a specificity of 82.94%, a positive predictive value of 96.25%, and a negative predictive value of 57.22%. Moreover, the corresponding area under the receiver operating characteristic curve was 0.94 (95% CI, 0.91-0.97), which was significantly higher than the values obtained by simple mean hue histogram analysis, for which the area under the receiver operating characteristic curve was 0.85. Use of the artificial intelligence methodology via artificial neural networks supports the medical decision process, providing fast and accurate diagnoses. Copyright © 2012 AGA Institute. Published by Elsevier Inc. All rights reserved.

  3. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  4. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
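
    TGA itself averages subspaces on the Grassmann manifold; the element-wise trimmed averaging it relies on for pixel-outlier robustness can be illustrated in a few lines. This is only a sketch of the trimming idea using scipy's trim_mean, not the authors' implementation, and the data are synthetic.

```python
import numpy as np
from scipy.stats import trim_mean

# Element-wise trimmed average of data vectors (rows of X): for each
# coordinate, the most extreme fraction of values is discarded before
# averaging, which suppresses gross pixel outliers that break the
# ordinary mean used in standard PCA pipelines.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[::17] += 50.0                                   # inject pixel outliers
robust_avg = trim_mean(X, proportiontocut=0.1, axis=0)
naive_avg = X.mean(axis=0)
print(np.round(robust_avg, 2))    # stays near zero
print(np.round(naive_avg, 2))     # dragged upward by the outliers
```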

  5. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  6. A control-oriented approach to estimate the injected fuel mass on the basis of the measured in-cylinder pressure in multiple injection diesel engines

    International Nuclear Information System (INIS)

    Finesso, Roberto; Spessa, Ezio

    2015-01-01

    Highlights: • Control-oriented method to estimate injected quantities from in-cylinder pressure. • Able to calculate the injected quantities for multiple injection strategies. • Based on the inversion of a heat-release predictive model. • Low computational time demand. - Abstract: A new control-oriented methodology has been developed to estimate the injected fuel quantities, in real-time, in multiple-injection DI diesel engines on the basis of the measured in-cylinder pressure. The method is based on the inversion of a predictive combustion model that was previously developed by the authors, and that is capable of estimating the heat release rate and the in-cylinder pressure on the basis of the injection rate. The model equations have been rewritten in order to derive the injected mass as an output quantity, starting from use of the measured in-cylinder pressure as input. It has been verified that the proposed method is capable of estimating the injected mass of pilot pulses with an uncertainty of the order of ±0.15 mg/cyc, and the total injected mass with an uncertainty of the order of ±0.9 mg/cyc. The main sources of uncertainty are related to the estimation of the in-cylinder heat transfer and of the isentropic coefficient γ = c_p/c_v. The estimation of the actual injected quantities in the combustion chamber can represent a powerful means to diagnose the behavior of the injectors during engine operation, and offers the possibility of monitoring effects such as injector ageing and injector coking, as well as of allowing accurate control of the pilot injected quantities; the latter are in fact usually characterized by a large dispersion, with negative consequences on combustion quality and emission formation. The approach is characterized by a very low computational time, and is therefore suitable for control-oriented applications.

  7. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
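
    The weighting rule itself is compact: posterior model probabilities are proportional to model evidence (times a prior over models), and predictions are averaged under those weights. A minimal sketch, assuming a flat model prior and log-evidences already computed; the numbers are illustrative only.

```python
import numpy as np

def bma_predict(log_evidences, predictions):
    """Bayesian model averaging: weight each model's prediction by its
    posterior probability, proportional to model evidence under a flat
    prior over models (an assumption of this sketch)."""
    log_e = np.asarray(log_evidences, dtype=float)
    w = np.exp(log_e - log_e.max())       # subtract max for stability
    w /= w.sum()
    return w, w @ np.asarray(predictions, dtype=float)

# three hypothetical candidate models of the environment
weights, pooled = bma_predict(log_evidences=[-10.2, -9.8, -14.0],
                              predictions=[0.3, 0.6, 0.9])
print(weights, pooled)
```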

  8. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  9. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  10. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  11. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization-optimal given that the prior matches the teacher parameter distribution, the situation is l...

  12. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  13. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  14. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.
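
    As a generic illustration of reconstruction from noisy local averages (the paper derives L² error bounds for its own regularization scheme; the sketch below uses plain Tikhonov regularization with a hypothetical 5-point averaging operator and arbitrary noise level):

```python
import numpy as np

def reconstruct_from_local_averages(b, A, lam):
    """Data b are noisy local averages A @ x of an unknown function
    sampled on a grid. Solve the Tikhonov problem
        min ||A x - b||^2 + lam ||x||^2
    in closed form via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

n = 100
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
A = np.zeros((n - 4, n))          # each datum is a 5-point local mean
for i in range(n - 4):
    A[i, i:i + 5] = 0.2
b = A @ x_true + 0.05 * np.random.default_rng(0).normal(size=n - 4)
x_rec = reconstruct_from_local_averages(b, A, lam=1e-2)
```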

  15. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no. ...

  16. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type...
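
    A typical moving-average rule of the type studied can be sketched in a few lines; the window lengths and the synthetic price process below are arbitrary stand-ins, not the paper's model specification.

```python
import numpy as np

def ma_crossover_positions(prices, short=10, long=50):
    """Classic moving-average trading rule: hold the asset (position 1)
    whenever the short-window MA is above the long-window MA, else
    stay out of the market (position 0)."""
    p = np.asarray(prices, dtype=float)
    ma = lambda w: np.convolve(p, np.ones(w) / w, mode="valid")
    s, l = ma(short), ma(long)
    s = s[len(s) - len(l):]        # align both series on their tails
    return (s > l).astype(int)

rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(0.001 + 0.01 * rng.normal(size=500)))
positions = ma_crossover_positions(prices)
print(positions[-10:])             # most recent in/out signals
```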

  17. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple...

  18. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR 1209.12 (2010) - On average. Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS...), CONSUMER INFORMATION ORDER, Mushroom Promotion, Research, and Consumer Information Order, Definitions, § 1209...

  20. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic in the sources or, equivalently, in the rms β-beating. However, random errors do not have a systematic effect on the tune.

  1. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  2. Leading relativistic corrections for atomic P states calculated with a finite-nuclear-mass approach and all-electron explicitly correlated Gaussian functions

    Science.gov (United States)

    Stanke, Monika; Bralin, Amir; Bubin, Sergiy; Adamowicz, Ludwik

    2018-01-01

    In this work we report progress in the development and implementation of quantum-mechanical methods for calculating bound ground and excited states of small atomic systems. The work concerns singlet states with the L = 1 total orbital angular momentum (P states). The method is based on the finite-nuclear-mass (non-Born-Oppenheimer; non-BO) approach and the use of all-particle explicitly correlated Gaussian functions for expanding the nonrelativistic wave function of the system. The development presented here includes derivation and implementation of algorithms for calculating the leading relativistic corrections for singlet states. The corrections are determined in the framework of the perturbation theory as expectation values of the corresponding effective operators using the non-BO wave functions. The method is tested in the calculations of the ten lowest ¹P states of the helium atom and the four lowest ¹P states of the beryllium atom.

  3. CONSTRAINTS ON THE RELATIONSHIP BETWEEN STELLAR MASS AND HALO MASS AT LOW AND HIGH REDSHIFT

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Somerville, Rachel S.; Maulbetsch, Christian; Van den Bosch, Frank C.; Maccio, Andrea V.; Naab, Thorsten; Oser, Ludwig

    2010-01-01

    We use a statistical approach to determine the relationship between the stellar masses of galaxies and the masses of the dark matter halos in which they reside. We obtain a parameterized stellar-to-halo mass (SHM) relation by populating halos and subhalos in an N-body simulation with galaxies and requiring that the observed stellar mass function be reproduced. We find good agreement with constraints from galaxy-galaxy lensing and predictions of semi-analytic models. Using this mapping, and the positions of the halos and subhalos obtained from the simulation, we find that our model predictions for the galaxy two-point correlation function (CF) as a function of stellar mass are in excellent agreement with the observed clustering properties in the Sloan Digital Sky Survey at z = 0. We show that the clustering data do not provide additional strong constraints on the SHM function and conclude that our model can therefore predict clustering as a function of stellar mass. We compute the conditional mass function, which yields the average number of galaxies with stellar masses in the range m ± dm/2 that reside in a halo of mass M. We study the redshift dependence of the SHM relation and show that, for low-mass halos, the SHM ratio is lower at higher redshift. The derived SHM relation is used to predict the stellar mass dependent galaxy CF and bias at high redshift. Our model predicts that not only are massive galaxies more biased than low-mass galaxies at all redshifts, but also the bias increases more rapidly with increasing redshift for massive galaxies than for low-mass ones. We present convenient fitting functions for the SHM relation as a function of redshift, the conditional mass function, and the bias as a function of stellar mass and redshift.
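
    The populate-halos-to-match-the-observed-stellar-mass-function step is, in its simplest scatter-free form, plain abundance matching: rank-order halos and galaxies and pair them monotonically. The sketch below shows only that idea; the paper instead fits a parameterized SHM relation with scatter, and the mass ranges here are arbitrary stand-ins.

```python
import numpy as np

def abundance_match(halo_masses, stellar_masses):
    """Scatter-free abundance matching: pair the n-th most massive halo
    with the n-th most massive galaxy, so the input stellar mass
    function is reproduced by construction."""
    h = np.sort(np.asarray(halo_masses, dtype=float))[::-1]
    s = np.sort(np.asarray(stellar_masses, dtype=float))[::-1]
    return h, s        # matched pairs: halo i <-> galaxy i

rng = np.random.default_rng(3)
halos = 10 ** rng.uniform(10, 14, size=1000)    # halo masses [Msun]
stars = 10 ** rng.uniform(7, 11, size=1000)     # stellar masses [Msun]
mh, ms = abundance_match(halos, stars)
shm_ratio = ms / mh                              # stellar-to-halo mass ratio
```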

  4. Direct analysis in real time mass spectrometry and multivariate data analysis: a novel approach to rapid identification of analytical markers for quality control of traditional Chinese medicine preparation.

    Science.gov (United States)

    Zeng, Shanshan; Wang, Lu; Chen, Teng; Wang, Yuefei; Mo, Huanbiao; Qu, Haibin

    2012-07-06

    The paper presents a novel strategy to identify analytical markers of traditional Chinese medicine preparation (TCMP) rapidly via direct analysis in real time mass spectrometry (DART-MS). A commonly used TCMP, Danshen injection, was employed as a model. The optimal analysis conditions were achieved by measuring the contribution of various experimental parameters to the mass spectra. Salvianolic acids and saccharides were simultaneously determined within a single 1-min DART-MS run. Furthermore, spectra of Danshen injections supplied by five manufacturers were processed with principal component analysis (PCA). Obvious clustering was observed in the PCA score plot, and candidate markers were recognized from the contribution plots of PCA. The suitability of potential markers was then confirmed by contrasting with the results of traditional analysis methods. Using this strategy, fructose, glucose, sucrose, protocatechuic aldehyde and salvianolic acid A were rapidly identified as the markers of Danshen injections. The combination of DART-MS with PCA provides a reliable approach to the identification of analytical markers for quality control of TCMP. Copyright © 2012 Elsevier B.V. All rights reserved.
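
    The PCA step of the workflow (a score plot for clustering, a contribution/loadings plot for candidate markers) can be sketched as follows; the binning, array shapes, and the marker-picking rule are illustrative assumptions, not the authors' exact processing.

```python
import numpy as np
from sklearn.decomposition import PCA

def candidate_markers(spectra, mz_labels, n_top=5):
    """Cluster samples by PCA on their (binned) mass spectra and pick
    candidate markers as the m/z bins with the largest absolute
    loadings on the first component."""
    pca = PCA(n_components=2)
    scores = pca.fit_transform(spectra)      # coordinates for the score plot
    loading = np.abs(pca.components_[0])     # contribution of each m/z bin
    top = np.argsort(-loading)[:n_top]
    return scores, [mz_labels[i] for i in top]

rng = np.random.default_rng(6)
spectra = rng.random((25, 200))              # 25 samples, 200 m/z bins
spectra[:12, 42] += 2.0                      # one group enriched at bin 42
scores, markers = candidate_markers(spectra, mz_labels=list(range(200)))
print(markers)                               # bin 42 surfaces as a marker
```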

  5. Direct identification of bacteria in blood culture by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry: a new methodological approach.

    Science.gov (United States)

    Kroumova, Vesselina; Gobbato, Elisa; Basso, Elisa; Mucedola, Luca; Giani, Tommaso; Fortina, Giacomo

    2011-08-15

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has recently been demonstrated to be a powerful tool for the rapid identification of bacteria from growing colonies. In order to speed up the identification of bacteria, several authors have evaluated the usefulness of this MALDI-TOF MS technology for the direct and quick identification of bacteria from positive blood cultures. The results obtained so far have been encouraging but have also shown some limitations, mainly related to the bacterial growth and to the presence of interfering substances in the blood cultures. In this paper, we present a new methodological approach that we have developed to overcome these limitations, based mainly on an enrichment of the sample in a growing medium before the extraction process, prior to mass spectrometric analysis. The proposed method shows important advantages for the identification of bacterial strains, yielding an increased identification score, which gives higher confidence in the results. Copyright © 2011 John Wiley & Sons, Ltd.

  6. Distribution patterns of flavonoids from three Momordica species by ultra-high performance liquid chromatography quadrupole time of flight mass spectrometry: a metabolomic profiling approach

    Directory of Open Access Journals (Sweden)

    Ntakadzeni Edwin Madala

    Plants from the Momordica genus, Cucurbitaceae, are used for several purposes, especially for their nutritional and medicinal properties. Commonly known as bitter gourds, melon and cucumber, these plants are characterized by a bitter taste owing to their large content of cucurbitacin compounds. However, several reports have shown an undisputed correlation between the therapeutic activities and the polyphenolic flavonoid content. Using ultra-high performance liquid chromatography quadrupole time-of-flight mass spectrometry in combination with multivariate data models such as principal component analysis and hierarchical cluster analysis, three Momordica species (M. foetida Schumach., M. charantia L. and M. balsamina L.) were chemo-taxonomically grouped based on their flavonoid content. Using a conventional mass spectrometric-based approach, thirteen flavonoids were tentatively identified, and the three species were found to contain different isomers of the quercetin-, kaempferol- and isorhamnetin-O-glycosides. Our results indicate that Momordica species are overall very rich sources of flavonoids but contain different forms thereof. Furthermore, to the best of our knowledge, this is the first report on the flavonoid content of M. balsamina L.

  7. Proof of the identity between the depletion layer thickness and half the average span for an arbitrary polymer chain

    DEFF Research Database (Denmark)

    Wang, Yanwei; Peters, Günther H.J.; Hansen, Flemming Yssing

    2008-01-01

    ... point in the polymer chain (such as the center of mass, middle segment, and end segments) can be computed as a function of the confinement size solely based on a single sampling of the configuration space of a polymer chain in bulk. Through a simple analysis based on the CABS approach in the case... of a single wall, we prove rigorously that (i) the depletion layer thickness delta is the same no matter which reference point is used to describe the depletion profile and (ii) the value of delta equals half the average span (the mean projection onto a line) of the macromolecule in free solution. Both...

  8. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  9. Mean link versus average plaquette tadpoles in lattice NRQCD

    Science.gov (United States)

    Shakespeare, Norman H.; Trottier, Howard D.

    1999-03-01

    We compare mean-link and average-plaquette tadpole renormalization schemes in the context of the quarkonium hyperfine splittings in lattice NRQCD. Simulations are done for the three quarkonium systems cc̄, bc̄, and bb̄. The hyperfine splittings are computed both at leading and at next-to-leading order in the relativistic expansion. Results are obtained at a large number of lattice spacings. A number of features emerge, all of which favor tadpole renormalization using mean links. This includes much better scaling of the hyperfine splittings in the three quarkonium systems. We also find that relativistic corrections to the spin splittings are smaller with mean-link tadpoles, particularly for the cc̄ and bc̄ systems. We also see signs of a breakdown in the NRQCD expansion when the bare quark mass falls below about one in lattice units (with the bare quark masses turning out to be much larger with mean-link tadpoles).

  10. Comparative Ebulliometry: a Simple, Reliable Technique for Accurate Measurement of the Number Average Molecular Weight of Macromolecules. Preliminary Studies on Heavy Crude Fractions

    Directory of Open Access Journals (Sweden)

    Behar E.

    2006-12-01

    This article is divided into two parts. In the first part, the authors present a comparison of the major techniques for the measurement of the molecular weight of macromolecules. The bibliographic results are gathered in several tables. In the second part, a comparative ebulliometer for the measurement of the number average molecular weight (Mn) of heavy crude oil fractions is described. The high efficiency of the apparatus is demonstrated with a preliminary study of atmospheric distillation residues and resins. The measurement of molecular weights up to 2000 g/mol is possible in less than 4 hours with an uncertainty of about 2%.

  11. Nongeostrophic theory of zonally averaged circulation. I - Formulation

    Science.gov (United States)

    Tung, Ka Kit

    1986-01-01

    A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations (mass conservation, thermodynamics, and zonal momentum) on a sphere. The relationship between the mean meridional circulation and diabatic heating rate is studied. Differences between results of nongeostropic theory and the geostrophic formulation concerning the role of eddy forcing of the diabatic circulation and the nonlinear nearly inviscid limit versus the geostrophic limit are discussed. Consideration is given to the Eliassen-Palm flux divergence, the Eliassen-Palm pseudodivergence, the nonacceleration theorem, and the nonlinear nongeostrophic Taylor relationship.

  12. Average Case Analysis of Java 7's Dual Pivot Quicksort

    OpenAIRE

    Wild, Sebastian; Nebel, Markus E.

    2013-01-01

    Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...
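
    The dual-pivot idea partitions around two pivots p < q into three parts (less than p, between the pivots, greater than q). A simplified sketch in the spirit of Yaroslavskiy's algorithm follows; production versions, including Java 7's, add insertion sort for short ranges and other tuning not shown here.

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """In-place dual-pivot quicksort (simplified sketch)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]                    # the two pivots, p <= q
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:                       # belongs in the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] > q:                     # belongs in the right part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]            # place the pivots
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
    return a

print(dual_pivot_quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]
```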

  13. Cosmological measure with volume averaging and the vacuum energy problem

    Science.gov (United States)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.

  15. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
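
    The estimator itself is a one-line recursion: each new segment periodogram P_k is folded into the running PSD estimate as S_k = (1 - alpha) * S_{k-1} + alpha * P_k, giving a time constant of roughly 1/alpha segments. A sketch with arbitrarily chosen segment length and alpha; white noise is used so the expected PSD is flat.

```python
import numpy as np

def exp_averaged_psd(signal, seg_len, alpha):
    """PSD estimation by exponential averaging of subsequent
    periodograms of non-overlapping segments."""
    psd = None
    for start in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[start:start + seg_len]
        pk = np.abs(np.fft.rfft(seg)) ** 2 / seg_len   # segment periodogram
        psd = pk if psd is None else (1 - alpha) * psd + alpha * pk
    return psd

rng = np.random.default_rng(4)
x = rng.normal(size=16384)                 # white noise: flat PSD expected
print(exp_averaged_psd(x, seg_len=256, alpha=0.1)[:5])
```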

  17. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of the average work productivity by the factors affecting it is conducted by means of the u-substitution method.

  18. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  19. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
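
    For a discrete trajectory, the TAMSD at lag t is simply the squared increment x(a + t) - x(a) averaged over all admissible window positions a. A sketch on a simulated Brownian path, for which the TAMSD grows linearly with the lag; the trajectory length and lags are arbitrary.

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement of one trajectory at a
    given lag: mean of (x[a + lag] - x[a])^2 over all positions a."""
    x = np.asarray(x, dtype=float)
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 2)

rng = np.random.default_rng(5)
path = np.cumsum(rng.normal(size=10000))           # discrete Brownian path
print([tamsd(path, lag) for lag in (1, 10, 100)])  # grows ~ linearly in lag
```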

  20. Fast food, other food choices and body mass index in teenagers in the United Kingdom (ALSPAC): a structural equation modelling approach.

    Science.gov (United States)

    Fraser, L K; Edwards, K L; Cade, J E; Clarke, G P

    2011-10-01

    To assess the association between the consumption of fast food (FF) and body mass index (BMI) of teenagers in a large UK birth cohort. A structural equation modelling (SEM) approach was chosen to allow direct statistical testing of a theoretical model. SEM is a combination of confirmatory factor and path analysis, which allows for the inclusion of latent (unmeasured) variables. This approach was used to build two models: the effect of FF outlet visits and food choices and the effect of FF exposure on consumption and BMI. A total of 3620 participants had data for height and weight from the age 13 clinic and the frequency of FF outlet visits, and so were included in these analyses. The SEM model of food choices showed that increased frequency of eating at FF outlets was positively associated with higher consumption of unhealthy foods (β=0.29) and negatively associated with consumption of healthy foods (β=-1.02), and teenagers who ate frequently at FF outlets were more likely to have higher BMI SDS than those who did not eat frequently at FF restaurants. Teenagers who were exposed to more takeaway foods at home ate more frequently at FF restaurants, and eating at FF restaurants was also associated with lower intakes of vegetables and raw fruit in this cohort.

  1. A Simple Approach for Obtaining High Resolution, High Sensitivity ¹H NMR Metabolite Spectra of Biofluids with Limited Mass Supply

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Jian Zhi; Rommereim, Donald N.; Wind, Robert A.; Minard, Kevin R.; Sears, Jesse A.

    2006-11-01

    A simple approach is reported that yields high-resolution, high-sensitivity ¹H NMR spectra of biofluids with limited mass supply. This is achieved by spinning a capillary sample tube containing a biofluid at the magic angle at a frequency of about 80 Hz. A 2D pulse sequence called ¹H PASS is then used to produce a high-resolution ¹H NMR spectrum that is free from magnetic-susceptibility-induced line broadening. With this new approach a high-resolution ¹H NMR spectrum of biofluids with a volume of less than 1.0 µl can be easily achieved at a magnetic field strength as low as 7.05 T. Furthermore, the methodology facilitates easy sample handling, i.e., the samples can be directly collected into inexpensive and disposable capillary tubes at the site of collection and subsequently used for NMR measurements. In addition, slow magic angle spinning improves magnetic field shimming and is especially suitable for high-throughput investigations. In this paper first results are shown, obtained in a magnetic field of 7.05 T on urine samples collected from mice using a modified commercial NMR probe.

  2. Application of a novel metabolomic approach based on atmospheric pressure photoionization mass spectrometry using flow injection analysis for the study of Alzheimer's disease.

    Science.gov (United States)

    González-Domínguez, Raúl; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2015-01-01

    The use of atmospheric pressure photoionization is not widespread in metabolomics, despite its considerable potential for the simultaneous analysis of compounds with diverse polarities. This work considers the development of a novel analytical approach based on flow injection analysis and atmospheric pressure photoionization mass spectrometry for rapid metabolic screening of serum samples. Several experimental parameters were optimized, such as type of dopant, flow injection solvent, and their flows, given that a careful selection of these variables is mandatory for a comprehensive analysis of metabolites. Toluene and methanol were the most suitable dopant and flow injection solvent, respectively. Moreover, analysis in negative mode required higher solvent and dopant flows (100 µl min⁻¹ and 40 µl min⁻¹, respectively) compared to positive mode (50 µl min⁻¹ and 20 µl min⁻¹). Then, the optimized approach was used to elucidate metabolic alterations associated with Alzheimer's disease. Thereby, results confirm the increase of diacylglycerols, ceramides, ceramide-1-phosphate and free fatty acids, indicating membrane destabilization processes, and the reduction of fatty acid amides and several neurotransmitters related to impairments in neuronal transmission, among others. Therefore, it could be concluded that this metabolomic tool presents great potential for the analysis of biological samples, considering its high-throughput screening capability, fast analysis and comprehensive metabolite coverage.

  3. Targeted Quantitation of Site-Specific Cysteine Oxidation in Endogenous Proteins Using a Differential Alkylation and Multiple Reaction Monitoring Mass Spectrometry Approach

    Science.gov (United States)

    Held, Jason M.; Danielson, Steven R.; Behring, Jessica B.; Atsriku, Christian; Britton, David J.; Puckett, Rachel L.; Schilling, Birgit; Campisi, Judith; Benz, Christopher C.; Gibson, Bradford W.

    2010-01-01

    Reactive oxygen species (ROS) are both physiological intermediates in cellular signaling and mediators of oxidative stress. The cysteine-specific redox-sensitivity of proteins can shed light on how ROS are regulated and function, but low sensitivity has limited quantification of the redox state of many fundamental cellular regulators in a cellular context. Here we describe a highly sensitive and reproducible oxidation analysis approach (OxMRM) that combines protein purification, differential alkylation with stable isotopes, and multiple reaction monitoring mass spectrometry that can be applied in a targeted manner to virtually any cysteine or protein. Using this approach, we quantified the site-specific cysteine oxidation status of endogenous p53 for the first time and found that Cys182 at the dimerization interface of the DNA binding domain is particularly susceptible to diamide oxidation intracellularly. OxMRM enables analysis of sulfinic and sulfonic acid oxidation levels, which we validate by assessing the oxidation of the catalytic Cys215 of protein tyrosine phosphatase-1B under numerous oxidant conditions. OxMRM also complements unbiased redox proteomics discovery studies as a verification tool through its high sensitivity, accuracy, precision, and throughput. PMID:20233844

  4. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  5. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality that is seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
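
    To make the distinction concrete, the sketch below computes period life expectancy and CAL side by side from a synthetic mortality surface: the period measure cumulates one calendar year's rates over age, while CAL cumulates each cohort's own history along the diagonals of the surface. The mortality model and all values are our assumptions, not the authors' data:

```python
import numpy as np

# Synthetic mortality surface: death probability q(year, age),
# rising with age and declining slowly over calendar time
years, ages = 200, 100
t = np.arange(years)[:, None]
a = np.arange(ages)[None, :]
q = np.clip(0.002 * np.exp(0.07 * a) * np.exp(-0.005 * t), 0.0, 1.0)

def period_e0(year):
    """Period life expectancy: survival built from a single year's rates."""
    survivors = np.cumprod(1.0 - q[year])
    return survivors.sum()

def cal(year):
    """Cross-sectional average length of life: sum over ages of the
    probability that the cohort born (year - age) survived to that age."""
    total = 0.0
    for age in range(ages):
        birth = year - age
        if birth < 0:
            continue
        steps = np.arange(age)
        s = np.prod(1.0 - q[birth + steps, steps]) if age > 0 else 1.0
        total += s
    return total

print(period_e0(150), cal(150))
```

    Because CAL mixes in the (here, heavier) mortality the older cohorts actually experienced, it falls below the period measure when mortality is improving, which is the sensitivity the abstract discusses.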

  6. Validating a mass balance accounting approach to using 7Be measurements to estimate event-based erosion rates over an extended period at the catchment scale

    Science.gov (United States)

    Porto, Paolo; Walling, Des E.; Cogliandro, Vanessa; Callegari, Giovanni

    2016-07-01

    Use of the fallout radionuclides cesium-137 and excess lead-210 offers important advantages over traditional methods of quantifying erosion and soil redistribution rates. However, both radionuclides provide information on longer-term (i.e., 50-100 years) average rates of soil redistribution. Beryllium-7, with its half-life of 53 days, can provide a basis for documenting short-term soil redistribution and it has been successfully employed in several studies. However, the approach commonly used introduces several important constraints related to the timing and duration of the study period. A new approach proposed by the authors that overcomes these constraints has been successfully validated using an erosion plot experiment undertaken in southern Italy. Here, a further validation exercise undertaken in a small (1.38 ha) catchment is reported. The catchment was instrumented to measure event sediment yields and beryllium-7 measurements were employed to document the net soil loss for a series of 13 events that occurred between November 2013 and June 2015. In the absence of significant sediment storage within the catchment's ephemeral channel system and of a significant contribution from channel erosion to the measured sediment yield, the estimates of net soil loss for the individual events could be directly compared with the measured sediment yields to validate the former. The close agreement of the two sets of values is seen as successfully validating the use of beryllium-7 measurements and the new approach to obtain estimates of net soil loss for a sequence of individual events occurring over an extended period at the scale of a small catchment.

  7. Strategies for method development for an inductively coupled plasma mass spectrometer with bandpass reaction cell. Approaches with different reaction gases for the determination of selenium

    International Nuclear Information System (INIS)

    Hattendorf, Bodo; Guenther, Detlef

    2003-01-01

    An inductively coupled plasma mass spectrometer with dynamic reaction cell (DRC) was used to investigate different approaches for chemical resolution of Ar₂⁺ ions and to improve the determination of Se. Hydrogen, methane, oxygen and nitrous oxide were used as reaction gases. The method development for each approach consists of the acquisition of spectra for blank and spiked samples at different operating parameters, including reaction gas flow and transmission settings, of the DRC. Isotope ratio studies and the analyte's signal-to-background ratio (SBR) were used as criteria to determine the operating conditions of the DRC where spectral interferences from the ion source or from polyatomic ions formed inside the DRC are minimized. Methane was found to provide the highest reaction efficiency for determination of Se. Nitrous oxide and oxygen also very efficiently suppress the Ar₂⁺ interference, but reaction or scattering losses of Se⁺ and SeO⁺ are significant. Hydrogen is the least efficient gas for Ar₂⁺ reduction, but little scattering or reactive loss leads to a good SBR. The determination of Se as SeO⁺ was investigated with oxygen and nitrous oxide as reaction gases. The efficiency when using the oxygenation reaction was found to be similar to the efficiency for the charge transfer reactions, but the slow oxygenation of the potentially interfering Mo⁺ renders this approach less useful for analytical purposes. Using a natural water sample it could be shown that very good agreement is obtained using methane or hydrogen for analysis of ⁸⁰Se⁺ at the μg/l level. Limits of detection are lowest (2 ng/l) when methane is used to suppress the Ar₂⁺ ion and when ⁸⁰Se⁺ is used for analysis

  8. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.

  9. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  10. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  11. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
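
    The basic trade-off can be illustrated numerically: for a tolerated decorrelation loss, short baselines admit much longer averaging intervals than long ones, which is what makes BDA pay off. The sketch below uses the standard sinc fringe-averaging approximation with a worst-case fringe rate at the edge of the field of view; all parameter values are our assumptions, not SKA specifications:

```python
import numpy as np

OMEGA_E = 7.292e-5   # Earth rotation rate [rad/s]

def max_avg_time(baseline_m, wavelength_m, fov_rad, max_loss=0.01):
    """Longest averaging interval (s) whose predicted time-smearing
    amplitude loss at the field-of-view edge stays below max_loss.
    Worst-case fringe rate ~ (B / lambda) * omega_E * theta [cycles/s]."""
    fringe_rate = (baseline_m / wavelength_m) * OMEGA_E * fov_rad
    taus = np.linspace(0.1, 600.0, 6000)
    loss = 1.0 - np.abs(np.sinc(fringe_rate * taus))  # np.sinc(x) = sin(pi x)/(pi x)
    ok = taus[loss <= max_loss]
    return ok[-1] if ok.size else taus[0]

# Short baselines tolerate far longer averaging than long ones
for b in (100.0, 1_000.0, 10_000.0, 100_000.0):
    print(f"{b:>9.0f} m -> {max_avg_time(b, 0.21, np.radians(1.0)):7.1f} s")
```

    The roughly inverse scaling of averaging time with baseline length is the origin of the large (here, order-of-magnitude) data-volume reductions the paper quantifies.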

  12. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much smaller electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  13. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution

  14. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  15. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233-237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
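
    The final bound is easy to verify computationally on a small triangle-free graph. The sketch below brute-forces the independence number of the Petersen graph (n = 10, m = 15) and compares it with (4n−m−1)/7; the example graph and code are ours, not the authors':

```python
from itertools import combinations

# Petersen graph: outer 5-cycle, spokes, inner pentagram (triangle-free)
edges = {frozenset(e) for e in
         [(i, (i + 1) % 5) for i in range(5)] +          # outer cycle
         [(i, i + 5) for i in range(5)] +                # spokes
         [(i + 5, (i + 2) % 5 + 5) for i in range(5)]}   # inner pentagram

def independence_number(n, edges):
    """Brute force: largest vertex subset containing no edge."""
    for k in range(n, 0, -1):
        for s in combinations(range(n), k):
            ss = set(s)
            if not any(e <= ss for e in edges):
                return k
    return 0

n, m = 10, len(edges)
alpha = independence_number(n, edges)
print(f"alpha = {alpha}, bound (4n-m-1)/7 = {(4 * n - m - 1) / 7:.2f}")
# alpha = 4 >= 3.43, consistent with the bound
```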

  16. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust" variance estimator.
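
    For orientation, a minimal one-dimensional version of the density-weighted average derivative estimator with a naive pairs bootstrap looks as follows. This is our illustrative implementation with a Gaussian kernel and an arbitrary bandwidth; it deliberately ignores the bandwidth-sequence subtleties that the paper analyzes:

```python
import numpy as np

def dwad(x, y, h):
    """1-D density-weighted average derivative (leave-one-out Gaussian kernel):
    theta_hat = -(2/n) * sum_i y_i * fhat'_{-i}(x_i)."""
    n = len(x)
    u = (x[:, None] - x[None, :]) / h
    kprime = -u * np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)  # K'(u), Gaussian K
    np.fill_diagonal(kprime, 0.0)                             # leave one out
    fprime = kprime.sum(axis=1) / ((n - 1) * h ** 2)
    return -2.0 * np.mean(y * fprime)

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # single-index model with slope 2
h = n ** (-1 / 5)                  # arbitrary illustrative bandwidth

theta = dwad(x, y, h)
# Naive pairs bootstrap: resample (x_i, y_i) pairs and re-estimate
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(dwad(x[idx], y[idx], h))
print(theta, np.percentile(boot, [2.5, 97.5]))
```

    Whether intervals built this way have correct coverage as h shrinks with n is precisely the question the paper addresses.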

  17. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  18. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  19. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  20. Towards an evidence-based approach for diagnosis and management of adnexal masses: findings of the International Ovarian Tumour Analysis (IOTA) studies.

    Science.gov (United States)

    Kaijser, J

    2015-01-01

    Whilst the outcomes for patients with ovarian cancer clearly benefit from centralised, comprehensive care in dedicated cancer centres, unfortunately the majority of patients still do not receive appropriate specialist treatment. Any improvement in the accuracy of current triaging and referral pathways whether using new imaging tests or biomarkers would therefore be of value in order to optimise the appropriate selection of patients for such care. An analysis of current evidence shows that such tests are now available, but still await recognition, acceptance and widespread adoption. It is therefore to be hoped that present guidance relating to the classification of ovarian masses will soon become more "evidence-based". These promising tests include the International Ovarian Tumour Analysis (IOTA) LR2 model and ultrasound-based Simple Rules (SR). Based on a comprehensive recent meta-analysis both currently offer the optimal "evidence-based" approach to discriminating between cancer and benign conditions in women with adnexal tumours needing surgery. LR2 and SR are reliable tests having been shown to maintain a high sensitivity for cancer after independent external and temporal validation by the IOTA group in the hands of examiners with various levels of ultrasound expertise. They also offer more accurate triage compared to the existing Risk of Malignancy Index (RMI). The development of the IOTA ADNEX model represents an important step forward towards more individualised patient care in this area. ADNEX is a novel test that enables the more specific subtyping of adnexal cancers (i.e. borderline, stage 1 invasive, stage II-IV invasive, and secondary metastatic malignant tumours) and shares similar levels of accuracy to IOTA LR2 and SR for basic discrimination between cancer and benign disease. The IOTA study has made significant progress in relation to the classification of adnexal masses, however what is now needed is to see if these or new diagnostic tools can assist

  1. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper.
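
    The practical content of trajectory averaging is easy to demonstrate on a toy Robbins-Monro recursion: averaging the iterates gives a far less noisy estimate than the last iterate alone. This is a minimal sketch of the general idea, not of SAMCMC itself; the target, gain sequence, and burn-in choice are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_observation(theta):
    # Noisy observation of h(theta) = theta - 3, root at theta* = 3
    return (theta - 3.0) + rng.normal()

theta, iterates = 0.0, []
for k in range(1, 20_001):
    gamma = 1.0 / k ** 0.7          # slowly decaying gain, as the theory requires
    theta -= gamma * noisy_observation(theta)
    iterates.append(theta)

last = iterates[-1]
averaged = np.mean(iterates[len(iterates) // 2:])  # average the trajectory tail
print(f"last iterate: {last:.4f}, trajectory average: {averaged:.4f}")
```

    The averaged estimate typically sits much closer to 3 than the last iterate, which is the efficiency gain the paper establishes rigorously for SAMCMC.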

  2. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  3. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by 'exact' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
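
    To make the averaged quantity concrete, the sketch below numerically averages a simple allowed beta spectrum shape, N(T) ∝ p·E_tot·(Q−T)², deliberately omitting the Fermi function whose computation the paper's method is designed to avoid; the Q-values are arbitrary examples, not data from the paper:

```python
import numpy as np

ME = 0.511  # electron rest mass energy [MeV]

def mean_beta_energy(q_mev, npts=10_000):
    """Spectrum-averaged beta kinetic energy for an allowed transition,
    N(T) ~ p * E_tot * (Q - T)^2, Fermi function omitted."""
    t = np.linspace(1e-6, q_mev, npts)     # kinetic energy grid
    e_tot = t + ME
    p = np.sqrt(e_tot ** 2 - ME ** 2)      # momentum [MeV/c]
    n = p * e_tot * (q_mev - t) ** 2       # allowed spectrum shape
    return np.trapz(t * n, t) / np.trapz(n, t)

for q in (0.5, 1.0, 3.0, 8.0):
    print(f"Q = {q:4.1f} MeV -> <E_beta> = {mean_beta_energy(q):.3f} MeV")
```

    The averages come out near the familiar one-third-of-Q rule; the Coulomb (Fermi-function) correction the paper approximates shifts them at the few-percent level.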

  4. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies above the Friedmannian one, and below it at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  5. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
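
    A minimal simulation of the effect (our illustration, with an assumed relative-error model) shows both the bias from variance-weighting by self-reported errors and the iterative cure of re-deriving each error from the current average:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, rel_err, n = 100.0, 0.2, 10_000

m = mu * (1.0 + rel_err * rng.normal(size=n))   # measurements
sigma_reported = rel_err * m                    # error assigned from own value

# Naive weighted average: low measurements get small errors, hence big weights
w = 1.0 / sigma_reported ** 2
naive = np.sum(w * m) / np.sum(w)

# Iterative fix: recompute each error from the current average, then re-average
avg = naive
for _ in range(20):
    sigma = rel_err * avg                       # same error model, central value
    w = np.full(n, 1.0 / sigma ** 2)
    avg = np.sum(w * m) / np.sum(w)

print(f"true {mu}, naive {naive:.2f} (biased low), iterated {avg:.2f}")
```

    With errors proportional to the measured value, the naive weighted mean lands several percent low, while the iterated average recovers the true value, the bias and cure the abstract describes.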

  6. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.

  7. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
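
    As a concrete, if simplistic, instance of a fixed-weight model-averaging scheme of the kind compared above, the sketch below combines three nested polynomial regressions with Akaike weights; the data-generating process and model set are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x ** 2 + rng.normal(0, 0.5, n)   # true model is quadratic

def fit_poly(deg):
    X = np.vander(x, deg + 1)                     # columns x^deg ... x^0
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    k = deg + 1
    aic = n * np.log(np.mean(resid ** 2)) + 2 * k # Gaussian AIC up to a constant
    return beta, aic

fits = [fit_poly(d) for d in (1, 2, 3)]
aics = np.array([a for _, a in fits])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()                                      # Akaike weights

x0 = 1.5                                          # prediction point
preds = [np.polyval(beta, x0) for beta, _ in fits]
print("weights:", np.round(w, 3), "averaged prediction:", np.dot(w, preds))
```

    A PMSE corresponds to replacing the smooth weights w with a 0-1 indicator on the AIC-minimizing model, which is exactly the special case the paper uses to connect the two theories.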

  8. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
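
    The core idea, a smoothing window that widens with migration time so that early, high-frequency peaks are smoothed lightly and late, low-frequency peaks heavily, can be sketched as follows. The linear window-growth rule and all constants here are our assumptions, not the published algorithm:

```python
import numpy as np

def adaptive_moving_average(signal, fs, base_window_s=0.5, growth=0.02):
    """Moving average whose window widens with migration time:
    window(t) = base_window_s + growth * t  [seconds]."""
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        t = i / fs
        half = max(1, int(0.5 * (base_window_s + growth * t) * fs))
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out

# Synthetic electropherogram: sharp early peak, broad late peak, noise
fs = 25.0                                  # sampling frequency [Hz]
t = np.arange(0, 600, 1 / fs)
sig = (np.exp(-0.5 * ((t - 60) / 1.5) ** 2) +
       np.exp(-0.5 * ((t - 480) / 12.0) ** 2) +
       0.05 * np.random.default_rng(0).normal(size=t.size))
smoothed = adaptive_moving_average(sig, fs)
print(sig.std(), smoothed.std())
```

    Matching the window to the local peak width is what lets such a filter suppress noise on late, broad peaks without distorting early, sharp ones.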

  9. Software-aided approach to investigate peptide structure and metabolic susceptibility of amide bonds in peptide drugs based on high resolution mass spectrometry.

    Directory of Open Access Journals (Sweden)

    Tatiana Radchenko

    Interest in using peptide molecules as therapeutic agents due to high selectivity and efficacy is increasing within the pharmaceutical industry. However, most peptide-derived drugs cannot be administered orally because of low bioavailability and instability in the gastrointestinal tract due to protease activity. Therefore, structural modifications of peptides are required to improve their stability. For this purpose, several in-silico software tools have been developed, such as PeptideCutter or PoPS, which aim to predict peptide cleavage sites for different proteases. Moreover, several databases exist where this information is collected and stored from public sources, such as the MEROPS and ExPASy ENZYME databases. These tools can help design a peptide drug with increased stability against proteolysis, though they are limited to natural amino acids or cannot process cyclic peptides, for example. We worked to develop a new methodology to analyze peptide structure and amide bond metabolic stability based on the peptide structure (linear/cyclic, natural/unnatural amino acids). This approach used liquid chromatography / high resolution mass spectrometry to obtain the analytical data from in vitro incubations. We collected experimental data for a set (linear/cyclic, natural/unnatural amino acids) of fourteen peptide drugs and four substrate peptides incubated with different proteolytic media: trypsin, chymotrypsin, pepsin, pancreatic elastase, dipeptidyl peptidase-4 and neprilysin. Mass spectrometry data was analyzed to find metabolites and determine their structures, then all the results were stored in a chemically aware manner, which allows us to compute the peptide bond susceptibility by using a frequency analysis of the metabolically labile bonds. In total 132 metabolites were found from the various in vitro conditions tested, resulting in 77 distinct cleavage sites. The most frequently observed cleavage sites agreed with those reported in the literature.

  10. A solid-phase microextraction-gas chromatographic approach combined with triple quadrupole mass spectrometry for the assay of carbamate pesticides in water samples.

    Science.gov (United States)

    Cavaliere, Brunella; Monteleone, Marcello; Naccarato, Attilio; Sindona, Giovanni; Tagarelli, Antonio

    2012-09-28

    A simple and sensitive method was developed for the quantification of five carbamate pesticides in water samples using solid phase microextraction (SPME) combined with gas chromatography-triple quadrupole mass spectrometry (GC-QqQ-MS). The performance of five SPME fibers was tested in univariate mode, whereas the other variables affecting the efficiency of SPME analysis were optimized by the multivariate approach of design of experiments (DoE); in particular, a central composite design (CCD) was applied. The optimum working conditions in terms of response values were achieved by performing analysis with a polydimethylsiloxane/divinylbenzene (PDMS/DVB) fiber in immersion mode for 45 min at room temperature with addition of NaCl (10%). The multivariate chemometric approach was also used to explore the chromatographic behavior of the carbamates and to evaluate the importance of each variable investigated. An overall appraisal of the results shows that the only factor with a statistically significant effect on the response was the injection temperature. Identification and quantification of carbamates were performed using a gas chromatography-triple quadrupole mass spectrometry (GC-QqQ-MS) system in multiple reaction monitoring (MRM) acquisition. Since the choice of internal standard represented a crucial step in the development of the method to achieve good reproducibility and robustness for the entire analytical protocol, three compounds (2,3,5-trimethacarb, 4-bromo-3,5-dimethylphenyl-n-methylcarbamate (BDMC) and carbaryl-d7) were evaluated as internal standards. Both precision and accuracy of the proposed protocol, tested at concentrations of 0.08, 5 and 3 μg l⁻¹, ranged from 70.8% to 115.7% (except for carbaryl at 3 μg l⁻¹) for accuracy and from 1.0% to 9.0% for precision. Moreover, LOD and LOQ values ranging from 0.04 to 1.7 ng l⁻¹ and from 0.64 to 2.9 ng l⁻¹, respectively, can be considered very satisfactory.

  11. Longitudinal associations between body mass index, physical activity, and healthy dietary behaviors in adults: A parallel latent growth curve modeling approach.

    Directory of Open Access Journals (Sweden)

    Youngdeok Kim

    Physical activity (PA) and healthy dietary behaviors (HDB) are two well-documented lifestyle factors influencing body mass index (BMI). This study examined 7-year longitudinal associations between changes in PA, HDB, and BMI among adults using a parallel latent growth curve modeling (LGCM) approach. We used prospective cohort data collected by a private company (SimplyWell LLC, Omaha, NE, USA) implementing a workplace health screening program. Data from a total of 2,579 adults who provided valid BMI, PA, and HDB information for at least 5 out of 7 follow-up years from the time they entered the program were analyzed. PA and HDB were subjectively measured during an annual online health survey. Height and weight measured during an annual onsite health screening were used to calculate BMI (kg·m⁻²). The parallel LGCMs, stratified by gender and baseline weight status (normal, overweight, and obese), were fitted to examine the longitudinal associations of changes in PA and HDB with change in BMI over the years. On average, BMI gradually increased over the years, at rates ranging from 0.06 to 0.20 kg·m⁻²·year⁻¹, with larger increases observed among those of normal baseline weight status across genders. The increases in PA and HDB were independently associated with a smaller increase in BMI for obese males (b = -1.70 and -1.98, respectively), and for overweight females (b = -1.85 and -2.46, respectively) and obese females (b = -2.78 and -3.08, respectively). However, no significant associations of baseline PA and HDB with changes in BMI were observed. Our study suggests that gradual increases in PA and HDB are independently associated with smaller increases in BMI in overweight and obese adults, but not in normal weight individuals. Further study is warranted to address factors that check increases in BMI in normal weight adults.

  12. Dual-domain mass-transfer parameters from electrical hysteresis: theory and analytical approach applied to laboratory, synthetic streambed, and groundwater experiments

    Science.gov (United States)

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Harvey, Judson W.; Lane, John W.

    2014-01-01

    Models of dual-domain mass transfer (DDMT) are used to explain anomalous aquifer transport behavior such as the slow release of contamination and solute tracer tailing. Traditional tracer experiments to characterize DDMT are performed at the flow path scale (meters), which inherently incorporates heterogeneous exchange processes; hence, estimated “effective” parameters are sensitive to experimental design (i.e., duration and injection velocity). Recently, electrical geophysical methods have been used to aid in the inference of DDMT parameters because, unlike traditional fluid sampling, electrical methods can directly sense less-mobile solute dynamics and can target specific points along subsurface flow paths. Here we propose an analytical framework for graphical parameter inference based on a simple petrophysical model explaining the hysteretic relation between measurements of bulk and fluid conductivity arising in the presence of DDMT at the local scale. Analysis is graphical and involves visual inspection of hysteresis patterns to (1) determine the size of paired mobile and less-mobile porosities and (2) identify the exchange rate coefficient through simple curve fitting. We demonstrate the approach using laboratory column experimental data, synthetic streambed experimental data, and field tracer-test data. Results from the analytical approach compare favorably with results from calibration of numerical models and also independent measurements of mobile and less-mobile porosity. We show that localized electrical hysteresis patterns resulting from diffusive exchange are independent of injection velocity, indicating that repeatable parameters can be extracted under varied experimental designs, and these parameters represent the true intrinsic properties of specific volumes of porous media of aquifers and hyporheic zones.
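
    The hysteresis itself can be reproduced with a minimal first-order exchange model: the less-mobile domain lags the mobile one, so bulk conductivity (which senses both domains) traces a loop when plotted against fluid conductivity (which senses only the mobile domain). The parameter values and the linear petrophysical mapping below are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

alpha = 0.05                      # first-order exchange rate [1/h]
theta_m, theta_lm = 0.30, 0.10    # mobile / less-mobile porosities

t = np.linspace(0, 200, 4001)     # hours
dt = t[1] - t[0]
c_m = np.exp(-0.5 * ((t - 50) / 15.0) ** 2)   # mobile-domain tracer pulse

# Less-mobile domain lags the mobile domain: dC_lm/dt = alpha * (C_m - C_lm)
c_lm = np.zeros_like(c_m)
for i in range(1, len(t)):
    c_lm[i] = c_lm[i - 1] + dt * alpha * (c_m[i - 1] - c_lm[i - 1])

sigma_fluid = c_m                             # fluid sampling sees mobile only
sigma_bulk = theta_m * c_m + theta_lm * c_lm  # electrical method senses both

# Bulk vs. fluid conductivity differs between the rising and falling limbs,
# i.e. a hysteresis loop; its width reflects theta_lm and its shape alpha.
rising = t < 50
print(np.interp(0.5, sigma_fluid[rising], sigma_bulk[rising]),
      np.interp(0.5, sigma_fluid[~rising][::-1], sigma_bulk[~rising][::-1]))
```

    The two printed bulk values at the same fluid conductivity differ, which is the loop the graphical method reads porosities and the rate coefficient from.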

  13. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  14. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M₅₆, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  15. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  16. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  17. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV
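
    The record does not reproduce the analytic expression, but the underlying single-angle ingredient is standard. The sketch below evaluates the Klein-Nishina cross section for an electron initially at rest and averages it over the polar angle θ, recovering the Thomson limit at low energy; it is a simplified illustration of the six-variable problem described above, not the authors' formula:

```python
import numpy as np

R_E = 2.8179403e-13   # classical electron radius [cm]

def klein_nishina(alpha, theta):
    """Klein-Nishina dsigma/dOmega [cm^2/sr] for photon energy
    alpha = E/(m0 c^2), electron initially at rest."""
    ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))   # E'/E
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

def total_cross_section(alpha, npts=20_000):
    # Integrate over solid angle with weight 2*pi*sin(theta)
    theta = np.linspace(0.0, np.pi, npts)
    return np.trapz(2 * np.pi * np.sin(theta) * klein_nishina(alpha, theta), theta)

# Low-energy limit recovers the Thomson cross section, 6.652e-25 cm^2
for a in (1e-4, 0.1, 1.0, 10.0):
    print(f"alpha = {a:6.4f} -> sigma = {total_cross_section(a):.3e} cm^2")
```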

  18. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
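
    The AGDI construction reduces to accumulating absolute differences between adjacent silhouettes. Below is a minimal sketch of our reading of that definition, with a synthetic sequence standing in for real gait silhouettes:

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """AGDI: average of absolute differences between adjacent binary
    silhouettes; shape (n_frames, height, width), values 0/1."""
    diffs = np.abs(np.diff(silhouettes.astype(float), axis=0))
    return diffs.mean(axis=0)

# Synthetic 'walk': a bright block shifting one column per frame
frames = np.zeros((10, 32, 32))
for i in range(10):
    frames[i, 10:22, 5 + i:11 + i] = 1.0

agdi = average_gait_differential_image(frames)
print(agdi.shape, agdi.max())   # moving edges dominate the AGDI
```

    The resulting feature image would then be flattened or fed row-wise to 2DPCA for dimensionality reduction, as the record describes.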

  19. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation

  20. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.