WorldWideScience

Sample records for average mass approach

  1. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...

  3. Direct determination approach for the multifractal detrending moving average analysis

    Science.gov (United States)

    Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing

    2017-11-01

    In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
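
    The canonical-measure construction the abstract refers to can be sketched with standard partition-function relations (generic direct-determination notation, not quoted from the paper; F(s,v) denotes the detrended fluctuation of box v at scale s):

```latex
% Partition function of the detrended fluctuations F(s,v) over boxes v at scale s:
\chi_q(s) = \sum_{v} F^{q}(s,v) \sim s^{\tau(q)}
% Canonical measure:
\mu_q(s,v) = \frac{F^{q}(s,v)}{\sum_{v'} F^{q}(s,v')}
% Direct determination of the singularity strength and spectrum:
\alpha(q) = \lim_{s\to\infty}\frac{\sum_{v}\mu_q(s,v)\,\ln F(s,v)}{\ln s},\qquad
f(q) = \lim_{s\to\infty}\frac{\sum_{v}\mu_q(s,v)\,\ln \mu_q(s,v)}{\ln s}
% Consistency check: the Legendre relation f(\alpha) = q\,\alpha(q) - \tau(q).
```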

  4. Average Transverse Momentum Quantities Approaching the Lightfront

    NARCIS (Netherlands)

    Boer, Daniel

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the p_T broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large

  5. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
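
    For contrast, a minimal brute-force version of what the analytical framework avoids, namely repeatedly retraining on bootstrap resamples and averaging the out-of-sample error, might look like this (toy linear-regression data; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy linear relationship.
x = np.linspace(0, 1, 40)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)

# Brute-force bootstrap average of the out-of-sample squared error:
# this is the repeated retraining that the analytical approach avoids.
B = 200
errors = []
for _ in range(B):
    idx = rng.integers(0, x.size, x.size)          # resample with replacement
    oob = np.setdiff1d(np.arange(x.size), idx)     # out-of-bag points
    if oob.size == 0:
        continue
    slope, intercept = np.polyfit(x[idx], y[idx], 1)  # retrain on the resample
    errors.append(np.mean((slope * x[oob] + intercept - y[oob]) ** 2))

bootstrap_avg = np.mean(errors)
print(f"bootstrap-averaged test error: {bootstrap_avg:.4f}")
```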

  6. Partial Averaged Navier-Stokes approach for cavitating flow

    International Nuclear Information System (INIS)

    Zhang, L; Zhang, Y N

    2015-01-01

    Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) with reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, providing a bridging method from traditional Reynolds Averaged Navier-Stokes (RANS) to direct numerical simulation through the choice of appropriate parameters. Compared with RANS, the PANS model inherits much of the physics of its parent RANS model but resolves more scales of motion in greater detail, making PANS superior to RANS. An important step in the PANS approach is to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced with a focus on the influences of the filter-width control parameters on the simulation results
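
    The filter-width control parameters mentioned above are conventionally written as unresolved-to-total ratios (standard PANS notation, not quoted from this paper):

```latex
% Filter-width control parameters:
f_k = \frac{k_u}{k}, \qquad f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}
% f_k = 1 recovers RANS; f_k \to 0 approaches DNS.
% The unresolved-scale eddy viscosity is rescaled accordingly:
\nu_u = \frac{f_k^{2}}{f_\varepsilon}\,\nu_t
```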

  7. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder-an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
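
    A minimal sketch of model averaging over two competing clusterings, with posterior model weights approximated from an information criterion (all probabilities and BIC values below are hypothetical; the paper's actual weighting is Bayesian and problem-specific):

```python
import numpy as np

# Hypothetical per-individual phenotype-class probabilities from two
# clustering models (e.g. latent class analysis vs. grade of membership).
p_model1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p_model2 = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])

# Hypothetical model-fit scores (e.g. BIC); lower is better.
bic = np.array([100.0, 102.0])

# Approximate posterior model weights: w_m proportional to exp(-BIC_m / 2).
w = np.exp(-(bic - bic.min()) / 2.0)
w /= w.sum()

# Model-averaged phenotype probabilities instead of committing to one model.
p_avg = w[0] * p_model1 + w[1] * p_model2
print(p_avg)
```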

  8. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both used by official state statistics of the Russian Federation and offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, calculation-mathematical and graphical. The main result of the research is an option of supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions of a wide range of organizations, when an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, the situation is frequent when the average salary at the enterprise is difficult to assess objectively because it consists of calculating multiple rates per staff member. In other words, the average salary of

  9. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of a UV-attractive non-Gaussian fixed-point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016.

  10. Single item inventory models : A time- and event- averages approach

    NARCIS (Netherlands)

    E.M. Bazsa-Oldenkamp; P. den Iseger

    2003-01-01

    This paper extends a fundamental result about single-item inventory systems. This approach allows more general performance measures, demand processes and order policies, and leads to easier analysis and implementation than prior research. We obtain closed form expressions for the

  11. An Approach to Predict Debris Flow Average Velocity

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2017-03-01

    Full Text Available Debris flow is one of the major threats to the sustainability of environmental and social development. The velocity directly determines the impact on the vulnerability. This study focuses on an approach using a radial basis function (RBF) neural network and the gravitational search algorithm (GSA) for predicting debris flow velocity. A total of 50 debris flow events were investigated in the Jiangjia gully. These data were used for building the GSA-based RBF approach (GSA-RBF). Eighty percent (40 groups) of the measured data were selected randomly as the training database. The other 20% (10 groups) of data were used as testing data. Finally, the approach was applied to predict the velocities of six debris flow gullies in the Wudongde Dam site area, where environmental conditions were similar to the Jiangjia gully. The modified Dongchuan empirical equation (MDEE) and the pulled particle analysis of debris flow (PPA) approach were used for comparison and validation. The results showed that: (i) the GSA-RBF predicted debris flow velocity values are very close to the measured values, performing better than the RBF neural network alone; (ii) the GSA-RBF results and the MDEE results are similar in predicting the Jiangjia gully debris flow velocities, with GSA-RBF performing better; (iii) in the study area, the GSA-RBF results are validated as reliable; and (iv) more variables could be considered in predicting the debris flow velocity by using GSA-RBF on the basis of measured data in other areas, which makes the approach more widely applicable. Because the GSA-RBF approach is more accurate, both the numerical simulation and the empirical equation can be taken into consideration when designing debris flow mitigation works; they can complement and verify each other.
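
    The RBF component can be sketched as Gaussian basis regression; in GSA-RBF the gravitational search algorithm would tune the basis centers and widths, which are simply fixed here (toy data, not the Jiangjia gully measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in data (the paper uses 50 measured debris flow events).
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)

# Gaussian RBF design matrix; in GSA-RBF the gravitational search
# algorithm would optimize the centers and widths, fixed here.
centers = np.linspace(0, 1, 10)
width = 0.1
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Output weights by linear least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```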

  12. Averaged Solar Radiation Pressure Modeling for High Area-to-Mass Ratio Objects in Geostationary Space

    Science.gov (United States)

    Eapen, Roshan Thomas

    Space Situational Awareness is aimed at providing timely and accurate information of the space environment. This was originally done by maintaining a catalog of space object states (position and velocity). Traditionally, a cannonball model would be used to propagate the dynamics. This can be acceptable for an active satellite since its attitude motion can be stabilized. However, for non-functional space debris, the cannonball model is inadequate because it is attitude independent while the debris is prone to tumbling. Furthermore, high area-to-mass ratio objects are sensitive to very small changes in perturbations, particularly those of the non-conservative kind. This renders the cannonball model imprecise in propagating the orbital motion of such objects. With the ever-increasing population of man-made space debris, in-orbit explosions, collisions and potential impacts of near Earth objects, it has become imperative to move from the traditional approach to a more predictive and precise rendition. Hence, a more precise orbit propagation model needs to be developed, which warrants a better understanding of the perturbations in the near Earth space. The attitude dependency of some perturbations renders the orbit-attitude motion to be coupled. In this work, a coupled orbit-attitude model is developed taking both conservative and non-conservative forces and torques into account. A high area-to-mass ratio multi-layer insulation in geostationary space is simulated using the coupled dynamics model. However, the high fidelity model developed is computationally expensive. This work aims at developing a model to average the short-term solar radiation pressure force so as to perform computationally better than the cannonball model while retaining a fidelity comparable to the coupled orbit-attitude model.

  13. Relationship between 18-month mating mass and average lifetime reproduction

    African Journals Online (AJOL)

    Keywords: 18-month mating mass, average lifetime reproduction, phenotypic correlation. Livemass per se, or livemass gain are often used as criteria in selection indices for woolled and mutton sheep. Recent local research findings do, however, indicate the possibility of a negative genetic relationship between growth rate ...

  14. Vertically averaged approaches for CO2 migration with solubility trapping

    KAUST Repository

    Gasda, S. E.

    2011-05-20

    The long-term storage security of injected carbon dioxide (CO2) is an essential component of geological carbon sequestration operations. In the postinjection phase, the mobile CO2 plume migrates in large part because of buoyancy forces, following the natural topography of the geological formation. The primary trapping mechanisms are capillary and solubility trapping, which evolve over hundreds to thousands of years and can immobilize a significant portion of the mobile CO2 plume. However, both the migration and trapping processes are inherently complex, spanning multiple spatial and temporal scales. Using an appropriate model that can capture both large- and small-scale effects is essential for understanding the role of these processes on the long-term storage security of CO2 sequestration operations. Traditional numerical models quickly become prohibitively expensive for the type of large-scale, long-term modeling that is necessary for characterizing the migration and immobilization of CO2 during the postinjection period. We present an alternative modeling option that combines vertically integrated governing equations with an upscaled representation of the dissolution-convection process. With this approach, we demonstrate the effect of different modeling choices for typical large-scale geological systems and show that practical calculations can be performed at the temporal and spatial scales of interest. Copyright 2011 by the American Geophysical Union.

  15. Enhancing MALDI time-of-flight mass spectrometer performance through spectrum averaging.

    Science.gov (United States)

    Mitchell, Morgan; Mali, Sujina; King, Charles C; Bark, Steven J

    2015-01-01

    Matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometers are simple and robust mass spectrometers used for analysis of biologically relevant molecules in diverse fields including pathogen identification, imaging mass spectrometry, and natural products chemistry. Despite high nominal resolution and accuracy, we have observed significant variability where 30-50% of individual replicate measurements have errors in excess of 5 parts-per-million, even when using 5-point internal calibration. Increasing the number of laser shots for each spectrum did not resolve this observed variability. What is responsible for our observed variation? Using a modern MALDI-TOF/TOF instrument, we evaluated contributions to variability. Our data suggest a major component of variability is binning of the raw flight time data by the electronics and clock speed of the analog-to-digital (AD) detection system, which requires interpolation by automated peak fitting algorithms and impacts both calibration and the observed mass spectrum. Importantly, the variation observed is predominantly normal in distribution, which implies multiple components contribute to the observed variation and suggests a method to mitigate this variability through spectrum averaging. Restarting the acquisition impacts each spectrum within the electronic error of the AD detector system and defines a new calibration function. Therefore, averaging multiple independent spectra and not a larger number of laser shots leverages this inherent binning error to mitigate variability in accurate MALDI-TOF mass measurements.
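
    The variance-reduction argument is the usual 1/sqrt(n) behaviour of averaging independent, roughly normal errors; a simulation sketch (the 5 ppm per-spectrum error and the 16-spectrum average are illustrative choices, not the paper's measured values):

```python
import numpy as np

rng = np.random.default_rng(2)

true_mass = 1000.0          # hypothetical analyte mass (Da)
sigma_ppm = 5.0             # per-spectrum calibration error (ppm), illustrative

# Each independently restarted acquisition re-bins flight times and
# redefines the calibration, giving a roughly normal mass error.
def one_spectrum():
    return true_mass * (1 + rng.normal(0, sigma_ppm) * 1e-6)

single = np.array([one_spectrum() for _ in range(2000)])
averaged = np.array([np.mean([one_spectrum() for _ in range(16)])
                     for _ in range(2000)])

err_single = np.std((single - true_mass) / true_mass * 1e6)
err_avg = np.std((averaged - true_mass) / true_mass * 1e6)
print(f"single-spectrum error: {err_single:.2f} ppm, "
      f"16-spectrum average: {err_avg:.2f} ppm")
```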

  17. A self-consistent semiclassical sum rule approach to the average properties of giant resonances

    International Nuclear Information System (INIS)

    Li Guoqiang; Xu Gongou

    1990-01-01

    The average energies of isovector giant resonances and the widths of isoscalar giant resonances are evaluated with the help of a self-consistent semiclassical sum rule approach. The comparison of the present results with the experimental ones justifies the self-consistent semiclassical sum rule approach to the average properties of giant resonances.

  18. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Science.gov (United States)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology changes and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. Volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
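
    A one-dimensional caricature of the failure step, assuming a simple neighbour-exchange rule instead of the element rotation used in Telemac-Mascaret, but with the same two ingredients (slope threshold at the internal friction angle, mass conservation):

```python
import numpy as np

# 1-D bank profile: bed elevation z (m) on a uniform grid of spacing dx (m).
dx = 1.0
z = np.array([0.0, 0.2, 2.5, 2.6, 2.7])   # over-steep step between cells 1 and 2
phi = np.radians(30.0)                     # internal friction angle
max_slope = np.tan(phi)

mass_before = z.sum()

# Failure step: wherever the local slope exceeds tan(phi), move material
# downslope until the slope equals tan(phi), conserving mass (a 1-D
# stand-in for the element rotation described in the paper).
for _ in range(200):                       # iterate until stable
    moved = False
    for i in range(len(z) - 1):
        slope = (z[i + 1] - z[i]) / dx
        if abs(slope) > max_slope:
            excess = (abs(slope) - max_slope) * dx / 2.0
            if slope > 0:
                z[i + 1] -= excess
                z[i] += excess
            else:
                z[i] -= excess
                z[i + 1] += excess
            moved = True
    if not moved:
        break

slopes = np.abs(np.diff(z)) / dx
print(slopes)
```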

  19. Risk-informed Analytical Approaches to Concentration Averaging for the Purpose of Waste Classification

    International Nuclear Information System (INIS)

    Esh, D.W.; Pinkston, K.E.; Barr, C.S.; Bradford, A.H.; Ridge, A.Ch.

    2009-01-01

    Nuclear Regulatory Commission (NRC) staff has developed a concentration averaging approach and guidance for the review of Department of Energy (DOE) non-HLW determinations. Although the approach was focused on this specific application, concentration averaging is generally applicable to waste classification and thus has implications for waste management decisions as discussed in more detail in this paper. In the United States, radioactive waste has historically been classified into various categories for the purpose of ensuring that the disposal system selected is commensurate with the hazard of the waste such that public health and safety will be protected. However, the risk from the near-surface disposal of radioactive waste is not solely a function of waste concentration but is also a function of the volume (quantity) of waste and its accessibility. A risk-informed approach to waste classification for near-surface disposal of low-level waste would consider the specific characteristics of the waste, the quantity of material, and the disposal system features that limit accessibility to the waste. NRC staff has developed example analytical approaches to estimate waste concentration, and therefore waste classification, for waste disposed in facilities or with configurations that were not anticipated when the regulation for the disposal of commercial low-level waste (i.e. 10 CFR Part 61) was developed. (authors)

  20. Calculation of weighted averages approach for the estimation of ping tolerance values

    Science.gov (United States)

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
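
    The weighted-averages estimate of a taxon's optimum along a pollution gradient reduces to an abundance-weighted mean; a sketch with hypothetical abundances and a single combined gradient score (the paper uses six chemical constituents):

```python
import numpy as np

# Hypothetical abundances of one taxon across 5 sites and a pollution
# gradient score per site (e.g. derived from BOD, DO, nutrients).
abundance = np.array([10, 40, 30, 15, 5])
gradient = np.array([1.0, 2.0, 4.0, 7.0, 9.0])  # 0 = clean, 10 = polluted

# Weighted-averages optimum: where along the gradient the taxon is
# most abundant, weighted by abundance.
optimum = np.sum(abundance * gradient) / np.sum(abundance)
tolerance_value = round(optimum)  # map onto the 0-10 scale
print(optimum, tolerance_value)
```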

  1. Volume averaging: Local and nonlocal closures using a Green’s function approach

    Science.gov (United States)

    Wood, Brian D.; Valdés-Parada, Francisco J.

    2013-01-01

    Modeling transport phenomena in discretely hierarchical systems can be carried out using any number of upscaling techniques. In this paper, we revisit the method of volume averaging as a technique to pass from a microscopic level of description to a macroscopic one. Our focus is primarily on developing a more consistent and rigorous foundation for the relation between the microscale and averaged levels of description. We have put a particular focus on (1) carefully establishing statistical representations of the length scales used in volume averaging, (2) developing a time-space nonlocal closure scheme with as few assumptions and constraints as are possible, and (3) carefully identifying a sequence of simplifications (in terms of scaling postulates) that explain the conditions for which various upscaled models are valid. Although the approach is general for linear differential equations, we upscale the problem of linear convective diffusion as an example to help keep the discussion from becoming overly abstract. In our efforts, we have also revisited the concept of a closure variable, and explain how closure variables can be based on an integral formulation in terms of Green’s functions. In such a framework, a closure variable then represents the integration (in time and space) of the associated Green’s functions that describe the influence of the average sources over the spatial deviations. The approach using Green’s functions has utility not only in formalizing the method of volume averaging, but by clearly identifying how the method can be extended to transient and time or space nonlocal formulations. In addition to formalizing the upscaling process using Green’s functions, we also discuss the upscaling process itself in some detail to help foster improved understanding of how the process works. Discussion about the role of scaling postulates in the upscaling process is provided, and poised, whenever possible, in terms of measurable properties of (1) the
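
    The basic objects of the method can be written down compactly (standard volume-averaging notation, not quoted from the paper; V is the averaging volume, V_f the fluid phase within it, A_fs the interphase boundary with unit normal n):

```latex
% Superficial average and spatial decomposition:
\langle \psi \rangle = \frac{1}{V} \int_{V_f} \psi \, dV, \qquad
\psi = \langle \psi \rangle + \tilde{\psi}
% Spatial averaging theorem, which generates the closure problem:
\langle \nabla \psi \rangle = \nabla \langle \psi \rangle
  + \frac{1}{V} \int_{A_{fs}} \mathbf{n}\, \psi \, dA
```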

  2. An adaptive mesh refinement approach for average current nodal expansion method in 2-D rectangular geometry

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.

    2013-01-01

    Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal method. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reducing computational efforts relative to the uniform fine mesh modeling. ► Spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse mesh strategy that progressively refines the nodes in appropriate regions of domain to solve the neutron balance equation by zeroth order nodal expansion method. A flux gradient based a posteriori estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as an assessment criterion. In this approach, the core module is called in by adaptive mesh generator to determine gradients of node surfaces flux to explore the possibility of node refinements in appropriate regions and directions of the problem. The benefit of the approach is reducing computational efforts relative to the uniform fine mesh modeling. For this purpose, a computer program ANRNE-2D, Adaptive Node Refinement Nodal Expansion, has been developed to solve neutron diffusion equation using average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain and increasing the number of unknown solution. Some well-known benchmarks have been investigated and improvements are reported

  3. Systematic approach to peak-to-average power ratio in OFDM

    Science.gov (United States)

    Schurgers, Curt

    2001-11-01

    OFDM multicarrier systems support high data rate wireless transmission using orthogonal frequency channels, and require no extensive equalization, yet offer excellent immunity against fading and inter-symbol interference. The major drawback of these systems is the large Peak-to-Average power Ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exists to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as a hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways in order to improve the performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip effect transformation and probabilistic.
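
    The quantity at issue, and the clipping operation the framework is built around, can be sketched in a few lines (64 QPSK subcarriers and a clipping level of 1.4 times the rms amplitude are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 64                                  # subcarriers
# Random QPSK symbols on each subcarrier.
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)         # time-domain OFDM symbol

def par_db(sig):
    # Peak-to-average power ratio in dB.
    return 10 * np.log10(np.max(np.abs(sig) ** 2) / np.mean(np.abs(sig) ** 2))

print(f"PAR before clipping: {par_db(x):.2f} dB")

# Hard limiting ("clipping") to a target amplitude: the dynamic-range
# reduction the framework models, traded against extra distortion.
A = 1.4 * np.sqrt(np.mean(np.abs(x) ** 2))
clipped = np.where(np.abs(x) > A, A * x / np.abs(x), x)
print(f"PAR after clipping:  {par_db(clipped):.2f} dB")
```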

  4. Observer design for switched recurrent neural networks: an average dwell time approach.

    Science.gov (United States)

    Lian, Jie; Feng, Zhi; Shi, Peng

    2011-10-01

    This paper is concerned with the problem of observer design for switched recurrent neural networks with time-varying delay. The attention is focused on designing the full-order observers that guarantee the global exponential stability of the error dynamic system. Based on the average dwell time approach and the free-weighting matrix technique, delay-dependent sufficient conditions are developed for the solvability of such a problem and formulated as linear matrix inequalities. The error-state decay estimate is also given. Then, the stability analysis problem for the switched recurrent neural networks can be covered as a special case of our results. Finally, four illustrative examples are provided to demonstrate the effectiveness and the superiority of the proposed methods. © 2011 IEEE
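
    The average dwell time condition referred to above is conventionally stated as a bound on the number of switches over an interval (standard notation, not quoted from the paper):

```latex
% Number of switches N_\sigma on [t, T], chatter bound N_0,
% average dwell time \tau_a:
N_\sigma(t, T) \le N_0 + \frac{T - t}{\tau_a}, \qquad 0 \le t \le T
% With comparable Lyapunov functions V_i \le \mu V_j (\mu \ge 1) and
% decay rate \lambda, exponential stability follows for
\tau_a > \tau_a^{*} = \frac{\ln \mu}{\lambda}
```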

  5. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

    Full Text Available This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants, using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of oil trade balance results in worsening of other components (probably non-oil trade balance) of CA and (iii) that the positive influence of terms of trade reveals functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals high citizens' future income growth expectations, which has negative impact on the CA.

  6. Assessing the optimized precision of the aircraft mass balance method for measurement of urban greenhouse gas emission rates through averaging

    Directory of Open Access Journals (Sweden)

    Alexie M. F. Heimburger

    2017-06-01

    To effectively address climate change, aggressive mitigation policies need to be implemented to reduce greenhouse gas emissions. Anthropogenic carbon emissions are mostly generated from urban environments, where human activities are spatially concentrated. Improvements in uncertainty determinations and precision of measurement techniques are critical to permit accurate and precise tracking of emissions changes relative to the reduction targets. As part of the INFLUX project, we quantified carbon dioxide (CO2), carbon monoxide (CO) and methane (CH4) emission rates for the city of Indianapolis by averaging results from nine aircraft-based mass balance experiments performed in November-December 2014. Our goal was to assess the achievable precision of the aircraft-based mass balance method through averaging, assuming constant CO2, CH4 and CO emissions during a three-week field campaign in late fall. The averaging method leads to an emission rate of 14,600 mol/s for CO2, assumed to be largely fossil-derived for this period of the year, and 108 mol/s for CO. The relative standard error of the mean is 17% and 16%, for CO2 and CO, respectively, at the 95% confidence level (CL), i.e. a more than 2-fold improvement over the previous estimate of ~40% for single-flight measurements for Indianapolis. For CH4, the averaged emission rate is 67 mol/s, while the standard error of the mean at 95% CL is large, i.e. ±60%. Given the results for CO2 and CO for the same flight data, we conclude that this much larger scatter in the observed CH4 emission rate is most likely due to variability of CH4 emissions, suggesting that the assumption of constant daily emissions is not correct for CH4 sources. This work shows that repeated measurements using aircraft-based mass balance methods can yield sufficient precision of the mean to inform emissions reduction efforts by detecting changes over time in urban emissions.
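
    The averaging arithmetic itself is elementary: the campaign mean and its relative precision at the 95% confidence level follow from the standard error of the mean and Student's t. The nine single-flight rates below are made-up placeholders chosen to average 14,600 mol/s; they are not the campaign's data:

```python
import math

# Hypothetical single-flight CO2 emission rates (mol/s); illustrative only.
rates = [13200, 16100, 14800, 12900, 15500, 14000, 16800, 13600, 14500]
n = len(rates)
mean = sum(rates) / n

# Sample standard deviation and standard error of the mean.
s = math.sqrt(sum((r - mean) ** 2 for r in rates) / (n - 1))
sem = s / math.sqrt(n)

# 95% confidence half-width using Student's t with n-1 = 8 degrees of freedom.
t_975 = 2.306
half_width = t_975 * sem
rel_precision = half_width / mean  # relative SEM at the 95% CL
```

    Because the half-width shrinks roughly as the square root of the number of flights, averaging nine experiments is what turns a ~40% single-flight uncertainty into the quoted ~17%.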

  7. A new approach on seismic mortality estimations based on average population density

    Science.gov (United States)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies established the association between mortality estimation and seismic intensity without considering the population density. In China, however, the data are not always available, especially when it comes to very urgent relief situations in a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to provide a path to analyze the death tolls of earthquakes. The present paper employs the average population density to predict the final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake case that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door for conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  8. Latent-variable approaches to the Jamesian model of importance-weighted averages.

    Science.gov (United States)

    Scalas, L Francesca; Marsh, Herbert W; Nagengast, Benjamin; Morin, Alexandre J S

    2013-01-01

    The individually importance-weighted average (IIWA) model posits that the contribution of specific areas of self-concept to global self-esteem varies systematically with the individual importance placed on each specific component. Although intuitively appealing, this model has weak empirical support; thus, within the framework of a substantive-methodological synergy, we propose a multiple-item latent approach to the IIWA model as applied to a range of self-concept domains (physical, academic, spiritual self-concepts) and subdomains (appearance, math, verbal self-concepts) in young adolescents from two countries. Tests considering simultaneously the effects of self-concept domains on trait self-esteem did not support the IIWA model. On the contrary, support for a normative group importance model was found, in which importance varied as a function of domains but not individuals. Individuals differentially weight the various components of self-concept; however, the weights are largely determined by normative processes, so that little additional information is gained from individual weightings.
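
    The difference between the individually weighted (IIWA) and normative group-weighted readings of the Jamesian model reduces to which set of weights enters the average. A toy computation (all scores and importance ratings below are invented, not the study's data) makes the contrast concrete:

```python
# Hypothetical self-concept domain scores (0-10) and importance ratings
# for three individuals; illustrative only.
domains = ["physical", "academic", "spiritual"]
scores = {
    "A": {"physical": 8.0, "academic": 5.0, "spiritual": 3.0},
    "B": {"physical": 4.0, "academic": 9.0, "spiritual": 6.0},
    "C": {"physical": 6.0, "academic": 6.0, "spiritual": 6.0},
}
importance = {
    "A": {"physical": 5.0, "academic": 2.0, "spiritual": 1.0},
    "B": {"physical": 1.0, "academic": 5.0, "spiritual": 2.0},
    "C": {"physical": 3.0, "academic": 3.0, "spiritual": 3.0},
}

def iiwa(person):
    """IIWA model: each person's own importance ratings weight the average."""
    w = importance[person]
    total = sum(w.values())
    return sum(scores[person][d] * w[d] / total for d in domains)

def normative(person, group_w):
    """Normative model: one shared set of weights for the whole group."""
    total = sum(group_w.values())
    return sum(scores[person][d] * group_w[d] / total for d in domains)

# Normative weights: average importance across individuals, per domain.
group_w = {d: sum(importance[p][d] for p in scores) / len(scores)
           for d in domains}
```

    The study's finding is that predictions like `normative(...)`, with weights varying by domain but not by person, fit global self-esteem about as well as the individually weighted version.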

  9. One approach for minimising the average turnaround time in Varna West Port Terminal

    Directory of Open Access Journals (Sweden)

    Tanka Milkova

    2017-12-01

    One of the most important criteria for port efficiency nowadays is the average turnaround time. More and more port terminals are assessed by this indicator, so its value has become one of the crucial factors in a port's competitiveness, and consequently minimising the turnaround time has become one of the most important objectives of ports. The purpose of this research paper is to propose a tool for optimising this indicator by using the well-known Simplex method of linear programming. After presenting the approach, some exemplary models are solved with real values based on statistics for Varna West Port Terminal, which makes the design of the paper original in both directions, theoretical and practical, so that despite some limitations concerning trade secrets, the scope of the material is large enough to be used for any port with any data, even exemplary and/or approximate. The aforementioned practical implications could be pointed out as the main contribution of this paper, and the easy way of calculating the final results makes the algorithm readily applicable, so that each port can draw its own conclusions based on its particular statistical data.
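
    The kind of allocation problem behind such a formulation can be illustrated with a toy berth-assignment model: assign ships to berths so that total (hence average) service time is minimized, subject to demand and capacity constraints. Everything below (ship types, berths, service times) is hypothetical, and exhaustive enumeration stands in for the Simplex method at this size:

```python
from itertools import product

# Hypothetical data: service time (hours) of ship type i at berth j.
service = {("bulk", "B1"): 10, ("bulk", "B2"): 14,
           ("container", "B1"): 9, ("container", "B2"): 6}
demand = {"bulk": 4, "container": 5}   # ships to serve
capacity = {"B1": 5, "B2": 5}          # berth slots available

# Enumerate feasible integer allocations (a stand-in for the Simplex
# method at this toy size) and keep the one minimizing average time.
best = None
for b1_bulk, b1_cont in product(range(demand["bulk"] + 1),
                                range(demand["container"] + 1)):
    alloc = {("bulk", "B1"): b1_bulk,
             ("bulk", "B2"): demand["bulk"] - b1_bulk,
             ("container", "B1"): b1_cont,
             ("container", "B2"): demand["container"] - b1_cont}
    if alloc[("bulk", "B1")] + alloc[("container", "B1")] > capacity["B1"]:
        continue
    if alloc[("bulk", "B2")] + alloc[("container", "B2")] > capacity["B2"]:
        continue
    total = sum(service[k] * v for k, v in alloc.items())
    avg = total / sum(demand.values())
    if best is None or avg < best[0]:
        best = (avg, alloc)

avg_turnaround, allocation = best
```

    At realistic sizes the same objective and constraints are written as a linear program and handed to a Simplex solver rather than enumerated.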

  10. An integrated approach to investigate the reach-averaged bend scale dynamics of large meandering rivers

    Science.gov (United States)

    Monegaglia, Federico; Henshaw, Alex; Zolezzi, Guido; Tubino, Marco

    2016-04-01

    quantitative assessment of the location and movement of river point bars and mid-channel bars along evolving meander bends is also performed through a coordinate mapping that allows morphological features to be expressed in an intrinsic reference system and therefore their topographical structure to be compared across subsequent years. A novel bend-scale reach-averaging concept for evolutionary parameters is proposed and implemented by adopting bend sinuosity as a proxy for evolutionary time. This allows the extraction of metrics comparable to those predicted by nonlinear morphodynamic theories, which are typically developed at the meander bend scale. The proposed approach enables a consistent comparison between observed evolution and predictions from morphodynamic theories.

  11. A novel approach for the averaging of magnetocardiographically recorded heart beats

    Energy Technology Data Exchange (ETDEWEB)

    DiPietroPaolo, D [Advanced Technologies Biomagnetics, Pescara (Italy); Mueller, H-P [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany); Erne, S N [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany)

    2005-05-21

    Performing signal averaging in an efficient and correct way is indispensable, since it is a prerequisite for a broad variety of magnetocardiographic (MCG) analysis methods. One of the most common procedures for performing signal averaging to increase the signal-to-noise ratio (SNR) in magnetocardiography, as well as in electrocardiography (ECG), relies on spatial or temporal techniques. In this paper, an improvement of the temporal averaging method is presented. In order to obtain accurate signal detection, temporal alignment methods and objective classification criteria are developed. A processing technique based on hierarchical clustering is introduced to take into account the non-stationarity of the noise and, to some extent, the biological variability of the signals, reaching the optimum SNR. The method implemented is especially designed to run fast and does not require any interaction from the operator. The averaging procedure described in this work is applied to the averaging of MCG data as an example, but with its intrinsic properties it can also be applied to the averaging of ECG recordings, body-surface-potential mapping (BSPM) and magnetoencephalographic (MEG) or electroencephalographic (EEG) signals.
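
    The temporal-averaging core, aligning individual beats in time and averaging them so that the SNR improves by roughly the square root of the number of epochs, can be sketched on synthetic data. The pulse shape, jitter, and sampling rate below are invented, and the paper's hierarchical-clustering step is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                       # sampling rate in Hz (illustrative)
t = np.arange(200) / fs

def beat(shift):
    """Synthetic QRS-like pulse with a small temporal jitter (seconds)."""
    return np.exp(-((t - 0.4 - shift) ** 2) / (2 * 0.01 ** 2))

# Simulated single-channel epochs: jittered beats plus white noise.
epochs = [beat(rng.uniform(-0.02, 0.02)) + rng.normal(scale=0.2, size=t.size)
          for _ in range(50)]

# Temporal alignment: shift each epoch to maximize cross-correlation
# with a template beat, then average the aligned epochs.
template = epochs[0]
aligned = []
for e in epochs:
    xc = np.correlate(e, template, mode="full")
    lag = int(np.argmax(xc)) - (t.size - 1)   # delay of e vs. template
    aligned.append(np.roll(e, -lag))
avg_beat = np.mean(aligned, axis=0)

# Noise level in a pre-beat segment, before and after averaging.
noise_before = np.std(epochs[0][:50])
noise_after = np.std(avg_beat[:50])
```

    Misaligned or aberrant beats would, in the paper's scheme, be grouped out by the clustering stage before entering the average.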

  12. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
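
    Of the nine methods, the Granger-Ramanathan averages are the simplest to reproduce: variant A obtains weights by unconstrained least-squares regression of the observed hydrograph on the member simulations. A sketch on synthetic flows (invented data, not the 429-watershed set), together with the Nash-Sutcliffe Efficiency metric:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
q_obs = 50 + 20 * np.sin(np.linspace(0, 12, n)) + rng.normal(scale=3, size=n)

# Hypothetical member simulations: biased and/or noisy versions of the truth.
members = np.column_stack([
    q_obs * 0.9 + rng.normal(scale=5, size=n),
    q_obs * 1.2 - 8 + rng.normal(scale=6, size=n),
    q_obs + rng.normal(scale=8, size=n),
])

# Granger-Ramanathan variant A: unconstrained least-squares weights
# regressing the observed flow on the member simulations.
w, *_ = np.linalg.lstsq(members, q_obs, rcond=None)
q_avg = members @ w

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 matches the observed mean."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

best_member = max(nse(members[:, i], q_obs) for i in range(members.shape[1]))
combined = nse(q_avg, q_obs)
```

    In calibration the least-squares combination cannot score below the best single member, which is the in-sample version of the superiority the study reports; the study's real test is applying the fixed weights in validation.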

  13. An Experiment in Three Approaches To Teaching Average to Elementary School Children.

    Science.gov (United States)

    Baker, John D.; Beisel, Raymond W.

    2001-01-01

    Uses a traditional approach with problem solving, a concrete approach with manipulatives, and a visual approach with computer spreadsheets to teach the arithmetic mean to 22 children in grades 4-6 in three multiage groups. Differences among pretest, posttest, and interview performances suggest some advantage to the use of a visual instructional…

  14. New approaches for metabolomics by mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Vertes, Akos [George Washington Univ., Washington, DC (United States)

    2017-07-10

    Small molecules constitute a large part of the world around us, including fossil and some renewable energy sources. Solar energy harvested by plants and bacteria is converted into energy rich small molecules on a massive scale. Some of the worst contaminants of the environment and compounds of interest for national security also fall in the category of small molecules. The development of large scale metabolomic analysis methods lags behind the state of the art established for genomics and proteomics. This is commonly attributed to the diversity of molecular classes included in a metabolome. Unlike nucleic acids and proteins, metabolites do not have standard building blocks, and, as a result, their molecular properties exhibit a wide spectrum. This impedes the development of dedicated separation and spectroscopic methods. Mass spectrometry (MS) is a strong contender in the quest for a quantitative analytical tool with extensive metabolite coverage. Although various MS-based techniques are emerging for metabolomics, many of these approaches include extensive sample preparation that make large scale studies resource intensive and slow. New ionization methods are redefining the range of analytical problems that can be solved using MS. This project developed new approaches for the direct analysis of small molecules in unprocessed samples, as well as pushed the limits of ultratrace analysis in volume limited complex samples. The projects resulted in techniques that enabled metabolomics investigations with enhanced molecular coverage, as well as the study of cellular response to stimuli on a single cell level. Effectively individual cells became reaction vessels, where we followed the response of a complex biological system to external perturbation. We established two new analytical platforms for the direct study of metabolic changes in cells and tissues following external perturbation. For this purpose we developed a novel technique, laser ablation electrospray

  15. Loop expansion of the average effective action in the functional renormalization group approach

    Science.gov (United States)

    Lavrov, Peter M.; Merzlikin, Boris S.

    2015-10-01

    We formulate a perturbation expansion for the effective action in a new approach to the functional renormalization group method, based on the concept of composite fields for regulator functions as its most essential ingredients. We demonstrate explicitly the principal difference between the properties of the effective actions in the two approaches, which exists already at the one-loop level in a simple gauge model.

  16. An average-based accounting approach to capital asset investments: The case of project finance

    OpenAIRE

    Carlo Alberto Magni

    2014-01-01

    Literature and textbooks on capital budgeting endorse Net Present Value (NPV) and generally treat accounting rates of return as not being reliable tools. This paper shows that accounting numbers can be reconciled with NPV and fruitfully employed in real-life applications. Focusing on project finance transactions, an Average Return On Investment (AROI) is drawn from the pro forma financial statements, obtained as the ratio of aggregate income to aggregate book value. It is shown that such a me...

  17. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability.

    Directory of Open Access Journals (Sweden)

    Mohammad-Reza Namazi-Rad

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator.
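
    The constrained choice of scheduled arrival time can be sketched as follows: given a transit-time distribution, find the smallest schedule meeting a target punctuality level; the resulting padding then enters the cost side of the trade-off. The distribution, target, and cost rate below are invented, and a plain normal stands in for the paper's truncated density:

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical service: transit time ~ Normal(42, 4) minutes.
mu, sigma = 42.0, 4.0
target_punctuality = 0.95    # required P(arrive by the scheduled time)
cost_per_minute = 120.0      # cost of padding the timetable, per minute

# Smallest scheduled arrival time meeting the punctuality constraint,
# found by bisection on the CDF.
lo, hi = mu, mu + 10 * sigma
for _ in range(60):
    mid = (lo + hi) / 2
    if norm_cdf(mid, mu, sigma) < target_punctuality:
        lo = mid
    else:
        hi = mid
scheduled = hi
buffer_cost = (scheduled - mu) * cost_per_minute
```

    Raising the punctuality target pushes the schedule further into the distribution's tail, so the marginal cost of each extra percentage point of punctuality grows rapidly, which is exactly the trade-off the paper's cost-benefit functions formalize.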

  18. Mass, matter, and energy. A relativistic approach

    International Nuclear Information System (INIS)

    Bitsakis, E.

    1991-01-01

    The debate concerning the relations between matter and motion is as old as philosophy itself. In modern times this problem was transformed into one concerning the relations between mass and energy. Newton identified mass with matter. Classical thermodynamics brought this conception to its logical conclusion, establishing an ontic dichotomy between mass-matter and energy. On the basis of this pre-relativistic conception, Einstein's famous equation has been interpreted as a relation of equivalence between mass-matter and energy. Nevertheless, if one rejects this epistemologically illegitimate identification, it is possible to elaborate a unitary conception of matter, which at the same time is an argument for the unity between matter and motion. In particular, the classical antithesis between matter and field becomes obsolete in the frame of the proposed interpretation.

  19. Determination of mean pressure from PIV in compressible flows using the Reynolds-averaging approach

    Science.gov (United States)

    van Gent, Paul L.; van Oudheusden, Bas W.; Schrijer, Ferry F. J.

    2018-03-01

    The feasibility of computing the flow pressure on the basis of PIV velocity data has been demonstrated abundantly for low-speed conditions. The added complications occurring in high-speed compressible flows have, however, so far proved largely inhibitive for the accurate experimental determination of instantaneous pressure. Obtaining the mean pressure may remain a worthwhile and realistic goal to pursue. In a previous study, a Reynolds-averaging procedure was developed for this purpose, under the moderate-Mach-number assumption that density fluctuations can be neglected. The present communication addresses the accuracy of this assumption, and the consistency of its implementation, by evaluating the relevance of the different contributions resulting from the Reynolds-averaging. The methodology involves a theoretical order-of-magnitude analysis, complemented with a quantitative assessment based on a simulated and a real PIV experiment. The assessments show that it is sufficient to account for spatial variations in the mean velocity and the Reynolds stresses, and that temporal and spatial density variations (fluctuations and gradients) are of secondary importance and of comparable order of magnitude. This result permits the calculation of mean pressure from PIV velocity data to be simplified, and validates the approximation of neglecting temporal and spatial density variations without having access to reference pressure data.
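
    For reference, the mean-momentum balance underlying such a procedure, written here under the stated assumption that density fluctuations are negligible and with viscous terms omitted (a standard high-Reynolds-number simplification, not a claim about the paper's exact formulation), relates the mean pressure gradient to the mean velocity and Reynolds stresses:

```latex
\frac{\partial \bar{p}}{\partial x_i}
  = -\bar{\rho}\left(
      \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
      + \frac{\partial \overline{u_i' u_j'}}{\partial x_j}
    \right)
```

    Every quantity on the right-hand side is available from averaged PIV velocity fields, so the mean pressure follows by spatial integration of this gradient.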

  20. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to prevent their occurrence. It is in this same vein that our study takes its inspiration. In particular, we have developed a warning model of banking crises based on a Bayesian approach. The results of this approach have allowed us to identify the role of declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration and higher real interest rates in triggering banking crises.

  1. Using four-phase Eulerian volume averaging approach to model macrosegregation and shrinkage cavity

    Science.gov (United States)

    Wu, M.; Kharicha, A.; Ludwig, A.

    2015-06-01

    This work extends a previous 3-phase mixed columnar-equiaxed solidification model to treat the formation of shrinkage cavities by including an additional phase. In the previous model, mixed columnar and equiaxed solidification with consideration of multiphase transport phenomena (mass, momentum, species and enthalpy) was proposed to calculate the as-cast structure, including the columnar-to-equiaxed transition (CET) and the formation of macrosegregation. In order to incorporate the formation of a shrinkage cavity, an additional phase, i.e. a gas phase or a covering liquid slag phase, must be considered in addition to the previously introduced 3 phases (parent melt, solidifying columnar dendrite trunks and equiaxed grains). No mass or species transfer between the new phase and the other 3 phases is necessary, but the treatment of the momentum and energy exchanges between them is crucially important for the formation of the free surface and shrinkage cavity, which in turn influences the flow field and the formation of segregation. A steel ingot is preliminarily calculated to examine the functionality of the model.

  2. Forecasting Inflation with the Phillips Curve: A Dynamic Model Averaging Approach for Brazil

    Directory of Open Access Journals (Sweden)

    Diego Ferreira

    2015-12-01

    This paper proposes a generalized Phillips curve in order to forecast Brazilian inflation over the 2003:M1–2013:M10 period. To this end, we employ the Dynamic Model Averaging (DMA) method, which allows for both model evolution and time-varying parameters. The procedure mainly consists of a state-space representation estimated by the Kalman filter. Overall, the dynamic specifications deliver good inflation predictions for all the forecast horizons considered, underscoring the importance of time-varying features for forecasting exercises. As to the usefulness of the predictors in explaining Brazilian inflation, there is evidence that the short- and long-term Phillips curve relationship may be rejected for Brazil, while the short- and medium-term exchange rate pass-through has apparently been decreasing in recent years.
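
    The estimation core of DMA, a time-varying-parameter regression run through the Kalman filter with a forgetting factor, can be sketched for a single regressor. The data are synthetic, and the recursion over model probabilities that gives DMA its name is omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
x = rng.normal(size=T)
beta_true = np.linspace(0.5, 2.0, T)            # slowly drifting coefficient
y = beta_true * x + rng.normal(scale=0.3, size=T)

# Kalman filter with a forgetting factor: each step inflates the state
# variance, letting the coefficient estimate track the drift.
lam = 0.95          # forgetting factor (closer to 1 = slower adaptation)
obs_var = 0.09      # observation noise variance (assumed known here)
beta, P = 0.0, 1.0  # prior mean and variance of the coefficient
beta_path = []
for t in range(T):
    P = P / lam                        # predict: inflate uncertainty
    f = x[t] * beta                    # one-step-ahead forecast of y[t]
    S = x[t] ** 2 * P + obs_var        # forecast variance
    K = P * x[t] / S                   # Kalman gain
    beta = beta + K * (y[t] - f)       # update the coefficient
    P = (1 - K * x[t]) * P             # update its variance
    beta_path.append(beta)
```

    Full DMA runs one such filter per candidate predictor set and combines their forecasts with recursively updated model probabilities, which is what lets both the coefficients and the model itself evolve over time.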

  3. Multiple diagnostic approaches to palpable breast mass

    International Nuclear Information System (INIS)

    Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June

    1985-01-01

    The combination of various diagnostic methods for palpable breast masses has improved diagnostic accuracy. From September 1983 to August 1985, 85 pathologically proven patients with palpable breast masses were examined with X-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital. The diagnostic accuracies of the methods were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted more clearly delineated masses with various pathognomonic findings: an air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phylloides (100%), an air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phylloides (100%), an air-cystogram in the cystic type of fibrocystic disease (100%), and a vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma.

  4. Multiple diagnostic approaches to palpable breast mass

    Energy Technology Data Exchange (ETDEWEB)

    Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June [Korea Cancer Center Hospital, Seoul (Korea, Republic of)

    1985-12-15

    The combination of various diagnostic methods for palpable breast masses has improved diagnostic accuracy. From September 1983 to August 1985, 85 pathologically proven patients with palpable breast masses were examined with X-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital. The diagnostic accuracies of the methods were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted more clearly delineated masses with various pathognomonic findings: an air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phylloides (100%), an air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phylloides (100%), an air-cystogram in the cystic type of fibrocystic disease (100%), and a vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma.

  5. A Probabilistic Approach for Predicting Average Slug Frequency in Horizontal Gas/Liquid Pipe Flow

    Directory of Open Access Journals (Sweden)

    Kadri U.

    2013-02-01

    In this paper, we present a model for predicting the average slug frequency in horizontal gas/liquid pipe flow. The model considers the probability of slug formation if slugs are triggered at the antinodes of a sinusoidal perturbation along the pipe at the frequency of oscillation of the interface. A slug is assumed to form if and only if it is triggered at a space-time point far enough from existing slugs. The probability of forming slugs is found to decrease with distance from the inlet, since the downstream passage of existing slugs prevents the formation of new slugs. Predictions by the model are compared with air/water, freon/water and air/oil measurements found in the literature, with satisfactory agreement. However, a deviation from measurements is observed when considering high-viscosity liquids. The model contributes to the prediction of the slug flow regime and can act as a guideline for the design of gas/liquid horizontal pipe flow.

  6. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas a Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To accredit the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better than that of the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies for water mains. • Consider the uncertainties in failure prediction. • Improve the prediction capability of water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure.
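
    The survival side of such a framework can be sketched with a Weibull proportional-hazard survival function, where a covariate multiplier scales the baseline hazard. The shape, scale, and multiplier below are illustrative placeholders, not the fitted Calgary values:

```python
import math

# Baseline Weibull survival parameters (illustrative): shape > 1 means
# the hazard increases with pipe age.
shape, scale = 1.8, 60.0

def survival(t_years, covariate_effect=1.0):
    """S(t) = exp(-covariate_effect * (t/scale)**shape).

    covariate_effect plays the role of exp(beta'x) in a
    proportional-hazard model: it scales the cumulative hazard.
    """
    return math.exp(-covariate_effect * (t_years / scale) ** shape)

# Compare a baseline pipe with one whose covariates double the hazard.
s_base_40 = survival(40.0)
s_risky_40 = survival(40.0, covariate_effect=2.0)
```

    A defining property of the proportional-hazard form is that doubling the hazard squares the survival probability at every age, which the comparison above exhibits directly.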

  7. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.

  8. Review: Management of adnexal masses: An age-guided approach ...

    African Journals Online (AJOL)

    Adnexal masses in different age groups may need different management approaches. By eliminating the types of mass that are less likely in a specific age group, one can identify the most prevalent group, which can guide management. Most important of all, the probability of malignancy must either be confirmed or ruled out.

  9. Partially-Averaged Navier-Stokes (PANS) approach for study of fluid flow and heat transfer characteristics in Czochralski melt

    Science.gov (United States)

    Verma, Sudeep; Dewan, Anupam

    2018-01-01

    The Partially-Averaged Navier-Stokes (PANS) approach has been applied for the first time to model turbulent flow and heat transfer in an ideal Czochralski setup with realistic boundary conditions. This method provides a variable level of resolution, ranging from Reynolds-Averaged Navier-Stokes (RANS) modelling to Direct Numerical Simulation (DNS), based on the filter control parameter. For the present case, a low-Re PANS model has been developed for Czochralski melt flow, which includes the effects of Coriolis, centrifugal, buoyancy and surface-tension-induced forces. The aim of the present study is to assess the improvement in results on switching from the unsteady RANS (URANS) approach to PANS modelling on the same computational mesh. The PANS results were found to be in good agreement with the reported experimental, DNS and Large Eddy Simulation (LES) data. A clear improvement in computational accuracy is observed in switching from the URANS approach to the PANS methodology. The computed results further improved with a reduction in the PANS filter width. Furthermore, the capability of the PANS model to capture key characteristics of Czochralski crystal growth is highlighted. It was observed that the PANS model was able to resolve the three-dimensional turbulent nature of the melt, characteristic flow structures arising from flow instabilities, and the generation of thermal plumes and vortices in the Czochralski melt.

  10. Mass transfer in rolling rotary kilns : a novel approach

    NARCIS (Netherlands)

    Heydenrych, M.D.; Greeff, P.; Heesink, A. Bert M.; Versteeg, G.F.

    2002-01-01

    A novel approach to modeling mass transfer in rotary kilns or rotating cylinders is explored. The movement of gas in the interparticle voids in the bed of the kiln is considered, where particles move concentrically with the geometry of the kiln and gas is entrained by these particles. The approach

  12. Renal masses in children. An integrated imaging approach to diagnosis

    International Nuclear Information System (INIS)

    Wolfson, B.J.; Gainey, M.A.; Faerber, E.N.; Capitanio, M.A.

    1985-01-01

    In view of the continuing technologic advancements in the development and availability of diagnostic imaging modalities, it is appropriate to assess periodically the currently accepted approaches to the evaluation of renal masses in children. The roles, advantages, and disadvantages of plain film, intravenous urography, ultrasonography, radionuclide scintigraphy, computed tomography, angiography, and magnetic resonance imaging in the approach to the evaluation of renal masses in children are discussed. An integrated imaging approach that provides the most accurate and necessary information for diagnosis and treatment is recommended. 70 references

  13. Thermodynamically Constrained Averaging Theory Approach for Modeling Flow and Transport Phenomena in Porous Medium Systems: 5. Single-Fluid-Phase Transport.

    Science.gov (United States)

    Gray, William G; Miller, Cass T

    2009-05-01

    This work is the fifth in a series of papers on the thermodynamically constrained averaging theory (TCAT) approach for modeling flow and transport phenomena in multiscale porous medium systems. The general TCAT framework and the mathematical foundation presented in previous works are used to develop models that describe species transport and single-fluid-phase flow through a porous medium system in varying physical regimes. Classical irreversible thermodynamics formulations for species in fluids, solids, and interfaces are developed. Two different approaches are presented, one that makes use of a momentum equation for each entity along with constitutive relations for species diffusion and dispersion, and a second approach that makes use of a momentum equation for each species in an entity. The alternative models are developed by relying upon different approaches to constrain an entropy inequality using mass, momentum, and energy conservation equations. The resultant constrained entropy inequality is simplified and used to guide the development of closed models. Specific instances of dilute and non-dilute systems are examined and compared to alternative formulation approaches.

  14. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    Science.gov (United States)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to
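    The reoptimization idea above can be sketched numerically: treat the chamber reading as the convolution of a calculated profile with a detector response function, and tune a beam-model penumbra parameter until the convolved model matches the measurement. The Gaussian kernel and the one-parameter error-function beam model below are illustrative assumptions, not the actual CC13 response or the TPS beam model.

```python
import math

import numpy as np

def gaussian_response(x, sigma):
    """Detector response kernel; a Gaussian is an assumed stand-in for the
    ionization chamber's lateral response function."""
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

def model_profile(x, sigma_penumbra, field_half_width=5.0):
    """Hypothetical one-parameter beam model: error-function penumbrae."""
    return np.array([0.5 * (math.erf((field_half_width - xi) / sigma_penumbra)
                            + math.erf((field_half_width + xi) / sigma_penumbra))
                     for xi in x])

def penumbra_mismatch(sigma_penumbra, x, measured, kernel):
    """Objective: convolve the calculated profile with the detector response
    and compare with the chamber measurement, as in the reoptimization loop."""
    convolved = np.convolve(model_profile(x, sigma_penumbra), kernel, mode="same")
    return float(np.sum((convolved - measured) ** 2))
```

Minimizing `penumbra_mismatch` over the penumbra parameter drives the convolved calculation to the measured profile; since both carry the same volume averaging, the unconvolved model then approximates the real profile.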

  15. An effective approach using blended learning to assist the average students to catch up with the talented ones

    Directory of Open Access Journals (Sweden)

    Baijie Yang

    2013-03-01

    Because average students are the prevailing part of the student population, it is important but difficult for educators to help them by improving their learning efficiency and learning outcomes in school tests. We conducted a quasi-experiment with two English classes taught by one teacher in the second term of the first year of a junior high school. The experimental class was composed of average students (N=37), while the control class comprised talented students (N=34). The two classes therefore performed differently in the English subject, with a mean difference of 13.48 that is statistically significant based on an independent-samples t-test. We tailored the web-based intelligent English instruction system, called Computer Simulation in Educational Communication (CSIEC) and featuring instant feedback, to the learning content of the experiment term, and the experimental class used it one school hour per week throughout the term. This blended learning setting, focused on vocabulary and dialogue acquisition, helped the students in the experimental class improve their learning performance gradually. The mean difference in the final test between the two classes decreased to 3.78, while the mean difference in the test designed for the specially drilled vocabulary knowledge decreased to 2.38 and was not statistically significant. Student interviews and a survey also demonstrated the students' favorable attitude toward the blended learning system. We conclude that the long-term integration of this content-oriented blended learning system with instant feedback into ordinary classes is an effective approach to assist average students in catching up with talented ones.

  16. Mass transfer kinetic mechanism in monolithic columns and application to the characterization of new research monolithic samples with different average pore sizes.

    Science.gov (United States)

    Gritti, Fabrice; Guiochon, Georges

    2009-06-05

    A general reduced HETP (height equivalent to a theoretical plate) equation is proposed that accounts for the mass transfer of a wide range of molecular weight compounds in monolithic columns. The detailed derivation of each of the individual and independent mass transfer contributions (longitudinal diffusion, eddy dispersion, film mass transfer resistance, and trans-skeleton mass transfer resistance) is discussed. The reduced HETPs of a series of small molecules (phenol, toluene, acenaphthene, and amylbenzene) and of a larger molecule, insulin, were measured on three research-grade monolithic columns (M150, M225, M350) having different average pore sizes (approximately 150, 225, and 350 Å, respectively) but the same dimensions (100 mm x 4.6 mm). The first and second central moments of 2 μL samples were measured and corrected for the extra-column contributions. The h data were fitted to the new HETP equation in order to identify which contribution controls the band broadening in monolithic columns. The contribution of the B-term was found to be negligible compared to that of the A-term, even at very low reduced velocities. At high reduced velocities (ν > 5), the C-term of the monolithic columns is controlled by film mass transfer resistance between the eluent circulating in the large throughpores and the eluent stagnant inside the thin porous skeleton. The experimental Sherwood number measured on the monolith columns increases from 0.05 to 0.22 while the adsorption energy increases by nearly 6 kJ/mol. Stronger adsorption leads to an increase in the value of the estimated film mass transfer coefficient when a first-order film mass transfer rate is assumed (j ∝ k_f ΔC). The average pore size and the trans-skeleton mass transfer have no (<0.5%, small molecules) or little (<10%, insulin) effect on the overall C-term.
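    As a rough illustration of how h data can be fitted to a plate-height equation, the sketch below uses the minimal three-term reduced form h(ν) = B/ν + A + Cν; the paper's general equation decomposes A and C further into eddy-dispersion, film, and trans-skeleton contributions, so this is a simplification.

```python
import numpy as np

def reduced_hetp(nu, A, B, C):
    """Minimal reduced plate-height model: h(nu) = B/nu + A + C*nu."""
    return B / nu + A + C * nu

def fit_hetp(nu, h):
    """Least-squares estimate of (A, B, C); the model is linear in the
    parameters, so an ordinary linear solve suffices."""
    X = np.column_stack([np.ones_like(nu), 1.0 / nu, nu])
    coef, *_ = np.linalg.lstsq(X, h, rcond=None)
    return tuple(coef)  # (A, B, C)
```

Comparing the fitted B against A at low reduced velocity, and inspecting the fitted C at high velocity, mirrors the kind of term-by-term attribution described in the abstract.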

  17. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  18. An approach to the neck mass | Thandar | Continuing Medical ...

    African Journals Online (AJOL)

    An approach to the neck mass. MA Thandar, NE Jonas. Abstract: No abstract available.

  19. On the Relationship between Solar Wind Speed, Earthward-Directed Coronal Mass Ejections, Geomagnetic Activity, and the Sunspot Cycle Using 12-Month Moving Averages

    Science.gov (United States)

    Wilson, Robert M.; Hathaway, David H.

    2008-01-01

    For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval from late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed maximum of aa (aaM) during the declining portion of cycle 23, the maximum sunspot number RM for cycle 24 is predicted to be larger than average, about 168 +/- 60 (the 90% prediction interval), whereas based on the expected minimum of aa (aam) for cycle 24 (greater than or equal to 14.6), RM for cycle 24 should measure greater than or equal to 118 +/- 30, yielding an overlap of about 128 +/- 20.
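    The two building blocks of the analysis, a 12-month running mean and a Pearson correlation between the smoothed series, can be sketched as follows (the data below are synthetic, and the function names are illustrative):

```python
import numpy as np

def moving_average_12(x):
    """Centered 12-month running mean of a monthly series (valid region only)."""
    return np.convolve(x, np.ones(12) / 12.0, mode="valid")

def pearson_r(a, b):
    """Pearson linear correlation coefficient, the r quoted in the abstract."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

Applying the running mean before correlating suppresses month-to-month noise, which is why the smoothed aa index and solar wind speed track each other so closely over the cycle.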

  20. Endoscopic endonasal approach for mass resection of the pterygopalatine fossa

    Directory of Open Access Journals (Sweden)

    Jan Plzák

    OBJECTIVES: Access to the pterygopalatine fossa is very difficult due to its complex anatomy. Therefore, an open approach is traditionally used, but morbidity is unavoidable. To overcome this problem, an endoscopic endonasal approach was developed as a minimally invasive procedure. The aim of the present study was to evaluate the utility of the endoscopic endonasal approach for the surgical management of both benign and malignant tumors of the pterygopalatine fossa. METHOD: We report our experience with the endoscopic endonasal approach for the management of both benign and malignant tumors and summarize recent recommendations. A total of 13 patients underwent surgery via the endoscopic endonasal approach for pterygopalatine fossa masses from 2014 to 2016. This case group consisted of 12 benign tumors (10 juvenile nasopharyngeal angiofibromas and two schwannomas) and one malignant tumor. RESULTS: No recurrent tumor developed during the follow-up period. One residual tumor (a juvenile nasopharyngeal angiofibroma) that remained in the cavernous sinus was stable. There were no significant complications. Typical sequelae included hypesthesia of the maxillary nerve, trismus, and dry eye syndrome. CONCLUSION: The low frequency of complications together with the high efficacy of resection support the use of the endoscopic endonasal approach as a feasible, safe, and beneficial technique for the management of masses in the pterygopalatine fossa.

  1. Standardization approaches in absolute quantitative proteomics with mass spectrometry.

    Science.gov (United States)

    Calderón-Celis, Francisco; Encinar, Jorge Ruiz; Sanz-Medel, Alfredo

    2017-07-31

    Mass spectrometry-based approaches have enabled important breakthroughs in quantitative proteomics in the last decades. This development is reflected in better quantitative assessment of protein levels as well as in the understanding of post-translational modifications and of protein complexes and networks. Nowadays, the focus of quantitative proteomics has shifted from the relative determination of proteins (i.e., differential expression between two or more cellular states) to absolute quantity determination, required for a more thorough characterization of biological models and comprehension of proteome dynamics, as well as for the search and validation of novel protein biomarkers. However, the physico-chemical environment of the analyte species strongly affects the ionization efficiency in most mass spectrometry (MS) types, thereby requiring specially designed standardization approaches to provide absolute quantification. The most common such approaches nowadays include (i) the use of stable isotope-labeled peptide standards, isotopologues of the target proteotypic peptides expected after tryptic digestion of the target protein; (ii) the use of stable isotope-labeled protein standards to compensate for sample preparation, sample loss, and proteolysis steps; (iii) isobaric reagents, which after fragmentation in the MS/MS analysis provide a final detectable mass shift and can be used to tag both analyte and standard samples; (iv) label-free approaches in which the absolute quantitative data are obtained not through any kind of labeling but from computational normalization of the raw data and adequate standards; and (v) elemental mass spectrometry-based workflows able to provide directly absolute quantification of peptides/proteins that contain an ICP-detectable element. A critical insight from the Analytical Chemistry perspective of the different standardization approaches and their combinations used so far for absolute quantitative MS-based (molecular and

  2. Precise measurement of spin-averaged chi_{cJ}(1P) mass using photon conversions in psi(2S) -> gamma chi_{cJ}

    OpenAIRE

    Ablikim, M.; Bai, J. Z.

    2005-01-01

    Using photon conversions to e+e- pairs, the energy spectrum of inclusive photons from psi(2S) radiative decays is measured by BESII at the Beijing Electron-Positron Collider. The chi_{cJ}(1P) states (J = 0, 1, 2) are clearly observed with energy resolutions between 2.3 and 3.8 MeV, and their masses and the spin-averaged chi_{cJ} mass are determined to be M(chi_c0) = 3414.21 ± 0.39 ± 0.27 MeV, M(chi_c1) = 3510.30 ± 0.14 ± 0.16 MeV, M(chi_c2) = 3555.70 ± 0.59 ± 0.39 MeV, and M(^3P_cog) = 3524.85 ± 0...
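    The spin-averaged (center-of-gravity) mass quoted above follows from weighting each chi_cJ mass by its 2J+1 spin multiplicity, which can be checked directly against the abstract's central values:

```python
def spin_averaged_mass(m_chi_c0, m_chi_c1, m_chi_c2):
    """Center of gravity of the chi_cJ triplet, weighted by the 2J+1 spin
    multiplicities: M(3P_cog) = (1*M0 + 3*M1 + 5*M2) / 9."""
    return (1 * m_chi_c0 + 3 * m_chi_c1 + 5 * m_chi_c2) / 9.0

# Central values from the abstract (MeV):
m_cog = spin_averaged_mass(3414.21, 3510.30, 3555.70)  # ≈ 3524.85
```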

  3. Traditional approaches versus mass spectrometry in bacterial identification and typing.

    Science.gov (United States)

    Sloan, Angela; Wang, Gehua; Cheng, Keding

    2017-10-01

    Biochemical methods such as metabolite testing and serotyping are traditionally used in clinical microbiology laboratories to identify and categorize microorganisms. Due to the large variety of bacteria, identifying representative metabolites is tedious, while raising high-quality antisera or antibodies unique to specific biomarkers used in serotyping is very challenging, sometimes even impossible. Although serotyping is a certified approach for differentiating bacteria such as E. coli and Salmonella at the subspecies level, the method is tedious, laborious, and not practical during an infectious disease outbreak. Mass spectrometry (MS) platforms, especially matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF-MS), have recently become popular in the field of bacterial identification due to their fast speed and low cost. In the past few years, we have used liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based approaches to solve various problems hindering serotyping and have overcome some insufficiencies of the MALDI-TOF-MS platform. The current article aims to review the characteristics, advantages, and disadvantages of MS-based platforms over traditional approaches in bacterial identification and categorization. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  4. Thomas-Fermi approach to nuclear mass formula. Pt. 1

    International Nuclear Information System (INIS)

    Dutta, A.K.; Arcoragi, J.P.; Pearson, J.M.; Tondeur, F.

    1986-01-01

    With a view to having a more secure basis for the nuclear mass formula than is provided by the drop(let) model, we make a preliminary study of the possibilities offered by the Skyrme-ETF method. Two ways of incorporating shell effects are considered: the ''Strutinsky-integral'' method of Chu et al., and the ''expectation-value'' method of Brack et al. Each of these methods is compared with the HF method in an attempt to see how reliably they extrapolate from the known region of the nuclear chart out to the neutron-drip line. The Strutinsky-integral method is shown to perform particularly well, and to offer a promising approach to a more reliable mass formula. (orig.)

  5. Elucidating fluctuating diffusivity in center-of-mass motion of polymer models with time-averaged mean-square-displacement tensor

    Science.gov (United States)

    Miyaguchi, Tomoshige

    2017-10-01

    There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.

  6. Metallurgical source-contribution analysis of PM10 annual average concentration: A dispersion modeling approach in moravian-silesian region

    Directory of Open Access Journals (Sweden)

    P. Jančík

    2013-10-01

    Full Text Available The goal of the article is to present analysis of metallurgical industry contribution to annual average PM10 concentrations in Moravian-Silesian based on means of the air pollution modelling in accord with the Czech reference methodology SYMOS´97.

  7. Test of Axel-Brink predictions by a discrete approach to resonance-averaged (n,γ) spectroscopy

    International Nuclear Information System (INIS)

    Raman, S.; Shahal, O.; Slaughter, G.G.

    1981-01-01

    The limitations imposed by Porter-Thomas fluctuations in the study of primary γ rays following neutron capture have been partly overcome by obtaining individual γ-ray spectra from 48 resonances in the 173 Yb(n,γ) reaction and summing them after appropriate normalizations. The resulting average radiation widths (and hence the γ-ray strength function) are in good agreement with the Axel-Brink predictions based on a giant dipole resonance model

  8. Differential population synthesis approach to mass segregation in M92

    International Nuclear Information System (INIS)

    Tobin, W.J.

    1979-01-01

    Spectra are presented of 26 low-metal stars and of the center and one-quarter intensity positions of M92. Spectral coverage is from 390 to 870 nm with resolution better than 1 nm in the blue and 2 nm in the red. Individual pixel signal-to-noise is about 100. Dwarf features are notably absent from the M92 spectra. Numerical estimates of 36 absorption features are extracted from every spectrum, as are two continuum indices. Mathematical models are constructed describing each feature's dependence on stellar color, luminosity, and metal content and then used to estimate the metal content of six of the stars for which it was not previously known. For 10 features reliably measured at M92's center and edge, a mass segregation sensitivity parameter is derived from each feature's deduced luminosity dependence. The ratios of feature equivalent widths at cluster edge and center are compared to this sensitivity: no convincing evidence of mass segregation is seen. The only possible edge-to-center difference seen is in the Mg b 517.4 nm feature. Three of the 10 cluster features can be of interstellar origin, at least in part; in particular, the luminosity-sensitive Na D line cannot be used as a segregation indicator. The experience gained suggests that an integrated-spectrum approach to globular cluster mass segregation is very difficult. An appendix describes in detail the capabilities of the Pine Bluff Observatory 0.91 m telescope, Cassegrain grating spectrograph, and intensified Reticon dual diode-array detector. It is possible to determine a highly consistent wavelength calibration.

  9. Sustainability of algae derived biodiesel: a mass balance approach.

    Science.gov (United States)

    Pfromm, Peter H; Amanor-Boadu, Vincent; Nelson, Richard

    2011-01-01

    A rigorous chemical engineering mass balance/unit operations approach is applied here to biodiesel from algae mass culture. An equivalent of 50,000,000 gallons per year (0.006002 m3/s) of petroleum-based Number 2 fuel oil (US diesel for compression-ignition engines; about 0.1% of annual US consumption) from oleaginous algae is the target. Methyl algaeate and ethyl algaeate diesel can, according to this analysis, conceptually be produced largely in a technologically sustainable way, albeit at a lower available diesel yield. About 11 square miles of algae ponds would be needed under the optimistic assumption of a biomass yield of 50 g per day per m2 of pond area. CO2 to foster algae growth should be supplied from a sustainable source such as biomass-based ethanol production. Reliance on fossil-based CO2 from power plants or fertilizer production renders algae diesel non-sustainable in the long term. Copyright © 2010 Elsevier Ltd. All rights reserved.
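    A back-of-envelope version of the pond-area estimate can be sketched as follows. Only the 50 g/m2/day yield comes from the abstract; the lipid fraction and fuel density below are illustrative assumptions, and the paper's actual mass balance is more detailed.

```python
# Illustrative constants (assumed, except the areal yield from the abstract):
GAL_TO_L = 3.785
DIESEL_DENSITY_KG_PER_L = 0.84   # assumed density of Number 2 fuel oil
LIPID_FRACTION = 0.30            # assumed algal oil content, dry-mass basis
YIELD_G_PER_M2_DAY = 50.0        # optimistic yield quoted in the abstract
M2_PER_SQUARE_MILE = 2.59e6

def pond_area_square_miles(gallons_per_year):
    """Pond area needed to grow enough dry algal biomass to supply the
    fuel-equivalent mass, given the assumed lipid fraction and yield."""
    fuel_kg = gallons_per_year * GAL_TO_L * DIESEL_DENSITY_KG_PER_L
    biomass_kg = fuel_kg / LIPID_FRACTION            # dry algae needed per year
    areal_kg_per_m2_year = YIELD_G_PER_M2_DAY / 1000.0 * 365.0
    return biomass_kg / areal_kg_per_m2_year / M2_PER_SQUARE_MILE

area = pond_area_square_miles(50e6)  # ≈ 11 square miles
```

With these assumptions the estimate lands near the abstract's ~11 square miles, showing how sensitive the footprint is to the assumed lipid fraction and areal yield.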

  10. A Bayesian model averaging approach to examining changes in quality of life among returning Iraq and Afghanistan veterans.

    Science.gov (United States)

    Stock, Eileen M; Kimbrel, Nathan A; Meyer, Eric C; Copeland, Laurel A; Monte, Ralph; Zeber, John E; Gulliver, Suzy Bird; Morissette, Sandra B

    2014-09-01

    Many Veterans from the conflicts in Iraq and Afghanistan return home with physical and psychological impairments that impact their ability to enjoy normal life activities and diminish their quality of life (QoL). The present research aimed to identify predictors of QoL over an eight-month period using Bayesian model averaging (BMA), which is a statistical technique useful for maximizing power with smaller sample sizes. A sample of 117 Iraq and Afghanistan Veterans receiving care in a southwestern health care system was recruited, and BMA examined the impact of key demographics (e.g., age, gender), diagnoses (e.g., depression), and treatment modalities (e.g., individual therapy, medication) on QoL over time. Multiple imputation based on Gibbs sampling was employed for incomplete data (6.4% missingness). Average follow-up QoL scores were significantly lower than at baseline (73.2 initial versus 69.5 four-month and 68.3 eight-month). Employment was associated with increased QoL during each follow-up, while post-traumatic stress disorder and Black race were inversely related. Additionally, predictive models indicated that depression, income, treatment for a medical condition, and group psychotherapy were strong negative predictors of four-month QoL but not eight-month QoL. Copyright © 2014 John Wiley & Sons, Ltd.
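    A common large-sample surrogate for Bayesian model averaging weights each candidate model by an approximate posterior probability derived from its BIC; the sketch below illustrates that generic mechanism and is not the authors' exact procedure.

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    w_k ∝ exp(-0.5 * (BIC_k - min BIC)), normalized to sum to one."""
    bics = np.asarray(bics, dtype=float)
    delta = bics - bics.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def bma_prediction(predictions, bics):
    """Model-averaged prediction: each model's prediction weighted by its
    approximate posterior probability."""
    return float(np.dot(bma_weights(bics), predictions))
```

Averaging over models rather than committing to a single one is what lets BMA retain power with the modest sample size (N = 117) mentioned in the abstract.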

  11. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article, two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong... ...approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
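    The quaternion-barycenter estimate that the article compares against the Riemannian mean can be sketched as follows; the sign alignment is needed because q and -q encode the same rotation.

```python
import numpy as np

def quaternion_barycenter(quats):
    """Naive rotation average: normalize the Euclidean barycenter of unit
    quaternions, after aligning signs to the first quaternion. This is the
    approximation discussed in the article, not the Riemannian mean."""
    quats = np.asarray(quats, dtype=float)
    ref = quats[0]
    aligned = np.array([q if np.dot(q, ref) >= 0 else -q for q in quats])
    mean = aligned.mean(axis=0)
    return mean / np.linalg.norm(mean)
```

For small rotation spreads this barycenter is close to the Riemannian mean, which is consistent with the article's point that the naive methods act as approximations to the metric-based approach.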

  12. Generic features of the dynamics of complex open quantum systems: statistical approach based on averages over the unitary group.

    Science.gov (United States)

    Gessner, Manuel; Breuer, Heinz-Peter

    2013-04-01

    We obtain exact analytic expressions for a class of functions expressed as integrals over the Haar measure of the unitary group in d dimensions. Based on these general mathematical results, we investigate generic dynamical properties of complex open quantum systems, employing arguments from ensemble theory. We further generalize these results to arbitrary eigenvalue distributions, allowing a detailed comparison of typical regular and chaotic systems with the help of concepts from random matrix theory. To illustrate the physical relevance and the general applicability of our results we present a series of examples related to the fields of open quantum systems and nonequilibrium quantum thermodynamics. These include the effect of initial correlations, the average quantum dynamical maps, the generic dynamics of system-environment pure state entanglement and, finally, the equilibration of generic open and closed quantum systems.

  13. Thermodynamically Constrained Averaging Theory Approach for Modeling Flow and Transport Phenomena in Porous Medium Systems: 7. Single-Phase Megascale Flow Models.

    Science.gov (United States)

    Gray, William G; Miller, Cass T

    2009-08-01

    This work is the seventh in a series that introduces and employs the thermodynamically constrained averaging theory (TCAT) for modeling flow and transport in multiscale porous medium systems. This paper expands the previous analyses in the series by developing models at a scale where spatial variations within the system are not considered. Thus the time variation of variables averaged over the entire system is modeled in relation to fluxes at the boundary of the system. This implementation of TCAT makes use of conservation equations for mass, momentum, and energy as well as an entropy balance. Additionally, classical irreversible thermodynamics is assumed to hold at the microscale and is averaged to the megascale, or system scale. The fact that the local equilibrium assumption does not apply at the megascale points to the importance of obtaining closure relations that account for the large-scale manifestation of small-scale variations. Example applications built on this foundation are suggested to stimulate future work.

  14. Quantum wavepacket ab initio molecular dynamics: an approach for computing dynamically averaged vibrational spectra including critical nuclear quantum effects.

    Science.gov (United States)

    Sumner, Isaiah; Iyengar, Srinivasan S

    2007-10-18

    We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method, which combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond-length, quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, where the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and quantum wavepacket ab initio dynamics, to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicity.

  15. Quantification of benzene, toluene, ethylbenzene and o-xylene in internal combustion engine exhaust with time-weighted average solid phase microextraction and gas chromatography mass spectrometry.

    Science.gov (United States)

    Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat

    2015-05-11

    A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with an exposed fiber (outside the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. Effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without SPME coating was studied, as was the effect of sample storage time on loss of n. Retracted TWA-SPME extractions followed the theoretical model. The extracted n of BTEX was proportional to Cgas, t, Dg and T and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas direct injection method. Copyright © 2015 Elsevier B.V. All rights reserved.
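    The diffusion-controlled TWA uptake described above follows Fick's first law: the extracted mass n grows with Cgas, t and Dg and falls with the retraction depth Z. A minimal sketch, with the needle cross-sectional area as an assumed illustrative input:

```python
def twa_extracted_mass(c_gas, t, d_g, z, area):
    """Fick's-first-law model of a retracted-fiber TWA sampler:
    n = D_g * A * C_gas * t / Z. Symbols follow the abstract; the needle
    cross-section `area` is an assumed input, not a value from the paper."""
    return d_g * area * c_gas * t / z
```

The proportionalities are exactly the knobs the method exploits: lengthening t or shallowing Z raises the extracted mass, which is how the dynamic range of quantifiable Cgas is tuned.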

  16. More controlling child-feeding practices are found among parents of boys with an average body mass index compared with parents of boys with a high body mass index.

    Science.gov (United States)

    Brann, Lynn S; Skinner, Jean D

    2005-09-01

    To determine whether differences existed in mothers' and fathers' perceptions of their sons' weight, controlling child-feeding practices (ie, restriction, monitoring, and pressure to eat), and parenting styles (ie, authoritarian, authoritative, and permissive) by their sons' body mass index (BMI). One person (L.S.B.) interviewed mothers and boys using validated questionnaires and measured boys' weight and height; fathers completed questionnaires independently. Subjects were white, preadolescent boys and their parents. Boys were grouped by their BMI into an average-BMI group (n=25; BMI percentile between 33rd and 68th) and a high-BMI group (n=24; BMI percentile ≥85th). Multivariate analyses of variance and analyses of variance were used. Mothers and fathers of boys with a high BMI saw their sons as more overweight (mothers P=.03, fathers P=.01) and were more concerned about their sons' weight. No differences were found in parenting by boys' BMI groups for either mothers or fathers. More controlling child-feeding practices were found among mothers (pressure to eat) and fathers (pressure to eat and monitoring) of boys with an average BMI compared with parents of boys with a high BMI. A better understanding of the relationships between feeding practices and boys' weight is necessary. However, longitudinal research is needed to provide evidence of causal association.

  17. Geotail observations of plasma sheet ion composition over 16 years: On variations of average plasma ion mass and O+ triggering substorm model

    Science.gov (United States)

    Nosé, M.; Ieda, A.; Christon, S. P.

    2009-07-01

    We examined long-term variations of ion composition in the plasma sheet, using energetic (9.4-212.1 keV/e) ion flux data obtained by the suprathermal ion composition spectrometer (STICS) sensor of the energetic particle and ion composition (EPIC) instrument on board the Geotail spacecraft. EPIC/STICS observations are available from 17 October 1992 for more than 16 years, covering the declining phase of solar cycle 22, all of solar cycle 23, and the early phase of solar cycle 24. This unprecedented long-term data set revealed that (1) the He+/H+ and O+/H+ flux ratios in the plasma sheet were dependent on the F10.7 index; (2) the F10.7 index dependence is stronger for O+/H+ than He+/H+; (3) the O+/H+ flux ratio is also weakly correlated with the ΣKp index; and (4) the He2+/H+ flux ratio in the plasma sheet appeared to show no long-term trend. From these results, we derived empirical equations related to plasma sheet ion composition and the F10.7 index and estimated that the average plasma ion mass changes from ˜1.1 amu during solar minimum to ˜2.8 amu during solar maximum. In such a case, the Alfvén velocity during solar maximum decreases to ˜60% of the solar minimum value. Thus, physical processes in the plasma sheet are considered to be much different between solar minimum and solar maximum. We also compared long-term variation of the plasma sheet ion composition with that of the substorm occurrence rate, which is evaluated by the number of Pi2 pulsations. No correlation or negative correlation was found between them. This result contradicts the O+ triggering substorm model, in which heavy ions in the plasma sheet increase the growth rate of the linear ion tearing mode and play an important role in localization and initiation of substorms. In contrast, O+ ions in the plasma sheet may prevent occurrence of substorms.
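    The ~60% Alfvén-velocity figure above follows directly from the quoted mean-mass change, since the Alfvén speed scales as the inverse square root of the mean ion mass at fixed field strength and number density. A small sketch (the composition-ratio weighting and function name are simplifying assumptions for illustration):

```python
import math

def average_ion_mass(r_he, r_o, m_he=4.0, m_o=16.0):
    """Mean ion mass (amu) of an H+/He+/O+ plasma given the He+/H+ and
    O+/H+ abundance ratios, using simple number-density weighting."""
    return (1.0 + m_he * r_he + m_o * r_o) / (1.0 + r_he + r_o)

# v_A ~ B / sqrt(mu0 * n * m), so at fixed B and n the ratio of Alfven
# speeds between solar max (~2.8 amu) and solar min (~1.1 amu) is:
ratio = math.sqrt(1.1 / 2.8)  # close to the ~60% quoted in the abstract
```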

  18. Mass Society/Culture/Media: An Eclectic Approach.

    Science.gov (United States)

    Clavner, Jerry B.

    Instructors of courses in mass society, culture, and communication start out facing three types of difficulties: the historical orientation of learning, the parochialism of various disciplines, and negative intellectually elitist attitudes toward mass culture/media. Added to these problems is the fact that many instructors have little or no…

  19. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    Science.gov (United States)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
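    The long-memory core of an ARFIMA model is the fractional difference operator (1-B)^d. Its binomial expansion has a simple recursive form for the filter weights, sketched below (illustrative code, not the authors' implementation):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Expansion weights of the fractional difference operator (1-B)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d, n_weights=100):
    """Apply a truncated (1-B)^d filter to a series, removing long memory
    before, e.g., fitting GARCH errors to what remains."""
    w = frac_diff_weights(d, n_weights)
    return np.convolve(x, w, mode="full")[: len(x)]

w = frac_diff_weights(0.4, 4)   # [1.0, -0.4, -0.12, -0.064]
w1 = frac_diff_weights(1.0, 4)  # d=1 recovers ordinary differencing
```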

  20. Calculation procedure to determine average mass transfer coefficients in packed columns from experimental data for ammonia-water absorption refrigeration systems

    Energy Technology Data Exchange (ETDEWEB)

    Sieres, Jaime; Fernandez-Seara, Jose [University of Vigo, Area de Maquinas y Motores Termicos, E.T.S. de Ingenieros Industriales, Vigo (Spain)

    2008-08-15

    The ammonia purification process is critical in ammonia-water absorption refrigeration systems. In this paper, a detailed and a simplified analytical model are presented to characterize the performance of the ammonia rectification process in packed columns. The detailed model is based on mass and energy balances and simultaneous heat and mass transfer equations. The simplified model is derived and compared with the detailed model. The range of applicability of the simplified model is determined. A calculation procedure based on the simplified model is developed to determine the volumetric mass transfer coefficients in the vapour phase from experimental data. Finally, the proposed model and other simple calculation methods found in the general literature are compared. (orig.)

  1. MERRA Chem 3D IAU, Precip Mass Flux, Time average 3-hourly (eta coord edges, 1.25X1L73) V5.2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3FECHM or tavg3_3d_chm_Fe data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers edges that is time averaged, 3D model...

  2. Automatic individual arterial input functions calculated from PCA outperform manual and population-averaged approaches for the pharmacokinetic modeling of DCE-MR images.

    Science.gov (United States)

    Sanz-Requena, Roberto; Prats-Montalbán, José Manuel; Martí-Bonmatí, Luis; Alberich-Bayarri, Ángel; García-Martí, Gracián; Pérez, Rosario; Ferrer, Alberto

    2015-08-01

    To introduce a segmentation method to calculate an automatic arterial input function (AIF) based on principal component analysis (PCA) of dynamic contrast-enhanced MR (DCE-MR) imaging and compare it with individual manually selected and population-averaged AIFs using calculated pharmacokinetic parameters. The study included 65 individuals with prostate examinations (27 tumors and 38 controls). Manual AIFs were individually extracted and also averaged to obtain a population AIF. Automatic AIFs were individually obtained by applying PCA to volumetric DCE-MR imaging data and finding the highest correlation of the PCs with a reference AIF. Variability was assessed using coefficients of variation and repeated measures tests. The different AIFs were used as inputs to the pharmacokinetic model, and correlation coefficients, Bland-Altman plots and analysis of variance tests were obtained to compare the results. Automatic PCA-based AIFs were successfully extracted in all cases. The manual and PCA-based AIFs showed good correlation (r between pharmacokinetic parameters ranging from 0.74 to 0.95), with differences below the manual individual variability (RMSCV up to 27.3%). The population-averaged AIF showed larger differences (r from 0.30 to 0.61). The automatic PCA-based approach minimizes the variability associated with obtaining individual volume-based AIFs in DCE-MR studies of the prostate. © 2014 Wiley Periodicals, Inc.
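    The selection step described above (pick the principal component best correlated with a reference AIF shape) can be sketched with plain SVD-based PCA. The toy bolus curve, data sizes and all names below are assumptions for illustration, not the study's pipeline:

```python
import numpy as np

def pca_aif(curves, reference_aif):
    """Return the principal-component time course of voxel enhancement
    curves (voxels x timepoints) that correlates best with a reference
    AIF shape, with its sign fixed to match the reference."""
    centered = curves - curves.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows: PCs
    corrs = [abs(np.corrcoef(pc, reference_aif)[0, 1]) for pc in vt]
    best = vt[int(np.argmax(corrs))]
    if np.corrcoef(best, reference_aif)[0, 1] < 0:
        best = -best  # resolve the sign ambiguity of SVD components
    return best

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
ref = t * np.exp(-5 * t)                      # toy bolus shape
curves = np.outer(rng.uniform(0.5, 2.0, 200), ref)  # scaled copies
curves += 0.01 * rng.standard_normal(curves.shape)  # noise
aif = pca_aif(curves, ref)
```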

  3. Mass spectrometry imaging enriches biomarker discovery approaches with candidate mapping.

    Science.gov (United States)

    Scott, Alison J; Jones, Jace W; Orschell, Christie M; MacVittie, Thomas J; Kane, Maureen A; Ernst, Robert K

    2014-01-01

    Integral to the characterization of radiation-induced tissue damage is the identification of unique biomarkers. Biomarker discovery is a challenging and complex endeavor requiring both sophisticated experimental design and accessible technology. The resources within the National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Consortium, Medical Countermeasures Against Radiological Threats (MCART), allow for leveraging robust animal models with novel molecular imaging techniques. One such imaging technique, MALDI (matrix-assisted laser desorption ionization) mass spectrometry imaging (MSI), allows for the direct spatial visualization of lipids, proteins, small molecules, and drugs/drug metabolites-or biomarkers-in an unbiased manner. MALDI-MSI acquires mass spectra directly from an intact tissue slice in discrete locations across an x, y grid that are then rendered into a spatial distribution map composed of ion mass and intensity. The unique mass signals can be plotted to generate a spatial map of biomarkers that reflects pathology and molecular events. The crucial unanswered questions that can be addressed with MALDI-MSI include identification of biomarkers for radiation damage that reflect the response to radiation dose over time and the efficacy of therapeutic interventions. Techniques in MALDI-MSI also enable integration of biomarker identification among diverse animal models. Analysis of early, sublethally irradiated tissue injury samples from diverse mouse tissues (lung and ileum) shows membrane phospholipid signatures correlated with histological features of these unique tissues. This paper will discuss the application of MALDI-MSI for use in a larger biomarker discovery pipeline.

  4. Microfabricated devices: A new sample introduction approach to mass spectrometry

    Czech Academy of Sciences Publication Activity Database

    Lazar, I.M.; Grym, Jakub; Foret, František

    2006-01-01

    Roč. 25, č. 4 (2006), s. 573-594 ISSN 0277-7037 R&D Projects: GA AV ČR IBS4031209; GA ČR GA203/03/0515 Institutional research plan: CEZ:AV0Z40310501 Keywords : microfluidics * mass spectrometry * proteomics Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 10.947, year: 2006

  5. Thermodynamically Constrained Averaging Theory Approach for Modeling Flow and Transport Phenomena in Porous Medium Systems: 8. Interface and Common Curve Dynamics.

    Science.gov (United States)

    Gray, William G; Miller, Cass T

    2010-12-01

    This work is the eighth in a series that develops the fundamental aspects of the thermodynamically constrained averaging theory (TCAT), which allows for a systematic increase in the scale at which multiphase transport phenomena are modeled in porous medium systems. In these systems, the explicit locations of interfaces between phases and of common curves, where three or more interfaces meet, are not considered at scales above the microscale. Rather, the densities of these quantities arise as areas per volume or lengths per volume. Modeling the dynamics of these measures is an important challenge for robust models of flow and transport phenomena in porous medium systems, as the extent of these regions can have important implications for mass, momentum, and energy transport between and among phases, and for the formulation of a capillary pressure relation with minimal hysteresis. These densities do not exist at the microscale, where the interfaces and common curves correspond to particular locations. Therefore, it is necessary for a well-developed macroscale theory to provide evolution equations that describe the dynamics of interface and common curve densities. Here we point out the challenges and pitfalls in producing such evolution equations, develop a set of such equations based on averaging theorems, and identify the terms that require particular attention in experimental and computational efforts to parameterize the equations. We use the evolution equations developed to specify a closed two-fluid-phase flow model.

  6. The Influence of Body Mass Index and Hip Anatomy on Direct Anterior Approach Total Hip Replacement.

    Science.gov (United States)

    Sang, Weilin; Zhu, Libo; Ma, Jinzhong; Lu, Haiming; Wang, Cong

    2016-01-01

    To investigate the influence of body mass index (BMI) and hip anatomy on direct anterior approach (DAA) total hip replacement. The study is a retrospective analysis of 124 cases of DAA total hip replacement from 2009 to 2012. The BMI, the ratio of the greater trochanter (GT) and anterior superior iliac spine (ASIS) bilaterally (GT/ASIS), and the vertical distance between the ASIS and GT (AGVD) were obtained from medical records. All cases were categorized into three groups (43, 49, and 32 cases, respectively) based on BMI, or divided into two groups based on GT/ASIS (≤1.17 or >1.17) or AGVD (≤86 or >86 mm). Operating time, intraoperative bleeding, and surgical complications were compared between the groups. A longer average operating time, more intraoperative bleeding, and a higher rate of complications were observed in the group with the highest BMI. The complications included a case of intraoperative femur fracture, a wound hematoma, and a lateral femoral cutaneous nerve injury. The group with higher GT/ASIS had a shorter average operating time, less bleeding, and a lower complication rate than the group with lower GT/ASIS. Moreover, the group with higher AGVD showed a shorter average operating time, less bleeding, and a lower complication rate compared with the group with lower AGVD. Our study suggests that lower BMI and larger GT/ASIS and AGVD are associated with a shorter operating time, less bleeding, and a lower complication rate in DAA total hip replacement. These findings are valuable for clinicians in choosing the appropriate surgical approach for different individuals. © 2016 S. Karger AG, Basel.

  7. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    Science.gov (United States)

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
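    The abstract's observation that model-averaged posteriors have larger variance than any single selected model follows from the BMA variance decomposition: the averaged variance adds a between-model spread term. A stripped-down illustration, using BIC-based approximate weights rather than the paper's full posterior computation (all names are assumptions):

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    assuming equal prior model probabilities."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))  # subtract min for numerical safety
    return w / w.sum()

def bma_estimate(estimates, variances, bics):
    """Model-averaged mean and variance. The variance adds the
    between-model spread (estimates - mean)^2, which is why BMA
    posteriors are wider than any single selected model's."""
    w = bma_weights(bics)
    mean = np.sum(w * estimates)
    var = np.sum(w * (variances + (estimates - mean) ** 2))
    return mean, var

# two equally supported models disagreeing on the risk estimate
mean, var = bma_estimate(np.array([1.0, 2.0]), np.array([0.1, 0.1]),
                         np.array([100.0, 100.0]))
# mean is midway (1.5); variance is inflated from 0.1 to 0.35
```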

  8. A new approach to mass spectrometer measurements of thermospheric density

    Science.gov (United States)

    Melfi, L. T., Jr.; Brock, F. J.; Brown, C. A., Jr.

    1974-01-01

    The gas sampling problem in satellite and high-velocity probes was investigated by applying the theory of a drifting Maxwellian gas. A lens system using a free-stream ion source was developed and experimentally evaluated over the pressure range of 10⁻⁵ to 10⁻² N/m² (approximately 10⁻⁷ to 10⁻⁴ torr). The source has high beam transparency, which minimizes gas-surface collisions within, or near, the ionization volume. It is shown that for high ion energy (60 eV), the extracted ion beam has an on-axis energy spread of less than 4 eV, and that 90 percent of the ions are within 2.5 deg of the beam axis. It is concluded that the molecular beam mass spectrometer concept, developed for gas density measurements in the upper atmosphere, substantially reduces gas-surface scattering and gas-surface reactions in the sample, and preserves the integrity of the gas sample during the analysis process. Studies show that both the Scout and Delta launch vehicles have adequate volume, control, velocity, and data acquisition capability necessary to obtain thermospheric number density in real time.

  9. Deference, Denial, and Beyond: A Repertoire Approach to Mass Media and Schooling

    Science.gov (United States)

    Rymes, Betsy

    2011-01-01

    In this article, the author outlines two general research approaches, within the education world, to these mass-mediated formations: "Deference" and "Denial." Researchers who recognize the social practices that give local meaning to mass media formations and ways of speaking do not attempt to recontextualize youth media in their own social…

  10. Fast prediction of pulsed nonlinear acoustic fields from clinically relevant sources using time-averaged wave envelope approach: comparison of numerical simulations and experimental results.

    Science.gov (United States)

    Wójcik, J; Kujawska, T; Nowicki, A; Lewin, P A

    2008-12-01

    The primary goal of this work was to verify experimentally the applicability of the recently introduced time-averaged wave envelope (TAWE) method [J. Wójcik, A. Nowicki, P.A. Lewin, P.E. Bloomfield, T. Kujawska, L. Filipczyński, Wave envelopes method for description of nonlinear acoustic wave propagation, Ultrasonics 44 (2006) 310-329] as a tool for fast prediction of four-dimensional (4D) pulsed nonlinear pressure fields from arbitrarily shaped acoustic sources in attenuating media. The experiments were performed in water at the fundamental frequency of 2.8 MHz for spherically focused (focal length F=80 mm) square (20 x 20 mm) and rectangular (10 x 25 mm) sources similar to those used in the design of 1D linear arrays operating with ultrasonic imaging systems. The experimental results obtained with 10-cycle tone bursts at three different excitation levels corresponding to linear, moderately nonlinear and highly nonlinear propagation conditions (0.045, 0.225 and 0.45 MPa on-source pressure amplitude, respectively) were compared with those yielded by the TAWE approach. The comparison of the experimental results and numerical simulations has shown that the TAWE approach is well suited to predict (to within ±1 dB) both the spatial-temporal and spatial-spectral pressure variations in pulsed nonlinear acoustic beams. The obtained results indicated that implementation of the TAWE approach enabled shortening of computation time in comparison with the time needed for prediction of the full 4D pulsed nonlinear acoustic fields using a conventional (Fourier-series) approach [P.T. Christopher, K.J. Parker, New approaches to nonlinear diffractive field propagation, J. Acoust. Soc. Am. 90 (1) (1991) 488-499]. The reduction in computation time depends on several parameters…

  11. An Approach to Mass Customization of Military Uniforms Using Superoleophobic Nonwoven Fabrics (Postprint)

    Science.gov (United States)

    2010-11-01

    AFRL-RX-TY-TP-2010-0051: An Approach to Mass Customization of Military Uniforms Using Superoleophobic Nonwoven Fabrics (postprint, November 2010; contract FA8650-07-1-5916; author Dnyanada…). Hydroentangled nonwovens and nylon-cotton blended woven fabrics were modified and made superhydrophobic and superoleophobic to protect soldiers against the…

  12. The effect of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry instrumentation parameters on the matrix-assisted laser desorption/ionization simulated size exclusion chromatography number-average mass, weight-average mass and polydispersity values of dextran against corresponding values obtained by size exclusion chromatography.

    Science.gov (United States)

    Bashir, S; Giannakopulos, A E; Liu, J

    2017-12-01

    The matrix-assisted laser desorption/ionization simulated size exclusion chromatography (SECPC) number-average mass, weight-average mass and polydispersity of dextran 1000 were determined by matrix-assisted laser desorption/ionization (MALDI) time-of-flight (TOF) mass spectrometry. The instrument parameters were varied, and the SECPC values determined via the Bruker XMASS software were compared to the values obtained from aqueous-phase size exclusion chromatography. The aqueous-phase size exclusion chromatography values for number-average mass, weight-average mass and polydispersity were 1223 Da, 1500 Da and 1.23 (1010 Da, 1270 Da and 1.26 from the manufacturer), whereas the SECPC values depended on the instrumental parameters. The factors with the greatest effect on the number-average mass, weight-average mass and polydispersity were, in decreasing order: laser attenuation > matrix-analyte molar concentration > matrix-analyte molar ratio > delayed extraction time > solvent-system composition > detector delay. The oligosaccharide signal distribution as a function of laser attenuation indicates that two distinct regions exist in dextran 1000: one corresponds to the higher-mass oligosaccharides (hexasaccharide or greater), while another corresponds to lower oligosaccharides (tetrasaccharide). This distribution depends upon the crystallization of the biopolymer and the efficiency of desorption/ionization, which yields the SECPC value. There was broad agreement between the SECPC and size exclusion chromatography values for dextran, although the polydispersity indicated by SECPC was less than that by size exclusion chromatography (1.10 vs. 1.26). It can be shown that for narrowly polydisperse biopolymers the instrumental conditions are less critical in the determination of number-average mass, weight-average mass and polydispersity, although the SECPC Mn and Mw values are often higher than the corresponding values…
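    The number-average mass, weight-average mass and polydispersity quoted above follow from the standard moment formulas, Mn = Σ(NᵢMᵢ)/Σ(Nᵢ) and Mw = Σ(NᵢMᵢ²)/Σ(NᵢMᵢ), with mass-spectral peak intensities standing in for oligomer counts. A minimal sketch with toy peaks (the masses and intensities below are made up):

```python
def polymer_averages(masses, intensities):
    """Number-average mass Mn, weight-average mass Mw and polydispersity
    Mw/Mn from (mass, intensity) peak lists, treating intensity as the
    number of oligomers N_i at mass M_i."""
    s0 = sum(intensities)                                   # sum N_i
    s1 = sum(n * m for n, m in zip(intensities, masses))    # sum N_i M_i
    s2 = sum(n * m * m for n, m in zip(intensities, masses))
    mn = s1 / s0
    mw = s2 / s1
    return mn, mw, mw / mn

# three toy oligomer peaks (masses in Da, arbitrary intensities)
mn, mw, pdi = polymer_averages([500.0, 1000.0, 1500.0], [1.0, 2.0, 1.0])
# -> Mn = 1000.0, Mw = 1125.0, PDI = 1.125
```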

  13. Bayesian neural network approaches to ovarian cancer identification from high-resolution mass spectrometry data.

    Science.gov (United States)

    Yu, Jiangsheng; Chen, Xue-Wen

    2005-06-01

    The classification of high-dimensional data is always a challenge to statistical machine learning. We propose a novel method named shallow feature selection that assigns each feature a probability of being selected based on the structure of the training data itself. Independent of particular classifiers, the high dimension of biodata can be rapidly reduced to an applicable size for subsequent processing. Moreover, to improve both efficiency and performance of classification, these prior probabilities are further used to specify the distributions of top-level hyperparameters in hierarchical models of Bayesian neural networks (BNN), as well as the parameters in Gaussian process models. Three BNN approaches were derived and then applied to identify ovarian cancer from NCI's high-resolution mass spectrometry data, which yielded an excellent performance in 1000 independent k-fold cross-validations (k = 2,...,10). For instance, average sensitivity and specificity of 98.56 and 98.42%, respectively, were achieved in the 2-fold cross-validations. Furthermore, only one control and one cancer sample were misclassified in the leave-one-out cross-validation. Some other popular classifiers were also tested for comparison. The programs were implemented in MATLAB, R and Neal's fbm.2004-11-10 software.

  14. Mass

    International Nuclear Information System (INIS)

    Quigg, Chris

    2007-01-01

    In the classical physics we inherited from Isaac Newton, mass does not arise, it simply is. The mass of a classical object is the sum of the masses of its parts. Albert Einstein showed that the mass of a body is a measure of its energy content, inviting us to consider the origins of mass. The protons we accelerate at Fermilab are prime examples of Einsteinian matter: nearly all of their mass arises from stored energy. Missing mass led to the discovery of the noble gases, and a new form of missing mass leads us to the notion of dark matter. Starting with a brief guided tour of the meanings of mass, the colloquium will explore the multiple origins of mass. We will see how far we have come toward understanding mass, and survey the issues that guide our research today.

  15. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold, and re-normalization or orthogonalization must be applied to obtain proper rotations. These latter steps have been viewed as ad hoc corrections for the errors introduced by assuming a vector space. The article shows that the two approximative methods can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
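    The barycenter-plus-renormalization estimate discussed above can be sketched for quaternions as follows. The antipodal sign-alignment step (q and -q represent the same rotation) is an implementation detail assumed here, not taken from the paper:

```python
import numpy as np

def barycentric_quaternion_mean(quats):
    """Approximate mean rotation: average unit quaternions componentwise,
    then project the barycenter back onto the unit sphere (the
    re-normalization step the paper analyzes)."""
    q = np.asarray(quats, dtype=float).copy()
    # align hemispheres so q and -q (the same rotation) do not cancel
    for i in range(1, len(q)):
        if np.dot(q[0], q[i]) < 0:
            q[i] = -q[i]
    m = q.mean(axis=0)
    return m / np.linalg.norm(m)  # re-normalization

# rotations about z by +0.2 and -0.2 rad should average to the identity
a = np.array([np.cos(0.1), 0.0, 0.0, np.sin(0.1)])
b = np.array([np.cos(0.1), 0.0, 0.0, -np.sin(0.1)])
mean_q = barycentric_quaternion_mean([a, b])  # -> [1, 0, 0, 0]
```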

  16. Cell-Averaged discretization for incompressible Navier-Stokes with embedded boundaries and locally refined Cartesian meshes: a high-order finite volume approach

    Science.gov (United States)

    Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team

    2017-11-01

    We present a consistent cell-averaged discretization for the incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, and gradient by solving a least-squares system locally. We also construct the inter-level data-transfer operators, like prolongation and restriction for multigrid solvers, using the same least-squares approach. This allows us to retain high order of accuracy near coarse-fine interfaces and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).

  17. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is the heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How to achieve intensification? How the shape optimization of thermal and fluidic devices leads to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies  the definition of the intensification by highlighting the potential role of the multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes.   A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  18. Mass energy-absorption coefficients and average atomic energy-absorption cross-sections for amino acids in the energy range 0.122-1.330 MeV

    Energy Technology Data Exchange (ETDEWEB)

    More, Chaitali V., E-mail: chaitalimore89@gmail.com; Lokhande, Rajkumar M.; Pawar, Pravina P., E-mail: pravinapawar4@gmail.com [Department of physics, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004 (India)

    2016-05-06

    Mass attenuation coefficients of amino acids such as n-acetyl-l-tryptophan, n-acetyl-l-tyrosine and d-tryptophan were measured in the energy range 0.122-1.330 MeV. A NaI(Tl) scintillation detection system was used to detect gamma rays, with a resolution of 8.2% at 0.662 MeV. The measured attenuation coefficient values were then used to determine the mass energy-absorption coefficients (μen/ρ) and average atomic energy-absorption cross sections (σa,en) of the amino acids. Theoretical values were calculated based on XCOM data. Theoretical and experimental values were found to be in good agreement.
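    Experimentally, a mass attenuation coefficient comes from a narrow-beam transmission measurement via the Beer-Lambert law, I = I₀ exp(-(μ/ρ)·t), where t is the mass thickness in g/cm². A minimal sketch with made-up counts (function name and numbers are illustrative):

```python
import math

def mass_attenuation_coefficient(i0, i, mass_thickness):
    """Mass attenuation coefficient mu/rho (cm^2/g) from incident and
    transmitted narrow-beam intensities and the sample's mass thickness
    t = rho * x in g/cm^2: mu/rho = ln(I0/I) / t."""
    return math.log(i0 / i) / mass_thickness

# a 0.5 g/cm^2 sample transmitting half the beam gives ln(2)/0.5
mu_rho = mass_attenuation_coefficient(1000.0, 500.0, 0.5)
```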

  19. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails: one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  20. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across … and the hydraulic gradient across the control plane, and are consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box…

  1. Nuclear mass predictions based on Bayesian neural network approach with pairing and shell effects

    Science.gov (United States)

    Niu, Z. M.; Liang, H. Z.

    2018-03-01

    The Bayesian neural network (BNN) approach is employed to improve the nuclear mass predictions of various models. It is found that the noise error in the likelihood function plays an important role in the predictive performance of the BNN approach. By including a distribution for the noise error, an appropriate value can be found automatically in the sampling process, which optimizes the nuclear mass predictions. Furthermore, two quantities related to nuclear pairing and shell effects are added to the input layer in addition to the proton and mass numbers. As a result, the theoretical accuracies are significantly improved not only for nuclear masses but also for single-nucleon separation energies. Due to the inclusion of the shell effect, the BNN approach predicts a shell-correction structure in the unknown region similar to that in the known region, e.g., the underestimation of nuclear masses around the magic numbers in the relativistic mean-field model. This demonstrates that better predictive performance can be achieved if more physical features are included in the BNN approach.

  2. Assessment of uncertainties of an aircraft-based mass balance approach for quantifying urban greenhouse gas emissions

    Science.gov (United States)

    Cambaliza, M. O. L.; Shepson, P. B.; Caulton, D. R.; Stirm, B.; Samarov, D.; Gurney, K. R.; Turnbull, J.; Davis, K. J.; Possolo, A.; Karion, A.; Sweeney, C.; Moser, B.; Hendricks, A.; Lauvaux, T.; Mays, K.; Whetstone, J.; Huang, J.; Razlivanov, I.; Miles, N. L.; Richardson, S. J.

    2014-09-01

    Urban environments are the primary contributors to global anthropogenic carbon emissions. Because much of the growth in CO2 emissions will originate from cities, there is a need to develop, assess, and improve measurement and modeling strategies for quantifying and monitoring greenhouse gas emissions from large urban centers. In this study the uncertainties in an aircraft-based mass balance approach for quantifying carbon dioxide and methane emissions from an urban environment, focusing on Indianapolis, IN, USA, are described. The relatively level terrain of Indianapolis facilitated the application of mean wind fields in the mass balance approach. We investigate the uncertainties in our aircraft-based mass balance approach by (1) assessing the sensitivity of the measured flux to important measurement and analysis parameters including wind speed, background CO2 and CH4, boundary layer depth, and interpolation technique, and (2) determining the flux at two or more downwind distances from a point or area source (with relatively large source strengths such as solid waste facilities and a power generating station) in rapid succession, assuming that the emission flux is constant. When we quantify the precision in the approach by comparing the estimated emissions derived from measurements at two or more downwind distances from an area or point source, we find that the minimum and maximum repeatability were 12 and 52%, with an average of 31%. We suggest that improvements in the experimental design can be achieved by careful determination of the background concentration, monitoring the evolution of the boundary layer through the measurement period, and increasing the number of downwind horizontal transect measurements at multiple altitudes within the boundary layer.
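    The mass-balance calculation described above can be sketched in a few lines: the emission rate is the measured enhancement above background, integrated over a vertical plane downwind of the source and multiplied by the mean wind speed normal to that plane. The grid values, air molar density, and cell sizes below are illustrative numbers, not data from the Indianapolis flights.

```python
# Sketch of an aircraft mass-balance emission estimate: integrate the
# above-background enhancement over a downwind vertical plane.

def mass_balance_flux(conc_ppm, background_ppm, wind_ms, dx_m, dz_m,
                      mol_per_m3=40.0, g_per_mol=44.0):
    """Estimate an emission flux (g/s) from mole-fraction enhancements.

    conc_ppm: 2D list [level][cell] of measured mole fractions (ppm)
    background_ppm: upwind background mole fraction (ppm)
    wind_ms: mean wind speed perpendicular to the plane (m/s)
    dx_m, dz_m: horizontal and vertical cell sizes (m)
    mol_per_m3: approximate molar density of air (illustrative value)
    """
    flux_mol_s = 0.0
    for level in conc_ppm:
        for c in level:
            enhancement = max(c - background_ppm, 0.0) * 1e-6  # mole fraction
            # tracer moles per m3 of air, advected through the cell area
            flux_mol_s += enhancement * mol_per_m3 * wind_ms * dx_m * dz_m
    return flux_mol_s * g_per_mol  # g/s

# Toy example: a uniform 2 ppm CO2 enhancement over a 1 km x 1 km plane
grid = [[402.0] * 10 for _ in range(5)]   # 10 cells wide, 5 flight levels
flux_gs = mass_balance_flux(grid, 400.0, wind_ms=5.0, dx_m=100.0, dz_m=200.0)
```

The sensitivity tests described in the abstract (background choice, boundary layer depth, interpolation) amount to perturbing these inputs and observing the spread of the resulting flux.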

  3. GIS-based Approaches to Catchment Area Analyses of Mass Transit

    DEFF Research Database (Denmark)

    Andersen, Jonas Lohmann Elkjær; Landex, Alex

    2009-01-01

    Catchment area analyses of stops or stations are used to investigate the potential number of travelers to public transportation. These analyses are considered a strong decision tool in the planning process of mass transit, especially railroads. Catchment area analyses are GIS-based buffer and overlay analyses with different approaches depending on the desired level of detail. A simple but straightforward approach to implement is the Circular Buffer Approach, where catchment areas are circular. A more detailed approach is the Service Area Approach, where catchment areas are determined by a street network search to simulate the actual walking distances. A refinement of the Service Area Approach is to implement additional time resistance in the network search to simulate obstacles in the walking environment. This paper reviews and compares the different GIS-based catchment area approaches, their level…
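    The Circular Buffer Approach reduces to a straight-line distance test around each stop; a minimal sketch in planar (projected) coordinates follows. The station and address coordinates are made up for illustration; the Service Area Approach would replace the straight-line distance with a street-network distance.

```python
import math

def circular_catchment(stop, points, radius_m):
    """Return the points within straight-line distance radius_m of stop.

    Coordinates are planar (x, y) in metres, as in a projected GIS layer.
    This is the Circular Buffer Approach; a network search would be
    needed to approximate actual walking distances.
    """
    sx, sy = stop
    return [p for p in points if math.hypot(p[0] - sx, p[1] - sy) <= radius_m]

# Hypothetical addresses around a station at the origin
addresses = [(100, 0), (300, 400), (600, 0), (0, 790), (500, 700)]
within_500 = circular_catchment((0, 0), addresses, 500.0)
```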

  4. Sliding Mode Control for Mass Moment Aerospace Vehicles Using Dynamic Inversion Approach

    Directory of Open Access Journals (Sweden)

    Xiao-Yu Zhang

    2013-01-01

    The moving mass actuation technique offers significant advantages over conventional aerodynamic control surfaces and reaction control systems, because the actuators are contained entirely within the airframe's geometric envelope. Modeling, control, and simulation of Mass Moment Aerospace Vehicles (MMAV) utilizing moving mass actuators are discussed. The dynamics of the MMAV are separated into two parts on the basis of two-time-scale separation theory: the dynamics of the fast states and the dynamics of the slow states. Then, to suppress system chattering and maintain tracking performance under aerodynamic parameter perturbations, a flight control system is designed for the two subsystems using a fuzzy sliding mode control approach. The simulation results demonstrate the effectiveness of the proposed autopilot design approach. Meanwhile, the chattering phenomenon that frequently appears in conventional variable structure systems is eliminated without degrading the system's robustness.

  5. Effective-mass approach for n-type semiconductor nanowire MOSFETs arbitrarily oriented

    International Nuclear Information System (INIS)

    Bescond, Marc; Cavassilas, Nicolas; Lannoo, Michel

    2007-01-01

    A method for calculating the effective masses in arbitrarily oriented semiconductor nanowires is presented. To avoid the full three-dimensional (3D) resolution of the Schroedinger equation, the method decouples, within a Cartesian system, the transport direction from the cross section. The results give new effective-mass expressions for each valley and channel orientation. As a direct application, transport in [100]-oriented Ge nanowire metal-oxide-semiconductor field-effect transistors (MOSFETs) is then studied using a self-consistent 'mode-space' approach expressed in the nonequilibrium Green's function formalism. Along this wire orientation, we show that the effective masses resulting from our approach are very close to those obtained using an sp3 tight-binding band-structure calculation for nanowires as thin as 4 nm.

  7. Refining mass formulas for astrophysical applications: A Bayesian neural network approach

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2017-10-01

    Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role both in informing theoretical models and in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r-process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.

  8. Averaging in cosmological models

    OpenAIRE

    Coley, Alan

    2010-01-01

    The averaging problem in cosmology is of considerable importance for the correct interpretation of cosmological data. We review cosmological observations and discuss some of the issues regarding averaging. We present a precise definition of a cosmological model and a rigorous mathematical definition of averaging, based entirely in terms of scalar invariants.

  9. Average action for the N-component φ⁴ theory

    International Nuclear Information System (INIS)

    Ringwald, A.; Wetterich, C.

    1990-01-01

    The average action is a continuum version of the block-spin action in lattice field theories. We compute the one-loop approximation to the average potential for the N-component φ⁴ theory in the spontaneously broken phase. For a finite (linear) block size ∝ k̄⁻¹ this potential is real and nonconvex. For small φ the average potential is quadratic, U_k̄ = -(1/2) k̄² φ², and independent of the original mass parameter and quartic coupling constant. It approaches the convex effective potential as k̄ vanishes. (orig.)

  10. Evaluation of a mass-balance approach to determine consumptive water use in northeastern Illinois

    Science.gov (United States)

    Mills, Patrick C.; Duncker, James J.; Over, Thomas M.; Marian Domanski,; ,; Engel, Frank

    2014-01-01

    A principal component of evaluating and managing water use is consumptive use. This is the portion of water withdrawn for a particular use, such as residential, which is evaporated, transpired, incorporated into products or crops, consumed by humans or livestock, or otherwise removed from the immediate water environment. The amount of consumptive use may be estimated by a water (mass)-balance approach; however, because of the difficulty of obtaining necessary data, its application typically is restricted to the facility scale. The general governing mass-balance equation is: Consumptive use = Water supplied - Return flows.
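    The governing equation above is simple enough to express directly; the supply and return volumes below are hypothetical.

```python
def consumptive_use(water_supplied, return_flows):
    """Mass-balance estimate: the portion of supplied water not returned."""
    return water_supplied - return_flows

# Hypothetical facility: 1000 m3/day supplied, 850 m3/day returned as sewage
cu = consumptive_use(1000.0, 850.0)
cu_fraction = cu / 1000.0   # consumptive fraction of the supplied water
```

In practice the difficulty is not the arithmetic but obtaining reliable facility-scale measurements of both terms, which is why the approach is rarely applied at larger scales.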

  11. Bayesian Model Averaging for Propensity Score Analysis.

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
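    One common approximation to Bayesian model averaging, in the spirit of the approximate route described above, weights each candidate propensity model by its BIC-implied posterior probability and averages the resulting propensity scores. A minimal sketch with made-up log-likelihoods and scores, assuming equal prior model probabilities (this is an illustration of the general BMA weighting idea, not the R package BMA's exact implementation):

```python
import math

def bma_weights(log_likelihoods, n_params, n_obs):
    """Approximate posterior model probabilities from BIC,
    assuming equal prior probability for each model."""
    bics = [-2.0 * ll + k * math.log(n_obs)
            for ll, k in zip(log_likelihoods, n_params)]
    best = min(bics)
    raw = [math.exp(-0.5 * (b - best)) for b in bics]   # relative evidence
    total = sum(raw)
    return [r / total for r in raw]

def averaged_propensity(scores_per_model, weights):
    """Model-averaged propensity score for each unit."""
    return [sum(wt * s[i] for wt, s in zip(weights, scores_per_model))
            for i in range(len(scores_per_model[0]))]

# Three hypothetical propensity-score models fitted to n = 200 units,
# with fitted scores for two example units
w = bma_weights([-120.0, -118.0, -125.0], [3, 5, 2], n_obs=200)
ps = averaged_propensity([[0.20, 0.70], [0.25, 0.65], [0.15, 0.80]], w)
```

The fully Bayesian variant discussed in the article replaces these point weights with MCMC sampling over both parameters and models.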

  12. A dynamic programming approach to de novo peptide sequencing via tandem mass spectrometry.

    Science.gov (United States)

    Chen, T; Kao, M Y; Tepel, M; Rush, J; Church, G M

    2001-01-01

    Tandem mass spectrometry fragments a large number of molecules of the same peptide sequence into charged molecules of prefix and suffix peptide subsequences and then measures the mass/charge ratios of these ions. The de novo peptide sequencing problem is to reconstruct the peptide sequence from given tandem mass spectral data of k ions. By implicitly transforming the spectral data into an NC-spectrum graph G(V, E) where |V| = 2k + 2, we can solve this problem in O(|V||E|) time and O(|V|²) space using dynamic programming. For an ideal noise-free spectrum with only b- and y-ions, we improve the algorithm to O(|V| + |E|) time and O(|V|) space. Our approach can further be used to discover a modified amino acid in O(|V||E|) time. The algorithms have been implemented and tested on experimental data.
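    The spectrum-graph idea can be illustrated with a stripped-down sketch: nodes are putative prefix masses, edges connect nodes whose mass difference equals an amino-acid residue mass, and a path from mass 0 to the total peptide mass spells a candidate sequence. Integer residue masses and an ideal noise-free b-ion spectrum are assumed here; the published algorithm additionally handles y-ions, noise, and mass tolerances.

```python
# Toy spectrum-graph de novo sequencing over integer residue masses.
# A small subset of amino-acid monoisotopic masses, rounded to integers:
RESIDUE_MASS = {'G': 57, 'A': 71, 'S': 87, 'P': 97, 'V': 99, 'L': 113}

def de_novo(prefix_masses, total_mass):
    """Reconstruct one peptide whose prefix masses match the spectrum.

    Nodes are the observed prefix masses plus 0 and total_mass; an edge
    joins m to m+dm when dm is a residue mass. best[m] records a peptide
    string reaching prefix mass m (the dynamic programming table).
    """
    nodes = sorted(set(prefix_masses) | {0, total_mass})
    best = {0: ''}
    for m in nodes:                      # process nodes in mass order
        if m not in best:
            continue
        for aa, dm in RESIDUE_MASS.items():
            if m + dm in nodes and m + dm not in best:
                best[m + dm] = best[m] + aa
    return best.get(total_mass)

# Ideal noise-free "spectrum" for the hypothetical peptide GASP
spectrum = [57, 128, 215]            # prefix masses of G, GA, GAS
peptide = de_novo(spectrum, 312)     # 57 + 71 + 87 + 97
```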

  13. 3D Multiscale Integrated Modeling Approach of Complex Rock Mass Structures

    Directory of Open Access Journals (Sweden)

    Mingchao Li

    2014-01-01

    Based on abundant geological data of different regions and scales in hydraulic engineering, a new approach to integrated 3D modeling at engineering and statistical scales was put forward, considering the complex relationships among geological structures, discontinuities, and hydraulic structures. For engineering-scale geological structures, the 3D rock mass model of the study region was built using the exact-match modeling method and a reliability analysis technique. For statistical-scale jointed rock mass, a random network simulation modeling method was realized, including the Baecher structure plane model, Monte Carlo simulation, and dynamic checking of random discontinuities, and the corresponding software program was developed. Finally, the refined model was reconstructed by integrating the engineering-scale model of rock structures, the statistical-scale model of the discontinuity network, and the hydraulic structures model. It has been applied to a practical hydraulic project and provides the model basis for the analysis of hydraulic rock mass structures.
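    The Baecher-model Monte Carlo step can be sketched as random disc generation: uniformly random centres in the modeling region, a chosen radius distribution, and random orientations. The specific distributions below (exponential radii, uniform dip and dip direction) are illustrative placeholders; a real study would fit them to mapped discontinuity data and then apply the dynamic checks mentioned above.

```python
import random

def baecher_discs(n, region, radius_mean, seed=0):
    """Generate n Baecher-type joint discs inside a box-shaped region.

    region: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in metres.
    radius_mean: mean of the (assumed) exponential radius distribution.
    """
    rng = random.Random(seed)                 # seeded for reproducibility
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = region
    discs = []
    for _ in range(n):
        centre = (rng.uniform(xmin, xmax),
                  rng.uniform(ymin, ymax),
                  rng.uniform(zmin, zmax))
        radius = rng.expovariate(1.0 / radius_mean)
        dip = rng.uniform(0.0, 90.0)          # degrees from horizontal
        dip_dir = rng.uniform(0.0, 360.0)     # azimuth, degrees
        discs.append({'centre': centre, 'radius': radius,
                      'dip': dip, 'dip_dir': dip_dir})
    return discs

# A hypothetical 50 m x 50 m x 30 m rock block with 100 joints
network = baecher_discs(100, ((0, 50), (0, 50), (0, 30)), radius_mean=3.0)
```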

  14. A data base approach for prediction of deforestation-induced mass wasting events

    Science.gov (United States)

    Logan, T. L.

    1981-01-01

    A major topic of concern in timber management is determining the impact of clear-cutting on slope stability. Deforestation treatments on steep mountain slopes have often resulted in a high frequency of major mass wasting events. The Geographic Information System (GIS) is a potentially useful tool for predicting the location of mass wasting sites. With a raster-based GIS, digitally encoded maps of slide hazard parameters can be overlaid and modeled to produce new maps depicting high-probability slide areas. The present investigation examines the raster-based information system as a tool for predicting the location of clear-cut mountain slopes most likely to experience shallow soil debris avalanches. A literature overview is conducted, taking into account vegetation, roads, precipitation, soil type, slope angle and aspect, and models predicting mass soil movements. Attention is given to a data base approach and aspects of slide prediction.

  15. Baseline ICIQ-UI score, body mass index, age, average birth weight, and perineometry duration as promising predictors of the short-term efficacy of Er:YAG laser treatment in stress urinary incontinent women: A prospective cohort study.

    Science.gov (United States)

    Fistonić, Ivan; Fistonić, Nikola

    2018-01-23

    A growing body of evidence indicates that a non-invasive erbium yttrium-aluminum-garnet (Er:YAG) laser may be an effective and highly tolerable treatment for stress urinary incontinence (SUI) in women. The primary objective was to identify pre-intervention predictors of short-term Er:YAG outcomes. The secondary objective was to identify patient segments with the best Er:YAG laser treatment short-term outcomes. A prospective cohort study performed in 2016 at an Ob/Gyn clinic in Zagreb, Croatia, recruited 85 female patients who suffered from SUI. The intervention was performed with a 2940 nm wavelength Er:YAG laser (XS Dynamis, Fotona, Slovenia). Outcomes were the absolute change in the International Consultation on Incontinence Questionnaire-Short Form (ICIQ-UI SF) and a relative decrease in ICIQ-UI score of ≥30% 2-6 months after the intervention. Age and pre-intervention ICIQ-UI values were independent significant predictors of laser treatment efficacy for SUI. A decrease in ICIQ-UI score (minimum important difference, MID) of ≥30% was independently significantly associated with body mass index and ICIQ-UI values before the intervention. All patients with four or five positive predictors saw a clinically relevant decrease in ICIQ-UI of ≥30%. The total accuracy of the predictive model defined by the area under the curve was 0.83 (95%CI 0.74-0.91). At the cut-off of ≥3 positive predictors, the C-index was 0.80 (95%CI 0.71-0.90), the positive predictive value was 0.97 (95%CI 0.87-0.99), and the negative predictive value was 0.53 (95%CI 0.45-0.55). A relevant decrease in ICIQ-UI (MID) of ≥30% can be predicted based on age, body mass index, average birth weight, perineometer squeeze duration, and ICIQ-UI scores before the intervention. The association between the Q-tip test and treatment outcome was moderated by age. Q-tip was a significant predictor for patients between 44 and 53 years of age. The best results should be expected in younger women with a body mass index of ≤23

  16. A new approach for accurate mass assignment on a multi-turn time-of-flight mass spectrometer.

    Science.gov (United States)

    Hondo, Toshinobu; Jensen, Kirk R; Aoki, Jun; Toyoda, Michisato

    2017-12-01

    A simple, effective accurate-mass assignment procedure for a time-of-flight mass spectrometer is desirable. External mass calibration using a mass calibration standard together with an internal mass reference (lock mass) is a common technique for mass assignment; however, polynomial fitting can result in mass-dependent errors. Using the multi-turn time-of-flight mass spectrometer infiTOF-UHV, we were able to obtain multiple time-of-flight data for an ion monitored under several different numbers of laps, which were then used to calculate a mass calibration equation. We have developed a data acquisition system that simultaneously monitors spectra at several different lap conditions with on-the-fly centroid determination and scan-law estimation, where the scan law is a function of acceleration voltage, flight path, and instrumental time delay. Mass errors of less than 0.9 mDa were observed for assigned mass-to-charge ratios (m/z) between 4 and 134 using only 40Ar+ as a reference. It was also observed that estimating the scan law on the fly provides excellent mass-drift compensation.
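    The core idea, fitting a scan law from one reference ion observed at several lap counts and then assigning unknown m/z values from flight times, can be sketched as follows. The path lengths, lap counts, and the simple linear scan law t = t0 + (L0 + n·Lc)·k (with k proportional to √(m/z)) are illustrative assumptions, not infiTOF-UHV specifications.

```python
# Sketch: calibrate a multi-turn TOF scan law from one reference ion
# measured at several lap counts n, then assign an unknown m/z.

def fit_scan_law(laps, times, L0, Lc):
    """Least-squares fit of t = t0 + (L0 + n*Lc)*k for one reference ion.

    L0 is the injection/ejection path length and Lc the length of one
    circuit (assumed known from geometry). Returns (t0, k), where k is
    the flight time per metre, proportional to sqrt(m/z).
    """
    xs = [L0 + n * Lc for n in laps]
    m = len(xs)
    mx = sum(xs) / m
    mt = sum(times) / m
    k = (sum((x - mx) * (t - mt) for x, t in zip(xs, times))
         / sum((x - mx) ** 2 for x in xs))
    t0 = mt - k * mx
    return t0, k

def assign_mz(t_obs, laps_obs, t0, k, mz_ref, L0, Lc):
    """m/z of an unknown ion from its flight time at a given lap count."""
    k_u = (t_obs - t0) / (L0 + laps_obs * Lc)
    return mz_ref * (k_u / k) ** 2

# Synthetic reference data for m/z 40 with t0 = 2.0 us and k = 0.5 us/m
L0, Lc = 1.0, 1.3                     # metres (illustrative geometry)
laps = [10, 20, 50, 100]
times = [2.0 + (L0 + n * Lc) * 0.5 for n in laps]
t0, k = fit_scan_law(laps, times, L0, Lc)

# An ion with twice the m/z flies sqrt(2) times more slowly per metre
t_unknown = 2.0 + (L0 + 30 * Lc) * 0.5 * 2 ** 0.5
mz = assign_mz(t_unknown, 30, t0, k, mz_ref=40.0, L0=L0, Lc=Lc)
```

Because the several lap counts share the same t0, a single reference ion suffices to separate the instrumental time delay from the mass-dependent term, which is the advantage over polynomial calibration noted in the abstract.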

  17. Non-invasive Estimation of Temperature during Physiotherapeutic Ultrasound Application Using the Average Gray-Level Content of B-Mode Images: A Metrological Approach.

    Science.gov (United States)

    Alvarenga, André V; Wilkens, Volker; Georg, Olga; Costa-Félix, Rodrigo P B

    2017-09-01

    Healing therapies that make use of ultrasound are based on raising the temperature in biological tissue. However, it is not possible to heal impaired tissue by applying a high dose of ultrasound. The temperature of the tissue is ultimately the physical quantity that has to be assessed to minimize the risk of undesired injury. Invasive temperature measurement techniques are easy to use, despite the fact that they are detrimental to human well-being. Another approach to assessing a rise in tissue temperature is to derive the material's general response to temperature variations from ultrasonic parameters. In this article, a method for evaluating temperature variations is described. The method is based on the analytical study of an ultrasonic image, in which gray-level variations are correlated to the temperature variations in a tissue-mimicking material. The physical assumption is that temperature variations induce wave propagation changes that modify the backscattered ultrasound signal and are expressed in the ultrasonographic images. For a temperature variation of about 15°C, the expanded uncertainty for a coverage probability of 0.95 was found to be 2.5°C in the heating regime and 1.9°C in the cooling regime. It is possible to use the model proposed in this article in a straightforward manner to monitor temperature variation during a physiotherapeutic ultrasound application, provided the tissue-mimicking material approach is transferred to actual biological tissue. The novelty of such an approach resides in the metrology-based investigation outlined here, as well as in its ease of reproducibility. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  18. Mass Spectrometry Imaging of Biological Tissue: An Approach for Multicenter Studies

    Energy Technology Data Exchange (ETDEWEB)

    Rompp, Andreas; Both, Jean-Pierre; Brunelle, Alain; Heeren, Ronald M.; Laprevote, Olivier; Prideaux, Brendan; Seyer, Alexandre; Spengler, Bernhard; Stoeckli, Markus; Smith, Donald F.

    2015-03-01

    Mass spectrometry imaging has become a popular tool for probing the chemical complexity of biological surfaces. This led to the development of a wide range of instrumentation and preparation protocols. It is thus desirable to evaluate and compare the data output from different methodologies and mass spectrometers. Here, we present an approach for the comparison of mass spectrometry imaging data from different laboratories (often referred to as multicenter studies). This is exemplified by the analysis of mouse brain sections in five laboratories in Europe and the USA. The instrumentation includes matrix-assisted laser desorption/ionization (MALDI)-time-of-flight (TOF), MALDI-QTOF, MALDI-Fourier transform ion cyclotron resonance (FTICR), atmospheric-pressure (AP)-MALDI-Orbitrap, and cluster TOF-secondary ion mass spectrometry (SIMS). Experimental parameters such as measurement speed, imaging bin width, and mass spectrometric parameters are discussed. All datasets were converted to the standard data format imzML and displayed in a common open-source software with identical parameters for visualization, which facilitates direct comparison of MS images. The imzML conversion also allowed exchange of fully functional MS imaging datasets between the different laboratories. The experiments ranged from overview measurements of the full mouse brain to detailed analysis of smaller features (depending on spatial resolution settings), but common histological features such as the corpus callosum were visible in all measurements. High spatial resolution measurements of AP-MALDI-Orbitrap and TOF-SIMS showed comparable structures in the low-micrometer range. We discuss general considerations for planning and performing multicenter studies in mass spectrometry imaging. This includes details on the selection, distribution, and preparation of tissue samples as well as on data handling. Such multicenter studies in combination with ongoing activities for reporting guidelines, a common…

  19. Development and evaluation of an early detection intervention for mouth cancer using a mass media approach.

    Science.gov (United States)

    Eadie, D; MacKintosh, A M; MacAskill, S; Brown, A

    2009-12-03

    Scotland has a high incidence of mouth cancer, but public awareness and knowledge are low compared with other cancers. The West of Scotland Cancer Awareness Project sought to increase public awareness and knowledge of mouth cancer and to encourage early detection of symptoms among an at-risk population of people aged over 40 years from lower socio-economic groups using a mass media approach. The media campaign aimed to increase people's feelings of personal risk, while also enhancing feelings of efficacy and control. To achieve this, a testimonial approach (using real people to tell their own stories) was adopted. Campaign impact and reach were assessed using in-home interviews with a representative sample of the target population in both the campaign area and controls outside of the target area. Surveys were conducted at three stages: at baseline before the campaign was launched, and at 7 and 12 months thereafter. Awareness of media coverage was higher at both follow-up points in the intervention area than in the control area, the differences largely being accounted for by television advertising. The campaign had a short-term, but not a long-term, impact on awareness of the disease and intention to respond to the symptoms targeted by the campaign. Awareness of two of the symptoms featured in the campaign (ulcers and lumps) increased, post-campaign, among the intervention group. While the study provides evidence for the effectiveness of the self-referral model, further work is needed to assess its ability to build public capacity to respond appropriately to symptoms and to compare the cost-effectiveness of a mass media approach against alternative communication approaches and more conventional mass screening.

  20. Topographical change caused by moderate and small floods in a gravel bed ephemeral river – a depth-averaged morphodynamic simulation approach

    Directory of Open Access Journals (Sweden)

    E. S. Lotsari

    2018-03-01

    In ephemeral rivers, channel morphology represents a snapshot at the end of a succession of geomorphic changes caused by floods. In most cases, the channel shape and bedform migration during different phases of a flood hydrograph cannot be identified from field evidence. This paper analyses the timing of riverbed erosion and deposition of a gravel bed ephemeral river channel (Rambla de la Viuda, Spain) during consecutive moderate- (March 2013) and low-magnitude (May 2013) discharge events, by applying a morphodynamic model (Delft3D) calibrated with pre- and post-event surveys by RTK-GPS points and mobile laser scanning. The study reach is mainly depositional and all bedload sediment supplied from adjacent upstream areas is trapped in the study segment, forming gravel lobes. Therefore, estimates of the total bedload sediment mass balance can be obtained from pre- and post-event field surveys for each flood event. The spatially varying grain size data and transport equations were the most important factors for model calibration, in addition to flow discharge. The channel acted as a braided channel during the lower flows of the two discharge events, but when bars were submerged at the high discharges of May 2013, the high fluid forces followed a meandering river planform. The model results showed that erosion and deposition were in total greater during the long-lasting receding phase than during the rising phase of the flood hydrographs. In the case of the moderate-magnitude discharge event, deposition and erosion peaks were predicted to occur at the beginning of the hydrograph, whereas deposition dominated throughout the event. Conversely, the low-magnitude discharge event only experienced the peak of channel changes after the discharge peak. Thus, both types of discharge events highlight the importance of the receding phase for this type of gravel bed ephemeral river channel.

  2. Superconductivity. Quasiparticle mass enhancement approaching optimal doping in a high-T(c) superconductor.

    Science.gov (United States)

    Ramshaw, B J; Sebastian, S E; McDonald, R D; Day, James; Tan, B S; Zhu, Z; Betts, J B; Liang, Ruixing; Bonn, D A; Hardy, W N; Harrison, N

    2015-04-17

    In the quest for superconductors with higher transition temperatures (T(c)), one emerging motif is that electronic interactions favorable for superconductivity can be enhanced by fluctuations of a broken-symmetry phase. Recent experiments have suggested the existence of the requisite broken-symmetry phase in the high-T(c) cuprates, but the impact of such a phase on the ground-state electronic interactions has remained unclear. We used magnetic fields exceeding 90 tesla to access the underlying metallic state of the cuprate YBa2Cu3O(6+δ) over a wide range of doping, and observed magnetic quantum oscillations that reveal a strong enhancement of the quasiparticle effective mass toward optimal doping. This mass enhancement results from increasing electronic interactions approaching optimal doping, and suggests a quantum critical point at a hole doping of p(crit) ≈ 0.18. Copyright © 2015, American Association for the Advancement of Science.

  3. The Multiplexed Chemical Kinetic Photoionization Mass Spectrometer: A New Approach To Isomer-resolved Chemical Kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Osborne, David L.; Zou, Peng; Johnsen, Howard; Hayden, Carl C.; Taatjes, Craig A.; Knyazev, Vadim D.; North, Simon W.; Peterka, Darcy S.; Ahmed, Musahid; Leone, Stephen R.

    2008-08-28

    We have developed a multiplexed time- and photon-energy-resolved photoionization mass spectrometer for the study of the kinetics and isomeric product branching of gas-phase, neutral chemical reactions. The instrument utilizes a side-sampled flow tube reactor, continuously tunable synchrotron radiation for photoionization, a multi-mass double-focusing mass spectrometer with 100% duty cycle, and a time- and position-sensitive detector for single ion counting. This approach enables multiplexed, universal detection of molecules with high sensitivity and selectivity. In addition to measurement of rate coefficients as a function of temperature and pressure, different structural isomers can be distinguished based on their photoionization efficiency curves, providing a more detailed probe of reaction mechanisms. The multiplexed 3-dimensional data structure (intensity as a function of molecular mass, reaction time, and photoionization energy) provides insights that might not be available in serial acquisition, as well as additional constraints on data interpretation.
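
The 3-dimensional data structure described above can be sketched as a numeric cube indexed by mass, time, and photon energy; slicing it recovers either a photoionization efficiency (PIE) curve or a kinetic time trace. The array shapes and contents below are illustrative assumptions, not real instrument output.

```python
import numpy as np

# Toy 3-D data cube: intensity[mass_bin, time_bin, photon_energy_bin].
cube = np.zeros((4, 5, 6))
cube[2, :, 3:] = 1.0   # one species at mass bin 2, ionizing above energy bin 3

def pie_curve(cube, mass_bin):
    """Photoionization efficiency curve: integrate one mass channel over
    reaction time, leaving intensity vs. photon energy."""
    return cube[mass_bin].sum(axis=0)

def kinetic_trace(cube, mass_bin):
    """Kinetic time trace: integrate one mass channel over photon energy."""
    return cube[mass_bin].sum(axis=1)
```

Because all three axes are acquired simultaneously, either projection can be taken after the fact, which is the multiplexing advantage the abstract describes.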

  4. Agenesis of Anterior Falx Cerebri in Patient with Planned Interhemispheric Approach to Third Ventricle Mass.

    Science.gov (United States)

    Finch, Nathan W; Ding, Dale; Oldfield, Edward H; Druzgal, Jason

    2018-01-01

    Complete or partial agenesis of the falx cerebri may occur in pediatric patients with developmental anomalies. However, isolated agenesis of the falx in a developmentally normal adult is exceptionally rare. We describe the first reported case of a patient with a third ventricular mass associated with partial agenesis of the anterior falx cerebri, a circumstance that influenced surgical access to a third ventricular epidermoid cyst. A 60-year-old developmentally normal woman presented with progressively worsening aphasia and altered mental status. Brain magnetic resonance imaging showed obstructive hydrocephalus from a third ventricular mass. An anterior interhemispheric transcallosal approach was planned to remove the tumor. However, upon dural opening there was no evidence of a falx cerebri, an anomaly visible but not reported on the prior imaging studies. An interhemispheric fissure was present, but the medial frontal lobes were densely adherent, with multiple traversing veins within the superficial arachnoid of the fissure. Therefore, a left frontal transcortical approach was performed for microsurgical resection of the tumor. Histopathologic analysis identified the lesion to be an epidermoid cyst. Partial agenesis of the falx cerebri is exceedingly rare in a developmentally normal adult, particularly in the presence of an anatomically normal superior sagittal sinus. If present, however, it is important to note this association preoperatively because partial agenesis of the falx cerebri precludes an interhemispheric transcallosal approach to the lateral and third ventricles. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Anarchy and hierarchy: An approach to study models of fermion masses and mixings

    Science.gov (United States)

    Haba, Naoyuki; Murayama, Hitoshi

    2001-03-01

    We advocate a new approach to study models of fermion masses and mixings, namely, the anarchy proposed by Hall, Murayama, and Weiner. In this approach, we scan the O(1) coefficients randomly. We argue that this is the correct approach when the fundamental theory is sufficiently complicated. Assuming that there is no physical distinction among three generations of neutrinos, the probability distributions in Maki-Nakagawa-Sakata mixing angles can be predicted independent of the choice of the measure. This is because the mixing angles are distributed according to the Haar measure of the Lie groups whose elements diagonalize the mass matrices. The near-maximal mixings, as observed in the atmospheric neutrino data and as required in the large mixing angle solution to the solar neutrino problem, are highly probable. A small hierarchy between Δm² for the atmospheric and the solar neutrinos is obtained very easily; the complex seesaw case gives a hierarchy of a factor of 20 as the most probable one, even though this conclusion is more measure dependent. Ue3 has to be just below the current limit from the CHOOZ experiment. The CP-violating parameter sin δ is preferred to be maximal. We present a simple SU(5)-like extension of anarchy to the charged lepton and quark sectors that works well phenomenologically.
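
The anarchy prescription can be sketched numerically: draw mass matrices with random O(1) entries, diagonalize, and read off a mixing angle from the diagonalizing unitary, which is Haar-distributed by construction. The ensemble choice (i.i.d. complex Gaussian entries) and sample size are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mixing_angles(n_samples=2000):
    """Sample 'anarchic' 3x3 complex mass matrices with O(1) random entries
    and return the atmospheric-like angle theta_23 of the unitary that
    diagonalizes the Hermitian combination M M^dagger."""
    angles = []
    for _ in range(n_samples):
        m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
        _, u = np.linalg.eigh(m @ m.conj().T)  # columns are eigenvectors
        s23 = abs(u[1, 2]) / np.hypot(abs(u[1, 2]), abs(u[2, 2]))
        angles.append(np.arcsin(s23))
    return np.array(angles)

angles = random_mixing_angles()
sin2_2theta = np.sin(2 * angles) ** 2
frac_near_maximal = float(np.mean(sin2_2theta > 0.5))
```

Inspecting the histogram of `sin2_2theta` is one way to see the claimed preference for large mixing without committing to any particular measure on the O(1) coefficients.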

  6. Anarchy and hierarchy: An approach to study models of fermion masses and mixings

    International Nuclear Information System (INIS)

    Haba, Naoyuki; Murayama, Hitoshi

    2001-01-01

    We advocate a new approach to study models of fermion masses and mixings, namely, the anarchy proposed by Hall, Murayama, and Weiner. In this approach, we scan the O(1) coefficients randomly. We argue that this is the correct approach when the fundamental theory is sufficiently complicated. Assuming that there is no physical distinction among three generations of neutrinos, the probability distributions in Maki-Nakagawa-Sakata mixing angles can be predicted independent of the choice of the measure. This is because the mixing angles are distributed according to the Haar measure of the Lie groups whose elements diagonalize the mass matrices. The near-maximal mixings, as observed in the atmospheric neutrino data and as required in the large mixing angle solution to the solar neutrino problem, are highly probable. A small hierarchy between Δm² for the atmospheric and the solar neutrinos is obtained very easily; the complex seesaw case gives a hierarchy of a factor of 20 as the most probable one, even though this conclusion is more measure dependent. Ue3 has to be just below the current limit from the CHOOZ experiment. The CP-violating parameter sin δ is preferred to be maximal. We present a simple SU(5)-like extension of anarchy to the charged lepton and quark sectors that works well phenomenologically.

  7. Risk-oriented approach application at planning and organizing antiepidemic provision of mass events

    Directory of Open Access Journals (Sweden)

    D.V. Efremenko

    2017-03-01

    Mass events tend to become more and more dangerous for population health, as they cause various health risks, including infectious pathology risks. Our research goal was to work out scientifically grounded approaches to assessing and managing epidemiologic risks, and to analyze how they were applied during preparation for the Olympics-2014, the Games themselves, and other mass events held in 2014–2016. We assessed epidemiologic complication risks with the use of diagnostic test systems and a new technique that allows for the peculiarities of mass events. The technique is based on ranking infections into 3 potential danger categories according to criteria representing quantitative and qualitative predictive parameters (predictors). Application of the risk-oriented approach and multi-factor analysis allowed us to determine the maximum possible requirements for providing sanitary-epidemiologic welfare for each separate nosologic form. Enhancing our laboratory base with test systems for specific indication, in accordance with these calculations, enabled us, on the one hand, to secure the required preparations and, on the other hand, to avoid unnecessary expenditures. To facilitate decision-making during the Olympics-2014 we used an innovative product, namely a computer program based on a geoinformation system (GIS). It helped us to simplify and accelerate information exchange within the frameworks of intra- and interdepartmental interaction. A "dynamic epidemiologic threshold" was calculated daily for measles, chickenpox, acute enteric infections, and acute respiratory viral infections of various etiologies; if it was exceeded, or if an "epidemiologic spot" became possible for one or several nosologies, an automatic warning appeared in the GIS.
Planning prevention activities regarding feral herd infections and zoogenous extremely dangerous infections which were endemic
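
The "dynamic epidemiologic threshold" idea — a daily alert level computed from recent case counts — can be sketched as follows. The mean-plus-two-standard-deviations rule, the seven-day window, and the counts are illustrative assumptions; the abstract does not state the programme's actual formula.

```python
from statistics import mean, stdev

def dynamic_threshold(history, k=2.0):
    """Alert threshold: mean + k sample standard deviations of recent daily
    case counts. Window length and k are illustrative choices."""
    return mean(history) + k * stdev(history)

def check_alert(history, today):
    """True if today's count exceeds the dynamically computed threshold."""
    return today > dynamic_threshold(history)

past_week = [4, 6, 5, 3, 5, 4, 6]  # hypothetical daily counts for one nosology
```

In a GIS setting, such a check would run per nosology per reporting unit, with an exceedance triggering the automatic warning the abstract describes.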

  8. First-Principles Approach to Heat and Mass Transfer Effects in Model Catalyst Studies

    OpenAIRE

    Matera, Sebastian; Reuter, Karsten

    2009-01-01

    We assess heat and mass transfer limitations in in situ studies of model catalysts with a first-principles based multiscale modeling approach that integrates a detailed description of the surface reaction chemistry and the macro-scale flow structures. Using the CO oxidation at RuO2(110) as a prototypical example we demonstrate that factors like a suppressed heat conduction at the backside of the thin single-crystal, and the build-up of a product boundary layer above the flat-faced surface pla...

  9. Critical questions concerning a statistical approach to the hadron mass spectrum

    CERN Document Server

    Bassetto, A

    1973-01-01

    The authors study the problem of the hadron mass spectrum starting from an S-matrix which is explicitly considered as a unitary operator. Making use of the additional property that the connected part T/sub c/ is a compact operator, they discuss the approximation of T/sub c/ by resonances. The connection with statistics is made by using the Bernstein-Dashen-Ma formalism together with a bootstrap assumption. Although the approach remains open, it allows a critical discussion of subtle issues concerning both the Hagedorn and the dual theories. (9 refs).

  10. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
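
The distinction between the two objectives can be made concrete on a single ultimately periodic play. For a play that settles into a zero-sum cycle of edge weights, the mean payoff is the average weight per step, while the average energy is the long-run average of the running (accumulated) energy level. This is a minimal sketch of the definitions, not of the paper's algorithms.

```python
def mean_payoff(cycle):
    """Long-run average gain per step of a play repeating this cycle forever."""
    return sum(cycle) / len(cycle)

def average_energy(cycle, initial=0):
    """Long-run average of the accumulated energy along the repeated cycle.
    Well-defined only when the cycle's total weight is 0; otherwise the
    energy level drifts every period and the average diverges."""
    assert sum(cycle) == 0, "average energy diverges on a non-zero-sum cycle"
    level, levels = initial, []
    for w in cycle:
        level += w
        levels.append(level)
    return sum(levels) / len(levels)
```

The cycles `[2, -1, -1]` and `[1, -1]` both have mean payoff 0, yet different average energies (1 and 0.5), which is exactly the extra information the average-energy objective captures.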

  11. In silico approaches to study mass and energy flows in microbial consortia: a syntrophic case study

    Directory of Open Access Journals (Sweden)

    Mallette Natasha

    2009-12-01

    Background: Three methods were developed for the application of stoichiometry-based network analysis approaches, including elementary mode analysis, to the study of mass and energy flows in microbial communities. Each has distinct advantages and disadvantages suitable for analyzing systems with different degrees of complexity and a priori knowledge. These approaches were tested and compared using data from the thermophilic, phototrophic mat communities from Octopus and Mushroom Springs in Yellowstone National Park (USA). The models were based on three distinct microbial guilds: oxygenic phototrophs, filamentous anoxygenic phototrophs, and sulfate-reducing bacteria. Two phases, day and night, were modeled to account for differences in the sources of mass and energy and the routes available for their exchange. Results: The in silico models were used to explore fundamental questions in ecology, including the prediction of and explanation for measured relative abundances of primary producers in the mat, theoretical tradeoffs between overall productivity and the generation of toxic by-products, and the relative robustness of various guild interactions. Conclusion: The three modeling approaches represent a flexible toolbox for creating cellular metabolic networks to study microbial communities on scales ranging from cells to ecosystems. A comparison of the three methods highlights considerations for selecting the one most appropriate for a given microbial system. For instance, communities represented only by metagenomic data can be modeled using the pooled method, which analyzes a community's total metabolic potential without attempting to partition enzymes to different organisms. Systems with extensive a priori information on microbial guilds can be represented using the compartmentalized technique, employing distinct control volumes to separate guild-appropriate enzymes and metabolites. If the complexity of a compartmentalized network creates an

  12. A mass graph-based approach for the identification of modified proteoforms using top-down tandem mass spectra

    Energy Technology Data Exchange (ETDEWEB)

    Kou, Qiang; Wu, Si; Tolić, Nikola; Paša-Tolić, Ljiljana; Liu, Yunlong; Liu, Xiaowen

    2016-12-21

    Motivation: Although proteomics has rapidly developed in the past decade, researchers are still in the early stage of exploring the world of complex proteoforms, which are protein products with various primary structure alterations resulting from gene mutations, alternative splicing, post-translational modifications, and other biological processes. Proteoform identification is essential to mapping proteoforms to their biological functions as well as discovering novel proteoforms and new protein functions. Top-down mass spectrometry is the method of choice for identifying complex proteoforms because it provides a “bird’s eye view” of intact proteoforms. The combinatorial explosion of various alterations on a protein may result in billions of possible proteoforms, making proteoform identification a challenging computational problem. Results: We propose a new data structure, called the mass graph, for efficient representation of proteoforms and design mass graph alignment algorithms. We developed TopMG, a mass graph-based software tool for proteoform identification by top-down mass spectrometry. Experiments on top-down mass spectrometry data sets showed that TopMG outperformed existing methods in identifying complex proteoforms.
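
The mass graph idea can be illustrated on a toy scale: edges carry residue or modification masses, alternative edges between the same pair of nodes encode alternative proteoforms, and identification amounts to finding a path whose total mass matches the precursor mass within a tolerance. The graph, residue masses, and matching rule below are illustrative only, not TopMG's actual data structures or algorithms.

```python
def path_masses(graph, start, end):
    """Enumerate total masses of all start->end paths in a toy 'mass graph'
    (a DAG whose edges carry residue or modification masses)."""
    results = []
    def dfs(node, acc):
        if node == end:
            results.append(round(acc, 4))
            return
        for nxt, mass in graph.get(node, []):
            dfs(nxt, acc + mass)
    dfs(start, 0.0)
    return results

# Each edge: (next node, mass in Da). The branch encodes an optional
# phosphorylation (+79.9663 Da) on the second residue, so two proteoforms
# share one compact graph instead of being enumerated separately.
graph = {
    0: [(1, 71.0371)],                          # Ala
    1: [(2, 87.0320), (2, 87.0320 + 79.9663)],  # Ser or phospho-Ser
    2: [(3, 99.0684)],                          # Val
}

def matches(graph, start, end, precursor, tol=0.01):
    """Paths whose total mass matches the observed precursor mass."""
    return [m for m in path_masses(graph, start, end) if abs(m - precursor) <= tol]
```

The combinatorial-explosion point in the abstract is visible even here: k independent optional modifications would give 2^k proteoforms but only k extra edges in the graph.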

  13. A High Throughput Ambient Mass Spectrometric Approach to Species Identification and Classification from Chemical Fingerprint Signatures

    Science.gov (United States)

    Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.

    2015-01-01

    A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes are instantaneously detected. The result is identification and species-level classification based on the entire DART-MS spectrum. Here, we illustrate how the method can be used to: (1) distinguish between endangered woods regulated by the Convention for the International Trade of Endangered Flora and Fauna (CITES) treaty; (2) assess the origin and by extension the properties of biodiesel feedstocks; (3) determine insect species from analysis of puparial casings; (4) distinguish between psychoactive plant products; and (5) differentiate between Eucalyptus species. An advantage of the hierarchical clustering approach to processing of the DART-MS derived fingerprint is that it shows both similarities and differences between species based on their chemotypes. Furthermore, full knowledge of the identities of the constituents contained within the small molecule profile of analyzed samples is not required. PMID:26156000
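
The clustering step can be sketched as agglomerative clustering of spectral fingerprint vectors under a cosine distance. This toy single-linkage implementation and the four synthetic "spectra" are illustrative assumptions; the study used standard unsupervised hierarchical clustering software, not this code.

```python
import numpy as np

def cosine_dist(a, b):
    """1 - cosine similarity between two fingerprint vectors."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def single_linkage(fingerprints, n_clusters):
    """Tiny agglomerative (single-linkage) clustering: repeatedly merge the
    two clusters with the smallest minimum pairwise distance."""
    clusters = [[i] for i in range(len(fingerprints))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(cosine_dist(fingerprints[a], fingerprints[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Four toy spectra over 5 m/z bins: two 'species A'-like, two 'species B'-like.
spectra = np.array([
    [10, 0, 5, 0, 1],
    [9,  1, 6, 0, 0],
    [0,  8, 0, 7, 2],
    [1,  9, 0, 6, 3],
], float)
groups = single_linkage(spectra, 2)
```

Because the whole spectrum is the feature vector, no peak assignment is needed — which is the "full knowledge of constituents is not required" point in the abstract.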

  14. Coupled sulfur isotopic and chemical mass transfer modeling: Approach and application to dynamic hydrothermal processes

    International Nuclear Information System (INIS)

    Janecky, D.R.

    1988-01-01

    A computational modeling code (EQPS↔S) has been developed to examine sulfur isotopic distribution pathways coupled with calculations of chemical mass transfer pathways. A post-processor approach to EQ6 calculations was chosen so that a variety of isotopic pathways could be examined for each reaction pathway. Two types of major bounding conditions were implemented: (1) equilibrium isotopic exchange between sulfate and sulfide species, or exchange only accompanying chemical reduction and oxidation events; and (2) existence or lack of isotopic exchange between solution species and precipitated minerals, parallel to the open- and closed-system formulations of chemical mass transfer modeling codes. All of the chemical data necessary to explicitly calculate isotopic distribution pathways are generated by most mass transfer modeling codes and can be input to the EQPS code. Routines are built in to directly handle EQ6 tabular files. Chemical reaction models of seafloor hydrothermal vent processes and accompanying sulfur isotopic distribution pathways illustrate the capabilities of coupling EQPS↔S with EQ6 calculations, including the extent of differences that can exist due to the isotopic bounding condition assumptions described above. 11 refs., 2 figs.
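
The first bounding condition — equilibrium isotopic exchange between sulfate and sulfide — can be written as a closed-system mass balance: δ_total = f·δ_SO4 + (1−f)·δ_H2S together with δ_SO4 − δ_H2S = ε. The sketch below solves this pair of equations; the numeric values are illustrative, not taken from the paper.

```python
def equilibrium_deltas(delta_total, f_sulfate, epsilon):
    """Closed-system sulfur isotope mass balance with equilibrium exchange:
        delta_total = f*delta_SO4 + (1 - f)*delta_H2S
        delta_SO4 - delta_H2S = epsilon   (equilibrium fractionation, permil)
    Returns (delta_SO4, delta_H2S)."""
    delta_sulfide = delta_total - f_sulfate * epsilon
    delta_sulfate = delta_sulfide + epsilon
    return delta_sulfate, delta_sulfide
```

With δ_total = 21‰, half the sulfur as sulfate, and ε = 30‰, the sulfate and sulfide pools sit at 36‰ and 6‰ respectively; under the alternative "exchange only on redox events" condition these values would instead depend on the reaction path.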

  15. Charged Higgs boson mass of the MSSM in the Feynman diagrammatic approach

    CERN Document Server

    Frank, M.; Hahn, T.; Heinemeyer, S.; Hollik, W.; Rzehak, H.; Weiglein, G.

    2013-01-01

    The interpretation of the Higgs signal at $\sim 126$ GeV within the Minimal Supersymmetric Standard Model (MSSM) depends crucially on the predicted properties of the other Higgs states of the model, such as the mass of the charged Higgs boson, $M_{H^{\pm}}$. This mass is calculated in the Feynman-diagrammatic approach within the MSSM with real parameters. The result includes the complete one-loop contributions and the two-loop contributions of $O(\alpha_t\alpha_s)$. The one-loop contributions lead to sizable shifts in the $M_{H^{\pm}}$ prediction, reaching up to $\sim 8$ GeV for relatively small values of $M_A$. Even larger effects can occur depending on the sign and size of the $\mu$ parameter that enters the corrections affecting the relation between the bottom-quark mass and the bottom Yukawa coupling. The two-loop $O(\alpha_t\alpha_s)$ terms can shift $M_{H^{\pm}}$ by more than 2 GeV. The two-loop contributions amount to typically about 30% of the one-loop corrections for the examples that we have studied. These effect...
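
For orientation, the loop corrections discussed above are shifts relative to the tree-level MSSM relation $M_{H^{\pm}}^2 = M_A^2 + M_W^2$. A minimal numeric sketch of that baseline (the loop terms themselves are not reproduced here):

```python
import math

def m_charged_higgs_tree(m_A, m_W=80.4):
    """Tree-level MSSM charged Higgs mass, M_H+- = sqrt(M_A^2 + M_W^2), in GeV.
    The one- and two-loop corrections computed in the paper shift this value
    by up to several GeV and are not included in this sketch."""
    return math.hypot(m_A, m_W)
```

For example, $M_A = 150$ GeV gives a tree-level $M_{H^{\pm}}$ of about 170 GeV, so a one-loop shift of $\sim 8$ GeV is a substantial correction at small $M_A$.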

  16. High-temperature thermodynamics by laser-vaporization mass spectrometry: An approach based on statistical mechanics

    International Nuclear Information System (INIS)

    Belloni, Fabio; Manara, Dario; Pflieger, Rachel; Colle, Jean-Yves; Rondinella, Vincenzo V.

    2008-01-01

    The problem of correlation between the temperature of the target surface and the mass-spectrometer signal in laser-vaporization mass spectrometry has been analyzed theoretically. An approach based on statistical mechanics has been applied in order to describe the transient vaporization into vacuum of molecules effused from the area of the target surface struck by a laser pulse of moderate power density and time duration of some tens of ms (Langmuir vaporization). In particular, an expression for the intensity of the output signal of the mass spectrometer, I(l,t), has been derived as a function of the detection time, t, and of the distance, l, of the ionizing chamber of the spectrometer from the target. A simple numerical method for the calculation of I(l,t) according to the time profile of the target temperature is also provided. By fitting experimental I(t) values with the theoretical expression one can retrieve thermodynamic quantities involved in the sublimation/evaporation process of the molecular species analyzed, such as enthalpy and equilibrium vapor pressure (or, alternatively, vaporization coefficient). As an illustration, this fitting was performed on experimental measurements of pyrolytic graphite sublimation in the temperature range 3200-3700 K. The analysis developed will be useful for the interpretation of experimental datasets in order to retrieve high-temperature thermodynamic data, especially on high-melting materials. Research in this domain is being launched for nuclear materials, particularly for Generation IV advanced fuels
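
The kind of thermodynamic retrieval described — fitting a measured signal to extract a sublimation enthalpy — reduces, for equilibrium vapor pressure data, to a linear fit of ln p against 1/T (Clausius–Clapeyron form, ln p = −ΔH/(RT) + c). The fitting routine and the synthetic data below are a generic sketch, not the authors' code or their measured values.

```python
import math

def fit_enthalpy(temps_K, pressures_Pa):
    """Least-squares fit of ln p = -dH/(R*T) + c; returns dH in J/mol."""
    R = 8.314  # gas constant, J/(mol K)
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(p) for p in pressures_Pa]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope * R

# Synthetic data generated with dH = 700 kJ/mol (illustrative order of
# magnitude for carbon sublimation) so the fit can be checked.
dH_true = 7.0e5
temps = [3200.0, 3400.0, 3600.0]
pressures = [math.exp(30.0 - dH_true / (8.314 * t)) for t in temps]
```

In the paper's transient (Langmuir) setting the fitted quantity is the full I(l, t) expression rather than an equilibrium pressure, but the enthalpy enters through the same exponential temperature dependence.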

  17. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Arithmetic mean of objects in a space need not lie in the space [Fréchet, 1948]. Example: finding the mean of right-angled triangles, S = {(x, y, z) ∈ ℝ₊³ : x² + y² = z²}, identified with the matrices [[z, x − iy], [x + iy, z]] with x, y, z > 0 and z² = x² + y². On this surface of right triangles, the arithmetic mean does not lie on S. Tanvi Jain. Averaging operations on matrices ...

  18. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... flow at each voxel of a brain scan. • Elasticity: 6 × 6 positive-definite matrices model stress tensors. • Machine learning: n × n positive-definite matrices occur as kernel matrices. ... then the expected extension of the geometric mean, A^{1/2}B^{1/2}, is not even self-adjoint, let alone positive definite. Tanvi Jain. Averaging operations on matrices ...
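
The failure of the naive product and the standard repair can both be shown numerically: $A^{1/2}B^{1/2}$ is generally not symmetric, whereas the Riemannian geometric mean $A \# B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}$ is symmetric positive definite. This is a generic sketch of that well-known construction, not code from the talk.

```python
import numpy as np

def sqrtm_pd(a):
    """Principal square root of a symmetric positive-definite matrix."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(w)) @ v.T

def geometric_mean(a, b):
    """Riemannian geometric mean A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2).
    Unlike the naive A^(1/2) B^(1/2), the result is symmetric positive definite."""
    ra = sqrtm_pd(a)
    ra_inv = np.linalg.inv(ra)
    return ra @ sqrtm_pd(ra_inv @ b @ ra_inv) @ ra

a = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([[1.0, 0.5], [0.5, 1.0]])
g = geometric_mean(a, b)
naive = sqrtm_pd(a) @ sqrtm_pd(b)   # not symmetric for non-commuting a, b
```

The mean also satisfies the sanity check A # A = A, so it behaves as an averaging operation on the positive-definite cone.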

  19. A CAD Approach to Developing Mass Distribution and Composition Models for Spaceflight Radiation Risk Analyses

    Science.gov (United States)

    Zapp, E.; Shelfer, T.; Semones, E.; Johnson, A.; Weyland, M.; Golightly, M.; Smith, G.; Dardano, C.

    For roughly the past three decades, combinatorial geometries have been the predominant mode for the development of mass distribution models associated with the estimation of radiological risk for manned space flight. Examples of these are the MEVDP (Modified Elemental Volume Dose Program) vehicle representation of Liley and Hamilton, and the quadratic functional representation of the CAM/CAF (Computerized Anatomical Male/Female) human body models as modified by Billings and Yucker. These geometries have the advantageous characteristics of being simple for a familiarized user to maintain, and because of the relative lack of any operating system or run-time library dependence, they are also easy to transfer from one computing platform to another. Unfortunately, they are also limited in the amount of modeling detail possible, owing to the abstract geometric representation. In addition, combinatorial representations are known to be error-prone in practice, since there is no convenient method for error identification (i.e., overlap, etc.), and extensive calculation and/or manual comparison is often necessary to demonstrate that the geometry is adequately represented. We present an alternate approach linking materials-specific, CAD-based mass models directly to geometric analysis tools, requiring no approximation with respect to materials, nor any meshing (i.e., tessellation) of the representative geometry. A new approach to ray tracing is presented which makes use of the fundamentals of the CAD representation to perform geometric analysis directly on the NURBS (Non-Uniform Rational B-Spline) surfaces themselves. In this way we achieve a framework for the rapid, precise development and analysis of materials-specific mass distribution models.

  20. A mass balance approach to investigate arsenic cycling in a petroleum plume.

    Science.gov (United States)

    Ziegler, Brady A; Schreiber, Madeline E; Cozzarelli, Isabelle M; Crystal Ng, G-H

    2017-12-01

    Natural attenuation of organic contaminants in groundwater can give rise to a series of complex biogeochemical reactions that release secondary contaminants to groundwater. In a crude oil contaminated aquifer, biodegradation of petroleum hydrocarbons is coupled with the reduction of ferric iron (Fe(III)) hydroxides in aquifer sediments. As a result, naturally occurring arsenic (As) adsorbed to Fe(III) hydroxides in the aquifer sediment is mobilized from sediment into groundwater. However, Fe(III) in sediment of other zones of the aquifer has the capacity to attenuate dissolved As via resorption. In order to better evaluate how long-term biodegradation coupled with Fe-reduction and As mobilization can redistribute As mass in a contaminated aquifer, we quantified mass partitioning of Fe and As in the aquifer based on field observation data. Results show that Fe and As are spatially correlated in both groundwater and aquifer sediments. Mass partitioning calculations demonstrate that 99.9% of Fe and 99.5% of As are associated with aquifer sediment. The sediments act as both sources and sinks for As, depending on the redox conditions in the aquifer. Calculations reveal that at least 78% of the original As in sediment near the oil has been mobilized into groundwater over the 35-year lifespan of the plume. However, the calculations also show that only a small percentage of As (∼0.5%) remains in groundwater, due to resorption onto sediment. At the leading edge of the plume, where groundwater is suboxic, sediments sequester Fe and As, causing As to accumulate to concentrations 5.6 times greater than background concentrations. Current As sinks can serve as future sources of As as the plume evolves over time. The mass balance approach used in this study can be applied to As cycling in other aquifers where groundwater As results from biodegradation of an organic carbon point source coupled with Fe reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.
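
The mass-partitioning calculation behind numbers like "99.5% of As is associated with sediment" is simple arithmetic per unit volume of aquifer: sediment-bound mass scales with bulk density, dissolved mass with porosity. The parameter values below are illustrative, not the study's field numbers.

```python
def mass_partition(c_water_ug_L, c_sed_mg_kg, porosity, bulk_density_kg_L):
    """Fractions of total arsenic mass in sediment vs. groundwater for one
    litre of bulk aquifer. Inputs: dissolved concentration (ug/L of water),
    sediment concentration (mg/kg of sediment), porosity (-), and sediment
    bulk density (kg of sediment per litre of bulk aquifer)."""
    m_water = c_water_ug_L * porosity                 # ug As per L aquifer
    m_sed = c_sed_mg_kg * 1000.0 * bulk_density_kg_L  # ug As per L aquifer
    total = m_water + m_sed
    return m_sed / total, m_water / total

# Hypothetical values: 100 ug/L dissolved As, 2 mg/kg sediment As,
# porosity 0.3, bulk density 1.8 kg/L.
f_sed, f_water = mass_partition(100.0, 2.0, 0.3, 1.8)
```

Even with a high dissolved concentration, the sediment reservoir dominates the total mass, which is why sediment can act as both the source and the long-term sink in this system.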

  1. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation, and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. Model performance is evaluated using historical data for the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracy of the models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination benchmark. The results show that optimally combined densities are comparable to the best individual models.
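
The predictive-likelihood weighting scheme amounts to giving each model a weight proportional to its past predictive likelihood, i.e. a softmax over log predictive scores. This is a generic sketch of that idea, not the paper's exact implementation; the log-likelihood values are hypothetical.

```python
import math

def predictive_likelihood_weights(log_pred_likelihoods):
    """Combination weights proportional to exp(log predictive likelihood).
    Subtracting the maximum first keeps the exponentials numerically stable."""
    m = max(log_pred_likelihoods)
    w = [math.exp(l - m) for l in log_pred_likelihoods]
    s = sum(w)
    return [x / s for x in w]

def combine_forecasts(point_forecasts, weights):
    """Weighted combination of the models' forecasts."""
    return sum(f * w for f, w in zip(point_forecasts, weights))
```

A model whose past log predictive likelihood is higher by 1 nat receives e ≈ 2.72 times the weight; the equal-weight benchmark corresponds to ignoring the scores entirely.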

  2. A Simple Approach To Mass Movement Hazard Evaluation In Developing Countries: Example From NW Nicaragua.

    Science.gov (United States)

    Pallàs, R.; Vilaplana, J. M.; Guinau, M.; Falgàs, E.; Alemany, X.; Muñoz, A.

    Current trends in landslide hazard assessment involve a complex combination of methodologies. In spite of being the most vulnerable and in need of mitigation policies, developing countries lack the general socioeconomic structures and technical facilities for such complex approaches to be implemented. The main difficulties commonly encountered in those countries are the scarcity of previous topographic, geological, geotechnical, historical and instrumental data, and the unavailability of aerial-photo coverages at suitable times and scales. In consequence, there is a strong need for developing simple methodologies of landslide hazard assessment and mitigation, which can be readily tested and implemented by developing countries themselves. To explore this line of research, we selected an area of about 20 square km severely hit by Hurricane Mitch, in the Departamento de Chinandega (NW Nicaragua). The abundant mass movements (mainly debris flows) produced during the Mitch rainfall event were investigated through aerial photographs at 1:60,000 scale (flight of December 1998), while much less conspicuous pre-Mitch landslides were detected on 1:40,000 aerial photographs (1996 flight). We mapped over one hundred mass movements at 1:10,000 scale in the field, and recorded information concerning regolith composition and thickness, mass movement dimensions and volumes, failure angle (around 22 degrees), and land use for each movement. We realised that, due to the extreme fragility of anthropic structures found in the area, any mass movement is highly destructive whatever its magnitude. On the other hand, we found an almost complete lack of data concerning frequency of landsliding. Thus, the concepts of magnitude and frequency commonly used for hazard evaluation purposes were of little help in this case. With these considerations in mind, we found that hazard evaluation and zoning could be approached by combining two main concepts: (1) the observed degree of slope

  3. Endoscope-assisted frenotomy approach to median upper neck masses: clinical outcomes and safety (from a phase II clinical trial).

    Science.gov (United States)

    Woo, Seung Hoon; Jeong, Han-Sin; Kim, Jin Pyeong; Park, Jung Je; Baek, Chung-Hwan

    2014-07-01

    An endoscope-assisted frenotomy approach (EFA) to resection of the median upper neck mass has been introduced to clinical practice. However, its technical feasibility, indications, and safety have not been fully studied. Here, we report the results of a prospective phase II clinical trial to evaluate the clinical outcomes. Twenty patients were enrolled in this trial. The masses were divided into 3 subtypes. We implemented EFA to remove the masses after receiving informed patient consent. We evaluated the clinical outcomes and complications related to this procedure for more than a 2-year period. EFA successfully removed the masses in all cases without any injuries to adjacent nerves or ducts. During the more than 2-year follow-up period, recurrence or revision surgeries were not required. EFA can be a very effective and safe approach for median upper neck masses, and can also lead to excellent cosmetic and functional results. Copyright © 2013 Wiley Periodicals, Inc.

  4. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposure from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation; it includes the radiation we get from plants, animals, and our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  5. Simple, empirical approach to predict neutron capture cross sections from nuclear masses

    Science.gov (United States)

    Couture, A.; Casten, R. F.; Cakirli, R. B.

    2017-12-01

    Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power, with small uncertainties, for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple of hundred keV. We show that regional cross sections in medium- and heavy-mass nuclei are compactly correlated with the two-neutron separation energy. These correlations are easily amenable to predicting unknown cross sections, often converting the usual extrapolations into more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of
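    The regional correlation described in this record can be illustrated, under the assumption of a log-linear relation between capture cross section and two-neutron separation energy, with a minimal least-squares sketch. The (S2n, σ) pairs below are invented for demonstration only; this is not the authors' dataset or fitting procedure:

    ```python
    import math

    # Illustrative (synthetic) pairs of two-neutron separation energy S2n (MeV)
    # and capture cross section sigma (mb) for a region of similar-structure
    # nuclei. All numbers are invented for demonstration purposes.
    known = [(12.0, 520.0), (12.8, 410.0), (13.5, 330.0), (14.1, 270.0), (15.0, 200.0)]

    # Fit log(sigma) = a + b * S2n by ordinary least squares.
    xs = [s2n for s2n, _ in known]
    ys = [math.log(sig) for _, sig in known]
    n = len(known)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar

    def predict_sigma(s2n):
        """Cross section (mb) predicted from the regional log-linear correlation."""
        return math.exp(a + b * s2n)

    # Interpolate for a nucleus whose mass (hence S2n) is known but whose
    # cross section has not been measured.
    sigma_est = predict_sigma(13.0)
    ```

    Because the queried S2n value lies inside the range of the measured neighbours, this is an interpolation rather than an extrapolation, which is the source of the improved reliability the record describes.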

  6. Improved EDELWEISS-III sensitivity for low-mass WIMPs using a profile likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    Hehn, L. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Armengaud, E.; Boissiere, T. de; Gros, M.; Navick, X.F.; Nones, C.; Paul, B. [CEA Saclay, DSM/IRFU, Gif-sur-Yvette Cedex (France); Arnaud, Q. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Queen's University, Kingston (Canada); Augier, C.; Billard, J.; Cazes, A.; Charlieux, F.; Jesus, M. de; Gascon, J.; Juillard, A.; Queguiner, E.; Sanglard, V.; Vagneron, L. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Benoit, A.; Camus, P. [Institut Neel, CNRS/UJF, Grenoble (France); Berge, L.; Chapellier, M.; Dumoulin, L.; Giuliani, A.; Le-Sueur, H.; Marnieros, S.; Olivieri, E.; Poda, D. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Bluemer, J. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Broniatowski, A. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Eitel, K.; Kozlov, V.; Siebenborn, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Foerster, N.; Heuermann, G.; Scorza, S. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Jin, Y. [Laboratoire de Photonique et de Nanostructures, CNRS, Route de Nozay, Marcoussis (France); Kefelian, C. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Kleifges, M.; Tcherniakhovski, D.; Weber, M.
[Karlsruher Institut fuer Technologie, Institut fuer Prozessdatenverarbeitung und Elektronik, Karlsruhe (Germany); Kraus, H. [University of Oxford, Department of Physics, Oxford (United Kingdom); Kudryavtsev, V.A. [University of Sheffield, Department of Physics and Astronomy, Sheffield (United Kingdom); Pari, P. [CEA Saclay, DSM/IRAMIS, Gif-sur-Yvette (France); Piro, M.C. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Rensselaer Polytechnic Institute, Troy, NY (United States); Rozov, S.; Yakushev, E. [JINR, Laboratory of Nuclear Problems, Dubna, Moscow Region (Russian Federation); Schmidt, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2016-10-15

    We report on a dark matter search for a Weakly Interacting Massive Particle (WIMP) in the mass range m_χ ∈ [4, 30] GeV/c² with the EDELWEISS-III experiment. A 2D profile likelihood analysis is performed on data from eight selected detectors with the lowest energy thresholds, leading to a combined fiducial exposure of 496 kg-days. External backgrounds from γ- and β-radiation, recoils from ²⁰⁶Pb and neutrons, as well as detector-intrinsic backgrounds were modelled from data outside the region of interest and constrained in the analysis. The basic data selection and most of the background models are the same as those used in a previously published analysis based on boosted decision trees (BDT) [1]. For the likelihood approach applied in the analysis presented here, a larger signal efficiency and a subtraction of the expected background lead to a higher sensitivity, especially for the lowest WIMP masses probed. No statistically significant signal was found, and upper limits on the spin-independent WIMP-nucleon scattering cross section can be set with a hypothesis test based on the profile likelihood test statistic. The 90% C.L. exclusion limit set for WIMPs with m_χ = 4 GeV/c² is 1.6 × 10⁻³⁹ cm², an improvement of a factor of seven with respect to the BDT-based analysis. For WIMP masses above 15 GeV/c² the exclusion limits found with both analyses are in good agreement. (orig.)

  7. Defining elastic fiber interactions by molecular fishing: an affinity purification and mass spectrometry approach.

    Science.gov (United States)

    Cain, Stuart A; McGovern, Amanda; Small, Elaine; Ward, Lyle J; Baldock, Clair; Shuttleworth, Adrian; Kielty, Cay M

    2009-12-01

    Deciphering interacting networks of the extracellular matrix is a major challenge. We describe an affinity purification and mass spectrometry strategy that has provided new insights into the molecular interactions of elastic fibers, essential extracellular assemblies that provide elastic recoil in dynamic tissues. Using cell culture models, we defined primary and secondary elastic fiber interaction networks by identifying molecular interactions with the elastic fiber molecules fibrillin-1, MAGP-1, fibulin-5, and lysyl oxidase. The sensitivity and validity of our method was confirmed by identification of known interactions with the bait proteins. Our study revealed novel extracellular protein interactions with elastic fiber molecules and delineated secondary interacting networks with fibronectin and heparan sulfate-associated molecules. This strategy is a novel approach to define the macromolecular interactions that sustain complex extracellular matrix assemblies and to gain insights into how they are integrated into their surrounding matrix.

  8. [Historical and biological approaches to the study of Modern Age French plague mass burials].

    Science.gov (United States)

    Bianuccii, Raffaella; Tzortzis, Stéfan; Fornaciari, Gino; Signoli, Michel

    2010-01-01

    The "Black Death" and subsequent epidemics from 1346 to the early 18th century spread from the Caspian Sea all over Europe, six hundred years after the outbreak of the Justinian plague (541-767 AD). Plague has been one of the most devastating infectious diseases to affect humankind, historically causing approximately 200 million deaths. Here we describe the different approaches adopted in the study of several French putative plague mass burials dating to the Modern Age (16th-18th centuries). By combining historical, archaeological and paleobiological data, ample knowledge is gained both of the causes that favoured the spread of the Medieval plague in cities, towns and small villages, and of the plague-driven modification of customary funerary practices in urban and rural areas.

  9. An approach to a multi walled carbon nanotube based mass sensor

    DEFF Research Database (Denmark)

    Mateiu, Ramona Valentina; Davis, Zachary James; Madsen, Dorte Nørgaard

    2004-01-01

    We propose an approach to a nanoscale mass sensor based on a gold electrode structure, on which a multi-walled carbon nanotube (MWCNT) bridge can be placed and soldered. The structure comprises three electrodes with a width of 2 or 4 μm. Two outer electrodes with a length of 10 or 15 μm serve as source and drain electrodes for the MWCNT bridge, whereas an inner electrode with a length of 8 or 13 μm is for electrostatic excitation of the CNT. Some structures have an extra pair of outer electrodes, which may deflect the inner electrodes and thereby be used for stretching or compressing the bridging nanotube. The free-standing MWCNTs were fabricated by chemical vapour deposition of Fe(II) phthalocyanine. A nanomanipulator with an x-y-z translation stage was used for placing the MWCNTs across the source-drain electrodes. The nanotubes were soldered onto the substrate by electron beam...

  10. Mass Spectrometry-Based Approaches to Understand the Molecular Basis of Memory

    Science.gov (United States)

    Pontes, Arthur H.; de Sousa, Marcelo V.

    2016-01-01

    The central nervous system is responsible for an array of cognitive functions such as memory, learning, language, and attention. These processes tend to take place in distinct brain regions; yet, they need to be integrated to give rise to adaptive or meaningful behavior. Since cognitive processes result from underlying cellular and molecular changes, genomics and transcriptomics assays have been applied to human and animal models to understand such events. Nevertheless, genes and RNAs are not the end products of most biological functions. In order to gain further insights toward the understanding of brain processes, the field of proteomics has been of increasing importance in the past years. Advancements in liquid chromatography-tandem mass spectrometry (LC-MS/MS) have enabled the identification and quantification of thousands of proteins with high accuracy and sensitivity, fostering a revolution in the neurosciences. Herein, we review the molecular bases of explicit memory in the hippocampus. We outline the principles of mass spectrometry (MS)-based proteomics, highlighting the use of this analytical tool to study memory formation. In addition, we discuss MS-based targeted approaches as the future of protein analysis. PMID:27790611

  11. Promoting oral cancer awareness and early detection using a mass media approach.

    Science.gov (United States)

    Saleh, Amyza; Yang, Yi-Hsin; Wan Abd Ghani, Wan Maria Nabillah; Abdullah, Norlida; Doss, Jennifer Geraldine; Navonil, Roy; Abdul Rahman, Zainal Ariff; Ismail, Siti Mazlipah; Talib, Norain Abu; Zain, Rosnah Binti; Cheong, Sok Ching

    2012-01-01

    Less than 50% of oral cancer cases are diagnosed at early stages of the disease, in part due to poor awareness and lack of knowledge of the signs and symptoms of oral cancer. This study sought to measure the baseline awareness of oral cancer in Malaysia and aimed to increase public awareness and knowledge of oral cancer using a mass media campaign. Baseline awareness and the impact of the campaign were measured using self-administered questionnaires sent via email to individuals. The campaign was aired on two national television channels and its reach was monitored through an independent programme monitoring system. 78.2% of respondents had heard of oral cancer, and this proportion increased significantly after the campaign. However, the ability to recognize signs and symptoms remained unchanged. We found that the level of awareness differed between the distinct ethnic subgroups and that the reach of the campaign was not uniform across all ethnicities. This substantial study to measure oral cancer awareness in Malaysia provides important baseline data for the planning of public health policies. Despite encouraging evidence that a mass media campaign could increase awareness of oral cancer, further research is required to address acceptability, comprehensiveness and effectiveness. Furthermore, different campaign approaches may be required for specific ethnic groups in a multi-ethnic country such as Malaysia.

  12. Mass Spectrometry-based Approaches to Understand the Molecular Basis of Memory

    Directory of Open Access Journals (Sweden)

    Arthur Henriques Pontes

    2016-10-01

    Full Text Available The central nervous system is responsible for an array of cognitive functions such as memory, learning, language and attention. These processes tend to take place in distinct brain regions; yet, they need to be integrated to give rise to adaptive or meaningful behavior. Since cognitive processes result from underlying cellular and molecular changes, genomics and transcriptomics assays have been applied to human and animal models to understand such events. Nevertheless, genes and RNAs are not the end products of most biological functions. In order to gain further insights toward the understanding of brain processes, the field of proteomics has been of increasing importance in the past years. Advancements in liquid chromatography-tandem mass spectrometry (LC-MS/MS) have enabled the identification and quantification of thousands of proteins with high accuracy and sensitivity, fostering a revolution in the neurosciences. Herein, we review the molecular bases of explicit memory in the hippocampus. We outline the principles of mass spectrometry (MS)-based proteomics, highlighting the use of this analytical tool to study memory formation. In addition, we discuss MS-based targeted approaches as the future of protein analysis.

  13. Mass Spectrometry-based Approaches to Understand the Molecular Basis of Memory

    Science.gov (United States)

    Pontes, Arthur; de Sousa, Marcelo

    2016-10-01

    The central nervous system is responsible for an array of cognitive functions such as memory, learning, language and attention. These processes tend to take place in distinct brain regions; yet, they need to be integrated to give rise to adaptive or meaningful behavior. Since cognitive processes result from underlying cellular and molecular changes, genomics and transcriptomics assays have been applied to human and animal models to understand such events. Nevertheless, genes and RNAs are not the end products of most biological functions. In order to gain further insights toward the understanding of brain processes, the field of proteomics has been of increasing importance in the past years. Advancements in liquid chromatography-tandem mass spectrometry (LC-MS/MS) have enabled the identification and quantification of thousands of proteins with high accuracy and sensitivity, fostering a revolution in the neurosciences. Herein, we review the molecular bases of explicit memory in the hippocampus. We outline the principles of mass spectrometry (MS)-based proteomics, highlighting the use of this analytical tool to study memory formation. In addition, we discuss MS-based targeted approaches as the future of protein analysis.

  14. A coupled DEM-DFN approach to rock mass strength characterization

    Science.gov (United States)

    Harthong, Barthelemy; Scholtes, Luc; Donze, Frederic

    2013-04-01

    An enhanced version of the discrete element method (DEM) has been specifically developed for the analysis of fractured rock masses [Scholtes L, Donze F, 2012]. In addition to the discrete representation of the intact medium, which enables the description of the localized stress-induced damage caused by heterogeneities inherent to rocks, structural defects can be explicitly taken into account in the modeling to represent pre-existing fractures or discontinuities of size typically larger than the discrete element size. From laboratory-scale simulations to slope stability case studies, the capability of this approach to simulate the progressive failure mechanisms occurring in jointed rock is assessed on the basis of referenced experiments and in situ observations. For instance, the challenging wing crack extension, typical of brittle material fracturing, can be successfully reproduced under both compressive and shear loading paths, as a result of the progressive coalescence of micro-cracks induced by stress concentration at the tips of pre-existing fractures. In this study, the dedicated DEM is coupled to a discrete fracture network (DFN) model to assess the influence of DFN properties on the mechanical behavior of fractured rock masses where progressive failure can occur. The DFN model assumes the distribution of fracture barycentres to be fractal and the distribution of fracture sizes to follow a power-law distribution [Davy P, Le Goc P, Darcel C, Bour O, de Dreuzy JR, Munier R, 2010]. The proposed DEM/DFN model is used to characterize the influence of clustering and size distribution of pre-existing fractures on the strength of fractured rock masses. The results show that the mechanical behaviour of fractured rock masses is mainly dependent on the fracture intensity. However, for a given fracture intensity, the strength can exhibit a 50 per cent variability depending on the size distribution of the pre-existing fractures. This difference can be

  15. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and to detect discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With data of this dimensionality and sample size, the risk of over-fitting and selection bias is pervasive. Therefore the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools which can be applied to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces for high-resolution spectra derived from analysis of serum, and then use support vector machines for classification. In particular we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve on previously known results on the problem with the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.
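    A minimal sketch of the scale-space idea behind such unsupervised peak selection (not the authors' MATLAB pipeline): smoothing a spectrum with progressively wider Gaussian kernels strongly attenuates one-bin noise spikes while broad, genuine peaks keep most of their height, so a simple amplitude cut at a coarse scale separates the two. The toy spectrum, peak positions, and threshold below are invented for illustration:

    ```python
    import math

    def smooth(signal, sigma):
        """Convolve with a truncated, normalized Gaussian kernel (edges clamped)."""
        radius = int(3 * sigma)
        kernel = [math.exp(-(k * k) / (2.0 * sigma * sigma)) for k in range(-radius, radius + 1)]
        norm = sum(kernel)
        kernel = [w / norm for w in kernel]
        n = len(signal)
        out = []
        for i in range(n):
            acc = 0.0
            for k, w in enumerate(kernel):
                j = min(max(i + k - radius, 0), n - 1)  # clamp at the borders
                acc += w * signal[j]
            out.append(acc)
        return out

    def local_maxima(signal):
        """Indices of strict local maxima."""
        return [i for i in range(1, len(signal) - 1)
                if signal[i - 1] < signal[i] > signal[i + 1]]

    # Toy spectrum: two genuine broad peaks at bins 50 and 140, plus a one-bin
    # noise spike at bin 90 that should not survive coarse scales.
    spectrum = [math.exp(-((i - 50) ** 2) / (2 * 8.0 ** 2))
                + 0.8 * math.exp(-((i - 140) ** 2) / (2 * 10.0 ** 2))
                for i in range(200)]
    spectrum[90] += 0.3

    fine = local_maxima(smooth(spectrum, 1.0))       # spike still a maximum here
    coarse_signal = smooth(spectrum, 6.0)            # spike attenuated ~15x
    coarse = [i for i in local_maxima(coarse_signal) if coarse_signal[i] > 0.1]
    ```

    At the coarse scale only the two broad peaks pass the amplitude cut; a real pipeline would track peaks across many scales and feed the selected features to a classifier such as an SVM.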

  16. Tribocorrosion in pressurized high temperature water: a mass flow model based on the third body approach

    Energy Technology Data Exchange (ETDEWEB)

    Guadalupe Maldonado, S.

    2014-07-01

    Pressurized water reactors (PWR) used for power generation are operated at elevated temperatures (280-300 °C) and under high pressure (120-150 bar). In addition to these harsh environmental conditions, some components of the PWR assemblies are subject to mechanical loading (sliding, vibration and impacts) leading to undesirable and hardly controllable material degradation phenomena. In such situations wear is determined by the complex interplay (tribocorrosion) between mechanical, material and physical-chemical phenomena. Tribocorrosion in PWR conditions is at present little understood, and models need to be developed in order to predict component lifetime over several decades. The goal of this project, carried out in collaboration with the French company AREVA NP, is to develop a predictive model based on a mechanistic understanding of the tribocorrosion of specific PWR components (stainless steel control assemblies, stellite grippers). The approach taken here is to describe degradation in terms of electrochemical and mechanical material flows (the third-body concept of tribology) from the metal into the friction film (i.e. the oxidized film forming during rubbing on the metal surface) and from the friction film into the environment, instead of simple mass loss considerations. The project involves the establishment of mechanistic models describing the single flows, based on ad hoc tribocorrosion measurements carried out at low temperature. The overall behaviour at high temperature and pressure is investigated using a dedicated tribometer (Aurore) including electrochemical control of the contact during rubbing. Physical laws describing the individual flows according to defined mechanisms and as a function of defined physical parameters were identified based on the obtained experimental results and on literature data. The physical laws were converted into mass flow rates and solved as a differential equation system by considering the mass balance in compartments.

  17. MERRA Chem 3D IAU C-Grid Wind and Mass Flux, Time Average 3-Hourly (eta coord, 2/3x1/2L72) V5.2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3NVCHM or tavg3_3d_chm_Nv data product is the MERRA Data Assimilation System 3-dimensional chemistry on layers that is time averaged, 3D model...

  18. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Hogg, David W.; Foreman-Mackey, Daniel T. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Rix, Hans-Walter; Gouliermis, Dimitrios [Max Planck Institute for Astronomy, Koenigstuhl 17, D-69117 Heidelberg (Germany); Dolphin, Andrew E. [Raytheon Company, 1151 East Hermans Road, Tucson, AZ 85756 (United States); Lang, Dustin [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Bell, Eric F. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Gordon, Karl D.; Kalirai, Jason S. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Skillman, Evan D., E-mail: dweisz@astro.washington.edu [Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)

    2013-01-10

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M☉). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ∼3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the
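    As a simplified stand-in for the full probabilistic treatment in this record (which also models per-star mass uncertainties and completeness), the basic slope estimate for a pure power-law MF can be sketched with the standard maximum-likelihood estimator. The synthetic masses and Salpeter-like slope below are illustrative assumptions, not the paper's data:

    ```python
    import math
    import random

    def sample_power_law(alpha, m_min, n, seed=1):
        """Draw n masses from p(m) ∝ m^-alpha for m >= m_min (inverse transform)."""
        rng = random.Random(seed)
        return [m_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

    def mle_slope(masses, m_min):
        """Maximum-likelihood estimate of the power-law exponent alpha
        (Hill-type estimator), with its asymptotic 1-sigma uncertainty."""
        n = len(masses)
        s = sum(math.log(m / m_min) for m in masses)
        alpha_hat = 1.0 + n / s
        sigma = (alpha_hat - 1.0) / math.sqrt(n)
        return alpha_hat, sigma

    # Simulated cluster with a Salpeter-like slope alpha = 2.35 above 1 solar mass.
    masses = sample_power_law(2.35, 1.0, 5000)
    alpha_hat, sigma = mle_slope(masses, 1.0)
    ```

    Consistent with the record's finding, the returned 1σ uncertainty scales as (α̂ − 1)/√n, i.e., it shrinks with the number of observed stars; a narrower observed mass range (smaller log m spread) likewise inflates the uncertainty.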

  19. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    International Nuclear Information System (INIS)

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F.; Hogg, David W.; Foreman-Mackey, Daniel T.; Rix, Hans-Walter; Gouliermis, Dimitrios; Dolphin, Andrew E.; Lang, Dustin; Bell, Eric F.; Gordon, Karl D.; Kalirai, Jason S.; Skillman, Evan D.

    2013-01-01

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M☉). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ∼3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precision on MF

  20. Depression, body mass index, and chronic obstructive pulmonary disease – a holistic approach

    Directory of Open Access Journals (Sweden)

    Catalfo G

    2016-02-01

    Full Text Available Giuseppe Catalfo,1 Luciana Crea,1 Tiziana Lo Castro,1 Francesca Magnano San Lio,1 Giuseppe Minutolo,1 Gherardo Siscaro,2 Noemi Vaccino,1 Nunzio Crimi,3 Eugenio Aguglia1 1Department of Psychiatry, Policlinico “G. Rodolico” University Hospital, University of Catania, Catania, Italy; 2Operative Unit Neurorehabilitation, IRCCS Fondazione Salvatore Maugeri, Sciacca, Italy; 3Department of Pneumology, Policlinico “G. Rodolico” University Hospital, University of Catania, Catania, Italy Background: Several clinical studies suggest common underlying pathogenetic mechanisms of COPD and depressive/anxiety disorders. We aim to evaluate the psychopathological and physical effects of aerobic exercise, proposed in the context of pulmonary rehabilitation, in a sample of COPD patients, through the correlation of some psychopathological variables and physical/pneumological parameters. Methods: Fifty-two consecutive subjects were enrolled. At baseline, the sample was divided into two subgroups consisting of 38 depression-positive and 14 depression-negative subjects according to the Hamilton Depression Rating Scale (HAM-D). After the rehabilitation treatment, we compared psychometric and physical examinations between the two groups. Results: The differences after the rehabilitation program in all assessed parameters demonstrated a significant improvement in psychiatric and pneumological conditions. The reduction of BMI was significantly correlated with fat mass, but only in the depression-positive patients. Conclusion: Our results suggest that pulmonary rehabilitation improves depressive and anxiety symptoms in COPD. This improvement is significantly related to the reduction of fat mass and BMI only in depressed COPD patients, in whom these parameters were related at baseline. These findings suggest that depressed COPD patients could benefit from a rehabilitation program in the context of a multidisciplinary approach. Keywords: COPD, depression, aerobic exercise

  1. Endoscopic resection of upper neck masses via retroauricular approach is feasible with excellent cosmetic outcomes.

    Science.gov (United States)

    Lee, Hyoung Shin; Lee, Dongwon; Koo, Yong Cheol; Shin, Hyang Ae; Koh, Yoon Woo; Choi, Eun Chang

    2013-03-01

    In this study, the authors introduce and evaluate the feasibility of endoscopic resection using the retroauricular approach for various benign lesions of the upper neck. A retrospective comparative analysis was performed on the clinical outcomes of patients who underwent surgery for upper neck masses as endoscopic resection using the retroauricular approach or conventional transcervical resection at the authors' center from January 2010 through August 2011. The primary outcome was the cosmetic satisfaction of the patients in each group. In addition, the feasibility of the procedure was evaluated by comparing the operation time; hospital stay; amount and duration of drainage; complications such as marginal mandibular, lingual, or hypoglossal nerve palsy; paresthesia of the ear lobe; and wound problems such as hematoma and skin necrosis. Statistical analysis was performed by independent-samples t test and the Fisher exact test, and a P value less than .05 was considered statistically significant. Thirty-six patients underwent endoscopic resection (endo group; 15 men, 21 women; mean age, 38.8 ± 15.0 years) and 40 patients underwent conventional transcervical resection (conventional group; 18 men, 22 women; mean age, 45.1 ± 14.1 years). The operating time in the endo group was longer than in the conventional group (P = .003). No significant difference was observed in the overall perioperative complications between the 2 groups. Cosmetic satisfaction evaluated with a graded scale was significantly better in the endo group, with excellent cosmetic results. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  2. Targeted mass spectrometry: An emerging powerful approach to unblock the bottleneck in phosphoproteomics.

    Science.gov (United States)

    Osinalde, Nerea; Aloria, Kerman; Omaetxebarria, Miren J; Kratchmarova, Irina

    2017-06-15

    Following the rapid expansion of the proteomics field, the investigation of post-translational modifications (PTMs) has become extremely popular, changing our perspective of how proteins constantly fine-tune cellular functions. Reversible protein phosphorylation plays a pivotal role in virtually all biological processes in the cell and it is one of the most characterized PTMs to date. During the last decade, the development of phosphoprotein/phosphopeptide enrichment strategies and mass spectrometry (MS) technology has revolutionized the field of phosphoproteomics, discovering thousands of new site-specific phosphorylations and unveiling unprecedented evidence about their modulation under distinct cellular conditions. The field has expanded so rapidly that the use of traditional methods to validate and characterize the biological role of the phosphosites is no longer feasible. Targeted MS holds great promise for becoming the method of choice to study with high precision and sensitivity already known site-specific phosphorylation events. This review summarizes the contribution of large-scale unbiased MS analyses and highlights the need for targeted MS-based approaches for follow-up investigation. Additionally, the article illustrates the biological relevance of protein phosphorylation by providing examples of disease-related phosphorylation events and emphasizes the benefits of applying targeted MS in clinics for disease diagnosis, prognosis and drug-response evaluation. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. HDL proteome in hemodialysis patients: a quantitative nanoflow liquid chromatography-tandem mass spectrometry approach.

    Directory of Open Access Journals (Sweden)

    Alain Mangé

    Full Text Available Aside from a decrease in the high-density lipoprotein (HDL cholesterol levels, qualitative abnormalities of HDL can contribute to an increase in cardiovascular (CV risk in end-stage renal disease (ESRD patients undergoing chronic hemodialysis (HD. Dysfunctional HDL leads to an alteration of reverse cholesterol transport and the antioxidant and anti-inflammatory properties of HDL. In this study, a quantitative proteomics approach, based on iTRAQ labeling and nanoflow liquid chromatography mass spectrometry analysis, was used to generate detailed data on HDL-associated proteins. The HDL composition was compared between seven chronic HD patients and a pool of seven healthy controls. To confirm the proteomics results, specific biochemical assays were then performed in triplicate in the 14 samples as well as 46 sex-matched independent chronic HD patients and healthy volunteers. Of the 122 proteins identified in the HDL fraction, 40 were differentially expressed between the healthy volunteers and the HD patients. These proteins are involved in many HDL functions, including lipid metabolism, the acute inflammatory response, complement activation, the regulation of lipoprotein oxidation, and metal cation homeostasis. Among the identified proteins, apolipoprotein C-II and apolipoprotein C-III were significantly increased in the HDL fraction of HD patients whereas serotransferrin was decreased. In this study, we identified new markers of potential relevance to the pathways linked to HDL dysfunction in HD. Proteomic analysis of the HDL fraction provides an efficient method to identify new and uncharacterized candidate biomarkers of CV risk in HD patients.

  4. Quantifying in-stream retention of nitrate at catchment scales using a practical mass balance approach.

    Science.gov (United States)

    Schwientek, Marc; Selle, Benny

    2016-02-01

    As field data on in-stream nitrate retention is scarce at catchment scales, this study aimed at quantifying net retention of nitrate within the entire river network of a fourth-order stream. For this purpose, a practical mass balance approach combined with a Lagrangian sampling scheme was applied and seasonally repeated to estimate daily in-stream net retention of nitrate for a 17.4 km long, agriculturally influenced, segment of the Steinlach River in southwestern Germany. This river segment represents approximately 70% of the length of the main stem and about 32% of the streambed area of the entire river network. Sampling days in spring and summer were biogeochemically more active than in autumn and winter. Results obtained for the main stem of Steinlach River were subsequently extrapolated to the stream network in the catchment. It was demonstrated that, for baseflow conditions in spring and summer, in-stream nitrate retention could sum up to a relevant term of the catchment's nitrogen balance if the entire stream network was considered.
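    The daily net-retention bookkeeping behind such a Lagrangian mass balance can be sketched as follows; the function names and numbers are illustrative, not the study's data:

    ```python
    def nitrate_load(conc_mg_l, q_m3_s):
        """Nitrate load in kg/day from concentration (mg/L == g/m^3) and discharge (m^3/s)."""
        return conc_mg_l * q_m3_s * 86.4  # g/s -> kg/day: x 86400 s/day, / 1000 g/kg

    def net_retention(load_upstream, lateral_loads, load_downstream):
        """Daily in-stream net retention (kg/day): everything that entered the
        segment (upstream plus tributary/lateral inputs) minus what left it."""
        return load_upstream + sum(lateral_loads) - load_downstream
    ```

    For example, a segment receiving 100 kg/day upstream and 25 kg/day laterally while exporting 110 kg/day retains a net 15 kg/day.
    
    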

  5. Capillary-HPLC with tandem mass spectrometry in analysis of alkaloid dyestuffs - a new approach.

    Science.gov (United States)

    Dąbrowski, Damian; Lech, Katarzyna; Jarosz, Maciej

    2017-11-10

    The development of an identification method for alkaloid compounds in Amur cork tree, as well as in the previously unexamined Oregon grape and European barberry shrubs, is presented. A novel approach to the separation of alkaloids was applied, using a capillary high-performance liquid chromatography (capillary-HPLC) system that had never previously been reported for the analysis of alkaloid-based dyestuffs. Its optimization was conducted with three different stationary phases (unmodified octadecylsilane-bonded silica, octadecylsilane modified with polar groups and silica-bonded pentafluorophenyl phases) as well as with different solvent buffers. Detection of the isolated compounds was carried out using a diode-array detector (DAD) and a tandem mass spectrometer with electrospray ionization (ESI MS/MS). The working parameters of ESI were optimized, whereas the multiple reaction monitoring (MRM) parameters of MS/MS detection were chosen based on the product ion spectra of the quasi-molecular ions. The calibration curve of berberine was estimated (y = 1712091x + 4785.03, correlation coefficient 0.9999). The limit of detection and limit of quantification were calculated to be 3.2 and 9.7 ng/mL, respectively. Numerous alkaloids (i.e., berberine, jatrorrhizine and magnoflorine, as well as phellodendrine, menisperine and berbamine) were identified in the extracts from alkaloid plants and silk and wool fibers dyed with these dyestuffs, including their marker compounds. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
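    Using the reported calibration curve, signal-to-concentration conversion and the common 3.3σ/S and 10σ/S detection-limit estimates can be sketched as follows; the blank standard deviation below is hypothetical:

    ```python
    # Reported berberine calibration: y = 1712091x + 4785.03
    SLOPE, INTERCEPT = 1712091.0, 4785.03

    def concentration(peak_area):
        """Invert the linear calibration to estimate concentration from signal."""
        return (peak_area - INTERCEPT) / SLOPE

    def lod_loq(sd_blank, slope):
        """Common 3.3*sigma/S and 10*sigma/S estimates of LOD and LOQ
        (sd_blank is a hypothetical blank-signal standard deviation)."""
        return 3.3 * sd_blank / slope, 10.0 * sd_blank / slope
    ```

    Inverting the calibration at a signal generated from a known concentration recovers that concentration, which is a quick sanity check on the fitted parameters.
    
    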

  6. A mass spectrometric approach for probing the stability of bioorganic radicals.

    Science.gov (United States)

    Tan, Lei; Hu, Hanfeng; Francisco, Joseph S; Xia, Yu

    2014-02-10

    Glycyl radicals are important bioorganic radical species involved in enzymatic catalysis. Herein, we demonstrate that the stability of glycyl-type radicals (X-•CH-Y) can be tuned on a molecular level by varying the X and Y substituents and experimentally probed by mass spectrometry. This approach is based on the gas-phase dissociation of cysteine sulfinyl radical (X-CysSO•-Y) ions through homolysis of the Cα-Cβ bond. This fragmentation produces a glycyl-type radical upon losing CH2SO, and the degree of this loss is closely tied to the stability of the as-formed radical. Theoretical calculations indicate that the energy of the Cα-Cβ bond homolysis is predominantly affected by the stability of the glycyl radical product through the captodative effect, rather than that of the parent sulfinyl radical. This finding suggests a novel experimental method to probe the stability of bioorganic radicals, which can potentially broaden our understanding of these important reactive intermediates. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. tavg3_3d_chm_Fe: MERRA Chem 3D IAU, Precip Mass Flux, Time average 3-hourly 1.25 x 1 degree V5.2.0 (MAT3FECHM) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3FECHM or tavg3_3d_chm_Fe data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers edges that is time averaged, 3D model...

  8. MERRA Chem 3D IAU C-Grid Edge Mass Flux, Time Average 3-Hourly (eta coord, 2/3x1/2L73) V5.2.0

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3NECHM or tavg3_3d_chm_Ne data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layer Edges that is time averaged, 3D model...

  9. Bottom-up approach to moduli dynamics in heavy gravitino scenario: Superpotential, soft terms, and sparticle mass spectrum

    International Nuclear Information System (INIS)

    Endo, Motoi; Yamaguchi, Masahiro; Yoshioka, Koichi

    2005-01-01

    The physics of moduli fields is examined in the scenario where the gravitino is relatively heavy with mass of order 10 TeV, which is favored in view of the severe gravitino problem. The form of the moduli superpotential is shown to be determined, if one imposes a phenomenological requirement that no physical CP phase arise in gaugino masses from conformal anomaly mediation. This bottom-up approach allows only two types of superpotential, each of which can have its origins in a fundamental underlying theory such as superstring. One superpotential is the sum of an exponential and a constant, which is identical to that obtained by Kachru et al. (KKLT), and the other is the racetrack superpotential with two exponentials. The general form of soft supersymmetry-breaking masses is derived, and the pattern of the superparticle mass spectrum in the minimal supersymmetric standard model is discussed with the KKLT-type superpotential. It is shown that the moduli mediation and the anomaly mediation make comparable contributions to the soft masses. At the weak scale, the gaugino masses are rather degenerate compared to minimal supergravity, which brings characteristic features to the superparticle masses. In particular, the lightest neutralino, which often constitutes the lightest superparticle and thus a dark matter candidate, is a considerable admixture of gauginos and Higgsinos. We also find a small mass hierarchy among the moduli, gravitino, and superpartners of the standard-model fields. Cosmological implications of the scenario are briefly described
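    For reference, the two superpotential types named in the abstract are commonly written in their conventional literature forms (with T the modulus and w0, A, B, a, b constants; the normalizations here are the standard ones, not necessarily those of this paper):

    ```latex
    W_{\mathrm{KKLT}}(T) = w_0 + A\, e^{-aT}, \qquad
    W_{\mathrm{racetrack}}(T) = A\, e^{-aT} + B\, e^{-bT}.
    ```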

  10. A fuzzy rule-based approach for characterization of mammogram masses into BI-RADS shape categories.

    Science.gov (United States)

    Vadivel, A; Surendiran, B

    2013-05-01

    We present new geometric shape and margin features for classifying mammogram mass lesions into BI-RADS shape categories: round, oval, lobular and irregular. According to the Breast Imaging Reporting and Data System (BI-RADS), masses can be differentiated using their shape, size and density, which is how radiologists visualize the mammograms. Measuring regular and irregular shapes mathematically is found to be a difficult task, since there is no single measure available to differentiate various shapes. It is known that for mammograms, shape features are superior to Haralick and wavelet based features. Various geometrical shape and margin features have been introduced based on the maximum and minimum radius of a mass to classify the morphology of masses. These geometric features are found to be good at discriminating regular shapes from irregular shapes. In this paper, each mass is described by a shape feature vector consisting of 17 shape and margin properties. The masses are classified into 4 categories: round, oval, lobular and irregular. Classifying masses into 4 categories is a very difficult task compared to classifying masses as benign, malignant or normal vs. abnormal. Only shape and margin characteristics can be used to discriminate these 4 categories effectively. Experiments have been conducted on mammogram images from the Digital Database for Screening Mammography (DDSM) and classified using the C5.0 decision tree classifier. A total of 224 DDSM mammogram masses are considered for the experiment. The C5.0 decision tree algorithm is used to generate simple rules, which can be easily implemented and used in a fuzzy inference system as if…then…else statements. The rules are used to construct the generalized fuzzy membership function for classifying the masses as round, oval, lobular or irregular. The proposed approach is twice as effective as existing beamlet-based features for classifying a mass as round, oval, lobular or irregular. Copyright © 2013 Elsevier Ltd. All rights reserved.
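    A hedged sketch of the kind of radius-based shape features and if-then rules described; the thresholds and the rule set below are hypothetical illustrations, not the paper's 17-feature model:

    ```python
    import math

    def radius_features(points, centroid):
        """Max/min distance from the centroid to boundary points, two
        geometry-style features in the spirit of the paper's feature set."""
        d = [math.dist(p, centroid) for p in points]
        return max(d), min(d)

    def shape_rule(r_max, r_min, circularity):
        """Toy if-then rules like those a C5.0 tree might emit
        (thresholds are hypothetical, not the paper's)."""
        ratio = r_max / r_min
        if ratio < 1.2 and circularity > 0.9:
            return "round"
        if ratio < 2.0 and circularity > 0.75:
            return "oval"
        if circularity > 0.5:
            return "lobular"
        return "irregular"
    ```

    Crisp rules of this form can then be softened into fuzzy membership functions so that boundary cases receive graded, rather than hard, category assignments.
    
    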

  11. Time-dependent mass of cosmological perturbations in the hybrid and dressed metric approaches to loop quantum cosmology

    Science.gov (United States)

    Elizaga Navascués, Beatriz; Martín de Blas, Daniel; Mena Marugán, Guillermo A.

    2018-02-01

    Loop quantum cosmology has recently been applied in order to extend the analysis of primordial perturbations to the Planck era and discuss the possible effects of quantum geometry on the cosmic microwave background. Two approaches to loop quantum cosmology with admissible ultraviolet behavior leading to predictions that are compatible with observations are the so-called hybrid and dressed metric approaches. In spite of their similarities and relations, we show in this work that the effective equations that they provide for the evolution of the tensor and scalar perturbations are somewhat different. When backreaction is neglected, the discrepancy appears only in the time-dependent mass term of the corresponding field equations. We explain the origin of this difference, arising from the distinct quantization procedures. Besides, given the privileged role that the big bounce plays in loop quantum cosmology, e.g. as a natural instant of time to set initial conditions for the perturbations, we also analyze the positivity of the time-dependent mass when this bounce occurs. We prove that the mass of the tensor perturbations is positive in the hybrid approach when the kinetic contribution to the energy density of the inflaton dominates over its potential, as well as for a considerably large sector of backgrounds around that situation, while this mass is always nonpositive in the dressed metric approach. Similar results are demonstrated for the scalar perturbations in a sector of background solutions that includes the kinetically dominated ones; namely, the mass then is positive for the hybrid approach, whereas it typically becomes negative in the dressed metric case. More precisely, this last statement is strictly valid when the potential is quadratic for values of the inflaton mass that are phenomenologically favored.

  12. The green economy for sustainable development: a spatial multi-criteria analysis - ordered weighted averaging approach in the siting process for short rotation forestry in the Basilicata Region, Italy

    Directory of Open Access Journals (Sweden)

    Severino Romano

    2013-09-01

    Full Text Available Optimising bioenergy chains and the creation of a bio-energy district can make a positive contribution to territorial development, land use planning and employment, while reducing environmental pollution. Energy planning issues are complex problems with multiple decision makers and criteria. Given the spatial nature of the problem, the present paper proposes a spatial multi-criteria analysis approach for supporting decision makers in the site selection process for short rotation forestry planting in the Basilicata Region, southern Italy. The methodology applied in the decision-support system is ordered weighted averaging, extended by means of fuzzy linguistic quantifiers. The purpose of the research is to formulate a systematic procedure to analyse complex decision problems, while supplying decision makers with a flexible tool to decide on possible agro-energy policies. The outcomes of the analysis may support decision makers in defining targeted agro-energy policies and help the private sector to identify the most appropriate cropping plan.
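    A minimal sketch of ordered weighted averaging extended with a fuzzy linguistic quantifier, in the spirit of the methodology described; a Yager-style RIM quantifier Q(r) = r^alpha is assumed, and the criterion scores are hypothetical:

    ```python
    def rim_quantifier(r, alpha):
        """Regular increasing monotone quantifier Q(r) = r**alpha
        (alpha = 1 acts like a plain average; large alpha behaves like 'all')."""
        return r ** alpha

    def owa(scores, alpha=1.0):
        """Ordered weighted averaging: sort criterion scores in descending
        order and apply quantifier-derived weights w_i = Q(i/n) - Q((i-1)/n)."""
        xs = sorted(scores, reverse=True)
        n = len(xs)
        w = [rim_quantifier(i / n, alpha) - rim_quantifier((i - 1) / n, alpha)
             for i in range(1, n + 1)]
        return sum(wi * xi for wi, xi in zip(w, xs))
    ```

    Varying alpha moves the aggregation between optimistic ("at least one criterion is good") and pessimistic ("all criteria must be good") attitudes, which is how linguistic quantifiers give decision makers a tunable degree of trade-off in the siting analysis.
    
    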

  13. A heat & mass integration approach to reduce capital and operating costs of a distillation configuration

    Energy Technology Data Exchange (ETDEWEB)

    Madenoor Ramapriya, Gautham [Purdue University; Jiang, Zheyu [Purdue University; Tawarmalani, Mohit [Purdue University; Agrawal, Rakesh [Purdue University

    2015-11-11

    We propose a general method to consolidate distillation columns of a distillation configuration using heat and mass integration. The proposed method encompasses all heat and mass integrations known to date, and includes many more. Each heat and mass integration eliminates a distillation column, a condenser, a reboiler and the heat duty associated with a reboiler. Thus, heat and mass integration can potentially offer significant capital and operating cost benefits. In this talk, we will study the various possible heat and mass integrations in detail, and demonstrate their benefits using case studies. This work will lay out a framework to synthesize an entirely new class of useful configurations based on heat and mass integration of distillation columns.

  14. Targeted metabolite profile of food bioactive compounds by Orbitrap high resolution mass spectrometry: The 'FancyTiles' approach

    NARCIS (Netherlands)

    Troise, A.D.; Ferracane, R.; Palermo, M.; Fogliano, V.

    2014-01-01

    In this paper a new targeted metabolic profile approach using Orbitrap high resolution mass spectrometry was described. For each food matrix, various classes of bioactive compounds and some specific metabolites of interest were selected on the basis of the existing knowledge, creating an easy-to-read

  15. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    NARCIS (Netherlands)

    Ran, J.; Ditmar, P.G.; Klees, R.; Farahani, H.

    2017-01-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted

  16. A High Throughput Ambient Mass Spectrometric Approach to Species Identification and Classification from Chemical Fingerprint Signatures

    OpenAIRE

    Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.

    2015-01-01

    A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes a...

  17. A deep learning approach for the analysis of masses in mammograms with minimal user intervention.

    Science.gov (United States)

    Dhungel, Neeraj; Carneiro, Gustavo; Bradley, Andrew P

    2017-04-01

    We present an integrated methodology for detecting, segmenting and classifying breast masses from mammograms with minimal user intervention. This is a long-standing problem due to the low signal-to-noise ratio in the visualisation of breast masses, combined with their large variability in terms of shape, size, appearance and location. We break the problem down into three stages: mass detection, mass segmentation, and mass classification. For the detection, we propose a cascade of deep learning methods to select hypotheses that are refined based on Bayesian optimisation. For the segmentation, we propose the use of deep structured output learning that is subsequently refined by a level set method. Finally, for the classification, we propose the use of a deep learning classifier, which is pre-trained with a regression to hand-crafted feature values and fine-tuned based on the annotations of the breast mass classification dataset. We test our proposed system on the publicly available INbreast dataset and compare the results with the current state-of-the-art methodologies. This evaluation shows that our system detects 90% of masses at 1 false positive per image, has a segmentation accuracy of around 0.85 (Dice index) on the correctly detected masses, and overall classifies masses as malignant or benign with sensitivity (Se) of 0.98 and specificity (Sp) of 0.7. Copyright © 2017 Elsevier B.V. All rights reserved.
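    The reported operating points follow from standard definitions on a confusion matrix and on binary masks; a minimal sketch (the counts in the example are hypothetical, chosen only to reproduce the quoted Se/Sp values):

    ```python
    def sens_spec(tp, fn, tn, fp):
        """Sensitivity (Se) and specificity (Sp) from confusion-matrix counts."""
        return tp / (tp + fn), tn / (tn + fp)

    def dice(seg, truth):
        """Dice index between two binary masks given as sets of pixel coordinates."""
        inter = len(seg & truth)
        return 2 * inter / (len(seg) + len(truth))
    ```

    For instance, 98 true positives with 2 false negatives and 70 true negatives with 30 false positives give Se = 0.98 and Sp = 0.7, matching the figures quoted above.
    
    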

  18. Applying rock mass classifications to carbonate rocks for engineering purposes with a new approach using the rock engineering system

    Directory of Open Access Journals (Sweden)

    Gioacchino Francesco Andriani

    2017-04-01

    Full Text Available Classical rock mass classification systems are not applicable to carbonate rocks, especially when these are affected by karst processes. Their application to such settings could therefore result in outcomes not representative of the real stress–strain behavior. In this study, we propose a new classification of carbonate rock masses for engineering purposes, by adapting the rock engineering system (RES) method by Hudson for fractured and karstified rock masses, in order to highlight the problems of implementation of geomechanical models for carbonate rocks. This new approach allows a less rigid classification for carbonate rock masses, taking into account the local properties of the outcrops, the site conditions and the type of engineering work as well.
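    A minimal sketch of how RES-style parameter weights can be derived from an interaction matrix, assuming Hudson's usual cause (row-sum) and effect (column-sum) coding; the matrix values below are hypothetical:

    ```python
    def res_weights(matrix):
        """Parameter weights from a Hudson-style RES interaction matrix:
        cause C_i = off-diagonal row sum, effect E_i = off-diagonal column
        sum, and weight a_i = (C_i + E_i) / total interactive intensity."""
        n = len(matrix)
        cause = [sum(matrix[i][j] for j in range(n) if j != i) for i in range(n)]
        effect = [sum(matrix[j][i] for j in range(n) if j != i) for i in range(n)]
        total = sum(cause) + sum(effect)
        return [(cause[i] + effect[i]) / total for i in range(n)]
    ```

    The weights sum to one by construction, so they can be used directly to combine the local outcrop, site and karst parameters into a single rating.
    
    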

  19. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry.

    Science.gov (United States)

    Barcaru, A; Mol, H G J; Tienstra, M; Vivó-Truyols, G

    2017-08-29

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number of compounds for each mass channel are estimated. The statistical dependency between m/z channels is imposed by including penalties in the model objective function. Second, the Bayesian Information Criterion (BIC) is used as Occam's razor for the probabilistic assessment of the number of components. Third, a probabilistic set of resolved spectra and their associated retention times are estimated. Finally, a probabilistic library search is proposed, computing the spectral match with a high resolution library. More specifically, a correlative measure was used that included the uncertainties in the least-squares fitting, as well as the probability of different proposals for the number of compounds in the mixture. The method was tested on simulated high resolution data, as well as on a set of pesticides injected in a GC-Orbitrap with high coelution. The proposed pipeline was able to detect accurately the retention times and the spectra of the peaks. For our case, with an extremely high degree of coelution, 5 out of the 7 existing compounds under the selected region of interest were correctly assessed. Finally, the comparison with the classical methods of deconvolution (i.e., MCR and AMDIS) indicates a better performance of the proposed algorithm in terms of the number of correctly resolved compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
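    The BIC-based choice of the number of components can be sketched as follows; with k parameters, n observations and log-likelihood ln L, BIC = k ln n - 2 ln L and lower is better (the fits below are hypothetical numbers, not the paper's):

    ```python
    import math

    def bic(log_likelihood, n_params, n_obs):
        """Bayesian Information Criterion: penalised fit, lower is better."""
        return n_params * math.log(n_obs) - 2.0 * log_likelihood

    def best_n_components(fits, n_obs):
        """Pick the number of components whose fit minimises BIC.
        `fits` maps n_components -> (log_likelihood, n_params)."""
        return min(fits, key=lambda k: bic(fits[k][0], fits[k][1], n_obs))
    ```

    The penalty term k ln n is what plays the role of Occam's razor: adding a component must raise the likelihood by more than the cost of its extra parameters.
    
    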

  20. Mass balance approaches for estimating the intestinal absorption and metabolism of peptides and analogues: theoretical development and applications

    Science.gov (United States)

    Sinko, P. J.; Leesman, G. D.; Amidon, G. L.

    1993-01-01

    A theoretical analysis for estimating the extent of intestinal peptide and peptide analogue absorption was developed on the basis of a mass balance approach that incorporates convection, permeability, and reaction. The macroscopic mass balance analysis (MMBA) was extended to include chemical and enzymatic degradation. A microscopic mass balance analysis, a numerical approach, was also developed and the results compared to the MMBA. The mass balance equations for the fraction of a drug absorbed and reacted in the tube were derived from the general steady state mass balance in a tube: [formula: see text] where M is mass, z is the length of the tube, R is the tube radius, Pw is the intestinal wall permeability, kr is the reaction rate constant, C is the concentration of drug in the volume element over which the mass balance is taken, VL is the volume of the tube, and vz is the axial velocity of drug. The theory was first applied to the oral absorption of two tripeptide analogues, cefaclor (CCL) and cefatrizine (CZN), which degrade and dimerize in the intestine. Simulations using the mass balance equations, the experimental absorption parameters, and the literature stability rate constants yielded a mean estimated extent of CCL (250-mg dose) and CZN (1000-mg dose) absorption of 89 and 51%, respectively, which was similar to the mean extent of absorption reported in humans (90 and 50%). It was proposed previously that 15% of the CCL dose spontaneously degraded systemically; however, our simulations suggest that significant CCL degradation occurs (8 to 17%) presystemically in the intestinal lumen. (ABSTRACT TRUNCATED AT 250 WORDS)
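    The kind of closed-form fractions such a mass balance yields can be illustrated under the simplifying assumption of plug flow with competing first-order wall permeation (rate ka = 2*Pw/R, from the symbols defined above) and first-order luminal degradation (kr) over a residence time tau; this is a hedged sketch of the standard competing-first-order result, not the paper's actual model:

    ```python
    import math

    def fractions(pw, r, kr, tau):
        """Fraction absorbed and fraction degraded for a solute in plug flow:
        wall permeation at ka = 2*Pw/R competes with luminal degradation at kr.
        Whatever is neither absorbed nor degraded exits the tube intact."""
        ka = 2.0 * pw / r
        ktot = ka + kr
        f_remaining = math.exp(-ktot * tau)
        f_absorbed = ka / ktot * (1.0 - f_remaining)
        f_degraded = kr / ktot * (1.0 - f_remaining)
        return f_absorbed, f_degraded
    ```

    With kr = 0 this collapses to the familiar 1 - exp(-ka*tau) absorption expression, and the three fractions (absorbed, degraded, remaining) always sum to one, which is the mass-conservation check on the bookkeeping.
    
    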

  1. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
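    For the EOQ model the abstract refers to, the AC objective and its optimum take the familiar closed form; a small sketch with the usual symbols (demand rate D, fixed order cost K, holding cost rate h; the numbers in the example are hypothetical):

    ```python
    import math

    def eoq(demand, order_cost, holding_cost):
        """Classical economic order quantity: Q* = sqrt(2*K*D/h)."""
        return math.sqrt(2.0 * order_cost * demand / holding_cost)

    def average_cost(q, demand, order_cost, holding_cost):
        """Average cost rate: ordering cost K*D/Q plus holding cost h*Q/2."""
        return order_cost * demand / q + holding_cost * q / 2.0
    ```

    Because the average cost rate is convex in Q, any deviation from Q* raises it, which is the property the AC approach optimises; the NPV approach instead discounts the actual cash flows over time.
    
    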

  2. A squeeze-like operator approach to position-dependent mass in quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Moya-Cessa, Héctor M.; Soto-Eguibar, Francisco [Instituto Nacional de Astrofísica, Óptica y Electrónica, Calle Luis Enrique Erro No. 1, Santa María Tonantzintla, San Andrés Cholula, Puebla CP 72840 (Mexico); Christodoulides, Demetrios N. [CREOL/College of Optics and Photonics, University of Central Florida, Orlando, Florida 32816-2700 (United States)

    2014-08-15

    We provide a squeeze-like transformation that allows one to remove a position-dependent mass from the Hamiltonian. Methods to solve the Schrödinger equation may then be applied to find the respective eigenvalues and eigenfunctions. As an example, we consider a position-dependent mass that leads to the integrable Morse potential and therefore to well-known solutions.

  3. Assembly of a Vacuum Chamber: A Hands-On Approach to Introduce Mass Spectrometry

    Science.gov (United States)

    Bussie`re, Guillaume; Stoodley, Robin; Yajima, Kano; Bagai, Abhimanyu; Popowich, Aleksandra K.; Matthews, Nicholas E.

    2014-01-01

    Although vacuum technology is essential to many aspects of modern physical and analytical chemistry, vacuum experiments are rarely the focus of undergraduate laboratories. We describe an experiment that introduces students to vacuum science and mass spectrometry. The students first assemble a vacuum system, including a mass spectrometer. While…

  4. Correctional Facility Average Daily Population

    Data.gov (United States)

    Montgomery County of Maryland — This dataset contains Accumulated monthly with details from Pre-Trial Average daily caseload * Detention Services, Average daily population for MCCF, MCDC, PRRS and...

  5. Characterization of the formaldehyde-H2O system using combined spectroscopic and mass spectrometry approaches

    Science.gov (United States)

    Oancea, A.; Hanoune, B.; Facq, S.; Focsa, C.; Chazallon, B.

    2009-04-01

    The atmosphere is a multiphase reactor in which physical exchange processes, heterogeneous reactions and photochemical reactions take place. The oxygenated organics (formaldehyde, ethanol, acetone etc.) present at trace concentrations in the atmosphere are known to play an important role in atmospheric chemistry due, for example, to their contribution to the production of HOx radicals, which largely determine the lifetime of pollutants [1]. Further, it has been shown that the interaction of oxygenated organics with ice particles in the atmosphere has the potential to promote heterogeneous chemistry [2]. In the polar lower troposphere, formaldehyde (H2CO) was measured in concentrations that are much higher than those predicted by chemistry models [3]. The mechanism at the origin of the formaldehyde production remains, however, controversial, as the incorporation/partitioning of H2CO in ice crystals has to be determined first. Incorporation of formaldehyde into ice can take place according to several different physical mechanisms like co-condensation, riming, and adsorption/desorption. The partitioning of formaldehyde between the gas phase, the liquid and the solid phases is an important parameter that leads to a better understanding of the incorporation mechanisms. In our work, different experimental approaches are used to characterize the partitioning between the different phases in which the H2O-H2CO system exists. Recently, we investigated by mass spectrometry and infrared diode laser spectroscopy the vapor-liquid equilibrium (VLE) of formaldehyde aqueous solutions of different concentrations at room temperature. From the data collected on the vapor pressures at atmospherically relevant formaldehyde concentrations, we derived the Henry's coefficients at 295 K [4]. In this study we present the first results on the solubility of formaldehyde in ice. This allows a better characterization of the partitioning of formaldehyde vapors above supercooled droplets and/or ice at low
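    As an illustration of what a derived Henry's-law coefficient describes, a minimal closed-system gas/liquid partitioning sketch (the function names and input values are hypothetical, not results from this study):

    ```python
    R_LATM = 0.082057  # gas constant in L*atm/(mol*K)

    def aqueous_fraction(k_h, v_liq, v_gas, temp_k):
        """Equilibrium fraction of a solute residing in the liquid phase of a
        closed gas/liquid system, given a Henry's-law constant k_H (M/atm):
        n_aq = k_H * p * V_liq and n_gas = p * V_gas / (R*T), so the pressure
        cancels and the fraction depends only on k_H*R*T*V_liq vs V_gas."""
        dissolved = k_h * R_LATM * temp_k * v_liq
        return dissolved / (dissolved + v_gas)
    ```

    A very soluble gas (large k_H, as is the case for formaldehyde in water) sits almost entirely in the liquid phase, which is why accurate Henry's coefficients are needed before solid-phase uptake can be quantified.
    
    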

  6. Mass driver retrievals of earth-approaching asteroids. [earth orbit capture for mining purposes

    Science.gov (United States)

    Oleary, B.

    1977-01-01

    Mass driver tugs can be designed to move Apollo and Amor asteroids at opportunities of low velocity increment to the vicinity of the earth. The cost of transferring asteroids through a velocity interval of 3 km/sec by mass driver is about 16 cents per kilogram amortized over 10 years, about ten times less than that required to retrieve lunar resources during the early phases of a program of space manufacturing. About 22 per cent of a 200-meter diameter asteroid could be transferred to high earth orbit by an automated 100 megawatt solar-powered mass driver in a period of five years for a cost of approximately $1 billion. Estimates of the total investment of a space manufacturing program could be reduced twofold by using asteroidal instead of lunar resources; such a program could begin several years sooner with minimal concurrent development if asteroidal search programs and mass driver development are immediately accelerated.

  7. On-line reaction monitoring by mass spectrometry, modern approaches for the analysis of chemical reactions.

    Science.gov (United States)

    Ray, Andrew; Bristow, Tony; Whitmore, Chris; Mosely, Jackie

    2017-06-19

    The application of on-line mass spectrometry for direct analysis of chemical and other types of processes continues to grow in importance and impact. The ability of the technique to characterize many aspects of a chemical reaction, such as product and impurity formation along with reactant consumption, in a single experiment is key to its adoption and development. Innovations in ionization techniques and mass spectrometry instrumentation are enabling this adoption. An increasing range of ambient ionization techniques makes on-line mass spectrometry applicable to a large range of chemistries. The academic development and commercialization of small-footprint portable/transportable mass spectrometers is providing technology that can be positioned with any process under investigation. These developments, coupled with research into new ways of sampling representatively from both the condensed and gaseous phases, are positioning mass spectrometry as an essential technology for on-line process optimization, understanding and intelligent control. It is recognized that the quantitative capability of mass spectrometry in this application can cause some resistance to its adoption, but research activities to tackle this limitation are on-going. © 2017 Wiley Periodicals, Inc.

  8. Solid non-invasive ovarian masses on MR: Histopathology and a diagnostic approach

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Yumiko O., E-mail: ytanaka@md.tsukuba.ac.jp [Department of Radiology, Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8575 (Japan); Okada, Satoshi; Satoh, Toyomi; Matsumoto, Koji [Department of Obstetrics and Gynecology, Graduate School of Comprehensive Human Sciences, University of Tsukuba (Japan); Saida, Tsukasa [Department of Radiology, Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8575 (Japan); Oki, Akinori; Yoshikawa, Hiroyuki [Department of Obstetrics and Gynecology, Graduate School of Comprehensive Human Sciences, University of Tsukuba (Japan); Minami, Manabu [Department of Radiology, Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8575 (Japan)

    2011-11-15

    Purpose: The purpose is to clarify the histopathology of solid, non-invasive ovarian masses and to investigate the MR characteristics that distinguish benign from malignant lesions. Materials and methods: From 1996 to 2008, we identified 38 cases with predominantly solid non-invasive ovarian masses examined by contrast-enhanced MR. We evaluated the signal intensity on T2WI and the degree of contrast enhancement. In 31 of these cases with a dynamic contrast study, we classified the enhancing patterns of the masses into gradually increasing and rapid-increase-then-plateau patterns. Results: Sixteen cases were benign sex-cord stromal tumors, three were other types of benign tumors, nine cases were diagnosed as primary malignant ovarian tumors, and 10 showed metastatic tumors. Low intensity on T2WI was observed in 15 benign and 2 malignant tumors. The gradually increasing pattern was observed in all 17 benignancies and 5 of the 14 malignancies. In the equilibrium phase, the masses were weakly enhanced in all 19 benignancies and only 4 of 19 malignancies. The diagnostic criterion that low signal intensity masses with gradual weak enhancement are benign showed 93.3% accuracy and a 100% positive predictive value. Conclusion: Benign solid ovarian masses tended to show low signal intensity on T2WI and gradual weak enhancement.

  9. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
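
The stated identity is easy to check numerically. In the sketch below (invented data and weights), the covariance and the average of the weight ratio are taken with respect to the first weighting function, which is the normalization under which the identity holds exactly:

```python
import numpy as np

# Check: mean_w2(x) - mean_w1(x) == Cov_w1(x, r) / mean_w1(r), with r = w2/w1.
# Data and weighting functions are invented for illustration.

def weighted_mean(x, w):
    return np.sum(w * x) / np.sum(w)

def weighted_cov(x, y, w):
    return weighted_mean(x * y, w) - weighted_mean(x, w) * weighted_mean(y, w)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                 # e.g. age-specific rates
w1 = rng.uniform(0.5, 1.5, size=1000)     # e.g. one population's age structure
w2 = rng.uniform(0.5, 1.5, size=1000)     # e.g. another age structure

r = w2 / w1
diff = weighted_mean(x, w2) - weighted_mean(x, w1)
identity = weighted_cov(x, r, w1) / weighted_mean(r, w1)
```

The two quantities agree to machine precision, which is an algebraic identity rather than a statistical approximation.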

  10. Average action for models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.; Wetterich, C.

    1993-01-01

    The average action is a new tool for investigating spontaneous symmetry breaking in elementary particle theory and statistical mechanics beyond the validity of standard perturbation theory. The aim of this work is to provide techniques for an investigation of models with fermions and scalars by means of the average potential. In the phase with spontaneous symmetry breaking, the inner region of the average potential becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations in this region necessitate a calculation of the fermion determinant in a spin wave background. We also compute the fermionic contribution to the wave function renormalization in the scalar kinetic term. (orig.)

  11. Slovenian National Landslide DataBase – A promising approach to slope mass movement prevention plan

    Directory of Open Access Journals (Sweden)

    Mihael Ribičič

    2007-12-01

    The Slovenian territory is, geologically speaking, very diverse and mainly composed of sediments or sedimentary rocks. Slope mass movements occur in almost all parts of the country. In the Alpine carbonate areas of the northern part of Slovenia, rock falls, rock slides and even debris flows can be triggered. In the mountainous regions of central Slovenia, composed of different clastic rocks, large soil landslides are quite usual, and in the young soil sediments of the eastern part of Slovenia there is a large density of small soil landslides. The damage caused by slope mass movements is high, but still no common strategy and regulations to tackle this unwanted event, especially from the aspect of prevention, have been developed. One of the first steps towards an effective strategy against landslides and other slope mass movements is a central landslide database, where (ideally) all known landslide occurrences would be reported and described in as much detail as possible. At the end of the National Landslide Database construction project, which ended in May 2005, there were more than 6600 registered landslides, of which almost half occurred at a known location and were accompanied by descriptions of their main characteristics. The established database is a chance for Slovenia to once and for all start a solid slope mass movement prevention plan. The only part which is missing, and which is the most important one, is adopting a legal act that will make it obligatory to report slope mass movement events to the database.

  12. Reconnaissance Estimates of Recharge Based on an Elevation-dependent Chloride Mass-balance Approach

    Energy Technology Data Exchange (ETDEWEB)

    Charles E. Russell; Tim Minor

    2002-08-31

    Significant uncertainty is associated with efforts to quantify recharge in arid regions such as southern Nevada. However, accurate estimates of groundwater recharge are necessary for understanding the long-term sustainability of groundwater resources and for predicting groundwater flow rates and directions. Currently, the most widely accepted method for estimating recharge in southern Nevada is the Maxey and Eakin method. This method has been applied to most basins within Nevada and has been independently verified as a reconnaissance-level estimate of recharge through several studies. Recharge estimates derived from the Maxey and Eakin method and other recharge methodologies ultimately based upon measures or estimates of groundwater discharge (outflow methods) should be augmented by a tracer-based aquifer-response method. The objective of this study was to improve an existing aquifer-response method that was based on the chloride mass-balance approach. Improvements were designed to incorporate spatial variability within recharge areas (rather than treating recharge as a lumped parameter), develop a more defensible lower limit of recharge, and differentiate local recharge from recharge emanating as interbasin flux. Seventeen springs, located in the Sheep Range, Spring Mountains, and on the Nevada Test Site, were sampled during the course of this study and their discharge was measured. The chloride and bromide concentrations of the springs were determined. Discharge and chloride concentrations from these springs were compared to estimates provided by previously published reports. A literature search yielded previously published estimates of chloride flux to the land surface. {sup 36}Cl/Cl ratios and discharge rates of the three largest springs in the Amargosa Springs discharge area were compiled from various sources. This information was utilized to determine an effective chloride concentration for recharging precipitation and its associated uncertainty via Monte Carlo simulations.
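
The chloride mass-balance calculation with Monte Carlo uncertainty propagation can be sketched as follows; the simplified form R = P·Cl_p/Cl_gw and all numerical values are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hedged sketch of a chloride mass-balance (CMB) recharge estimate with Monte
# Carlo propagation of input uncertainty. All distributions are invented.
rng = np.random.default_rng(42)
n = 100_000

precip = rng.normal(300.0, 30.0, n)      # precipitation, mm/yr (assumed)
cl_precip = rng.normal(0.6, 0.1, n)      # chloride in precipitation, mg/L (assumed)
cl_spring = rng.normal(15.0, 2.0, n)     # chloride in spring water, mg/L (assumed)

recharge = precip * cl_precip / cl_spring          # mm/yr
lo, med, hi = np.percentile(recharge, [2.5, 50.0, 97.5])
```

The percentiles give a median recharge estimate with a 95% uncertainty band, the kind of defensible bound the abstract describes.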

  13. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The gross properties of nuclei were investigated with a statistical model, in systems with equal and different numbers of protons and neutrons treated separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semi-empirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.)
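
For reference, the standard (incompressible) Weizsäcker-Bethe formula that the paper generalizes can be sketched as follows; the coefficients are one common textbook fit, not the paper's values:

```python
import math

# Bethe-Weizsäcker semi-empirical mass formula. Coefficients (MeV) are one
# common textbook fit; the paper generalizes this for compressible nuclei.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, A):
    """Total binding energy (MeV) of a nucleus with Z protons, mass number A."""
    N = A - Z
    if A % 2 == 1:
        delta = 0.0                      # even-odd nuclei
    elif Z % 2 == 0:
        delta = A_P / math.sqrt(A)       # even-even nuclei
    else:
        delta = -A_P / math.sqrt(A)      # odd-odd nuclei
    return (A_V * A                                   # volume term
            - A_S * A ** (2.0 / 3.0)                  # surface term
            - A_C * Z * (Z - 1) / A ** (1.0 / 3.0)    # Coulomb term
            - A_A * (N - Z) ** 2 / A                  # symmetry term
            + delta)                                  # pairing term

be_per_nucleon_fe56 = binding_energy(26, 56) / 56.0
```

For Fe-56 this yields about 8.8 MeV per nucleon, near the empirical peak of the binding-energy curve.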

  14. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    Science.gov (United States)

    Ran, J.; Ditmar, P.; Klees, R.; Farahani, H. H.

    2018-03-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted into mass anomalies per mascon. The limited spectral content of the gravity anomalies is properly accounted for by applying a low-pass filter as part of the inversion procedure to make the functional model spectrally consistent with the data. The full error covariance matrices of the monthly GRACE solutions are properly propagated using the law of covariance propagation. Using numerical experiments, we demonstrate the importance of a proper data weighting and of the spectral consistency between functional model and data. The developed methodology is applied to process real GRACE level-2 data (CSR RL05). The obtained mass anomaly estimates are integrated over five drainage systems, as well as over entire Greenland. We find that the statistically optimal data weighting reduces random noise by 35-69%, depending on the drainage system. The obtained mass anomaly time-series are de-trended to eliminate the contribution of ice discharge and are compared with de-trended surface mass balance (SMB) time-series computed with the Regional Atmospheric Climate Model (RACMO 2.3). We show that when using a statistically optimal data weighting in GRACE data processing, the discrepancies between GRACE-based estimates of SMB and modelled SMB are reduced by 24-47%.
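
The statistically optimal estimation described here is, at its core, generalized least squares with propagation of the full error covariance. A minimal sketch with synthetic stand-ins for the design matrix and data covariance (none of these values are real GRACE quantities):

```python
import numpy as np

# Generalized least-squares sketch of the "statistically optimal" inversion:
#   x_hat = (A^T C^-1 A)^-1 A^T C^-1 y,   cov(x_hat) = (A^T C^-1 A)^-1.
# Design matrix, covariance, and data below are synthetic stand-ins.
rng = np.random.default_rng(1)
n_obs, n_mascons = 60, 5
A = rng.normal(size=(n_obs, n_mascons))       # (filtered) forward model
x_true = rng.normal(size=n_mascons)           # "true" mass anomalies

L = 0.05 * rng.normal(size=(n_obs, n_obs))
C = L @ L.T + 0.1 * np.eye(n_obs)             # full (correlated) error covariance
y = A @ x_true + np.linalg.cholesky(C) @ rng.normal(size=n_obs)

Ci = np.linalg.inv(C)
N_mat = A.T @ Ci @ A                          # normal matrix
x_hat = np.linalg.solve(N_mat, A.T @ Ci @ y)  # GLS (statistically optimal) estimate
cov_x = np.linalg.inv(N_mat)                  # propagated formal covariance
```

Weighting by the inverse of the full covariance, rather than by a diagonal approximation, is what the abstract credits with the 35-69% noise reduction.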

  15. Birth weight differences between those offered financial voucher incentives for verified smoking cessation and control participants enrolled in the Cessation in Pregnancy Incentives Trial (CPIT), employing an intuitive approach and a Complier Average Causal Effects (CACE) analysis.

    Science.gov (United States)

    McConnachie, Alex; Haig, Caroline; Sinclair, Lesley; Bauld, Linda; Tappin, David M

    2017-07-20

    The Cessation in Pregnancy Incentives Trial (CPIT), which offered financial incentives for smoking cessation during pregnancy, showed a clinically and statistically significant improvement in cessation. However, infant birth weight was not seen to be affected. This study re-examines birth weight using an intuitive method and a complier average causal effects (CACE) method to uncover important information missed by intention-to-treat analysis. CPIT offered financial incentives up to £400 to pregnant smokers to quit. With incentives, 68 women (23.1%) were confirmed non-smokers at primary outcome, compared to 25 (8.7%) without incentives, a difference of 14.3% (Fisher test, p < 0.001). It is plausible that only a subgroup of women, "potential quitters", are able to stop smoking when offered financial incentives to quit. Viewed in this way, the overall birth weight gain with incentives is attributable only to potential quitters. We compared an intuitive approach to a CACE analysis. Mean birth weight of potential quitters in the incentives intervention group (who therefore quit) was 3338 g, compared with 3193 g for potential quitters in the control group (who did not quit). The difference attributable to incentives was 3338 - 3193 = 145 g (95% CI -617, +803). The mean difference in birth weight between the intervention and control groups was 21 g, and the difference in the proportion who managed to quit was 14.3%. Since the intervention consisted of the offer of incentives to quit smoking, the intervention was received by all women in the intervention group. However, "compliance" was successfully quitting with incentives, and the CACE analysis yielded an identical result: causal birth weight increase 21 g ÷ 0.143 = 145 g. Policy makers have great difficulty giving pregnant women money to stop smoking. This study indicates that a small, clinically insignificant improvement in average birth weight is likely to hide an important, clinically significant increase in infants born to pregnant smokers who want to stop but cannot achieve smoking cessation without the addition of financial incentives.
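
The CACE arithmetic in the abstract reduces to dividing the intention-to-treat effect by the between-arm difference in quit rates; a minimal sketch using the abstract's own figures:

```python
# CACE arithmetic: intention-to-treat effect divided by the between-arm
# difference in quit (compliance) rates.

def cace(itt_effect, quit_rate_intervention, quit_rate_control):
    return itt_effect / (quit_rate_intervention - quit_rate_control)

# Abstract figures: 21 g ITT birth-weight difference; 23.1% vs 8.7% quit rates.
causal_effect = cace(21.0, 0.231, 0.087)
```

This recovers roughly the 145 g effect among potential quitters that the abstract reports from the 21 g intention-to-treat difference.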

  16. Fault Detection of Inline Reciprocating Diesel Engine: A Mass and Gas-Torque Approach

    Directory of Open Access Journals (Sweden)

    S. H. Gawande

    2012-01-01

    Early fault detection and diagnosis for medium-speed diesel engines are important to ensure reliable operation throughout the course of their service. This work presents an investigation of the diesel engine combustion-related fault detection capability of crankshaft torsional vibrations. The proposed methodology provides a way of detecting faults early in an operating six-cylinder diesel engine. A model of a six-cylinder DI diesel engine is developed. Following the authors' earlier work, the torsional vibration amplitudes are used to superimpose the mass and gas torques, and the mass and gas torque analysis is then used to detect faults in the operating engine. The DFT of the measured crankshaft speed, under steady-state operating conditions at constant load, shows significant variation of the amplitude of the lowest major harmonic order. This is valid for both uniform operating and faulty conditions, and the lowest harmonic orders may be used to correlate their amplitudes to the gas pressure torque and mass torque for a given engine. The amplitudes of the lowest harmonic orders (0.5, 1, and 1.5) of the gas pressure torque and mass torque are used to map the fault. A method capable of detecting the faulty cylinder of an operating Kirloskar SL90 Engine-SL8800TA type diesel engine is developed, based on the phases of the lowest three harmonic orders.
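
Extracting the amplitudes of the lowest harmonic orders from a crankshaft-speed DFT can be sketched as below; the synthetic signal and its order amplitudes are invented stand-ins for an encoder measurement:

```python
import numpy as np

# Pick out the lowest harmonic orders (0.5, 1.0, 1.5 per crank revolution)
# from an instantaneous crankshaft-speed signal. A 4-stroke cycle spans two
# revolutions, so with a 2-revolution record FFT bin k corresponds to order k/2.
n_rev, samples_per_rev = 2, 360
theta = np.linspace(0, n_rev * 2 * np.pi, n_rev * samples_per_rev, endpoint=False)

# Synthetic speed fluctuation with known order content (rpm-like units):
speed = (1000.0
         + 3.0 * np.sin(0.5 * theta)          # order 0.5, amplitude 3
         + 2.0 * np.sin(1.0 * theta + 0.3)    # order 1.0, amplitude 2
         + 1.0 * np.sin(1.5 * theta))         # order 1.5, amplitude 1

spectrum = np.fft.rfft(speed)
amps = 2.0 * np.abs(spectrum) / speed.size    # single-sided amplitudes
order_amplitudes = {0.5: amps[1], 1.0: amps[2], 1.5: amps[3]}
```

Because each component completes an integer number of cycles over the record, the recovered amplitudes are exact; in a fault-mapping application these amplitudes (and phases) would be correlated with the gas-pressure and mass torques.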

  17. Intuitive Physics of Free Fall: An Information Integration Approach to the Mass-Speed Belief

    Science.gov (United States)

    Vicovaro, Michele

    2014-01-01

    In this study, the intuitive physics of free fall was explored using Information Integration Theory and Functional Measurement. The participants had to rate the speed of objects differing in mass and height of release at the end of an imagined free fall. According to physics, falling speed increases with height of release but it is substantially…

  18. A Case Investigation of Product Structure Complexity in Mass Customization Using a Data Mining Approach

    DEFF Research Database (Denmark)

    Nielsen, Peter; Brunø, Thomas Ditlev; Nielsen, Kjeld

    2014-01-01

    This study investigates how a data mining approach, the Apriori algorithm, can support the development of the three fundamental mass customization capabilities. The results of the Apriori analysis can be utilized for improving the configuration process by introducing soft constraints, for consolidating the product structure by joining components or modules, and finally for improving production planning and control.
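
A minimal Apriori-style frequent-itemset pass over product configurations sketches how co-selected components or modules could be identified; the configurations and support threshold below are invented, and the candidate generation omits the classic prefix-join pruning for brevity:

```python
from itertools import combinations

# Minimal Apriori-style frequent-itemset mining over configured products.
# Transactions (product configurations) are invented for illustration.

def frequent_itemsets(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    candidates = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while candidates:
        support = {s: sum(1 for t in transactions if s <= t) / n
                   for s in candidates}
        survivors = {s: v for s, v in support.items() if v >= min_support}
        frequent.update(survivors)
        keys = list(survivors)
        # Join surviving k-sets sharing k-1 items into (k+1)-set candidates.
        candidates = {a | b for a, b in combinations(keys, 2)
                      if len(a | b) == len(a) + 1}
    return frequent

configs = [
    {"engine_A", "trim_base", "nav"},
    {"engine_A", "trim_base"},
    {"engine_B", "trim_lux", "nav"},
    {"engine_A", "trim_base", "nav"},
]
itemsets = frequent_itemsets(configs, min_support=0.5)
```

Frequent itemsets such as {engine_A, trim_base} are candidates for soft constraints in the configurator or for consolidation into a single module.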

  19. Novel approach to determine ghrelin analogs by liquid chromatography with mass spectrometry using a monolithic column

    Czech Academy of Sciences Publication Activity Database

    Zemenová, Jana; Sýkora, D.; Adámková, H.; Maletínská, Lenka; Elbert, Tomáš; Marek, Aleš; Blechová, Miroslava

    2017-01-01

    Vol. 40, No. 5 (2017), pp. 1032-1039 ISSN 1615-9306 Institutional support: RVO:61388963 Keywords: enzyme-linked immunosorbent assay * ghrelin * lipopeptides * liquid chromatography mass spectrometry * monolithic columns Subject RIV: CB - Analytical Chemistry, Separation OBOR OECD: Analytical chemistry Impact factor: 2.557, year: 2016

  20. Integrated genomic approaches implicate osteoglycin (Ogn) in the regulation of left ventricular mass

    NARCIS (Netherlands)

    Petretto, Enrico; Sarwar, Rizwan; Grieve, Ian; Lu, Han; Kumaran, Mande K.; Muckett, Phillip J.; Mangion, Jonathan; Schroen, Blanche; Benson, Matthew; Punjabi, Prakash P.; Prasad, Sanjay K.; Pennell, Dudley J.; Kiesewetter, Chris; Tasheva, Elena S.; Corpuz, Lolita M.; Webb, Megan D.; Conrad, Gary W.; Kurtz, Theodore W.; Kren, Vladimir; Fischer, Judith; Hubner, Norbert; Pinto, Yigal M.; Pravenec, Michal; Aitman, Timothy J.; Cook, Stuart A.

    2008-01-01

    Left ventricular mass (LVM) and cardiac gene expression are complex traits regulated by factors both intrinsic and extrinsic to the heart. To dissect the major determinants of LVM, we combined expression quantitative trait locus (eQTL) and quantitative trait transcript (QTT) analyses of the cardiac

  1. Re-assessing Present Day Global Mass Transport and Glacial Isostatic Adjustment From a Data Driven Approach

    Science.gov (United States)

    Wu, X.; Jiang, Y.; Simonsen, S.; van den Broeke, M. R.; Ligtenberg, S.; Kuipers Munneke, P.; van der Wal, W.; Vermeersen, B. L. A.

    2017-12-01

    Determining present-day mass transport (PDMT) is complicated by the fact that most observations contain signals from both present-day ice melting and Glacial Isostatic Adjustment (GIA). Despite decades of progress in geodynamic modeling and new observations, significant uncertainties remain in both. The key to separating present-day ice mass change from GIA signals is to include data of different physical characteristics. We designed an approach to separate PDMT and GIA signatures by estimating them simultaneously using globally distributed interdisciplinary data with distinct physical information and a dynamically constructed a priori GIA model. We conducted a high-resolution global reappraisal of present-day ice mass balance, with focus on Earth's polar regions and its contribution to global sea-level rise, using a combination of ICESat, GRACE gravity, surface geodetic velocity data, and an ocean bottom pressure model. Adding ice altimetry supplies critically needed dual data types over the interiors of ice-covered regions to enhance the separation of PDMT and GIA signatures, and to achieve expected accuracies about half an order of magnitude higher for GIA and consequently for ice mass balance estimates. The global data-based approach can adequately address issues of PDMT- and GIA-induced geocenter motion and long-wavelength signatures important for large areas such as Antarctica and global mean sea level. In conjunction with the dense altimetry data, we solved for PDMT coefficients up to degree and order 180 by using a higher-resolution GRACE data set and a high-resolution a priori PDMT model that includes detailed geographic boundaries. The high-resolution approach solves the problem of multiple resolutions in various data types, greatly reduces aliased errors from a low-degree truncation, and at the same time enhances the separation of signatures from adjacent regions such as Greenland and the Canadian Arctic territories.

  2. Brute-Force Approach for Mass Spectrometry-Based Variant Peptide Identification in Proteogenomics without Personalized Genomic Data

    Science.gov (United States)

    Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.

    2018-02-01

    In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for exclusion of false variant identifications from the search results involving analysis of modifications mimicking single amino acid substitutions. Also, we propose a de novo based scoring scheme for assessment of identified point mutations. In the scheme, the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.
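
The search-space side of the brute-force strategy, enumerating every single amino-acid substitution variant of a peptide, can be sketched as follows (real pipelines additionally score candidates against MS/MS spectra and control the false discovery rate):

```python
# Enumerate all single amino-acid substitution variants of a tryptic peptide,
# the search space behind the "brute-force" variant strategy.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def single_substitution_variants(peptide):
    variants = set()
    for i, residue in enumerate(peptide):
        for aa in AMINO_ACIDS:
            if aa != residue:
                variants.add(peptide[:i] + aa + peptide[i + 1:])
    return variants

variants = single_substitution_variants("PEPTIDE")   # 7 positions x 19 = 133
```

The combinatorial growth (19 candidates per residue) is why such searches need the false-identification filtering step described in the abstract, e.g. excluding modifications that mimic substitutions.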

  3. Determination of the Fundamental Frequency of Perforated Rectangular Plates: Concentrated Negative Mass Approach for the Perforation

    Directory of Open Access Journals (Sweden)

    Kiran D. Mali

    2013-01-01

    This paper is concerned with a vibration analysis of perforated rectangular plates with a rectangular perforation pattern of circular holes. The study is particularly useful in the understanding of the vibration of sound absorbing screens, head plates, end covers, or supports for tube bundles, typically including tube sheets and support plates used in mechanical devices. An energy method is developed to obtain analytical frequencies of the perforated plates with clamped-edge support conditions. The perforated plate is considered as a plate with uniformly distributed mass, and the holes are considered as concentrated negative masses. The analytical procedure using the Galerkin method is adopted. The deflected surface of the plate is approximated by a cosine series which satisfies the boundary conditions. Finite element method (FEM) results have been used to illustrate the validity of the analytical model. The comparisons show that the analytical model predicts natural frequencies reasonably well for holes of small size.
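
The concentrated-negative-mass idea can be sketched with a one-term Galerkin/Rayleigh estimate for a clamped square plate; the cosine mode shape matches the kind used in the paper, but the dimensions, material values, and hole layout below are illustrative assumptions:

```python
import numpy as np

# One-term Galerkin/Rayleigh sketch: a clamped square plate whose holes are
# subtracted from the modal mass as concentrated point masses.
a = 1.0                    # plate side, m (assumed)
h = 2e-3                   # thickness, m (assumed)
rho = 7850.0               # density, kg/m^3 (steel, assumed)
E, nu = 210e9, 0.3         # Young's modulus, Poisson ratio (assumed)
D = E * h**3 / (12.0 * (1.0 - nu**2))       # flexural rigidity

def phi(x, y):
    """One-term cosine mode shape satisfying clamped edges on [0,a] x [0,a]."""
    return (1 - np.cos(2 * np.pi * x / a)) * (1 - np.cos(2 * np.pi * y / a))

# Closed-form modal stiffness and full-plate modal mass for this shape:
K = 2.0 * D * (2 * np.pi / a) ** 4 * a**2   # D * integral of (laplacian phi)^2
M_plate = 2.25 * rho * h * a**2             # rho*h * integral of phi^2

# A 3x3 grid of small holes, each removed as a concentrated negative mass.
hole_r = 0.01
hole_mass = rho * h * np.pi * hole_r**2
holes = [(x, y) for x in (0.3, 0.5, 0.7) for y in (0.3, 0.5, 0.7)]
M_eff = M_plate - hole_mass * sum(phi(x, y) ** 2 for x, y in holes)

f0 = np.sqrt(K / M_eff) / (2 * np.pi)       # fundamental frequency, Hz
```

For a solid plate this one-term shape reproduces the classic Rayleigh coefficient of about 37.2 (exact value 35.99) for clamped square plates; removing mass at the holes raises the estimated frequency slightly, consistent with the small-hole regime discussed in the abstract.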

  4. Westgate Shootings: An Emergency Department Approach to a Mass-casualty Incident.

    Science.gov (United States)

    Wachira, Benjamin W; Abdalla, Ramadhani O; Wallis, Lee A

    2014-10-01

    At approximately 12:30 pm on Saturday September 21, 2013, armed assailants attacked the upscale Westgate shopping mall in the Westlands area of Nairobi, Kenya. Using the seven key Major Incident Medical Management and Support (MIMMS) principles, command, safety, communication, assessment, triage, treatment, and transport, the Aga Khan University Hospital, Nairobi (AKUH,N) emergency department (ED) successfully coordinated the reception and care of all the casualties brought to the hospital. This report describes the AKUH,N ED response to the first civilian mass-casualty shooting incident in Kenya, with the hope of informing the development and implementation of mass-casualty emergency preparedness plans by other EDs and hospitals in Kenya, appropriate for the local health care system.

  5. Quantization of Hamiltonian systems with a position dependent mass: Killing vector fields and Noether momenta approach

    Science.gov (United States)

    Cariñena, José F.; Rañada, Manuel F.; Santander, Mariano

    2017-11-01

    The quantization of systems with a position-dependent mass (PDM) is studied. We present a method that starts with the study of the existence of Killing vector fields for the PDM geodesic motion (Lagrangian with a PDM kinetic term but without any potential) and the construction of the associated Noether momenta. The method then considers, as the appropriate Hilbert space, the space of functions that are square integrable with respect to a measure related to the PDM and, after that, it establishes the quantization not of the canonical momenta p, but of the Noether momenta P instead. The quantum Hamiltonian, which depends on the Noether momenta, is obtained as a Hermitian operator defined on the PDM Hilbert space. In the second part, several systems with position-dependent mass, most of them related to nonlinear oscillators, are quantized by making use of the method proposed in the first part.

  6. Real estate market and building energy performance: Data for a mass appraisal approach.

    Science.gov (United States)

    Bonifaci, Pietro; Copiello, Sergio

    2015-12-01

    Mass appraisal is widely considered an advanced frontier in the real estate valuation field. Performing mass appraisal entails getting access to the base information conveyed by a large number of transactions, such as prices and property features. Due to the lack of transparency of many Italian real estate market segments, our survey was designed to gather data from residential property advertisements. The dataset specifically focuses on property offer prices and dwelling energy efficiency. The latter refers to the label expressed and exhibited by the energy performance certificate. Moreover, data are georeferenced with the highest possible accuracy: at the neighborhood level for 76.8% of cases, and at the street or building-number level for the remaining 23.2%. The data relate to the analysis performed in Bonifaci and Copiello [1] on the relationship between house prices and building energy performance, that is to say, the willingness to pay in order to benefit from more efficient dwellings.

  7. Approach for domestic preparation of standard material (LSD spike) for isotope dilution mass spectrometry

    International Nuclear Information System (INIS)

    Ishikawa, Fumitaka; Sumi, Mika; Chiba, Masahiko; Suzuki, Toru; Abe, Tomoyuki; Kuno, Yusuke

    2008-01-01

    The accountancy analysis of nuclear fuel material at the Plutonium Fuel Development Center of JAEA is performed by isotope dilution mass spectrometry (IDMS). IDMS requires a standard material called an LSD spike (Large Size Dried spike), which is indispensable for accountancy in facilities where nuclear fuel materials are handled. Although the LSD spike and Pu source material have been supplied from foreign countries, the transportation of such materials has recently become more difficult. This difficulty may affect the operation of nuclear facilities in the future. Therefore, research and development of a domestic LSD spike and base material has been performed at JAEA. Certification for such standard nuclear materials, including spikes produced in Japan, is being studied. This report presents the current status and the future plan for the technological development. (author)
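
For context, the isotope-dilution calculation that an LSD spike supports can be sketched in its simplified two-isotope form; the function and all numbers below are illustrative, not JAEA procedure or certified values:

```python
# Simplified two-isotope isotope-dilution relation. R denotes the ratio
# n(spike-enriched isotope) / n(reference isotope); all numbers are invented.

def idms_amount(n_spike_ref, r_spike, r_sample, r_blend):
    """Moles of the reference isotope in the sample, given the moles of the
    reference isotope added with the spike and the three measured ratios."""
    return n_spike_ref * (r_spike - r_blend) / (r_blend - r_sample)

# Synthetic self-check: a spike carrying 1.0 mol reference isotope at R = 100
# is blended with a sample holding 5.0 mol at R = 0.01; recover the 5.0 mol.
a_blend = 1.0 * 100.0 + 5.0 * 0.01     # moles of the enriched isotope in blend
b_blend = 1.0 + 5.0                    # moles of the reference isotope in blend
recovered = idms_amount(1.0, 100.0, 0.01, a_blend / b_blend)
```

The sample amount follows purely from ratio measurements and the certified spike content, which is why a well-characterized spike is indispensable for accountancy.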

  8. Mass lesions in chronic pancreatitis: benign or malignant? An "evidence-based practice" approach.

    LENUS (Irish Health Repository)

    Gerstenmaier, Jan F

    2012-02-01

    The diagnosis of a pancreatic mass lesion in the presence of chronic pancreatitis can be extremely challenging. At the same time, a high level of certainty about the diagnosis is necessary for appropriate management planning. The aim of this study was to establish current best evidence about which imaging methods reliably differentiate a benign from a malignant lesion, and show how that evidence is best applied. A diagnostic algorithm based on Bayesian analysis is proposed.
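
A Bayesian update of the kind such an algorithm formalizes can be sketched with a pre-test probability and a likelihood ratio; both numbers below are illustrative assumptions, not values from the paper:

```python
# Post-test probability from pre-test probability and a likelihood ratio (LR),
# the core Bayesian step behind a diagnostic algorithm. The 30% pre-test
# probability and LR = 8 for a positive imaging finding are assumed values.

def posterior_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

p_malignant = posterior_probability(0.30, 8.0)   # about 0.77
```

Chaining updates across independent imaging findings is what lets sequential tests push the probability of malignancy high (or low) enough to commit to a management plan.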

  9. Theoretical approach for enhanced mass transfer effects in-duct flue gas desulfurization processes

    Energy Technology Data Exchange (ETDEWEB)

    1990-10-24

    During the reporting period of July 1 to September 30, 1990, bench- and pilot-scale experiments were conducted to measure mass transfer and kinetic rates under simulated duct-injection conditions. This report describes the results of stirred-tank modelling experiments; experiments with moist solids in a short-time differential reactor to study and compare SO{sub 2} conversions; an investigation of the agglomeration of damp Ca(OH){sub 2}-based solids; and an evaluation of speciality sorbents.

  10. Trip time prediction in mass transit companies. A machine learning approach

    OpenAIRE

    João M. Moreira; Alípio Jorge; Jorge Freire de Sousa; Carlos Soares

    2005-01-01

    In this paper we discuss how trip time prediction can be useful for operational optimization in mass transit companies and which machine learning techniques can be used to improve results. Firstly, we analyze which departments need trip time prediction and when. Secondly, we review related work, and thirdly we present the analysis of trip time over a particular path. We proceed by presenting experimental results conducted on real data with the forecasting techniques we found most adequate, and concl...

  11. Mass Transfer and Chemical Reaction Approach of the Kinetics of the Acetylation of Gadung Flour using Glacial Acetic Acid

    Directory of Open Access Journals (Sweden)

    Andri Cahyo Kumoro

    2015-03-01

    Acetylation is one of the common methods of modifying starch properties by introducing acetyl (CH3CO) groups to starch molecules at low temperatures. While most acetylation is conducted using starch as the anhydroglucose source and acetic anhydride or vinyl acetate as the nucleophilic agent, this work employs different reactants, namely gadung flour and glacial acetic acid. The purposes of this work are to study the effect of reaction pH and GAA/GF mass ratio on the rate of the acetylation reaction and to determine its rate constants. The acetylation of gadung flour with glacial acetic acid in the presence of sodium hydroxide as a homogeneous catalyst was studied at ambient temperature with pH ranging from 8-10 and different mass ratios of acetic acid to gadung flour (1:3, 1:4, and 1:5). It was found that increasing pH led to an increase in the degree of substitution, while increasing the GAA/GF mass ratio caused a decrease in the degree of substitution, due to hydrolysis of the acetylated starch. The desired starch acetylation reaction is accompanied by an undesirable hydrolysis reaction of the acetylated starch after 40-50 minutes of reaction time. Investigation of the reaction kinetics showed that the value of the mass transfer rate constant (Kcs) is smaller than the surface reaction rate constant (k). Thus, it can be concluded that the rate-controlling step is mass transfer.

  12. Windbreak effect on biomass and grain mass accumulation of corn: a modeling approach

    International Nuclear Information System (INIS)

    Zhang, H.; Brandle, J.R.

    1996-01-01

    While numerous studies have indicated that field windbreaks both improve crop growing conditions and generally enhance crop growth and yield, especially under less favorable conditions, the relationship between the two is not clearly understood. A simple model is proposed to simulate biomass and grain mass accumulation of corn (Zea mays L.) with a windbreak shelter or without (exposed condition). The model is based on the positive relationship between intercepted solar radiation and biomass accumulation and requires plant population and hourly inputs of solar radiation and air temperature. Using published data, radiation use efficiency (RUE) was related to plant population, and a temperature function was established between relative corn growth and temperature for pre-silking stages. Biomass and grain mass simulated by the model agreed well with those measured for both sheltered and unsheltered plants from 1990 to 1992. Windbreaks did not significantly increase biomass or grain mass of corn in this study, even though air temperature was greater with shelter than without, probably indicating that the microclimatic changes induced by windbreaks were not physiologically significant during the 3-yr period studied. The model has potential use in future studies to relate windbreak effects to crop yield and to evaluate windbreak designs for maximum benefits.
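
The model's core relationship, biomass gain proportional to intercepted radiation times RUE and modulated by a temperature response, can be sketched as follows; the ramp-shaped temperature function, parameter values, and synthetic day are assumptions for illustration, not the paper's calibration:

```python
import math

# Sketch of an RUE-driven biomass model with a temperature modifier.
# The ramp shape and all parameter values are illustrative assumptions.

def temperature_factor(t_air, t_base=8.0, t_opt=28.0):
    """Assumed linear ramp: 0 at/below t_base, 1 at/above t_opt (deg C)."""
    return min(max((t_air - t_base) / (t_opt - t_base), 0.0), 1.0)

def biomass_increment(par_mj, t_air, rue=3.0):
    """Hourly biomass gain, g/m^2: RUE (g/MJ) times intercepted PAR (MJ/m^2)."""
    return rue * par_mj * temperature_factor(t_air)

# Accumulate over one synthetic day of hourly radiation and temperature.
biomass = 0.0
for hour in range(24):
    sun = max(0.0, math.sin(math.pi * (hour - 6) / 12.0))  # daylight 06-18
    par_mj = 1.0 * sun        # hourly intercepted PAR, MJ/m^2 (assumed)
    t_air = 18.0 + 8.0 * sun  # hourly air temperature, deg C (assumed)
    biomass += biomass_increment(par_mj, t_air)
```

A windbreak effect would enter through the hourly inputs (intercepted radiation and air temperature), which is how the model lets sheltered and exposed treatments be compared with one mechanism.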

  13. A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays

    Science.gov (United States)

    Guo, Y. J.; Lee, K. J.; Caballero, R. N.

    2018-04-01

    The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals and construct a Bayesian data-analysis framework to detect unknown masses in the Solar system and to measure their orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When an object is not detectable, our algorithm can be used to place upper limits on its mass. The algorithm is verified using simulated data sets and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments in detecting unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10^-11-10^-12 M⊙, or measure the mass of the Jovian system to a fractional precision of 10^-8-10^-9.
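The quoted sensitivity can be sanity-checked with a back-of-the-envelope estimate (a simple order-of-magnitude model assumed here, not the paper's Bayesian analysis): an unmodelled body of mass m at orbital radius a displaces the Sun from the assumed barycenter by roughly a*m/M_sun, and a barycenter error of that size induces timing residuals of order (a*m/M_sun)/c.

```python
# Order-of-magnitude estimate of the timing residual induced by an
# unmodelled Solar-system mass (illustrative model, not the paper's method).
AU = 1.495978707e11   # astronomical unit, m
C = 2.99792458e8      # speed of light, m/s

def residual_amplitude_s(mass_msun: float, a_au: float) -> float:
    """Approximate timing-residual amplitude in seconds: the Sun's
    reflex displacement a*(m/M_sun), converted to light-travel time."""
    return (a_au * AU) * mass_msun / C

# Hypothetical 1e-11 M_sun object at Jupiter's distance (5.2 AU):
dt = residual_amplitude_s(1e-11, 5.2)
print(f"{dt * 1e9:.1f} ns")   # tens of ns, comparable to PTA timing precision
```

That a 10^-11 M⊙ object at Jovian distances produces residuals of tens of nanoseconds, right at the precision of the best-timed pulsars, is consistent with the upper limits quoted above.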

  14. Interrogating the Venom of the Viperid Snake Sistrurus catenatus edwardsii by a Combined Approach of Electrospray and MALDI Mass Spectrometry.

    Directory of Open Access Journals (Sweden)

    Alex Chapeaurouge

    Full Text Available The complete sequence characterization of snake venom proteins by mass spectrometry is rather challenging due to the presence of multiple isoforms from different protein families. In the present study, we investigated the tryptic digest of the venom of the viperid snake Sistrurus catenatus edwardsii by a combined approach of liquid chromatography coupled to either electrospray (online) or MALDI (offline) mass spectrometry. These different ionization techniques proved to be complementary, allowing the identification of a great variety of isoforms of diverse snake venom protein families, as evidenced by the detection of the corresponding unique peptides. For example, ten out of eleven predicted isoforms of serine proteinases of the venom of S. c. edwardsii were distinguished using this approach. Moreover, snake venom protein families not encountered in a previous transcriptome study of the venom gland of this snake were identified. In essence, our results support the notion that complementary ionization techniques of mass spectrometry allow for the detection of even subtle sequence differences of snake venom proteins, which is fundamental for future structure-function relationship and possible drug design studies.

  15. An accurate and adaptable photogrammetric approach for estimating the mass and body condition of pinnipeds using an unmanned aerial system.

    Science.gov (United States)

    Krause, Douglas J; Hinke, Jefferson T; Perryman, Wayne L; Goebel, Michael E; LeRoi, Donald J

    2017-01-01

    Measurements of body size and mass are fundamental to pinniped population management and research. Manual measurements tend to be accurate but are invasive and logistically challenging to obtain. Ground-based photogrammetric techniques are less invasive, but inherent limitations make them impractical for many field applications. The recent proliferation of unmanned aerial systems (UAS) in wildlife monitoring has provided a promising new platform for the photogrammetry of free-ranging pinnipeds. Leopard seals (Hydrurga leptonyx) are an apex predator in coastal Antarctica whose body condition could be a valuable indicator of ecosystem health. We aerially surveyed leopard seals of known body size and mass to test the precision and accuracy of photogrammetry from a small UAS. Flights were conducted in January and February of 2013 and 2014 and 50 photogrammetric samples were obtained from 15 unrestrained seals. UAS-derived measurements of standard length were accurate to within 2.01 ± 1.06%, and paired comparisons with ground measurements were statistically indistinguishable. An allometric linear mixed effects model predicted leopard seal mass within 19.40 kg (4.4% error for a 440 kg seal). Photogrammetric measurements from a single, vertical image obtained using UAS provide a noninvasive approach for estimating the mass and body condition of pinnipeds that may be widely applicable.

  16. An accurate and adaptable photogrammetric approach for estimating the mass and body condition of pinnipeds using an unmanned aerial system.

    Directory of Open Access Journals (Sweden)

    Douglas J Krause

    Full Text Available Measurements of body size and mass are fundamental to pinniped population management and research. Manual measurements tend to be accurate but are invasive and logistically challenging to obtain. Ground-based photogrammetric techniques are less invasive, but inherent limitations make them impractical for many field applications. The recent proliferation of unmanned aerial systems (UAS) in wildlife monitoring has provided a promising new platform for the photogrammetry of free-ranging pinnipeds. Leopard seals (Hydrurga leptonyx) are an apex predator in coastal Antarctica whose body condition could be a valuable indicator of ecosystem health. We aerially surveyed leopard seals of known body size and mass to test the precision and accuracy of photogrammetry from a small UAS. Flights were conducted in January and February of 2013 and 2014 and 50 photogrammetric samples were obtained from 15 unrestrained seals. UAS-derived measurements of standard length were accurate to within 2.01 ± 1.06%, and paired comparisons with ground measurements were statistically indistinguishable. An allometric linear mixed effects model predicted leopard seal mass within 19.40 kg (4.4% error for a 440 kg seal). Photogrammetric measurements from a single, vertical image obtained using UAS provide a noninvasive approach for estimating the mass and body condition of pinnipeds that may be widely applicable.
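The fixed-effect backbone of an allometric mass model like the one described above is a power law, mass = a * length^b, fitted as a straight line in log-log space. A minimal sketch with synthetic data (the lengths, masses, and fitted coefficients below are invented for illustration, not the study's measurements):

```python
# Illustrative allometric fit: mass = a * L^b, estimated by ordinary
# least squares on log-transformed data (synthetic values).
import math

lengths = [2.4, 2.7, 2.9, 3.1, 3.3]            # standard length, m
masses = [300.0, 360.0, 420.0, 470.0, 540.0]   # body mass, kg

n = len(lengths)
xs = [math.log(l) for l in lengths]
ys = [math.log(m) for m in masses]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

def predict_mass(length_m: float) -> float:
    """Predicted body mass (kg) from photogrammetric standard length (m)."""
    return a * length_m ** b

print(f"mass ~= {a:.1f} * L^{b:.2f}; 3.0 m seal -> {predict_mass(3.0):.0f} kg")
```

The actual study adds random effects for repeated measurements of the same individuals; this sketch shows only the allometric core.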

  17. Lagrangian averaging with geodesic mean

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  18. A novel approach to finely tuned supersymmetric standard models: The case of the non-universal Higgs mass model

    Science.gov (United States)

    Yamaguchi, Masahiro; Yin, Wen

    2018-02-01

    Discarding the prejudice about fine tuning, we propose a novel and efficient approach to identify relevant regions of the fundamental parameter space in supersymmetric models with some amount of fine tuning. The essential idea is the mapping of experimental constraints at a low-energy scale, rather than the parameter sets, to the fundamental parameter space. Applying this method to the non-universal Higgs mass model, we identify a new interesting superparticle mass pattern where some of the first two generation squarks are light whilst the stops are kept as heavy as 6 TeV. Furthermore, as another application of this method, we show that the discrepancy of the muon anomalous magnetic dipole moment can be filled by a supersymmetric contribution within the 1σ level of the experimental and theoretical errors, which was overlooked by previous studies due to the extremely fine tuning required.

  19. The Krebs cycle and mitochondrial mass are early victims of endothelial dysfunction: proteomic approach.

    Science.gov (United States)

    Addabbo, Francesco; Ratliff, Brian; Park, Hyeong-Cheon; Kuo, Mei-Chuan; Ungvari, Zoltan; Csiszar, Anna; Ciszar, Anna; Krasnikov, Boris; Krasnikof, Boris; Sodhi, Komal; Zhang, Fung; Nasjletti, Alberto; Goligorsky, Michael S

    2009-01-01

    Endothelial cell dysfunction is associated with bioavailable nitric oxide deficiency and an excessive generation of reactive oxygen species. We modeled this condition by chronically inhibiting nitric oxide generation with subpressor doses of N(G)-monomethyl-L-arginine (L-NMMA) in C57B6 and Tie-2/green fluorescent protein mouse strains. L-NMMA-treated mice exhibited a slight reduction in vasorelaxation ability, as well as detectable abnormalities in soluble adhesion molecules (soluble intercellular adhesion molecule-1 and vascular cellular adhesion molecule-1, and matrix metalloproteinase 9), which represent surrogate indicators of endothelial dysfunction. Proteomic analysis of the isolated microvasculature using 2-dimensional gel electrophoresis and matrix-assisted laser desorption/ionization time-of-flight mass spectroscopy revealed abnormal expression of a cluster of mitochondrial enzymes, which was confirmed using immunodetection. Aconitase-2 and enoyl-CoA-hydratase-1 expression levels were decreased in L-NMMA-treated animals; this phenotype was absent in nitric oxide synthase-1 and -3 knockout mice. Depletion of aconitase-2 and enoyl-CoA-hydratase-1 resulted in the inhibition of the Krebs cycle and enhanced pyruvate shunting toward the glycolytic pathway. To assess mitochondrial mass in vivo, co-localization of green fluorescent protein and MitoTracker fluorescence was detected by intravital microscopy. Quantitative analysis of fluorescence intensity showed that L-NMMA-treated animals exhibited lower fluorescence of MitoTracker in microvascular endothelia as a result of reduced mitochondrial mass. These findings provide conclusive and unbiased evidence that mitochondriopathy represents an early manifestation of endothelial dysfunction, shifting cell metabolism toward "metabolic hypoxia" through the selective depletion of both aconitase-2 and enoyl-CoA-hydratase-1. These findings may contribute to an early preclinical diagnosis of endothelial dysfunction.

  20. Management of Renal Masses in an Octogenarian Cohort: Is There a Right Approach?

    Science.gov (United States)

    Tang, Dominic H; Nawlo, Jude; Chipollini, Juan; Gilbert, Scott M; Poch, Michael; Pow-Sang, Julio M; Sexton, Wade J; Spiess, Philippe E

    2017-12-01

    We reviewed the outcomes for an octogenarian population to investigate whether active surveillance (AS) provides comparable survival to partial nephrectomy (PN) or radical nephrectomy (RN). Data were collected from 115 octogenarian patients referred for management of renal masses at Moffitt Cancer Center from 2000 to 2013. Patients were treated with AS, PN, or RN. Univariable and multivariable Cox regression models measured the association between management modality and survival. Kaplan-Meier survival analysis was used to calculate survival, and log-rank tests were used to compare survival curves. The median age was 82 years (interquartile range, 81-85 years). The median follow-up period was 51 months (interquartile range, 23-81 months). Of the 115 patients, 31 (27%) underwent AS, 31 (27%) underwent PN, and 53 (46%) underwent RN. The patients who underwent RN had a larger mean tumor size at 5.5 cm, with 19 patients (36%) having stage ≥ pT3 (P < .001). We found no difference in overall survival or disease-specific survival among the 3 management strategies on univariable analysis (P = .39 and P = .1, respectively). On multivariable analysis for overall survival, only the Charlson comorbidity index was associated with worse survival (hazard ratio, 1.2; 95% confidence interval, 1.1-1.3; P = .002). In a subgroup analysis of cT1a patients, we also found no difference in overall or disease-specific survival among the treatment arms on univariable analysis (P = .74 and P = .9, respectively). Active treatment with PN and RN might not provide a survival advantage compared with AS in the octogenarian population with a small renal mass. However, larger renal masses should undergo active treatment in appropriately selected patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. An Artificial Gravity Spacecraft Approach which Minimizes Mass, Fuel and Orbital Assembly Reg

    Science.gov (United States)

    Bell, L.

    2002-01-01

    The Sasakawa International Center for Space Architecture (SICSA) is undertaking a multi-year research and design study that is exploring near and long-term commercial space development opportunities. Space tourism in low-Earth orbit (LEO), and possibly beyond LEO, comprises one business element of this plan. Supported by a financial gift from the owner of a national U.S. hotel chain, SICSA has examined opportunities, requirements and facility concepts to accommodate up to 100 private citizens and crewmembers in LEO, as well as on lunar/planetary rendezvous voyages. SICSA's artificial gravity Science Excursion Vehicle ("AGSEV") design which is featured in this presentation was conceived as an option for consideration to enable round-trip travel to Moon and Mars orbits and back from LEO. During the course of its development, the AGSEV would also serve other important purposes. An early assembly stage would provide an orbital science and technology testbed for artificial gravity demonstration experiments. An ultimate mature stage application would carry crews of up to 12 people on Mars rendezvous missions, consuming approximately the same propellant mass required for lunar excursions. Since artificial gravity spacecraft that rotate to create centripetal accelerations must have long spin radii to limit adverse effects of Coriolis forces upon inhabitants, SICSA's AGSEV design embodies a unique tethered body concept which is highly efficient in terms of structural mass and on-orbit assembly requirements. The design also incorporates "inflatable" as well as "hard" habitat modules to optimize internal volume/mass relationships. Other important considerations and features include: maximizing safety through element and system redundancy; means to avoid destabilizing mass imbalances throughout all construction and operational stages; optimizing ease of on-orbit servicing between missions; and maximizing comfort and performance through careful attention to human needs. A
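The trade-off driving the tethered design above is standard rotational mechanics: artificial gravity g = omega^2 * r, while Coriolis comfort limits cap the spin rate, forcing a long spin radius. A quick sketch (the comfort limit of ~2 rpm is a commonly cited assumption, not a figure from this abstract):

```python
# Spin radius needed for a given artificial-gravity level at a given
# rotation rate: g = omega^2 * r  =>  r = g / omega^2.
import math

G0 = 9.81  # m/s^2, one Earth gravity

def spin_radius_m(rpm: float, g_level: float = 1.0) -> float:
    """Spin radius (m) for g_level * 1 g at the given rotation rate."""
    omega = rpm * 2.0 * math.pi / 60.0   # rad/s
    return g_level * G0 / omega ** 2

# At a comfort-limited ~2 rpm, 1 g requires a spin radius of roughly 220 m,
# which is why a tethered configuration is efficient in structural mass:
print(f"{spin_radius_m(2.0):.0f} m")
```

Doubling the spin rate quarters the required radius, but at the cost of stronger Coriolis effects on inhabitants, the adverse effects the abstract mentions.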

  2. A simple theoretical approach to determine relative ion yield (RIY) in glow discharge mass spectrometry (GDMS)

    International Nuclear Information System (INIS)

    Born, Sabine; Matsunami, Noriaki

    2000-01-01

    Direct current glow discharge mass spectrometry (dc-GDMS) has been applied to detect impurities in metals. The aim of this study is to understand quantitatively the processes taking place in GDMS and establish a model to calculate the relative ion yield (RIY), which is inversely proportional to the relative sensitivity factor (RSF), in order to achieve better agreement between the calculated and the experimental RIYs. A comparison is made between the calculated RIY of the present model and the experimental RIY, and also with other models. (author)
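The quantities above enter routine GDMS quantification in a simple way: a measured analyte-to-matrix ion-beam ratio is converted to concentration via the relative sensitivity factor, and RSF is inversely proportional to RIY as stated. A minimal sketch of that bookkeeping (function names and the proportionality constant are illustrative assumptions, not this paper's model):

```python
# Sketch of how RSF/RIY enter GDMS quantification (illustrative only).
def rsf_from_riy(riy: float, scale: float = 1.0) -> float:
    """RSF is inversely proportional to RIY; 'scale' stands in for an
    assumed calibration constant."""
    return scale / riy

def concentration(ion_beam_ratio: float, rsf: float) -> float:
    """Analyte concentration from the measured analyte/matrix
    ion-beam ratio, corrected by the element's RSF."""
    return ion_beam_ratio * rsf

rsf = rsf_from_riy(0.5)            # hypothetical RIY of 0.5 -> RSF of 2
print(concentration(1e-6, rsf))    # hypothetical ion-beam ratio -> concentration
```

A model that predicts RIY from first principles, as attempted in the paper, would remove the need for element-by-element empirical RSF calibration.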

  3. A comparison of labeling and label-free mass spectrometry-based proteomics approaches.

    Science.gov (United States)

    Patel, Vibhuti J; Thalassinos, Konstantinos; Slade, Susan E; Connolly, Joanne B; Crombie, Andrew; Murrell, J Colin; Scrivens, James H

    2009-07-01

    The proteome of the recently discovered bacterium Methylocella silvestris has been characterized using three profiling and comparative proteomics approaches. The organism has been grown on two different substrates enabling variations in protein expression to be identified. The results obtained using the experimental approaches have been compared with respect to number of proteins identified, confidence in identification, sequence coverage and agreement of regulated proteins. The sample preparation, instrumental time and sample loading requirements of the differing experiments are compared and discussed. A preliminary screen of the protein regulation results for biological significance has also been performed.

  4. Needleless coaxial electrospinning: A novel approach to mass production of coaxial nanofibers.

    Science.gov (United States)

    Vysloužilová, Lucie; Buzgo, Matej; Pokorný, Pavel; Chvojka, Jiří; Míčková, Andrea; Rampichová, Michala; Kula, Jiří; Pejchar, Karel; Bílek, Martin; Lukáš, David; Amler, Evžen

    2017-01-10

    Herein, we describe a simple spinneret setup for needleless coaxial electrospinning that exceeds the limited production capacity of current approaches. The proposed weir spinneret enables coaxial electrospinning from free liquid surface. This approach leads to the formation of coaxial nanofibers with higher and uniform shell/core ratio, which results in the possibility of better tuning of the degradation rate. The throughput and quality increase favor the broader application of coaxial nanofibers from weir spinnerets as systems for controlled drug delivery in regenerative medicine and tissue engineering. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Modeling the effect of levothyroxine therapy on bone mass density in postmenopausal women: a different approach leads to new inference

    Science.gov (United States)

    Mohammadi, Babak; Haghpanah, Vahid; Tavangar, Seyed Mohammad; Larijani, Bagher

    2007-01-01

    Background The diagnosis, treatment and prevention of osteoporosis is a national health emergency. Osteoporosis quietly progresses without symptoms until late stage complications occur. Older patients are more commonly at risk of fractures due to osteoporosis. The fracture risk increases when suppressive doses of levothyroxine are administered, especially in postmenopausal women. The question is: "When should bone mass density be tested in postmenopausal women after the initiation of suppressive levothyroxine therapy?". Standard guidelines for the prevention of osteoporosis suggest that follow-up be done in 1 to 2 years. We were interested in predicting the level of bone mass density in postmenopausal women after the initiation of suppressive levothyroxine therapy with a novel approach. Methods The study used data from the literature on the influence of exogenous thyroid hormones on bone mass density. Four cubic polynomial equations were obtained by curve fitting for Ward's triangle, trochanter, spine and femoral neck. The behaviors of the models were investigated by statistical and mathematical analyses. Results There are four points of inflexion on the graphs of the first derivatives of the equations with respect to time at about 6, 5, 7 and 5 months. In other words, there is a maximum speed of bone loss around the 6th month after the start of suppressive L-thyroxine therapy in post-menopausal women. Conclusion It seems reasonable to check bone mass density at the 6th month of therapy. More research is needed to explain the cause and to confirm the clinical application of this phenomenon for osteoporosis, but such an approach can be used as a guide to future experimentation. The investigation of change over time may lead to more sophisticated decision making in a wide variety of clinical problems. PMID:17559682

  6. Modeling the effect of levothyroxine therapy on bone mass density in postmenopausal women: a different approach leads to new inference

    Directory of Open Access Journals (Sweden)

    Tavangar Seyed

    2007-06-01

    Full Text Available Background The diagnosis, treatment and prevention of osteoporosis is a national health emergency. Osteoporosis quietly progresses without symptoms until late stage complications occur. Older patients are more commonly at risk of fractures due to osteoporosis. The fracture risk increases when suppressive doses of levothyroxine are administered, especially in postmenopausal women. The question is: "When should bone mass density be tested in postmenopausal women after the initiation of suppressive levothyroxine therapy?". Standard guidelines for the prevention of osteoporosis suggest that follow-up be done in 1 to 2 years. We were interested in predicting the level of bone mass density in postmenopausal women after the initiation of suppressive levothyroxine therapy with a novel approach. Methods The study used data from the literature on the influence of exogenous thyroid hormones on bone mass density. Four cubic polynomial equations were obtained by curve fitting for Ward's triangle, trochanter, spine and femoral neck. The behaviors of the models were investigated by statistical and mathematical analyses. Results There are four points of inflexion on the graphs of the first derivatives of the equations with respect to time at about 6, 5, 7 and 5 months. In other words, there is a maximum speed of bone loss around the 6th month after the start of suppressive L-thyroxine therapy in post-menopausal women. Conclusion It seems reasonable to check bone mass density at the 6th month of therapy. More research is needed to explain the cause and to confirm the clinical application of this phenomenon for osteoporosis, but such an approach can be used as a guide to future experimentation. The investigation of change over time may lead to more sophisticated decision making in a wide variety of clinical problems.
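The mathematical step in the abstract is compact: for a cubic fit BMD(t) = c3*t^3 + c2*t^2 + c1*t + c0, the speed of bone change is BMD'(t), and that speed is extremal where BMD''(t) = 6*c3*t + 2*c2 = 0. A sketch with hypothetical coefficients (the paper's fitted equations are not reproduced in this record):

```python
# Time of maximum bone-loss speed from a cubic BMD(t) fit: the extremum of
# BMD'(t) occurs where BMD''(t) = 6*c3*t + 2*c2 = 0, i.e. t = -c2 / (3*c3).
def max_loss_time(c3: float, c2: float) -> float:
    """Time (months) at which the rate of BMD change is extremal."""
    return -c2 / (3.0 * c3)

# Hypothetical coefficients chosen so the extremum falls near 6 months,
# matching the range (5-7 months) reported across the four skeletal sites:
print(max_loss_time(c3=0.001, c2=-0.018), "months")
```

Only the two leading coefficients matter for locating the extremum, which is why the conclusion is robust across the four site-specific fits.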

  7. FURSMASA: a new approach to rapid scoring functions that uses a MD-averaged potential energy grid and a solvent-accessible surface area term with parameters GA fit to experimental data.

    Science.gov (United States)

    Pearlman, David A; Rao, B Govinda; Charifson, Paul

    2008-05-15

    We demonstrate a new approach to the development of scoring functions through the formulation and parameterization of a new function, which can be used both for rapidly ranking the binding of ligands to proteins and for estimating relative aqueous molecular solubilities. The intent of this work is to introduce a new paradigm for the creation of scoring functions, wherein we impose the following criteria upon the function: (1) simple; (2) intuitive; (3) requires no postparameterization tweaking; (4) can be applied (without reparameterization) to multiple target systems; and (5) can be rapidly evaluated for any potential ligand. Following these criteria, a new function, FURSMASA (function for rapid scoring using an MD-averaged grid and the accessible surface area), has been developed. Three novel features of the function include: (1) use of an MD-averaged potential energy grid for ligand-protein interactions, rather than a simple static grid; (2) inclusion of a term that depends on changes in the solvent-accessible surface area on an atomic (not molecular) basis; and (3) use of the recently derived predictive index (PI) target when optimizing the function, which focuses the function on its intended purpose of relative ranking. A genetic algorithm is used to optimize the function against test data sets that include ligands for the following proteins: IMPDH, p38, gyrase B, HIV-1, and TACE, as well as the Syracuse Research solubility database. We find that the function is predictive, and can simultaneously fit all the test data sets with cross-validated predictive indices ranging from 0.68 to 0.82. As a test of the ability of this function to predict binding for systems not in the training set, the resulting fitted FURSMASA function is then applied to 23 ligands of the COX-2 enzyme. Comparing the results for COX-2 against those obtained using a variety of well-known rapid scoring functions demonstrates that FURSMASA outperforms all of them in terms of the PI and

  8. Convergence of multiple ergodic averages

    OpenAIRE

    Host, Bernard

    2006-01-01

    These notes are based on a course for a general audience given at the Centro de Modelamiento Matemático of the University of Chile, in December 2004. We study the mean convergence of multiple ergodic averages, that is, averages of a product of functions taken at different times. We also describe the relations between this area of ergodic theory and some classical and some recent results in additive number theory.

  9. Comparing targeted and non-targeted high-resolution mass spectrometric approaches for assessing advanced oxidation reactor performance.

    Science.gov (United States)

    Parry, Emily; Young, Thomas M

    2016-11-01

    High resolution mass spectrometry (HR-MS) offers the opportunity to track large numbers of non-target analytes through water treatment processes, providing a more comprehensive view of reactor performance than targeted evaluation. Both approaches were used to evaluate the performance of a pilot scale advanced oxidation process (AOP) employing ultraviolet light and hydrogen peroxide (UV/H2O2) to treat municipal wastewater effluent. Twelve pharmaceuticals and personal care products were selected as target compounds and added to reactor influent. Target compound removal over a range of flow rates and hydrogen peroxide addition levels was assessed using a liquid chromatograph combined with a quadrupole time-of-flight mass spectrometer (LC-qTOF-MS). Target compound removals were used to determine hydroxyl radical concentrations and UV fluence under pilot scale conditions. The experiments were also analyzed using a nontarget approach, which identified "molecular features" in either reactor influent or effluent. Strong correlation (r = 0.94) was observed between target compound removals calculated using the targeted and non-targeted approaches across the range of reactor conditions tested. The two approaches also produced consistent rankings of the performance of the various reactor operating conditions, although the distribution of compound removal efficiencies was usually less favorable with the broader, nontarget approach. For example, in the UV only treatment 8.3% of target compounds and 2.2% of non-target compounds exhibited removals above 50%, while 100% of target compounds and 74% of non-target compounds exhibited removals above 50% in the best condition tested. These results suggest that HR-MS methods can provide more holistic evaluation of reactor performance, and may reduce biases caused by selection of a limited number of target compounds. HR-MS methods also offer insights into the composition of poorly removed compounds and the formation of
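The non-target bookkeeping described above reduces to simple arithmetic per molecular feature: removal = 1 - (effluent intensity / influent intensity), and reactor conditions are then compared by the fraction of features removed above a threshold. A sketch with synthetic intensities (feature names and values are hypothetical):

```python
# Sketch of per-feature removal accounting for a non-target HR-MS
# evaluation of reactor performance (synthetic feature intensities).
influent = {"feat_a": 1000.0, "feat_b": 400.0, "feat_c": 250.0}
effluent = {"feat_a": 100.0, "feat_b": 380.0, "feat_c": 50.0}

def removal(feature: str) -> float:
    """Fractional removal of one molecular feature across the reactor."""
    return 1.0 - effluent[feature] / influent[feature]

removals = {f: removal(f) for f in influent}
# Summary statistic of the kind quoted above: fraction of features with
# removal above 50% under this reactor condition.
frac_above_50 = sum(r > 0.5 for r in removals.values()) / len(removals)
print(removals, frac_above_50)
```

Running the same summary for each operating condition gives the condition ranking that the study found consistent between the target and non-target approaches.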

  10. Derivation of mass relations for composite W* and Z* from effective Lagrangian approach

    International Nuclear Information System (INIS)

    Yasue, Masaki; Oneda, Sadao.

    1985-04-01

    In an effective-Lagrangian model with gauge bosons (W, Z, γ) and their neighboring spin J=1 composites (W*, Z*), we find relations among their masses m_W, m_Z, m_W* and m_Z*: m_W m_W* = cos θ m_Z m_Z* (as a generalization of m_W = cos θ m_Z) and m_W^2 + m_W*^2 + tan^2 θ m_W0^2 = m_Z^2 + m_Z*^2, with m_W0 being the mass of W in the standard model, provided that the system respects the SU(2)_L x U(1)_Y symmetry. W* and Z* are taken as the lowest-lying excited states belonging to an SU(2)_L triplet in the symmetric limit. The existence of a W* coupling to the V-A current modifies the relation between G_F and m_W, and that of Z* generates a new interaction of the (J^em)^2 type as well as the deviation of sin θ_W observed at low energies from the mixing angle sin θ in neutral-current interactions. (author)

  11. Approaches towards the automated interpretation and prediction of electrospray tandem mass spectra of non-peptidic combinatorial compounds.

    Science.gov (United States)

    Klagkou, Katerina; Pullen, Frank; Harrison, Mark; Organ, Andy; Firth, Alistair; Langley, G John

    2003-01-01

    Combinatorial chemistry is widely used within the pharmaceutical industry as a means of rapid identification of potential drugs. With the growth of combinatorial libraries, mass spectrometry (MS) became the key analytical technique because of its speed of analysis, sensitivity, accuracy and ability to be coupled with other analytical techniques. In the majority of cases, electrospray mass spectrometry (ES-MS) has become the default ionisation technique. However, due to the absence of fragment ions in the resulting spectra, tandem mass spectrometry (MS/MS) is required to provide structural information for the identification of an unknown analyte. This work discusses the first steps of an investigation into the fragmentation pathways taking place in electrospray tandem mass spectrometry. The ultimate goal for this project is to set general fragmentation rules for non-peptidic, pharmaceutical, combinatorial compounds. As an aid, an artificial intelligence (AI) software package is used to facilitate interpretation of the spectra. This initial study has focused on determining the fragmentation rules for some classes of compound types that fit the remit as outlined above. Based on studies carried out on several combinatorial libraries of these compounds, it was established that different classes of drug molecules follow unique fragmentation pathways. In addition to these general observations, the specific ionisation processes and the fragmentation pathways involved in the electrospray mass spectra of these systems were explored. The ultimate goal will be to incorporate our findings into the computer program and allow identification of an unknown, non-peptidic compound following insertion of its ES-MS/MS spectrum into the AI package. The work herein demonstrates the potential benefit of such an approach in addressing the issue of high-throughput, automated MS/MS data interpretation. Copyright 2003 John Wiley & Sons, Ltd.

  12. Biogeochemical mass balances in a turbid tropical reservoir. Field data and modelling approach

    Science.gov (United States)

    Phuong Doan, Thuy Kim; Némery, Julien; Gratiot, Nicolas; Schmid, Martin

    2014-05-01

    The turbid tropical Cointzio reservoir, located in the Trans Mexican Volcanic Belt (TMVB), behaves as a warm monomictic water body (area = 6 km2, capacity 66 Mm3, residence time ~ 1 year). It is strategic for the drinking water supply of the city of Morelia, capital of the state of Michoacán, and for downstream irrigation during the dry season. This reservoir is a perfect example of a human-impacted system since its watershed is mainly composed of degraded volcanic soils and is subjected to high erosion processes and agricultural losses. The reservoir is threatened by sediment accumulation and nutrients originating from untreated waters in the upstream watershed. The high content of very fine clay particles and the lack of water treatment plants lead to serious episodes of eutrophication (up to 70 μg chl. a L-1) and high levels of turbidity. From field measurements (Secchi depth, water vertical profiles, reservoir inflow and outflow) we determined suspended sediment (SS), carbon (C), nitrogen (N) and phosphorus (P) mass balances. Watershed SS yields were estimated at 35 t km-2 y-1, of which 89-92% were trapped in the Cointzio reservoir. As a consequence the reservoir has already lost 25% of its initial storage capacity since its construction in 1940. Nutrient mass balances showed that 50% of incoming P was retained, mainly by sedimentation, while 46% of incoming N was mainly eliminated through denitrification. Removal of 30% of incoming C was also observed, both by sedimentation and through gas emission. To complete the field data analyses we examined the ability of vertical one dimensional (1DV) numerical models (Aquasim biogeochemical model coupled with a k-ɛ mixing model) to reproduce the main biogeochemical cycles in the Cointzio reservoir. The model can describe all the mineralization processes both in the water column and in the sediment. The values of the entire mass balance of nutrients and of the mineralization rates (denitrification and aerobic benthic mineralization) calculated from the model
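The retention figures above come from simple mass-balance arithmetic: retention is the fraction of the incoming annual load that does not leave through the outflow. A minimal sketch (the loads below are hypothetical, chosen only to reproduce a retention value in the reported range):

```python
# Mass-balance retention: fraction of the annual incoming load retained
# in the reservoir (hypothetical loads, arbitrary consistent units).
def retention_pct(load_in: float, load_out: float) -> float:
    """Percent of the incoming load retained: (in - out) / in * 100."""
    return 100.0 * (load_in - load_out) / load_in

# Hypothetical annual P loads giving ~50% retention, as reported above:
print(f"{retention_pct(10.0, 5.0):.0f}% retained")
```

The same calculation applied to SS, C, N and P loads, each integrated over the hydrological year, yields the element-by-element balances the study reports.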

  13. Micropatterned nanostructures: a bioengineered approach to mass-produce functional myocardial grafts

    Science.gov (United States)

    Serpooshan, Vahid; Mahmoudi, Morteza

    2015-02-01

    Cell-based therapies are a recently established path for treating a wide range of human disease. Tissue engineering of contractile heart muscle for replacement therapy is among the most exciting and important of these efforts. However, current in vitro techniques of cultivating functional mature cardiac grafts have only been moderately successful due to the poor capability of traditional two-dimensional cell culture systems to recapitulate necessary in vivo conditions. In this issue, Kiefer et al [1] introduce a laser-patterned nanostructured substrate (Al/Al2O3 nanowires) for efficient maintenance of oriented human cardiomyocytes, with great potential to open new roads to mass-production of contractile myocardial grafts for cardiovascular tissue engineering.

  14. Integrated genomic approaches implicate osteoglycin (Ogn) in the regulation of left ventricular mass

    Czech Academy of Sciences Publication Activity Database

    Petretto, E.; Sarwar, R.; Grieve, I.; Lu, H.; Kumaran, M. K.; Muckett, P.J.; Mangion, J.; Schroen, B.; Benson, M.; Punjabi, P.P.; Prasad, S.K.; Pennell, D. J.; Kiesewetter, Ch.; Tasheva, E. S.; Corpuz, L. M.; Webb, M.D.; Conrad, G.W.; Kurtz, T. W.; Křen, Vladimír; Fischer, J.; Hubner, N.; Pinto, Y. M.; Pravenec, Michal; Aitman, T. J.; Cook, S.A.

    2008-01-01

    Roč. 40, č. 5 (2008), s. 546-552 ISSN 1061-4036 R&D Projects: GA MŠk(CZ) 1P05ME791; GA MŠk(CZ) 1M0520; GA ČR(CZ) GA301/06/0028; GA ČR(CZ) GA301/08/0166 Grant - others:HHMI(US) 55005624; -(XE) LSHG-CT-2005-019015 Institutional research plan: CEZ:AV0Z50110509 Source of funding: N - neverejné zdroje ; R - rámcový projekt EK Keywords : left ventricle mass * osteoglycin * genetical genomics Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 30.259, year: 2008

  15. Bone mass and mineral metabolism alterations in adult celiac disease: pathophysiology and clinical approach.

    Science.gov (United States)

    Di Stefano, Michele; Mengoli, Caterina; Bergonzi, Manuela; Corazza, Gino Roberto

    2013-11-22

    Osteoporosis affects many patients with celiac disease (CD), representing the consequence of calcium malabsorption and persistent activation of mucosal inflammation. A slight increase of fracture risk is evident in this condition, particularly in those with overt malabsorption and in postmenopausal state. The adoption of a correct gluten-free diet (GFD) improves bone derangement, but is not able to normalize bone mass in all the patients. Biomarkers effective in the prediction of bone response to gluten-free diet are not yet available and the indications of guidelines are still imperfect and debated. In this review, the pathophysiology of bone loss is correlated to clinical aspects, defining an alternative proposal of management for this condition.

  16. Bone Mass and Mineral Metabolism Alterations in Adult Celiac Disease: Pathophysiology and Clinical Approach

    Directory of Open Access Journals (Sweden)

    Michele Di Stefano

    2013-11-01

    Full Text Available Osteoporosis affects many patients with celiac disease (CD), representing the consequence of calcium malabsorption and persistent activation of mucosal inflammation. A slight increase of fracture risk is evident in this condition, particularly in those with overt malabsorption and in postmenopausal state. The adoption of a correct gluten-free diet (GFD) improves bone derangement, but is not able to normalize bone mass in all the patients. Biomarkers effective in the prediction of bone response to gluten-free diet are not yet available and the indications of guidelines are still imperfect and debated. In this review, the pathophysiology of bone loss is correlated to clinical aspects, defining an alternative proposal of management for this condition.

  17. Current mass spectrometry approaches and challenges for the bioanalysis of traditional Chinese medicines.

    Science.gov (United States)

    Dong, Xin; Wang, Rui; Zhou, Xu; Li, Ping; Yang, Hua

    2016-07-15

    Traditional Chinese medicines (TCMs) are gaining more and more attention all over the world. The focus of TCM research has gradually shifted from purely chemical studies to combined studies of chemistry and the life sciences. However, obtaining precise information on the processes TCMs undergo in vivo or in vitro remains a bottleneck in their bioanalysis, owing to the complexity of their chemical composition. This paper reviews recent analytical methods, especially mass spectrometry, used in the bioanalysis of TCMs, together with data-processing techniques for the qualitative and quantitative analysis of TCM metabolites. The difficulties encountered in analyzing biological samples containing TCMs, and solutions to these problems, are also discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. An Interprofessional Approach to Continuing Education With Mass Casualty Simulation: Planning and Execution.

    Science.gov (United States)

    Saber, Deborah A; Strout, Kelley; Caruso, Lisa Swanson; Ingwell-Spolan, Charlene; Koplovsky, Aiden

    2017-10-01

    Many natural and man-made disasters require the assistance from teams of health care professionals. Knowing that continuing education about disaster simulation training is essential to nursing students, nurses, and emergency first responders (e.g., emergency medical technicians, firefighters, police officers), a university in the northeastern United States planned and implemented an interprofessional mass casualty incident (MCI) disaster simulation using the Project Management Body of Knowledge (PMBOK) management framework. The school of nursing and University Volunteer Ambulance Corps (UVAC) worked together to simulate a bus crash with disaster victim actors to provide continued education for community first responders and train nursing students on the MCI process. This article explains the simulation activity, planning process, and achieved outcomes. J Contin Educ Nurs. 2017;48(10):447-453. Copyright 2017, SLACK Incorporated.

  19. Honeybee Venom Proteome Profile of Queens and Winter Bees as Determined by a Mass Spectrometric Approach

    Science.gov (United States)

    Danneels, Ellen L.; Van Vaerenbergh, Matthias; Debyser, Griet; Devreese, Bart; de Graaf, Dirk C.

    2015-01-01

    Venoms of invertebrates contain an enormous diversity of proteins, peptides, and other classes of substances. Insect venoms are characterized by a large interspecific variation resulting in extended lists of venom compounds. The venom composition of several hymenopterans also shows different intraspecific variation. For instance, venom from different honeybee castes, more specifically queens and workers, shows quantitative and qualitative variation, while the environment, like seasonal changes, also proves to be an important factor. The present study aimed at an in-depth analysis of the intraspecific variation in the honeybee venom proteome. In summer workers, the recent list of venom proteins resulted from merging combinatorial peptide ligand library sample pretreatment and targeted tandem mass spectrometry realized with a Fourier transform ion cyclotron resonance mass spectrometer (FT-ICR MS/MS). Now, the same technique was used to determine the venom proteome of queens and winter bees, enabling us to compare it with that of summer bees. In total, 34 putative venom toxins were found, of which two were never described in honeybee venoms before. Venom from winter workers did not contain toxins that were not present in queens or summer workers, while winter worker venom lacked the allergen Api m 12, also known as vitellogenin. Venom from queen bees, on the other hand, was lacking six of the 34 venom toxins compared to worker bees, while it contained two new venom toxins, in particular serine proteinase stubble and antithrombin-III. Although people are hardly stung by honeybees during winter or by queen bees, these newly identified toxins should be taken into account in the characterization of a putative allergic response against Apis mellifera stings. PMID:26529016

  20. Honeybee venom proteome profile of queens and winter bees as determined by a mass spectrometric approach.

    Science.gov (United States)

    Danneels, Ellen L; Van Vaerenbergh, Matthias; Debyser, Griet; Devreese, Bart; de Graaf, Dirk C

    2015-10-30

    Venoms of invertebrates contain an enormous diversity of proteins, peptides, and other classes of substances. Insect venoms are characterized by a large interspecific variation resulting in extended lists of venom compounds. The venom composition of several hymenopterans also shows different intraspecific variation. For instance, venom from different honeybee castes, more specifically queens and workers, shows quantitative and qualitative variation, while the environment, like seasonal changes, also proves to be an important factor. The present study aimed at an in-depth analysis of the intraspecific variation in the honeybee venom proteome. In summer workers, the recent list of venom proteins resulted from merging combinatorial peptide ligand library sample pretreatment and targeted tandem mass spectrometry realized with a Fourier transform ion cyclotron resonance mass spectrometer (FT-ICR MS/MS). Now, the same technique was used to determine the venom proteome of queens and winter bees, enabling us to compare it with that of summer bees. In total, 34 putative venom toxins were found, of which two were never described in honeybee venoms before. Venom from winter workers did not contain toxins that were not present in queens or summer workers, while winter worker venom lacked the allergen Api m 12, also known as vitellogenin. Venom from queen bees, on the other hand, was lacking six of the 34 venom toxins compared to worker bees, while it contained two new venom toxins, in particular serine proteinase stubble and antithrombin-III. Although people are hardly stung by honeybees during winter or by queen bees, these newly identified toxins should be taken into account in the characterization of a putative allergic response against Apis mellifera stings.

  1. The influence of parent's body mass index on peer selection: an experimental approach using virtual reality.

    Science.gov (United States)

    Martarelli, Corinna S; Borter, Natalie; Bryjova, Jana; Mast, Fred W; Munsch, Simone

    2015-11-30

    Relatively little is known about the influence of psychosocial factors, such as familial role modeling and social networks, on the development and maintenance of childhood obesity. We investigated peer selection using an immersive virtual reality environment. In a virtual schoolyard, children were confronted with normal-weight and overweight avatars that were either eating or playing. Fifty-seven children aged 7-13 participated. Interpersonal distance to the avatars, the child's BMI, self-perception, eating behavior and parental BMI were assessed. Parental BMI was the strongest predictor of the children's minimal distance to the avatars. Specifically, a higher mother's BMI was associated with greater interpersonal distance, and children approached closer to overweight eating avatars. A higher father's BMI was associated with a lower interpersonal distance to the avatars; these children approached normal-weight playing and overweight eating avatar peers closest. The importance of parental BMI for the child's social approach/avoidance behavior can be explained through social modeling mechanisms. Differential effects of paternal and maternal BMI might be due to gender-specific beauty ideals. Interventions promoting social interaction with peer groups could foster weight stabilization or weight loss in children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Vitroprocines, new antibiotics against Acinetobacter baumannii, discovered from marine Vibrio sp. QWI-06 using mass-spectrometry-based metabolomics approach

    Science.gov (United States)

    Liaw, Chih-Chuang; Chen, Pei-Chin; Shih, Chao-Jen; Tseng, Sung-Pin; Lai, Ying-Mi; Hsu, Chi-Hsin; Dorrestein, Pieter C.; Yang, Yu-Liang

    2015-08-01

    A robust and convenient research strategy integrating state-of-the-art analytical techniques is needed to efficiently discover novel compounds from marine microbial resources. In this study, we identified a series of amino-polyketide derivatives, vitroprocines A-J, from the marine bacterium Vibrio sp. QWI-06 by an integrated approach using imaging mass spectrometry and molecular networking, as well as conventional bioactivity-guided fractionation and isolation. The structure-activity relationship of vitroprocines against Acinetobacter baumannii is proposed. In addition, feeding experiments with 13C-labeled precursors indicated that a pyridoxal 5′-phosphate-dependent mechanism is involved in the biosynthesis of vitroprocines. Elucidation of amino-polyketide derivatives from a species of marine bacteria for the first time demonstrates the potential of this integrated metabolomics approach to uncover marine bacterial biodiversity.

  3. A dynamic programming approach for the alignment of signal peaks in multiple gas chromatography-mass spectrometry experiments

    Directory of Open Access Journals (Sweden)

    McConville Malcolm J

    2007-10-01

    Full Text Available Abstract Background Gas chromatography-mass spectrometry (GC-MS) is a robust platform for the profiling of certain classes of small molecules in biological samples. When multiple samples are profiled, including replicates of the same sample and/or different sample states, one needs to account for retention time drifts between experiments. This can be achieved either by the alignment of chromatographic profiles prior to peak detection, or by matching signal peaks after they have been extracted from chromatogram data matrices. Automated retention time correction is particularly important in non-targeted profiling studies. Results A new approach for matching signal peaks based on dynamic programming is presented. The proposed approach relies on both peak retention times and mass spectra. The alignment of more than two peak lists involves three steps: (1) all possible pairs of peak lists are aligned, and the similarity of each pair of peak lists is estimated; (2) a guide tree is built based on the similarity between the peak lists; (3) peak lists are progressively aligned, starting with the two most similar peak lists and following the guide tree until all peak lists are exhausted. When two or more experiments are performed on different sample states, each consisting of multiple replicates, peak lists within each set of replicate experiments are aligned first (within-state alignment), and subsequently the resulting alignments are aligned themselves (between-state alignment). When more than two sets of replicate experiments are present, the between-state alignment also employs the guide tree. We demonstrate the usefulness of this approach on GC-MS metabolic profiling experiments acquired on wild-type and mutant Leishmania mexicana parasites. Conclusion We propose a progressive method to match signal peaks across multiple GC-MS experiments based on dynamic programming. A sensitive peak similarity function is proposed to balance peak retention time and peak
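
The three-step procedure described above can be sketched in miniature. The following Python fragment is an illustrative reconstruction only: the function names, the retention-time-only similarity score, and the greedy matcher are our own simplifications, not the authors' algorithm, which also weighs mass-spectral similarity.

```python
from itertools import combinations

RT_TOL = 2.0  # retention-time matching tolerance (hypothetical units)

def match_peaks(a, b, tol=RT_TOL):
    """Greedily pair peaks from two retention-time lists within a tolerance."""
    pairs, used = [], set()
    for ra in a:
        best, best_d = None, tol
        for idx, rb in enumerate(b):
            d = abs(ra - rb)
            if idx not in used and d <= best_d:
                best, best_d = idx, d
        if best is not None:
            used.add(best)
            pairs.append((ra, b[best]))
    return pairs

def similarity(a, b):
    """Step (1): similarity = fraction of peaks matched between two lists."""
    return len(match_peaks(a, b)) / max(len(a), len(b))

def merge(a, b):
    """Merge two peak lists: matched peaks are averaged, the rest kept."""
    pairs = match_peaks(a, b)
    matched_a = {ra for ra, _ in pairs}
    matched_b = {rb for _, rb in pairs}
    merged = [(ra + rb) / 2 for ra, rb in pairs]
    merged += [r for r in a if r not in matched_a]
    merged += [r for r in b if r not in matched_b]
    return sorted(merged)

def progressive_align(peak_lists):
    """Steps (2)-(3): repeatedly merge the two most similar lists,
    which traverses an implicit guide tree."""
    lists = [sorted(p) for p in peak_lists]
    while len(lists) > 1:
        i, j = max(combinations(range(len(lists)), 2),
                   key=lambda ij: similarity(lists[ij[0]], lists[ij[1]]))
        merged = merge(lists[i], lists[j])
        lists = [l for k, l in enumerate(lists) if k not in (i, j)] + [merged]
    return lists[0]
```

Replicate runs of one sample state would be passed to `progressive_align` first (within-state), and the resulting consensus lists aligned again (between-state), mirroring the two-level scheme in the abstract.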

  4. Body-mass or sex-biased tick parasitism in roe deer (Capreolus capreolus)? A GAMLSS approach.

    Science.gov (United States)

    Kiffner, C; Lödige, C; Alings, M; Vor, T; Rühe, F

    2011-03-01

    Macroparasites feeding on wildlife hosts follow skewed distributions for which basic statistical approaches are of limited use. To predict Ixodes spp. tick burden on roe deer, we applied Generalized Additive Models for Location, Scale and Shape (GAMLSS) which allow incorporating a variable dispersion. We analysed tick burden of 78 roe deer, sampled in a forest region of Germany over a period of 20 months. Assuming a negative binomial error distribution and controlling for ambient temperature, we analysed whether host sex and body mass affected individual tick burdens. Models for larval and nymphal tick burden included host sex, with male hosts being more heavily infested than female ones. However, the influence of host sex on immature tick burden was associated with wide standard errors (nymphs) or the factor was marginally significant (larvae). Adult tick burden was positively correlated with host body mass. Thus, controlled for host body mass and ambient temperature, there is weak support for sex-biased parasitism in this system. Compared with models which assume linear relationships, GAMLSS provided a better fit. Adding a variable dispersion term improved only one of the four models. Yet, the potential of modelling dispersion as a function of variables appears promising for larger datasets. © 2010 The Authors. Medical and Veterinary Entomology © 2010 The Royal Entomological Society.

  5. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
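
The distinction between the two averages is easy to make concrete. In this numerical sketch (our own construction, not the authors' data), each realization is a list of grain velocities; the phasic average weights realizations equally, while the mass-weighted average weights each realization by its grain count n:

```python
def phasic_average(realizations):
    """Ensemble (phasic) average: each realization contributes equally."""
    per_realization = [sum(u) / len(u) for u in realizations]
    return sum(per_realization) / len(per_realization)

def mass_weighted_average(realizations):
    """Mass-weighted average: each realization weighted by its grain count n_r."""
    n_total = sum(len(u) for u in realizations)
    return sum(sum(u) for u in realizations) / n_total

# Constant n across realizations: the two definitions coincide.
same_n = [[1.0, 2.0], [3.0, 4.0]]
# Varying n (grain count comparable to the control volume): they diverge.
diff_n = [[1.0, 2.0], [3.0, 4.0, 5.0]]
```

With `same_n` both averages give 2.5; with `diff_n` the phasic average is 2.75 while the mass-weighted average is 3.0, which is exactly the discrepancy the abstract attributes to a non-constant n over multiple realizations.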

  6. Analytical Approach for Estimating Preliminary Mass of ARES I Crew Launch Vehicle Upper Stage Structural Components

    Science.gov (United States)

    Aggarwal, Pravin

    2007-01-01

    electrical power functions to other Elements of the CLV, is included as secondary structure. The MSFC has overall responsibility for the integrated US element as well as the structural design and thermal control of the fuel tanks, intertank, interstage, avionics, main propulsion system, and Reaction Control System (RCS) for both the Upper Stage and the First Stage. MSFC's Spacecraft and Vehicle Department, Structural and Analysis Design Division is developing a set of predicted masses for these elements. This paper details the methodology, criteria and tools used for the preliminary mass predictions of the upper stage structural assembly components. In general, the weight of the cylindrical barrel sections is estimated using the commercial code HyperSizer, whereas the weight of the domes is developed using classical solutions. HyperSizer is software that performs automated structural analysis and sizing optimization based on aerospace methods for strength, stability, and stiffness. Analysis methods range from closed-form, traditional hand calculations repeated every day in industry to more advanced panel buckling algorithms. Margin-of-safety reporting for every potential failure provides the engineer with powerful insight into the structural problem. Optimization capabilities include finding minimum-weight panel or beam concepts, material selections, cross-sectional dimensions, thicknesses, and lay-ups from a library of 40 different stiffened and sandwich designs and a database of composite, metallic, honeycomb, and foam materials. Multiple concepts (orthogrid, isogrid, and skin-stiffener) were run for multiple loading combinations of the ascent design load, with and without tank pressure, as well as the proof pressure condition. Subsequently, the selected optimized concept obtained from the HyperSizer runs was translated into a computer-aided design (CAD) model to account for wall thickness tolerances, weld lands, etc., for developing the most probable weight of the components. The flow diagram

  7. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  8. Comparison of averaging techniques for the calculation of the 'European average exposure indicator' for particulate matter.

    Science.gov (United States)

    Brown, Richard J C; Woods, Peter T

    2012-01-01

    A comparison of various averaging techniques to calculate the Average Exposure Indicator (AEI) specified in European Directive 2008/50/EC for particulate matter in ambient air has been performed. This was done for data from seventeen sites around the UK for which PM(10) mass concentration data is available for the years 1998-2000 and 2008-2010 inclusive. The results have shown that use of the geometric mean produces significantly lower AEI values within the required three year averaging periods and slightly lower changes in the AEI value between the three year averaging periods than the use of the arithmetic mean. The use of weighted means in the calculation, using the data capture at each site as the weighting parameter, has also been tested and this is proposed as a useful way of taking account of the confidence of each data set.
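
The effect of the averaging choice can be illustrated with a small sketch. The site values and capture fractions below are invented for illustration, and the Directive's full AEI calculation additionally averages over years in a prescribed order; this shows only the site-level comparison:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean log; requires strictly positive concentrations
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def capture_weighted_mean(xs, capture):
    """Weight each site by its data-capture fraction, as the paper proposes."""
    return sum(x * w for x, w in zip(xs, capture)) / sum(capture)

site_means = [22.0, 18.5, 25.3, 30.1]  # hypothetical annual PM10 means, ug/m3
capture = [0.98, 0.75, 0.90, 0.85]     # hypothetical data-capture fractions

aei_arith = arithmetic_mean(site_means)
aei_geom = geometric_mean(site_means)
aei_capt = capture_weighted_mean(site_means, capture)
```

By the AM-GM inequality the geometric mean can never exceed the arithmetic mean, which is consistent with the lower AEI values reported in the abstract.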

  9. Mass spectrometry is a new approach to diagnosing adenomyosis and cancer of the corpus uteri

    Directory of Open Access Journals (Sweden)

    A. V. Sorokina

    2011-01-01

    Full Text Available Sera from 60 apparently healthy women (mean age 40 years; a control group), 40 patients with a verified diagnosis of adenomyosis (mean age 41 years) and 42 patients with uterine corpus cancer (UCC; mean age 58 years) were fractionated on magnetic beads with a weak cation-exchange surface, followed by examination of the obtained fractions by time-of-flight mass spectrometry (MS) with matrix-assisted laser desorption/ionization. MS data analysis using classification algorithms, such as a genetic algorithm and a learning neural network, made it possible to construct mathematical models that were able to differentiate the MS profiles of the above sample groups with high specificity and high sensitivity. The best values of the specificity and sensitivity of the classification models adenomyosis-control and UCC-control were 86.2, 93.8, 90.5, and 90.5 %, respectively. Analysis of the statistical diagrams of the peak areas between different sample groups identified 3 MS profile peaks for adenomyosis and 3 peaks for UCC.

  10. Postmortem interval estimation: a novel approach utilizing gas chromatography/mass spectrometry-based biochemical profiling.

    Science.gov (United States)

    Kaszynski, Richard H; Nishiumi, Shin; Azuma, Takeshi; Yoshida, Masaru; Kondo, Takeshi; Takahashi, Motonori; Asano, Migiwa; Ueno, Yasuhiro

    2016-05-01

    While the molecular mechanisms underlying postmortem change have been exhaustively investigated, the establishment of an objective and reliable means for estimating postmortem interval (PMI) remains an elusive feat. In the present study, we exploit low molecular weight metabolites to estimate postmortem interval in mice. After sacrifice, serum and muscle samples were procured from C57BL/6J mice (n = 52) at seven predetermined postmortem intervals (0, 1, 3, 6, 12, 24, and 48 h). After extraction and isolation, low molecular weight metabolites were measured via gas chromatography/mass spectrometry (GC/MS) and examined via semi-quantification studies. Then, PMI prediction models were generated for each of the 175 and 163 metabolites identified in muscle and serum, respectively, using a non-linear least squares curve fitting program. A PMI estimation panel for muscle and serum was then erected which consisted of 17 (9.7%) and 14 (8.5%) of the best PMI biomarkers identified in muscle and serum profiles demonstrating statistically significant correlations between metabolite quantity and PMI. Using a single-blinded assessment, we carried out validation studies on the PMI estimation panels. Mean ± standard deviation for accuracy of muscle and serum PMI prediction panels was -0.27 ± 2.88 and -0.89 ± 2.31 h, respectively. Ultimately, these studies elucidate the utility of metabolomic profiling in PMI estimation and pave the path toward biochemical profiling studies involving human samples.
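
The curve-fitting step behind the PMI panel can be sketched as follows. The study used a non-linear least squares program over many metabolites; here, as a simplified stand-in, a single hypothetical marker following m(t) = a·e^(bt) is fitted by linear least squares on log-intensities and the fitted curve is then inverted to predict PMI:

```python
import math

def fit_exponential(times, quantities):
    """Fit m(t) = a * exp(b * t) via least squares on log(m)."""
    n = len(times)
    logs = [math.log(q) for q in quantities]
    tbar = sum(times) / n
    ybar = sum(logs) / n
    b = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
         / sum((t - tbar) ** 2 for t in times))
    a = math.exp(ybar - b * tbar)
    return a, b

def predict_pmi(a, b, measured):
    """Invert the fitted curve: estimate PMI from a measured marker level."""
    return math.log(measured / a) / b

# Sampling times (h) follow the study design; the marker values are synthetic.
times = [0, 1, 3, 6, 12, 24, 48]
quantities = [2.0 * math.exp(0.05 * t) for t in times]
a, b = fit_exponential(times, quantities)
```

A real panel would combine many such per-metabolite predictions (17 for muscle, 14 for serum in the study) rather than trusting a single marker.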

  11. A Stable-Isotope Mass Spectrometry-Based Metabolic Footprinting Approach to Analyze Exudates from Phytoplankton

    Directory of Open Access Journals (Sweden)

    Mark R. Viant

    2013-10-01

    Full Text Available Phytoplankton exudates play an important role in pelagic ecology and the biogeochemical cycles of elements. Exuded compounds fuel the microbial food web and often encompass bioactive secondary metabolites like sex pheromones, allelochemicals, antibiotics, or feeding attractants that mediate biological interactions. Despite this importance, little is known about the bioactive compounds present in phytoplankton exudates. We report a stable-isotope metabolic footprinting method to characterise exudates from aquatic autotrophs. Exudates from 13C-enriched algae were concentrated by solid-phase extraction and analysed by high-resolution Fourier transform ion cyclotron resonance mass spectrometry. We used the harmful-algal-bloom-forming dinoflagellate Alexandrium tamarense to demonstrate the method. An algorithm was developed to automatically pinpoint just those metabolites with highly 13C-enriched isotope signatures, allowing us to distinguish algal exudates from the complex seawater background. The stable-isotope pattern (SIP) of the detected metabolites then allowed for more accurate assignment to an empirical formula, a critical first step in their identification. This automated workflow provides an effective way to explore the chemical nature of the solutes exuded from phytoplankton cells and will facilitate the discovery of novel dissolved bioactive compounds.

  12. A hybrid approach to protein differential expression in mass spectrometry-based proteomics

    KAUST Repository

    Wang, X.

    2012-04-19

    MOTIVATION: Quantitative mass spectrometry-based proteomics involves statistical inference on protein abundance, based on the intensities of each protein's associated spectral peaks. However, typical MS-based proteomics datasets have substantial proportions of missing observations, due at least in part to censoring of low intensities. This complicates intensity-based differential expression analysis. RESULTS: We outline a statistical method for protein differential expression, based on a simple Binomial likelihood. By modeling peak intensities as binary, in terms of 'presence/absence', we enable the selection of proteins not typically amenable to quantitative analysis; e.g. 'one-state' proteins that are present in one condition but absent in another. In addition, we present an analysis protocol that combines quantitative and presence/absence analysis of a given dataset in a principled way, resulting in a single list of selected proteins with a single associated false discovery rate. AVAILABILITY: All R code available here: http://www.stat.tamu.edu/~adabney/share/xuan_code.zip.

  13. Normalization Approaches for Removing Systematic Biases Associated with Mass Spectrometry and Label-Free Proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Callister, Stephen J.; Barry, Richard C.; Adkins, Joshua N.; Johnson, Ethan T.; Qian, Weijun; Webb-Robertson, Bobbie-Jo M.; Smith, Richard D.; Lipton, Mary S.

    2006-02-01

    Central tendency, linear regression, locally weighted regression, and quantile techniques were investigated for normalization of peptide abundance measurements obtained from high-throughput liquid chromatography-Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR MS). Arbitrary abundances of peptides were obtained from three sample sets, including a standard protein sample, two Deinococcus radiodurans samples taken from different growth phases, and two mouse striatum samples from control and methamphetamine-stressed mice (strain C57BL/6). The selected normalization techniques were evaluated in both the absence and presence of biological variability by estimating extraneous variability prior to and following normalization. Prior to normalization, replicate runs from each sample set were observed to be statistically different, while following normalization replicate runs were no longer statistically different. Although all techniques reduced systematic bias, assigned ranks among the techniques revealed significant trends. For most LC-FTICR MS analyses, linear regression normalization ranked either first or second among the four techniques, suggesting that this technique was more generally suitable for reducing systematic biases.
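
Two of the four techniques can be sketched compactly. In this illustration (synthetic log-abundance vectors, not LC-FTICR MS data), central-tendency normalization shifts a run's median onto a reference run, and linear-regression normalization fits the run against the reference and inverts the fitted systematic bias:

```python
def median(xs):
    s = sorted(xs)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

def central_tendency_normalize(run, reference):
    """Shift the run so its median equals the reference median."""
    shift = median(reference) - median(run)
    return [x + shift for x in run]

def linear_regression_normalize(run, reference):
    """Fit run = intercept + slope * reference, then invert the fit."""
    n = len(run)
    xbar = sum(reference) / n
    ybar = sum(run) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(reference, run))
             / sum((x - xbar) ** 2 for x in reference))
    intercept = ybar - slope * xbar
    return [(y - intercept) / slope for y in run]

reference = [1.0, 2.0, 3.0, 4.0]
biased_run = [3.0, 5.0, 7.0, 9.0]  # systematic bias: 2 * reference + 1
```

For a purely linear bias like `biased_run`, regression normalization recovers the reference exactly, while the median shift removes only the offset; this is the kind of difference the paper's rank comparison measures.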

  14. Identification of okadaic acid-induced phosphorylation events by a mass spectrometry approach

    International Nuclear Information System (INIS)

    Hill, Jennifer J.; Callaghan, Deborah A.; Ding Wen; Kelly, John F.; Chakravarthy, Balu R.

    2006-01-01

    Okadaic acid (OA) is a widely used small-molecule phosphatase inhibitor that is thought to selectively inhibit protein phosphatase 2A (PP2A). Multiple studies have demonstrated that PP2A activity is compromised in the brains of Alzheimer's disease patients. Thus, we set out to determine the changes in phosphorylation that occur upon OA treatment of neuronal cells. Utilizing isotope-coded affinity tags and mass spectrometry analysis, we determined the relative abundance of proteins in a phosphoprotein-enriched fraction from control and OA-treated primary cortical neurons. We identified many proteins whose phosphorylation state is regulated by OA, including glycogen synthase kinase 3β, collapsin-response mediator proteins (DRP-2, DPYSL-5, and CRMP-4), and the B subunit of PP2A itself. Most interestingly, we found that complexin 2, an important regulator of neurotransmitter release and synaptic plasticity, is phosphorylated at serine 93 upon OA treatment of neurons. This is the first report of a phosphorylation site on complexin 2.

  15. Elliptic Cylinder Airborne Sampling and Geostatistical Mass Balance Approach for Quantifying Local Greenhouse Gas Emissions.

    Science.gov (United States)

    Tadić, Jovan M; Michalak, Anna M; Iraci, Laura; Ilić, Velibor; Biraud, Sébastien C; Feldman, Daniel R; Bui, Thaopaul; Johnson, Matthew S; Loewenstein, Max; Jeong, Seongeun; Fischer, Marc L; Yates, Emma L; Ryoo, Ju-Mee

    2017-09-05

    In this study, we explore observational, experimental, methodological, and practical aspects of the flux quantification of greenhouse gases from local point sources by using in situ airborne observations, and suggest a series of conceptual changes to improve flux estimates. We address the major sources of uncertainty reported in previous studies by modifying (1) the shape of the typical flight path, (2) the modeling of covariance and anisotropy, and (3) the type of interpolation tools used. We show that a cylindrical flight profile offers considerable advantages compared to traditional profiles collected as curtains, although this new approach brings with it the need for a more comprehensive subsequent analysis. The proposed flight pattern design does not require prior knowledge of wind direction and allows for the derivation of an ad hoc empirical correction factor to partially alleviate errors resulting from interpolation and measurement inaccuracies. The modified approach is applied to a use-case for quantifying CH4 emission from an oil field south of San Ardo, CA, and compared to a bottom-up CH4 emission estimate.

  16. Averaging of multivalued differential equations

    Directory of Open Access Journals (Sweden)

    G. Grammel

    2003-04-01

    Full Text Available Nonlinear multivalued differential equations with slow and fast subsystems are considered. Under transitivity conditions on the fast subsystem, the slow subsystem can be approximated by an averaged multivalued differential equation. The approximation in the Hausdorff sense is of order O(ϵ^(1/3)) as ϵ→0.
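    The slow-fast structure behind this result can be written as a singularly perturbed differential inclusion (notation assumed here, following the standard averaging setup rather than copied from the paper):

```latex
\dot{x}(t) \in F\bigl(x(t), y(t)\bigr), \qquad
\varepsilon\,\dot{y}(t) \in G\bigl(x(t), y(t)\bigr),
```

    where $x$ is the slow and $y$ the fast state. Under the transitivity conditions on the fast subsystem, solutions of the slow subsystem are approximated by those of an averaged inclusion $\dot{x} \in \bar{F}(x)$, with Hausdorff distance of order $O(\varepsilon^{1/3})$ as $\varepsilon \to 0$.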

  17. Fuzzy Weighted Average: Analytical Solution

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.

    2009-01-01

    An algorithm is presented for the computation of analytical expressions for the extremal values of the α-cuts of the fuzzy weighted average, for triangular or trapezoidal weights and attributes. Also, an algorithm for the computation of the inverses of these expressions is given, providing exact
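    For small problems, the α-cut extremes that such an algorithm derives analytically can be checked by brute force, since for interval weights the extrema of the weighted average occur at interval endpoints. The numbers below are illustrative, not from the paper:

```python
from itertools import product

def fwa_alpha_cut(x_ints, w_ints):
    """Exact [min, max] of sum(w_i*x_i)/sum(w_i) over interval weights:
    the extrema occur at interval endpoints, so enumerate endpoints.
    The average is increasing in each x_i, so the attribute intervals
    contribute their lower ends to the min and upper ends to the max."""
    lo = min(sum(w * x[0] for w, x in zip(ws, x_ints)) / sum(ws)
             for ws in product(*w_ints))
    hi = max(sum(w * x[1] for w, x in zip(ws, x_ints)) / sum(ws)
             for ws in product(*w_ints))
    return lo, hi

x = [(1.0, 2.0), (3.0, 4.0)]   # alpha-cut intervals of two fuzzy attributes
w = [(0.4, 0.6), (0.4, 0.6)]   # alpha-cut intervals of the fuzzy weights
print(fwa_alpha_cut(x, w))
```

    The enumeration is exponential in the number of attributes, which is exactly why closed-form expressions such as those in the paper are valuable.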

  18. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  19. Polyhedral Painting with Group Averaging

    Science.gov (United States)

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  20. Impacts of invasive earthworms on soil mercury cycling: Two mass balance approaches to an earthworm invasion in a northern Minnesota forest

    Science.gov (United States)

    Sona Psarska; Edward A. Nater; Randy Kolka

    2016-01-01

    Invasive earthworms perturb natural forest ecosystems that initially developed without them, mainly by consuming the forest floor (an organic rich surficial soil horizon) and by mixing the upper parts of the soil. The fate of mercury (Hg) formerly contained in the forest floor is largely unknown. We used two mass balance approaches (simple mass balance and geochemical...

  1. Cryo-Electron Tomography and Subtomogram Averaging.

    Science.gov (United States)

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. © 2016 Elsevier Inc. All rights reserved.
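    The align-and-average step at the core of subtomogram averaging can be illustrated with a toy translational alignment; a real pipeline also searches over rotations, applies missing-wedge weighting, and works on much larger, noisier volumes:

```python
import numpy as np

def align_and_average(subtomos, ref):
    """Toy subtomogram averaging: estimate each subvolume's integer
    (circular) shift against the reference via FFT cross-correlation,
    undo the shift, and average to raise the signal-to-noise ratio."""
    aligned = []
    for vol in subtomos:
        cc = np.fft.ifftn(np.fft.fftn(ref) * np.conj(np.fft.fftn(vol))).real
        shift = np.unravel_index(np.argmax(cc), cc.shape)  # peak = best shift
        aligned.append(np.roll(vol, shift, axis=(0, 1, 2)))
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(1)
ref = rng.normal(size=(8, 8, 8))                       # stand-in structure
subs = [np.roll(ref, s, axis=(0, 1, 2)) for s in [(2, 1, 3), (5, 0, 7)]]
avg = align_and_average(subs, ref)
```

    In practice the reference itself is refined iteratively from the aligned averages rather than known in advance.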

  2. Relation between body mass index and depression: a structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Akhtar-Danesh Noori

    2007-04-01

    Full Text Available Abstract Background Obesity and depression are two major diseases which are associated with many other health problems such as hypertension, dyslipidemia, diabetes mellitus, coronary heart disease, stroke, myocardial infarction, heart failure in patients with systolic hypertension, low bone mineral density and increased mortality. Both diseases share common health complications but there are inconsistent findings concerning the relationship between obesity and depression. In this work we used the structural equation modeling (SEM) technique to examine the relation between body mass index (BMI), as a proxy for obesity, and depression using the Canadian Community Health Survey, Cycle 1.2. Methods In this SEM model we postulate that (1) BMI and depression are directly related, (2) BMI is directly affected by physical activity, and (3) depression is directly influenced by stress. SEM was also used to assess the relation between BMI and depression separately for males and females. Results The results indicate that higher BMI is associated with a more severe form of depression. On the other hand, the more severe form of depression may result in less weight gain. However, the association between depression and BMI is gender dependent. In males, higher BMI may result in a more severe form of depression, while in females the relation may not be the same. Also, there was a negative relationship between physical activity and BMI. Conclusion In general, use of the SEM method showed that the two major diseases, obesity and depression, are associated, but the form of the relation differs between males and females. More research is necessary to further understand the complexity of the relationship between obesity and depression. This study also demonstrated that SEM is a feasible technique for modeling the relation between obesity and depression.

  3. A general approach to the screening and confirmation of tryptamines and phenethylamines by mass spectral fragmentation.

    Science.gov (United States)

    Chen, Bo-Hong; Liu, Ju-Tsung; Chen, Wen-Xiong; Chen, Hung-Ming; Lin, Cheng-Huang

    2008-01-15

    Certain characteristic fragmentations of tryptamines (indoleethylamines) and phenethylamines are described. Based on GC-EI/MS, LC-ESI/MS and MALDI/TOFMS, the mass fragmentations of 13 standard compounds, including alpha-methyltryptamine (AMT), N,N-dimethyltryptamine (DMT), 5-methoxy-alpha-methyltryptamine (5-MeO-AMT), N,N-diethyltryptamine (DET), N,N-dipropyltryptamine (DPT), N,N-dibutyltryptamine (DBT), N,N-diisopropyltryptamine (DIPT), 5-methoxy-N,N-dimethyltryptamine (5-MeO-DMT), 5-methoxy-N,N-diisopropyltryptamine (5-MeO-DIPT), methamphetamine (MAMP), 3,4-methylenedioxyamphetamine (3,4-MDA), 3,4-methylenedioxymethamphetamine (3,4-MDMA) and 2-methylamino-1-(3,4-methylenedioxyphenyl)butane (MBDB), were compared. As a result, the parent ions of these analytes were difficult to obtain by GC/MS, whereas the protonated molecular ions could be observed clearly by means of ESI/MS and MALDI/TOFMS. Furthermore, two major characteristic fragmentations, namely alpha-cleavage ([M+H](+)-->[3-vinylindole](+)) and beta-cleavage ([M+H](+)-->[CH(2)N(+)R(N1)R(N2)]), are produced when the ESI and MALDI modes are used, respectively. In the case of ESI/MS, the fragment obtained from alpha-cleavage dominates. In contrast, in the case of MALDI/TOFMS, the major fragment is produced via beta-cleavage. The ionization efficiency and the fragments formed from either alpha- or beta-cleavage are closely related to the degree of alkylation of the side-chain nitrogen in both cases.

  4. Evolution of a Lowland Karst Landscape; A Mass-Balance Approach

    Science.gov (United States)

    Chamberlin, C.; Heffernan, J. B.; Cohen, M. J.; Quintero, C.; Pain, A.

    2016-12-01

    Karst landscapes are highly soluble, and are vulnerable to biological acid production as a major driving factor in their evolution. Big Cypress National Park (BICY) is a low-lying karst landscape in southern Florida displaying a distinctive morphology of isolated depressions likely influenced by biology. The goal of this study is to constrain timescales of landform development in BICY. This question was addressed through the construction of landscape-scale elemental budgets for both calcium and phosphorus. Precipitation and export fluxes were calculated using available chemistry and hydrology data, and stocks were calculated from a combination of existing data, field measurements, and laboratory chemical analysis. Estimates of expected mass export given no biological acid production and given an equivalent production of 100% of GPP were compared with observed rates. Current standing stocks of phosphorus are dominated by a large soil pool, and contain 500 Gg P. Inputs are largely dominated by precipitation, and 8000 years are necessary to accumulate standing stocks of phosphorus given modern fluxes. Calcium flux is vastly dominated by dissolution of the limestone bedrock, and though some calcium is retained in the soil, most is exported. Using LiDAR-generated estimates of volume loss across the landscape and current export rates, an estimated 15,000 years would be necessary to create the modern landscape. Both of these estimates indicate that the BICY landscape is geologically very young. The different behaviors of these elements (calcium is largely exported, while phosphorus is largely retained) lend additional confidence to estimates of denudation rates of the landscape. These estimates can be reconciled even more closely if calcium redistribution over the landscape is allowed for. This estimate is compared to the two bounding conditions for biological weathering to indicate a likely level of biological importance to landscape development in this system.

  5. Thirty years of precise gravity measurements at Mt. Vesuvius: an approach to detect underground mass movements

    Directory of Open Access Journals (Sweden)

    Giovanna Berrino

    2013-11-01

    Full Text Available Since 1982, high-precision gravity measurements have been routinely carried out on Mt. Vesuvius. The gravity network consists of selected sites, most of them coinciding with, or very close to, leveling benchmarks so that the effect of elevation changes can be removed from the gravity variations. The reference station is located in Napoli, outside the volcanic area. Since 1986, absolute gravity measurements have been periodically made at a station on Mt. Vesuvius, close to a permanent gravity station established in 1987, and at the reference in Napoli. The results of the gravity measurements since 1982 are presented and discussed. Moderate short-term gravity changes were generally observed. Over the long term, significant gravity changes occurred and the overall fields displayed well-defined patterns. Several periods of evolution may be recognized. Gravity changes revealed by the relative surveys have been confirmed by repeated absolute measurements, which also confirmed the long-term stability of the reference site. The gravity changes over the recognized periods appear correlated with the seismic crises and with changes of the tidal parameters obtained by continuous measurements. The absence of significant ground deformation implies mass redistribution, essentially density changes without significant volume changes, such as fluid migration at the depth of the seismic foci, i.e. at a few kilometers. The fluid migration may occur through pre-existing geological structures, as also suggested by hydrological studies, and/or through new fractures generated by seismic activity. This interpretation is supported by the analyses of the spatial gravity changes overlapping the most significant and recent seismic crises.

  6. Reconciling ocean mass content change based on direct and inverse approaches by utilizing data from GRACE, altimetry and Swarm

    Science.gov (United States)

    Rietbroek, R.; Uebbing, B.; Lück, C.; Kusche, J.

    2017-12-01

    Ocean mass content (OMC) change due to the melting of the ice sheets in Greenland and Antarctica, melting of glaciers and changes in terrestrial hydrology is a major contributor to present-day sea-level rise. Since 2002, the GRACE satellite mission has served as a valuable tool for directly measuring the variations in OMC. As GRACE has almost reached the end of its lifetime, efforts are being made to utilize the Swarm mission for the recovery of low-degree time-variable gravity fields, to bridge a possible gap until the GRACE-FO mission and to fill periods where GRACE data are missing. To this end we compute Swarm monthly normal equations and spherical harmonics that are found to be competitive with other solutions. In addition to directly measuring the OMC, combination of GRACE gravity data with altimetry data in a global inversion approach allows the total sea-level change to be separated into individual mass-driven and steric contributions. However, published estimates of OMC from the direct and inverse methods differ not only depending on the time window, but are also influenced by numerous post-processing choices. Here, we will look into sources of such differences between direct and inverse approaches and evaluate the capabilities of Swarm to derive OMC. Deriving time series of OMC requires several processing steps: choosing a GRACE (and altimetry) product, data coverage, masks and filters to be applied in either the spatial or spectral domain, and corrections related to spatial leakage, GIA and geocenter motion. In this study, we compare and quantify the effects of the different processing choices of the direct and inverse methods. Our preliminary results point to the GIA correction as the major source of difference between the two approaches.

  7. Migration of antioxidants from polylactic acid films: A parameter estimation approach and an overview of the current mass transfer models.

    Science.gov (United States)

    Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda

    2018-01-01

    Migration studies of chemicals from contact materials have been widely conducted due to their importance in determining the safety and shelf life of a food product in its package. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials. So, migration experiments are theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and to determine the mass transfer partition and diffusion coefficients governing the migration process of eight antioxidants from poly(lactic acid), PLA, based films into water/ethanol solutions at temperatures between 20 and 50°C. Scaled sensitivity coefficients were calculated to assess simultaneous estimation of a number of mass transfer parameters. An optimal experimental design approach was performed to show the importance of properly designing a migration experiment. Additional parameters also provide better insights into the migration of the antioxidants. For example, the partition coefficients could be better estimated using data from the early part of the experiment instead of at the end. Experiments could thus be conducted for shorter periods of time, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2 and 19×10⁻¹⁴ m²/s at ~40°C. The use of the parameter estimation approach provided additional and useful insights about the migration of antioxidants from PLA films. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time-varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high-density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large- or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large-area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large- and small-area phase defects. It identifies and rejects phase maps containing large-area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
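    A minimal sketch of the map-rejection part of such an algorithm (NaN marks voids and unreliable pixels; the threshold and names are ours, and the published algorithm additionally prunes small-area defects and removes alignment drift before estimating variance):

```python
import numpy as np

def robust_phase_average(phase_maps, max_bad_frac=0.1):
    """Defect-tolerant averaging sketch: reject whole phase maps whose
    fraction of invalid (NaN) pixels exceeds max_bad_frac, then form
    nan-aware pixelwise mean and standard-deviation maps from the rest."""
    kept = [m for m in phase_maps if np.isnan(m).mean() <= max_bad_frac]
    stack = np.array(kept)
    return np.nanmean(stack, axis=0), np.nanstd(stack, axis=0), len(kept)

rng = np.random.default_rng(2)
good = [rng.normal(0.0, 0.01, (32, 32)) for _ in range(5)]  # quiet maps
bad = np.full((32, 32), np.nan)
bad[:4, :] = 0.0                                            # mostly-void map
mean_map, std_map, n_used = robust_phase_average(good + [bad])
```

    The nan-aware reductions let partially pruned maps still contribute their valid pixels instead of discarding the whole frame.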

  9. Identification of differentially expressed proteins in retinoblastoma tumors using mass spectrometry-based comparative proteomic approach.

    Science.gov (United States)

    Naru, Jasmine; Aggarwal, Ritu; Mohanty, Ashok Kumar; Singh, Usha; Bansal, Deepak; Kakkar, Nandita; Agnihotri, Navneet

    2017-04-21

    In India, retinoblastoma is among the top five childhood cancers. Children mostly present with extraocular extension and high-risk features, which results in unsatisfactory treatment and a low survival rate. In addition, the lack of potential therapeutic and prognostic targets is another challenge in the management of retinoblastoma. We studied the comparative proteome of retinoblastoma patients (HPV-positive and HPV-negative, n=4 each) and controls (n=4) in order to identify potential retinoblastoma-specific protein targets. 2D-DIGE coupled with MALDI-TOF/TOF mass spectrometry identified 39 unique proteins. Highly deregulated proteins were GFAP, RBP3, APOA1, CRYAA, CRABP1, SAG and TF. Gene ontology (Panther 7.0) revealed the majority of proteins to be associated with metabolic processes (26%) and catalytic activity (38%). Eight proteins were significantly upregulated in HPV-positive vis-a-vis HPV-negative cases. The patient group exhibited 12 upregulated and 18 downregulated proteins compared to controls. Pathway and network analysis (IPA software) revealed CTNNB1 as the most significantly regulated signalling pathway in HPV-positive compared with HPV-negative retinoblastoma. The trends in transcriptional change of 9 genes were consistent with those at the proteomic level. Western blot analysis confirmed the expression patterns of RBP3, GFAP and CRABP1. We suggest GFAP, RBP3, CRABP1, CRYAA, APOA1 and SAG as prospective targets that could further be explored as potential candidates in therapy and may further assist in studying the disease mechanism. In this study we evaluated tumor tissue specimens from retinoblastoma patients and identified 39 differentially regulated proteins compared to healthy retina. From these, we propose RBP3, CRABP1, GFAP, CRYAA, APOA1 and SAG as promising proteomic signatures that could further be explored as efficient prognostic and therapeutic targets in retinoblastoma. The present study is not only a contribution to the ongoing endeavour for the discovery of proteomic signatures in

  10. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  11. A two-step mass-conservation approach to infer ice thickness maps: Performance for different glacier types on Svalbard

    Science.gov (United States)

    Fürst, Johannes J.; Seehaus, Thorsten; Sass, Björn; Aas, Kjetil; Benham, Toby J.; Dowdeswell, Julian; Fettweis, Xavier; Gillet-Chaulet, Fabien; Moholdt, Geir; Navarro, Francisco; Nuth, Christopher; Petterson, Rickard; Braun, Matthias

    2017-04-01

    Satellite remote sensing based on optical or radar instruments has enabled us to measure glacier-wide surface velocities as well as changes both in glacier extent and in surface elevation with good coverage worldwide. Yet, for the large majority of all glaciers and ice caps, there is in fact no information on how thick the ice cover is. Any attempt to predict glacier demise under climatic warming and to estimate the future contribution to sea-level rise is limited as long as the glacier thickness is not well constrained. Moreover, the poor knowledge of the bed topography inhibits the applicability of ice-flow models which could help to understand dominant processes controlling the ice-front evolution of marine-terminating glaciers. The reason is that the basal topography exerts major control on the dynamic response of grounded ice. As it is impractical to measure ice thicknesses on most glaciers, reconstruction approaches have been put forward that can infer thickness fields from available geometric, climatic and ice-flow information. Here, we present a two-step, mass-conserving reconstruction approach to infer 2D ice-thickness fields with prior knowledge of source and sink terms in the mass budget. The first-step reconstruction is aimed at glaciers for which not much information is available. Input requirements for this first step are comparable to other reconstruction approaches that have successfully been applied to glaciers world-wide. In fast-flowing areas where surface velocity measurements are most reliable, these observations enter a second-step reconstruction providing an improved thickness estimate. In both steps, available thickness measurements are readily assimilated to constrain the reconstruction. The approach is tested on different glacier geometries on Svalbard where an abundant thickness record was available. On these test geometries, we show that the approach performs well for entire ice caps as well as for marine- and land-terminating glaciers.

  12. A Multi-Step Approach to Assessing LIGO Test Mass Coatings

    Science.gov (United States)

    Glover, Lamar; Goff, Michael; Linker, Seth; Neilson, Joshua; Patel, Jignesh; Pinto, Innocenzo; Principe, Maria; Villarama, Ethan; Arriaga, Eddy; Barragan, Erik; Chao, Shiuh; Daneshgaran, Lara; DeSalvo, Riccardo; Do, Eric; Fajardo, Cameron

    2018-02-01

    Photographs of the LIGO Gravitational Wave detector mirrors illuminated by the standing beam were analyzed with an astronomical software tool designed to identify stars within images, which extracted hundreds of thousands of point-like scatterers uniformly distributed across the mirror surface, likely distributed through the depth of the coating layers. The sheer number of the observed scatterers implies a fundamental, thermodynamic origin during deposition or processing. If identified as crystallites, these scatterers would be a possible source of the mirror dissipation and thermal noise, which limit the sensitivity of observatories to Gravitational Waves. In order to learn more about the composition and location of the detected scatterers, a feasibility study is underway to develop a method that determines the location of the scatterers by producing a complete mapping of scatterers within test samples, including their depth distribution, optical amplitude distribution, and lateral distribution. Also, research is underway to accurately identify future materials and/or coating methods that possess the largest possible mechanical quality factor (Q). Current efforts propose a new experimental approach that will more precisely measure the Q of coatings by depositing them onto 100 nm Silicon Nitride membranes.

  13. A new insert sample approach to paper spray mass spectrometry: a paper substrate with paraffin barriers.

    Science.gov (United States)

    Colletes, T C; Garcia, P T; Campanha, R B; Abdelnur, P V; Romão, W; Coltro, W K T; Vaz, B G

    2016-03-07

    The analytical performance of paper spray (PS) using a new insert sample approach based on paper with paraffin barriers (PS-PB) is presented. The paraffin barrier is made using a simple, fast and cheap method based on the stamping of paraffin onto a paper surface. Typical operating conditions of paper spray, such as the solvent volume applied on the paper surface and the paper substrate type, are evaluated. A paper substrate with paraffin barriers shows better performance in the analysis of a range of typical analytes when compared to conventional PS-MS using normal paper (PS-NP) and PS-MS using paper with two rounded corners (PS-RC). PS-PB was applied to detect sugars and their inhibitors in sugarcane bagasse liquors from a second-generation ethanol process. Moreover, PS-PB performed well for the quantification of glucose in hydrolysis liquors, with excellent linearity (R(2) = 0.99) and limits of detection (2.77 mmol L(-1)) and quantification (9.27 mmol L(-1)). These results are better than those for PS-NP and PS-RC. PS-PB also compared favorably with the HPLC-UV method for glucose quantification in hydrolysis liquor samples.
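    Figures of merit like those reported (R², LOD, LOQ) follow from an ordinary calibration fit; the numbers below are illustrative, not the paper's data, and use the common 3.3σ/slope and 10σ/slope estimates:

```python
import numpy as np

conc = np.array([2.5, 5.0, 10.0, 20.0, 40.0])    # glucose standards, mmol/L
signal = np.array([0.9, 2.1, 4.0, 8.2, 16.1])    # relative MS response

slope, intercept = np.polyfit(conc, signal, 1)   # least-squares calibration
resid = signal - (slope * conc + intercept)
sigma = np.std(resid, ddof=2)                    # residual std dev, n-2 dof
r2 = 1.0 - np.sum(resid**2) / np.sum((signal - signal.mean())**2)
lod = 3.3 * sigma / slope                        # limit of detection
loq = 10.0 * sigma / slope                       # limit of quantification
print(round(float(r2), 4), round(float(lod), 2), round(float(loq), 2))
```

    Other LOD conventions (e.g. blank-based) exist; the residual-based form is convenient when replicate blanks are not available.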

  14. A holistic approach combining factor analysis, positive matrix factorization, and chemical mass balance applied to receptor modeling.

    Science.gov (United States)

    Selvaraju, N; Pushpavanam, S; Anu, N

    2013-12-01

    Rapid urbanization and population growth resulted in severe deterioration of air quality in most of the major cities in India. Therefore, it is essential to ascertain the contribution of various sources of air pollution to enable us to determine effective control policies. The present work focuses on the holistic approach of combining factor analysis (FA), positive matrix factorization (PMF), and chemical mass balance (CMB) for receptor modeling in order to identify the sources and their contributions in air quality studies. Insight from the emission inventory was used to remove subjectivity in source identification. Each approach has its own limitations. Factor analysis can identify qualitatively a minimal set of important factors which can account for the variations in the measured data. This step uses information from emission inventory to qualitatively match source profiles with factor loadings. This signifies the identification of dominant sources through factors. PMF gives source profiles and source contributions from the entire receptor data matrix. The data from FA is applied for rank reduction in PMF. Whenever multiple solutions exist, emission inventory identifies source profiles uniquely, so that they have a physical relevance. CMB identifies the source contributions obtained from FA and PMF. The novel approach proposed here overcomes the limitations of the individual methods in a synergistic way. The adopted methodology is found valid for a synthetic data and also the data of field study.
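    The PMF step factorizes the receptor data matrix into nonnegative source contributions and source profiles. A plain multiplicative-update NMF sketches the idea (a simplification: real PMF weights residuals by measurement uncertainty, which this toy version omits):

```python
import numpy as np

def pmf_like_nmf(X, n_sources, iters=2000, seed=0):
    """Factorize X (samples x chemical species) as G @ F with G, F >= 0:
    rows of F act as source profiles, G holds per-sample source
    contributions.  Multiplicative updates minimize ||X - G @ F||_F."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_sources))
    F = rng.random((n_sources, m))
    eps = 1e-12                        # guards against division by zero
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

rng = np.random.default_rng(3)
X = rng.random((20, 2)) @ rng.random((2, 8))   # synthetic rank-2 receptor data
G, F = pmf_like_nmf(X, n_sources=2)
err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

    The rank (number of sources) passed in is exactly where the FA step of the holistic approach feeds in: it suggests how many factors the data support before the factorization is run.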

  15. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    Science.gov (United States)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  16. Protein biomarkers on tissue as imaged via MALDI mass spectrometry: A systematic approach to study the limits of detection.

    Science.gov (United States)

    van de Ven, Stephanie M W Y; Bemis, Kyle D; Lau, Kenneth; Adusumilli, Ravali; Kota, Uma; Stolowitz, Mark; Vitek, Olga; Mallick, Parag; Gambhir, Sanjiv S

    2016-06-01

    MALDI mass spectrometry imaging (MSI) is emerging as a tool for protein and peptide imaging across tissue sections. Despite extensive study, there does not yet exist a baseline study evaluating the potential capabilities for this technique to detect diverse proteins in tissue sections. In this study, we developed a systematic approach for characterizing MALDI-MSI workflows in terms of limits of detection, coefficients of variation, spatial resolution, and the identification of endogenous tissue proteins. Our goal was to quantify these figures of merit for a number of different proteins and peptides, in order to gain more insight in the feasibility of protein biomarker discovery efforts using this technique. Control proteins and peptides were deposited in serial dilutions on thinly sectioned mouse xenograft tissue. Using our experimental setup, coefficients of variation were biomarkers and a new benchmarking strategy that can be used for comparing diverse MALDI-MSI workflows. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Migration of antioxidants from polylactic acid films, a parameter estimation approach: Part I - A model including convective mass transfer coefficient.

    Science.gov (United States)

    Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda

    2018-03-01

    A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors govern the sorption and/or desorption kinetics of migrants from polymer films: the diffusion coefficient (D), the partition coefficient (K_p,f) and the convective mass transfer coefficient (h). These three parameters were simultaneously estimated, providing in-depth insight into the physics of the migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential estimation and bootstrapping, were also performed to acquire better knowledge of the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
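
    A minimal sketch of the two-step idea, using a simplified one-exponential migration model with two lumped parameters instead of the full Crank solution in D, K_p,f and h (so the model form, parameter values and data here are illustrative assumptions only): step 1 is a coarse grid search for initial guesses, step 2 a least-squares refinement from those guesses.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical simplified migration model: fraction released follows
# m(t) = m_inf * (1 - exp(-k*t)); m_inf plays the role of the
# partition-controlled plateau and k lumps diffusion and convection.
def model(params, t):
    m_inf, k = params
    return m_inf * (1.0 - np.exp(-k * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 40)
true = (0.8, 0.15)
data = model(true, t) + rng.normal(0.0, 0.01, t.size)  # synthetic "measurements"

# Step 1: coarse grid search for the (m_inf, k) pair minimizing the SSE.
sse = lambda p: np.sum((model(p, t) - data) ** 2)
grid = [(m, k) for m in np.linspace(0.1, 1.5, 15)
               for k in np.linspace(0.01, 0.5, 15)]
p0 = min(grid, key=sse)

# Step 2: ordinary least-squares refinement starting from the grid's best guess.
fit = least_squares(lambda p: model(p, t) - data, p0)
m_inf_hat, k_hat = fit.x
```

The same two-step structure carries over when the residual function is the full three-parameter analytical solution.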

  18. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  19. Decadal-scale joint inversion of NOx and SO2 using a hybrid 4D-Var / mass balance approach

    Science.gov (United States)

    Qu, Z.; Henze, D. K.; Capps, S.; Wang, Y.; Xu, X.; Wang, J.; Keller, M.

    2016-12-01

    Quantifying the emissions of nitrogen oxides (NOx) and sulfur dioxide (SO2) is important for improving our understanding of acid rain, aerosol formation, and human health problems. Traditional top-down estimates have provided valuable constraints for NOx and SO2 emission inventories in China, but are either time-consuming (e.g., 4D-Var) or only crudely represent the influence of atmospheric transport and chemistry (e.g., mass balance). We develop an approach combining mass-balance and adjoint-based four-dimensional variational (4D-Var) methods that facilitates decadal-scale emission inversions. This hybrid inversion is first evaluated with a single-species inversion using NO2 pseudo observations. In a set of seven-year pseudo-observation tests, the hybrid posterior NOx emissions have smaller normalized mean square errors (by 54% to 94%) than those of mass balance when compared to the true emissions in most cases, and perform slightly better in detecting emission magnitudes and trends. Using this hybrid method, NO2 observations from the Ozone Monitoring Instrument (OMI), and the GEOS-Chem chemical transport model, we have derived monthly top-down NOx emissions for China from 2005 to 2012. Our posterior emissions have the same seasonality as recent bottom-up inventories, but smaller magnitudes (by 13.4% to 23.5%) and smaller growth rates (by 0.6% to 4.1%). The hybrid method is further implemented for a long-term joint inversion of NOx and SO2 emissions in China using combined observations of OMI NO2 and SO2 column densities. A 4D-Var inversion is first performed to optimize NOx and SO2 emissions in the base year using the GEOS-Chem adjoint. Mass-balance scaling factors are then applied to these posteriors to improve their inter-annual variation. Overall, these studies augment the utility of remote sensing data for evaluating emission control strategies and mitigating the impact of NOx and SO2 on human health and the environment.
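
    The mass-balance step can be sketched in one line: scale prior emissions by the ratio of observed to modeled column densities. The arrays and the unit sensitivity factor below are illustrative assumptions, not values or the exact formulation from the study:

```python
import numpy as np

# Assumed mass-balance update: scale each grid cell's prior emissions by
# the ratio of observed to modeled column density, raised to a local
# sensitivity exponent beta (taken as 1 here for simplicity).
E_prior = np.array([2.0, 5.0, 1.0])    # prior NOx emissions per cell (arbitrary units)
omega_mod = np.array([1.0, 2.5, 0.8])  # modeled NO2 column densities
omega_obs = np.array([1.2, 2.0, 0.8])  # satellite-observed column densities

beta = 1.0
E_post = E_prior * (omega_obs / omega_mod) ** beta
```

This cheap update supplies the inter-annual scaling; the 4D-Var step supplies the transport- and chemistry-aware base-year optimization.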

  20. New approaches for the chemical and physical characterization of aerosols using a single particle mass spectrometry based technique

    Science.gov (United States)

    Spencer, Matthew Todd

    Aerosols affect the lives of people every day. They can decrease visibility, alter cloud formation and cloud lifetimes, change the energy balance of the earth, and are implicated in causing numerous health problems. Measuring the physical and chemical properties of aerosols is essential to understand and mitigate any negative impacts that aerosols might have on climate and human health. Aerosol time-of-flight mass spectrometry (ATOFMS) is a technique that measures the size and chemical composition of individual particles in real time. The goal of this dissertation is to develop new and useful approaches for measuring the physical and/or chemical properties of particles using ATOFMS. This has been accomplished using laboratory experiments, ambient field measurements and, in some cases, comparisons between them. Mass spectra generated from petrochemical particles were compared with light-duty vehicle (LDV) and heavy-duty diesel vehicle (HDDV) particle mass spectra. This comparison has given us new insight into how to differentiate between particles from these two sources. A method for coating elemental carbon (EC) particles with organic carbon (OC) was used to generate a calibration curve for quantifying the fractions of organic carbon and elemental carbon on particles using ATOFMS. This work demonstrates that it is possible to obtain quantitative chemical information with regard to EC and OC using ATOFMS. The relationship between electrical mobility diameter and aerodynamic diameter is used to develop a tandem differential mobility analyzer-ATOFMS technique to measure the effective density, size and chemical composition of particles. The method is applied in the field and gives new insight into the physical/chemical properties of particles. The size-resolved chemical composition of aerosols was measured in the Indian Ocean during the monsoonal transition period. This field work shows that a significant fraction of aerosol transported from India was from biomass

  1. Self-Averaging Expectation Propagation

    DEFF Research Database (Denmark)

    Cakmak, Burak; Opper, Manfred; Fleury, Bernard Henri

    We investigate the problem of approximate inference using Expectation Propagation (EP) for large systems under some statistical assumptions. Our approach tries to overcome the numerical bottleneck of EP caused by the inversion of large matrices. Assuming that the measurement matrices are realizat...... on a signal recovery problem of compressed sensing and compare with standard EP....

  2. Resection of tumors of the third ventricle involving the hypothalamus: effects on body mass index using a dedicated surgical approach.

    Science.gov (United States)

    Mortini, Pietro; Gagliardi, Filippo; Bailo, Michele; Boari, Nicola; Castellano, Antonella; Falini, Andrea; Losa, Marco

    2017-07-01

    Resection of large lesions growing into the third ventricle is still considered demanding surgery, owing to the high risk of severe endocrine and neurological complications. Some neurosurgical approaches were in the past considered the procedures of choice for accessing the third ventricle; however, they were burdened by endocrine and neurological consequences, such as memory loss and epilepsy. We report here the endocrine and functional results in a series of patients operated on with a recently developed approach specifically tailored for the resection of large lesions growing into the third ventricle. The authors conducted a retrospective analysis of 10 patients operated on between 2011 and 2012 for the resection of large tumors growing into the third ventricle. Total resection was achieved in all patients. No perioperative deaths were recorded, and all patients were alive at the end of follow-up. One year after surgery, 8 of 10 patients had an excellent outcome, with a Karnofsky Performance Status of 100 and a Glasgow Outcome score of 5, and 8 patients experienced an improvement in Body Mass Index. Modern neurosurgery allows safe and effective treatment of large lesions growing into the third ventricle with a good postoperative functional status.

  3. Common Peak Approach Using Mass Spectrometry Data Sets for Predicting the Effects of Anticancer Drugs on Breast Cancer

    Directory of Open Access Journals (Sweden)

    Masaru Ushijima

    2007-01-01

    Full Text Available We propose a method for biomarker discovery from mass spectrometry data, improving the common peak approach developed by Fushiki et al. (BMC Bioinformatics, 7:358, 2006). The common peak method is a simple way to select, from all detected peaks, the sensible peaks that are shared by many subjects, by combining a standard spectrum alignment with kernel density estimates. The key idea of our proposed method is to apply the common peak approach to each class label separately. Hence, the proposed method gains more informative peaks for predicting class labels, while minor peaks associated with specific subjects are deleted correctly. We used a SELDI-TOF MS data set from laser-microdissected cancer tissues for predicting the treatment effects of neoadjuvant therapy using an anticancer drug on breast cancer patients. The AdaBoost algorithm is adopted for pattern recognition, based on the set of candidate peaks selected by the proposed method. The analysis gives good performance in the sense of test errors for classifying the class labels for a given feature vector of selected peak values.
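
    The per-class common-peak idea can be sketched as follows: pool each class's detected peak positions, estimate a kernel density over m/z, and keep locations where the density is high (peaks shared by many subjects in that class). The peak positions, bandwidth and threshold below are all invented for illustration:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Synthetic peak positions: peaks near m/z 1000 and 1500 recur across
# class-A subjects; peaks near 1000 and 2000 recur across class-B subjects.
class_a = np.concatenate([rng.normal(1000, 2, 30), rng.normal(1500, 2, 30)])
class_b = np.concatenate([rng.normal(1000, 2, 30), rng.normal(2000, 2, 30)])

def common_peaks(positions, grid, frac=0.5):
    """Keep grid locations where the class's peak density exceeds a fraction
    of its maximum, collapsing each contiguous run to its mean position."""
    dens = gaussian_kde(positions, bw_method=0.05)(grid)
    keep = dens > frac * dens.max()
    peaks, run = [], []
    for x, k in zip(grid, keep):
        if k:
            run.append(x)
        elif run:
            peaks.append(float(np.mean(run)))
            run = []
    if run:
        peaks.append(float(np.mean(run)))
    return peaks

grid = np.linspace(900, 2100, 1201)
peaks_a = common_peaks(class_a, grid)  # peaks common within class A
peaks_b = common_peaks(class_b, grid)  # peaks common within class B
```

Applying the selection per class, as the paper proposes, retains the class-specific peak near 1500 (or 2000) that a pooled analysis could dilute.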

  4. A problem-solving approach to effective insulin injection for patients at either end of the body mass index.

    Science.gov (United States)

    Juip, Micki; Fitzner, Karen

    2012-06-01

    People with diabetes require skills and knowledge to adhere to medication regimens and self-manage this complex disease. Effective self-management is contingent upon effective problem solving and decision making. Gaps existed regarding useful approaches to problem solving by individuals with very low and very high body mass index (BMI) who self-administer insulin injections. This article addresses those gaps by presenting findings from a patient survey, a symposium on the topic of problem solving, and recent interviews with diabetes educators to facilitate problem-solving approaches for people with diabetes with high and low BMI who inject insulin and/or other medications. In practice, problem solving involves problem identification, definition, and specification; goal and barrier identification are a prelude to generating a set of potential strategies for problem resolution and applying these strategies to implement a solution. Teaching techniques, such as site rotation and ensuring that people with diabetes use the appropriate equipment, increase confidence with medication adherence. Medication taking is more effective when people with diabetes are equipped with the knowledge, skills, and problem-solving behaviors to effectively self-manage their injections.

  5. Dual-core mass-balance approach for evaluating mercury and 210Pb atmospheric fallout and focusing to lakes.

    Science.gov (United States)

    Van Metre, Peter C; Fuller, Christopher C

    2009-01-01

    Determining atmospheric deposition rates of mercury and other contaminants using lake sediment cores requires a quantitative understanding of sediment focusing. Here we present a novel approach that solves mass-balance equations for two cores algebraically to estimate contaminant contributions to sediment from direct atmospheric fallout and from watershed and in-lake focusing. The model is applied to excess 210Pb and Hg in cores from Hobbs Lake, a high-altitude lake in Wyoming. Model results for excess 210Pb are consistent with estimates of fallout and focusing factors computed using excess 210Pb burdens in lake cores and soil cores from the watershed, and model results for Hg fallout are consistent with fallout estimated using the soil-core-based 210Pb focusing factors. The lake cores indicate small increases in mercury deposition beginning in the late 1800s and large increases after 1940, with maxima at the tops of the cores of 16-20 microg/m2 per year. These results suggest that global Hg emissions, and possibly regional emissions in the western United States, are affecting the north-central Rocky Mountains. Hg fallout estimates are generally consistent with fallout reported from an ice core from the nearby Upper Fremont Glacier, but with several notable differences. The model might not work for lakes with complex geometries and multiple sediment inputs, but for lakes with simple geometries, like Hobbs, it can provide a quantitative approach for evaluating sediment focusing and estimating contaminant fallout.
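
    Assuming a linear two-compartment form for the two mass-balance equations (the paper's exact equations may differ), solving them algebraically reduces to a 2x2 linear solve; all numbers below are hypothetical:

```python
import numpy as np

# Assumed two-core mass balance: each core's measured burden B_i is the
# direct atmospheric fallout A scaled by a sediment-focusing factor f_i,
# plus a watershed contribution W scaled by a delivery factor g_i:
#   B1 = f1*A + g1*W
#   B2 = f2*A + g2*W
f = np.array([1.2, 2.0])    # focusing factors (hypothetical)
g = np.array([0.5, 1.5])    # watershed delivery factors (hypothetical)
B = np.array([30.0, 70.0])  # excess 210Pb burdens measured in the two cores

# Two equations, two unknowns: solve for fallout A and watershed input W.
A, W = np.linalg.solve(np.column_stack([f, g]), B)
```
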

  6. Recent trends in application of multivariate curve resolution approaches for improving gas chromatography-mass spectrometry analysis of essential oils.

    Science.gov (United States)

    Jalali-Heravi, Mehdi; Parastar, Hadi

    2011-08-15

    Essential oils (EOs) are valuable natural products that are popular nowadays in the world due to their effects on the health conditions of human beings and their role in preventing and curing diseases. In addition, EOs have a broad range of applications in foods, perfumes, cosmetics and human nutrition. Among different techniques for analysis of EOs, gas chromatography-mass spectrometry (GC-MS) is the most important one in recent years. However, there are some fundamental problems in GC-MS analysis including baseline drift, spectral background, noise, low S/N (signal to noise) ratio, changes in the peak shapes and co-elution. Multivariate curve resolution (MCR) approaches cope with ongoing challenges and are able to handle these problems. This review focuses on the application of MCR techniques for improving GC-MS analysis of EOs published between January 2000 and December 2010. In the first part, the importance of EOs in human life and their relevance in analytical chemistry is discussed. In the second part, an insight into some basics needed to understand prospects and limitations of the MCR techniques are given. In the third part, the significance of the combination of the MCR approaches with GC-MS analysis of EOs is highlighted. Furthermore, the commonly used algorithms for preprocessing, chemical rank determination, local rank analysis and multivariate resolution in the field of EOs analysis are reviewed. Copyright © 2011 Elsevier B.V. All rights reserved.
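
    A bare-bones sketch of one widely used MCR algorithm, MCR-ALS: alternately solve least-squares problems for concentration profiles and spectra under a non-negativity constraint (here enforced by simple clipping), so that the data matrix factors as D ≈ C·Sᵀ. The data are synthetic and the constraint handling is deliberately minimal:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic rank-2 GC-MS-like data: 50 elution points x 80 m/z channels,
# built from non-negative concentration profiles C_true and spectra S_true.
C_true = np.abs(rng.normal(size=(50, 2)))
S_true = np.abs(rng.normal(size=(80, 2)))
D = C_true @ S_true.T + 0.01 * rng.normal(size=(50, 80))

# Alternating least squares with non-negativity via clipping.
C = np.abs(rng.normal(size=(50, 2)))  # random non-negative initial estimate
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

# Relative reconstruction error of the resolved bilinear model.
err = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
```

Production MCR-ALS implementations add proper non-negative least squares, selectivity and closure constraints, and chemical-rank estimation, which this sketch omits.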

  7. A Rational Approach for Discovering and Validating Cancer Markers in Very Small Samples Using Mass Spectrometry and ELISA Microarrays

    Directory of Open Access Journals (Sweden)

    Richard C. Zangar

    2004-01-01

    Full Text Available Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples such as nipple aspirate fluid (NAF or early-stage tumors are inherently small. Other samples such as serum are collected in larger volumes but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.

  8. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculation efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
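
    Baseline (non-flexible) time domain averaging, which the FTDA improves on, reduces to slicing the signal into segments of known period and averaging them; periodic content is preserved while asynchronous noise variance drops by the number of averaged segments. The sketch below shows this on a synthetic two-harmonic signal (it does not implement the frequency-domain sampling or CZT steps of the FTDA itself):

```python
import numpy as np

rng = np.random.default_rng(3)
period, n_periods = 100, 200
t = np.arange(period * n_periods)
# Synthetic periodic signal: fundamental plus third harmonic, buried in noise.
clean = np.sin(2 * np.pi * t / period) + 0.5 * np.sin(6 * np.pi * t / period)
noisy = clean + rng.normal(0.0, 1.0, t.size)

# Classic TDA: average the 200 synchronous segments into one cycle.
tda = noisy.reshape(n_periods, period).mean(axis=0)

one_cycle = clean[:period]
snr_before = 10 * np.log10(one_cycle.var() / 1.0)                     # raw SNR, dB
snr_after = 10 * np.log10(one_cycle.var() / (tda - one_cycle).var())  # after averaging
```

Note this baseline assumes the period is an exact integer number of samples; when it is not, the slicing truncates cycles, which is precisely the period cutting error the FTDA is designed to avoid.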

  9. MALDI-ISD Mass Spectrometry Analysis of Hemoglobin Variants: a Top-Down Approach to the Characterization of Hemoglobinopathies

    Science.gov (United States)

    Théberge, Roger; Dikler, Sergei; Heckendorf, Christian; Chui, David H. K.; Costello, Catherine E.; McComb, Mark E.

    2015-08-01

    Hemoglobinopathies are the most common inherited disorders in humans and are thus the target of screening programs worldwide. Over the past decade, mass spectrometry (MS) has gained a more important role as a clinical means to diagnose variants, and a number of approaches have been proposed for characterization. Here we investigate the use of matrix-assisted laser desorption/ionization time-of-flight MS (MALDI-TOF MS) with sequencing using in-source decay (MALDI-ISD) for the characterization of Hb variants. We explored the effect of matrix selection, using super DHB or 1,5-diaminonaphthalene, on ISD fragment ion yield and distribution. MALDI-ISD MS of whole blood using super DHB simultaneously provided molecular weights for the alpha and beta chains, as well as extensive fragmentation in the form of sequence-defining c-, (z + 2)-, and y-ion series. We observed sequence coverage of the first 70 amino acid positions from the N- and C-termini of the alpha and beta chains in a single experiment. An abundant beta chain N-terminal fragment ion corresponding to βc34 was determined to be a diagnostic marker ion for Hb S (β6 Glu→Val, sickle cell), Hb C (β6 Glu→Lys), and potentially for Hb E (β26 Glu→Lys). The MALDI-ISD analysis of Hb S and Hb SC yielded mass shifts corresponding to the variants, demonstrating the potential for high-throughput screening. Characterization of an alpha chain variant, Hb Westmead (α122 His→Gln), generated fragments that established the location of the variant. This study is the first clinical application of MALDI-ISD MS for the determination and characterization of hemoglobin variants.
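
    The diagnostic mass shifts follow directly from standard monoisotopic amino acid residue masses; the short sketch below computes them (the residue masses are textbook values, and the variant assignments are those stated in the abstract):

```python
# Monoisotopic residue masses in Da (standard values) for the residues
# involved in the substitutions named above.
GLU = 129.04259  # glutamic acid residue
VAL = 99.06841   # valine residue
LYS = 128.09496  # lysine residue

# Substitution mass shifts observed on the variant chain / fragment ions:
shift_hb_s = VAL - GLU  # beta-6 Glu->Val (Hb S): about -29.97 Da
shift_hb_c = LYS - GLU  # beta-6 Glu->Lys (Hb C): about -0.95 Da
shift_hb_e = LYS - GLU  # beta-26 Glu->Lys (Hb E): same delta as Hb C
```

The roughly 30 Da shift of Hb S is easy to resolve on intact fragments, whereas the sub-dalton Glu→Lys shift of Hb C and Hb E is what makes a diagnostic fragment ion such as βc34 valuable.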

  10. Diagnosis of breast masses from dynamic contrast-enhanced and diffusion-weighted MR: a machine learning approach.

    Directory of Open Access Journals (Sweden)

    Hongmin Cai

    Full Text Available PURPOSE: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining morphological features from diffusion-weighted imaging (DWI) with kinetic features from DCE-MRI to improve the discrimination of malignant from benign breast masses is rarely reported. MATERIALS AND METHODS: The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning scheme, including feature subset selection and various classification schemes, was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features for predicting the types of lesions. Various measurements, including cross validation and receiver operating characteristics, were used to quantify the diagnostic performance of each feature as well as of their combination. RESULTS: All seven features were found to be statistically different between the malignant and the benign groups, and their combination achieved the highest classification accuracy. The seven features include one pathological variable (age), one morphological variable (slope), three texture features (entropy, inverse difference and information correlation), one kinetic feature (SER) and one DWI feature (apparent diffusion coefficient, ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through a cross validation scheme. The averaged measurements of sensitivity, specificity, AUC and accuracy are 0.85, 0.89, 90.9% and 0.93, respectively. 
CONCLUSION: Multi-sided variables which characterize the morphological, kinetic, pathological

  11. Diagnosis of breast masses from dynamic contrast-enhanced and diffusion-weighted MR: a machine learning approach.

    Science.gov (United States)

    Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li

    2014-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining morphological features from diffusion-weighted imaging (DWI) with kinetic features from DCE-MRI to improve the discrimination of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning scheme, including feature subset selection and various classification schemes, was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features for predicting the types of lesions. Various measurements, including cross validation and receiver operating characteristics, were used to quantify the diagnostic performance of each feature as well as of their combination. All seven features were found to be statistically different between the malignant and the benign groups, and their combination achieved the highest classification accuracy. The seven features include one pathological variable (age), one morphological variable (slope), three texture features (entropy, inverse difference and information correlation), one kinetic feature (SER) and one DWI feature (apparent diffusion coefficient, ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through a cross validation scheme. The averaged measurements of sensitivity, specificity, AUC and accuracy are 0.85, 0.89, 90.9% and 0.93, respectively. Multi-sided variables which characterize the morphological, kinetic, pathological properties and DWI measurement of ADC can dramatically improve the
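
    The cross-validation scheme described above can be sketched with synthetic data; a simple nearest-centroid rule stands in for the classical classifiers actually compared in the study, and the class sizes mirror the 85/149 split:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic 7-feature matrix: 85 "benign" and 149 "malignant" cases with
# a class-mean offset (purely illustrative, not the study's data).
X = np.vstack([rng.normal(0.0, 1.0, (85, 7)),
               rng.normal(1.5, 1.0, (149, 7))])
y = np.array([0] * 85 + [1] * 149)

def nearest_centroid_cv(X, y, k=5):
    """k-fold cross validation of a nearest-centroid classifier."""
    folds = np.array_split(rng.permutation(len(y)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        c0 = X[train][y[train] == 0].mean(axis=0)  # benign centroid
        c1 = X[train][y[train] == 1].mean(axis=0)  # malignant centroid
        pred = (np.linalg.norm(X[test] - c1, axis=1) <
                np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

acc = nearest_centroid_cv(X, y)
```
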

  12. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph, using a simple weighted harmonic average of connectivity: a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to examine asymmetry in authorship networks, such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach by devising a ratings scheme that we apply to data from the Netflix Prize, finding a significant improvement with our method over a baseline.
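
    One plausible reading of the weighted-harmonic-average construction (the paper's exact update rule may differ) can be sketched as a fixed-point iteration: a node's closeness to a source is the weighted harmonic average, over its neighbors, of each neighbor's closeness plus the inverse edge weight. The star graph below illustrates the asymmetry, since closeness measured from the hub differs from closeness measured from a leaf:

```python
import numpy as np

def gen_closeness(W, source, iters=200):
    """Iterative weighted-harmonic-average closeness of every node to
    `source` in the undirected weighted graph with adjacency matrix W.
    (Assumed update rule, sketched for illustration.)"""
    n = len(W)
    E = np.full(n, 1.0)
    E[source] = 0.0
    for _ in range(iters):
        for b in range(n):
            if b == source:
                continue
            nbrs = np.nonzero(W[b])[0]
            w = W[b, nbrs]
            # weighted harmonic average of (neighbor closeness + 1/weight)
            E[b] = w.sum() / np.sum(w / (E[nbrs] + 1.0 / w))
    return E

# Star graph: node 0 is a hub linked to leaves 1, 2, 3 with unit weights.
W = np.zeros((4, 4))
for leaf in (1, 2, 3):
    W[0, leaf] = W[leaf, 0] = 1.0

E_from_hub = gen_closeness(W, source=0)   # each leaf sits at closeness 1
E_from_leaf = gen_closeness(W, source=1)  # the hub averages over 3 neighbors
```

The asymmetry is visible already here: the leaf is at closeness 1 from the hub, but the hub converges to closeness 2 from the leaf, because the hub's harmonic average runs over two far neighbors and one near one.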

  13. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer's reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer's disregard of marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers' role in governing different users' interests.

  14. Group averaging for de Sitter free fields

    Energy Technology Data Exchange (ETDEWEB)

    Marolf, Donald; Morrison, Ian A, E-mail: marolf@physics.ucsb.ed, E-mail: ian_morrison@physics.ucsb.ed [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2009-12-07

    Perturbative gravity about global de Sitter space is subject to linearization-stability constraints. Such constraints imply that quantum states of matter fields couple consistently to gravity only if the matter state has vanishing de Sitter charges, i.e. only if the state is invariant under the symmetries of de Sitter space. As noted by Higuchi, the usual Fock spaces for matter fields contain no de Sitter-invariant states except the vacuum, though a new Hilbert space of de Sitter-invariant states can be constructed via so-called group-averaging techniques. We study this construction for free scalar fields of arbitrary positive mass in any dimension, and for linear vector and tensor gauge fields in any dimension. Our main result is to show in each case that group averaging converges for states containing a sufficient number of particles. We consider general N-particle states with smooth wavefunctions, though we obtain somewhat stronger results when the wavefunctions are finite linear combinations of de Sitter harmonics. Along the way we obtain explicit expressions for general boost matrix elements in a familiar basis.

  15. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes, and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function, and to go back and forth between the two under a mild condition.
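
    The equality of the two averages can be illustrated numerically for a discrete-time sample path: the long-run time average of f(X(t)) coincides with the expectation of f taken under the empirical (frequency) distribution of the path:

```python
import numpy as np

rng = np.random.default_rng(4)
# A fixed sample path of a discrete-time process on three states.
X = rng.choice([0.0, 1.0, 3.0], size=100_000, p=[0.5, 0.3, 0.2])
f = lambda x: x ** 2  # any measurable function of the process

# Long-run time average of f along the path.
time_avg = f(X).mean()

# Expectation of f under the path's long-run frequency distribution.
values, counts = np.unique(X, return_counts=True)
freq_expect = np.sum(f(values) * counts / counts.sum())
```

For a finite discrete-time path the identity is exact by rearranging the sum; the paper's contribution is the conditions under which it survives the passage to infinite horizons and continuous time.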

  16. In-capillary approach to eliminate SDS interferences in antibody analysis by capillary electrophoresis coupled to mass spectrometry.

    Science.gov (United States)

    Sánchez-Hernández, Laura; Montealegre, Cristina; Kiessig, Steffen; Moritz, Bernd; Neusüß, Christian

    2017-04-01

    Capillary electrophoresis is an important technique for the characterization of monoclonal antibodies (mAbs), especially in the pharmaceutical context. However, identification is difficult, as upscaling and hyphenation of the methods used directly to mass spectrometry is often not possible, due to separation medium components that are incompatible with MS detection. Here, a CE-MS method for the analysis of SDS-complexed mAb samples is presented. To obtain narrow and intensive peaks of SDS-treated antibodies, an in-capillary strategy was developed based on the co-injection of positively charged surfactants and methanol as organic solvent. For samples containing 0.2% (v/v) of SDS, recovered MS peak intensities of up to 97 and 95% were achieved using cetyltrimethylammonium bromide or benzalkonium chloride, respectively. Successful removal of SDS was shown in neutral coated capillaries, but also in a capillary with a positively charged coating applying reversed polarity. The usefulness of this in-capillary strategy was also demonstrated for other proteins, for antibodies dissolved in up to 10% v/v SDS solution, and for other SDS-containing matrices, including the sieving matrix used in a standard CE-SDS method and gel buffers applied in SDS-PAGE methods. The developed CE-MS approaches enable fast and reproducible characterization of SDS-complexed antibodies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Identification of clinically relevant Corynebacterium strains by Api Coryne, MALDI-TOF-mass spectrometry and molecular approaches.

    Science.gov (United States)

    Alibi, S; Ferjani, A; Gaillot, O; Marzouk, M; Courcol, R; Boukadida, J

    2015-09-01

    We evaluated the Bruker Biotyper matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry (MS) system for the identification of 97 Corynebacterium clinical strains, comparing identification by Api Coryne and MALDI-TOF-MS against sequencing of the 16S rRNA gene and the hypervariable region of the rpoB gene as the reference method. C. striatum was the predominant species isolated, followed by C. amycolatum. There was agreement between Api Coryne strips and MALDI-TOF-MS identification in 88.65% of cases. MALDI-TOF-MS was unable to differentiate C. aurimucosum from C. minutissimum and C. minutissimum from C. singulare, but reliably identified 92 of 97 (94.84%) strains. Two strains remained incompletely identified to the species level by MALDI-TOF-MS and molecular approaches; they belonged to the genera Cellulomonas and Pseudoclavibacter. In conclusion, MALDI-TOF-MS is a rapid and reliable method for the identification of Corynebacterium species. However, some limits have been noted and have to be resolved by the application of molecular methods. Copyright © 2015. Published by Elsevier SAS.

  18. Evaluating the potential for environmental pollution from chromated copper arsenate (CCA)-treated wood waste: a new mass balance approach.

    Science.gov (United States)

    Mercer, T G; Frostick, L E

    2014-07-15

    The potential for pollution from arsenic, chromium and copper in chromated copper arsenate (CCA) treated wood waste was assessed using two lysimeter studies. The first utilised lysimeters containing soil and CCA wood waste mulch exposed to natural conditions over a five month period. The second study used the same lysimeter setup in a regulated greenhouse setting with a manual watering regime. Woodchip, soil and leachate samples were evaluated for arsenic, chromium and copper concentrations. Resultant concentration data were used to produce mass balances, an approach thus far unused in such studies. This novel analysis revealed new patterns of mobility and distribution of the elements in the system. The results suggest that CCA wood waste tends to leach on initial exposure to a leachant and during weathering of the wood. When in contact with soil, metal(loid) transport is reduced due to complexation reactions. With higher water application or where the adsorption capacity of the soil is exceeded, the metal(loid)s are transported through the soil column as leachate. Overall, there was an unexplained loss of metal(loid)s from the system that might be attributed to volatilisation of arsenic and plant uptake. This suggests a hitherto unidentified risk to both the environment and human health. Copyright © 2014 Elsevier B.V. All rights reserved.
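
The mass balance idea used in the study can be sketched as simple bookkeeping: for each element, any mass not recovered in the measured pools (leachate, soil, residual wood) is an unexplained loss. All names and numbers below are hypothetical illustrations, not the study's data:

```python
def mass_balance(initial_mg, leachate_mg, soil_retained_mg, wood_residual_mg):
    """Unexplained loss (mg) of one element in a lysimeter system.

    A positive value flags mass not captured by the measured pools
    (e.g., volatilisation or plant uptake, as hypothesised in the study).
    """
    accounted = leachate_mg + soil_retained_mg + wood_residual_mg
    return initial_mg - accounted

# Illustrative (made-up) numbers for arsenic in one lysimeter:
loss = mass_balance(initial_mg=120.0, leachate_mg=18.5,
                    soil_retained_mg=40.0, wood_residual_mg=55.0)
# loss = 6.5 mg unaccounted for
```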

  19. A novel approach to quantifying the sensitivity of current and future cosmological datasets to the neutrino mass ordering through Bayesian hierarchical modeling

    Science.gov (United States)

    Gerbino, Martina; Lattanzi, Massimiliano; Mena, Olga; Freese, Katherine

    2017-12-01

    We present a novel approach to derive constraints on neutrino masses, as well as on other cosmological parameters, from cosmological data, while taking into account our ignorance of the neutrino mass ordering. We derive constraints from a combination of current as well as future cosmological datasets on the total neutrino mass Mν and on the mass fractions fν,i = mi/Mν (where the index i = 1, 2, 3 indicates the three mass eigenstates) carried by each of the mass eigenstates mi, after marginalizing over the (unknown) neutrino mass ordering, either normal ordering (NH) or inverted ordering (IH). The bounds on all the cosmological parameters, including those on the total neutrino mass, therefore take into account the uncertainty related to our ignorance of the mass hierarchy that is actually realized in nature. This novel approach is carried out in the framework of Bayesian analysis of a typical hierarchical problem, where the distribution of the parameters of the model depends on further parameters, the hyperparameters. In this context, the choice of the neutrino mass ordering is modeled via the discrete hyperparameter htype, which we introduce in the usual Markov chain analysis. The preference from cosmological data for either the NH or the IH scenario is then simply encoded in the posterior distribution of the hyperparameter itself. Current cosmic microwave background (CMB) measurements assign equal odds to the two hierarchies, and are thus unable to distinguish between them. However, after the addition of baryon acoustic oscillation (BAO) measurements, a weak preference for the normal hierarchical scenario appears, with odds of 4 : 3 from Planck temperature and large-scale polarization in combination with BAO (3 : 2 if small-scale polarization is also included). Concerning next-generation cosmological experiments, forecasts suggest that the combination of upcoming CMB (COrE) and BAO surveys (DESI) may determine the neutrino mass hierarchy at a high statistical
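
The role of the discrete ordering hyperparameter can be sketched with a toy posterior-odds calculation over the two orderings (the marginal-likelihood values below are assumed for illustration, not taken from the paper's chains):

```python
def posterior_odds(evidence_nh, evidence_ih, prior_nh=0.5, prior_ih=0.5):
    """Posterior odds NH:IH for a discrete ordering hyperparameter.

    evidence_* are the marginal likelihoods of the data under each ordering;
    with equal priors the odds reduce to the Bayes factor.
    """
    return (evidence_nh * prior_nh) / (evidence_ih * prior_ih)

# With equal priors, odds of 4:3 (as quoted for Planck + BAO) correspond
# to a marginal-likelihood ratio of 4/3:
odds = posterior_odds(evidence_nh=4.0, evidence_ih=3.0)
p_nh = odds / (1.0 + odds)   # posterior probability of NH = 4/7 ≈ 0.571
```

In the paper's setup these odds fall out directly from the fraction of Markov chain samples landing in each value of the ordering hyperparameter.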

  20. Implications of elevated CO2 on pelagic carbon fluxes in an Arctic mesocosm study – an elemental mass balance approach

    Directory of Open Access Journals (Sweden)

    J. Czerny

    2013-05-01

    Full Text Available Recent studies on the impacts of ocean acidification on pelagic communities have identified changes in carbon to nutrient dynamics with related shifts in elemental stoichiometry. In principle, mesocosm experiments provide the opportunity of determining temporal dynamics of all relevant carbon and nutrient pools and, thus, calculating elemental budgets. In practice, attempts to budget mesocosm enclosures are often hampered by uncertainties in some of the measured pools and fluxes, in particular due to uncertainties in constraining air–sea gas exchange, particle sinking, and wall growth. In an Arctic mesocosm study on ocean acidification applying KOSMOS (Kiel Off-Shore Mesocosms for future Ocean Simulation), all relevant element pools and fluxes of carbon, nitrogen and phosphorus were measured, using an improved experimental design intended to narrow down the mentioned uncertainties. Water-column concentrations of particulate and dissolved organic and inorganic matter were determined daily. New approaches for quantitative estimates of material sinking to the bottom of the mesocosms and gas exchange at 48 h temporal resolution, as well as estimates of wall growth, were developed to close the gaps in the element budgets. However, losses of elements from the budgets into a sum of insufficiently determined pools were detected; such losses are principally unavoidable in mesocosm investigations. The comparison of variability patterns of all single measured datasets revealed analytic precision to be the main issue in the determination of budgets. Uncertainties in dissolved organic carbon (DOC), nitrogen (DON) and particulate organic phosphorus (POP) were much higher than the summed error in determination of the same elements in all other pools. With estimates provided for all other major elemental pools, mass balance calculations could be used to infer the temporal development of DOC, DON and POP pools. Future elevated pCO2 was found to enhance net autotrophic community carbon

  1. Exact Membership Functions for the Fuzzy Weighted Average

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.

    2011-01-01

    The problem of computing the fuzzy weighted average, where both attributes and weights are fuzzy numbers, is well studied in the literature. Generally, the approach is to apply Zadeh’s extension principle to compute α-cuts of the fuzzy weighted average from the α-cuts of the attributes and weights
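
One α-cut of the fuzzy weighted average can be sketched by brute force, assuming the interval α-cuts of the attributes and weights are given. The sketch exploits the fact that the weighted average is monotone in each attribute and that its extrema over the weight intervals occur at interval endpoints; it is an illustration of the α-cut computation, not the authors' exact algorithm:

```python
from itertools import product

def fwa_alpha_cut(attr_cuts, weight_cuts):
    """One α-cut [lo, hi] of the fuzzy weighted average.

    attr_cuts, weight_cuts: lists of (low, high) intervals — the α-cuts of
    the fuzzy attributes and weights. The extrema of sum(w*a)/sum(w) over
    the weight box occur at interval endpoints, so an exhaustive search
    over endpoint combinations is exact (fine for small n).
    """
    lows = [a for a, _ in attr_cuts]
    highs = [b for _, b in attr_cuts]
    lo = min(sum(w * a for w, a in zip(ws, lows)) / sum(ws)
             for ws in product(*weight_cuts))
    hi = max(sum(w * a for w, a in zip(ws, highs)) / sum(ws)
             for ws in product(*weight_cuts))
    return lo, hi

# Two attributes with α-cuts [1,2] and [3,4], weights [1,2] and [1,1]:
cut = fwa_alpha_cut([(1, 2), (3, 4)], [(1, 2), (1, 1)])
# cut == (5/3, 3.0) for this example
```

Repeating this for a grid of α levels traces out the membership function of the fuzzy weighted average.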

  2. Site Averaged Neutron Soil Moisture: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the neutron probe soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  3. Site Averaged Gravimetric Soil Moisture: 1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  4. Site Averaged Gravimetric Soil Moisture: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  5. Site Averaged Gravimetric Soil Moisture: 1987 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  6. Site Averaged Gravimetric Soil Moisture: 1987 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  7. A tandem mass spectrometric approach for determining the structure of molecular species of ceramide in the marine sponge, Haliclona cribricutis

    Digital Repository Service at National Institute of Oceanography (India)

    Tilvi, S.; Majik, M.; Naik, C.G.

    molecular species. These included gas chromatography/mass spectrometry (GC/MS), fast atom bombardment (FAB) MS and, more recently, electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI) techniques. ESI offers...

  8. Recent advances in mass spectrometry-based approaches for proteomics and biologics: Great contribution for developing therapeutic antibodies.

    Science.gov (United States)

    Iwamoto, Noriko; Shimada, Takashi

    2017-12-22

    Since the turn of the century, mass spectrometry (MS) technologies have continued to improve dramatically, and advanced strategies that were impossible a decade ago are increasingly becoming available. The basic characteristics behind these advancements are MS resolution, quantitative accuracy, and information science for appropriate data processing. The spectral data from MS contain various types of information. The benefits of improving the resolution of MS data include accurate molecular structure-derived information; as a result, refined biomolecular structure determination can be performed in a sequential and large-scale manner. Moreover, in MS data, not only the accurate structural information but also the generated ion amount plays an important role. This progress has greatly contributed to a research field that captures biological events as a system by comprehensively tracing the various changes in biomolecular dynamics. The sequential changes of proteome expression in biological pathways are essential, and the amounts of these changes often directly become the targets of drug discovery or indicators of clinical efficacy. To take this proteomic approach, it is necessary to separate the individual MS spectra derived from each biomolecule in complex biological samples. MS alone cannot achieve complete separation of all peaks, so methods for sample processing and purification should be improved to make samples suitable for injection into MS. Among analytical instruments, only MS can deliver the above-described characteristics. Moreover, MS is expected to be applied to and expand into many fields, not only basic life sciences but also forensic medicine, plant sciences, materials, and natural products. In this review, we focus on the technical fundamentals and future aspects of the strategies for accurate structural identification, structure-indicated quantitation, and on the challenges for pharmacokinetics of high

  9. Design of pulsed perforated-plate columns for industrial scale mass transfer applications - present experience and the need for a model based approach

    International Nuclear Information System (INIS)

    Roy, Amitava

    2010-01-01

    Mass transfer is a vital unit operation in the processing of spent nuclear fuel in the back end of the closed fuel cycle, and pulsed perforated-plate extraction columns have been used as mass transfer devices for more than five decades. The pulsed perforated-plate column is an agitated differential contactor, which has wide applicability due to its simplicity, high mass transfer efficiency, high throughput, suitability for maintenance-free remote operation, ease of cleaning/decontamination and cost effectiveness. Design of pulsed columns is based on a model proposed to describe the hydrodynamics and mass transfer. In the equilibrium stage model, the HETS values are obtained from pilot plant experiments and then scaled empirically to design columns for industrial application. The dispersion model accounts for mass transfer kinetics and back-mixing. The drop population balance model can describe the complex hydrodynamics of the dispersed phase, that is, drop formation, break-up and drop-to-drop interactions. In recent years, significant progress has been made in modelling pulsed columns using CFD, which provides a complete mathematical description of the hydrodynamics in terms of spatial distribution of flow fields and 3D visualization. Under the condition of pulsation, the poly-dispersed nature of the turbulent droplet swarm renders modelling difficult. In the absence of industry acceptance of the proposed models, the conventional chemical engineering practice is to use the HETS-NTS concept or the HTU-NTU approach to design extraction columns. The practicability of the HTU-NTU approach has some limitations due to the lack of experimental data on individual film mass transfer coefficients. Presently, the HETS-NTS concept has been used for designing the columns, which has given satisfactory performance. The design objective is mainly to arrive at the diameter and height of the mass transfer section for a specific plate geometry, fluid properties and pulsing condition to meet the intended throughput (capacity) and mass
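
The HTU-NTU (and analogous HETS-NTS) sizing logic can be sketched for the simplest textbook case of a dilute stream with a constant equilibrium concentration. All numbers are hypothetical; real pulsed-column design must account for plate geometry, pulsing conditions and fluid properties as the abstract stresses:

```python
import math

def transfer_units(c_in, c_out, c_star):
    """NTU for a dilute stream against a constant equilibrium
    concentration c_star (a textbook simplification)."""
    return math.log((c_in - c_star) / (c_out - c_star))

def column_height(htu_m, ntu):
    """HTU-NTU concept: height of the mass transfer section."""
    return htu_m * ntu

ntu = transfer_units(c_in=10.0, c_out=0.5, c_star=0.1)   # ≈ 3.2 transfer units
height = column_height(htu_m=0.6, ntu=ntu)               # HTU from pilot data
```

In the HETS-NTS variant, the pilot-plant HETS value simply replaces the HTU and the number of theoretical stages replaces the NTU.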

  10. Atomic configuration average simulations for plasma spectroscopy

    International Nuclear Information System (INIS)

    Kilcrease, D.P.; Abdallah, J. Jr.; Keady, J.J.; Clark, R.E.H.

    1993-01-01

    Configuration average atomic physics based on Hartree-Fock methods and an unresolved transition array (UTA) simulation theory are combined to provide a computationally efficient approach for calculating the spectral properties of plasmas involving complex ions. The UTA theory gives an overall representation for the many lines associated with a radiative transition from one configuration to another without calculating the fine structure in full detail. All of the atomic quantities required for synthesis of the spectrum are calculated in the same approximation and used to generate the parameters required for representation of each UTA, the populations of the various atomic states, and the oscillator strengths. We use this method to simulate the transmission of x-rays through an aluminium plasma. (author)

  11. A mascon approach to assess ice sheet and glacier mass balances and their uncertainties from GRACE data

    NARCIS (Netherlands)

    Schrama, E.J.O.; Wouters, B.; Rietbroek, R.

    2014-01-01

    The purpose of this paper is to assess the mass changes of the Greenland Ice Sheet (GrIS), Ice Sheets over Antarctica, and Land glaciers and Ice Caps with a global mascon method that yields monthly mass variations at 10,242 mascons. Input for this method are level 2 data from the Gravity Recovery

  12. Weighted south-wide average pulpwood prices

    Science.gov (United States)

    James E. Granskog; Kevin D. Growther

    1991-01-01

    Weighted average prices provide a more accurate representation of regional pulpwood price trends when production volumes vary widely by state. Unweighted South-wide average delivered prices for pulpwood, as reported by Timber Mart-South, were compared to average annual prices weighted by each state's pulpwood production from 1977 to 1986. Weighted average prices...
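
The weighting itself is straightforward: each state's price is weighted by its share of total pulpwood production. The prices and volumes below are hypothetical, chosen only to show how the weighted and unweighted averages diverge when volumes vary widely:

```python
def weighted_average_price(prices, volumes):
    """Production-weighted average price across states.

    prices, volumes: parallel sequences (price per unit, production volume).
    """
    total = sum(volumes)
    return sum(p * v for p, v in zip(prices, volumes)) / total

# Hypothetical state-level delivered prices ($/cord) and production volumes:
prices  = [20.0, 25.0, 30.0]
volumes = [100.0, 300.0, 600.0]
wavg = weighted_average_price(prices, volumes)   # 27.5 (dominated by big producers)
uavg = sum(prices) / len(prices)                 # 25.0 (unweighted mean)
```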

  13. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  14. Characterization of polyphenols in apricot and peach purees by UHPLC coupled to HRMS Q-Exactive(™) mass spectrometer: an approach in the identification of adulterations.

    Science.gov (United States)

    Cocconi, E; Stingone, C; Zanotti, A; Trifirò, A

    2016-09-01

    The genuineness of fruit juices and purees is regulated by the guidelines of the European Fruit Juice Association. Nevertheless, the addition of peach puree to apricot puree is considered the most common adulteration, and it is very difficult to detect. In this study, the free and conjugated polyphenol composition of apricot and peach purees was characterized by targeted and untargeted approaches with a Q-Exactive(™) quadrupole-Orbitrap mass spectrometer. Apricot purees showed a higher polyphenol content than those of peaches. Among target compounds, chlorogenic acid, rutin, catechin and smaller quantities of hyperoside and kaempferol-3-rutinoside were found in both purees. Apricot puree was also found to contain epicatechin and procyanidin B2, absent in peach puree. Peach puree was found to contain small amounts of kaempferol-3-glucoside, absent in apricot. In order to identify untargeted polyphenols, data obtained by ultra-high pressure liquid chromatography tandem mass spectrometry analysis were processed with Thermo Scientific automated label-free differential expression software (sieve(™) 2.1 software). Three hydroxycinnamic acid conjugates and a procyanidin were identified and confirmed by tandem mass spectrometry spectra. Some compounds of interest found from differential analysis had a putative identification, while others remained unidentified. The high-resolution mass spectrometry approach using the Q-Exactive(™) quadrupole-Orbitrap mass spectrometer could be an important and powerful tool for the determination of new biomarkers in fruits and vegetables. Copyright © 2016 John Wiley & Sons, Ltd.

  15. An Integrated Numerical Modelling-Discrete Fracture Network Approach Applied to the Characterisation of Rock Mass Strength of Naturally Fractured Pillars

    Science.gov (United States)

    Elmo, Davide; Stead, Doug

    2010-02-01

    Naturally fractured mine pillars provide an excellent example of the importance of accurately determining rock mass strength. Failure in slender pillars is predominantly controlled by naturally occurring discontinuities, their influence diminishing with increasing pillar width, with wider pillars failing through a combination of brittle and shearing processes. To accurately simulate this behaviour by numerical modelling, the current analysis incorporates a more realistic representation of the mechanical behaviour of discrete fracture systems. This involves realistic simulation and representation of fracture networks, either as individual entities or as a collective system of fracture sets, or a combination of both. By using an integrated finite element/discrete element-discrete fracture network approach it is possible to study the failure of rock masses in tension and compression, both along pre-existing fractures and through intact rock bridges, incorporating complex kinematic mechanisms. The proposed modelling approach fully captures the anisotropic and inhomogeneous effects of natural jointing and is considered to be more realistic than methods relying solely on continuum or discontinuum representation. The paper concludes with a discussion on the development of synthetic rock mass properties, with the intention of providing a more robust link between rock mass strength and rock mass classification systems.

  16. Combined effects of Mass and Velocity on forward displacement and phenomenological ratings: a functional measurement approach to the Momentum metaphor

    Directory of Open Access Journals (Sweden)

    Michel-Ange Amorim

    2010-01-01

    Full Text Available Representational Momentum (RepMo) refers to the phenomenon that the vanishing position of a moving target is perceived as displaced ahead in the direction of movement. Originally taken to reflect a strict internalization of physical momentum, the finding that the target's implied mass did not have an effect led to its subsequent reinterpretation as a second-order isomorphism between mental representations and principles of the physical world. However, very few studies have addressed the effects of mass on RepMo, and consistent replications of the null effect are lacking. The extent of motor engagement of the observers in RepMo tasks has, on the other hand, been suggested to determine the occurrence of the phenomenon; however, no systematic investigation has been made of the degree to which it might modulate the effect of target mass. In the present work, we use Information Integration Theory to study the joint effects of different motor responses, target velocity and target mass on RepMo, and also of velocity and target mass on rating responses. Outcomes point not only to an effect of mass on RepMo, but also to a differential effect of response modality on kinematic (e.g., velocity) and dynamic (e.g., mass) variables. Comparisons of patterns of mislocalisation with phenomenological ratings suggest that simplification of physical principles, rather than strict internalization or isomorphism per se, might underlie RepMo.

  17. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  18. Computerized decision support system for mass identification in breast using digital mammogram: a study on GA-based neuro-fuzzy approaches.

    Science.gov (United States)

    Das, Arpita; Bhattacharya, Mahua

    2011-01-01

    In the present work, the authors have developed a treatment planning system implementing genetic-based neuro-fuzzy approaches for accurate analysis of the shape and margin of tumor masses appearing in breast digital mammograms. It is obvious that a complicated structure invites the problem of over-learning and misclassification. In the proposed methodology, a genetic algorithm (GA) has been used for the search of effective input feature vectors, combined with an adaptive neuro-fuzzy model for the final classification of different boundaries of tumor masses. The study involves 200 digitized mammograms from MIAS and other databases and has shown an 86% correct classification rate.
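
The GA-driven search for effective input feature vectors can be illustrated with a toy binary feature-selection sketch. The fitness function, population size and operators below are hypothetical stand-ins, not the authors' neuro-fuzzy pipeline:

```python
import random

def ga_feature_select(fitness, n_features, pop=20, gens=30, seed=0):
    """Toy elitist GA over binary feature masks: keep the top half,
    refill with one-point crossover plus occasional bit-flip mutation."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(n_features)] ^= 1   # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy fitness: reward selecting features 0 and 2, penalise mask size.
target = {0, 2}
fit = lambda m: sum(1 for i, b in enumerate(m) if b and i in target) - 0.1 * sum(m)
best = ga_feature_select(fit, n_features=6)
```

In the paper the fitness would instead score how well a candidate feature subset lets the adaptive neuro-fuzzy classifier separate mass boundary types.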

  19. Rapid determination of actinides in urine by inductively coupled plasma mass spectrometry and alpha spectrometry: a hybrid approach.

    Science.gov (United States)

    Maxwell, Sherrod L; Jones, Vernon D

    2009-11-15

    A new rapid separation method that allows separation and preconcentration of actinides in urine samples was developed for the measurement of longer lived actinides by inductively coupled plasma mass spectrometry (ICP-MS) and short-lived actinides by alpha spectrometry; a hybrid approach. This method uses stacked extraction chromatography cartridges and vacuum box technology to facilitate rapid separations. Preconcentration, if required, is performed using a streamlined calcium phosphate precipitation. Similar technology has been applied to separate actinides prior to measurement by alpha spectrometry, but this new method has been developed with elution reagents now compatible with ICP-MS as well. Purified solutions are split between ICP-MS and alpha spectrometry so that long- and short-lived actinide isotopes can be measured successfully. The method allows for simultaneous extraction of 24 samples (including QC samples) in less than 3 h. Simultaneous sample preparation can offer significant time savings over sequential sample preparation. For example, sequential sample preparation of 24 samples taking just 15 min each requires 6 h to complete. The simplicity and speed of this new method makes it attractive for radiological emergency response. If preconcentration is applied, the method is applicable to larger sample aliquots for occupational exposures as well. The chemical recoveries are typically greater than 90%, in contrast to other reported methods using flow injection separation techniques for urine samples where plutonium yields were 70-80%. This method allows measurement of both long-lived and short-lived actinide isotopes. (239)Pu, (242)Pu, (237)Np, (243)Am, (234)U, (235)U and (238)U were measured by ICP-MS, while (236)Pu, (238)Pu, (239)Pu, (241)Am, (243)Am and (244)Cm were measured by alpha spectrometry. The method can also be adapted so that the separation of uranium isotopes for assay is not required, if uranium assay by direct dilution of the urine sample

  20. RAPID DETERMINATION OF ACTINIDES IN URINE BY INDUCTIVELY-COUPLED PLASMA MASS SPECTROMETRY AND ALPHA SPECTROMETRY: A HYBRID APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, S.; Jones, V.

    2009-05-27

    A new rapid separation method that allows separation and preconcentration of actinides in urine samples was developed for the measurement of longer lived actinides by inductively coupled plasma mass spectrometry (ICP-MS) and short-lived actinides by alpha spectrometry; a hybrid approach. This method uses stacked extraction chromatography cartridges and vacuum box technology to facilitate rapid separations. Preconcentration, if required, is performed using a streamlined calcium phosphate precipitation. Similar technology has been applied to separate actinides prior to measurement by alpha spectrometry, but this new method has been developed with elution reagents now compatible with ICP-MS as well. Purified solutions are split between ICP-MS and alpha spectrometry so that long- and short-lived actinide isotopes can be measured successfully. The method allows for simultaneous extraction of 24 samples (including QC samples) in less than 3 h. Simultaneous sample preparation can offer significant time savings over sequential sample preparation. For example, sequential sample preparation of 24 samples taking just 15 min each requires 6 h to complete. The simplicity and speed of this new method makes it attractive for radiological emergency response. If preconcentration is applied, the method is applicable to larger sample aliquots for occupational exposures as well. The chemical recoveries are typically greater than 90%, in contrast to other reported methods using flow injection separation techniques for urine samples where plutonium yields were 70-80%. This method allows measurement of both long-lived and short-lived actinide isotopes. 239Pu, 242Pu, 237Np, 243Am, 234U, 235U and 238U were measured by ICP-MS, while 236Pu, 238Pu, 239Pu, 241Am, 243Am and 244Cm were measured by alpha spectrometry. The method can also be adapted so that the separation of uranium isotopes for assay is not required, if uranium assay by direct dilution of the urine sample is preferred instead

  1. A Proteomic Approach for Identification of Bacteria Using Tandem Mass Spectrometry Combined With a Translatome Database and Statistical Scoring

    National Research Council Canada - National Science Library

    Dworzanski, Jacek P; Snyder, A. P; Zhang, Haiyan; Wishart, David; Chen, Rui; Li, Liang

    2005-01-01

    ... mass spectra against a database translated from fully sequenced bacterial genomes. An in-house developed algorithm for filtering of search results has been tested with Bacillus subtilis and Escherichia coli microorganism...

  2. Grade-Average Method: A Statistical Approach for Estimating ...

    African Journals Online (AJOL)

    In this paper we propose an alternative way for finding an estimate of a missing score for continuous assessment mark of an examination so as to allocate an appropriate grade. We considered four different examinations and randomly selected five students of different class of grade in each, with their actual Examinations ...

  3. MALDI imaging mass spectrometry analysis-A new approach for protein mapping in multiple sclerosis brain lesions.

    Science.gov (United States)

    Maccarrone, Giuseppina; Nischwitz, Sandra; Deininger, Sören-Oliver; Hornung, Joachim; König, Fatima Barbara; Stadelmann, Christine; Turck, Christoph W; Weber, Frank

    2017-03-15

    Multiple sclerosis is a disease of the central nervous system characterized by recurrent inflammatory demyelinating lesions in the early disease stage. Lesion formation and mechanisms leading to lesion remyelination are not fully understood. Matrix Assisted Laser Desorption Ionisation Mass Spectrometry imaging (MALDI-IMS) is a technology which analyses proteins and peptides in tissue, preserves their spatial localization, and generates molecular maps within the tissue section. In a pilot study we employed MALDI imaging mass spectrometry to profile and identify peptides and proteins expressed in normal-appearing white matter, grey matter and multiple sclerosis brain lesions with different extents of remyelination. The unsupervised clustering analysis of the mass spectra generated images which reflected the tissue section morphology in luxol fast blue stain and in myelin basic protein immunohistochemistry. Lesions with low remyelination extent were defined by compounds with molecular weight smaller than 5300 Da, while more completely remyelinated lesions showed compounds with molecular weights greater than 15,200 Da. An in-depth analysis of the mass spectra enabled the detection of cortical lesions which were not seen by routine luxol fast blue histology. An ion mass, mainly distributed at the rim of multiple sclerosis lesions, was identified by liquid chromatography and tandem mass spectrometry as thymosin beta-4, a protein known to be involved in cell migration and in restorative processes. The ion mass of thymosin beta-4 was profiled by MALDI imaging mass spectrometry in brain slides of 12 multiple sclerosis patients and validated by immunohistochemical analysis. In summary, our results demonstrate the ability of the MALDI-IMS technology to map proteins within the brain parenchyma and multiple sclerosis lesions and to identify potential markers involved in multiple sclerosis pathogenesis and/or remyelination. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. An approach for determining quantitative measures for bone volume and bone mass in the pediatric spina bifida population.

    Science.gov (United States)

    Horenstein, Rachel E; Shefelbine, Sandra J; Mueske, Nicole M; Fisher, Carissa L; Wren, Tishya A L

    2015-08-01

    The pediatric spina bifida population suffers from decreased mobility and recurrent fractures. This study aimed to develop a method for quantifying bone mass along the entire tibia in youth with spina bifida. This will provide information about all potential sites of bone deficiencies. Computed tomography images of the tibia for 257 children (n=80 ambulatory spina bifida, n=10 non-ambulatory spina bifida, n=167 typically developing) were analyzed. Bone area was calculated at regular intervals along the entire tibia length and then weighted by calibrated pixel intensity for density weighted bone area. Integrals of density weighted bone area were used to quantify bone mass in the proximal and distal epiphyses and diaphysis. Group differences were evaluated using analysis of variance. Non-ambulatory children suffer from decreased bone mass in the diaphysis and proximal and distal epiphyses compared to ambulatory and control children (P≤0.001). Ambulatory children with spina bifida showed statistically insignificant differences in bone mass in comparison to typically developing children at these sites (P>0.5). This method provides insight into tibial bone mass distribution in the pediatric spina bifida population by incorporating information along the whole length of the bone, thereby providing more information than dual-energy x-ray absorptiometry and peripheral quantitative computed tomography. This method can be applied to any population to assess bone mass distribution across the length of any long bone. Copyright © 2015 Elsevier Ltd. All rights reserved.
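    The quantification described above (bone area per cross-section, weighted by calibrated pixel intensity, then integrated along the bone's length) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the segmentation threshold, array layout and function names are assumptions.

```python
import numpy as np

def density_weighted_bone_mass(slices, pixel_area_mm2, z_spacing_mm, threshold_hu=200.0):
    """At each cross-section along the tibia, sum calibrated pixel intensities
    over bone pixels to get a density-weighted bone area (DWBA), then integrate
    DWBA along the bone's length (trapezoidal rule) for a bone-mass surrogate.
    The intensity threshold used to segment bone is an assumed placeholder."""
    dwba = []
    for img in slices:                      # img: 2-D array of calibrated CT intensities
        bone = img > threshold_hu           # crude bone segmentation
        dwba.append(img[bone].sum() * pixel_area_mm2)
    dwba = np.asarray(dwba)
    # trapezoidal integration of density-weighted area along the slice axis
    mass_surrogate = z_spacing_mm * (dwba[:-1] + dwba[1:]).sum() / 2.0
    return dwba, mass_surrogate
```

    Integrals over proximal, diaphyseal and distal sub-ranges of `dwba` would then give the per-region bone mass values compared between groups.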

  5. The H-Index of `An Approach to Correlate Tandem Mass Spectral Data of Peptides with Amino Acid Sequences in a Protein Database'

    Science.gov (United States)

    Washburn, Michael P.

    2015-11-01

    Over 20 years ago a remarkable paper was published in the Journal of the American Society for Mass Spectrometry. This paper from Jimmy Eng, Ashley McCormack, and John Yates described the use of protein databases to drive the interpretation of tandem mass spectra of peptides. This paper now has over 3660 citations and continues to average more than 260 per year over the last decade. This is an amazing scientific achievement. The reason is that the paper was a cutting-edge development at the moment in time when genomes of organisms were being sequenced, protein and peptide mass spectrometry was growing into the field of proteomics, and the power of computing was growing quickly in accordance with Moore's law. This work by the Yates lab grew in importance as genomics, proteomics, and computation all advanced, and it eventually resulted in the widely used SEQUEST algorithm and platform for the analysis of tandem mass spectrometry data. This commentary provides an analysis of the impact of this paper by analyzing the citations it has generated and the impact of these citing papers.

  6. Averaging in cosmological models using scalars

    International Nuclear Information System (INIS)

    Coley, A A

    2010-01-01

    The averaging problem in cosmology is of considerable importance for the correct interpretation of cosmological data. A rigorous mathematical definition of averaging in a cosmological model is necessary. In general, a spacetime is completely characterized by its scalar curvature invariants, and this suggests a particular spacetime averaging scheme based entirely on scalars. We clearly identify the problems of averaging in a cosmological model. We then present a precise definition of a cosmological model, and based upon this definition, we propose an averaging scheme in terms of scalar curvature invariants. This scheme is illustrated in a simple static spherically symmetric perfect fluid cosmological spacetime, where the averaging scales are clearly identified.

  7. A mascon approach to assess ice sheet and glacier mass balances and their uncertainties from GRACE data

    Science.gov (United States)

    Schrama, Ernst J. O.; Wouters, Bert; Rietbroek, Roelof

    2014-07-01

    The purpose of this paper is to assess the mass changes of the Greenland Ice Sheet (GrIS), Ice Sheets over Antarctica, and Land glaciers and Ice Caps with a global mascon method that yields monthly mass variations at 10,242 mascons. Input for this method are level 2 data from the Gravity Recovery and Climate Experiment (GRACE) system collected between February 2003 and June 2013 to which a number of corrections are made. With glacial isostatic adjustment (GIA) corrections from an ensemble of models based on different ice histories and rheologic Earth model parameters, we find for Greenland a mass loss of -278 ± 19 Gt/yr. Whereas the mass balances for the GrIS appear to be less sensitive to GIA modeling uncertainties, this is not the case with the mass balance of Antarctica. Ice history models for Antarctica were recently improved, and updated historic ice height data sets and GPS time series have been used to generate new GIA models. We investigated the effect of two new GIA models for Antarctica and found -92 ± 26 Gt/yr which is half of what is obtained with ICE-5G-based GIA models, where the largest GIA model differences occur on East Antarctica. The mass balance of land glaciers and ice caps currently stands at -162 ± 10 Gt/yr. With the help of new GIA models for Antarctica, we assess the mass contribution to the mean sea level at 1.47 ± 0.09 mm/yr or 532 ± 34Gt/yr which is roughly half of the global sea level rise signal obtained from tide gauges and satellite altimetry.
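    The closing conversion from a mass-loss rate to a mean sea-level contribution is simple dilution arithmetic. The sketch below assumes a round ocean-surface area of 3.61e14 m² and a melt-water density of 1000 kg/m³ (so 1 Gt corresponds to 1e9 m³ of water); these round constants are assumptions, not the paper's exact values.

```python
def gt_per_yr_to_mm_slr(gt_per_yr, ocean_area_m2=3.61e14):
    """Convert an ice-mass loss rate in Gt/yr to mean sea-level rise in mm/yr,
    assuming the melt water (1 Gt = 1e9 m^3 at 1000 kg/m^3) spreads uniformly
    over an assumed ocean surface area."""
    volume_m3 = gt_per_yr * 1e9
    return volume_m3 / ocean_area_m2 * 1e3  # metres -> millimetres

# The abstract's three mass balances sum to its quoted total:
total = 278 + 92 + 162   # GrIS + Antarctica + glaciers/ice caps, Gt/yr
print(round(gt_per_yr_to_mm_slr(total), 2))  # ≈ 1.47 mm/yr, matching the abstract
```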

  8. Novel Approaches to Visualization and Data Mining Reveals Diagnostic Information in the Low Amplitude Region of Serum Mass Spectra from Ovarian Cancer Patients

    Directory of Open Access Journals (Sweden)

    Donald J. Johann

    2004-01-01

    Full Text Available The ability to identify patterns of diagnostic signatures in proteomic data generated by high throughput mass spectrometry (MS) based serum analysis has recently generated much excitement and interest from the scientific community. These data sets can be very large, with high-resolution MS instrumentation producing 1–2 million data points per sample. Approaches to analyze mass spectral data using unsupervised and supervised data mining operations would greatly benefit from tools that effectively allow for data reduction without losing important diagnostic information. In the past, investigators have proposed approaches where data reduction is performed by a priori “peak picking” and alignment/warping/smoothing components using rule-based signal-to-noise measurements. Unfortunately, while this type of system has been employed for gene microarray analysis, it is unclear whether it will be effective in the analysis of mass spectral data, which unlike microarray data, is comprised of continuous measurement operations. Moreover, it is unclear where true signal begins and noise ends. Therefore, we have developed an approach to MS data analysis using new types of data visualization and mining operations in which data reduction is accomplished by culling via the intensity of the peaks themselves instead of by location. Applying this new analysis method to a large study set of high-resolution mass spectra from healthy and ovarian cancer patients shows that all of the diagnostic information is contained within the very lowest amplitude regions of the mass spectra. This region can then be selected and studied to identify the exact location and amplitude of the diagnostic biomarkers.

  9. Novel Approaches to Visualization and Data Mining Reveals Diagnostic Information in the Low Amplitude Region of Serum Mass Spectra from Ovarian Cancer Patients

    Science.gov (United States)

    Johann, Donald J.; McGuigan, Michael D.; Tomov, Stanimire; Fusaro, Vincent A.; Ross, Sally; Conrads, Thomas P.; Veenstra, Timothy D.; Fishman, David A.; Whiteley, Gordon R.; Petricoin, Emanuel F.; Liotta, Lance A.

    2004-01-01

    The ability to identify patterns of diagnostic signatures in proteomic data generated by high throughput mass spectrometry (MS) based serum analysis has recently generated much excitement and interest from the scientific community. These data sets can be very large, with high-resolution MS instrumentation producing 1–2 million data points per sample. Approaches to analyze mass spectral data using unsupervised and supervised data mining operations would greatly benefit from tools that effectively allow for data reduction without losing important diagnostic information. In the past, investigators have proposed approaches where data reduction is performed by a priori “peak picking” and alignment/warping/smoothing components using rule-based signal-to-noise measurements. Unfortunately, while this type of system has been employed for gene microarray analysis, it is unclear whether it will be effective in the analysis of mass spectral data, which unlike microarray data, is comprised of continuous measurement operations. Moreover, it is unclear where true signal begins and noise ends. Therefore, we have developed an approach to MS data analysis using new types of data visualization and mining operations in which data reduction is accomplished by culling via the intensity of the peaks themselves instead of by location. Applying this new analysis method on a large study set of high resolution mass spectra from healthy and ovarian cancer patients, shows that all of the diagnostic information is contained within the very lowest amplitude regions of the mass spectra. This region can then be selected and studied to identify the exact location and amplitude of the diagnostic biomarkers. PMID:15258334
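    The central data-reduction idea, culling points by amplitude rather than by m/z location, can be sketched as below. The percentile band and function names are illustrative assumptions; the paper's actual visualization and mining pipeline is not reproduced here.

```python
import numpy as np

def cull_by_intensity(mz, intensity, lo_pct=0.0, hi_pct=20.0):
    """Keep only the (m/z, intensity) points whose amplitude falls inside a
    chosen percentile band of the spectrum, e.g. the lowest 20% here, which
    is where the abstract reports the diagnostic information resides."""
    lo, hi = np.percentile(intensity, [lo_pct, hi_pct])
    keep = (intensity >= lo) & (intensity <= hi)
    return mz[keep], intensity[keep]
```

    The reduced point set can then be passed to unsupervised or supervised mining without any prior peak picking or alignment step.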

  10. Sequence protein identification by randomized sequence database and transcriptome mass spectrometry (SPIDER-TMS): from manual to automatic application of a 'de novo sequencing' approach.

    Science.gov (United States)

    Pascale, Raffaella; Grossi, Gerarda; Cruciani, Gabriele; Mecca, Giansalvatore; Santoro, Donatello; Sarli Calace, Renzo; Falabella, Patrizia; Bianco, Giuliana

    The 'sequence protein identification by randomized sequence database and transcriptome mass spectrometry' (SPIDER-TMS) software package has been developed at the University of Basilicata in Potenza (Italy). It is designed to facilitate determination of the amino acid sequence of a peptide and unequivocal identification of proteins in a high-throughput manner, with substantial savings in time, economic resources and expertise. The software package is a valid tool for automating a de novo sequencing approach, overcoming its main limits, and a versatile platform in the proteomic field for unequivocal identification of proteins starting from tandem mass spectrometry data. Its strength is that it is a user-friendly, non-statistical approach, so protein identification can be considered unambiguous.

  11. Stable isotope approach to fission product element studies of soil-to-plant transfer and in vitro modelling of ruminant digestion using inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Robb, Paul; Owen, L.M.W.; Crews, H.M.

    1995-01-01

    A stable isotope approach has been used to investigate two aspects of the behaviour of fission product elements in the environment and food chains using inductively coupled plasma mass spectrometry (ICP-MS). Limits of detection (dry mass LODs) of 0.053 mg kg⁻¹ for Sr, 0.011 mg kg⁻¹ for Cs and 0.084 mg kg⁻¹ for Ce were low enough to allow the determination of soil-to-plant transfer factors for soft fruit and the application of the approach to an in vitro model of ruminant digestion. The multi-element measurement capability of ICP-MS also permitted the analysis of selected nutrients, including zinc, in in vitro experiments. (author)

  12. Recent advances for measurement of protein synthesis rates, use of the 'Virtual Biopsy' approach, and measurement of muscle mass.

    Science.gov (United States)

    Hellerstein, Marc; Evans, William

    2017-05-01

    Flux-rate measurements of protein synthesis and breakdown (turnover) in muscle represent an ideal class of mechanism-based biomarkers for conditions of altered muscle mass and function. We describe here new metabolic labeling techniques for flux-rate measurements in humans, focusing on skeletal muscle. Dynamics of the muscle proteome are accurately measured in humans by combining long-term heavy water labeling with tandem mass spectrometry. Broad proteomic flux signatures or kinetics of targeted proteins are measurable. After interventions, early fractional synthesis rates of skeletal muscle proteins predict later changes in muscle mass. The 'virtual biopsy' method for measuring tissue protein turnover rates from body fluids has been validated for skeletal muscle, from labeling of plasma creatine kinase-type M or carbonic anhydrase-3. Label in these proteins in plasma reflects label of cognate proteins in the tissue, and response in plasma predicts longer term outcomes. Skeletal muscle mass can also be measured noninvasively from a spot urine, based on dilution of labeled creatine. This method correlates well with whole body MRI assessment of muscle mass and predicts clinical outcomes in older men. Flux measurements are available and more interpretable functionally than static measurements for several reasons, which are discussed.
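    The spot-urine muscle-mass measurement mentioned above rests on isotope-dilution arithmetic: the labeled creatine dose is diluted into the body's creatine pool, which resides almost entirely in skeletal muscle. The sketch below is illustrative only; the dose, enrichment and creatine concentration per kg of muscle are assumed round numbers, not the authors' calibration.

```python
def muscle_mass_from_creatine_dilution(dose_mg, enrichment, creatine_per_kg_muscle_g=4.3):
    """Isotope-dilution sketch (all constants are assumptions): total-body
    creatine pool = tracer dose / measured urinary enrichment, and muscle mass
    follows from an assumed creatine content per kg of wet muscle."""
    pool_g = (dose_mg / 1000.0) / enrichment   # grams of creatine in the body pool
    return pool_g / creatine_per_kg_muscle_g   # kilograms of muscle

# e.g. a hypothetical 60 mg labeled dose diluted to 0.05% enrichment:
print(round(muscle_mass_from_creatine_dilution(60, 0.0005), 1))
```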

  13. Phenomenological approach to the modelling of elliptical galaxies: The problem of the mass-to-light ratio

    Directory of Open Access Journals (Sweden)

    Samurović S.

    2007-01-01

    Full Text Available In this paper the problem of the phenomenological modelling of elliptical galaxies using various available observational data is presented. Recently, Tortora, Cardona and Piedipalumbo (2007) suggested a double power law expression for the global cumulative mass-to-light ratio of elliptical galaxies. We tested their expression on a sample of ellipticals for which we have estimates of the mass-to-light ratio beyond ~3 effective radii, a region where dark matter is expected to play an important dynamical role. We found that, for all the galaxies in our sample, α + β > 0, but that this does not necessarily mean a high dark matter content. The galaxies with higher mass (and higher dark matter content) also have higher values of α + β. There is also an indication that the galaxies with higher values of the effective radius have higher dark matter content.

  14. Next generation offline approaches to trace organic compound speciation: Approaching comprehensive speciation with soft ionization and very high resolution tandem mass spectrometry

    Science.gov (United States)

    Khare, P.; Marcotte, A.; Sheu, R.; Ditto, J.; Gentner, D. R.

    2017-12-01

    Intermediate- and semi-volatile organic compounds (IVOCs and SVOCs) have high secondary organic aerosol (SOA) yields, as well as significant ozone formation potentials. Yet, their emission sources and oxidation pathways remain largely understudied due to limitations in current analytical capabilities. Online mass spectrometers are able to collect real time data but their limited mass resolving power renders molecular level characterization of IVOCs and SVOCs from the unresolved complex mixture unfeasible. With proper sampling techniques and powerful analytical instrumentation, our offline tandem mass spectrometry (i.e. MS×MS) techniques provide molecular-level and structural identification over wide polarity and volatility ranges. We have designed a novel analytical system for offline analysis of gas-phase SOA precursors collected on custom-made multi-bed adsorbent tubes. Samples are desorbed into helium via a gradual temperature ramp and sample flow is split equally for direct-MS×MS analysis and separation via gas chromatography (GC). The effluent from GC separation is split again for analysis via atmospheric pressure chemical ionization quadrupole time-of-flight mass spectrometry (APCI-Q×TOF) and traditional electron ionization mass spectrometry (EI-MS). The compounds for direct-MS×MS analysis are delivered via a transfer line maintained at 70 °C directly to APCI-Q×TOF, thus preserving the molecular integrity of thermally-labile, or other highly-reactive, organic compounds. Both our GC-MS×MS and direct-MS×MS analyses report high accuracy parent ion masses as well as information on molecular structure via MS×MS, which together increase the resolution of unidentified complex mixtures. We demonstrate instrument performance and present preliminary results from urban atmospheric samples collected from New York City with a wide range of compounds including highly-functionalized organic compounds previously understudied in outdoor air. Our work offers new

  15. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  16. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
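    The Monte Carlo estimation of ARL for an EWMA chart on exponential data can be sketched as below. For simplicity the observations here are independent; the paper's copula-induced dependence is omitted, and the smoothing constant and control limit are assumed placeholder values.

```python
import random

def ewma_arl(lam=0.1, limit_h=0.5, target=1.0, shift=1.0, n_runs=2000, max_len=10_000, seed=1):
    """Monte-Carlo estimate of the Average Run Length of a one-sided EWMA chart
    for exponential observations (parameters are illustrative assumptions).
    EWMA recursion: z_i = lam*x_i + (1-lam)*z_{i-1}; signal when z_i - target > limit_h."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        z, t = target, 0
        while t < max_len:
            t += 1
            x = rng.expovariate(1.0 / (target * shift))  # exponential, mean = target*shift
            z = lam * x + (1 - lam) * z
            if z - target > limit_h:
                break                                    # out-of-control signal
        total += t
    return total / n_runs
```

    For a given in-control ARL, `limit_h` would be tuned first; shifted processes (`shift > 1`) should then signal much sooner than the in-control process.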

  17. A New Approach to Determine the Density of Liquids and Solids without Measuring Mass and Volume: Introducing the "Solidensimeter"

    Science.gov (United States)

    Kiriktas, Halit; Sahin, Mehmet; Eslek, Sinan; Kiriktas, Irem

    2018-01-01

    This study aims to design a mechanism with which the density of any solid or liquid can be determined without measuring its mass and volume in order to help students comprehend the concept of density more easily. The "solidensimeter" comprises of two scaled and nested glass containers (graduated cylinder or beaker) and sufficient water.…

  18. Desorption atmospheric pressure photoionization high-resolution mass spectrometry: a complementary approach for the chemical analysis of atmospheric aerosols

    Czech Academy of Sciences Publication Activity Database

    Parshintsev, J.; Vaikkinen, A.; Lipponen, K.; Vrkoslav, Vladimír; Cvačka, Josef; Kostiainen, R.; Kotiaho, T.; Hartonen, K.; Riekkola, M. L.; Kauppila, T. J.

    2015-01-01

    Roč. 29, č. 13 (2015), s. 1233-1241 ISSN 0951-4198 Grant - others:GA AV ČR(CZ) M200551204 Institutional support: RVO:61388963 Keywords : atmospheric aerosols * mass spectrometry * ambient ionization Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 2.226, year: 2015

  19. Differentiation of whole grain and refined wheat (T. aestivum) flour using a fuzzy mass spectrometric fingerprinting and chemometric approaches

    Science.gov (United States)

    A fuzzy mass spectrometric (MS) fingerprinting method combined with chemometric analysis was established to provide rapid discrimination between whole grain and refined wheat flour. Twenty one samples, including thirteen samples from three cultivars and eight from local grocery store, were studied....

  20. Modelling the surface mass balance of the Greenland ice sheet and neighbouring ice caps : A dynamical and statistical downscaling approach

    NARCIS (Netherlands)

    Noël, B.P.Y.|info:eu-repo/dai/nl/370612345

    2018-01-01

    The Greenland ice sheet (GrIS) is the world’s second largest ice mass, storing about one tenth of the Earth’s freshwater. If totally melted, global sea level would rise by 7.4 m, affecting low-lying regions worldwide. Since the mid-1990s, increased atmospheric and oceanic temperatures have

  1. New Approach to School Health Initiatives: Using Fitness Measures Instead of Body Mass Index to Evaluate Outcomes

    Science.gov (United States)

    Phelps, Joshua; Smith, Amanda; Parker, Stephany; Hermann, Janice

    2016-01-01

    Oklahoma Cooperative Extension Service provided elementary school students with a program that included a noncompetitive physical activity component: circuit training that combined cardiovascular, strength, and flexibility activities without requiring high skill levels. The intent was to improve fitness without focusing on body mass index as an…

  2. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  3. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
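    The pre-averaged returns the abstract refers to can be sketched as follows. This is a bare-bones version under assumptions: the standard weight g(x) = min(x, 1-x) is used, the noise bias correction is omitted, and the normalisation is the simple one for noise-free data, so it is not the full Jacod et al. (2009) estimator.

```python
import numpy as np

def preaveraged_rv(log_prices, kn):
    """Pre-averaged realized volatility sketch: form weighted averages of kn-1
    consecutive returns over all overlapping blocks (weights g(j/kn) with
    g(x)=min(x,1-x)), then sum the squared pre-averaged returns, normalised by
    kn*psi2 where psi2 = (1/kn) * sum_j g(j/kn)^2. Noise correction omitted."""
    r = np.diff(log_prices)
    j = np.arange(1, kn)
    g = np.minimum(j / kn, 1 - j / kn)
    psi2 = np.sum(g**2) / kn
    bar_r = np.convolve(r, g[::-1], mode="valid")  # all overlapping blocks
    return np.sum(bar_r**2) / (kn * psi2)
```

    The overlap is what makes the pre-averaged returns kn-dependent, which is exactly why the paper needs a blockwise (rather than i.i.d.) bootstrap.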

  4. Cosmological ensemble and directional averages of observables

    CERN Document Server

    Bonvin, Camille; Durrer, Ruth; Maartens, Roy; Umeh, Obinna

    2015-01-01

    We show that at second order ensemble averages of observables and directional averages do not commute due to gravitational lensing. In principle this non-commutativity is significant for a variety of quantities we often use as observables. We derive the relation between the ensemble average and the directional average of an observable, at second-order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focussing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance is increased by gravitational lensing, whereas the directional average of the distance is decreased. We show that for a generic observable, there exists a particular function of the observable that is invariant under second-order lensing perturbations.

  5. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family

  6. A high-resolution accurate mass (HR/AM) approach to identification, profiling and characterization of in vitro nefazodone metabolites using a hybrid quadrupole Orbitrap (Q-Exactive).

    Science.gov (United States)

    Perry, Simon J; Nász, Szilárd; Saeed, Mansoor

    2015-09-15

    This paper describes a strategy for the profiling and identification of metabolites based on chemical group classification using high-resolution accurate mass (HR/AM) full scan mass spectrometry (MS) and All-Ion fragmentation (AIF) MS² data. The proposed strategy uses a hybrid quadrupole Orbitrap (Q-Exactive) employing stepped normalised collision energy (NCE) at 35% and 80% to produce key chemically diagnostic product ions from full coverage of the product ion spectrum. This approach allows filtering of high-resolution AIF MS² data in order to identify parent-related compounds produced following incubation in rat liver microsomes (RLMs). An antidepressant drug, nefazodone (NEF), was selected as the model test compound to demonstrate the proposed workflow for metabolite profiling. This resulted in the identification of three indicative chemical groups within NEF: triazolone, phenoxy and chlorophenylpiperazine. High-resolution mass spectrometry provides increased specificity to distinguish between two characteristic product ion masses, m/z 154.0975 (C7H12N3O) and 154.0419 (C8H9NCl), which are not fully resolved by spectrometers operating at nominal mass resolution, indicative of compounds containing the triazolone and chlorophenylpiperazine moieties, respectively. This post-acquisition processing strategy provides comprehensive detection and identification of high- and low-level metabolites from an 'all-in-one' analysis. This enables functional groups to be systematically traced across a wide range of metabolites, leading to the successful identification of 28 in vitro NEF-related metabolites. In our hands this approach has been applied to agrochemical environmental fate and dietary metabolism studies, as well as metabolomics and biomarker analysis. Copyright © 2015 John Wiley & Sons, Ltd.
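    The two diagnostic fragment masses quoted above can be checked with back-of-the-envelope monoisotopic-mass arithmetic, which also shows roughly what resolving power is needed to separate them. This is an independent calculation, not the vendor's software; small last-digit differences from the abstract are rounding.

```python
# Monoisotopic masses of the relevant elements (u) and the electron mass.
MONO = {"C": 12.0, "H": 1.00782503, "N": 14.00307401, "O": 15.99491462, "Cl": 34.96885271}
ELECTRON = 0.00054858

def cation_mz(formula_counts):
    """Monoisotopic m/z of a singly charged cation, given element counts:
    sum the atomic masses and remove one electron mass for the +1 charge."""
    m = sum(MONO[el] * n for el, n in formula_counts.items())
    return m - ELECTRON

triazolone_frag = cation_mz({"C": 7, "H": 12, "N": 3, "O": 1})   # ~154.0975
piperazine_frag = cation_mz({"C": 8, "H": 9, "N": 1, "Cl": 1})   # ~154.0418
# Resolving power m/dm needed to separate the isobaric pair:
resolving_power = 154.07 / (triazolone_frag - piperazine_frag)
```

    The required m/Δm is only a few thousand, comfortably within Orbitrap performance but beyond unit-resolution instruments, which is the abstract's point.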

  7. Identification of urinary biomarkers of exposure to di-(2-propylheptyl) phthalate using high-resolution mass spectrometry and two data-screening approaches.

    Science.gov (United States)

    Shih, Chia-Lung; Liao, Pao-Mei; Hsu, Jen-Yi; Chung, Yi-Ning; Zgoda, Victor G; Liao, Pao-Chi

    2018-02-01

    Di-(2-propylheptyl) phthalate (DPHP) is a plasticizer used in polyvinyl chloride and vinyl chloride copolymer that has been suggested to be a toxicant in rats and may affect human health. Because the use of DPHP is increasing, the general German population is being exposed to DPHP. Toxicant metabolism is important for human toxicant exposure assessments. To date, the knowledge regarding DPHP metabolism has been limited, and only four metabolites have been identified in human urine. Ultra-performance liquid chromatography was coupled with Orbitrap high-resolution mass spectrometry (MS) and two data-screening approaches-the signal mining algorithm with isotope tracing (SMAIT) and the mass defect filter (MDF)-for DPHP metabolite candidate discovery. In total, 13 and 104 metabolite candidates were identified by the two approaches, respectively, in in vitro DPHP incubation samples. Of these candidates, 17 were validated as tentative exposure biomarkers using a rat model, 13 of which have not been reported in the literature. The two approaches generated rather different tentative DPHP exposure biomarkers, indicating that these approaches are complementary for discovering exposure biomarkers. Compared with the four previously reported DPHP metabolites, the three tentative novel biomarkers had higher peak intensity ratios, and two were confirmed as DPHP hydroxyl metabolites based on their MS/MS product ion profiles. These three tentative novel biomarkers should be further investigated for potential application in human exposure assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
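    Of the two screening approaches named above, the mass defect filter (MDF) lends itself to a minimal sketch: keep only peaks whose mass defect lies close to the parent's, since common biotransformations (+O, +O2, glucuronidation, etc.) shift the defect only slightly relative to unrelated background ions. The window, the example peak list, and the DPHP monoisotopic mass (computed for C28H46O4) are illustrative assumptions, not the authors' settings.

```python
def mass_defect(mz):
    """Fractional part of the mass relative to the nearest integer."""
    return mz - round(mz)

def mdf_candidates(peaks_mz, parent_mz, window=0.05):
    """Minimal mass-defect-filter sketch: retain peaks whose mass defect is
    within `window` of the parent compound's mass defect."""
    target = mass_defect(parent_mz)
    return [mz for mz in peaks_mz if abs(mass_defect(mz) - target) <= window]

dphp = 446.3396  # monoisotopic mass of DPHP (C28H46O4), used here as the parent
peaks = [462.3345, 478.3294, 300.0001, 446.3396]  # hypothetical observed peaks
print(mdf_candidates(peaks, dphp))  # → [462.3345, 478.3294, 446.3396]
```

    The SMAIT approach, by contrast, screens for signals whose isotope patterns track the labeled parent, which is why the two candidate lists in the study overlap so little.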

  8. Current Direct Neutrino Mass Experiments

    Directory of Open Access Journals (Sweden)

    G. Drexlin

    2013-01-01

    Full Text Available In this contribution, we review the status and perspectives of direct neutrino mass experiments, which investigate the kinematics of β-decays of specific isotopes (³H, ¹⁸⁷Re, ¹⁶³Ho) to derive model-independent information on the averaged electron (anti)neutrino mass. After discussing the kinematics of β-decay and the determination of the neutrino mass, we give a brief overview of past neutrino mass measurements (SN1987a-ToF studies, Mainz and Troitsk experiments for ³H, cryobolometers for ¹⁸⁷Re). We then describe the Karlsruhe Tritium Neutrino (KATRIN) experiment currently under construction at Karlsruhe Institute of Technology, which will use the MAC-E-Filter principle to push the sensitivity down to a value of 200 meV (90% C.L.). To do so, many technological challenges have to be solved related to source intensity and stability, as well as precision energy analysis and low background rate close to the kinematic endpoint of tritium β-decay at 18.6 keV. We then review new approaches such as the MARE, ECHO, and Project8 experiments, which offer the promise to perform an independent measurement of the neutrino mass in the sub-eV region. Altogether, the novel methods developed in direct neutrino mass experiments will provide vital information on the absolute mass scale of neutrinos.

  9. Bounce-averaged Fokker-Planck code for stellarator transport

    International Nuclear Information System (INIS)

    Mynick, H.E.; Hitchon, W.N.G.

    1985-07-01

    A computer code for solving the bounce-averaged Fokker-Planck equation appropriate to stellarator transport has been developed, and its first applications made. The code is much faster than the bounce-averaged Monte-Carlo codes, which up to now have provided the most efficient numerical means for studying stellarator transport. Moreover, because the connection to analytic kinetic theory of the Fokker-Planck approach is more direct than for the Monte-Carlo approach, a comparison of theory and numerical experiment is now possible at a considerably more detailed level than previously

  10. A Mass Spectrometry-Based Approach for Mapping Protein Subcellular Localization Reveals the Spatial Proteome of Mouse Primary Neurons

    Directory of Open Access Journals (Sweden)

    Daniel N. Itzhak

    2017-09-01

    Full Text Available We previously developed a mass spectrometry-based method, dynamic organellar maps, for the determination of protein subcellular localization and identification of translocation events in comparative experiments. The use of metabolic labeling for quantification (stable isotope labeling by amino acids in cell culture [SILAC] renders the method best suited to cells grown in culture. Here, we have adapted the workflow to both label-free quantification (LFQ and chemical labeling/multiplexing strategies (tandem mass tagging [TMT]. Both methods are highly effective for the generation of organellar maps and capture of protein translocations. Furthermore, application of label-free organellar mapping to acutely isolated mouse primary neurons provided subcellular localization and copy-number information for over 8,000 proteins, allowing a detailed analysis of organellar organization. Our study extends the scope of dynamic organellar maps to any cell type or tissue and also to high-throughput screening.

  11. Desorption atmospheric pressure photoionization high-resolution mass spectrometry: a complementary approach for the chemical analysis of atmospheric aerosols.

    Science.gov (United States)

    Parshintsev, Jevgeni; Vaikkinen, Anu; Lipponen, Katriina; Vrkoslav, Vladimir; Cvačka, Josef; Kostiainen, Risto; Kotiaho, Tapio; Hartonen, Kari; Riekkola, Marja-Liisa; Kauppila, Tiina J

    2015-07-15

    On-line chemical characterization methods of atmospheric aerosols are essential to increase our understanding of physicochemical processes in the atmosphere, and to study biosphere-atmosphere interactions. Several techniques, including aerosol mass spectrometry, are nowadays available, but they all suffer from some disadvantages. In this research, desorption atmospheric pressure photoionization high-resolution (Orbitrap) mass spectrometry (DAPPI-HRMS) is introduced as a complementary technique for the fast analysis of aerosol chemical composition without the need for sample preparation. Atmospheric aerosols from city air were collected on a filter, desorbed in a DAPPI source with a hot stream of toluene and nitrogen, and ionized using a vacuum ultraviolet lamp at atmospheric pressure. To study the applicability of the technique for ambient aerosol analysis, several samples were collected onto filters and analyzed, with the focus being on selected organic acids. To compare the DAPPI-HRMS data with results obtained by an established method, each filter sample was divided into two equal parts, and the second half of the filter was extracted and analyzed by liquid chromatography/mass spectrometry (LC/MS). The DAPPI results agreed with the measured aerosol particle number. In addition to the targeted acids, the LC/MS and DAPPI-HRMS methods were found to detect different compounds, thus providing complementary information about the aerosol samples. DAPPI-HRMS showed several important oxidation products of terpenes, and numerous compounds were tentatively identified. Thanks to the soft ionization, high mass resolution, fast analysis, simplicity and on-line applicability, the proposed methodology has high potential in the field of atmospheric research. Copyright © 2015 John Wiley & Sons, Ltd.

  12. An accurate and adaptable photogrammetric approach for estimating the mass and body condition of pinnipeds using an unmanned aerial system

    OpenAIRE

    Krause, Douglas J.; Hinke, Jefferson T.; Perryman, Wayne L.; Goebel, Michael E.; LeRoi, Donald J.

    2017-01-01

    Measurements of body size and mass are fundamental to pinniped population management and research. Manual measurements tend to be accurate but are invasive and logistically challenging to obtain. Ground-based photogrammetric techniques are less invasive, but inherent limitations make them impractical for many field applications. The recent proliferation of unmanned aerial systems (UAS) in wildlife monitoring has provided a promising new platform for the photogrammetry of free-ranging pinniped...

  13. Identification of HL60 proteins affected by 5-aminolevulinic acid-based photodynamic therapy using mass spectrometric approach

    Czech Academy of Sciences Publication Activity Database

    Halada, Petr; Man, Petr; Grebeňová, D.; Hrkal, Z.; Havlíček, Vladimír

    2001-01-01

    Roč. 66, - (2001), s. 1720-1728 ISSN 0010-0765. [ASMS Conference on Mass Spectrometry and Allied Topics /49./. Illinois, 27.05.2001-31.05.2001] R&D Projects: GA ČR GA303/01/1445 Institutional research plan: CEZ:AV0Z5020903 Keywords: identification * proteins * affected Subject RIV: EE - Microbiology, Virology Impact factor: 0.778, year: 2001

  14. Molecular dynamic approach to the study of the intense heat and mass transfer processes on the vapor-liquid interface

    Science.gov (United States)

    Levashov, V. Yu; Kamenov, P. K.

    2017-10-01

The paper investigates heat and mass transfer processes at the vapor-liquid interface. Such processes occur, for example, during metal quenching, in nuclear power station accidents in which corium is released into the coolant, and when hot magma enters water during volcanic eruptions. In all these cases a vapor film can arise on the heated body surface. Here, the vapor film formation process is studied by molecular dynamics simulation, with particular attention to the interaction of the liquid and vapor with the heater surface. A second direction of this work is the study of processes inside a droplet subjected to high-power laser radiation; such irradiation can lead to intensive evaporation and explosive destruction of the droplet, with the heat and mass transfer in the droplet substance lasting only tens of femtoseconds. Molecular dynamics simulation thus makes it possible to describe the heat and mass transfer processes in the droplet and the formation of the vapor phase.

  15. A two-dimensional two-phase mass transport model for direct methanol fuel cells adopting a modified agglomerate approach

    Science.gov (United States)

    Miao, Zheng; He, Ya-Ling; Li, Xiang-Lin; Zou, Jin-Qiang

A two-dimensional two-phase mass transport model for liquid-feed direct methanol fuel cells (DMFCs) is presented in this paper. Fluid flow and mass transport across the membrane electrode assembly (MEA) are formulated based on classical multiphase flow theory in porous media, with particular attention to mass transport in the catalyst layers (CLs) and the membrane. The effect of the two-dimensional migration of protons in the electrolyte phase on the liquid flow behavior is considered. Water and methanol crossover through the membrane are implicitly calculated in the governing equations for momentum and methanol concentration. A modified agglomerate model is developed to characterize the microstructure of the CLs. A self-written computer code is used to solve the inherently coupled differential governing equations. The model is then applied to investigate the mechanisms of species transport and the distributions of species concentrations, overpotential and electrochemical reaction rates in the CLs. The effects of the radius and overlapping angle of the agglomerates on cell performance are also explored.

  16. A new approach to determine the density of liquids and solids without measuring mass and volume: introducing the solidensimeter

    Science.gov (United States)

    Kiriktaş, Halit; Şahin, Mehmet; Eslek, Sinan; Kiriktaş, İrem

    2018-05-01

This study aims to design a mechanism with which the density of any solid or liquid can be determined without measuring its mass and volume, in order to help students comprehend the concept of density more easily. The solidensimeter comprises two scaled, nested glass containers (graduated cylinders or beakers) and sufficient water. The density measurement rests on Archimedes’ principle: an object fully submerged in a liquid displaces a volume of liquid equal to its own volume, while an object partially submerged or floating displaces an amount of liquid equal in mass to its own mass. The density of any solid or liquid can therefore be determined from a simple ratio of the two readings. The resulting system is easy to design, uses low-cost equipment and helps students comprehend the density topic more easily.
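The ratio described in this record can be illustrated with a short sketch; the displacement readings below are hypothetical, not taken from the study:

```python
def density_from_displacement(v_floating_ml, v_submerged_ml):
    """Density via Archimedes' principle, as in the solidensimeter method.

    A floating object displaces water equal in mass to its own mass
    (grams, since water is ~1 g/mL); the same object fully submerged
    displaces water equal to its volume (mL).  Density is the ratio.
    """
    mass_g = v_floating_ml * 1.0   # 1 mL of displaced water ~ 1 g
    volume_cm3 = v_submerged_ml
    return mass_g / volume_cm3

# Hypothetical readings: floating, the object displaces 40 mL;
# pushed fully under, it displaces 50 mL.
print(density_from_displacement(40.0, 50.0))  # 0.8 g/cm^3
```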

  17. Optimization of OSEM parameters in myocardial perfusion imaging reconstruction as a function of body mass index: a clinical approach*

    Science.gov (United States)

    de Barros, Pietro Paolo; Metello, Luis F.; Camozzato, Tatiane Sabriela Cagol; Vieira, Domingos Manuel da Silva

    2015-01-01

Objective The present study aims at contributing to identify the most appropriate OSEM parameters to generate myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients’ body mass index. Materials and Methods The present study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was utilized in the image reconstruction with six different combinations of numbers of iterations and subsets. The images were analyzed by nuclear cardiology specialists, who considered their diagnostic value and indicated the most appropriate images in terms of diagnostic quality. Results An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all classes of body mass index; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion The use of optimized parameters seems to play a relevant role in the generation of images with better diagnostic quality, ensuring the diagnosis and a consequently appropriate and effective treatment for the patient. PMID:26543282

  18. A semi-automated approach to derive elevation time-series and calculate glacier mass balance from historical aerial imagery

    Science.gov (United States)

    Whorton, E.; Headman, A.; Shean, D. E.; McCann, E.

    2017-12-01

Understanding the implications of glacier recession on water resources in the western U.S. requires quantifying glacier mass change across large regions over several decades. Very few glaciers in North America have long-term continuous field measurements of glacier mass balance. However, systematic aerial photography campaigns began in 1957 on many glaciers in the western U.S. and Alaska. These historical, vertical aerial stereo-photographs documenting glacier evolution have recently become publicly available. Digital elevation models (DEM) of the transient glacier surface preserved in each imagery timestamp can be derived, then differenced to calculate glacier volume and mass change to improve regional geodetic solutions of glacier mass balance. In order to batch process these data, we use Python-based algorithms and Agisoft Photoscan structure from motion (SfM) photogrammetry software to semi-automate DEM creation, and orthorectify and co-register historical aerial imagery in a high-performance computing environment. Scanned photographs are rotated to reduce scaling issues, cropped to the same size to remove fiducials, and batch histogram equalization is applied to improve image quality and aid pixel-matching algorithms using the Python library OpenCV. Processed photographs are then passed to Photoscan through the Photoscan Python library to create DEMs and orthoimagery. To extend the period of record, the elevation products are co-registered to each other, airborne LiDAR data, and DEMs derived from sub-meter commercial satellite imagery. With the exception of the placement of ground control points, the process is entirely automated with Python. Current research is focused on (1) applying these algorithms to create geodetic mass balance time series for the 90 photographed glaciers in Washington State, and (2) evaluating the minimal amount of positional information required in Photoscan to prevent distortion effects that cannot be addressed during co-registration.
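The batch histogram equalization step mentioned in this record (performed in the pipeline via OpenCV) can be sketched with plain NumPy; the synthetic low-contrast image below is our own illustration:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Maps gray levels through the normalized cumulative histogram so the
    output spreads over the full 0-255 range -- the same preprocessing
    idea applied to the scanned photographs before pixel matching.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                         # normalize to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]

# Synthetic low-contrast scan: gray levels confined to 100-120.
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

The contrast stretch (output range much wider than the input's 20 gray levels) is what helps the feature-matching stage find correspondences in faded photographs.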

  19. Modelling transport in media with heterogeneous advection properties and mass transfer with a Continuous Time Random Walk approach

    Science.gov (United States)

    Comolli, Alessandro; Moussey, Charlie; Dentz, Marco

    2016-04-01

Transport processes in groundwater systems are strongly affected by the presence of heterogeneity. The heterogeneity leads to non-Fickian features, which manifest themselves in heavy-tailed breakthrough curves, as well as in the non-linear growth of the mean squared displacement and in the non-Gaussian plumes of solute particles. The causes of non-Fickian transport can be the heterogeneity in the flow fields and the processes of mass exchange between mobile and immobile phases, such as sorption/desorption reactions and diffusive mass transfer. Here, we present a Continuous Time Random Walk (CTRW) model that describes the transport of solutes in d-dimensional systems by taking into account both heterogeneous advection and mobile-immobile mass transfer. In order to account for these processes in the CTRW, the heterogeneities are mapped onto a distribution of transition times, which can be decomposed into advective transition times and trapping times, the latter being treated as a compound Poisson process. While advective transition times are related to the Eulerian flow velocities and, thus, to the conductivity distribution, trapping times depend on the sorption/desorption time scale, in the case of reactive problems, or on the distribution of diffusion times in the immobile zones. Since the trapping time scale is typically much larger than the advective time scale, we observe the existence of two temporal regimes. The pre-asymptotic regime is defined by a characteristic time scale at which the properties of transport are fully determined by the heterogeneity of the advective field. On the other hand, in the asymptotic regime both the heterogeneity and the mass exchange processes play a role in conditioning the behaviour of transport. We consider different scenarios to discuss the relative importance of the advective heterogeneity and the mass transfer for the occurrence of non-Fickian transport. For each case we calculate analytically the scalings of the breakthrough curves.
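The decomposition of transition times into an advective part plus compound-Poisson trapping can be sketched as a minimal simulation; all rates and means below are illustrative choices, not parameters from the record:

```python
import numpy as np

rng = np.random.default_rng(42)

def ctrw_arrival_times(n_particles, n_steps, adv_mean=1.0,
                       trap_rate=0.1, trap_mean=50.0):
    """Total time for each particle to complete n_steps CTRW transitions.

    Each transition time is an advective waiting time plus a
    compound-Poisson trapping contribution: a Poisson number of trapping
    events per step, each adding an exponential trapping time whose mean
    is much larger than the advective scale.
    """
    adv = rng.exponential(adv_mean, (n_particles, n_steps))
    n_traps = rng.poisson(trap_rate, (n_particles, n_steps))
    trap = np.array([[rng.exponential(trap_mean, k).sum() for k in row]
                     for row in n_traps])
    return (adv + trap).sum(axis=1)

t = ctrw_arrival_times(2000, 100)
# Trapping contributes trap_rate * trap_mean = 5 per step on average,
# dominating the advective scale of 1 -- the separation of scales behind
# the pre-asymptotic vs. asymptotic regimes discussed above.
print(t.mean())
```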

  20. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  1. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  2. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  3. "TOF2H": A precision toolbox for rapid, high density/high coverage hydrogen-deuterium exchange mass spectrometry via an LC-MALDI approach, covering the data pipeline from spectral acquisition to HDX rate analysis

    Directory of Open Access Journals (Sweden)

    Koter Marek D

    2008-09-01

    Full Text Available Abstract Background Protein-amide proton hydrogen-deuterium exchange (HDX is used to investigate protein conformation, conformational changes and surface binding sites for other molecules. To our knowledge, software tools to automate data processing and analysis from sample fractionating (LC-MALDI mass-spectrometry-based HDX workflows are not publicly available. Results An integrated data pipeline (Solvent Explorer/TOF2H has been developed for the processing of LC-MALDI-derived HDX data. Based on an experiment-wide template, and taking an ab initio approach to chromatographic and spectral peak finding, initial data processing is based on accurate mass-matching to fully deisotoped peaklists accommodating, in MS/MS-confirmed peptide library searches, ambiguous mass-hits to non-target proteins. Isotope-shift re-interrogation of library search results allows quick assessment of the extent of deuteration from peaklist data alone. During raw spectrum editing, each spectral segment is validated in real time, consistent with the manageable spectral numbers resulting from LC-MALDI experiments. A semi-automated spectral-segment editor includes a semi-automated or automated assessment of the quality of all spectral segments as they are pooled across an XIC peak for summing, centroid mass determination, building of rates plots on-the-fly, and automated back exchange correction. The resulting deuterium uptake rates plots from various experiments can be averaged, subtracted, re-scaled, error-barred, and/or scatter-plotted from individual spectral segment centroids, compared to solvent exposure and hydrogen bonding predictions and receive a color suggestion for 3D visualization. This software lends itself to a "divorced" HDX approach in which MS/MS-confirmed peptide libraries are built via nano or standard ESI without source modification, and HDX is performed via LC-MALDI using a standard MALDI-TOF. The complete TOF2H package includes additional (eg LC

  4. A robust combination approach for short-term wind speed forecasting and analysis – Combination of the ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM) forecasts using a GPR (Gaussian Process Regression) model

    International Nuclear Information System (INIS)

    Wang, Jianzhou; Hu, Jianming

    2015-01-01

    With the increasing importance of wind power as a component of power systems, the problems induced by the stochastic and intermittent nature of wind speed have compelled system operators and researchers to search for more reliable techniques to forecast wind speed. This paper proposes a combination model for probabilistic short-term wind speed forecasting. In this proposed hybrid approach, EWT (Empirical Wavelet Transform) is employed to extract meaningful information from a wind speed series by designing an appropriate wavelet filter bank. The GPR (Gaussian Process Regression) model is utilized to combine independent forecasts generated by various forecasting engines (ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM)) in a nonlinear way rather than the commonly used linear way. The proposed approach provides more probabilistic information for wind speed predictions besides improving the forecasting accuracy for single-value predictions. The effectiveness of the proposed approach is demonstrated with wind speed data from two wind farms in China. The results indicate that the individual forecasting engines do not consistently forecast short-term wind speed for the two sites, and the proposed combination method can generate a more reliable and accurate forecast. - Highlights: • The proposed approach can make probabilistic modeling for wind speed series. • The proposed approach adapts to the time-varying characteristic of the wind speed. • The hybrid approach can extract the meaningful components from the wind speed series. • The proposed method can generate adaptive, reliable and more accurate forecasting results. • The proposed model combines four independent forecasting engines in a nonlinear way.
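The nonlinear GPR combination of the individual forecasting engines can be sketched in a few lines of NumPy; the RBF kernel, hyperparameters and toy data below are our own assumptions, not taken from the paper:

```python
import numpy as np

def gpr_combine(X_train, y_train, X_new, length=1.0, noise=1e-2):
    """Gaussian Process Regression posterior mean with an RBF kernel.

    Each row of X holds the individual-engine forecasts (e.g. ARIMA,
    ELM, SVM, LSSVM outputs) for one time step; y is the observed wind
    speed.  The GP learns a nonlinear combination of the engines.
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)

    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(X_new, X_train) @ alpha

# Toy data: the "true" speed is a nonlinear function of two engine outputs.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (60, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
pred = gpr_combine(X, y, X[:5])
print(np.abs(pred - y[:5]).max())  # small in-sample residual
```

A linear combiner would fit fixed weights to the engines; the kernel here lets the weights effectively vary with the forecast values, which is the "nonlinear way" the abstract contrasts with the common linear averaging.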

  5. A new approach to develop computer-aided diagnosis scheme of breast mass classification using deep learning technology.

    Science.gov (United States)

    Qiu, Yuchen; Yan, Shiju; Gundreddy, Rohith Reddy; Wang, Yunzhi; Cheng, Samuel; Liu, Hong; Zheng, Bin

    2017-01-01

To develop and test a deep learning based computer-aided diagnosis (CAD) scheme of mammograms for classifying between malignant and benign masses. An image dataset involving 560 regions of interest (ROIs) extracted from digital mammograms was used. After down-sampling each ROI from 512×512 to 64×64 pixel size, we applied an 8 layer deep learning network that involves 3 pairs of convolution-max-pooling layers for automatic feature extraction and a multiple layer perceptron (MLP) classifier for feature categorization to process ROIs. The 3 pairs of convolution layers contain 20, 10, and 5 feature maps, respectively. Each convolution layer is connected with a max-pooling layer to improve the feature robustness. The output of the sixth layer is fully connected with a MLP classifier, which is composed of one hidden layer and one logistic regression layer. The network then generates a classification score to predict the likelihood of the ROI depicting a malignant mass. A four-fold cross validation method was applied to train and test this deep learning network. The results revealed that this CAD scheme yields an area under the receiver operating characteristic curve (AUC) of 0.696±0.044, 0.802±0.037, 0.836±0.036, and 0.822±0.035 for fold 1 to 4 testing datasets, respectively. The overall AUC of the entire dataset is 0.790±0.019. This study demonstrates the feasibility of applying a deep learning based CAD scheme to classify between malignant and benign breast masses without lesion segmentation or an image feature computation and selection process.
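The shrinking of the 64×64 ROI through the three convolution/max-pooling pairs can be checked with a small helper; the 5×5 kernels and 2×2 pooling below are assumed for illustration, since the abstract does not state them:

```python
def feature_map_sizes(input_size=64, n_pairs=3, kernel=5, pool=2):
    """Spatial size after each convolution + max-pooling pair.

    The record specifies 64x64 inputs, 3 conv/max-pool pairs and
    20/10/5 feature maps; kernel and pooling sizes here are assumptions.
    """
    sizes, s = [], input_size
    for _ in range(n_pairs):
        s = s - (kernel - 1)   # 'valid' convolution shrinks by kernel-1
        s = s // pool          # non-overlapping max pooling halves size
        sizes.append(s)
    return sizes

print(feature_map_sizes())  # [30, 13, 4]
```

Under these assumptions the sixth layer outputs 5 feature maps of 4×4 pixels, i.e. an 80-dimensional vector feeding the MLP classifier.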

  6. Discovery of safety biomarkers for atorvastatin in rat urine using mass spectrometry based metabolomics combined with global and targeted approach

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Bhowmik Salil [Bioanalysis and Biotransformation Research Center, Korea Institute of Science and Technology, P.O. Box 131, Cheongryang, Seoul 130-650 (Korea, Republic of); University of Science and Technology, (305-333) 113 Gwahangno, Yuseong-gu, Daejeon (Korea, Republic of); Lee, Young-Joo; Yi, Hong Jae [College of Pharmacy, Kyung Hee University, Hoegi-dong, Dongdaemun-gu, Seoul 130-791 (Korea, Republic of); Chung, Bong Chul [Bioanalysis and Biotransformation Research Center, Korea Institute of Science and Technology, P.O. Box 131, Cheongryang, Seoul 130-650 (Korea, Republic of); Jung, Byung Hwa, E-mail: jbhluck@kist.re.kr [Bioanalysis and Biotransformation Research Center, Korea Institute of Science and Technology, P.O. Box 131, Cheongryang, Seoul 130-650 (Korea, Republic of); University of Science and Technology, (305-333) 113 Gwahangno, Yuseong-gu, Daejeon (Korea, Republic of)

    2010-02-19

In order to develop a safety biomarker for atorvastatin, this drug was orally administered to hyperlipidemic rats, and a metabolomic study was performed. Atorvastatin was given in doses of either 70 mg kg⁻¹ day⁻¹ or 250 mg kg⁻¹ day⁻¹ for a period of 7 days (n = 4 for each group). To evaluate any abnormal effects of the drug, physiological and plasma biochemical parameters were measured and histopathological tests were carried out. Safety biomarkers were derived by comparing these parameters and using both global and targeted metabolic profiling. Global metabolic profiling was performed using liquid chromatography/time of flight/mass spectrometry (LC/TOF/MS) with multivariate data analysis. Several safety biomarker candidates that included various steroids and amino acids were discovered as a result of global metabolic profiling, and they were also confirmed by targeted metabolic profiling using gas chromatography/mass spectrometry (GC/MS) and capillary electrophoresis/mass spectrometry (CE/MS). Serum biochemical and histopathological tests were used to detect abnormal drug reactions in the liver after repeated oral administration of atorvastatin. The metabolic differences between the control and drug-treated groups were compared using PLS-DA score plots. These results were compared with the physiological and plasma biochemical parameters and the results of a histopathological test. Estrone, cortisone, proline, cystine, 3-ureidopropionic acid and histidine were proposed as potential safety biomarkers related to the liver toxicity of atorvastatin. These results indicate that the combined application of global and targeted metabolic profiling could be a useful tool for the discovery of drug safety biomarkers.

  7. Discovery of safety biomarkers for atorvastatin in rat urine using mass spectrometry based metabolomics combined with global and targeted approach

    International Nuclear Information System (INIS)

    Kumar, Bhowmik Salil; Lee, Young-Joo; Yi, Hong Jae; Chung, Bong Chul; Jung, Byung Hwa

    2010-01-01

In order to develop a safety biomarker for atorvastatin, this drug was orally administered to hyperlipidemic rats, and a metabolomic study was performed. Atorvastatin was given in doses of either 70 mg kg⁻¹ day⁻¹ or 250 mg kg⁻¹ day⁻¹ for a period of 7 days (n = 4 for each group). To evaluate any abnormal effects of the drug, physiological and plasma biochemical parameters were measured and histopathological tests were carried out. Safety biomarkers were derived by comparing these parameters and using both global and targeted metabolic profiling. Global metabolic profiling was performed using liquid chromatography/time of flight/mass spectrometry (LC/TOF/MS) with multivariate data analysis. Several safety biomarker candidates that included various steroids and amino acids were discovered as a result of global metabolic profiling, and they were also confirmed by targeted metabolic profiling using gas chromatography/mass spectrometry (GC/MS) and capillary electrophoresis/mass spectrometry (CE/MS). Serum biochemical and histopathological tests were used to detect abnormal drug reactions in the liver after repeated oral administration of atorvastatin. The metabolic differences between the control and drug-treated groups were compared using PLS-DA score plots. These results were compared with the physiological and plasma biochemical parameters and the results of a histopathological test. Estrone, cortisone, proline, cystine, 3-ureidopropionic acid and histidine were proposed as potential safety biomarkers related to the liver toxicity of atorvastatin. These results indicate that the combined application of global and targeted metabolic profiling could be a useful tool for the discovery of drug safety biomarkers.

  8. MOOC design – Dissemination to the masses or facilitation of social learning and a deep approach to learning?

    Directory of Open Access Journals (Sweden)

    Inger-Marie Falgren Christensen

    2016-11-01

Full Text Available This article accounts for the design of the massive open online course (MOOC) Hans Christian Andersen’s Fairy tales on FutureLearn and reports on the effectiveness of this design in terms of engaging learners in social learning and encouraging a deep approach to learning. A learning pathway was designed that provided learners with relevant knowledge, allowed them to practice their analysis skills and provided model responses. In the first run of the MOOC, a light facilitation approach was used to motivate and engage learners. In the second run, this was supplemented with live Q&A sessions and increased educator feedback. Course data show that some learners use the space provided for social interaction and mutual support. A learning pathway that engages learners in discussion and progression from week to week facilitates a deep approach to learning. However, this requires more support from the educators and the course host.

  9. Development of a new certified reference material of diosgenin using mass balance approach and Coulometric titration method.

    Science.gov (United States)

    Gong, Ningbo; Zhang, Baoxi; Hu, Fan; Du, Hui; Du, Guanhua; Gao, Zhaolin; Lu, Yang

    2014-12-01

    Certified reference materials (CRMs) can be used as a valuable tool to validate the trueness of measurement methods and to establish metrological traceability of analytical results. Diosgenin has been selected as a candidate reference material. Characterization of the material relied on two different methods, mass balance method and Coulometric titration method (CT). The certified value of diosgenin CRM is 99.80% with an expanded uncertainty of 0.37% (k=2). The new CRM of diosgenin can be used to validate analytical methods, improve the accuracy of measurement data and control the quality of diosgenin in relevant pharmaceutical formulations. Copyright © 2014 Elsevier Inc. All rights reserved.
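The mass balance characterization named in this record can be sketched as simple arithmetic; the impurity fractions and uncertainties below are hypothetical, not the certified values:

```python
import math

def mass_balance_purity(impurities_pct, uncertainties_pct):
    """Purity by the mass balance approach: 100% minus summed impurities.

    The impurity fractions (e.g. related substances, water, residual
    solvent, non-volatiles) and their standard uncertainties are
    hypothetical; the combined standard uncertainty is taken as the
    root sum of squares, expanded with coverage factor k = 2.
    """
    purity = 100.0 - sum(impurities_pct)
    u = math.sqrt(sum(x * x for x in uncertainties_pct))
    return purity, 2 * u

purity, U = mass_balance_purity([0.12, 0.05, 0.02, 0.01],
                                [0.05, 0.03, 0.01, 0.01])
print(f"{purity:.2f} % +/- {U:.2f} % (k=2)")
```

In the actual certification, an independent method (here, coulometric titration) is used to confirm the mass balance value before assigning the certified purity.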

  10. An approach based on liquid chromatography/electrospray ionization–mass spectrometry to detect diol metabolites as biomarkers of exposure to styrene and 1,3-butadiene

    Science.gov (United States)

    Shen, Shuijie; Zhang, Fan; Zeng, Su; Zheng, Jiang

    2012-01-01

    Styrene and 1,3-butadiene are important intermediates used extensively in the plastics industry. They are metabolized mainly through cytochrome P450-mediated oxidation to the corresponding epoxides, which are subsequently converted to diols by epoxide hydrolase or through spontaneous hydration. The resulting styrene glycol and 3-butene-1,2-diol have been suggested as biomarkers of exposure to styrene and 1,3-butadiene, respectively. Unfortunately, poor ionization of the diols within electrospray mass spectrometers becomes an obstacle to the detection of the two diols by liquid chromatography/electrospray ionization–mass spectrometry (LC/ESI–MS). We developed an LC/ESI–MS approach to analyze styrene glycol and 3-butene-1,2-diol by means of derivatization with 2-bromopyridine-5-boronic acid (BPBA), which not only dramatically increases the sensitivity of diol detection but also facilitates the identification of the diols. The analytical approach developed was simple, quick, and convincing without the need for complicated chemical derivatization. To evaluate the feasibility of BPBA as a derivatizing reagent of diols, we investigated the impact of diol configuration on the affinity of a selection of diols to BPBA using the established LC/ESI–MS approach. We found that both cis and trans diols can be derivatized by BPBA. In conclusion, BPBA may be used as a general derivatizing reagent for the detection of vicinal diols by LC/MS. PMID:19111668

  11. MOOC Design – Dissemination to the Masses or Facilitation of Social Learning and a Deep Approach to Learning?

    DEFF Research Database (Denmark)

    Christensen, Inger-Marie F.; Dam Laursen, Mette; Bøggild, Jacob

    2016-01-01

This article accounts for the design of the massive open online course (MOOC) Hans Christian Andersen’s Fairy tales on FutureLearn and reports on the effectiveness of this design in terms of engaging learners in social learning and encouraging a deep approach to learning. A learning pathway...... and increased educator feedback. Course data show that some learners use the space provided for social interaction and mutual support. A learning pathway that engages learners in discussion and progression from week to week facilitates a deep approach to learning. However, this requires more support from...

  12. Estimating the path-average rainwater content and updraft speed along a microwave link

    Science.gov (United States)

    Jameson, Arthur R.

    1993-01-01

There is a scarcity of methods for accurately estimating the mass of rainwater rather than its flux. A recently proposed technique uses the difference between the observed rates of attenuation A with increasing distance at 38 and 25 GHz, A(38-25), to estimate the rainwater content W. Unfortunately, this approach is still somewhat sensitive to the form of the drop-size distribution. An alternative proposed here uses the ratio A38/A25 to estimate the mass-weighted average raindrop size Dm. Rainwater content is then estimated from measurements of polarization propagation differential phase shift (Phi-DP) divided by (1-R), where R is the mass-weighted mean axis ratio of the raindrops computed from Dm. This paper investigates these two water-content estimators using results from a numerical simulation of observations along a microwave link. From these calculations, it appears that the combination (R, Phi-DP) produces more accurate estimates of W than does A(38-25). In addition, by combining microwave estimates of W and the rate of rainfall in still air with the mass-weighted mean terminal fall speed derived using A38/A25, it is possible to detect the potential influence of vertical air motion on the raingage-microwave rainfall comparisons.

  13. Fixed Average Spectra of Orchestral Instrument Tones

    Directory of Open Access Journals (Sweden)

    Joseph Plazak

    2010-04-01

    The fixed spectrum for an average orchestral instrument tone is presented based on spectral data from the Sandell Harmonic Archive (SHARC). This database contains non-time-variant spectral analyses for 1,338 recorded instrument tones from 23 Western instruments ranging from contrabassoon to piccolo. From these spectral analyses, a grand average was calculated, providing what might be considered an average non-time-variant harmonic spectrum. Each of these tones represents the average of all instruments in the SHARC database capable of producing that pitch. These latter tones better represent common spectral changes with respect to pitch register, and might be regarded as an “average instrument.” Although several caveats apply, an average harmonic tone or instrument may prove useful in analytic and modeling studies. In addition, for perceptual experiments in which non-time-variant stimuli are needed, an average harmonic spectrum may prove to be more ecologically appropriate than common technical waveforms, such as sine tones or pulse trains. Synthesized average tones are available via the web.
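    The grand average described here amounts to aligning per-tone harmonic amplitude vectors and taking their mean. A minimal sketch, assuming each tone is stored as an array of harmonic amplitudes and shorter spectra are zero-padded to the longest:

```python
import numpy as np

def grand_average_spectrum(spectra):
    """Average a collection of non-time-variant harmonic spectra.
    `spectra` is a list of 1-D arrays of harmonic amplitudes; shorter
    spectra are zero-padded before averaging across tones."""
    n = max(len(s) for s in spectra)
    padded = np.zeros((len(spectra), n))
    for i, s in enumerate(spectra):
        padded[i, :len(s)] = s
    return padded.mean(axis=0)
```

    The same routine serves for a per-pitch average by passing only the tones of instruments capable of producing that pitch.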

  14. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  15. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    is less clear if the teacher distribution is unknown. I define a class of averaging procedures, the temperated likelihoods, including both Bayes averaging with a uniform prior and maximum likelihood estimation as special cases. I show that Bayes is generalization optimal in this family for any teacher...

  16. Averaging Einstein's equations : The linearized case

    NARCIS (Netherlands)

    Stoeger, William R.; Helmi, Amina; Torres, Diego F.

    We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW

  17. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  18. Metabolomic approach for identifying and visualizing molecular tissue markers in tadpoles of Xenopus tropicalis by mass spectrometry imaging

    Directory of Open Access Journals (Sweden)

    Naoko Goto-Inoue

    2016-09-01

    In developmental and cell biology it is crucial to evaluate the dynamic profiles of metabolites. An emerging frog model system using Xenopus tropicalis, whose genome sequence and inbred strains are available, is now ready for metabolomics investigation in amphibians. In this study we applied matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI) analysis to identify and visualize metabolomic molecular markers in tadpoles of Xenopus tropicalis. We detected tissue-specific peaks and visualized their distribution in tissues, and distinguished 19 tissues and their specific peaks. We identified, for the first time, some of their molecular localizations via tandem mass spectrometric analysis: hydrocortisone in artery, L-DOPA in rhombencephalon, taurine in eye, corticosterone in gill, heme in heart, inosine monophosphate and carnosine in muscle, dopamine in nerves, and phosphatidylethanolamine (16:0/20:4) in pharynx. This is the first MALDI-MSI study of X. tropicalis tadpoles, as in small tadpoles it is hard to distinguish and dissect the various organs. Furthermore, until now there have been no data about the metabolomic profile of each organ. Our results suggest that MALDI-MSI is potentially a powerful tool for examining the dynamics of metabolomics in metamorphosis as well as conformational changes due to metabolic changes.

  19. A mass balance approach to the fate of viruses in a municipal wastewater treatment plant during summer and winter seasons.

    Science.gov (United States)

    Ulbricht, Katharina; Selinka, Hans-Christoph; Wolter, Stefanie; Rosenwinkel, Karl-Heinz; Nogueira, Regina

    2014-01-01

    In contrast to previous discussion on general virus removal efficiency and identifying surrogates for human pathogenic viruses, this study focuses on virus retention within each step of a wastewater treatment plant (WWTP). Additionally, the influence of weather conditions on virus removal was addressed. To account for the virus retention, this study describes a mass balance of somatic coliphages (bacterial viruses) in a municipal WWTP, performed in the winter and summer seasons of 2011. In the winter season, the concentration of coliphages entering the WWTP was about 1 log lower than in summer. The mass balance in winter revealed a virus inactivation of 85.12 ± 13.97%. During the summer season, virus inactivation was significantly higher (95.25 ± 3.69%, p-value virus removal in the secondary clarifier by insolation. Thus, a total removal of coliphages of about 2.78 log units was obtained in summer compared to 1.95 log units in winter. Rainfall events did not statistically correlate with the concentrations of coliphages entering the WWTP in summer.
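    The removal figures quoted above, log units and percent inactivation, are two views of the same influent/effluent ratio. A minimal sketch of the arithmetic (concentrations in any consistent unit, e.g. plaque-forming units per mL):

```python
import math

def log_removal(c_in, c_out):
    """Log10 reduction between influent and effluent concentrations."""
    return math.log10(c_in / c_out)

def percent_inactivation(c_in, c_out):
    """Percentage of viruses removed or inactivated across the plant."""
    return 100.0 * (1.0 - c_out / c_in)
```

    For instance, the reported total removal of 2.78 log units in summer corresponds to a reduction of more than 99.8% of the incoming coliphages, while 1.95 log units in winter is roughly 98.9%.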

  20. In Situ Mass Spectrometric Monitoring of the Dynamic Electrochemical Process at the Electrode–Electrolyte Interface: a SIMS Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhaoying; Zhang, Yanyan; Liu, Bingwen; Wu, Kui; Thevuthasan, Suntharampillai; Baer, Donald R.; Zhu, Zihua; Yu, Xiao-Ying; Wang, Fuyi

    2017-01-03

    The in situ molecular characterization of reaction intermediates and products at electrode-electrolyte interfaces is central to mechanistic studies of complex electrochemical processes, yet a great challenge. The coupling of electrochemistry (EC) and mass spectrometry (MS) has seen rapid development and found broad applicability in tackling challenges in analytical and bioanalytical chemistry. However, few truly in situ and real-time EC-MS studies have been reported at electrode-electrolyte interfaces. An innovative EC-MS coupling method named in situ liquid secondary ion mass spectrometry (SIMS) was recently developed by combining SIMS with a vacuum compatible microfluidic electrochemical device. Using this novel capability we report the first in situ elucidation of the electro-oxidation mechanism of a biologically significant organic compound, ascorbic acid (AA), at the electrode-electrolyte interface. The short-lived radical intermediate was successfully captured, which had not been detected directly before. Moreover, we demonstrated the power of this new technique in real-time monitoring of the formation and dynamic evolution of electrical double layers at the electrode-electrolyte interface. This work suggests further promising applications of in situ liquid SIMS in studying more complex chemical and biological events at the electrode-electrolyte interface.

  1. A novel approach for computing glueball masses and matrix elements in Yang-Mills theories on the lattice

    CERN Document Server

    Della Morte, Michele

    2011-01-01

    We make use of the global symmetries of the Yang-Mills theory on the lattice to design a new computational strategy for extracting glueball masses and matrix elements which achieves an exponential reduction of the statistical error with respect to standard techniques. By generalizing our previous work on the parity symmetry, the partition function of the theory is decomposed into a sum of path integrals each giving the contribution from multiplets of states with fixed quantum numbers associated to parity, charge conjugation, translations, rotations and central conjugations Z_N^3. Ratios of path integrals and correlation functions can then be computed with a multi-level Monte Carlo integration scheme whose numerical cost, at a fixed statistical precision and at asymptotically large times, increases power-like with the time extent of the lattice. The strategy is implemented for the SU(3) Yang-Mills theory, and a full-fledged computation of the mass and multiplicity of the lightest glueball with vacuum quantum ...

  2. Approaching the CDF Top Quark Mass Legacy Measurement in the Lepton+Jets channel with the Matrix Element Method

    Energy Technology Data Exchange (ETDEWEB)

    Tosciri, Cecilia [Univ. of Pisa (Italy)

    2016-01-01

    The discovery of the bottom quark in 1977 at the Tevatron Collider triggered the search for its partner in the third fermion isospin doublet, the top quark, which was discovered 18 years later in 1995 by the CDF and D0 experiments during the Tevatron Run I. By 1990, intensive efforts by many groups at several accelerators had raised the lower mass limit to over 90 GeV/c^2, so that from then on the Tevatron was the only accelerator with high enough energy to possibly discover this remarkably massive quark. After its discovery, the determination of top quark properties has been one of the main goals of the Fermilab Tevatron Collider, and more recently also of the Large Hadron Collider (LHC) at CERN. Since the mass value plays an important role in a large number of theoretical calculations on fundamental processes, improving the accuracy of its measurement has always been a goal of utmost importance. The present thesis describes in detail the contributions made by the candidate to the extensive preparation work needed to make the new analysis possible, during her eight-month stay at Fermilab.

  3. An integrated approach for estimating global glacio isostatic adjustment, land ice, hydrology and ocean mass trends within a complete coupled Earth system framework

    Science.gov (United States)

    Schumacher, M.; Bamber, J. L.; Martin, A.

    2016-12-01

    Future sea level rise (SLR) is one of the most serious consequences of climate change. Therefore, understanding the drivers of past sea level change is crucial for improving predictions. SLR integrates many Earth system components, including oceans, land ice and terrestrial water storage, as well as solid Earth effects. Traditionally, each component has been tackled separately, which has often led to inconsistencies between discipline-specific estimates of each part of the sea level budget. To address these issues, the European Research Council has funded a five-year project aimed at producing a physically based, data-driven solution for the complete coupled land-ocean-solid Earth system that is consistent with the full suite of observations, prior knowledge and fundamental geophysical constraints. The project is called "GlobalMass" and is based at the University of Bristol. Observed mass movement from the GRACE mission plus vertical land motion from a global network of permanent GPS stations will be utilized in a data-driven approach to estimate glacial isostatic adjustment (GIA) without introducing any assumptions about the Earth structure or ice loading history. A Bayesian Hierarchical Model (BHM) will be used as the framework to combine the satellite and in-situ observations alongside prior information that incorporates the physics of the coupled system, such as conservation of mass and characteristic length scales of different processes in both space and time. The BHM is used to implement a simultaneous solution at a global scale. It will produce a consistent partitioning of the integrated SLR signal into its steric (thermal) and barystatic (mass) components for the satellite era. The latter component is induced by hydrological mass trends and the melting of land ice. The BHM was developed and tested on Antarctica, where it has been used to separate surface, ice dynamic and GIA signals simultaneously. We illustrate the approach and concepts with examples from this test case.

  4. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
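    The two ingredients, an azimuthal (rotational) average of a reference image and a correlation coefficient of an image patch against it, can be sketched as follows. This illustrates the general idea with a plain Pearson coefficient, not the paper's specific rotational correlation coefficient:

```python
import numpy as np

def azimuthal_average(img):
    """Rotationally average a square image about its center: every pixel
    is replaced by the mean over all pixels at the same (integer) radius."""
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - (n - 1) / 2, y - (n - 1) / 2).astype(int)
    sums = np.bincount(r.ravel(), img.ravel())   # per-radius pixel sums
    counts = np.bincount(r.ravel())              # per-radius pixel counts
    return (sums / counts)[r]                    # map means back to 2-D

def local_correlation(patch, reference):
    """Pearson correlation coefficient between an image patch and an
    (azimuthally averaged) reference of the same shape."""
    a = patch - patch.mean()
    b = reference - reference.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

    Because the reference is rotationally symmetric, a single azimuthally averaged template can match candidate particle views at any in-plane orientation.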

  5. Discriminating Ability of Abbreviated Impactor Measurement Approach (AIM) to Detect Changes in Mass Median Aerodynamic Diameter (MMAD) of an Albuterol/Salbutamol pMDI Aerosol.

    Science.gov (United States)

    David Christopher, J; Patel, Rajni B; Mitchell, Jolyon P; Tougas, Terrence P; Goodey, Adrian P; Quiroz, Jorge; Andersson, Patrik U; Lyapustina, Svetlana

    2017-11-01

    This article reports on results from a two-lab, multiple impactor experiment evaluating the abbreviated impactor measurement (AIM) concept, conducted by the Cascade Impaction Working Group of the International Pharmaceutical Aerosol Consortium on Regulation and Science (IPAC-RS). The goal of this experiment was to expand understanding of the performance of an AIM-type apparatus based on the Andersen eight-stage non-viable cascade impactor (ACI) for the assessment of inhalation aerosols and sprays, compared with the full-resolution version of that impactor described in the pharmacopeial compendia. The experiment was conducted at two centers with a representative commercially available pressurized metered dose inhaler (pMDI) containing albuterol (salbutamol) as active pharmaceutical ingredient (API). Metrics of interest were total mass (TM) emitted from the inhaler, impactor-sized mass (ISM), as well as the ratio of large particle mass (LPM) to small particle mass (SPM). ISM and the LPM/SPM ratio together comprise the efficient data analysis (EDA) metrics. The results of the comparison demonstrated that in this study, the AIM approach had adequate discrimination to detect changes in the mass median aerodynamic diameter (MMAD) of the ACI-sampled aerodynamic particle size distribution (APSD), and therefore could be employed for routine product quality control (QC). As with any test method considered for inclusion in a regulatory filing, the transition from an ACI (used in development) to an appropriate AIM/EDA methodology (used in QC) should be evaluated and supported by data on a product-by-product basis.

  6. tavg3_3d_chm_Nv: MERRA Chem 3D IAU C-Grid Edge Mass Flux, Time Average 3-Hourly 1.25 x 1.25 degree V5.2.0 (MAT3NVCHM) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3NVCHM or tavg3_3d_chm_Nv data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layers that is time averaged, 3D model...

  7. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
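    The evaporative-demand step uses the Hargreaves model, which estimates reference evapotranspiration from average, maximum and minimum temperature plus exoatmospheric radiation; the water balance is then simply precipitation minus that demand. A sketch, assuming the standard Hargreaves coefficient 0.0023 and radiation expressed as an equivalent water depth in mm/day:

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Hargreaves estimate of reference evapotranspiration (mm/day).
    `ra` is exoatmospheric solar radiation expressed as an equivalent
    depth of water (mm/day); temperatures in degrees Celsius."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def water_balance(precip, evaporative_demand):
    """Climatic water balance: precipitation minus atmospheric
    evaporative demand (both in mm over the same period)."""
    return precip - evaporative_demand
```

    In the mapping described above, this pair of calculations would be evaluated per month for each 1 km grid cell.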

  8. Maximum and average field strength in enclosed environments

    NARCIS (Netherlands)

    Leferink, Frank Bernardus Johannes

    2010-01-01

    Electromagnetic fields in large enclosed environments are reflected many times and cannot be predicted anymore using conventional models. The common approach is to compare such environments with highly reflecting reverberation chambers. The average field strength can easily be predicted using the

  9. Modeling of Sokoto Daily Average Temperature: A Fractional ...

    African Journals Online (AJOL)

    Modeling of Sokoto Daily Average Temperature: A Fractional Integration Approach. An extension of the class of ARIMA processes stemming from the Box and Jenkins methodology. One of their originalities is the explicit modeling of the long-term correlation structure (Diebolt and Guiraud, 2000). Autoregressive fractionally.

  10. Tandem Affinity Purification Approach Coupled to Mass Spectrometry to Identify Post-translational Modifications of Histones Associated with Chromatin-Binding Proteins.

    Science.gov (United States)

    Beyer, Sophie; Robin, Philippe; Ait-Si-Ali, Slimane

    2017-01-01

    Protein purification by tandem affinity purification (TAP)-tag coupled to mass spectrometry analysis is usually used to reveal protein complex composition. Here we describe a TAP-tag purification of chromatin-bound proteins along with associated nucleosomes, which allows exhaustive identification of protein partners. Moreover, this method allows exhaustive identification of the post-translational modifications (PTMs) of the associated histones. Thus, in addition to partner characterization, this approach reveals the associated epigenetic landscape, which can shed light on the function and properties of the studied chromatin-bound protein.

  11. Small scale magnetic flux-averaged magnetohydrodynamics

    International Nuclear Information System (INIS)

    Pfirsch, D.; Sudan, R.N.

    1994-01-01

    By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends of solar coronal loops is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but also introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not over shorter times. These modified boundary contributions correspond to the existence, also, of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and, in one-dimensional geometry, predicts solitons and shocks in different limits.

  12. Degradation products of profenofos as identified by high-field FTICR mass spectrometry: Isotopic fine structure approach.

    Science.gov (United States)

    Angthararuk, Dusit; Harir, Mourad; Schmitt-Kopplin, Philippe; Sutthivaiyakit, Somyote; Kettrup, Antonius; Sutthivaiyakit, Pakawadee

    2017-01-02

    This study was performed to identify the degradation products of profenofos, a phenyl organothiophosphate insecticide, in raw water (RW) collected from the entry point of the Metropolitan Water Works Authority (Bangkaen, Thailand) and in ultrapure water (UPW), with and without TiO2, under simulated sunlight irradiation. Degradation of profenofos was followed with ultrahigh-performance liquid chromatography (UHPLC) and follows pseudo first-order kinetics. Accordingly, high-field FTICR mass spectrometry coupled to an electrospray ionization source was used to reveal the degradation routes of profenofos, with isotopic fine structure (IFS) elucidation to confirm the chemical structures of its degradation products. More degradation products were detected in UPW than in RW. Two main degradation pathways were observed: (i) interactive replacement of bromine and hydrogen by hydroxyl groups and (ii) rupture of P-O, P-S, C-Br and C-Cl bonds. No interactive replacement of chlorine by a hydroxyl group was detected. Accordingly, mechanistic pathways for the main degradation products were established.
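    Pseudo first-order kinetics, as followed here by UHPLC, means the concentration decays as C(t) = C0·exp(-kt). A minimal sketch of recovering the rate constant and half-life from concentration measurements (illustrative only, not the study's fitting procedure):

```python
import math

def first_order_k(t, c0, ct):
    """Rate constant k from a single (t, C) observation under
    pseudo first-order decay C(t) = C0 * exp(-k * t)."""
    return math.log(c0 / ct) / t

def half_life(k):
    """Time for the concentration to halve under first-order decay."""
    return math.log(2.0) / k
```

    In practice k is obtained by regressing ln(C/C0) against t over the whole irradiation series rather than from a single point.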

  13. Analysis of two-phase flow inter-subchannel mass and momentum exchanges by the two-fluid model approach

    Energy Technology Data Exchange (ETDEWEB)

    Ninokata, H. [Tokyo Institute of Technology (Japan); Deguchi, A. [ENO Mathematical Analysis, Tokyo (Japan); Kawahara, A. [Kumamoto Univ., Kumamoto (Japan)

    1995-09-01

    A new void drift model for the subchannel analysis method is presented for thermohydraulic calculations of two-phase flows in rod bundles, where the flow model uses a two-fluid formulation for the conservation of mass, momentum and energy. The void drift model is constructed from experimental data obtained in a geometrically simple test section of two interconnected circular channels, using air and water as working fluids. The void drift force is assumed to be the origin of the void drift velocity components of the two-phase cross-flow in the gap area between two adjacent rods and to overcome the momentum exchanges at the phase interface and the wall-fluid interface. This void drift force is implemented in the cross-flow momentum equations. Computational results have been successfully compared to available experimental data, including 3x3 rod bundle data.

  14. Mass movements in the Rio Grande Valley (Quebrada de Humahuaca, Northwestern Argentina): a methodological approach to reduce the risk

    Science.gov (United States)

    Marcato, G.; Pasuto, A.; Rivelli, F. R.

    2009-10-01

    Slope processes, such as slides and debris flows, are among the main events that affect the Rio Grande sediment transport capacity. The slides mainly affect the slopes of the Rio Grande river basin, while debris and mud flow phenomena take place in the tributary valleys. In the past decades several mass movements occurred, causing victims and great damage to roads and villages; hazard assessment and risk mitigation are therefore of paramount importance for a correct development of the area. This is also an urgent need since the Quebrada de Humahuaca was recently included in the UNESCO World Cultural Heritage. The growing tourism business may lead to an uncontrolled urbanization of the valley, with the consequent enlargement of threatened areas. In this framework, mitigation measures have to take into account not only technical aspects related to the physical behaviour of the moving masses but also environmental and sociological factors that could influence the effectiveness of the countermeasures. Mitigation of landslide effects is indeed rather complex because of the large extension of the territory and the particular geological and geomorphological setting. Moreover, the necessity to maintain the natural condition of the area, as prescribed by UNESCO, makes this task even more difficult. Nowadays no in-depth study of the entire area exists; therefore an integrated and multidisciplinary investigation plan is being set up, including geological and geomorphological investigations as well as archaeological and historical surveys. A better understanding of the geomorphological evolution of the Quebrada de Humahuaca will bridge the gap between the necessity of preservation and the need for safety, in keeping with the recommendations of UNESCO.

  15. Sensitive and specific peak detection for SELDI-TOF mass spectrometry using a wavelet/neural-network based approach.

    Directory of Open Access Journals (Sweden)

    Vincent A Emanuele

    SELDI-TOF mass spectrometer's compact size and automated, high-throughput design have been attractive to clinical researchers, and the platform has seen steady use in biomarker studies. Despite new algorithms and preprocessing pipelines that have been developed to address reproducibility issues, visual inspection of the results of SELDI spectra preprocessing by the best algorithms still shows miscalled peaks and systematic sources of error. This suggests that there continue to be problems with SELDI preprocessing. In this work, we study the preprocessing of SELDI in detail and introduce improvements. While many algorithms, including the vendor-supplied software, can identify peak clusters of specific mass (or m/z) in groups of spectra with high specificity and low false discovery rate (FDR), the algorithms tend to underperform in estimating the exact prevalence and intensity of peaks in those clusters. Thus group differences that at first appear very strong are shown, after careful and laborious hand inspection of the spectra, to be less than significant. Here we introduce a wavelet/neural network based algorithm which mimics what a team of expert human users would call as peaks in each of several hundred spectra in a typical SELDI clinical study. The wavelet denoising part of the algorithm optimally smooths the signal in each spectrum according to an improved suite of signal processing algorithms previously reported (the LibSELDI toolbox under development). The neural network part of the algorithm combines those results with the raw signal and a training dataset of expertly called peaks, to call peaks in a test set of spectra with approximately 95% accuracy. The new method was applied to data collected from a study of cervical mucus for the early detection of cervical cancer in HPV-infected women. The method shows promise in addressing the ongoing SELDI reproducibility issues.

  16. Mass movements in the Rio Grande Valley (Quebrada de Humahuaca, Northwestern Argentina: a methodological approach to reduce the risk

    Directory of Open Access Journals (Sweden)

    G. Marcato

    2009-10-01

    Slope processes such as slides and debris flows, are among the main events that induce effects on the Rio Grande sediment transport capacity. The slides mainly affect the slope of the Rio Grande river basin while debris and mud flows phenomena take place in the tributary valleys. In the past decades several mass movements occurred causing victims and great damages to roads and villages and therefore hazard assessment and risk mitigation is of paramount importance for a correct development of the area. This is also an urgent need since the Quebrada de Humahuaca was recently included in the UNESCO World Cultural Heritage. The growing tourism business may lead to an uncontrolled urbanization of the valley with the consequent enlargement of threatened areas.

    In this framework mitigation measures have to take into account not only technical aspects related to the physical behaviour of the moving masses but also environmental and sociological factors that could influence the effectiveness of the countermeasures.

    Mitigation of landslide effects is indeed rather complex because of the large extension of the territory and the particular geological and geomorphological setting. Moreover, the necessity to maintain the natural condition of the area, as prescribed by UNESCO, makes this task even more difficult.

    Nowadays no in-depth study of the entire area exists; therefore an integrated and multidisciplinary investigation plan is being set up, including geological and geomorphological investigations as well as archaeological and historical surveys. A better understanding of the geomorphological evolution of the Quebrada de Humahuaca will bridge the gap between the necessity of preservation and the need for safety, in keeping with the recommendations of UNESCO.

  17. Sensitive and specific peak detection for SELDI-TOF mass spectrometry using a wavelet/neural-network based approach.

    Science.gov (United States)

    Emanuele, Vincent A; Panicker, Gitika; Gurbaxani, Brian M; Lin, Jin-Mann S; Unger, Elizabeth R

    2012-01-01

    SELDI-TOF mass spectrometer's compact size and automated, high-throughput design have been attractive to clinical researchers, and the platform has seen steady use in biomarker studies. Despite new algorithms and preprocessing pipelines that have been developed to address reproducibility issues, visual inspection of the results of SELDI spectra preprocessing by the best algorithms still shows miscalled peaks and systematic sources of error. This suggests that there continue to be problems with SELDI preprocessing. In this work, we study the preprocessing of SELDI in detail and introduce improvements. While many algorithms, including the vendor-supplied software, can identify peak clusters of specific mass (or m/z) in groups of spectra with high specificity and low false discovery rate (FDR), the algorithms tend to underperform in estimating the exact prevalence and intensity of peaks in those clusters. Thus group differences that at first appear very strong are shown, after careful and laborious hand inspection of the spectra, to be less than significant. Here we introduce a wavelet/neural network based algorithm which mimics what a team of expert human users would call as peaks in each of several hundred spectra in a typical SELDI clinical study. The wavelet denoising part of the algorithm optimally smooths the signal in each spectrum according to an improved suite of signal processing algorithms previously reported (the LibSELDI toolbox under development). The neural network part of the algorithm combines those results with the raw signal and a training dataset of expertly called peaks, to call peaks in a test set of spectra with approximately 95% accuracy. The new method was applied to data collected from a study of cervical mucus for the early detection of cervical cancer in HPV-infected women. The method shows promise in addressing the ongoing SELDI reproducibility issues.

  18. Inline roasting hyphenated with gas chromatography-mass spectrometry as an innovative approach for assessment of cocoa fermentation quality and aroma formation potential.

    Science.gov (United States)

    Van Durme, Jim; Ingels, Isabel; De Winne, Ann

    2016-08-15

    Today, the cocoa industry is in great need of faster and more robust analytical techniques to objectively assess incoming cocoa quality. In this work, inline roasting hyphenated with a cooled injection system coupled to a gas chromatograph-mass spectrometer (ILR-CIS-GC-MS) was explored for the first time to assess the fermentation quality and/or overall aroma formation potential of cocoa. This innovative approach resulted in the in-situ formation of relevant cocoa aroma compounds. After comparison with data obtained by headspace solid-phase microextraction (HS-SPME-GC-MS) on conventionally roasted cocoa beans, ILR-CIS-GC-MS data on unroasted cocoa beans showed similar formation trends for important cocoa aroma markers as a function of fermentation quality. The latter approach requires only small aliquots of unroasted cocoa beans, can be automated, requires no sample preparation, needs relatively short analysis times (<1 h) and is highly reproducible. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Average-passage flow model development

    Science.gov (United States)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel in a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code, along with an algebraic turbulence model. A simulation of a stage-and-one-half low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  20. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended.

  1. Cosmic inhomogeneities and averaged cosmological dynamics.

    Science.gov (United States)

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  2. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...

  3. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  4. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  5. Schedule of average annual equipment ownership expense

    Science.gov (United States)

    2003-03-06

    The "Schedule of Average Annual Equipment Ownership Expense" is designed for use on Force Account bills of Contractors performing work for the Illinois Department of Transportation and local government agencies who choose to adopt these rates. This s...

  6. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  7. Development of a new sodium diclofenac certified reference material using the mass balance approach and ¹H qNMR to determine the certified property value.

    Science.gov (United States)

    Nogueira, Raquel; Garrido, Bruno C; Borges, Ricardo M; Silva, Gisele E B; Queiroz, Suzane M; Cunha, Valnei S

    2013-02-14

    Certified reference materials (CRMs) are essential tools to guarantee the metrological traceability of measurement results to the International System of Units (SI), i.e., the accuracy and comparability of results over time and space. In the pharmaceutical area, only a few CRMs are available and the use of (non-certified) reference materials is a much more common practice. In this paper, studies on a new candidate CRM of sodium diclofenac, based on ISO Guides 34:2009 and 35:2005, are described. The project steps included characterization, a homogeneity test, stability studies, and uncertainty estimation. In the characterization, the mass fractions of organic, inorganic, and volatile impurities were determined, and the results were cross-checked by independent reference methods or an interlaboratory study. The API mass fraction was calculated by mass balance and cross-checked by quantitative proton nuclear magnetic resonance (¹H qNMR). The paper also presents a Monte Carlo simulation to estimate the measurement uncertainty as an approach to validate the GUM results in ¹H qNMR. The homogeneity between batch units was verified, and the candidate CRM's stability under transport and storage conditions was evaluated in short- and long-term stability studies. The CRM certified property value and corresponding expanded uncertainty, obtained from the combined standard uncertainty multiplied by the coverage factor (k=2) for a confidence level of 95%, was (999.76 ± 0.10) mg g⁻¹. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Constraints on the nuclear equation of state from nuclear masses and radii in a Thomas-Fermi meta-modeling approach

    Science.gov (United States)

    Chatterjee, D.; Gulminelli, F.; Raduta, Ad. R.; Margueron, J.

    2017-12-01

    The question of correlations among empirical equation of state (EoS) parameters constrained by nuclear observables is addressed in a Thomas-Fermi meta-modeling approach. A recently proposed meta-modeling for the nuclear EoS in nuclear matter is augmented with a single finite size term to produce a minimal unified EoS functional able to describe the smooth part of the nuclear ground state properties. This meta-model can reproduce the predictions of a large variety of models, and interpolate continuously between them. An analytical approximation to the full Thomas-Fermi integrals is further proposed giving a fully analytical meta-model for nuclear masses. The parameter space is sampled and filtered through the constraint of nuclear mass reproduction with Bayesian statistical tools. We show that this simple analytical meta-modeling has a predictive power on masses, radii, and skins comparable to full Hartree-Fock or extended Thomas-Fermi calculations with realistic energy functionals. The covariance analysis on the posterior distribution shows that no physical correlation is present between the different EoS parameters. Concerning nuclear observables, a strong correlation between the slope of the symmetry energy and the neutron skin is observed, in agreement with previous studies.

  9. Symmetric Euler orientation representations for orientational averaging.

    Science.gov (United States)

    Mayerhöfer, Thomas G

    2005-09-01

    A new kind of orientation representation, called the symmetric Euler orientation representation (SEOR), is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of SEORs concerning orientational averaging are explored and compared to those of averaging schemes based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations makes the result depend on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be overcome partly by introducing a weighting factor, but only for two-axes-type Euler orientation representations. In a numerical evaluation of the average, a residual difference remains even if a two-axes-type Euler orientation representation is used together with a weighting factor. In contrast, this problem does not occur in principle if a symmetric Euler orientation representation is used, and the result of the averaging for both types of orientation representations converges with an increasing number of orientations considered in the numerical evaluation. Additionally, a weighting factor and/or non-equally spaced steps in the numerical evaluation of the average are not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.

  10. Aplikasi Moving Average Filter Pada Teknologi Enkripsi

    OpenAIRE

    Hermawi, Adrianto

    2007-01-01

    A method of encrypting and decrypting is introduced. The type of information experimented on is a mono wave sound file with a frequency of 44 kHz. The encryption technology uses a regular noise wave sound file (of equal frequency) and a moving average filter to decrypt and obtain the original signal. All experiments are programmed using MATLAB. By the end of the experiment the author concludes that the moving average filter can indeed be used as an alternative encryption technology.
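
    The abstract does not specify how the noise file is combined with the signal; a plausible minimal reading (mix in shared noise, subtract it on receipt, then smooth residuals with a moving average) can be sketched in Python rather than MATLAB as follows, with all signals synthetic.

```python
import numpy as np

def moving_average(x, window=5):
    """Smooth a signal with a simple moving average filter."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def encrypt(signal, noise):
    # Mask the signal by mixing in the shared noise waveform.
    return signal + noise

def decrypt(cipher, noise, window=5):
    # Subtract the shared noise, then smooth any residual artifacts.
    return moving_average(cipher - noise, window)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2.0 * np.pi * 5.0 * t)   # stand-in for the audio data
noise = rng.normal(0.0, 1.0, t.size)     # shared "noise wave" file

recovered = decrypt(encrypt(signal, noise), noise)
print(np.max(np.abs(recovered - signal)))  # only a small smoothing error
```

    Note that such a scheme is only as secret as the shared noise file; it illustrates the signal-processing idea, not a cryptographically secure design.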

  11. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
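
    The paper's exact iteration is not given in the abstract, but a standard way to compute average WFQ bandwidth shares iteratively is a water-filling loop: cap flows whose input rate is below their weighted fair share, redistribute the leftover capacity, and repeat. The link speed, weights and input rates below are made-up values.

```python
def wfq_average_bandwidth(link_speed, weights, input_rates):
    """Iteratively assign bandwidth: flows whose input rate is below their
    weighted fair share are capped at that rate, and the leftover capacity
    is redistributed among the remaining flows until nothing changes."""
    alloc = [0.0] * len(weights)
    capacity = float(link_speed)
    active = set(range(len(weights)))
    while active:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        capped = {i for i in active if input_rates[i] <= share[i]}
        if not capped:
            for i in active:              # all remaining flows are backlogged
                alloc[i] = share[i]
            break
        for i in capped:                  # satisfy low-rate flows fully
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
        active -= capped
    return alloc

# Hypothetical 10 Mbit/s link, three flows with weights 1:1:2.
alloc = wfq_average_bandwidth(10.0, [1, 1, 2], [2.0, 8.0, 8.0])
print(alloc)  # flow 0 capped at 2.0; the remainder split in a 1:2 ratio
```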

  12. Longitudinal multiple imputation approaches for body mass index or other variables with very low individual-level variability: the mibmi command in Stata.

    Science.gov (United States)

    Kontopantelis, Evangelos; Parisi, Rosa; Springate, David A; Reeves, David

    2017-01-13

    In modern health care systems, the computerization of all aspects of clinical care has led to the development of large data repositories. For example, in the UK, large primary care databases hold millions of electronic medical records, with detailed information on diagnoses, treatments, outcomes and consultations. Careful analyses of these observational datasets of routinely collected data can complement evidence from clinical trials or even answer research questions that cannot be addressed in an experimental setting. However, 'missingness' is a common problem for routinely collected data, especially for biological parameters over time. Absence of complete data for the whole of an individual's study period is a potential bias risk, and standard complete-case approaches may lead to biased estimates. However, the structure of the data values makes standard cross-sectional multiple-imputation approaches unsuitable. In this paper we propose and evaluate mibmi, a new command for cleaning and imputing longitudinal body mass index data. The regression-based data cleaning aspects of the algorithm can be useful when researchers analyze messy longitudinal data. Although the multiple imputation algorithm is computationally expensive, it performed similarly to, or even better than, existing alternatives when interpolating observations. The mibmi algorithm can be a useful tool for analyzing longitudinal body mass index data, or other longitudinal data with very low individual-level variability.
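
    The interpolation idea that makes low-variability longitudinal data tractable can be sketched very simply: fill each missing value from the individual's own observed trajectory. This is only the interpolation component, not the mibmi multiple-imputation algorithm itself, and the measurements are invented.

```python
import numpy as np

def interpolate_bmi(times, values):
    """Fill missing longitudinal measurements by within-individual linear
    interpolation -- defensible when individual-level variability is low."""
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    observed = ~np.isnan(values)
    filled = values.copy()
    filled[~observed] = np.interp(times[~observed],
                                  times[observed], values[observed])
    return filled

times = [0, 1, 2, 3, 4]                   # yearly visits (hypothetical)
bmi = [24.0, np.nan, 24.6, np.nan, 25.0]  # two missing measurements
print(interpolate_bmi(times, bmi))        # missing visits filled in
```

    A multiple-imputation procedure would additionally add draws of residual noise and repeat the fill several times to propagate uncertainty.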

  13. Simultaneous quantification of protein phosphorylation sites using liquid chromatography-tandem mass spectrometry-based targeted proteomics: a linear algebra approach for isobaric phosphopeptides.

    Science.gov (United States)

    Xu, Feifei; Yang, Ting; Sheng, Yuan; Zhong, Ting; Yang, Mi; Chen, Yun

    2014-12-05

    As one of the most studied post-translational modifications (PTMs), protein phosphorylation plays an essential role in almost all cellular processes. Current methods are able to predict and determine thousands of phosphorylation sites, whereas stoichiometric quantification of these sites is still challenging. Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS)-based targeted proteomics is emerging as a promising technique for site-specific quantification of protein phosphorylation, using proteolytic peptides as surrogates of proteins. However, several issues may limit its application, one of which relates to phosphopeptides with different phosphorylation sites and the same mass (i.e., isobaric phosphopeptides). While site-specific product ions allow these isobaric phosphopeptides to be distinguished and quantified, such ions are often absent or weak in tandem mass spectra. In this study, linear algebra algorithms were employed as an add-on to targeted proteomics to retrieve information on individual phosphopeptides from their common spectra. To achieve this simultaneous quantification, an LC-MS/MS-based targeted proteomics assay was first developed and validated for each phosphopeptide. Given the slopes and intercepts of the calibration curves of the phosphopeptides in each transition, linear algebraic equations were developed. Using a series of mock mixtures prepared with varying concentrations of each phosphopeptide, the reliability of the approach for quantifying isobaric phosphopeptides containing multiple phosphorylation sites (≥2) was discussed. Finally, we applied this approach to determine the phosphorylation stoichiometry of heat shock protein 27 (HSP27) at Ser78 and Ser82 in breast cancer cells and tissue samples.
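
    The linear-algebra step can be illustrated concretely: if each monitored transition's intensity is a linear (slope/intercept) function of each peptide's concentration, the mixture yields a small solvable system. All numbers below are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical calibration data: intensity of transition j is
# sum_i slopes[j, i] * conc[i] + intercepts[j] over peptides i.
slopes = np.array([[2.0, 0.5],
                   [0.4, 1.8]])
intercepts = np.array([0.1, 0.2])

def deconvolve(observed, slopes, intercepts):
    """Recover the concentration of each isobaric phosphopeptide from the
    shared transition intensities by solving the calibration equations."""
    return np.linalg.solve(slopes, observed - intercepts)

# Simulate a mock mixture with 3.0 and 5.0 units of the two peptides.
observed = slopes @ np.array([3.0, 5.0]) + intercepts
conc = deconvolve(observed, slopes, intercepts)
print(conc)  # recovers approximately [3. 5.]
```

    With more transitions than peptides the same idea becomes a least-squares fit (`np.linalg.lstsq`) rather than an exact solve.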

  14. Evaluation of nandrolone and ractopamine in the urine of veal calves: liquid chromatography-tandem mass spectrometry approach.

    Science.gov (United States)

    Chiesa, L; Panseri, S; Cannizzo, F T; Biolatti, B; Divari, S; Benevelli, R; Arioli, F; Pavlovic, R

    2017-04-01

    Under European legislation, the use of growth promoters is forbidden in food-producing livestock. The application of unofficial protocols with diverse combinations of veterinary drugs, administered in very low concentrations, hinders reliable detection and subsequent operative prevention. It has been observed that nandrolone (an anabolic steroid) and ractopamine (a β-adrenergic agonist) are occasionally administered to animals, but little is known about their synergic action when they are administered together. Two specific analytical methods based on liquid chromatography-tandem mass spectrometry were developed, both of which include hydrolysis of the corresponding conjugates. For the nandrolone method, solid-phase extraction was necessary for the complete elimination of interferences, while employment of the Quantitation Enhanced Data-Dependent scan mode during MS acquisition of ractopamine enabled the use of simple liquid-liquid extraction. The nandrolone method was linear in the range of 0.5-25 ng/mL, while the ractopamine calibration curve was constructed from 0.5 to 1000 ng/mL. The corresponding coefficients of correlation were >0.9907. The lower limit of quantification for both methods was 0.5 ng/mL, with overall recoveries >81%. Precisions, expressed as relative standard deviations, were acceptable; urine samples from calves fed a ractopamine-enriched diet were analysed. These methods might be useful for studying the elimination patterns of the administered compounds along with characterization of the main metabolic pathways. Copyright © 2016 John Wiley & Sons, Ltd.

  15. A Liquid Chromatography - Tandem Mass Spectrometry Approach for the Identification of Mebendazole Residue in Pork, Chicken, and Horse.

    Directory of Open Access Journals (Sweden)

    Ji Sun Lee

    A confirmatory and quantitative liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the determination of mebendazole and its hydrolyzed and reduced metabolites in pork, chicken, and horse muscles was developed and validated in this study. Anthelmintic compounds were extracted with ethyl acetate after the sample mixture was made alkaline, followed by liquid chromatographic separation on a reversed-phase C18 column. Gradient elution was performed with a mobile phase consisting of water containing 10 mM ammonium formate and methanol. This confirmatory method was validated according to EU requirements. Evaluated validation parameters included specificity, accuracy, precision (repeatability and within-laboratory reproducibility), analytical limits (decision limit and detection capability), and applicability. Most parameters were shown to conform to the EU requirements. The decision limit (CCα) and detection capability (CCβ) for all analytes ranged from 15.84 to 17.96 μg kg⁻¹. The limit of detection (LOD) and the limit of quantification (LOQ) for all analytes were 0.07 μg kg⁻¹ and 0.2 μg kg⁻¹, respectively. The developed method was successfully applied to monitoring samples collected from markets in major cities and showed great potential to be used as a regulatory tool to determine mebendazole residues in animal-based foods.

  16. Identification of specific bovine blood biomarkers with a non-targeted approach using HPLC ESI tandem mass spectrometry.

    Science.gov (United States)

    Lecrenier, M C; Marbaix, H; Dieu, M; Veys, P; Saegerman, C; Raes, M; Baeten, V

    2016-12-15

    Animal by-products are valuable protein sources in animal nutrition. Among them are blood products and blood meal, which are used as high-quality materials for their beneficial effects on growth and health. Within the framework of the feed ban relaxation, the development of complementary methods to refine the identification of processed animal proteins remains challenging. The aim of this study was to identify specific biomarkers that would allow the detection of bovine blood products and processed animal proteins using tandem mass spectrometry. Seventeen biomarkers were identified: nine peptides for bovine plasma powder; seven peptides for bovine haemoglobin powder, including six peptides for bovine blood meal; and one peptide for porcine blood. They were not detected in several commercial compound feeds or feed materials, such as blood by-products of other animal origins, milk-derived products and fish meal. These biomarkers could be used for developing a species-specific and blood-specific detection method. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Liquid chromatography-mass spectrometry in occupational toxicology: a novel approach to the study of biotransformation of industrial chemicals.

    Science.gov (United States)

    Manini, Paola; Andreoli, Roberta; Niessen, Wilfried

    2004-11-26

    Biological monitoring and biomarkers are used in occupational toxicology for a more accurate risk assessment of occupationally exposed people. Appropriate and validated biomarkers of internal dose, such as urinary metabolites, besides being positively correlated with external exposure, have predictive value for the risk of adverse effects. The application of liquid chromatography-mass spectrometry (LC-MS) in occupational and environmental toxicology, although relatively recent, has proved valid both for the determination of traditional biomarkers of exposure and in metabolism studies aimed at investigating minor metabolic routes and new, more specific biomarkers. This review presents selected applications of LC-MS to the study of the metabolism of industrial chemicals, such as n-hexane, benzene and other aromatic hydrocarbons, styrene and other monomers employed in the plastics industry, as well as other chemicals encountered in working environments, such as pesticides used by farmers and antineoplastic agents prepared by hospital personnel. Analytical and pre-analytical factors that affect the quantitative determination of urinary metabolites, i.e. sample preparation, matrix effect, ion suppression, use of internal standards, and calibration, are emphasized.

  18. Oligosaccharide substrate preferences of human extracellular sulfatase Sulf2 using liquid chromatography-mass spectrometry based glycomics approaches.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Sulfs are extracellular endosulfatases that selectively remove the 6-O-sulfate groups from cell surface heparan sulfate (HS) chains. By altering the sulfation at these particular sites, Sulfs function to remodel HS chains. As a result of this remodeling activity, HSulf2 regulates a multitude of cell-signaling events that depend on interactions between proteins and HS. Previous efforts to characterize the substrate specificity of human Sulfs (HSulfs) focused on the analysis of HS disaccharides and synthetic repeating units. In this study, we characterized the substrate preferences of human HSulf2 using HS oligosaccharides of various lengths and sulfation degrees from several naturally occurring HS sources, by applying liquid chromatography-mass spectrometry based glycomics methods. The results showed that HSulf2 preferentially digests highly sulfated HS oligosaccharides with zero acetyl groups, and that this preference is length dependent: HSulf2 digestion induced a greater decrease in sulfation for DP6 (DP: degree of polymerization) than for DP2, DP4 and DP8. In addition, HSulf2 preferentially digests the oligosaccharide domain located at the non-reducing end (NRE) of the HS and heparin chain, and the digestion products were altered only for specific isomers. HSulf2-treated NRE oligosaccharides also showed a greater decrease in cell proliferation than those from internal domains of the HS chain. After further chromatographic separation, we identified the three unsaturated hexasaccharides most preferred by HSulf2.

  19. Comparison of Approaches for Measuring the Mass Accommodation Coefficient for the Condensation of Water and Sensitivities to Uncertainties in Thermophysical Properties

    Science.gov (United States)

    2012-01-01

    We compare and contrast measurements of the mass accommodation coefficient of water on a water surface made using ensemble and single particle techniques under conditions of supersaturation and subsaturation, respectively. In particular, we consider measurements made using an expansion chamber, a continuous flow streamwise thermal gradient cloud condensation nuclei chamber, the Leipzig Aerosol Cloud Interaction Simulator, aerosol optical tweezers, and electrodynamic balances. Although this assessment is not intended to be comprehensive, these five techniques are complementary in their approach and give values that span the range from near 0.1 to 1.0 for the mass accommodation coefficient. We use the same semianalytical treatment to assess the sensitivities of the measurements made by the various techniques to thermophysical quantities (diffusion constants, thermal conductivities, saturation pressure of water, latent heat, and solution density) and experimental parameters (saturation value and temperature). This represents the first effort to assess and compare measurements made by different techniques to attempt to reduce the uncertainty in the value of the mass accommodation coefficient. Broadly, we show that the measurements are consistent within the uncertainties inherent to the thermophysical and experimental parameters and that the value of the mass accommodation coefficient should be considered to be larger than 0.5. Accurate control and measurement of the saturation ratio is shown to be critical for a successful investigation of the surface transport kinetics during condensation/evaporation. This invariably requires accurate knowledge of the partial pressure of water, the system temperature, the droplet curvature and the saturation pressure of water. Further, the importance of including and quantifying the transport of heat in interpreting droplet measurements is highlighted; the particular issues associated with interpreting measurements of condensation

  20. Optimization of Search Engines and Postprocessing Approaches to Maximize Peptide and Protein Identification for High-Resolution Mass Data.

    Science.gov (United States)

    Tu, Chengjian; Sheng, Quanhu; Li, Jun; Ma, Danjun; Shen, Xiaomeng; Wang, Xue; Shyr, Yu; Yi, Zhengping; Qu, Jun

    2015-11-06

    The two key steps in analyzing proteomic data generated by high-resolution MS are database searching and postprocessing. While the two steps are interrelated, studies on their combined effects and the optimization of these procedures have not been adequately conducted. Here, we investigated the performance of three popular search engines (SEQUEST, Mascot, and MS Amanda) in conjunction with five filtering approaches: respective score-based filtering, a group-based approach, local false discovery rate (LFDR), PeptideProphet, and Percolator. A total of eight data sets from various proteomes (e.g., E. coli, yeast, and human) produced by various instruments with high-accuracy survey scans (MS1) and high- or low-accuracy fragment ion scans (MS2) (LTQ-Orbitrap, Orbitrap-Velos, Orbitrap-Elite, Q-Exactive, Orbitrap-Fusion, and Q-TOF) were analyzed. It was found that combinations involving Percolator achieved markedly more peptide and protein identifications at the same FDR level than the other 12 combinations for all data sets. Among these, the SEQUEST-Percolator and MS Amanda-Percolator combinations performed slightly better for data sets with low-accuracy MS2 (ion trap or IT) and high-accuracy MS2 (Orbitrap or TOF), respectively. For approaches without Percolator, SEQUEST-group performs best for data sets with MS2 produced by collision-induced dissociation (CID) and IT analysis; Mascot-LFDR gives more identifications for data sets generated by higher-energy collisional dissociation (HCD) and analyzed in Orbitrap (HCD-OT) and in Orbitrap Fusion (HCD-IT); MS Amanda-group excels for the Q-TOF data set and the Orbitrap Velos HCD-OT data set. Therefore, if Percolator is not used, a specific combination should be applied for each type of data set. Moreover, a higher percentage of multiple-peptide proteins and lower variation of protein spectral counts were observed when analyzing technical replicates using Percolator.

  1. Demonstration of a Model Averaging Capability in FRAMES

    Science.gov (United States)

    Meyer, P. D.; Castleton, K. J.

    2009-12-01

    Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
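
    The averaging step itself reduces to a weighted mean of alternative model outputs. A minimal sketch follows; the dose estimates and weights are invented, and FRAMES' actual implementation (which also handles probability distributions) is more elaborate.

```python
def model_average(predictions, weights):
    """Weighted average of alternative model outputs; the weights could
    come from user input or from model calibration results."""
    total = sum(weights)  # normalize so the weights sum to one
    return sum(w / total * p for w, p in zip(weights, predictions))

# Three hypothetical dose estimates (mSv) with calibration-derived weights.
print(model_average([1.2, 0.9, 1.5], [0.5, 0.3, 0.2]))  # ~1.17
```

    For probabilistic output, the same weights would be applied to each model's sampled distribution rather than to a single deterministic value.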

  2. Sampling and mass spectrometry approaches for the detection of drugs and foreign contaminants in breath for homeland security applications

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Audrey Noreen [Michigan State Univ., East Lansing, MI (United States)

    2009-01-01

    Homeland security relies heavily on analytical chemistry to identify suspicious materials and persons. Traditionally this role has focused on attribution, determining the type and origin of an explosive, for example. But as technology advances, analytical chemistry can and will play an important role in the prevention and preemption of terrorist attacks. More sensitive and selective detection techniques can allow suspicious materials and persons to be identified even before a final destructive product is made. The work presented herein focuses on the use of commercial and novel detection techniques for application to the prevention of terrorist activities. Although drugs are not commonly thought of when discussing terrorism, narcoterrorism has become a significant threat in the 21st century. The role of the drug trade in the funding of terrorist groups is prevalent; thus, reducing the trafficking of illegal drugs can play a role in the prevention of terrorism by cutting off much needed funding. To do so, sensitive, specific, and robust analytical equipment is needed to quickly identify a suspected drug sample no matter what matrix it is in. Single Particle Aerosol Mass Spectrometry (SPAMS) is a novel technique that has previously been applied to biological and chemical detection. The current work applies SPAMS to drug analysis, identifying the active ingredients in single component, multi-component, and multi-tablet drug samples in a relatively non-destructive manner. In order to do so, a sampling apparatus was created to allow particle generation from drug tablets with on-line introduction to the SPAMS instrument. Rules trees were developed to automate the identification of drug samples on a single particle basis. A novel analytical scheme was also developed to identify suspect individuals based on chemical signatures in human breath. 
Human breath was sampled using an RTube{trademark} and the trace volatile organic compounds (VOCs) were preconcentrated using solid

  3. Compression simulations of plant tissue in 3D using a mass-spring system approach and discrete element method.

    Science.gov (United States)

    Pieczywek, Piotr M; Zdunek, Artur

    2017-10-18

    A hybrid model based on a mass-spring system methodology coupled with the discrete element method (DEM) was implemented to simulate the deformation of cellular structures in 3D. Models of individual cells were constructed using the particles which cover the surfaces of cell walls and are interconnected in a triangle mesh network by viscoelastic springs. The spatial arrangement of the cells required to construct a virtual tissue was obtained using Poisson-disc sampling and Voronoi tessellation in 3D space. Three structural features were included in the model: viscoelastic material of cell walls, linearly elastic interior of the cells (simulating compressible liquid) and a gas phase in the intercellular spaces. The response of the models to an external load was demonstrated during quasi-static compression simulations. The sensitivity of the model was investigated at fixed compression parameters with variable tissue porosity, cell size and cell wall properties, such as thickness and Young's modulus, and a stiffness of the cell interior that simulated turgor pressure. The extent of the agreement between the simulation results and other models published is discussed. The model demonstrated the significant influence of tissue structure on micromechanical properties and allowed for the interpretation of the compression test results with respect to changes occurring in the structure of the virtual tissue. During compression virtual structures composed of smaller cells produced higher reaction forces and therefore they were stiffer than structures with large cells. The increase in the number of intercellular spaces (porosity) resulted in a decrease in reaction forces. The numerical model was capable of simulating the quasi-static compression experiment and reproducing the strain stiffening observed in experiment. Stress accumulation at the edges of the cell walls where three cells meet suggests that cell-to-cell debonding and crack propagation through the contact edge of

  4. Comprehensive Proteoform Characterization of Plasma Complement Component C8αβγ by Hybrid Mass Spectrometry Approaches

    Science.gov (United States)

    Franc, Vojtech; Zhu, Jing; Heck, Albert J. R.

    2018-03-01

    The human complement hetero-trimeric C8αβγ (C8) protein assembly ( 150 kDa) is an important component of the membrane attack complex (MAC). C8 initiates membrane penetration and coordinates MAC pore formation. Here, we charted in detail the structural micro-heterogeneity within C8, purified from human plasma, combining high-resolution native mass spectrometry and (glyco)peptide-centric proteomics. The intact C8 proteoform profile revealed at least 20 co-occurring MS signals. Additionally, we employed ion exchange chromatography to separate purified C8 into four distinct fractions. Their native MS analysis revealed even more detailed structural micro-heterogeneity on C8. Subsequent peptide-centric analysis, by proteolytic digestion of C8 and LC-MS/MS, provided site-specific quantitative profiles of different types of C8 glycosylation. Combining all this data provides a detailed specification of co-occurring C8 proteoforms, including experimental evidence on N-glycosylation, C-mannosylation, and O-glycosylation. In addition to the known N-glycosylation sites, two more N-glycosylation sites were detected on C8. Additionally, we elucidated the stoichiometry of all C-mannosylation sites in all the thrombospondin-like (TSP) domains of C8α and C8β. Lastly, our data contain the first experimental evidence of O-linked glycans located on C8γ. Albeit low abundant, these O-glycans are the first PTMs ever detected on this subunit. By placing the observed PTMs in structural models of free C8 and C8 embedded in the MAC, it may be speculated that some of the newly identified modifications may play a role in the MAC formation. [Figure not available: see fulltext.

  5. Mitral annulus caseous calcification mimicking cardiac mass in asymptomatic patient – multimodality imaging approach to incidental echocardiographic finding

    International Nuclear Information System (INIS)

    Możeńska, Olga; Sypuła, Sławomir; Celińska-Spoder, Małgorzata; Walecki, Jerzy; Kosior, Dariusz A.

    2014-01-01

    Caseous calcification of mitral annulus is rather rare echocardiographic finding with prevalence of 0.6% in pts. with proven mitral annular calcification and 0.06% to 0.07% in large series of subjects in all ages. Echocardiographic images of caseous calcification are often heterogenous due to calcium and lipid deposits, and the masses show hyperechogenic and hypoechogenic areas. However the appearance of caseous calcification can imitate that of abscess, tumors and cysts, surgical treatment may not be needed when there is no obstruction. 76-year old obese (BMI 32 kg/m 2 ), female patient with history of hypertension, stable coronary artery disease, diabetes type 2 and hyperlipidemia presented with no symptoms of mitral valve dysfunction and had no abnormalities on physical exam. Transesophageal echocardiography identified well-organized, composite, immobile lesion (22×15 mm) localized in the posterior part of the mitral annulus, with markedly calcified margins, and no significant impact on the valve function. In computed tomography (CT) lesion was described as calcified (24×22×17.5 mm), connected with posterior leaflet and posterior part of the mitral annulus, reducing posterior leaflet mobility. CT brought the suggestion of caseous mitral annular calcification. Coming to a conclusion, bearing in mind no mitral valve dysfunction at that time, patient was offered conservative treatment. Although caseous mitral annular calcification is typically an incidental finding, accurate recognition is needed to avoid mistaking the lesion for a tumor or abscess, which may result in unnecessary cardiac surgery. However this entity is diagnosed on cardiac MRI, multi-modality imaging, especially non-contrast CT, allows for the confident, prospective diagnosis

  6. Sequential injection approach for simultaneous determination of ultratrace plutonium and neptunium in urine with accelerator mass spectrometry

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2013-01-01

    . Several experimental parameters affecting the analytical performance were investigated and compared including sample preboiling operation, aging time, amount of coprecipitating reagent, reagent for pH adjustment, sedimentation time, and organic matter decomposition approach. The overall analytical results...... show that preboiling and aging are important for obtaining high chemical yields for both Pu and Np, which is possibly related to the aggregation and adsorption behavior of organic substances contained in urine. Although the optimal condition for Np and Pu simultaneous determination requires 5-day aging...... time, an immediate coprecipitation without preboiling and aging could also provide fairly satisfactory chemical yields for both Np and Pu (50-60%) with high sample throughput (4 h/sample). Within the developed method, (242)Pu was exploited as chemical yield tracer for both Pu and Np isotopes. (242)Pu...

  7. tavg3_3d_chm_Ne: MERRA Chem 3D IAU C-Grid Edge Mass Flux, Time Average 3-Hourly 0.667 x 0.5 degree V5.2.0 (MAT3NECHM) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — The MAT3NECHM or tavg3_3d_chm_Ne data product is the MERRA Data Assimilation System Chemistry 3-Dimensional chemistry on layer Edges that is time averaged, 3D model...

  8. A Bayesian approach to quantifying the effects of mass poultry vaccination upon the spatial and temporal dynamics of H5N1 in Northern Vietnam.

    Directory of Open Access Journals (Sweden)

    Patrick G T Walker

    2010-02-01

    Full Text Available Outbreaks of H5N1 in poultry in Vietnam continue to threaten the livelihoods of those reliant on poultry production whilst simultaneously posing a severe public health risk given the high mortality associated with human infection. Authorities have invested significant resources in order to control these outbreaks. Of particular interest is the decision, following a second wave of outbreaks, to move from a "stamping out" approach to the implementation of a nationwide mass vaccination campaign. Outbreaks which occurred around this shift in policy provide a unique opportunity to evaluate the relative effectiveness of these approaches and to help other countries make informed judgements when developing control strategies. Here we use Bayesian Markov Chain Monte Carlo (MCMC data augmentation techniques to derive the first quantitative estimates of the impact of the vaccination campaign on the spread of outbreaks of H5N1 in northern Vietnam. We find a substantial decrease in the transmissibility of infection between communes following vaccination. This was coupled with a significant increase in the time from infection to detection of the outbreak. Using a cladistic approach we estimated that, according to the posterior mean effect of pruning the reconstructed epidemic tree, two thirds of the outbreaks in 2007 could be attributed to this decrease in the rate of reporting. The net impact of these two effects was a less intense but longer-lasting wave and, whilst not sufficient to prevent the sustained spread of outbreaks, an overall reduction in the likelihood of the transmission of infection between communes. These findings highlight the need for more effectively targeted surveillance in order to help ensure that the effective coverage achieved by mass vaccination is converted into a reduction in the likelihood of outbreaks occurring which is sufficient to control the spread of H5N1 in Vietnam.

  9. High-throughput screening for various classes of doping agents using a new 'dilute-and-shoot' liquid chromatography-tandem mass spectrometry multi-target approach.

    Science.gov (United States)

    Guddat, S; Solymos, E; Orlovius, A; Thomas, A; Sigmund, G; Geyer, H; Thevis, M; Schänzer, W

    2011-01-01

    A new multi-target approach based on liquid chromatography--electrospray ionization tandem mass spectrometry (LC-(ESI)-MS/MS) is presented to screen for various classes of prohibited substances using direct injection of urine specimens. With a highly sensitive new generation hybrid mass spectrometer classic groups of drugs--for example, diuretics, beta2-agonists--stimulants and narcotics are detectable at concentration levels far below the required limits. Additionally, more challenging and various new target compounds could be implemented. Model compounds of stimulant conjugates were studied to investigate a possible screening without complex sample preparation. As a main achievement, the integration of the plasma volume expanders dextran and hydroxyethyl starch (HES), commonly analyzed in time-consuming, stand-alone procedures, is accomplished. To screen for relatively new prohibited compounds, a common metabolite of the selective androgen receptor modulator (SARMs) andarine, a metabolite of growth hormone releasing peptide (GHRP-2), and 5-amino-4-imidazolecarboxyamide ribonucleoside (AICAR) are analyzed. Following a completely new approach, conjugates of di(2-ethylhexyl) phthalate (DEHP) metabolites are monitored to detect abnormally high levels of plasticizers indicating for illicit blood transfusion. The assay was fully validated for qualitative purposes considering the parameters specificity, intra- (3.2-16.6%) and inter-day precision (0.4-19.9%) at low, medium and high concentration, robustness, limit of detection (1-70 ng/ml, dextran: 30 µg/ml, HES: 10 µg/ml) and ion suppression/enhancement effects. The analyses of post-administration and routine doping control samples demonstrates the applicability of the method for sports drug testing. This straightforward and reliable approach accomplishes the combination of different screening procedures resulting in a high-throughput method that increases the efficiency of the labs daily work. Copyright © 2011 John

  10. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  11. Backus and Wyllie Averages for Seismic Attenuation

    Science.gov (United States)

    Qadrouh, Ayman N.; Carcione, José M.; Ba, Jing; Gei, Davide; Salim, Ahmed M.

    2018-01-01

    Backus and Wyllie equations are used to obtain average seismic velocities at zero and infinite frequencies, respectively. Here, these equations are generalized to obtain averages of the seismic quality factor (inversely proportional to attenuation). The results indicate that the Wyllie velocity is higher than the corresponding Backus quantity, as expected, since the ray velocity is a high-frequency limit. On the other hand, the Wyllie quality factor is higher than the Backus one, following the velocity trend, i.e., the higher the velocity (the stiffer the medium), the higher the attenuation. Since the quality factor can be related to properties such as porosity, permeability, and fluid viscosity, these averages can be useful for evaluating reservoir properties.

  12. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  13. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence, thus to improve the system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory which uses the small-scale and large-scale spatial filters, and our previously presented expression that shows the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Irradiance flux variance variations are examined versus the oceanic turbulence parameters and the receiver aperture diameter are examined in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.

  14. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  15. Determination of size and mass-and number-based concentration of biogenic SeNPs synthesized by lactic acid bacteria by using a multimethod approach.

    Science.gov (United States)

    Moreno-Martin, Gustavo; Pescuma, Micaela; Pérez-Corona, Teresa; Mozzi, Fernanda; Madrid, Yolanda

    2017-11-01

    Selenium nanoparticles (SeNPs) were synthesized by a green technology using lactic acid bacteria (LAB, Lactobacillus acidophilus, L. delbrueckii subsp. bulgaricus and L. reuteri). The exposure of aqueous sodium selenite to LAB led to the synthesis of SeNPs. Characterization of SeNPs by transmission electron microscopy with energy dispersive X-ray spectrum (EDXS) analysis revealed the presence of stable, predominantly monodispersed and spherical SeNPs of an average size of 146 ± 71 nm. Additionally, SeNPs hydrodynamic size was determined by dispersive light scattering (DLS) and nanoparticle tracking analysis (NTA). For this purpose, a methodology based on the use of surfactants in basic medium was developed for isolating SeNPs from the bacterial pellet. The hydrodynamic size values provided by DLS and NTA were 258 ± 4 and 187 ± 56 nm, respectively. NTA measurements of number-based concentration reported values of (4.67±0.30)x10 9 SeNPs mL -1 with a relative standard deviation lower than 5% (n = 3). The quantitative results obtained by NTA were supported by theoretical calculations. Asymmetrical flow field flow fractionation (AF 4 ) on line coupled to the inductively couple plasma mass spectrometry (ICP-MS) and off-line coupled to DLS was further employed to characterize biogenic SeNPs. The distribution of the particle size for the Se-containing peak provide an average size of (247 ± 14) nm. The data obtained by independent techniques were in good agreement and the developed methodology could be implemented for characterizing NPs in complex matrices such as biogenic nanoparticles embedded inside microbial material. Copyright © 2017. Published by Elsevier B.V.

  16. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...

  17. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  18. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompletness in the past. Another way of stating ...

  19. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, Β Θ , is derived. A method for unobtrusively measuring the quantities used to evaluate Β Θ in Extrap T1 is described. The results if a series of measurements yielding Β Θ as a function of externally applied toroidal field are presented. (author)

  20. Averages of operators in finite Fermion systems

    International Nuclear Information System (INIS)

    Ginocchio, J.N.

    1980-01-01

    The important ingredients in the spectral analysis of Fermion systems are the average of operators. In this paper we shall derive expressions for averages of operators in truncated Fermion spaces in terms of the minimal information needed about the operator. If we take the operator to be powers of the Hamiltonian we can then study the conditions on a Hamiltonian for the eigenvalues of the Hamiltonian in the truncated space to be Gaussian distributed. The theory of scalar traces is reviewed, and the dependence on nucleon number and single-particle states is reviewed. These results are used to show that a dilute non-interacting system will have Gaussian distributed eigenvalues, i.e., its cumulants will tend to zero, for a large number of Fermions. The dominant terms in the cumulants of a dilute interacting Fermion system are derived. In this case the cumulants depend crucially on the interaction even for a large number of Fermions. Configuration averaging is briefly discussed. Finally, comments are made on averaging for a fixed number of Fermions and angular momentum

  1. Full averaging of fuzzy impulsive differential inclusions

    Directory of Open Access Journals (Sweden)

    Natalia V. Skripnik

    2010-09-01

    Full Text Available In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend the similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007, for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009, and for fuzzy differential inclusions (Skripnik, 2009.

  2. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  3. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  4. High Average Power Optical FEL Amplifiers

    CERN Document Server

    Ben-Zvi, I; Litvinenko, V

    2005-01-01

    Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier is being designed to operate on the 0.5 ampere Energy Recovery Li...

  5. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  6. Sample size for estimating average productive traits of pigeon pea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2016-04-01

    Full Text Available ABSTRACT: The objectives of this study were to determine the sample size, in terms of number of plants, needed to estimate the average values of productive traits of the pigeon pea and to determine whether the sample size needed varies between traits and between crop years. Separate uniformity trials were conducted in 2011/2012 and 2012/2013. In each trial, 360 plants were demarcated, and the fresh and dry masses of roots, stems, and leaves and of shoots and the total plant were evaluated during blossoming for 10 productive traits. Descriptive statistics were calculated, normality and randomness were checked, and the sample size was calculated. There was variability in the sample size between the productive traits and crop years of the pigeon pea culture. To estimate the averages of the productive traits with a 20% maximum estimation error and 95% confidence level, 70 plants are sufficient.

  7. A control-oriented approach to estimate the injected fuel mass on the basis of the measured in-cylinder pressure in multiple injection diesel engines

    International Nuclear Information System (INIS)

    Finesso, Roberto; Spessa, Ezio

    2015-01-01

    Highlights: • Control-oriented method to estimate injected quantities from in-cylinder pressure. • Able to calculate the injected quantities for multiple injection strategies. • Based on the inversion of a heat-release predictive model. • Low computational time demanding. - Abstract: A new control-oriented methodology has been developed to estimate the injected fuel quantities, in real-time, in multiple injection DI diesel engines on the basis of the measured in-cylinder pressure. The method is based on the inversion of a predictive combustion model that was previously developed by the authors, and that is capable of estimating the heat release rate and the in-cylinder pressure on the basis of the injection rate. The model equations have been rewritten in order to derive the injected mass as an output quantity, starting from use of the measured in-cylinder pressure as input. It has been verified that the proposed method is capable of estimating the injected mass of pilot pulses with an uncertainty of the order of ±0.15 mg/cyc, and the total injected mass with an uncertainty of the order of ±0.9 mg/cyc. The main sources of uncertainty are related to the estimation of the in-cylinder heat transfer and of the isentropic coefficient γ = c p /c v . The estimation of the actual injected quantities in the combustion chamber can represent a powerful means to diagnose the behavior of the injectors during engine operation, and offers the possibility of monitoring effects, such as injector ageing and injector coking, as well as of allowing an accurate control of the pilot injected quantities to be obtained; the latter are in fact usually characterized by a large dispersion, with negative consequences on the combustion quality and emission formation. The approach is characterized by a very low computational time, and is therefore suitable for control-oriented applications.

  8. Comparative Ebulliometry: a Simple, Reliable Technique for Accurate Measurement of the Number Average Molecular Weight of Macromolecules. Preliminary Studies on Heavy Crude Fractions Ébulliométrie comparative : technique simple et fiable pour déterminer précisément la masse molaire moyenne en nombre des macromolécules. Etudes préliminaires sur des fractions lourdes de bruts

    Directory of Open Access Journals (Sweden)

    Behar E.

    2006-12-01

    Full Text Available This article is divided into two parts. In the first part, the authors present a comparison of the major techniques for the measurement of the molecular weight of macromolecules. The bibliographic results are gathered in several tables. In the second part, a comparative ebulliometer for the measurement of the number average molecular weight (Mn of heavy crude oil fractions is described. The high efficiency of the apparatus is demonstrated with a preliminary study of atmospheric distillation residues and resins. The measurement of molecular weights up to 2000 g/mol is possible in less than 4 hours with an uncertainty of about 2%. Cet article comprend deux parties. Dans la première, les auteurs présentent une comparaison entre les principales techniques de détermination de la masse molaire de macromolécules. Les résultats de l'étude bibliographique sont rassemblés dans plusieurs tableaux. La seconde partie décrit un ébulliomètre comparatif conçu pour la mesure de la masse molaire moyenne en nombre (Mn des fractions lourdes des bruts. Une illustration de l'efficacité de cet appareil est indiquée avec l'étude préliminaire de résidus de distillation atmosphérique et de résines. En particulier, la mesure de masses molaires pouvant atteindre 2000 g/mol est possible en moins de 4 heures avec une incertitude expérimentale de l'ordre de 2 %.

  9. Average Case Analysis of Java 7's Dual Pivot Quicksort

    OpenAIRE

    Wild, Sebastian; Nebel, Markus E.

    2013-01-01

    Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot approach, an idea that was considered not promising by several theoretical studies in the past. In this paper, we identify the reason for this unexpe...

  10. Average Likelihood Methods for Code Division Multiple Access (CDMA)

    Science.gov (United States)

    2014-05-01

    the number of unknown variables grows, the averaging process becomes an extremely complex task. In the multiuser detection , a closely related problem...Theoretical Background The classification of DS/CDMA signals should not be confused with the problem of multiuser detection . The multiuser detection deals...beginning of the sequence. For simplicity, our approach will use similar assumptions to those used in multiuser detection , i.e., chip

  11. Global chemical profiling based quality evaluation approach of rhubarb using ultra performance liquid chromatography with tandem quadrupole time-of-flight mass spectrometry.

    Science.gov (United States)

    Zhang, Li; Liu, Haiyu; Qin, Lingling; Zhang, Zhixin; Wang, Qing; Zhang, Qingqing; Lu, Zhiwei; Wei, Shengli; Gao, Xiaoyan; Tu, Pengfei

    2015-02-01

    A global chemical profiling based quality evaluation approach using ultra performance liquid chromatography with tandem quadrupole time-of-flight mass spectrometry was developed for the quality evaluation of three rhubarb species, including Rheum palmatum L., Rheum tanguticum Maxim. ex Balf., and Rheum officinale Baill. Considering that comprehensive detection of chemical components is crucial for the global profile, a systemic column performance evaluation method was developed. Based on this, a Cortecs column was used to acquire the chemical profile, and Chempattern software was employed to conduct similarity evaluation and hierarchical cluster analysis. The results showed R. tanguticum could be differentiated from R. palmatum and R. officinale at the similarity value 0.65, but R. palmatum and R. officinale could not be distinguished effectively. Therefore, a common pattern based on three rhubarb species was developed to conduct the quality evaluation, and the similarity value 0.50 was set as an appropriate threshold to control the quality of rhubarb. A total of 88 common peaks were identified by their accurate mass and fragmentation, and partially verified by reference standards. Through the verification, the newly developed method could be successfully used for evaluating the holistic quality of rhubarb. It would provide a reference for the quality control of other herbal medicines. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. The geochemical transformation of soils by agriculture and its dependence on soil erosion: An application of the geochemical mass balance approach.

    Science.gov (United States)

    Yoo, Kyungsoo; Fisher, Beth; Ji, Junling; Aufdenkampe, Anthony; Klaminder, Jonatan

    2015-07-15

    Agricultural activities alter the elemental budgets of soils and thus their long-term geochemical development and suitability for food production. This study examined the utility of a geochemical mass balance approach that has frequently been used for understanding the geochemical aspects of soil formation, but had not previously been applied to agricultural settings. A protected forest served as a reference to quantify the cumulative fluxes of Ca, P, K, and Pb at nearby tilled cropland. This comparison was made at two sites with contrasting erosional environments: the relatively flat Coastal Plain in Delaware vs. the hilly Piedmont in Pennsylvania. Mass balance calculations suggested that liming not only replenished the Ca lost prior to agricultural practice but also added a substantial surplus at both sites. At the relatively slowly eroding Coastal Plain site, the agricultural soil exhibited enrichment of P and less depletion of K, while both elements were depleted in the forest soil. At the rapidly eroding Piedmont site, erosion inhibited P enrichment. Similarly, agricultural Pb contamination appeared to have resulted in Pb enrichment in the relatively slowly eroding Coastal Plain agricultural soil, but not in the rapidly eroding Piedmont soils. We conclude that agricultural practices transform soils into a new geochemical state in which current levels of Ca, P, and Pb exceed those provided by the local soil minerals, but such impacts are significantly offset by soil erosion. Copyright © 2015 Elsevier B.V. All rights reserved.
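    The geochemical mass balance approach is conventionally expressed through the open-system mass-transport coefficient τ, computed for each mobile element against an immobile index element such as Zr or Ti. A minimal sketch of this standard calculation (the concentrations below are hypothetical, not the study's data):

```python
def tau(c_j_w, c_j_p, c_i_w, c_i_p):
    """Open-system mass-transport coefficient (tau) for a mobile element j.

    c_j_w, c_j_p : concentration of element j in the weathered soil / parent material
    c_i_w, c_i_p : concentration of an immobile index element (e.g. Zr or Ti)
                   in the weathered soil / parent material
    tau = 0  -> no net gain or loss
    tau = -1 -> complete loss;  tau > 0 -> net enrichment (e.g. from liming)
    """
    return (c_j_w / c_j_p) * (c_i_p / c_i_w) - 1.0

# Hypothetical illustration: Ca in a limed agricultural topsoil vs. its parent.
tau_ca = tau(c_j_w=1.2, c_j_p=0.8, c_i_w=450.0, c_i_p=400.0)  # positive: Ca surplus
```

    Comparing τ profiles between the forest reference and the tilled soil is what isolates the agricultural contribution to each element's budget.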

  13. Distribution patterns of flavonoids from three Momordica species by ultra-high performance liquid chromatography quadrupole time of flight mass spectrometry: a metabolomic profiling approach

    Directory of Open Access Journals (Sweden)

    Ntakadzeni Edwin Madala

    Full Text Available ABSTRACT Plants from the Momordica genus, Cucurbitaceae, are used for several purposes, especially for their nutritional and medicinal properties. Commonly known as bitter gourd, melon and cucumber, these plants are characterized by a bitter taste owing to their large content of cucurbitacin compounds. However, several reports have shown an undisputed correlation between the therapeutic activities and polyphenolic flavonoid content. Using ultra-high performance liquid chromatography quadrupole time of flight mass spectrometry in combination with multivariate data models such as principal component analysis and hierarchical cluster analysis, three Momordica species (M. foetida Schumach., M. charantia L. and M. balsamina L. were chemo-taxonomically grouped based on their flavonoid content. Using a conventional mass spectrometric-based approach, thirteen flavonoids were tentatively identified, and the three species were found to contain different isomers of the quercetin-, kaempferol- and isorhamnetin-O-glycosides. Our results indicate that Momordica species are overall very rich sources of flavonoids but contain different forms thereof. Furthermore, to the best of our knowledge, this is the first report on the flavonoid content of M. balsamina L.

  14. The state support of small and average entrepreneurship in Ukraine

    Directory of Open Access Journals (Sweden)

    Т.О. Melnyk

    2015-03-01

    Full Text Available The purposes, principles and basic directions of state policy for the development of small and average business in Ukraine are defined. Conditions and restrictions on granting state support to subjects of small and average business are outlined. The modern infrastructure of business support by region is considered. Different kinds of state support of small and average business are characterized: financial; informational; consulting; support in the sphere of innovations, science and industrial production; support of subjects who conduct export activity; and support in the sphere of training, retraining and improvement of the professional skills of administrative and business staff. Approaches to reforming state control of small and average business are generalized, especially regarding the estimation of the risk degree of economic activities, the quantity and frequency of inspections, the registration of certificates drawn up following planned state control actions, and the creation of an effective mechanism for coordinating the state control bodies. The most promising directions of state support of small and average business in Ukraine under modern economic conditions are defined.

  15. Study of different HILIC, mixed-mode, and other aqueous normal-phase approaches for the liquid chromatography/mass spectrometry-based determination of challenging polar pesticides.

    Science.gov (United States)

    Vass, Andrea; Robles-Molina, José; Pérez-Ortega, Patricia; Gilbert-López, Bienvenida; Dernovics, Mihaly; Molina-Díaz, Antonio; García-Reyes, Juan F

    2016-07-01

    The aim of the study was to evaluate the performance of different chromatographic approaches for the liquid chromatography/mass spectrometry (LC-MS(/MS)) determination of 24 highly polar pesticides. The studied compounds, which are in most cases unsuitable for conventional LC-MS(/MS) multiresidue methods were tested with nine different chromatographic conditions, including two different hydrophilic interaction liquid chromatography (HILIC) columns, two zwitterionic-type mixed-mode columns, three normal-phase columns operated in HILIC-mode (bare silica and two silica-based chemically bonded columns (cyano and amino)), and two standard reversed-phase C18 columns. Different sets of chromatographic parameters in positive (for 17 analytes) and negative ionization modes (for nine analytes) were examined. In order to compare the different approaches, a semi-quantitative classification was proposed, calculated as the percentage of an empirical performance value, which consisted of three main features: (i) capacity factor (k) to characterize analyte separation from the void, (ii) relative response factor, and (iii) peak shape based on analytes' peak width. While no single method was able to provide appropriate detection of all the 24 studied species in a single run, the best suited approach for the compounds ionized in positive mode was based on a UHPLC HILIC column with 1.8 μm particle size, providing appropriate results for 22 out of the 24 species tested. In contrast, the detection of glyphosate and aminomethylphosphonic acid could only be achieved with a zwitterionic-type mixed-mode column, which proved to be suitable only for the pesticides detected in negative ion mode. Finally, the selected approach (UHPLC HILIC) was found to be useful for the determination of multiple pesticides in oranges using HILIC-ESI-MS/MS, with limits of quantitation in the low microgram per kilogram in most cases. Graphical Abstract HILIC improves separation of multiclass polar pesticides.
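    The paper's empirical performance value is only described qualitatively in this record, so the sketch below is a hypothetical reconstruction: it normalizes the three stated features (capacity factor k, relative response factor, peak width) and averages them into a percentage. The thresholds `k_min` and `width_max` and the equal weighting are assumptions for illustration, not the authors' actual formula:

```python
def performance_score(k, rel_response, peak_width, k_min=1.0, width_max=0.5):
    """Hypothetical composite score (0-100 %) combining the three features the
    record names: (i) capacity factor k (separation from the void), (ii) relative
    response factor, and (iii) peak shape based on peak width (in minutes).
    Equal weights and the normalization thresholds are illustrative assumptions.
    """
    retention = min(k / k_min, 1.0)                  # 1.0 once k clears the void
    response = max(0.0, min(rel_response, 1.0))      # clamp to [0, 1]
    shape = max(0.0, 1.0 - peak_width / width_max)   # narrower peak -> better
    return 100.0 * (retention + response + shape) / 3.0

# A well-retained, reasonably sensitive, fairly narrow peak:
score = performance_score(k=2.3, rel_response=0.8, peak_width=0.2)
```

    Such a score lets heterogeneous columns (HILIC, mixed-mode, bare silica, C18) be ranked on a common semi-quantitative scale per analyte.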

  16. Averaging theorems in finite deformation plasticity

    CERN Document Server

    Nemat-Nasser, S C

    1999-01-01

    The transition from micro- to macro-variables of a representative volume element (RVE) of a finitely deformed aggregate (e.g., a composite or a polycrystal) is explored. A number of exact fundamental results on averaging techniques, valid at finite deformations and rotations of any arbitrary heterogeneous continuum, are obtained. These results depend on the choice of suitable kinematic and dynamic variables. For finite deformations, the deformation gradient and its rate, and the nominal stress and its rate, are optimally suited for the averaging purposes. A set of exact identities is presented in terms of these variables. An exact method for homogenization of an ellipsoidal inclusion in an unbounded finitely deformed homogeneous solid is presented, generalizing Eshelby's method for application to finite deformation problems. In terms of the nominal stress rate and the rate of change of the deformation gradient, measured relative to any arbitrary state, a general phase-transformation problem is con...

  17. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  18. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
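    The averaging scheme itself is the first-order recursion S_k = (1 − α)·S_{k−1} + α·P_k over successive periodograms P_k; small α corresponds to a long averaging time constant and hence lower estimator variance. A minimal NumPy sketch (the segment length, α and the white-noise check are illustrative choices, not the paper's setup):

```python
import numpy as np

def exp_avg_psd(x, seg_len, alpha):
    """Exponentially averaged periodogram estimate of the PSD.

    Successive length-`seg_len` periodograms P_k are combined as
        S_k = (1 - alpha) * S_{k-1} + alpha * P_k,
    i.e. a first-order low-pass filter applied across periodograms.
    """
    s = None
    for start in range(0, len(x) - seg_len + 1, seg_len):
        seg = x[start:start + seg_len]
        p = np.abs(np.fft.rfft(seg)) ** 2 / seg_len   # one segment's periodogram
        s = p if s is None else (1 - alpha) * s + alpha * p
    return s

# White noise of unit variance has a flat PSD, so the averaged estimate
# should fluctuate around a constant level of about 1.
rng = np.random.default_rng(0)
noise = rng.standard_normal(50_000)
psd = exp_avg_psd(noise, seg_len=256, alpha=0.05)
```

    The paper's result concerns the distribution of exactly this estimate: nearly χ²-like for short time constants, approaching Gaussian as the time constant grows.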

  19. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  20. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    Energy Technology Data Exchange (ETDEWEB)

    BEN-ZVI, ILAN, DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with an average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  1. Technological progress and average job matching quality

    OpenAIRE

    Centeno, Mário; Corrêa, Márcio V.

    2009-01-01

    Our objective is to study, in a labor market characterized by search frictions, the effect of technological progress on the average quality of job matches. For that, we use an extension of Mortensen and Pissarides (1998) and obtain as results that the effects of technological progress on the labor market depend upon the initial conditions of the economy. If the economy is totally characterized by the presence of low-quality job matches, an increase in technological progress is accompanied by ...

  2. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  3. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  4. Cumulative and Averaging Fission of Beliefs

    OpenAIRE

    Josang, Audun

    2007-01-01

    Belief fusion is the principle of combining separate beliefs or bodies of evidence originating from different sources. Depending on the situation to be modelled, different belief fusion methods can be applied. Cumulative and averaging belief fusion is defined for fusing opinions in subjective logic, and for fusing belief functions in general. The principle of fission is the opposite of fusion, namely to eliminate the contribution of a specific belief from an already fused belief, with the pur...

  5. ANTINOMY OF THE MODERN AVERAGE PROFESSIONAL EDUCATION

    Directory of Open Access Journals (Sweden)

    A. A. Listvin

    2017-01-01

    Full Text Available Introduction. The successful modernization of secondary professional education (SPE) is of key importance for the innovative development of the domestic economy, above all for creating high-technology workplaces and raising labour productivity. However, the reform of SPE, which has dragged on for many years, is accompanied by internal and external contradictions that slow the inflow of the skilled personnel the labour market needs. The aim of the present article is to draw attention to the accumulated problems and contradictions in the SPE system, which are largely caused by inconsistency in the rule-making and legislative activity of the Ministry of Education and Science of the Russian Federation and of the special-purpose committees on education of the State Duma of the Federal Assembly of the Russian Federation. Methods. The research is based on an analysis of normative and programme documents in the field of SPE and on the implementation of the competence-based approach in the practice of professional education institutions. Results and scientific novelty consist in identifying the obvious and hidden antinomies that now exist in the SPE system. The reasons for the delayed reaction of the personnel-training system to the transformations happening in the economy are disclosed: the mismatch between training programmes and employers' requirements; the unsatisfactory quality of graduate training; the falling prestige of working professions and specialties among young people; the broken ratio in the life-long education chain «a worker – a technician – an engineer»; the shrinking staff of masters of vocational training; employers' lack of interest in solving SPE problems jointly with the pedagogical community; the excessively fractional specialization of workers' professions in the available occupational classifications, etc. The problems outlined make it possible to begin a search

  6. Approach to improve compound recovery in a high-throughput Caco-2 permeability assay supported by liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Cai, Xianmei; Walker, Aaron; Cheng, Charles; Paiva, Anthony; Li, Ying; Kolb, Janet; Herbst, John; Shou, Wilson; Weller, Harold

    2012-08-01

    The Caco-2 cell culture system is widely employed as an in vitro model for predicting the intestinal absorption of test compounds in early drug discovery. Poor recovery is a commonly encountered issue in the Caco-2 assay, which can lead to difficulty in data interpretation and underestimation of the apparent permeability of affected compounds. In this study, we systematically investigated the potential sources of compound loss in our automated, high-throughput Caco-2 assay, sample storage, and analysis processes, and found nonspecific binding to various plastic surfaces to be the major cause of poor compound recovery. To minimize nonspecific binding, we implemented a simple and practical approach in our assay automation by preloading collection plates with organic solvent containing internal standard prior to transferring incubation samples. The implementation of this new method has been shown to significantly increase recovery for many compounds previously identified as having poor recovery in the Caco-2 permeability assay. With improved recovery, permeability results were obtained for many compounds that were previously not detected in the basolateral samples. In addition to the recovery improvement, this new approach also simplified sample preparation for liquid chromatography-tandem mass spectrometric analysis and therefore achieved time and cost savings for the bioanalyst. Copyright © 2012 Wiley Periodicals, Inc.

  7. Application of a novel metabolomic approach based on atmospheric pressure photoionization mass spectrometry using flow injection analysis for the study of Alzheimer's disease.

    Science.gov (United States)

    González-Domínguez, Raúl; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2015-01-01

    The use of atmospheric pressure photoionization is not widespread in metabolomics, despite its considerable potential for the simultaneous analysis of compounds with diverse polarities. This work considers the development of a novel analytical approach based on flow injection analysis and atmospheric pressure photoionization mass spectrometry for rapid metabolic screening of serum samples. Several experimental parameters were optimized, such as type of dopant, flow injection solvent, and their flows, given that a careful selection of these variables is mandatory for a comprehensive analysis of metabolites. Toluene and methanol were the most suitable dopant and flow injection solvent, respectively. Moreover, analysis in negative mode required higher solvent and dopant flows (100 µl min(-1) and 40 µl min(-1), respectively) compared to positive mode (50 µl min(-1) and 20 µl min(-1)). Then, the optimized approach was used to elucidate metabolic alterations associated with Alzheimer's disease. Thereby, results confirm the increase of diacylglycerols, ceramides, ceramide-1-phosphate and free fatty acids, indicating membrane destabilization processes, and reduction of fatty acid amides and several neurotransmitters related to impairments in neuronal transmission, among others. Therefore, it could be concluded that this metabolomic tool presents a great potential for analysis of biological samples, considering its high-throughput screening capability, fast analysis and comprehensive metabolite coverage. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Targeted Quantitation of Site-Specific Cysteine Oxidation in Endogenous Proteins Using a Differential Alkylation and Multiple Reaction Monitoring Mass Spectrometry Approach

    Science.gov (United States)

    Held, Jason M.; Danielson, Steven R.; Behring, Jessica B.; Atsriku, Christian; Britton, David J.; Puckett, Rachel L.; Schilling, Birgit; Campisi, Judith; Benz, Christopher C.; Gibson, Bradford W.

    2010-01-01

    Reactive oxygen species (ROS) are both physiological intermediates in cellular signaling and mediators of oxidative stress. The cysteine-specific redox-sensitivity of proteins can shed light on how ROS are regulated and function, but low sensitivity has limited quantification of the redox state of many fundamental cellular regulators in a cellular context. Here we describe a highly sensitive and reproducible oxidation analysis approach (OxMRM) that combines protein purification, differential alkylation with stable isotopes, and multiple reaction monitoring mass spectrometry that can be applied in a targeted manner to virtually any cysteine or protein. Using this approach, we quantified the site-specific cysteine oxidation status of endogenous p53 for the first time and found that Cys182 at the dimerization interface of the DNA binding domain is particularly susceptible to diamide oxidation intracellularly. OxMRM enables analysis of sulfinic and sulfonic acid oxidation levels, which we validate by assessing the oxidation of the catalytic Cys215 of protein tyrosine phosphatase-1B under numerous oxidant conditions. OxMRM also complements unbiased redox proteomics discovery studies as a verification tool through its high sensitivity, accuracy, precision, and throughput. PMID:20233844

  9. Development of a methodological approach for the characterization of bioaerosols in exhaust air from pig fattening farms with MALDI-TOF mass spectrometry.

    Science.gov (United States)

    Druckenmüller, Katharina; Gärtner, Andrea; Jäckel, Udo; Klug, Kerstin; Schiffels, Johannes; Günther, Klaus; Elbers, Gereon

    2017-08-01

    In this paper, we evaluated matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) as a cultivation-independent, routinely applicable approach to identify microbial fractions in bioaerosol emission samples. We developed a streamlined protocol in line with the German state-of-the-art impingement sampling guideline. Following isokinetic sampling, a fast and reliable pre-treatment methodology involving a series of cascade filtration steps was implemented, which produced fractions for spectrometric measurement devoid of interfering substances. We sampled the exhaust air from eight pig fattening farms around western Germany, which yielded two sets of samples for both method development and validation. For method development, in total 65 bacterial isolates were produced directly from the exhaust air samples, taxonomically classified by 16S rRNA-Gene sequencing, and subjected to MALDI-TOF analysis. In this way, we could assign fingerprint biomarkers to classified bacterial genera or even species to build up a preliminary reference database. For verification of the novel methodology and application of the reference database, we subjected the second set of exhaust air samples to the developed protocol. Here, 18 out of 21 bacterial species deposited in the database were successfully retrieved, including organisms classified in risk group 2, which might be used to evaluate the pathogenic potential of sampled exhaust air. Overall, this study pursues an entirely new approach to rapidly analyze airborne microbial fractions. Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.

  10. A Simple Approach for Obtaining High Resolution, High Sensitivity ¹H NMR Metabolite Spectra of Biofluids with Limited Mass Supply

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Jian Zhi; Rommereim, Donald N.; Wind, Robert A.; Minard, Kevin R.; Sears, Jesse A.

    2006-11-01

    A simple approach is reported that yields high resolution, high sensitivity ¹H NMR spectra of biofluids with limited mass supply. This is achieved by spinning a capillary sample tube containing a biofluid at the magic angle at a frequency of about 80Hz. A 2D pulse sequence called ¹H PASS is then used to produce a high-resolution ¹H NMR spectrum that is free from magnetic susceptibility induced line broadening. With this new approach a high resolution ¹H NMR spectrum of biofluids with a volume less than 1.0 µl can be easily achieved at a magnetic field strength as low as 7.05T. Furthermore, the methodology facilitates easy sample handling, i.e., the samples can be directly collected into inexpensive and disposable capillary tubes at the site of collection and subsequently used for NMR measurements. In addition, slow magic angle spinning improves magnetic field shimming and is especially suitable for high throughput investigations. In this paper first results are shown obtained in a magnetic field of 7.05T on urine samples collected from mice using a modified commercial NMR probe.

  11. Fast food, other food choices and body mass index in teenagers in the United Kingdom (ALSPAC): a structural equation modelling approach.

    Science.gov (United States)

    Fraser, L K; Edwards, K L; Cade, J E; Clarke, G P

    2011-10-01

    To assess the association between the consumption of fast food (FF) and the body mass index (BMI) of teenagers in a large UK birth cohort. A structural equation modelling (SEM) approach was chosen to allow direct statistical testing of a theoretical model. SEM is a combination of confirmatory factor and path analysis, which allows for the inclusion of latent (unmeasured) variables. This approach was used to build two models: the effect of FF outlet visits and food choices, and the effect of FF exposure on consumption and BMI. A total of 3620 participants had data for height and weight from the age-13 clinic and for the frequency of FF outlet visits, and so were included in these analyses. The SEM model of food choices showed that increased frequency of eating at FF outlets was positively associated with higher consumption of unhealthy foods (β=0.29) and negatively associated with the consumption of healthy foods (β=-1.02); teenagers who ate frequently at FF outlets consumed fewer healthy foods and were more likely to have higher BMI SDS than those who did not eat frequently at FF restaurants. Teenagers who were exposed to more takeaway foods at home ate more frequently at FF restaurants, and eating at FF restaurants was also associated with lower intakes of vegetables and raw fruit in this cohort.

  12. Average Precision: Good Guide or False Friend to Multimedia Search Effectiveness?

    NARCIS (Netherlands)

    Aly, Robin; Trieschnigg, Rudolf Berend; McGuinness, Kevin; O' Connor, Noel E.; de Jong, Franciska M.G.

    Approaches to multimedia search often evolve from existing approaches with strong average precision. However, work on search evaluation shows that average precision does not always capture effectiveness in terms of satisfying user needs because it ignores the diversity of search results. This paper

  13. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered as a single-stage unit circuit that can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  14. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust

  15. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  16. Average local ionization energy: A review.

    Science.gov (United States)

    Politzer, Peter; Murray, Jane S; Bulat, Felipe A

    2010-11-01

    The average local ionization energy I(r) is the energy necessary to remove an electron from the point r in the space of a system. Its lowest values reveal the locations of the least tightly-held electrons, and thus the favored sites for reaction with electrophiles or radicals. In this paper, we review the definition of I(r) and some of its key properties. Apart from its relevance to reactive behavior, I(r) has an important role in several fundamental areas, including atomic shell structure, electronegativity and local polarizability and hardness. All of these aspects of I(r) are discussed.
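    As a sketch of the standard definition (not reproduced in this record), the average local ionization energy is commonly written as the orbital-energy-weighted electron density, assuming the usual Hartree-Fock or Kohn-Sham orbital framework:

```latex
\bar{I}(\mathbf{r}) \;=\; \frac{\sum_i \rho_i(\mathbf{r})\,\lvert \varepsilon_i \rvert}{\rho(\mathbf{r})}
```

    where ρ_i(r) is the electron density of orbital i with orbital energy ε_i, and ρ(r) is the total electron density.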

  17. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
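    A minimal numerical sketch of the TAMSD for a simulated Brownian trajectory (function names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement of trajectory x at an integer lag."""
    d = x[lag:] - x[:-lag]
    return float(np.mean(d ** 2))

rng = np.random.default_rng(0)
dt, D, n = 0.01, 1.0, 100_000           # time step, diffusion coefficient, length
steps = rng.normal(0.0, np.sqrt(2 * D * dt), n)
x = np.cumsum(steps)                    # 1D Brownian trajectory

# For Brownian motion the ensemble mean of the TAMSD is 2*D*lag*dt; individual
# trajectories fluctuate around it -- those fluctuations are the paper's subject.
for lag in (1, 10, 100):
    print(lag, tamsd(x, lag))
```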

  18. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
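    A brute-force sanity check of the stated bound on two small connected triangle-free graphs, the 5- and 7-cycles (an illustrative verification, not the paper's proof):

```python
from itertools import combinations

def independence_number(n, edges):
    """Largest independent set size, by exhaustive search (small graphs only)."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                return k
    return 0

for n in (5, 7):  # odd cycles C5 and C7: connected, triangle-free, m = n
    edges = [(i, (i + 1) % n) for i in range(n)]
    m = len(edges)
    alpha = independence_number(n, edges)
    print(n, alpha, (4 * n - m - 1) / 7)
    assert alpha >= (4 * n - m - 1) / 7
```

    For C5 the bound (4·5−5−1)/7 = 2 is attained exactly.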

  19. The Effects of Cooperative Learning and Learner Control on High- and Average-Ability Students.

    Science.gov (United States)

    Hooper, Simon; And Others

    1993-01-01

    Describes a study that examined the effects of cooperative versus individual computer-based instruction on the performance of high- and average-ability fourth-grade students. Effects of learner and program control are investigated; student attitudes toward instructional content, learning in groups, and partners are discussed; and further research…

  20. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
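    A minimal sketch of trajectory (Polyak-Ruppert) averaging for a plain Robbins-Monro recursion, here estimating the mean of a distribution; the setup is illustrative and far simpler than the SAMC algorithms treated in the paper:

```python
import random

random.seed(0)
mu = 2.0                                 # target: root of h(theta) = theta - mu
theta = 0.0
trajectory_sum = 0.0
n = 20_000
for k in range(1, n + 1):
    x = random.gauss(mu, 1.0)            # noisy observation
    a_k = k ** -0.7                      # slowly decaying gain, a_k = k^-a with 1/2 < a < 1
    theta -= a_k * (theta - x)           # Robbins-Monro update
    trajectory_sum += theta
theta_bar = trajectory_sum / n           # trajectory-averaged estimator
print(theta, theta_bar)
```

    The averaged iterate theta_bar attains the optimal asymptotic variance even though the raw iterate theta keeps fluctuating at the scale of the gain.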

  1. Global atmospheric circulation statistics: Four year averages

    Science.gov (United States)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
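    For reference, the geostrophic construction mentioned above follows the standard relations on pressure surfaces (a sketch of the textbook formulae, not copied from the report), with Z the geopotential height, f the Coriolis parameter, and g the gravitational acceleration:

```latex
u_g = -\frac{g}{f}\,\frac{\partial Z}{\partial y}, \qquad
v_g = \frac{g}{f}\,\frac{\partial Z}{\partial x}
```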

  2. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
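    A minimal sketch of one common model-averaging scheme, smoothed AIC (Akaike) weights; the paper compares several schemes, and this particular one is shown only for illustration:

```python
import math

def aic_weights(aic_values):
    """Akaike weights: w_i proportional to exp(-delta_i / 2), delta_i = AIC_i - min AIC."""
    best = min(aic_values)
    raw = [math.exp(-(a - best) / 2.0) for a in aic_values]
    total = sum(raw)
    return [r / total for r in raw]

# hypothetical AIC scores for three candidate models
weights = aic_weights([102.3, 100.1, 105.8])
print(weights)
# a model-averaged prediction is then sum(w_i * prediction_i) over the candidates
```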

  3. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that, as the dimension increases, the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state. In the special case of random mixed states sampled from the measure induced by partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states for extracting quantum coherence as a resource in higher dimensions: the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with dimension. As an important application, we establish the typicality of the relative entropy of entanglement and the distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this class of mixed states.

  4. An approach to on-line electrospray mass spectrometric detection of polypeptide antibiotics of enramycin for high-speed counter-current chromatographic separation.

    Science.gov (United States)

    Inoue, Koichi; Hattori, Yasuko; Hino, Tomoaki; Oka, Hisao

    2010-04-06

    In the field of pharmaceutical and biomedical analysis of peptides, rapid on-line detection and identification methods are required for the discovery of new biologically active products. In this study, high-speed counter-current chromatography with electrospray mass spectrometry (HSCCC/ESI-MS) was developed for the on-line detection and purification of the polypeptide antibiotics enramycin-A and -B. The analytes were purified on an HSCCC model CCC-1000 (multi-layer coil planet centrifuge) with a volatile two-phase solvent system composed of n-butanol/hexane/0.05% aqueous trifluoroacetic acid solution (43/7/50, V/V/V), and detected on an LCMS-2010EV quadrupole mass spectrometer fitted with an ESI source in positive-ionization scan mode (m/z 100-2000). The HSCCC/ESI-MS peaks indicated that enramycin-A (major m/z 786 [M+3H](3+) and minor m/z 1179 [M+2H](2+)) and enramycin-B (major m/z 791 [M+3H](3+) and minor m/z 1185 [M+2H](2+)) were separated with a peak resolution of 2.9 from 15 mg of loaded enramycin powder. The amounts collected in the HSCCC peak fractions were 4.3 mg (enramycin-A) and 5.9 mg (enramycin-B), respectively. These purified substances were analyzed by LC/ESI-MS in positive scan mode. Based on the LC/ESI-MS chromatograms and spectra of the fractions, enramycin-A and -B were estimated to be over 95% pure. The overall results indicate that this HSCCC/ESI-MS approach is a powerful technique for the purification and identification of bioactive peptides. Copyright 2009 Elsevier B.V. All rights reserved.
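    The reported charge states can be cross-checked with a small calculation: the neutral mass inferred from the [M+3H]3+ and the [M+2H]2+ ion of each compound should agree (here only to within ~2 Da, since the listed m/z values are rounded to unit resolution):

```python
PROTON = 1.00728  # proton mass, Da

def neutral_mass(mz, z):
    """Neutral mass implied by an [M+zH]z+ ion at the given m/z."""
    return z * mz - z * PROTON

for name, mz3, mz2 in [("enramycin-A", 786, 1179), ("enramycin-B", 791, 1185)]:
    m_from_3 = neutral_mass(mz3, 3)
    m_from_2 = neutral_mass(mz2, 2)
    print(name, round(m_from_3, 2), round(m_from_2, 2))
    assert abs(m_from_3 - m_from_2) < 2.5  # consistent given unit-resolution rounding
```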

  5. Software-aided approach to investigate peptide structure and metabolic susceptibility of amide bonds in peptide drugs based on high resolution mass spectrometry.

    Directory of Open Access Journals (Sweden)

    Tatiana Radchenko

    Interest in using peptide molecules as therapeutic agents due to high selectivity and efficacy is increasing within the pharmaceutical industry. However, most peptide-derived drugs cannot be administered orally because of low bioavailability and instability in the gastrointestinal tract due to protease activity. Therefore, structural modifications of peptides are required to improve their stability. For this purpose, several in-silico software tools have been developed, such as PeptideCutter or PoPS, which aim to predict peptide cleavage sites for different proteases. Moreover, several databases exist where this information is collected and stored from public sources, such as the MEROPS and ExPASy ENZYME databases. These tools can help design a peptide drug with increased stability against proteolysis, though they are limited to natural amino acids or cannot process cyclic peptides, for example. We worked to develop a new methodology to analyze peptide structure and amide bond metabolic stability based on the peptide structure (linear/cyclic, natural/unnatural amino acids). This approach used liquid chromatography / high-resolution mass spectrometry to obtain the analytical data from in vitro incubations. We collected experimental data for a set (linear/cyclic, natural/unnatural amino acids) of fourteen peptide drugs and four substrate peptides incubated with different proteolytic media: trypsin, chymotrypsin, pepsin, pancreatic elastase, dipeptidyl peptidase-4 and neprilysin. Mass spectrometry data was analyzed to find metabolites and determine their structures; all the results were then stored in a chemically aware manner, which allows us to compute the peptide bond susceptibility by using a frequency analysis of the metabolically labile bonds. In total 132 metabolites were found from the various in vitro conditions tested, resulting in 77 distinct cleavage sites. The most frequently observed cleavage sites agreed with those reported in the literature. The

  6. Software-aided approach to investigate peptide structure and metabolic susceptibility of amide bonds in peptide drugs based on high resolution mass spectrometry

    Science.gov (United States)

    Fontaine, Fabien; Morettoni, Luca; Zamora, Ismael

    2017-01-01

    Interest in using peptide molecules as therapeutic agents due to high selectivity and efficacy is increasing within the pharmaceutical industry. However, most peptide-derived drugs cannot be administered orally because of low bioavailability and instability in the gastrointestinal tract due to protease activity. Therefore, structural modifications of peptides are required to improve their stability. For this purpose, several in-silico software tools have been developed, such as PeptideCutter or PoPS, which aim to predict peptide cleavage sites for different proteases. Moreover, several databases exist where this information is collected and stored from public sources, such as the MEROPS and ExPASy ENZYME databases. These tools can help design a peptide drug with increased stability against proteolysis, though they are limited to natural amino acids or cannot process cyclic peptides, for example. We worked to develop a new methodology to analyze peptide structure and amide bond metabolic stability based on the peptide structure (linear/cyclic, natural/unnatural amino acids). This approach used liquid chromatography / high-resolution mass spectrometry to obtain the analytical data from in vitro incubations. We collected experimental data for a set (linear/cyclic, natural/unnatural amino acids) of fourteen peptide drugs and four substrate peptides incubated with different proteolytic media: trypsin, chymotrypsin, pepsin, pancreatic elastase, dipeptidyl peptidase-4 and neprilysin. Mass spectrometry data was analyzed to find metabolites and determine their structures; all the results were then stored in a chemically aware manner, which allows us to compute the peptide bond susceptibility by using a frequency analysis of the metabolically labile bonds. In total 132 metabolites were found from the various in vitro conditions tested, resulting in 77 distinct cleavage sites. The most frequently observed cleavage sites agreed with those reported in the literature. The main advantages of
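    The frequency analysis of labile bonds described above can be sketched as a simple tally over observed cleavage events; the peptide names and positions below are hypothetical stand-ins, not data from the study:

```python
from collections import Counter

# hypothetical observations: (peptide, cleavage position) pairs recovered
# from metabolite identification across in vitro incubations
observed_cleavages = [
    ("peptide-1", 3), ("peptide-1", 3), ("peptide-1", 7),
    ("peptide-2", 2), ("peptide-2", 2), ("peptide-2", 2),
]

site_counts = Counter(observed_cleavages)
for (peptide, position), count in site_counts.most_common():
    total = sum(c for (p, _), c in site_counts.items() if p == peptide)
    print(f"{peptide} bond {position}: {count}/{total} of observed cleavages")
```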

  7. Longitudinal associations between body mass index, physical activity, and healthy dietary behaviors in adults: A parallel latent growth curve modeling approach.

    Directory of Open Access Journals (Sweden)

    Youngdeok Kim

    Physical activity (PA) and healthy dietary behaviors (HDB) are two well-documented lifestyle factors influencing body mass index (BMI). This study examined 7-year longitudinal associations between changes in PA, HDB, and BMI among adults using parallel latent growth curve modeling (LGCM). We used prospective cohort data collected by a private company (SimplyWell LLC, Omaha, NE, USA) implementing a workplace health screening program. Data from a total of 2,579 adults who provided valid BMI, PA, and HDB information for at least 5 out of 7 follow-up years from the time they entered the program were analyzed. PA and HDB were subjectively measured during an annual online health survey. Height and weight measured during an annual onsite health screening were used to calculate BMI (kg/m²). The parallel LGCMs, stratified by gender and baseline weight status (normal, overweight, or obese), were fitted to examine the longitudinal associations of changes in PA and HDB with change in BMI over the years. On average, BMI gradually increased over the years, at rates ranging from 0.06 to 0.20 kg/m² per year, with larger increases observed among those of normal baseline weight status across genders. The increases in PA and HDB were independently associated with a smaller increase in BMI for obese males (b = -1.70 and -1.98, respectively), overweight females (b = -1.85 and -2.46, respectively), and obese females (b = -2.78 and -3.08, respectively). However, no significant associations of baseline PA and HDB with changes in BMI were observed. Our study suggests that gradual increases in PA and HDB are independently associated with smaller increases in BMI in overweight and obese adults, but not in normal-weight individuals. Further study is warranted to address factors that check increases in BMI in normal-weight adults.

  8. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
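    For context, the underlying single-scattering kinematics in the electron rest frame follow the standard Compton relation (a textbook reminder, not taken from the report), where primes denote ERF photon energies in units of the electron rest energy:

```latex
\frac{1}{\alpha'_s} - \frac{1}{\alpha'} \;=\; 1 - \cos\theta
```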

  9. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with the available experimental data than has been achieved with steady computation.

  10. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M.sub.56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.

  11. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI better represents the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition.
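    A minimal sketch of the AGDI construction from a sequence of binary silhouettes; the array shapes and random stand-in data are illustrative, not from the paper:

```python
import numpy as np

def agdi(silhouettes):
    """Average gait differential image: mean absolute difference of adjacent frames."""
    frames = np.asarray(silhouettes, dtype=float)  # shape (T, H, W), values in {0, 1}
    diffs = np.abs(frames[1:] - frames[:-1])       # silhouette differences
    return diffs.mean(axis=0)                      # accumulate and normalize

rng = np.random.default_rng(0)
seq = rng.integers(0, 2, size=(20, 64, 44))        # stand-in for aligned silhouettes
feature = agdi(seq)
print(feature.shape, feature.min(), feature.max())
```

    In practice the feature image would then be kept as a matrix for 2DPCA (or vectorized for PCA) before classification.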

  12. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  13. Criticality evaluation of BWR MOX fuel transport packages using average Pu content

    International Nuclear Information System (INIS)

    Mattera, C.; Martinotti, B.

    2004-01-01

    Currently in France, criticality studies in transport configurations for Boiling Water Reactor Mixed Oxide fuel assemblies are based on the conservative hypothesis that all rods (Mixed Oxide (Uranium and Plutonium), Uranium Oxide, and Uranium-Gadolinium Oxide rods) are Mixed Oxide rods with the same Plutonium content, corresponding to the maximum value. In that way, the real heterogeneous mapping of the assembly is masked and covered by a homogeneous assembly enriched at the maximum Plutonium content. As this calculation hypothesis is extremely conservative, COGEMA LOGISTICS has studied a new calculation method based on the average Plutonium content in the criticality studies. The use of the average Plutonium content instead of the real Plutonium-content profiles yields a higher reactivity value, which makes the method globally conservative. This method can be applied to all complete Boiling Water Reactor Mixed Oxide fuel assemblies of type 8 x 8, 9 x 9 and 10 x 10 whose Plutonium content by mass does not exceed 15%; its advantages are discussed in our approach. With this new method, for the same package reactivity, the Pu content allowed in the package design approval can be higher. The COGEMA LOGISTICS new method allows, at the design stage, optimisation of the basket, materials or geometry for a higher payload, while keeping the same reactivity.

  14. Quantum black hole wave packet: Average area entropy and temperature dependent width

    Directory of Open Access Journals (Sweden)

    Aharon Davidson

    2014-09-01

    A quantum Schwarzschild black hole is described, at the mini super-spacetime level, by a non-singular wave packet composed of plane-wave eigenstates of the momentum Dirac-conjugate to the mass operator. The entropy of the mass spectrum then acquires independent contributions from the average mass and from the width. Hence, Bekenstein's area entropy is formulated using the ⟨mass²⟩ average, leaving the ⟨mass⟩ average to set the Hawking temperature. The width function peaks at the Planck scale for an elementary (zero entropy, zero free energy) micro black hole of finite rms size, and decreases Doppler-like towards the classical limit.

  15. Sun-to-Earth simulations of geo-effective Coronal Mass Ejections with EUHFORIA: a heliospheric-magnetospheric model chain approach

    Science.gov (United States)

    Scolini, C.; Verbeke, C.; Gopalswamy, N.; Wijsen, N.; Poedts, S.; Mierla, M.; Rodriguez, L.; Pomoell, J.; Cramer, W. D.; Raeder, J.

    2017-12-01

    Coronal Mass Ejections (CMEs) and their interplanetary counterparts are considered to be the major space weather drivers. Accurate modelling of their onset and propagation up to 1 AU represents a key issue for more reliable space weather forecasts, and predictions about their actual geo-effectiveness can only be performed by coupling global heliospheric models to 3D models describing the terrestrial environment, e.g. magnetospheric and ionospheric codes in the first place. In this work we perform a comprehensive Sun-to-Earth analysis of the July 12, 2012 CME with the aim of testing the space weather predictive capabilities of the newly developed EUHFORIA heliospheric model integrated with the Gibson-Low (GL) flux rope model. In order to achieve this goal, we make use of a model chain approach, using EUHFORIA outputs at Earth as input parameters for the OpenGGCM magnetospheric model. We first reconstruct the CME kinematic parameters by means of single- and multi-spacecraft reconstruction methods based on coronagraphic and heliospheric CME observations. The magnetic field-related parameters of the flux rope are estimated from imaging observations of the photospheric and low coronal source regions of the eruption. We then simulate the event with EUHFORIA, testing the effect of the different CME kinematic input parameters on simulation results at L1. We compare simulation outputs with in-situ measurements of the Interplanetary CME and use them as input for the OpenGGCM model, so as to investigate the magnetospheric response to solar perturbations. From the simulation outputs we extract global geomagnetic activity indices and compare them with actual data records and with results obtained from empirical relations. Finally, we discuss the forecasting capabilities of this kind of approach and its future improvements.

  16. Dual-domain mass-transfer parameters from electrical hysteresis: theory and analytical approach applied to laboratory, synthetic streambed, and groundwater experiments

    Science.gov (United States)

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Harvey, Judson W.; Lane, John W.

    2014-01-01

    Models of dual-domain mass transfer (DDMT) are used to explain anomalous aquifer transport behavior such as the slow release of contamination and solute tracer tailing. Traditional tracer experiments to characterize DDMT are performed at the flow path scale (meters), which inherently incorporates heterogeneous exchange processes; hence, estimated “effective” parameters are sensitive to experimental design (i.e., duration and injection velocity). Recently, electrical geophysical methods have been used to aid in the inference of DDMT parameters because, unlike traditional fluid sampling, electrical methods can directly sense less-mobile solute dynamics and can target specific points along subsurface flow paths. Here we propose an analytical framework for graphical parameter inference based on a simple petrophysical model explaining the hysteretic relation between measurements of bulk and fluid conductivity arising in the presence of DDMT at the local scale. Analysis is graphical and involves visual inspection of hysteresis patterns to (1) determine the size of paired mobile and less-mobile porosities and (2) identify the exchange rate coefficient through simple curve fitting. We demonstrate the approach using laboratory column experimental data, synthetic streambed experimental data, and field tracer-test data. Results from the analytical approach compare favorably with results from calibration of numerical models and also independent measurements of mobile and less-mobile porosity. We show that localized electrical hysteresis patterns resulting from diffusive exchange are independent of injection velocity, indicating that repeatable parameters can be extracted under varied experimental designs, and these parameters represent the true intrinsic properties of specific volumes of porous media of aquifers and hyporheic zones.
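    The less-mobile-domain lag that produces the hysteresis can be illustrated with the usual first-order mass-transfer closure, dC_im/dt = α (C_m − C_im), integrated here by forward Euler for a square solute pulse; all parameter values are illustrative, not from the study:

```python
# first-order dual-domain mass transfer: less-mobile concentration chases mobile
alpha = 0.05          # exchange-rate coefficient (1/time), illustrative
dt = 1.0
n_steps = 200

c_mobile = [1.0 if t < 100 else 0.0 for t in range(n_steps)]  # square pulse
c_immobile = [0.0]
for t in range(1, n_steps):
    c_prev = c_immobile[-1]
    c_immobile.append(c_prev + dt * alpha * (c_mobile[t - 1] - c_prev))

# during flushing the less-mobile domain releases solute slowly (tailing),
# so c_immobile > c_mobile -- the source of the bulk-vs-fluid-conductivity hysteresis
print(c_immobile[99], c_immobile[150])
```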

  17. Targeted Peptide Measurements in Biology and Medicine: Best Practices for Mass Spectrometry-based Assay Development Using a Fit-for-Purpose Approach

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Steven A.; Abbateillo, Susan E.; Ackermann, Bradley L.; Borchers, Christoph H.; Domon, Bruno; Deutsch, Eric W.; Grant, Russel; Hoofnagle, Andrew N.; Huttenhain, Ruth; Koomen, John M.; Liebler, Daniel; Liu, Tao; MacLean, Brendan; Mani, DR; Mansfield, Elizabeth; Neubert, Hendrik; Paulovich, Amanda G.; Reiter, Lukas; Vitek, Olga; Aebersold, Ruedi; Anderson, Leigh N.; Bethem, Robert; Blonder, Josip; Boja, Emily; Botelho, Julianne; Boyne, Michael; Bradshaw, Ralph A.; Burlingame, Alma S.; Chan, Daniel W.; Keshishian, Hasmik; Kuhn, Eric; Kingsinger, Christopher R.; Lee, Jerry S.; Lee, Sang-Won; Moritz, Robert L.; Oses-Prieto, Juan; Rifai, Nader; Ritchie, James E.; Rodriguez, Henry; Srinivas, Pothur R.; Townsend, Reid; Van Eyk , Jennifer; Whiteley, Gordon; Wiita, Arun; Weintraub, Susan

    2014-01-14

    Adoption of targeted mass spectrometry (MS) approaches such as multiple reaction monitoring (MRM) to study biological and biomedical questions is well underway in the proteomics community. Successful application depends on the ability to generate reliable assays that uniquely and confidently identify target peptides in a sample. Unfortunately, there is a wide range of criteria being applied to say that an assay has been successfully developed. There is no consensus on what criteria are acceptable and little understanding of the impact of variable criteria on the quality of the results generated. Publications describing targeted MS assays for peptides frequently do not contain sufficient information for readers to establish confidence that the tests work as intended or to be able to apply the tests described in their own labs. Guidance must be developed so that targeted MS assays with established performance can be widely distributed and applied by many labs worldwide. To begin to address the problems and their solutions, a workshop was held at the National Institutes of Health with representatives from the multiple communities developing and employing targeted MS assays. Participants discussed the analytical goals of their experiments and the experimental evidence needed to establish that the assays they develop work as intended and are achieving the required levels of performance. Using this "fit-for-purpose" approach, the group defined three tiers of assays distinguished by their performance and extent of analytical characterization. Computational and statistical tools useful for the analysis of targeted MS results were described. Participants also detailed the information that authors need to provide in their manuscripts to enable reviewers and readers to clearly understand what procedures were performed and to evaluate the reliability of the peptide or protein quantification measurements reported. This paper presents a summary of the meeting and

  18. Bottom-up approach for the reaction of xenobiotics and their metabolites with model substances for natural organic matter by electrochemistry-mass spectrometry (EC-MS).

    Science.gov (United States)

    Chen, Lei; Hofmann, Diana; Klumpp, Erwin; Xiang, Xinyi; Chen, Yingxu; Küppers, Stephan

    2012-11-01

    Risk assessment of xenobiotics requires a comprehensive understanding of their transformation in the environment. As most transformation processes involve a redox reaction or a hydrolysis as the first step, we applied an approach that uses an electrochemical cell to investigate model "redox" reactions in aqueous solutions for environmental processes. We investigated the degradation of a variety of xenobiotics, from polar to nonpolar, and analyzed their degradation products by on-line coupling of electrochemistry with mass spectrometry (EC-MS). Furthermore, we evaluated possible binding reactions, with regard to the generation of non-extractable residues, with several model substances (catechol, phthalic acid, γ-L-glutamyl-L-cysteinyl-glycine (GSH) and L-histidine) deduced from a natural organic matter (NOM) structure model, and identified possible binding sites. Whereas such investigations have typically been carried out in soil/water systems, we used, to our knowledge for the first time, a bottom-up approach, starting from the chemicals of interest and different model substances for natural organic matter, to evaluate chemical binding mechanisms (or processes) in the EC-MS under redox conditions. Under oxidative conditions, binding of the xenobiotics with catechol, GSH and histidine was found, but no reactions with the model compound phthalic acid were observed. In general, no chemical binding has yet been found under reductive conditions. In some cases (e.g. benzo[a]anthracene), only the oxidation product underwent a binding reaction, whereas the xenobiotic itself did not react. EC-MS is a promising, fast and simple screening method to investigate the environmental behavior of xenobiotics and to evaluate the potential risks of newly synthesized substances. Copyright © 2012 Elsevier Ltd. All rights reserved.
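
    Identifying oxidation products and covalent adducts in EC-MS spectra comes down to matching observed m/z values against the exact masses of candidate structures within a tight tolerance. A hedged sketch of that matching step, with hypothetical candidates (the benzo[a]anthracene-like base mass is the monoisotopic mass of C18H12; the adduct composition is illustrative, not the paper's identification):

```python
# Illustrative sketch (not the cited paper's workflow): screening observed
# EC-MS peaks for oxidation products and covalent adducts by matching [M+H]+
# m/z values against candidate neutral monoisotopic masses within a ppm
# tolerance.

PROTON = 1.007276  # mass of a proton, Da

def match_peaks(observed_mz, candidates, tol_ppm=10.0):
    """Return (m/z, name) pairs where an observed [M+H]+ peak matches a candidate."""
    hits = []
    for mz in observed_mz:
        for name, neutral_mass in candidates.items():
            expected = neutral_mass + PROTON
            ppm = abs(mz - expected) / expected * 1e6
            if ppm <= tol_ppm:
                hits.append((mz, name))
    return hits

# Hypothetical candidates: name -> neutral monoisotopic mass (Da)
candidates = {
    "xenobiotic": 228.0939,                 # C18H12, benzo[a]anthracene-like
    "xenobiotic+O": 244.0888,               # oxidation product (+15.9949)
    "xenobiotic+O+catechol-H2": 352.1099,   # putative catechol adduct
}

observed = [229.1012, 353.1172]
print(match_peaks(observed, candidates))
```

    The finding that only the oxidation product of benzo[a]anthracene binds would show up here as a hit on the adduct of the oxidized mass but none on an adduct of the parent mass.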

  19. Evaluation of the Effect of Physical Activity Programs on Self-Esteem and Body Mass Index of Overweight Adolescent Girls, based on Health Belief Model with School-Centered Approach

    Directory of Open Access Journals (Sweden)

    Leili Rabiei

    2018-02-01

    Full Text Available Background: Obesity in adolescents leads to physical and mental complications. Exercise is one of the main components of weight control programs. This study aimed to evaluate the effect of physical activity programs on the self-esteem and Body Mass Index (BMI) of overweight adolescent girls. Materials and Methods: This was a semi-experimental study. The subjects were 140 second-grade high school girls from two high schools in the 5th district of Isfahan. Data collection instruments included a tape measure, a weighing scale, a questionnaire on background and personal information, a questionnaire designed on the basis of the Health Belief Model, a weekly physical activity self-report form, an adolescent weekly food record form, a parents' nutritional performance questionnaire, a questionnaire on teachers' attitudes toward adolescent nutrition, and Coopersmith's standard self-esteem questionnaire. An education program based on the Health Belief Model for improving nutritional status, consistent with the model constructs, was conducted over six 60-minute sessions, with emphasis on diet to control weight in overweight and at-risk adolescents. Questionnaire results were compared immediately after and two months after the intervention. Results: The average scores for the model constructs and for students' self-esteem showed no significant difference between the two groups at baseline, but immediately after and 2 months after the intervention, the mean construct scores were significantly higher in the intervention group than in the control group. There was a significant difference in construct scores across time points in the experimental group, and a significant difference in BMI scores was also seen across time points in the experimental group. Conclusion: The findings of this study showed that a school-based approach to physical activity training increases knowledge, perceived susceptibility, perceived severity, and perceived benefits, and ultimately increases self-esteem and physical activity in students.
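
    The outcome measure here, Body Mass Index, is simply weight in kilograms divided by height in meters squared. A minimal sketch with made-up values (note that for adolescents, overweight status is properly defined by age- and sex-specific BMI percentiles, not the adult cutoff shown in the comment):

```python
# Minimal sketch (illustrative values, not study data): computing Body Mass
# Index. For adolescents, classification uses age- and sex-specific BMI
# percentile charts rather than fixed adult cutoffs.

def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

weight, height = 72.0, 1.60
print(f"BMI = {bmi(weight, height):.1f}")  # 72 / 1.60**2 = 28.1
```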

  20. Metabolomics study on the toxicity of Annona squamosa by ultraperformance liquid-chromatography high-definition mass spectrometry coupled with pattern recognition approach and metabolic pathways analysis.

    Science.gov (United States)

    Miao, Yun-Jie; Shi, Ye-Ye; Li, Fu-Qiang; Shan, Chen-Xiao; Chen, Yong; Chen, Jian-Wei; Li, Xiang

    2016-05-26

    Annona squamosa Linn (Annonaceae) is a commonly used and effective traditional Chinese medicine (TCM), especially in southern China. The seeds of Annona squamosa Linn (SAS) have been used as a folk remedy to treat "malignant sores" (cancer) in southern China, but they are also highly toxic to humans. The aim was to discover potential biomarkers of SAS-induced toxicity in mice. We performed metabonomics studies on the toxicity of SAS by ultraperformance liquid-chromatography high-definition mass spectrometry coupled with a pattern recognition approach and metabolic pathway analysis. Significant differences in metabolic profiles and changes in metabolite biomarkers between the control group and the SAS group were observed. Eleven positive ions and nine negative ions (P<0.05) were identified based on UFLC-QTOF-HDMS. The metabolic pathways of the SAS group are discussed according to the identified endogenous metabolites, and eight metabolic pathways are identified using the Kyoto Encyclopedia of Genes and Genomes (KEGG). The present study demonstrates that metabonomics analysis could greatly facilitate and provide useful information for a further comprehensive understanding of the pharmacological activity and potential toxicity of SAS as it is developed into a new anti-tumor medicine. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
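
    Selecting candidate biomarker ions at P<0.05, as done here, amounts to a per-ion hypothesis test on intensity differences between the control and treated groups. A hedged sketch using an exact permutation test on synthetic intensities (the ion names and values are invented; the paper's own statistics ran on UFLC-QTOF-HDMS data):

```python
# Hedged sketch (synthetic intensities, not the paper's data): selecting
# candidate biomarker ions with an exact two-sided permutation test on the
# difference of group means, keeping ions with P < 0.05.

import itertools
from statistics import mean

def perm_test(a, b):
    """Exact two-sided permutation test on |mean(a) - mean(b)|."""
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    n, count, total = len(a), 0, 0
    for idx in itertools.combinations(range(len(pooled)), n):
        group_a = [pooled[i] for i in idx]
        group_b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        total += 1
        # Count relabelings at least as extreme as the observed split.
        if abs(mean(group_a) - mean(group_b)) >= observed - 1e-12:
            count += 1
    return count / total

# Ion -> (control intensities, treated intensities); values are made up.
ions = {
    "m/z 180.07": ([10.1, 9.8, 10.3, 10.0], [14.9, 15.2, 14.7, 15.1]),  # shifted
    "m/z 204.09": ([8.0, 8.4, 7.9, 8.2], [8.1, 8.3, 8.0, 8.2]),          # unchanged
}

significant = [name for name, (c, t) in ions.items() if perm_test(c, t) < 0.05]
print(significant)
```

    With realistic sample sizes a t-test or a Monte Carlo permutation scheme replaces this exact enumeration, and multiple-testing correction becomes important once hundreds of ions are screened.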