WorldWideScience

Sample records for processing methods quantify

  1. Quantifying Quantum-Mechanical Processes.

    Science.gov (United States)

    Hsieh, Jen-Hsiang; Chen, Shih-Hsuan; Li, Che-Ming

    2017-10-19

    The act of describing how a physical process changes a system is the basis for understanding observed phenomena. For quantum-mechanical processes in particular, the effect of processes on quantum states profoundly advances our knowledge of the natural world, from understanding counter-intuitive concepts to the development of wholly quantum-mechanical technology. Here, we show that quantum-mechanical processes can be quantified using a generic classical-process model through which any classical strategies of mimicry can be ruled out. We demonstrate the success of this formalism using fundamental processes postulated in quantum mechanics, the dynamics of open quantum systems, quantum-information processing, the fusion of entangled photon pairs, and the energy transfer in a photosynthetic pigment-protein complex. Since our framework does not depend on any specifics of the states being processed, it reveals a new class of correlations in the hierarchy between entanglement and Einstein-Podolsky-Rosen steering and paves the way for the elaboration of a generic method for quantifying physical processes.

  2. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of engineering systems. Model form uncertainty, which inherently exists in selecting the best approximation from a model set, cannot be ignored, especially when the predictions by competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences of experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process.

  3. Quantifying the measurement uncertainty of results from environmental analytical methods.

    Science.gov (United States)

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so, due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.

  4. New Methods for Processing and Quantifying VO2 Kinetics to Steady State: VO2 Onset Kinetics

    Directory of Open Access Journals (Sweden)

    Craig R. McNulty

    2017-09-01

    Current methods of oxygen uptake (VO2) kinetics data handling may be too simplistic for the complex physiology involved in the underlying physiological processes. Therefore, the aim of this study was to quantify the VO2 kinetics to steady state across the full range of sub-ventilatory threshold work rates, with a particular focus on the VO2 onset kinetics. Ten healthy, moderately trained males participated in five bouts of cycling. Each bout involved 10 min at a percentage of the subject's ventilation threshold (30, 45, 60, 75, 90%) from unloaded cycling. The VO2 kinetics was quantified using the conventional mono-exponential time constant (tau, τ), as well as the new methods for VO2 onset kinetics. Compared to linear modeling, non-linear modeling caused a deterioration of goodness of fit (main effect, p < 0.001) across all exercise intensities. Remainder kinetics were also improved using a modified application of the mono-exponential model (main effect, p < 0.001). Interestingly, the slope from the linear regression of the onset kinetics data is similar across all subjects and absolute exercise intensities, and thereby independent of subject fitness and τ. This could indicate that there are no functional limitations between subjects during this onset phase, with limitations occurring for the latter transition to steady state. Finally, the continuing use of mono-exponential modeling could mask important underlying physiology of more instantaneous VO2 responses to steady state. Consequently, further research should be conducted on this new approach to VO2 onset kinetics.
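
    The mono-exponential model referenced in this abstract is the conventional description of VO2 on-kinetics; the sketch below shows how such a fit is typically obtained with SciPy. The synthetic data, baseline, amplitude and time constant are illustrative assumptions, not values from the study.

```python
# Minimal sketch: fitting the conventional mono-exponential VO2 on-kinetics model
# VO2(t) = baseline + amplitude * (1 - exp(-t / tau)). All numbers below are
# illustrative assumptions, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, baseline, amplitude, tau):
    """Mono-exponential rise to steady state with time constant tau."""
    return baseline + amplitude * (1.0 - np.exp(-t / tau))

t = np.arange(0, 600, 5.0)                      # seconds of breath-by-breath sampling
rng = np.random.default_rng(0)
vo2 = mono_exponential(t, 0.9, 1.2, 35.0) + rng.normal(0, 0.05, t.size)   # L/min

popt, _ = curve_fit(mono_exponential, t, vo2, p0=(1.0, 1.0, 30.0))
print(f"estimated tau = {popt[2]:.1f} s")
```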

  5. A robust nonparametric method for quantifying undetected extinctions.

    Science.gov (United States)

    Chisholm, Ryan A; Giam, Xingli; Sadanandan, Keren R; Fung, Tak; Rheindt, Frank E

    2016-06-01

    How many species have gone extinct in modern times before being described by science? To answer this question, and thereby get a full assessment of humanity's impact on biodiversity, statistical methods that quantify undetected extinctions are required. Such methods have been developed recently, but they are limited by their reliance on parametric assumptions; specifically, they assume the pools of extant and undetected species decay exponentially, whereas real detection rates vary temporally with survey effort and real extinction rates vary with the waxing and waning of threatening processes. We devised a new, nonparametric method for estimating undetected extinctions. As inputs, the method requires only the first and last date at which each species in an ensemble was recorded. As outputs, the method provides estimates of the proportion of species that have gone extinct, detected, or undetected and, in the special case where the number of undetected extant species in the present day is assumed close to zero, of the absolute number of undetected extinct species. The main assumption of the method is that the per-species extinction rate is independent of whether a species has been detected or not. We applied the method to the resident native bird fauna of Singapore. Of 195 recorded species, 58 (29.7%) have gone extinct in the last 200 years. Our method projected that an additional 9.6 species (95% CI 3.4, 19.8) have gone extinct without first being recorded, implying a true extinction rate of 33.0% (95% CI 31.0%, 36.2%). We provide R code for implementing our method. Because our method does not depend on strong assumptions, we expect it to be broadly useful for quantifying undetected extinctions. © 2016 Society for Conservation Biology.

  6. Method for quantifying defects in materials by microdensitometry of radiography films

    International Nuclear Information System (INIS)

    Deleuze, M.

    1981-09-01

    This report describes the principle of a method for quantifying defects in metal parts obtained by casting. It is based on the relation expressing the optical density of the image according to the thickness of the material. The three techniques used, namely gammagraphy, microdensitometry and data processing, are presented, as are the equipment used and the methodology adopted.

  7. [Digitalization of histological images as a method of quantifying the demyelinating process].

    Science.gov (United States)

    Stojiljković, G; Tasić, M; Budakov, B

    1996-01-01

    The basic aim of this paper was to test the hypothesis that after head trauma the brain tissue loses the myelin membrane which surrounds the axon, and to determine whether this loss, if established, can be quantified, that is, whether it is possible to determine the degree of disintegration. One of the aims was to examine the method itself. The gathered results show that both the hypothesis and the aims were justified. It has been established that the diffuse axonal lesion in the examined samples is reflected in a loss of the axon's myelin membrane. The loss was 50% greater in the test group relative to the control group. To digitize the histologic pictures we used a Laser Scanner Densitometry Station and software by Biomed. With regard to medical jurisprudence, laser scanner densitometry offers more relevant data in apparently unclear cases and in sudden deaths after head injuries. Application of this method and further investigations will be directed to further attempts to clarify the connections between the mechanism of injury and the degree of biologic response of the brain tissue.

  8. Methods to Quantify Nickel in Soils and Plant Tissues

    Directory of Open Access Journals (Sweden)

    Bruna Wurr Rodak

    2015-06-01

    In comparison with other micronutrients, the levels of nickel (Ni) available in soils and plant tissues are very low, making quantification very difficult. The objective of this paper is to present optimized determination methods for Ni availability in soils by extractants and for total content in plant tissues for routine commercial laboratory analyses. Samples of natural and agricultural soils were processed and analyzed by Mehlich-1 extraction and by DTPA. To quantify Ni in the plant tissues, samples were digested with nitric acid in a closed system in a microwave oven. The measurement was performed by inductively coupled plasma optical emission spectrometry (ICP-OES). There was a positive and significant correlation between the levels of available Ni in the soils subjected to Mehlich-1 and DTPA extraction, while for plant tissue samples the Ni levels recovered were high and similar to the reference materials. The availability of Ni in some of the natural soil and plant tissue samples was lower than the limits of quantification. Concentrations of this micronutrient were higher in the soil samples in which Ni had been applied. Nickel concentration differed among the plant parts analyzed, with the highest levels in the grains of soybean. The grain concentrations, in comparison with the shoot and leaf concentrations, were better correlated with the soil available levels for both extractants. The methods described in this article were efficient in quantifying Ni and can be used for routine laboratory analysis of soils and plant tissues.

  9. User guide : process for quantifying the benefits of research.

    Science.gov (United States)

    2017-07-01

    The Minnesota Department of Transportation Research Services has adopted a process for quantifying the monetary benefits of research projects, such as the dollar value of particular ideas when implemented across the state's transportation system. T...

  10. Detector and quantifier of ionizing x-radiation by indirect method

    International Nuclear Information System (INIS)

    Pablo, Aramayo; Roberto, Cruz; Luis, Rocha; Rotger Viviana I; Olivera, Juan Manuel

    2007-01-01

    The work presents the development of a device able to detect and quantify ionizing radiation. The transduction principle proposed for the design of the detector consists of using the properties of fluorescent screens, which respond to incident radiation with proportional brightness. Though the method is well known, it proved necessary to optimize the design of the detectors in order to get greater efficiency in the radiation/brightness relationship; to that purpose, different models were tried out, varying their geometry and the optoelectronic device. The resultant signal was processed and presented in a visualization system. It is important to highlight that the project is in development and the results we obtained are preliminary.

  11. Energetic arousal and language: predictions from the computational theory of quantifiers processing.

    Science.gov (United States)

    Zajenkowski, Marcin

    2013-10-01

    The author examines the relationship between energetic arousal (EA) and the processing of sentences containing natural-language quantifiers. Previous studies and theories have shown that energy may differentially affect various cognitive functions. Recent investigations devoted to quantifiers strongly support the theory that various types of quantifiers involve different cognitive functions in the sentence-picture verification task. In the present study, 201 students were presented with a sentence-picture verification task consisting of simple propositions containing a quantifier that referred to the color of a car on display. Color pictures of cars accompanied the propositions. In addition, the level of participants' EA was measured before and after the verification task. It was found that EA and performance on proportional quantifiers (e.g., "More than half of the cars are red") are in an inverted U-shaped relationship. This result may be explained by the fact that proportional sentences engage working memory to a high degree, and previous models of EA-cognition associations have been based on the assumption that tasks that require parallel attentional and memory processes are best performed when energy is moderate. The research described in the present article has several applications, as it shows the optimal human conditions for verbal comprehension. For instance, it may be important in workplace design to control the level of arousal experienced by office staff when work is mostly related to the processing of complex texts. Energy level may be influenced by many factors, such as noise, time of day, or thermal conditions.

  12. Quantifying solute transport processes: are chemically "conservative" tracers electrically conservative?

    Science.gov (United States)

    Singha, Kamini; Li, Li; Day-Lewis, Frederick D.; Regberg, Aaron B.

    2012-01-01

    The concept of a nonreactive or conservative tracer, commonly invoked in investigations of solute transport, requires additional study in the context of electrical geophysical monitoring. Tracers that are commonly considered conservative may undergo reactive processes, such as ion exchange, thus changing the aqueous composition of the system. As a result, the measured electrical conductivity may reflect not only solute transport but also reactive processes. We have evaluated the impacts of ion exchange reactions, rate-limited mass transfer, and surface conduction on quantifying tracer mass, mean arrival time, and temporal variance in laboratory-scale column experiments. Numerical examples showed that (1) ion exchange can lead to resistivity-estimated tracer mass, velocity, and dispersivity that may be inaccurate; (2) mass transfer leads to an overestimate in the mobile tracer mass and an underestimate in velocity when using electrical methods; and (3) surface conductance does not notably affect estimated moments when high-concentration tracers are used, although this phenomenon may be important at low concentrations or in sediments with high and/or spatially variable cation-exchange capacity. In all cases, colocated groundwater concentration measurements are of high importance for interpreting geophysical data with respect to the controlling transport processes of interest.
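
    The tracer mass, mean arrival time and temporal variance evaluated in this study are the zeroth, first and second temporal moments of a breakthrough curve; a minimal sketch of computing them from a concentration time series is given below. The synthetic curve is an assumption for illustration only.

```python
# Minimal sketch: temporal moments of a tracer breakthrough curve C(t).
# m0 is a tracer-mass proxy, m1/m0 the mean arrival time, and
# m2/m0 - (m1/m0)^2 the temporal variance. The data below are illustrative only.
import numpy as np

def temporal_moments(t, c):
    dt = t[1] - t[0]                   # assumes uniform sampling
    m0 = np.sum(c) * dt
    m1 = np.sum(t * c) * dt
    m2 = np.sum(t ** 2 * c) * dt
    mean_arrival = m1 / m0
    variance = m2 / m0 - mean_arrival ** 2
    return m0, mean_arrival, variance

t = np.linspace(0, 100, 1001)                          # hours
c = np.exp(-(t - 40.0) ** 2 / (2 * 8.0 ** 2))          # synthetic breakthrough curve

m0, mu, var = temporal_moments(t, c)
print(f"mass proxy = {m0:.2f}, mean arrival = {mu:.1f} h, variance = {var:.1f} h^2")
```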

  13. Quantifying Diffuse Contamination: Method and Application to Pb in Soil.

    Science.gov (United States)

    Fabian, Karl; Reimann, Clemens; de Caritat, Patrice

    2017-06-20

    A new method for detecting and quantifying diffuse contamination at the continental to regional scale is based on the analysis of cumulative distribution functions (CDFs). It uses cumulative probability (CP) plots for spatially representative data sets, preferably containing >1000 determinations. Simulations demonstrate how different types of contamination influence elemental CDFs of different sample media. It is found that diffuse contamination is characterized by a distinctive shift of the low-concentration end of the distribution of the studied element in its CP plot. Diffuse contamination can be detected and quantified via either (1) comparing the distribution of the contaminating element to that of an element with a geochemically comparable behavior but no contamination source (e.g., Pb vs Rb), or (2) comparing the top soil distribution of an element to the distribution of the same element in subsoil samples from the same area, taking soil forming processes into consideration. Both procedures are demonstrated for geochemical soil data sets from Europe, Australia, and the U.S.A. Several different data sets from Europe deliver comparable results at different scales. Diffuse Pb contamination in surface soil is estimated with both procedures, which can be used to efficiently monitor diffuse contamination at the continental to regional scale.
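
    A cumulative probability plot of the kind described can be produced directly from the empirical distribution of element concentrations; the sketch below compares a potentially contaminated element against a geochemically similar reference element. The synthetic log-normal data and the added offset are assumptions for illustration, not values from the study.

```python
# Minimal sketch: cumulative probability (CP) plot comparing the distribution of a
# potentially contaminated element (e.g. Pb) with a geochemically similar reference
# element (e.g. Rb). A diffuse-contamination shift appears at the low-concentration
# end of the contaminated element's curve. All data below are synthetic assumptions.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

rng = np.random.default_rng(0)
rb = rng.lognormal(mean=3.0, sigma=0.6, size=2000)           # "natural" distribution
pb = rng.lognormal(mean=2.5, sigma=0.6, size=2000) + 8.0     # natural + diffuse addition

def cp_coords(x):
    """Sorted concentrations vs. normal quantiles of their cumulative probability."""
    x = np.sort(x)
    p = (np.arange(1, x.size + 1) - 0.5) / x.size
    return norm.ppf(p), x

for label, data in [("Rb (reference)", rb), ("Pb (with diffuse addition)", pb)]:
    q, x = cp_coords(data)
    plt.plot(q, x, label=label)
plt.yscale("log")
plt.xlabel("normal quantile of cumulative probability")
plt.ylabel("concentration (mg/kg)")
plt.legend()
plt.show()
```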

  14. Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    Science.gov (United States)

    Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose

    2018-06-01

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
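
    As one concrete example of the physically based category, look-up table (LUT) inversion selects the RTM simulation whose spectrum is closest to the measurement under a cost function. The sketch below uses a synthetic LUT and a root-mean-square-error cost; in practice the LUT would come from an RTM, and the variable names here are assumptions for illustration.

```python
# Minimal sketch of look-up table (LUT) inversion: pick the biophysical variable
# (here leaf area index, LAI) whose simulated spectra best match the measurement
# under an RMSE cost. The LUT here is synthetic; in practice it would be generated
# with a radiative transfer model.
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_entries = 200, 5000
lut_spectra = rng.random((n_entries, n_bands))        # simulated reflectance spectra
lut_lai = rng.uniform(0.0, 7.0, n_entries)            # LAI value behind each spectrum

def invert_lut(measured, spectra, variable, k=10):
    """Return the mean of the variable over the k best-matching LUT entries."""
    rmse = np.sqrt(np.mean((spectra - measured) ** 2, axis=1))
    best = np.argsort(rmse)[:k]
    return variable[best].mean()

measured_spectrum = rng.random(n_bands)
print(f"retrieved LAI ~ {invert_lut(measured_spectrum, lut_spectra, lut_lai):.2f}")
```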

  15. Complex methodology of the model elaboration of the quantified transnationalization process assessment

    Directory of Open Access Journals (Sweden)

    Larysa Rudenko-Sudarieva

    2009-03-01

    Full Text Available In the article there are studied the theoretical fundamentals of transnationalization, the peculiarities of its development based on the studying of the world theory and practices; suggested a systematic approach of the methodical background as for determination of the economic category of «transnationalization» and its author’s definition; developed a complex methodology of the model building of the quantified transnationalization process assessment based on the seven-milestone algorithm of the formation of key indicators; systematized and carried out synthesis of the empiric investigations concerning the state, development of the available tendencies, comparative analysis of the transnationalization level within the separate TNC’s groups.

  16. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun; Ding, Yu; Xie, Le; Genton, Marc G.

    2014-01-01

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade

  17. Quantifying quantum coherence with quantum Fisher information.

    Science.gov (United States)

    Feng, X N; Wei, L F

    2017-11-14

    Quantum coherence is an old but perennially important concept in quantum mechanics, and it is now regarded as a necessary resource for quantum information processing and quantum metrology. However, the question of how to quantify quantum coherence has only recently received attention (see, e.g., Baumgratz et al., PRL 113, 140401 (2014)). In this paper we verify that the well-known quantum Fisher information (QFI) can be utilized to quantify quantum coherence, as it satisfies monotonicity under the typical incoherent operations and convexity under the mixing of quantum states. Differing from most of the purely axiomatic methods, quantifying quantum coherence by QFI could be experimentally testable, as the bound of the QFI is practically measurable. The validity of our proposal is specifically demonstrated with the typical phase-damping and depolarizing evolution processes of a generic single-qubit state, and also by comparing it with the other quantifying methods proposed previously.
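
    For reference, the quantum Fisher information used here as a coherence quantifier has the standard spectral form below, written for a state with eigendecomposition ρ = Σ_i λ_i |i⟩⟨i| and a generator H. This is the textbook expression, not a formula quoted from the paper.

```latex
% Standard spectral form of the quantum Fisher information for a state
% \rho = \sum_i \lambda_i |i><i| with respect to a generator H
% (textbook expression, not quoted from the paper):
\[
  F_Q(\rho, H) \;=\; 2 \sum_{\lambda_i + \lambda_j > 0}
  \frac{(\lambda_i - \lambda_j)^2}{\lambda_i + \lambda_j}\,
  \bigl|\langle i | H | j \rangle\bigr|^2 .
\]
```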

  18. Characterization of autoregressive processes using entropic quantifiers

    Science.gov (United States)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-01-01

    The aim of this contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous works seem to indicate, the Bandt and Pompe methodology for estimating entropy does not allow one to distinguish between probability distributions, which could be fundamental for simulation or for probability analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
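
    The Bandt and Pompe estimator mentioned above builds a probability distribution over ordinal patterns of the series; a minimal permutation-entropy sketch is shown below. The embedding dimension and the AR(1) test series are illustrative choices, not taken from the paper.

```python
# Minimal sketch of Bandt-Pompe permutation entropy: count ordinal patterns of
# length d in the series and compute the normalized Shannon entropy of their
# frequencies. The AR(1) test series and d = 4 are illustrative choices.
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, d=4, tau=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series."""
    counts = {p: 0 for p in permutations(range(d))}
    n = len(x) - (d - 1) * tau
    for i in range(n):
        window = x[i:i + d * tau:tau]
        counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float) / n
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(d)))

# AR(1) test series x_t = 0.7 * x_{t-1} + white noise
rng = np.random.default_rng(2)
x = np.zeros(5000)
for t in range(1, x.size):
    x[t] = 0.7 * x[t - 1] + rng.normal()

print(f"normalized permutation entropy: {permutation_entropy(x):.3f}")
```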

  19. Advanced image processing methods as a tool to map and quantify different types of biological soil crust

    Science.gov (United States)

    Rodríguez-Caballero, Emilio; Escribano, Paula; Cantón, Yolanda

    2014-04-01

    Biological soil crusts (BSCs) modify numerous soil surface properties and affect many key ecosystem processes. As BSCs are considered one of the most important components of semiarid ecosystems, accurate characterisation of their spatial distribution is increasingly in demand. This paper describes a novel methodology for identifying the areas dominated by different types of BSCs and quantifying their relative cover at subpixel scale in a semiarid ecosystem of SE Spain. The approach consists of two consecutive steps: (i) Support Vector Machine (SVM) classification to identify the main ground units dominated by homogeneous surface cover (bare soil, cyanobacteria BSC, lichen BSC, green and dry vegetation), which are of strong ecological relevance; and (ii) spectral mixture analysis (SMA) of the ground units to quantify the proportion of each type of surface cover within each pixel, to correctly characterize the complex spatial heterogeneity inherent to semiarid ecosystems. SVM classification showed very good results with a Kappa coefficient of 0.93, discriminating among areas dominated by bare soil, cyanobacteria BSC, lichen BSC, green and dry vegetation. Subpixel relative abundance images achieved relatively high accuracy for both types of BSCs (about 80%), whereas a general overestimation of vegetation was observed. Our results open the possibility of introducing the effect of the presence and relative cover of BSCs in spatially distributed hydrological and ecological models, and in assessment and monitoring aimed at reducing degradation in these areas.
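
    The spectral mixture analysis step can be illustrated with a simple constrained least-squares unmixing of a pixel spectrum against endmember spectra; the endmember and pixel spectra below are synthetic assumptions, not data from the study.

```python
# Minimal sketch of linear spectral mixture analysis (SMA): estimate the fractional
# cover of each endmember (bare soil, cyanobacteria BSC, lichen BSC, vegetation)
# within a pixel by non-negative least squares, then renormalize to sum to one.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_bands = 50
endmembers = rng.random((n_bands, 4))        # columns: soil, cyano BSC, lichen BSC, veg
true_fractions = np.array([0.5, 0.3, 0.1, 0.1])
pixel = endmembers @ true_fractions + rng.normal(0, 0.005, n_bands)

fractions, _ = nnls(endmembers, pixel)       # non-negativity constraint
fractions /= fractions.sum()                 # sum-to-one renormalization
print(np.round(fractions, 2))
```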

  20. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun

    2014-04-21

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.
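
    The IEC-style binning method that the kernel plus approach is compared against simply averages measured power within fixed wind-speed bins; a minimal sketch is below. The 0.5 m/s bin width and the synthetic turbine data are illustrative assumptions.

```python
# Minimal sketch of the IEC-style binning method: average measured power within
# fixed wind-speed bins to form an empirical power curve.
import numpy as np

rng = np.random.default_rng(4)
wind = rng.uniform(3.0, 15.0, 10000)                                        # m/s
power = 3000 / (1 + np.exp(-(wind - 9.0))) + rng.normal(0, 80, wind.size)   # kW

bins = np.arange(3.0, 15.5, 0.5)             # bin edges
idx = np.digitize(wind, bins)
curve = {bins[i - 1] + 0.25: power[idx == i].mean()
         for i in np.unique(idx) if 0 < i < len(bins)}

for speed, p in sorted(curve.items())[:5]:
    print(f"{speed:5.2f} m/s -> {p:7.1f} kW")
```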

  1. Quantifying the indirect impacts of climate on agriculture: an inter-method comparison

    Science.gov (United States)

    Calvin, Kate; Fisher-Vanden, Karen

    2017-11-01

    Climate change and increases in CO2 concentration affect the productivity of land, with implications for land use, land cover, and agricultural production. Much of the literature on the effect of climate on agriculture has focused on linking projections of changes in climate to process-based or statistical crop models. However, the changes in productivity have broader economic implications that cannot be quantified in crop models alone. How important are these socio-economic feedbacks to a comprehensive assessment of the impacts of climate change on agriculture? In this paper, we attempt to measure the importance of these interaction effects through an inter-method comparison between process models, statistical models, and integrated assessment models (IAMs). We find that the impacts on crop yields vary widely between these three modeling approaches. Yield impacts generated by the IAMs are 20%-40% higher than the yield impacts generated by process-based or statistical crop models, with indirect climate effects (e.g. input substitution and crop switching) adjusting yields by between -12% and +15%. The remaining effects are due to technological change.

  2. A digital PCR method for identifying and quantifying adulteration of meat species in raw and processed food.

    Directory of Open Access Journals (Sweden)

    Junan Ren

    Meat adulteration is a worldwide concern. In this paper, a new droplet digital PCR (ddPCR) method was developed for the quantitative determination of the presence of chicken in sheep and goat meat products. In addition, a constant (multiplication factor) was introduced to transform the ratio of copy numbers into the proportion of meats. The presented ddPCR method also proved to be more accurate (showing bias of less than 9% in the range from 5% to 80%) than real-time PCR, which has been widely used for this determination. The method exhibited good repeatability and stability under different thermal treatments and at ultra-high pressure. The relative standard deviation (RSD) values at 5% chicken content were less than 5.4% for ultra-high pressure or heat treatment. Moreover, we confirmed that different parts of the meat had no effect on the quantification accuracy of the ddPCR method. In contrast to real-time PCR, ddPCR proved to be a more precise, sensitive and stable analytical strategy, overcoming potential problems of discrepancies in amplification efficiency and yielding copy numbers directly without standard curves. The method and strategy developed in this study can be applied to quantify the presence, and to confirm the absence, of adulterants not only in sheep meat but also in other kinds of meat and meat products.
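
    The quantification step described above reduces to multiplying the measured copy-number ratio by a species-specific conversion constant; a minimal hedged sketch of that arithmetic is shown below. The constant and the copy numbers are placeholders, not values from the paper.

```python
# Minimal sketch of the ddPCR quantification arithmetic: convert the ratio of
# target-species to reference-species copy numbers into a mass proportion of meat
# via a species-specific multiplication factor. All numbers are placeholders.
def meat_proportion(target_copies, reference_copies, multiplication_factor):
    """Estimated mass fraction of the target species in the meat mixture."""
    copy_ratio = target_copies / reference_copies
    return copy_ratio * multiplication_factor

chicken_copies = 1200.0   # hypothetical droplet-derived copy number for chicken
sheep_copies = 4800.0     # hypothetical copy number for sheep
k = 0.8                   # hypothetical conversion constant (copy ratio -> mass ratio)
print(f"estimated chicken content: {meat_proportion(chicken_copies, sheep_copies, k):.1%}")
```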

  3. Development of a process for quantifying the benefits of research : final report.

    Science.gov (United States)

    2017-07-04

    MnDOT Research Services funds and administers approximately 180 transportation research projects annually at a cost of slightly more than $3 million. This project developed an easy-to-apply process for quantifying the potential benefits of research a...

  4. Quantifying chaotic dynamics from integrate-and-fire processes

    Energy Technology Data Exchange (ETDEWEB)

    Pavlov, A. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Saratov State Technical University, Politehnicheskaya Str. 77, 410054 Saratov (Russian Federation); Pavlova, O. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Mohammad, Y. K. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Tikrit University Salahudin, Tikrit Qadisiyah, University Str. P.O. Box 42, Tikrit (Iraq); Kurths, J. [Potsdam Institute for Climate Impact Research, Telegraphenberg A 31, 14473 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easy at high firing rates. When the firing rate is low, a correct estimation of the Lyapunov exponents (LEs) describing the dynamical features of complex oscillations reflected in IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimation of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  5. A simple method for quantifying jump loads in volleyball athletes.

    Science.gov (United States)

    Charlton, Paula C; Kenneally-Dabrowski, Claire; Sheppard, Jeremy; Spratford, Wayne

    2017-03-01

    Evaluate the validity of a commercially available wearable device, the Vert, for measuring vertical displacement and jump count in volleyball athletes, and propose a potential method of quantifying external load during training and match play within this population. Validation study. The ability of the Vert device to measure vertical displacement in male, junior elite volleyball athletes was assessed against reference standard laboratory motion analysis. The ability of the Vert device to count jumps during training and match-play was assessed via comparison with retrospective video analysis to determine precision and recall. A method of quantifying external load, known as the load index (LdIx) algorithm, was proposed using the product of the jump count and average kinetic energy. Correlations between two separate Vert devices and three-dimensional trajectory data were good to excellent for all jump types performed (r=0.83-0.97), with a mean bias of between 3.57 and 4.28 cm. When matched against jumps identified through video analysis, the Vert demonstrated excellent precision (0.995-1.000), evidenced by a low number of false positives. The number of false negatives identified with the Vert was higher, resulting in lower recall values (0.814-0.930). The Vert is a commercially available tool that has potential for measuring vertical displacement and jump count in elite junior volleyball athletes without the need for time-consuming analysis and bespoke software, allowing the collected data to better quantify load using the proposed algorithm (LdIx). Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
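
    The proposed load index is the product of the jump count and the average kinetic energy; a minimal hedged sketch of that computation from per-jump displacement data is below. The athlete mass, the jump heights and the use of m*g*h to infer take-off kinetic energy are assumptions for illustration, not the authors' published procedure.

```python
# Minimal sketch of a load index of the form LdIx = jump count x average kinetic
# energy, with take-off kinetic energy inferred from jump height via m*g*h.
# Athlete mass and jump heights below are placeholder assumptions.
G = 9.81  # m/s^2

def load_index(jump_heights_m, body_mass_kg):
    count = len(jump_heights_m)
    mean_kinetic_energy = body_mass_kg * G * sum(jump_heights_m) / count  # joules
    return count * mean_kinetic_energy

session_jumps = [0.42, 0.55, 0.61, 0.48, 0.50]   # vertical displacements in metres
print(f"LdIx = {load_index(session_jumps, body_mass_kg=85.0):.0f} (arbitrary units)")
```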

  6. Quantifying social norms: by coupling the ecosystem management concept and semi-quantitative sociological methods

    Science.gov (United States)

    Zhang, D.; Xu, H.

    2012-12-01

    Over recent decades, human-induced environmental changes have steadily and rapidly grown in intensity and impact to the point where they now often exceed natural impacts. As one important component of human activities, social norms play key roles in environmental and natural resources management. But the lack of relevant quantitative data about social norms greatly limits our scientific understanding of the complex linkages between humans and nature, and hampers our solving of pressing environmental and social problems. In this study, we built a quantification method by coupling the ecosystem management concept, semi-quantitative sociological methods and mathematical statistics. We derived the quantified value of social norms from two parts: whether the content of the social norms coincides with the concept of ecosystem management (content value), and how the social norms perform once put into implementation (implementation value). First, we separately identified 12 core elements of ecosystem management and 16 indexes of social norms, and then matched them one by one. According to their degree of matching, we obtained the content value of the social norms. Second, we selected 8 key factors that represent the performance of social norms after they are put into implementation, and obtained the implementation value by the Delphi method. Adding these two values, we obtained the final value of each social norm. Third, we conducted a case study in the Heihe river basin, the second largest inland river basin in China, by selecting 12 official edicts related to ecosystem management of the Heihe River Basin. By doing so, we first obtained quantified data on social norms that can be directly applied to research involving observational or experimental data collection of natural processes. Second, each value is supported by specific contents, so it can assist in creating a clear road map for building or revising management and policy guidelines. For example, in this case study

  7. Quantifying graininess of glossy food products

    DEFF Research Database (Denmark)

    Møller, Flemming; Carstensen, Jens Michael

    The sensory quality of yoghurt can be altered when changing the milk composition or processing conditions. Part of the sensory quality may be assessed visually. It is described how a non-contact method for quantifying surface gloss and grains in yoghurt can be made. It was found that the standard...

  8. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Background: The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods: We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results: Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
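
    The simulation described in the Methods section can be reproduced schematically: displace each geocode by a random angle and a distance drawn from the observed error distribution, then check whether the displaced point falls in a different census tract. The sketch below uses a placeholder tract-lookup function and synthetic error data, since the real analysis requires the census tract polygons.

```python
# Minimal sketch of the geocode-displacement simulation: move each point by a
# random angle and a random distance drawn from the observed location-error
# distribution, then count how often the displaced point changes census tract.
# `lookup_census_tract` is a placeholder for a point-in-polygon query.
import numpy as np

rng = np.random.default_rng(5)

def lookup_census_tract(x, y):
    """Placeholder: in practice a spatial join against Census 2000 tract polygons."""
    return (int(x // 1000), int(y // 1000))   # crude 1 km grid stands in for tracts

observed_errors_m = rng.gamma(shape=2.0, scale=50.0, size=2000)   # stand-in error distribution
geocodes = rng.uniform(0, 50000, size=(5000, 2))                  # projected coordinates (m)

angles = rng.uniform(0, 2 * np.pi, len(geocodes))
dists = rng.choice(observed_errors_m, len(geocodes))
displaced = geocodes + np.column_stack((dists * np.cos(angles), dists * np.sin(angles)))

changed = sum(lookup_census_tract(*p) != lookup_census_tract(*q)
              for p, q in zip(geocodes, displaced))
print(f"{100 * changed / len(geocodes):.1f}% of geocodes resolved to a different tract")
```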

  9. A systematic review of methods for quantifying serum testosterone in patients with prostate cancer who underwent castration.

    Science.gov (United States)

    Comas, I; Ferrer, R; Planas, J; Celma, A; Regis, L; Morote, J

    2018-03-01

    The clinical practice guidelines recommend measuring serum testosterone in patients with prostate cancer (PC) who undergo castration, and the serum testosterone concentration should remain below the castration level. The use of immunoassays (IA) has become widespread, although their metrological characteristics do not seem appropriate for quantifying low testosterone concentrations. The objective of this review is to analyse the methods for quantifying testosterone and to establish whether there is scientific evidence that justifies measuring it, in patients with PC who undergo castration, through liquid chromatography coupled with tandem mass spectrometry (LC-MSMS). We performed a search in PubMed with the following MeSH terms: measurement, testosterone, androgen suppression and prostate cancer. We selected 12 studies that compared the metrological characteristics of various methods for quantifying serum testosterone against MS detection methods. IAs are standard tools for measuring testosterone levels; however, there is evidence that IAs lack accuracy and precision for quantifying low concentrations. Most chemiluminescent IAs overestimate the concentration, especially below 100 ng/dL. The procedures that use LC-MSMS have an adequate lower quantification limit and proper accuracy and precision. We found no specific evidence in patients with PC who underwent castration. LC-MSMS is the appropriate method for quantifying low serum testosterone concentrations. We need to define the level of castration with this method and the optimal level related to better progression of the disease. Copyright © 2017 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  10. Low-Cost Method for Quantifying Sodium in Coconut Water and Seawater for the Undergraduate Analytical Chemistry Laboratory: Flame Test, a Mobile Phone Camera, and Image Processing

    Science.gov (United States)

    Moraes, Edgar P.; da Silva, Nilbert S. A.; de Morais, Camilo de L. M.; das Neves, Luiz S.; de Lima, Kassio M. G.

    2014-01-01

    The flame test is a classical analytical method that is often used to teach students how to identify specific metals. However, some universities in developing countries have difficulties acquiring the sophisticated instrumentation needed to demonstrate how to identify and quantify metals. In this context, a method was developed based on the flame…
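
    A sketch of how such an image-based quantification typically proceeds is given below: extract an intensity measure from the flame region of each photo of a sodium standard and build a linear calibration against concentration. The pixel statistics, standard concentrations and intensities are assumptions for illustration, not the published procedure.

```python
# Minimal sketch of image-based flame colorimetry: use the mean intensity of a
# chosen colour channel in the flame region of each standard's photo, fit a linear
# calibration, and predict the concentration of an unknown sample.
# All values below are placeholder assumptions.
import numpy as np

standards_mg_L = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
mean_intensity = np.array([12.0, 58.0, 101.0, 190.0, 372.0])   # e.g. red-channel mean

slope, intercept = np.polyfit(standards_mg_L, mean_intensity, 1)

def predict_concentration(intensity):
    return (intensity - intercept) / slope

print(f"unknown sample: {predict_concentration(150.0):.1f} mg/L sodium")
```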

  11. A method for quantifying mechanical properties of tissue following viral infection.

    Directory of Open Access Journals (Sweden)

    Vy Lam

    Viral infection and replication involve the reorganization of the actin network within the host cell. Actin plays a central role in the mechanical properties of cells. We have demonstrated a method to quantify changes in the mechanical properties of fabricated model three-dimensional (3D) connective tissue following viral infection. Using this method, we have characterized the impact of infection by the human herpesvirus, cytomegalovirus (HCMV). HCMV is a member of the herpesvirus family and infects a variety of cell types including fibroblasts. In the body, fibroblasts are necessary for maintaining connective tissue and function by creating mechanical force. Using this 3D connective tissue model, we observed that infection disrupted the cell's ability to generate force and reduced the cumulative contractile force of the tissue. The addition of HCMV viral particles in the absence of both viral gene expression and DNA replication was sufficient to disrupt tissue function. We observed that alterations of the mechanical properties are, in part, due to a disruption of the underlying complex actin microfilament network established by the embedded fibroblasts. Finally, we were able to prevent HCMV-mediated disruption of tissue function by the addition of human immune globulin against HCMV. This study demonstrates a method to quantify the impact of viral infection on mechanical properties which are not evident using conventional cell culture systems.

  12. QUANTIFYING THE SHORT LIFETIME WITH TCSPC-FLIM: FIRST MOMENT VERSUS FITTING METHODS

    Directory of Open Access Journals (Sweden)

    LINGLING XU

    2013-10-01

    Combining time-correlated single photon counting (TCSPC) with fluorescence lifetime imaging microscopy (FLIM) provides promising opportunities for revealing important information on the microenvironment of cells and tissues, but the applications are thus far mainly limited by the accuracy and precision of the TCSPC-FLIM technique. Here we present a comprehensive investigation of the performance of two data analysis methods, the first moment (M1) method and the conventional least squares (Fitting) method, in quantifying fluorescence lifetime. We found that the M1 method is superior to the Fitting method when the lifetime is short (70-400 ps) or the signal intensity is weak (<10^3 photons).
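
    The first-moment (M1) estimator referenced above computes the lifetime as the photon-count-weighted mean arrival time within the decay window; a minimal sketch, with a synthetic mono-exponential decay as an assumption, is shown below.

```python
# Minimal sketch of the first-moment (M1) lifetime estimator for TCSPC data:
# tau_M1 = sum(t_i * N_i) / sum(N_i) - t0, the count-weighted mean arrival time
# relative to the start of the decay window. The synthetic decay is illustrative.
import numpy as np

rng = np.random.default_rng(6)
bin_width_ps = 16.0
t = np.arange(0, 10000, bin_width_ps)                 # time bins (ps)
true_tau_ps = 300.0
counts = rng.poisson(200.0 * np.exp(-t / true_tau_ps))

t0 = t[np.argmax(counts)]                             # start of the decay window
window = t >= t0
tau_m1 = np.sum(t[window] * counts[window]) / np.sum(counts[window]) - t0
print(f"M1 lifetime estimate: {tau_m1:.0f} ps (true value {true_tau_ps:.0f} ps)")
```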

  13. Quantifying Functional Reuse from Object Oriented Requirements Specifications

    NARCIS (Netherlands)

    Condori-Fernandez, Nelly; Condori-Fernández, N.; Pastor, O; Daneva, Maia; Abran, A.; Castro, J.; Quer, C.; Carvallo, J. B.; Fernandes da Silva, L.

    2008-01-01

    Software reuse is essential for improving efficiency and productivity in the software development process. This paper analyses reuse within the requirements engineering phase by taking and adapting a standard functional size measurement method, COSMIC FFP. Our proposal attempts to quantify reusability

  14. A practical method for quantifying atherosclerotic lesions in rabbits.

    Science.gov (United States)

    Zhang, C; Zheng, H; Yu, Q; Yang, P; Li, Y; Cheng, F; Fan, J; Liu, E

    2010-01-01

    The rabbit has been widely used for the study of human atherosclerosis; however, the method for analysis of the atherosclerotic lesions has not been standardized between laboratories. The present study reports a practical method for quantifying the changes that occur in aortic atherosclerosis of rabbits. Male Japanese white rabbits were fed with either a standard chow or a diet containing 10% fat and 0.3% cholesterol for 16 weeks. Plasma concentrations of glucose, insulin, total cholesterol, triglycerides and high-density lipoprotein were measured. Aortic atherosclerotic lesions were assessed in quantitative fashion using an image analysis system that measured (1) the gross area of the entire aorta affected by atherosclerosis as defined by Sudan IV staining, (2) the microscopical intimal lesion defined by the elastic van Gieson stain and (3) the infiltration of macrophages and smooth muscle cell proliferation as determined immunohistochemically. The rabbits developed severe aortic atherosclerosis without apparent abnormality of glucose metabolism. The quantitative method described here will be useful for the further investigation of atherosclerosis in rabbits. Copyright 2009 Elsevier Ltd. All rights reserved.

  15. A method to quantify the "cone of economy".

    Science.gov (United States)

    Haddas, Ram; Lieberman, Isador H

    2018-05-01

    A non-randomized, prospective, concurrent control cohort study. The purpose of this study is to develop and evaluate a method to quantify the dimensions of the cone of economy (COE) and the energy expenditure associated with maintaining a balanced posture within the COE in a group of adult degenerative scoliosis patients, and to compare them to matched non-scoliotic controls. Balance is defined as the ability of the human body to maintain its center of mass (COM) within the base of support with minimal postural sway. The cone of economy refers to the stable region of upright standing posture. The underlying assumption is that deviating outside one's individual cone challenges the balance mechanisms. Adult degenerative scoliosis (ADS) patients exhibit a variety of postural changes within their COE, involving the spine, pelvis and lower extremities, in their effort to compensate for the altered posture. Ten ADS patients and ten non-scoliotic volunteers performed a series of functional balance tests. The dimensions of the COE and the energy expenditure related to maintaining balance within the COE were measured using a human motion video capture system and dynamic surface electromyography. ADS patients presented more COM sway in the sagittal (ADS: 1.59 cm vs. H: 0.61 cm; p = 0.049) and coronal (ADS: 2.84 cm vs. H: 1.72 cm; p = 0.046) directions in comparison to the non-scoliotic controls. ADS patients presented with more COM (ADS: 33.30 cm vs. H: 19.13 cm; p = 0.039) and head (ADS: 31.06 cm vs. H: 19.13 cm; p = 0.013) displacements in comparison to the non-scoliotic controls. Scoliosis patients expended more muscle activity to maintain static standing, as manifested by increased activity in their erector spinae (ADS: 37.16 mV vs. H: 20.31 mV; p = 0.050) and gluteus maximus (ADS: 33.12 mV vs. H: 12.09 mV; p = 0.001) muscles. We were able to develop and evaluate a method that quantifies the COE boundaries, COM displacement, and amount of sway within the COE

  16. Defect window analysis by using SEM-contour based shape quantifying method for sub-20nm node production

    Science.gov (United States)

    Hibino, Daisuke; Hsu, Mingyi; Shindo, Hiroyuki; Izawa, Masayuki; Enomoto, Yuji; Lin, J. F.; Hu, J. R.

    2013-04-01

    The impact on yield loss due to systematic defects that remain after Optical Proximity Correction (OPC) modeling has increased, and achieving an acceptable yield has become more difficult in leading technologies beyond the 20 nm node. Furthermore, the process window has become narrower because of the complexity of IC design and reduced process margin. In the past, systematic defects have been inspected by human eyes. However, judgment by eye is sometimes unstable and inaccurate. Moreover, an enormous amount of time and labor has to be expended on the one-by-one judgment of several thousands of hot-spot defects. In order to overcome these difficulties and improve yield and manufacturability, an automated system that can quantify shape differences with high accuracy and speed is needed. Inspection points could be increased to achieve higher yield if the automated system meets this goal. The Defect Window Analysis (DWA) system developed by Hitachi High-Technologies, which uses high-precision contour extraction from SEM images on real silicon and a quantifying method that automatically calculates the difference between defect and non-defect patterns, has been applied to defect judgment in place of judgment by human eyes. The DWA result, which describes process behavior, might be fed back to design, OPC or mask. This new methodology and evaluation results are presented in detail in this paper.

  17. A colorimetric method to quantify endo-polygalacturonase activity.

    Science.gov (United States)

    Torres, Sebastián; Sayago, Jorge E; Ordoñez, Roxana M; Isla, María Inés

    2011-02-08

    We report a new colorimetric assay to quantify endo-polygalacturonase activity, which hydrolyzes polygalacturonic acid to produce smaller chains of galacturonate. Some of the reported polygalacturonase assays measure the activity by detecting the appearance of reducing ends, such as the Somogyi-Nelson method. As a result of being general towards reducing groups, the Somogyi-Nelson method is not appropriate when studying polygalacturonase and polygalacturonase inhibitors in plant crude extracts, which often have a strong reducing power. Ruthenium Red is an inorganic dye that binds polygalacturonic acid and causes its precipitation. In the presence of polygalacturonase, polygalacturonic acid is hydrolyzed, bringing about a corresponding gain in soluble Ruthenium Red. The described assay utilizes Ruthenium Red as the detection reagent, which has been used previously in plate-based assays but not in liquid medium reactions. The new method measures the disappearance of the substrate polygalacturonic acid and is compared to the Somogyi-Nelson assay. The experimental results using lemon peel, fern frond and castor leaf crude extracts demonstrate that the new method provides a way to quickly screen polygalacturonase activity and polygalacturonase inhibitors in plant crude extracts with high reducing power. On the other hand, the Ruthenium Red assay is not able to determine the activity of an exo-polygalacturonase as an initial velocity and thus allows differentiation between endo- and exo-polygalacturonase activities. Copyright © 2010 Elsevier Inc. All rights reserved.

  18. Quantifying Protein Concentrations Using Smartphone Colorimetry: A New Method for an Established Test

    Science.gov (United States)

    Gee, Clifford T.; Kehoe, Eric; Pomerantz, William C. K.; Penn, R. Lee

    2017-01-01

    Proteins are involved in nearly every biological process, which makes them of interest to a range of scientists. Previous work has shown that hand-held cameras can be used to determine the concentration of colored analytes in solution, and this paper extends the approach to reactions involving a color change in order to quantify protein…

  19. Nanoscale Mechanical Stimulation Method for Quantifying C. elegans Mechanosensory Behavior and Memory.

    Science.gov (United States)

    Sugi, Takuma; Okumura, Etsuko; Kiso, Kaori; Igarashi, Ryuji

    2016-01-01

    The withdrawal escape response of C. elegans to nonlocalized vibration is a useful behavioral paradigm for examining the mechanisms underlying mechanosensory behavior and its memory-dependent change. However, there are very few methods for investigating the vibration frequency, amplitude and duration needed to induce the behavior and its memory. Here, we establish a new system to quantify C. elegans mechanosensory behavior and memory using a piezoelectric sheet speaker. In this system, we can flexibly change the vibration properties at a nanoscale displacement level and quantify behavioral responses under each vibration property. The system is an economical setup and is easily replicated in other laboratories. By using the system, we clearly detected withdrawal escape responses and confirmed habituation memory. This system will facilitate the understanding of physiological aspects of C. elegans mechanosensory behavior in the future.

  20. Novel Methods for Measuring Depth of Anesthesia by Quantifying Dominant Information Flow in Multichannel EEGs

    Directory of Open Access Journals (Sweden)

    Kab-Mun Cha

    2017-01-01

    In this paper, we propose novel methods for measuring depth of anesthesia (DOA) by quantifying dominant information flow in multichannel EEGs. Conventional methods mainly use a few EEG channels independently, and most multichannel EEG based studies are limited to specific regions of the brain. Therefore, the function of the cerebral cortex over wide brain regions is hardly reflected in DOA measurement. Here, DOA is measured by quantifying the dominant information flow obtained from principle bipartition. Three bipartitioning methods are used to detect the dominant information flow across all EEG channels, and the dominant information flow is quantified by calculating information entropy. High correlation between the proposed measures and the plasma concentration of propofol is confirmed by the experimental results of clinical data from 39 subjects. To illustrate the performance of the proposed methods more easily, we present the results for multichannel EEG on a two-dimensional (2D) brain map.

  1. Rapid Quadrupole-Time-of-Flight Mass Spectrometry Method Quantifies Oxygen-Rich Lignin Compound in Complex Mixtures

    Science.gov (United States)

    Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.

    2018-03-01

    Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
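
    The quantitation strategy described, targeted tandem MS with an internal standard, amounts to calibrating the analyte-to-internal-standard response ratio against concentration and inverting the fit for unknowns; a minimal sketch of that arithmetic is below. The response values are placeholders, not data from the paper.

```python
# Minimal sketch of internal-standard calibration: regress the ratio of analyte to
# internal-standard response against known concentrations, then invert the fit for
# an unknown. Response values below are placeholders, not data from the study.
import numpy as np

conc_ng_mL = np.array([300, 600, 1200, 1800, 2500], dtype=float)
analyte_response = np.array([1.1e4, 2.3e4, 4.5e4, 6.9e4, 9.4e4])
internal_std_response = np.array([5.0e4, 5.1e4, 4.9e4, 5.0e4, 5.2e4])

ratio = analyte_response / internal_std_response
slope, intercept = np.polyfit(conc_ng_mL, ratio, 1)
r2 = np.corrcoef(conc_ng_mL, ratio)[0, 1] ** 2

unknown_ratio = 0.55
print(f"R^2 = {r2:.3f}; unknown = {(unknown_ratio - intercept) / slope:.0f} ng/mL")
```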

  2. Quantifying information leakage of randomized protocols

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Malacaria, Pasquale

    2015-01-01

    The quantification of information leakage provides a quantitative evaluation of the security of a system. We propose the usage of Markovian processes to model deterministic and probabilistic systems. By using a methodology generalizing the lattice of information approach, we model refined attackers capable of observing the internal behavior of the system, and quantify the information leakage of such systems. We also use our method to obtain an algorithm for the computation of channel capacity from our Markovian models. Finally, we show how to use the method to analyze timed and non-timed attacks...
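    A minimal sketch of one common way to express leakage: the mutual information between a secret and an observable, computed from a channel matrix and a prior. The paper's Markovian models and lattice-of-information refinements are not reproduced here; this only illustrates the entropy bookkeeping.

```python
import numpy as np

def mutual_information(prior: np.ndarray, channel: np.ndarray) -> float:
    """prior[s] = P(S=s); channel[s, o] = P(O=o | S=s). Returns leakage in bits."""
    joint = prior[:, None] * channel                 # P(S=s, O=o)
    p_o = joint.sum(axis=0)                          # marginal P(O=o)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (prior[:, None] * p_o[None, :]), 1.0)
    return float((joint * np.log2(ratio)).sum())

prior = np.array([0.5, 0.5])                         # uniform secret
channel = np.array([[0.9, 0.1],                      # noisy observation of the secret
                    [0.2, 0.8]])
print(f"leakage = {mutual_information(prior, channel):.3f} bits")
```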

  3. Development and implementation of a novel measure for quantifying training loads in rowing: the T2minute method.

    Science.gov (United States)

    Tran, Jacqueline; Rice, Anthony J; Main, Luana C; Gastin, Paul B

    2014-04-01

    The systematic management of training requires accurate training load measurement. However, quantifying the training of elite Australian rowers is challenging because of (a) the multicenter, multistate structure of the national program; (b) the variety of training undertaken; and (c) the limitations of existing methods for quantifying the loads accumulated from varied training formats. Therefore, the purpose of this project was to develop a new measure for quantifying training loads in rowing (the T2minute method). Sport scientists and senior coaches at the National Rowing Center of Excellence collaborated to develop the measure, which incorporates training duration, intensity, and mode to quantify a single index of training load. To account for training at different intensities, the method uses standardized intensity zones (T zones) established at the Australian Institute of Sport. Each zone was assigned a weighting factor according to the curvilinear relationship between power output and blood lactate response. Each training mode was assigned a weighting factor based on whether coaches perceived it to be "harder" or "easier" than on-water rowing. A common measurement unit, the T2minute, was defined to normalize sessions in different modes to a single index of load; one T2minute is equivalent to 1 minute of on-water single scull rowing at T2 intensity (approximately 60-72% VO2max). The T2minute method was successfully implemented to support national training strategies in Australian high performance rowing. By incorporating duration, intensity, and mode, the T2minute method extends the concepts that underpin current load measures, providing 1 consistent system to quantify loads from varied training formats.
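    A hedged sketch of the bookkeeping the abstract describes: load in T2minutes = duration x intensity weighting x mode weighting, expressed in minutes of on-water single-scull rowing at T2. The weighting factors and modes below are placeholders, not the published T-zone or mode weights.

```python
# Illustrative weights only; the actual T-zone and mode factors are defined by the authors.
INTENSITY_WEIGHT = {"T3": 0.8, "T2": 1.0, "T1": 1.5}
MODE_WEIGHT = {"single_scull": 1.0, "ergometer": 1.1, "cycling": 0.8}

def t2_minutes(duration_min: float, zone: str, mode: str) -> float:
    """One T2minute = 1 minute of on-water single-scull rowing at T2 intensity."""
    return duration_min * INTENSITY_WEIGHT[zone] * MODE_WEIGHT[mode]

session = [(60, "T2", "single_scull"), (40, "T1", "ergometer"), (30, "T3", "cycling")]
print(sum(t2_minutes(*bout) for bout in session))  # total session load in T2minutes
```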

  4. A Rapid Method for Quantifying Viable Mycobacterium avium subsp. paratuberculosis in Cellular Infection Assays

    Science.gov (United States)

    Pooley, Hannah B.; de Silva, Kumudika; Purdie, Auriol C.; Begg, Douglas J.; Whittington, Richard J.

    2016-01-01

    ABSTRACT Determining the viability of bacteria is a key outcome of in vitro cellular infection assays. Currently, this is done by culture, which is problematic for fastidious slow-growing bacteria such as Mycobacterium avium subsp. paratuberculosis, where it can take up to 4 months to confirm growth. This study aimed to identify an assay that can rapidly quantify the number of viable M. avium subsp. paratuberculosis cells in a cellular sample. Three commercially available bacterial viability assays along with a modified liquid culture method coupled with high-throughput quantitative PCR growth detection were assessed. Criteria for assessment included the ability of each assay to differentiate live and dead M. avium subsp. paratuberculosis organisms and their accuracy at low bacterial concentrations. Using the culture-based method, M. avium subsp. paratuberculosis growth was reliably detected and quantified within 2 weeks. There was a strong linear association between the 2-week growth rate and the initial inoculum concentration. The number of viable M. avium subsp. paratuberculosis cells in an unknown sample was quantified based on the growth rate, by using growth standards. In contrast, none of the commercially available viability assays were suitable for use with samples from in vitro cellular infection assays. IMPORTANCE Rapid quantification of the viability of Mycobacterium avium subsp. paratuberculosis in samples from in vitro cellular infection assays is important, as it allows these assays to be carried out on a large scale. In vitro cellular infection assays can function as a preliminary screening tool, for vaccine development or antimicrobial screening, and also to extend findings derived from experimental animal trials. Currently, by using culture, it takes up to 4 months to obtain quantifiable results regarding M. avium subsp. paratuberculosis viability after an in vitro infection assay; however, with the quantitative PCR and liquid culture method
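    A hedged sketch of the standard-curve idea described above: fit the 2-week growth signal against known inoculum concentrations and invert the fit for an unknown sample. The log-linear form and all numbers are illustrative assumptions, not data from the study.

```python
import numpy as np

log_inoculum = np.log10([1e2, 1e3, 1e4, 1e5, 1e6])   # known viable organisms/mL (growth standards)
growth_rate  = np.array([0.8, 1.6, 2.5, 3.3, 4.2])   # 2-week qPCR-derived growth metric (illustrative)

slope, intercept = np.polyfit(log_inoculum, growth_rate, 1)

def estimate_viable(rate_unknown: float) -> float:
    """Invert the standard curve to estimate viable organisms/mL in an unknown sample."""
    return 10 ** ((rate_unknown - intercept) / slope)

print(f"{estimate_viable(2.9):.2e} viable organisms/mL (illustrative)")
```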

  5. Adobe photoshop quantification (PSQ) rather than point-counting: A rapid and precise method for quantifying rock textural data and porosities

    Science.gov (United States)

    Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin

    2014-08-01

    Commonly used petrological quantification methods are visual estimation, point counting, and image analysis. In this article, however, an Adobe Photoshop-based analysis method (PSQ) is recommended for quantifying rock textural data and porosities. Adobe Photoshop provides versatile tools for selecting an area of interest, and the pixel count of a selection can be read and used to calculate its area percentage. Photoshop can therefore be used to rapidly quantify textural components, such as the content of grains, cements, and porosities, including total porosity and porosities of different genetic types. This method is named Adobe Photoshop Quantification (PSQ). The workflow of the PSQ method is illustrated with oolitic dolomite samples from the Triassic Feixianguan Formation, northeastern Sichuan Basin, China, as an example. The method was tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there is close agreement between the "standard" percentages and those determined by the PSQ method, with small counting and operator errors, small standard deviations, and high confidence levels. The porosities quantified by PSQ were evaluated against those determined by the whole-rock helium gas expansion method to test for specimen errors. The results show that the porosities quantified by PSQ correlate well with those determined by the conventional helium gas expansion method. The generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosity, which leads to a systematic underestimation of about 2%, and/or by macroporosity, which causes underestimation or overestimation in different cases. Adobe Photoshop can thus be used to quantify rock textural components and porosities; the method has been shown to be precise, accurate, and time-saving compared with the usual methods.
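    A minimal sketch of the pixel-count arithmetic behind PSQ: the area percentage of a selected component (for example, porosity) is its pixel count divided by the pixel count of the whole analyzed region. A boolean mask stands in for a Photoshop selection.

```python
import numpy as np

def area_percentage(selection_mask: np.ndarray, region_mask: np.ndarray) -> float:
    """Both arguments are boolean arrays of the same shape as the thin-section image."""
    return 100.0 * selection_mask.sum() / region_mask.sum()

rng = np.random.default_rng(1)
region = np.ones((500, 500), dtype=bool)       # whole analyzed image area
pores = rng.random((500, 500)) < 0.08          # stand-in for a pore "selection"
print(f"porosity ~ {area_percentage(pores, region):.1f}%")
```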

  6. Method of quantifying the loss of acidification activity of lactic acid starters during freezing and frozen storage.

    Science.gov (United States)

    Fonseca, F; Béal, C; Corrieu, G

    2000-02-01

    We have developed a method to quantify the resistance to freezing and frozen storage of lactic acid starters, based on measuring the time necessary to reach the maximum acidification rate in milk (tm) using the Cinac system. Depending on the operating conditions, tm increased during the freezing step and storage. The loss of acidification activity during freezing was quantified by the difference (delta tm) between the tm values of the concentrated cell suspension before and after freezing. During storage at -20 degrees C, linear relationships between tm and the storage time were established. Their slope, k, allowed the quantitation of the decrease in acidification activity during 9-14 weeks of frozen storage. The method was applied to determine the resistance to freezing and frozen storage of four strains of lactic acid bacteria and to quantify the cryoprotective effect of glycerol.
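    A hedged sketch of the two quantities defined above: delta tm (the loss on freezing) and the slope k of tm versus frozen-storage time. All values are illustrative.

```python
import numpy as np

tm_before_freezing = 4.2        # h, concentrated cell suspension before freezing (illustrative)
tm_after_freezing = 4.9         # h, same suspension after freezing
delta_tm = tm_after_freezing - tm_before_freezing
print(f"delta tm = {delta_tm:.2f} h (loss of acidification activity on freezing)")

storage_weeks = np.array([0, 2, 4, 6, 9, 12, 14], dtype=float)
tm = np.array([4.9, 5.1, 5.4, 5.6, 6.0, 6.3, 6.6])   # tm during storage at -20 C (illustrative)
k, _ = np.polyfit(storage_weeks, tm, 1)               # slope quantifies loss during frozen storage
print(f"k = {k:.3f} h per week of frozen storage")
```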

  7. Deep Learning Methods for Quantifying Invasive Benthic Species in the Great Lakes

    Science.gov (United States)

    Billings, G.; Skinner, K.; Johnson-Roberson, M.

    2017-12-01

    In recent decades, invasive species such as the round goby and dreissenid mussels have greatly impacted the Great Lakes ecosystem. It is critical to monitor these species, model their distribution, and quantify the impacts on the native fisheries and surrounding ecosystem in order to develop an effective management response. However, data collection in underwater environments is challenging and expensive. Furthermore, the round goby is typically found in rocky habitats, which are inaccessible to standard survey techniques such as bottom trawling. In this work we propose a robotic system for visual data collection to automatically detect and quantify invasive round gobies and mussels in the Great Lakes. Robotic platforms equipped with cameras can perform efficient, cost-effective, low-bias benthic surveys. This data collection can be further optimized through automatic detection and annotation of the target species. Deep learning methods have shown success in image recognition tasks. However, these methods often rely on a labelled training dataset, with up to millions of labelled images. Hand labeling large numbers of images is expensive and often impracticable. Furthermore, data collected in the field may be sparse when only considering images that contain the objects of interest. It is easier to collect dense, clean data in controlled lab settings, but this data is not a realistic representation of real field environments. In this work, we propose a deep learning approach to generate a large set of labelled training data realistic of underwater environments in the field. To generate these images, first we draw random sample images of individual fish and mussels from a library of images captured in a controlled lab environment. Next, these randomly drawn samples will be automatically merged into natural background images. Finally, we will use a generative adversarial network (GAN) that incorporates constraints of the physical model of underwater light propagation

  8. Method for quantifying percentage wood failure in block-shear specimens by a laser scanning profilometer

    Science.gov (United States)

    C. T. Scott; R. Hernandez; C. Frihart; R. Gleisner; T. Tice

    2005-01-01

    A new method for quantifying percentage wood failure of an adhesively bonded block-shear specimen has been developed. This method incorporates a laser displacement gage with an automated two-axis positioning system that functions as a highly sensitive profilometer. The failed specimen is continuously scanned across its width to obtain a surface failure profile. The...

  9. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks of subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and the complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is conditioned on fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed

  10. CAD system for quantifying emphysema severity based on multi-class classifier using CT image and spirometry information

    International Nuclear Information System (INIS)

    Nimura, Yukitaka; Mori, Kensaku; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi

    2010-01-01

    Many diagnostic methods based on CT image processing have been proposed for quantifying emphysema. Most of these methods extract lesions as low-attenuation areas (LAA) by simple threshold processing and evaluate severity by calculating the LAA ratio (LAA%) in the lung. However, pulmonary emphysema is diagnosed not only from the LAA but also from changes in the pulmonary blood vessels and from spirometric measurements. This paper proposes a novel computer-aided detection (CAD) system for quantifying emphysema by combining spirometric measurements with the results of CT image processing. The experimental results revealed that the accuracy rate of the proposed method was 78.3%, a 13.1% improvement over the method based on the LAA% alone. (author)
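    A minimal sketch of the LAA% baseline the proposed CAD system is compared against: the fraction of lung voxels below a low-attenuation threshold. The -950 HU cutoff is a commonly used value in the emphysema literature, not necessarily the one used in this paper, and the synthetic data below are illustrative.

```python
import numpy as np

def laa_percent(ct_hu: np.ndarray, lung_mask: np.ndarray, threshold: float = -950.0) -> float:
    """Percentage of lung voxels whose CT value (HU) falls below the threshold."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < threshold) / lung_voxels.size

rng = np.random.default_rng(2)
ct = rng.normal(-850, 60, size=(64, 64, 64))   # synthetic HU values
mask = np.ones_like(ct, dtype=bool)            # stand-in lung segmentation
print(f"LAA% = {laa_percent(ct, mask):.1f}")
```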

  11. An experimental method to quantify the impact fatigue behavior of rocks

    International Nuclear Information System (INIS)

    Wu, Bangbiao; Xia, Kaiwen; Kanopoulos, Patrick; Luo, Xuedong

    2014-01-01

    Fatigue failure is an important failure mode of engineering materials. The fatigue behavior of both ductile and brittle materials has been under investigation for many years. While the fatigue failure of ductile materials is well established, only a few studies have been carried out on brittle materials. In addition, most fatigue studies on rocks are conducted under quasi-static loading conditions. To address engineering applications involving repeated blasting, this paper proposes a method to quantify the impact fatigue properties of rocks. In this method, a split Hopkinson pressure bar system is adopted to exert impact load on the sample, which is placed in a specially designed steel sleeve to limit the displacement of the sample and thus to enable the recovery of the rock after each impact. The method is then applied to Laurentian granite, which is fine-grained and isotropic material. The results demonstrate that this is a practicable means to conduct impact fatigue tests on rocks and other brittle solids. (paper)

  12. An experimental method to quantify the impact fatigue behavior of rocks

    Science.gov (United States)

    Wu, Bangbiao; Kanopoulos, Patrick; Luo, Xuedong; Xia, Kaiwen

    2014-07-01

    Fatigue failure is an important failure mode of engineering materials. The fatigue behavior of both ductile and brittle materials has been under investigation for many years. While the fatigue failure of ductile materials is well established, only a few studies have been carried out on brittle materials. In addition, most fatigue studies on rocks are conducted under quasi-static loading conditions. To address engineering applications involving repeated blasting, this paper proposes a method to quantify the impact fatigue properties of rocks. In this method, a split Hopkinson pressure bar system is adopted to exert impact load on the sample, which is placed in a specially designed steel sleeve to limit the displacement of the sample and thus to enable the recovery of the rock after each impact. The method is then applied to Laurentian granite, which is fine-grained and isotropic material. The results demonstrate that this is a practicable means to conduct impact fatigue tests on rocks and other brittle solids.

  13. Development of an inexpensive optical method for studies of dental erosion process in vitro

    Science.gov (United States)

    Nasution, A. M. T.; Noerjanto, B.; Triwanto, L.

    2008-09-01

    Teeth play important roles in the digestion of food, in supporting the facial structure, and in the articulation of speech. Abnormalities in tooth structure can be initiated by an erosion process due to diet or beverage consumption, leading to destruction that affects their functionality. Studying the erosion processes that lead to these abnormalities is important for care and prevention. Accurate measurement methods are needed as research tools capable of quantifying the degree of dental destruction. In this work an inexpensive optical method for studying the dental erosion process is developed, based on extracting parameters from 3D dental images. The 3D image is obtained by reconstruction from multiple lateral 2D projections captured at many angles. Using a simple stepper motor and a pocket digital camera, a sequence of multi-projection 2D images of a premolar tooth is obtained. These images are then reconstructed into a 3D model, which is used to quantify the related dental erosion parameters. Quantification is based on the shrinkage of dental volume and on changes in surface properties caused by the erosion process. The quantification results are correlated with the amount of calcium released from the tooth, as measured by atomic absorption spectrometry. The proposed method would be useful as a visualization tool in engineering, dentistry, and medical research, as well as for educational purposes.
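    A minimal sketch of the erosion metric implied above: percentage volume loss between reconstructions made before and after acid exposure. The voxel counts and voxel size are illustrative stand-ins for the reconstructed 3D tooth models.

```python
# Illustrative voxel counts from hypothetical before/after 3D reconstructions.
voxel_volume_mm3 = 0.01
voxels_before = 152_000
voxels_after = 147_500

loss_percent = 100.0 * (voxels_before - voxels_after) / voxels_before
loss_mm3 = (voxels_before - voxels_after) * voxel_volume_mm3
print(f"volume loss = {loss_percent:.2f}% ({loss_mm3:.1f} mm^3)")
```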

  14. Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity

    Science.gov (United States)

    Zhang, Jihui; Xu, Junqin

    A supply chain is a special kind of complex network whose complexity and uncertainty make it very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain’s complexity and its efficiency, supply chain complexity management has become a major challenge of today’s business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a ‘Supply Chain Network Analysis’ (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and subsequently investigated by network analysis. The results of this study show that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.
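    A hedged, generic stand-in for the paper's entropy-style complexity index: build the network of goods flows and take the Shannon entropy of the normalized flow distribution. The abstract does not specify the fuzzy-entropy formulation itself, so this only illustrates the general idea.

```python
import math

# (source sector, destination sector): measured flow of goods (illustrative values)
flows = {
    ("supplier", "plant"): 120.0,
    ("plant", "warehouse"): 100.0,
    ("warehouse", "retail"): 80.0,
    ("supplier", "retail"): 10.0,
}

total = sum(flows.values())
entropy = -sum((f / total) * math.log2(f / total) for f in flows.values() if f > 0)
print(f"flow entropy = {entropy:.3f} bits (higher = flows spread more evenly across links)")
```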

  15. Study of the aging processes in polyurethane adhesives using thermal treatment and differential calorimetric, dielectric, and mechanical techniques ; 1, identifying the aging processes ; 2, quantifying the aging effect

    CERN Document Server

    Althouse, L P

    1979-01-01

    Study of the aging processes in polyurethane adhesives using thermal treatment and differential calorimetric, dielectric, and mechanical techniques ; 1, identifying the aging processes ; 2, quantifying the aging effect

  16. Using infrared thermography for understanding and quantifying soil surface processes

    Science.gov (United States)

    de Lima, João L. M. P.

    2017-04-01

    At present, our understanding of the soil hydrologic response is restricted by measurement limitations. In the literature, there have been repeated calls for interdisciplinary approaches to expand our knowledge in this field and eventually overcome the limitations inherent to the conventional measuring techniques used, for example, for tracing water at the basin, hillslope and even field or plot scales. Infrared thermography is a versatile, accurate and fast technique for monitoring surface temperature and has been used in a variety of fields, such as military surveillance, medical diagnosis, industrial process optimisation, building inspection and agriculture. However, many applications are still to be fully explored. In surface hydrology, it has been successfully employed as a non-invasive and non-destructive imaging tool with high spatial and temporal resolution, for example to assess groundwater discharges into waterbodies or to quantify the thermal heterogeneity of streams. It is believed that thermal infrared imagery can capture the spatial and temporal variability of many processes at the soil surface. Thermography interprets heat signals and provides an attractive means of identifying areas where water is flowing, has infiltrated preferentially, or has accumulated temporarily in depressions or macropores. Therefore, we hope to demonstrate the potential of thermal infrared imagery for making indirect quantitative estimates of several hydrologic processes. Applications include mapping infiltration, microrelief and macropores; estimating flow velocities; defining sampling strategies; and identifying water sources, water accumulation and connectivity. Protocols for the assessment of several hydrologic processes with the help of IR thermography are briefly explained, with examples from laboratory soil flumes and the field.

  17. A Signal Processing Method to Explore Similarity in Protein Flexibility

    Directory of Open Access Journals (Sweden)

    Simina Vasilache

    2010-01-01

    Full Text Available Understanding mechanisms of protein flexibility is of great importance to structural biology. The ability to detect similarities between proteins and their patterns is vital in discovering new information about unknown protein functions. A Distance Constraint Model (DCM) provides a means to generate a variety of flexibility measures based on a given protein structure. Although information about mechanical properties of flexibility is critical for understanding protein function for a given protein, the question of whether certain characteristics are shared across homologous proteins is difficult to assess. For a proper assessment, a quantified measure of similarity is necessary. This paper begins to explore image processing techniques to quantify similarities in signals and images that characterize protein flexibility. The dataset considered here consists of three different families of proteins, with three proteins in each family. The similarities and differences found within flexibility measures across homologous proteins do not align with sequence-based evolutionary methods.

  18. UV-vis spectra as an alternative to the Lowry method for quantifying hair damage induced by surfactants.

    Science.gov (United States)

    Pires-Oliveira, Rafael; Joekes, Inés

    2014-11-01

    It is well known that long-term use of shampoo causes damage to human hair. Although the Lowry method has been widely used to quantify hair damage, it is unsuitable in the presence of some surfactants, and no other method has been proposed in the literature. In this work, a different method is used to investigate and compare the hair damage induced by four types of surfactants (including three commercial-grade surfactants) and water. Hair samples were immersed in aqueous solutions of the surfactants under conditions that resemble a shower (38 °C, constant shaking). These solutions become colored with time of contact with the hair, and their UV-vis spectra were recorded. For comparison, the amounts of protein extracted from hair by sodium dodecyl sulfate (SDS) and by water were estimated by the Lowry method. Additionally, non-pigmented and pigmented hair, as well as sepia melanin, were used to understand the color of the washing solutions and their spectra. The results presented herein show that hair degradation is mostly caused by the extraction of proteins, cuticle fragments and melanin granules from the hair fiber. It was found that the intensity of the solution color varies with the charge density of the surfactant. Furthermore, the intensity of the solution color can be correlated with the amount of protein quantified by the Lowry method as well as with the degree of hair damage. The UV-vis spectrum of hair-washing solutions is a simple and straightforward way to quantify and compare hair damage induced by different commercial surfactants. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Enhanced computational methods for quantifying the effect of geographic and environmental isolation on genetic differentiation

    NARCIS (Netherlands)

    Botta, Filippo; Eriksen, Casper; Fontaine, Michael Christophe; Guillot, Gilles

    2015-01-01

    In a recent paper, Bradburd et al. (2013) proposed a model to quantify the relative effect of geographic and environmental distance on genetic differentiation. Here, we enhance this method in several ways. 1. We modify the covariance model so as to fit better with mainstream geostatistical models and

  20. Rapid solid-phase extraction method to quantify [11C]-verapamil, and its [11C]-metabolites, in human and macaque plasma

    International Nuclear Information System (INIS)

    Unadkat, Jashvant D.; Chung, Francisco; Sasongko, Lucy; Whittington, Dale; Eyal, Sara; Mankoff, David; Collier, Ann C.; Muzi, Mark; Link, Jeanne

    2008-01-01

    Introduction: P-glycoprotein (P-gp), an efflux transporter, is a significant barrier to drug entry into the brain and the fetus. The positron emission tomography (PET) ligand, [11C]-verapamil, has been used to measure in vivo P-gp activity at various tissue-blood barriers of humans and animals. Since verapamil is extensively metabolized in vivo, it is important to quantify the extent of verapamil metabolism in order to interpret such P-gp activity. Therefore, we developed a rapid solid-phase extraction (SPE) method to separate, and then quantify, verapamil and its radiolabeled metabolites in plasma. Methods: Using high-performance liquid chromatography (HPLC), we established that the major identifiable circulating radioactive metabolite of [11C]-verapamil in plasma of humans and the nonhuman primate, Macaca nemestrina, was [11C]-D-617/717. Using sequential and differential pH elution on C8 SPE cartridges, we developed a rapid method to separate [11C]-verapamil and [11C]-D-617/717. Recovery was measured by spiking the samples with the corresponding nonradioactive compounds and assaying these compounds by HPLC. Results: Verapamil and D-617/717 recovery with the SPE method was >85%. When the method was applied to PET studies in humans and nonhuman primates, significant plasma concentrations of D-617/717 and unknown polar metabolite(s) were observed. The SPE and the HPLC methods were not significantly different in the quantification of verapamil and D-617/717. Conclusions: The SPE method simultaneously processes multiple samples in less than 5 min. Given the short half-life of [11C], this method provides a valuable tool to rapidly determine the concentration of [11C]-verapamil and its [11C]-metabolites in human and nonhuman primate plasma

  1. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control

    Science.gov (United States)

    Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.

    2011-01-01

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study (comparing removal of viruses and bacterial indicators in MBR and conventional plants), it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small

  2. Method for quantifying the uncertainty with the extraction of the raw data of a gamma ray spectrum by deconvolution software

    International Nuclear Information System (INIS)

    Vigineix, Thomas; Guillot, Nicolas; Saurel, Nicolas

    2013-06-01

    Gamma ray spectrometry is a passive non-destructive assay most commonly used to identify and quantify the radionuclides present in complex, bulky objects such as nuclear waste packages. The treatment of spectra from the measurement of nuclear waste is done in two steps: the first step is to extract the raw data from the spectrum (peak energies and net photoelectric absorption peak areas), and the second step is to determine the detection efficiency of the measurement scene. Commercial software packages use different methods to extract the raw data from a spectrum, but none is optimal for spectra containing actinides. Each spectrum must be handled individually and requires settings and substantial feedback from the operator, which prevents automatic processing of spectra and increases the risk of human error. In this context, the Nuclear Measurement and Valuation Laboratory (LMNE) at the Atomic Energy Commission Valduc (CEA Valduc) has developed a new methodology for quantifying the uncertainty associated with the extraction of the raw data from a spectrum. This methodology was applied with raw data and commercial software that need configuration by the operator (GENIE2000, Interwinner...). This robust and fully automated uncertainty calculation covers the entire processing chain of the software. For all peaks processed by the deconvolution software, the methodology ensures that peak energies are extracted to within about 2 channels and net areas with an uncertainty of less than 5 percent. The methodology was tested experimentally with actinide spectra. (authors)

  3. Improved Methods for Production Manufacturing Processes in Environmentally Benign Manufacturing

    Directory of Open Access Journals (Sweden)

    Yan-Yan Wang

    2011-09-01

    Full Text Available How to design a production process with low carbon emissions and low environmental impact as well as high manufacturing performance is a key factor in the success of low-carbon production. It is important to address concerns about climate change for the large carbon emission source manufacturing industries because of their high energy consumption and environmental impact during the manufacturing stage of the production life cycle. In this paper, methodology for determining a production process is developed. This methodology integrates process determination from three different levels: new production processing, selected production processing and batch production processing. This approach is taken within a manufacturing enterprise based on prior research. The methodology is aimed at providing decision support for implementing Environmentally Benign Manufacturing (EBM) and low-carbon production to improve the environmental performance of the manufacturing industry. At the first level, a decision-making model for new production processes based on the Genetic Simulated Annealing Algorithm (GSAA) is presented. The decision-making model considers not only the traditional factors, such as time, quality and cost, but also energy and resource consumption and environmental impact, which are different from the traditional methods. At the second level, a methodology is developed based on an IPO (Input-Process-Output) model that integrates assessments of resource consumption and environmental impact in terms of a materials balance principle for batch production processes. At the third level, based on the above two levels, a method for determining production processes that focus on low-carbon production is developed based on case-based reasoning, expert systems and feature technology for designing the process flow of a new component. Through the above three levels, a method for determining the production process to identify, quantify, assess, and optimize the

  4. Neural basis for generalized quantifier comprehension.

    Science.gov (United States)

    McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray

    2005-01-01

    Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.

  5. Comparison of extraction methods for quantifying vitamin E from animal tissues.

    Science.gov (United States)

    Xu, Zhimin

    2008-12-01

    Four extraction methods: (1) solvent (SOL), (2) ultrasound-assisted solvent (UA), (3) saponification and solvent (SP), and (4) saponification and ultrasound-assisted solvent (SP-UA), were used in sample preparation for quantifying vitamin E (tocopherols) in chicken liver and plasma samples. The extraction yields of the SOL, UA, SP, and SP-UA methods, obtained by adding delta-tocopherol as an internal reference, were 95%, 104%, 65%, and 62% for liver and 98%, 103%, 97%, and 94% for plasma, respectively. The methods with saponification significantly affected the stability of tocopherols in liver samples. The measured values of alpha- and gamma-tocopherols using the solvent-only extraction (SOL) method were much lower than those using any of the other extraction methods, indicating that less of the tocopherols in those samples were in a form that could be extracted directly by solvent. The measured value of alpha-tocopherol in the liver sample using the ultrasound-assisted solvent (UA) method was 1.5-2.5 times that obtained from the saponification and solvent (SP) method. The differences in measured values of tocopherols in the plasma samples between the two methods were not significant. However, the measured value from the saponification and ultrasound-assisted solvent (SP-UA) method was lower than that from either the saponification and solvent (SP) or the ultrasound-assisted solvent (UA) method. Also, the reproducibility of the ultrasound-assisted solvent (UA) method was greater than that of any of the saponification methods. Compared with the traditional saponification method, the ultrasound-assisted solvent method could effectively extract tocopherols from the sample matrix without any chemical degradation reactions, especially for complex animal tissues such as liver.

  6. A single-run liquid chromatography mass spectrometry method to quantify neuroactive kynurenine pathway metabolites in rat plasma.

    Science.gov (United States)

    Orsatti, Laura; Speziale, Roberto; Orsale, Maria Vittoria; Caretti, Fulvia; Veneziano, Maria; Zini, Matteo; Monteagudo, Edith; Lyons, Kathryn; Beconi, Maria; Chan, Kelvin; Herbst, Todd; Toledo-Sherman, Leticia; Munoz-Sanjuan, Ignacio; Bonelli, Fabio; Dominguez, Celia

    2015-03-25

    Neuroactive metabolites in the kynurenine pathway of tryptophan catabolism are associated with neurodegenerative disorders. Tryptophan is transported across the blood-brain barrier and converted via the kynurenine pathway to N-formyl-L-kynurenine, which is further degraded to L-kynurenine. This metabolite can then generate a group of metabolites called kynurenines, most of which have neuroactive properties. The association of tryptophan catabolic pathway alterations with various central nervous system (CNS) pathologies has raised interest in analytical methods to accurately quantify kynurenines in body fluids. We here describe a rapid and sensitive reverse-phase HPLC-MS/MS method to quantify L-kynurenine (KYN), kynurenic acid (KYNA), 3-hydroxy-L-kynurenine (3HK) and anthranilic acid (AA) in rat plasma. Our goal was to quantify these metabolites in a single run; given their different physico-chemical properties, major efforts were devoted to developing a chromatographic separation suitable for all of them. The method involves plasma protein precipitation with acetonitrile, followed by C18 reversed-phase chromatographic separation and detection by electrospray mass spectrometry. The quantitation range was 0.098-100 ng/ml for 3HK, 9.8-20,000 ng/ml for KYN, and 0.49-1000 ng/ml for KYNA and AA. The method was linear (r>0.9963) and the validation parameters were within the acceptance range (calibration standards and QC accuracy within ±30%). Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Quantitative Risk Analysis: Method And Process

    Directory of Open Access Journals (Sweden)

    Anass BAYAGA

    2010-03-01

    Full Text Available Recent and past studies (King III report, 2009: 73-75; Stoney, 2007; Committee of Sponsoring Organisations (COSO), 2004; Bartell, 2003; Liebenberg and Hoyt, 2003; Reason, 2000; Markowitz, 1957) lament that, although the introduction of risk quantification to enhance the degree of objectivity in finance, for instance, developed quite in parallel with its development in the manufacturing industry, the same is not true of Higher Education Institutions (HEIs). In this regard, the objective of the paper was to demonstrate the methods and process of Quantitative Risk Analysis (QRA) through the likelihood of occurrence of risk (phase I). This paper serves as the first of a two-phase study, which sampled one hundred (100) risk analysts in a university in the greater Eastern Cape Province of South Africa. The analysis of the likelihood of occurrence of risk by logistic regression and percentages was conducted to investigate whether there was a significant difference between groups (analysts) in respect of QRA. The Hosmer and Lemeshow test was non-significant, with a chi-square (X2 = 8.181; p = 0.300), which indicated a good model fit, since the data did not deviate significantly from the model. The study concluded that to derive an overall likelihood rating indicating the probability that a potential risk may be exercised within the construct of an associated threat environment, the following governing factors must be considered: (1) threat source motivation and capability, (2) nature of the vulnerability, and (3) existence and effectiveness of current controls (methods and process).

  8. Analyzing complex networks evolution through Information Theory quantifiers

    International Nuclear Information System (INIS)

    Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martin Gomez

    2011-01-01

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
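    A minimal sketch of the dissimilarity measure named above: SciPy's jensenshannon returns the square root of the Jensen-Shannon divergence between two probability distributions (for example, degree distributions of the network at two stages of its evolution). The MPR Statistical Complexity is not reproduced here, and the distributions below are illustrative.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

p = np.array([0.10, 0.40, 0.30, 0.20])     # degree distribution, network state 1 (illustrative)
q = np.array([0.25, 0.25, 0.25, 0.25])     # degree distribution, network state 2

# jensenshannon returns sqrt(JSD); base=2 gives values in [0, 1].
print(f"sqrt(JSD) = {jensenshannon(p, q, base=2):.4f}")
```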

  9. Validity and reliability of the session-RPE method for quantifying training load in karate athletes.

    Science.gov (United States)

    Tabben, M; Tourny, C; Haddad, M; Chaabane, H; Chamari, K; Coquart, J B

    2015-04-24

    To test the construct validity and reliability of the session rating of perceived exertion (sRPE) method by examining the relationship between RPE and physiological parameters (heart rate, HR, and blood lactate concentration, [La-]) and the correlations between sRPE and two HR-based methods for quantifying internal training load (Banister's method and Edwards's method) during a karate training camp. Eighteen elite karate athletes: ten men (age: 24.2 ± 2.3 y, body mass: 71.2 ± 9.0 kg, body fat: 8.2 ± 1.3% and height: 178 ± 7 cm) and eight women (age: 22.6 ± 1.2 y, body mass: 59.8 ± 8.4 kg, body fat: 20.2 ± 4.4%, height: 169 ± 4 cm) were included in the study. During the training camp, subjects participated in eight karate training sessions covering three training modes (4 tactical-technical, 2 technical-development, and 2 randori training), during which RPE, HR, and [La-] were recorded. Significant correlations were found between RPE and the physiological parameters (percentage of maximal HR: r = 0.75, 95% CI = 0.64-0.86; [La-]: r = 0.62, 95% CI = 0.49-0.75) and between sRPE and the HR-based measures of internal training load (r = 0.65-0.95), together with good reliability of the same intensity across training sessions (Cronbach's α = 0.81, 95% CI = 0.61-0.92). This study demonstrates that the sRPE method is valid for quantifying internal training load and intensity in karate.
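    A hedged sketch of the two load measures compared above: session-RPE load (CR-10 RPE multiplied by session duration in minutes) and an Edwards-style heart-rate-zone load (time in five %HRmax zones weighted 1 to 5). The zone boundaries follow the common 50-100% HRmax convention and are not necessarily the paper's exact settings.

```python
def srpe_load(rpe_cr10: float, duration_min: float) -> float:
    """Session-RPE internal training load in arbitrary units (AU)."""
    return rpe_cr10 * duration_min

def edwards_load(minutes_in_zone) -> float:
    """minutes_in_zone[k] = time spent in zone k+1 (50-60% ... 90-100% HRmax), weighted 1-5."""
    return sum(weight * minutes for weight, minutes in zip(range(1, 6), minutes_in_zone))

print(srpe_load(6, 90))                     # e.g., RPE 6 for a 90-min randori session -> 540 AU
print(edwards_load([10, 20, 30, 20, 10]))   # arbitrary HR-zone distribution -> 270 AU
```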

  10. Quantifying camouflage: how to predict detectability from appearance.

    Science.gov (United States)

    Troscianko, Jolyon; Skelhorn, John; Stevens, Martin

    2017-01-06

    Quantifying the conspicuousness of objects against particular backgrounds is key to understanding the evolution and adaptive value of animal coloration, and to designing effective camouflage. Quantifying detectability can reveal how colour patterns affect survival, how animals' appearances influence habitat preferences, and how receiver visual systems work. Advances in calibrated digital imaging are enabling the capture of objective visual information, but it remains unclear which methods are best for measuring detectability. Numerous descriptions and models of appearance have been used to infer the detectability of animals, but these models are rarely empirically validated or directly compared to one another. We compared the performance of human 'predators' to a bank of contemporary methods for quantifying the appearance of camouflaged prey. Background matching was assessed using several established methods, including sophisticated feature-based pattern analysis, granularity approaches and a range of luminance and contrast difference measures. Disruptive coloration is a further camouflage strategy where high contrast patterns disrupt the prey's tell-tale outline, making it more difficult to detect. Disruptive camouflage has been studied intensely over the past decade, yet defining and measuring it have proven far more problematic. We assessed how well existing disruptive coloration measures predicted capture times. Additionally, we developed a new method for measuring edge disruption based on an understanding of sensory processing and the way in which false edges are thought to interfere with animal outlines. Our novel measure of disruptive coloration was the best predictor of capture times overall, highlighting the importance of false edges in concealment over and above pattern or luminance matching. The efficacy of our new method for measuring disruptive camouflage together with its biological plausibility and computational efficiency represents a substantial

  11. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
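    A hedged sketch of the spectral-slope estimation discussed above: compute a Lomb-Scargle periodogram of an irregularly sampled series and fit a power law P(f) ~ f^(-beta) by linear regression in log-log space. As the study itself notes, this simple estimator is biased under irregular sampling; the code only makes the procedure concrete, on synthetic white noise.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 1000, 400))        # irregular sampling times (days)
y = rng.normal(size=t.size)                   # white noise, so the true beta is ~0
y -= y.mean()

freqs = np.linspace(0.002, 0.2, 200)          # cycles per day
power = lombscargle(t, y, 2 * np.pi * freqs)  # lombscargle expects angular frequencies

# Spectral slope: log10(P) vs log10(f) regression; beta is minus the slope.
slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
print(f"estimated beta = {-slope:.2f}")
```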

  12. Quantifying Neonatal Sucking Performance: Promise of New Methods.

    Science.gov (United States)

    Capilouto, Gilson J; Cunningham, Tommy J; Mullineaux, David R; Tamilia, Eleonora; Papadelis, Christos; Giannone, Peter J

    2017-04-01

    Neonatal feeding has been traditionally understudied, so guidelines and evidence-based support for common feeding practices are limited. A major contributing factor to the paucity of evidence-based practice in this area has been the lack of simple-to-use, low-cost tools for monitoring sucking performance. We describe new methods for quantifying neonatal sucking performance that hold significant clinical and research promise. We present early results from an ongoing study investigating neonatal sucking as a marker of risk for adverse neurodevelopmental outcomes. We include quantitative measures of sucking performance to better understand how movement variability evolves during skill acquisition. Results showed that the coefficient of variation of suck duration was significantly different between preterm neonates at high risk for developmental concerns (HRPT) and preterm neonates at low risk for developmental concerns (LRPT). For HRPT, the coefficient of variation of suck smoothness increased from initial feeding to discharge and remained significantly greater than that of healthy full-term newborns (FT) at discharge. There was no significant difference in our measures between FT and LRPT at discharge. Our findings highlight the need to include neonatal sucking assessment as part of routine clinical care in order to capture the relative risk of adverse neurodevelopmental outcomes at discharge.

  13. Analyzing complex networks evolution through Information Theory quantifiers

    Energy Technology Data Exchange (ETDEWEB)

    Carpi, Laura C., E-mail: Laura.Carpi@studentmail.newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Rosso, Osvaldo A., E-mail: rosso@fisica.ufmg.b [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Chaos and Biology Group, Instituto de Calculo, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellon II, Ciudad Universitaria, 1428 Ciudad de Buenos Aires (Argentina); Saco, Patricia M., E-mail: Patricia.Saco@newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Hidraulica, Facultad de Ciencias Exactas, Ingenieria y Agrimensura, Universidad Nacional de Rosario, Avenida Pellegrini 250, Rosario (Argentina); Ravetti, Martin Gomez, E-mail: martin.ravetti@dep.ufmg.b [Departamento de Engenharia de Producao, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627, Belo Horizonte (31270-901), MG (Brazil)

    2011-01-24

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.

  14. Methods for monitoring corals and crustose coralline algae to quantify in-situ calcification rates

    Science.gov (United States)

    Morrison, Jennifer M.; Kuffner, Ilsa B.; Hickey, T. Don

    2013-01-01

    The potential effect of global climate change on calcifying marine organisms, such as scleractinian (reef-building) corals, is becoming increasingly evident. Understanding the process of coral calcification and establishing baseline calcification rates are necessary to detect future changes in growth resulting from climate change or other stressors. Here we describe the methods used to establish a network of calcification-monitoring stations along the outer Florida Keys Reef Tract in 2009. In addition to detailing the initial setup and periodic monitoring of calcification stations, we discuss the utility and success of our design and offer suggestions for future deployments. Stations were designed such that whole coral colonies were securely attached to fixed apparatuses (n = 10 at each site) on the seafloor but also could be easily removed and reattached as needed for periodic weighing. Corals were weighed every 6 months, using the buoyant weight technique, to determine calcification rates in situ. Sites were visited in May and November to obtain winter and summer rates, respectively, and identify seasonal patterns in calcification. Calcification rates of the crustose coralline algal community also were measured by affixing commercially available plastic tiles, deployed vertically, at each station. Colonization by invertebrates and fleshy algae on the tiles was low, indicating relative specificity for the crustose coralline algal community. We also describe a new, nonlethal technique for sampling the corals, used following the completion of the monitoring period, in which two slabs were obtained from the center of each colony. Sampled corals were reattached to the seafloor, and most corals had completely recovered within 6 months. The station design and sampling methods described herein provide an effective approach to assessing coral and crustose coralline algal calcification rates across time and space, offering the ability to quantify the potential effects of
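    A hedged sketch of the buoyant-weight arithmetic behind the monitoring described above: skeletal dry weight is recovered from buoyant weight using the densities of seawater and aragonite, and the calcification rate is the dry-weight change over the interval between weighings. The density values are typical literature figures, not values reported in this study.

```python
RHO_SEAWATER = 1.025    # g/cm^3; varies with temperature and salinity
RHO_ARAGONITE = 2.93    # g/cm^3; density of the coral skeleton mineral

def dry_weight(buoyant_weight_g: float) -> float:
    """Convert buoyant weight in seawater to skeletal dry weight."""
    return buoyant_weight_g / (1.0 - RHO_SEAWATER / RHO_ARAGONITE)

def calcification_rate(bw_start_g: float, bw_end_g: float, days: float) -> float:
    """Grams of CaCO3 deposited per day over the monitoring interval."""
    return (dry_weight(bw_end_g) - dry_weight(bw_start_g)) / days

print(f"{calcification_rate(120.0, 126.5, 182):.4f} g/day (illustrative weights, ~6-month interval)")
```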

  15. Quantifying IT estimation risks

    NARCIS (Netherlands)

    Kulk, G.P.; Peters, R.J.; Verhoef, C.

    2009-01-01

    A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be

  16. A method to quantify visual information processing in children using eye tracking

    NARCIS (Netherlands)

    M.J.G. Kooiker (Marlou); J.J.M. Pel (Johan); S.P. Van Der Steen-Kant; J. van der Steen (Hans)

    2016-01-01

    Visual problems that occur early in life can have major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children

  17. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Directory of Open Access Journals (Sweden)

    Q. Zhang

    2018-02-01

    Full Text Available River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among

  18. Pros and cons of analytical methods to quantify surrogate contaminants from the challenge test in recycled polyethylene terephthalate

    Energy Technology Data Exchange (ETDEWEB)

    Felix, Juliana S., E-mail: jfelix@unizar.es [Department of Analytical Chemistry, Aragon Institute of Engineering Research (I3A), CPS, University of Zaragoza, Torres Quevedo Bldg., Maria de Luna St. 3, E-50018 Zaragoza (Spain); Alfaro, Pilar, E-mail: palfarot@unizar.es [Department of Analytical Chemistry, Aragon Institute of Engineering Research (I3A), CPS, University of Zaragoza, Torres Quevedo Bldg., Maria de Luna St. 3, E-50018 Zaragoza (Spain); Nerin, Cristina, E-mail: cnerin@unizar.es [Department of Analytical Chemistry, Aragon Institute of Engineering Research (I3A), CPS, University of Zaragoza, Torres Quevedo Bldg., Maria de Luna St. 3, E-50018 Zaragoza (Spain)

    2011-02-14

    Different analytical methods were optimized and applied to quantify certain surrogate contaminants (toluene, chlorobenzene, phenol, limonene and benzophenone) in samples of contaminated and recycled flakes and virgin pellets of polyethylene terephthalate (PET) coming from the industrial challenge test. A screening analysis of the PET samples was carried out by direct solid-phase microextraction (SPME) in headspace mode (HS). The methods developed and used for quantitative analysis were a) total dissolution of PET samples in dichloroacetic acid and analysis by HS-SPME coupled to gas chromatography-mass spectrometry (GC-MS) and, b) dichloromethane extraction and analysis by GC-MS. The concentration of all surrogates in the contaminated PET flakes analyzed by HS-SPME method was lower than expected according to information provided by the supplier. Dichloroacetic acid interacted with the surrogates, resulting in a tremendous decrease of limonene concentration. The degradation compounds from limonene were identified. Dichloromethane extraction and GC-MS analysis evidenced the highest values of analytes in these PET samples. Based on the foregoing data, the efficiency of the recycling process was evaluated, whereby the removal of 99.9% of the surrogates proceeding from the contaminated flakes was confirmed.

  19. Pros and cons of analytical methods to quantify surrogate contaminants from the challenge test in recycled polyethylene terephthalate

    International Nuclear Information System (INIS)

    Felix, Juliana S.; Alfaro, Pilar; Nerin, Cristina

    2011-01-01

    Different analytical methods were optimized and applied to quantify certain surrogate contaminants (toluene, chlorobenzene, phenol, limonene and benzophenone) in samples of contaminated and recycled flakes and virgin pellets of polyethylene terephthalate (PET) coming from the industrial challenge test. A screening analysis of the PET samples was carried out by direct solid-phase microextraction (SPME) in headspace mode (HS). The methods developed and used for quantitative analysis were a) total dissolution of PET samples in dichloroacetic acid and analysis by HS-SPME coupled to gas chromatography-mass spectrometry (GC-MS) and, b) dichloromethane extraction and analysis by GC-MS. The concentration of all surrogates in the contaminated PET flakes analyzed by HS-SPME method was lower than expected according to information provided by the supplier. Dichloroacetic acid interacted with the surrogates, resulting in a tremendous decrease of limonene concentration. The degradation compounds from limonene were identified. Dichloromethane extraction and GC-MS analysis evidenced the highest values of analytes in these PET samples. Based on the foregoing data, the efficiency of the recycling process was evaluated, whereby the removal of 99.9% of the surrogates proceeding from the contaminated flakes was confirmed.

  20. Rapid solid-phase extraction method to quantify [¹¹C]-verapamil, and its [¹¹C]-metabolites, in human and macaque plasma

    Energy Technology Data Exchange (ETDEWEB)

    Unadkat, Jashvant D. [Department of Pharmaceutics, University of Washington, Box 357610, Seattle, WA 98195 (United States)], E-mail: jash@u.washington.edu; Chung, Francisco; Sasongko, Lucy; Whittington, Dale; Eyal, Sara [Department of Pharmaceutics, University of Washington, Box 357610, Seattle, WA 98195 (United States); Mankoff, David [Department of Radiology, University of Washington, Box 356004, Seattle, WA 98195 (United States); Collier, Ann C. [Department of Medicine, University of Washington, Box 359929, Seattle, WA 98195 (United States); Muzi, Mark; Link, Jeanne [Department of Radiology, University of Washington, Box 356004, Seattle, WA 98195 (United States)

    2008-11-15

    Introduction: P-glycoprotein (P-gp), an efflux transporter, is a significant barrier to drug entry into the brain and the fetus. The positron emission tomography (PET) ligand, [¹¹C]-verapamil, has been used to measure in vivo P-gp activity at various tissue-blood barriers of humans and animals. Since verapamil is extensively metabolized in vivo, it is important to quantify the extent of verapamil metabolism in order to interpret such P-gp activity. Therefore, we developed a rapid solid-phase extraction (SPE) method to separate, and then quantify, verapamil and its radiolabeled metabolites in plasma. Methods: Using high-performance liquid chromatography (HPLC), we established that the major identifiable circulating radioactive metabolite of [¹¹C]-verapamil in plasma of humans and the nonhuman primate, Macaca nemestrina, was [¹¹C]-D-617/717. Using sequential and differential pH elution on C₈ SPE cartridges, we developed a rapid method to separate [¹¹C]-verapamil and [¹¹C]-D-617/717. Recovery was measured by spiking the samples with the corresponding nonradioactive compounds and assaying these compounds by HPLC. Results: Verapamil and D-617/717 recovery with the SPE method was >85%. When the method was applied to PET studies in humans and nonhuman primates, significant plasma concentrations of D-617/717 and unknown polar metabolite(s) were observed. The SPE and the HPLC methods were not significantly different in the quantification of verapamil and D-617/717. Conclusions: The SPE method simultaneously processes multiple samples in less than 5 min. Given the short half-life of [¹¹C], this method provides a valuable tool to rapidly determine the concentration of [¹¹C]-verapamil and its [¹¹C]-metabolites in human and nonhuman primate plasma.

  1. Identification of wastewater treatment processes for nutrient removal on a full-scale WWTP by statistical methods

    DEFF Research Database (Denmark)

    Carstensen, Jakob; Madsen, Henrik; Poulsen, Niels Kjølstad

    1994-01-01

    The introduction of on-line sensors of nutrient salt concentrations on wastewater treatment plants opens a wide new area of modelling wastewater processes. Time series models of these processes are very useful for gaining insight in real time operation of wastewater treatment systems which deal ... of the processes, i.e. including prior knowledge, with the significant effects found in data by using statistical identification methods. Rates of the biochemical and hydraulic processes are identified by statistical methods and the related constants for the biochemical processes are estimated assuming Monod kinetics. The models only include those hydraulic and kinetic parameters, which have shown to be significant in a statistical sense, and hence they can be quantified. The application potential of these models is on-line control, because the present state of the plant is given by the variables of the models ...
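
    The Monod rate law mentioned above is simple enough to illustrate directly. The sketch below, with hypothetical substrate/rate data rather than plant measurements, shows how the two Monod constants could be estimated by nonlinear least squares.

```python
# Minimal sketch (assumed data, not the study's identification procedure) of
# estimating Monod-kinetics constants from observed process rates.
import numpy as np
from scipy.optimize import curve_fit

def monod(s, mu_max, k_s):
    """Monod rate law: rate = mu_max * S / (K_s + S)."""
    return mu_max * s / (k_s + s)

# Hypothetical substrate concentrations (mg/L) and measured removal rates (1/h).
substrate = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
rate      = np.array([0.09, 0.16, 0.24, 0.32, 0.38, 0.41, 0.43])

(mu_max, k_s), cov = curve_fit(monod, substrate, rate, p0=[0.5, 2.0])
print(f"mu_max = {mu_max:.3f} 1/h, K_s = {k_s:.2f} mg/L")
print("standard errors:", np.sqrt(np.diag(cov)))
```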

  2. Methods for quantifying T cell receptor binding affinities and thermodynamics

    Science.gov (United States)

    Piepenbrink, Kurt H.; Gloor, Brian E.; Armstrong, Kathryn M.; Baker, Brian M.

    2013-01-01

    αβ T cell receptors (TCRs) recognize peptide antigens bound and presented by class I or class II major histocompatibility complex (MHC) proteins. Recognition of a peptide/MHC complex is required for initiation and propagation of a cellular immune response, as well as the development and maintenance of the T cell repertoire. Here we discuss methods to quantify the affinities and thermodynamics of interactions between soluble ectodomains of TCRs and their peptide/MHC ligands, focusing on titration calorimetry, surface plasmon resonance, and fluorescence anisotropy. As TCRs typically bind ligand with weak-to-moderate affinities, we focus the discussion on means to enhance the accuracy and precision of low affinity measurements. In addition to further elucidating the biology of the T cell mediated immune response, more reliable low affinity measurements will aid with more probing studies with mutants or altered peptides that can help illuminate the physical underpinnings of how TCRs achieve their remarkable recognition properties. PMID:21609868
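
    Because the abstract centres on extracting weak-to-moderate affinities from titration data, a minimal curve-fitting sketch may help; the anisotropy values and concentrations below are invented for illustration, and the 1:1 isotherm is the simplest possible binding model, not the authors' full analysis.

```python
# Minimal sketch (hypothetical data) of extracting a dissociation constant from a
# fluorescence-anisotropy titration by fitting a simple 1:1 binding isotherm.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(conc, r_free, r_bound, kd):
    """1:1 binding with titrant in excess: fraction bound = [titrant] / (Kd + [titrant])."""
    frac = conc / (kd + conc)
    return r_free + (r_bound - r_free) * frac

# Hypothetical titrant (e.g. TCR) concentrations in uM against a labeled tracer.
conc = np.array([0.0, 1, 2, 5, 10, 20, 50, 100, 200], dtype=float)
anis = np.array([0.050, 0.058, 0.065, 0.080, 0.098, 0.120, 0.148, 0.162, 0.170])

(r_f, r_b, kd), _ = curve_fit(isotherm, conc, anis, p0=[0.05, 0.18, 20.0])
print(f"fitted Kd ~ {kd:.1f} uM (weak-to-moderate affinity, as typical for TCR/pMHC)")
```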

  3. Divided-evolution-based pulse scheme for quantifying exchange processes in proteins: powerful complement to relaxation dispersion experiments.

    Science.gov (United States)

    Bouvignies, Guillaume; Hansen, D Flemming; Vallurupalli, Pramodh; Kay, Lewis E

    2011-02-16

    A method for quantifying millisecond time scale exchange in proteins is presented based on scaling the rate of chemical exchange using a 2D (15)N, (1)H(N) experiment in which (15)N dwell times are separated by short spin-echo pulse trains. Unlike the popular Carr-Purcell-Meiboom-Gill (CPMG) experiment where the effects of a radio frequency field on measured transverse relaxation rates are quantified, the new approach measures peak positions in spectra that shift as the effective exchange time regime is varied. The utility of the method is established through an analysis of data recorded on an exchanging protein-ligand system for which the exchange parameters have been accurately determined using alternative approaches. Computations establish that a combined analysis of CPMG and peak shift profiles extends the time scale that can be studied to include exchanging systems with highly skewed populations and exchange rates as slow as 20 s(-1).

  4. Comparing methods to quantify experimental transmission of infectious agents

    NARCIS (Netherlands)

    Velthuis, A.G.J.; Jong, de M.C.M.; Bree, de J.

    2007-01-01

    Transmission of an infectious agent can be quantified from experimental data using the transient-state (TS) algorithm. The TS algorithm is based on the stochastic SIR model and provides a time-dependent probability distribution over the number of infected individuals during an epidemic, with no need
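
    The TS algorithm itself is not reproduced in the abstract, but the stochastic SIR process it rests on is easy to sketch. The following Gillespie-style simulation of one hypothetical transmission group (assumed rates and group size) generates the kind of data against which such estimators can be checked.

```python
# Minimal sketch (not the TS algorithm) of a stochastic SIR transmission experiment:
# a Gillespie-type simulation of one group with assumed transmission/recovery rates.
import numpy as np

def simulate_group(s0=5, i0=5, beta=0.8, gamma=0.4, n=10, rng=None):
    """Simulate one experimental group; return the number of contact infections."""
    rng = rng if rng is not None else np.random.default_rng()
    s, i, infections, t = s0, i0, 0, 0.0
    while i > 0 and s > 0:
        rate_inf = beta * s * i / n          # frequency-dependent transmission
        rate_rec = gamma * i                 # recovery/removal
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)    # time to the next event
        if rng.random() < rate_inf / total:
            s, i, infections = s - 1, i + 1, infections + 1
        else:
            i -= 1
    return infections

rng = np.random.default_rng(1)
runs = [simulate_group(rng=rng) for _ in range(1000)]
print("mean contact infections per group:", np.mean(runs))
```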

  5. Adaptation, validation and application of the chemo-thermal oxidation method to quantify black carbon in soils

    International Nuclear Information System (INIS)

    Agarwal, Tripti; Bucheli, Thomas D.

    2011-01-01

    The chemo-thermal oxidation method at 375 °C (CTO-375) has been widely used to quantify black carbon (BC) in sediments. In the present study, CTO-375 was tested and adapted for application to soil, accounting for some matrix specific properties like high organic carbon (≤39%) and carbonate (≤37%) content. Average recoveries of standard reference material SRM-2975 ranged from 25 to 86% for nine representative Swiss and Indian samples, which is similar to literature data for sediments. The adapted method was applied to selected samples of the Swiss soil monitoring network (NABO). BC content exhibited different patterns in three soil profiles while contribution of BC to TOC was found to be maximum below the topsoil at all three sites, however at different depths (60-130 cm). Six different NABO sites exhibited largely constant BC concentrations over the last 25 years, with short-term (6 months) prevailing over long-term (5 years) temporal fluctuations. - Research highlights: → The CTO-375 method was adapted and validated for BC analysis in soils. → Method validation figures of merit proved satisfactory. → Application is shown with soil cores and topsoil temporal variability. → BC content can be elevated in subsurface soils. → BC contents in surface soils were largely constant over the last 25 years. - Although widely used also for soils, the chemo-thermal oxidation method at 375 °C to quantify black carbon has never been properly validated for this matrix before.

  6. Selection of Prediction Methods for Thermophysical Properties for Process Modeling and Product Design of Biodiesel Manufacturing

    DEFF Research Database (Denmark)

    Su, Yung-Chieh; Liu, Y. A.; Díaz Tovar, Carlos Axel

    2011-01-01

    To optimize biodiesel manufacturing, many reported studies have built simulation models to quantify the relationship between operating conditions and process performance. For mass and energy balance simulations, it is essential to know the four fundamental thermophysical properties of the feed oil...... prediction methods on our group Web site (www.design.che.vt.edu) for the reader to download without charge....

  7. Quantifying induced effects of subsurface renewable energy storage

    Science.gov (United States)

    Bauer, Sebastian; Beyer, Christof; Pfeiffer, Tilmann; Boockmeyer, Anke; Popp, Steffi; Delfs, Jens-Olaf; Wang, Bo; Li, Dedong; Dethlefsen, Frank; Dahmke, Andreas

    2015-04-01

    New methods and technologies for energy storage are required for the transition to renewable energy sources. Subsurface energy storage systems such as salt caverns or porous formations offer the possibility of hosting large amounts of energy or substance. When employing these systems, an adequate system and process understanding is required in order to assess the feasibility of the individual storage option at the respective site and to predict the complex and interacting effects induced. This understanding is the basis for assessing the potential as well as the risks connected with a sustainable usage of these storage options, especially when considering possible mutual influences. For achieving this aim, in this work synthetic scenarios for the use of the geological underground as an energy storage system are developed and parameterized. The scenarios are designed to represent typical conditions in North Germany. The types of subsurface use investigated here include gas storage and heat storage in porous formations. The scenarios are numerically simulated and interpreted with regard to risk analysis and effect forecasting. For this, the numerical simulators Eclipse and OpenGeoSys are used. The latter is enhanced to include the required coupled hydraulic, thermal, geomechanical and geochemical processes. Using the simulated and interpreted scenarios, the induced effects are quantified individually and monitoring concepts for observing these effects are derived. This presentation will detail the general investigation concept used and analyze the parameter availability for this type of model applications. Then the process implementation and numerical methods required and applied for simulating the induced effects of subsurface storage are detailed and explained. Application examples show the developed methods and quantify induced effects and storage sizes for the typical settings parameterized. This work is part of the ANGUS+ project, funded by the German Ministry

  8. A new method to quantify fluidized bed agglomeration in the combustion of biomass fuels

    Energy Technology Data Exchange (ETDEWEB)

    Oehman, M. [Umeaa Univ. (Sweden). Dept. of Chemistry

    1997-12-31

    The present licentiate thesis is a summary and discussion of four papers, dealing with the development, evaluation and use of a new method to quantify bed agglomeration tendencies for biomass fuels. An increased utilization of biomass related fuels has many environmental benefits, but also requires careful studies of potential new problems associated with these fuels such as bed agglomeration/defluidization during combustion and gasification in fluidized beds. From a thorough literature survey, no suitable methods to determine bed agglomeration tendencies of different fuels, fuel combinations or fuels with additives appeared to be available. It therefore seemed of considerable interest to develop a new method for the quantification of fluidized bed agglomeration tendencies for different fuels. A bench scale fluidized bed reactor (5 kW), specially designed to obtain a homogeneous isothermal bed temperature, is used. The method is based on controlled increase of the bed temperature by applying external heat to the primary air and to the bed section walls. The initial agglomeration temperature is determined by on- or off-line principal component analysis of the variations in measured bed temperatures and differential pressures. Samples of ash and bed material for evaluation of agglomeration mechanisms may also be collected throughout the operation. To determine potential effects of all the process related variables on the determined fuel specific bed agglomeration temperature, an extensive sensitivity analysis was performed according to a statistical experimental design. The results showed that the process variables had only relatively small effects on the agglomeration temperature, which could be determined to 899 °C with a reproducibility of ±5 °C (STD). The inaccuracy was determined to be ±30 °C (STD). The method was also used to study the mechanism of both bed agglomeration using two biomass fuels and prevention of bed agglomeration by co

  9. Quantifying complexity in translational research: an integrated approach.

    Science.gov (United States)

    Munoz, David A; Nembhard, Harriet Black; Kraschnewski, Jennifer L

    2014-01-01

    The purpose of this paper is to quantify complexity in translational research. The impact of major operational steps and technical requirements is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. A three-phase integrated quality function deployment (QFD) and analytic hierarchy process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, the authors found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research.
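
    As an illustration of the AHP component mentioned above, the sketch below derives priority weights and the consistency ratio from a small pairwise-comparison matrix; the matrix values are hypothetical, not taken from the obesity case study.

```python
# Minimal sketch of an AHP step: priority weights from a pairwise comparison matrix
# and the consistency ratio used to guard against subjective bias. The 3x3 matrix
# below is a hypothetical example on the Saaty 1-9 scale.
import numpy as np

A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # priority vector

n = A.shape[0]
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)                # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
cr = ci / ri                                   # consistency ratio; < 0.10 is acceptable
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```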

  10. A method to quantify FRET stoichiometry with phasor plot analysis and acceptor lifetime ingrowth.

    Science.gov (United States)

    Chen, WeiYue; Avezov, Edward; Schlachter, Simon C; Gielen, Fabrice; Laine, Romain F; Harding, Heather P; Hollfelder, Florian; Ron, David; Kaminski, Clemens F

    2015-03-10

    FRET is widely used for the study of protein-protein interactions in biological samples. However, it is difficult to quantify both the FRET efficiency (E) and the affinity (Kd) of the molecular interaction from intermolecular FRET signals in samples of unknown stoichiometry. Here, we present a method for the simultaneous quantification of the complete set of interaction parameters, including fractions of bound donors and acceptors, local protein concentrations, and dissociation constants, in each image pixel. The method makes use of fluorescence lifetime information from both donor and acceptor molecules and takes advantage of the linear properties of the phasor plot approach. We demonstrate the capability of our method in vitro in a microfluidic device and also in cells, via the determination of the binding affinity between tagged versions of glutathione and glutathione S-transferase, and via the determination of competitor concentration. The potential of the method is explored with simulations. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
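
    A brief sketch of the phasor transform underlying the method may help: a mixture of two donor lifetimes maps onto the straight line joining the two pure-species phasors, which is the linearity the approach exploits. The lifetimes, time window, and modulation frequency below are assumed values, not the paper's experimental settings.

```python
# Minimal sketch of the phasor transform: a two-lifetime mixture lies on the line
# joining the pure-species phasors. Lifetimes and frequency are assumed values.
import numpy as np

def phasor(decay, t, omega):
    """Phasor coordinates (g, s) of a time-domain fluorescence decay."""
    g = np.sum(decay * np.cos(omega * t)) / np.sum(decay)
    s = np.sum(decay * np.sin(omega * t)) / np.sum(decay)
    return g, s

t = np.linspace(0.0, 50e-9, 5000)          # 50 ns window, uniform sampling
omega = 2.0 * np.pi * 80e6                 # 80 MHz modulation/repetition frequency
tau_free, tau_fret = 3.0e-9, 1.0e-9        # unquenched vs FRET-quenched donor lifetime

for frac_bound in (0.0, 0.5, 1.0):
    # each component is normalized to unit intensity, so frac_bound is an intensity fraction
    decay = ((1 - frac_bound) * np.exp(-t / tau_free) / tau_free
             + frac_bound * np.exp(-t / tau_fret) / tau_fret)
    g, s = phasor(decay, t, omega)
    print(f"bound fraction {frac_bound:.1f}: g = {g:.3f}, s = {s:.3f}")
```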

  11. Quantifying Spiral Ganglion Neurite and Schwann Behavior on Micropatterned Polymer Substrates.

    Science.gov (United States)

    Cheng, Elise L; Leigh, Braden; Guymon, C Allan; Hansen, Marlan R

    2016-01-01

    The first successful in vitro experiments on the cochlea were conducted in 1928 by Honor Fell (Fell, Arch Exp Zellforsch 7(1):69-81, 1928). Since then, techniques for culture of this tissue have been refined, and dissociated primary culture of the spiral ganglion has become a widely accepted in vitro model for studying nerve damage and regeneration in the cochlea. Additionally, patterned substrates have been developed that facilitate and direct neural outgrowth. A number of automated and semi-automated methods for quantifying this neurite outgrowth have been utilized in recent years (Zhang et al., J Neurosci Methods 160(1):149-162, 2007; Tapias et al., Neurobiol Dis 54:158-168, 2013). Here, we describe a method to study the effect of topographical cues on spiral ganglion neurite and Schwann cell alignment. We discuss our microfabrication process, characterization of pattern features, cell culture techniques for both spiral ganglion neurons and spiral ganglion Schwann cells. In addition, we describe protocols for reducing fibroblast count, immunocytochemistry, and methods for quantifying neurite and Schwann cell alignment.

  12. A method for quantifying and comparing the costs and benefits of alternative riparian zone buffer widths

    Science.gov (United States)

    Chris B. LeDoux; Ethel Wilkerson

    2008-01-01

    We developed a method that can be used to quantify the opportunity costs and ecological benefits of implementing alternative streamside management zones/buffer zone widths. The opportunity costs are computed based on the net value of the timber left behind in the buffer zone, the stump-to-mill logging costs for the logging technology that would have been used to...

  13. Nanoscale mechanical stimulation method for quantifying C. elegans mechanosensory behavior and memory

    OpenAIRE

    Kiso, Kaori; Sugi, Takuma; Okumura, Etsuko; Igarashi, Ryuji

    2016-01-01

    Here, we establish a novel economic system to quantify C. elegans mechanosensory behavior and memory by a controllable nanoscale mechanical stimulation. Using piezoelectric sheet speaker, we can flexibly change the vibration properties at a nanoscale displacement level and quantify behavioral responses and memory under the control of each vibration property. This system will facilitate understanding of physiological aspects of C. elegans mechanosensory behavior and memory.

  14. Methods for quantifying adipose tissue insulin resistance in overweight/obese humans.

    Science.gov (United States)

    Ter Horst, K W; van Galen, K A; Gilijamse, P W; Hartstra, A V; de Groot, P F; van der Valk, F M; Ackermans, M T; Nieuwdorp, M; Romijn, J A; Serlie, M J

    2017-08-01

    Insulin resistance of adipose tissue is an important feature of obesity-related metabolic disease. However, assessment of lipolysis in humans requires labor-intensive and expensive methods, and there is limited validation of simplified measurement methods. We aimed to validate simplified methods for the quantification of adipose tissue insulin resistance against the assessment of insulin sensitivity of lipolysis suppression during hyperinsulinemic-euglycemic clamp studies. We assessed the insulin-mediated suppression of lipolysis by tracer-dilution of [1,1,2,3,3-²H₅]glycerol during hyperinsulinemic-euglycemic clamp studies in 125 overweight or obese adults (85 men, 40 women; age 50±11 years; body mass index 38±7 kg m⁻²). Seven indices of adipose tissue insulin resistance were validated against the reference measurement method. Low-dose insulin infusion resulted in suppression of the glycerol rate of appearance ranging from 4% (most resistant) to 85% (most sensitive), indicating a good range of adipose tissue insulin sensitivity in the study population. The reference method correlated with the insulin-mediated suppression of plasma glycerol concentrations (r=0.960), the Adipose tissue Insulin Resistance (Adipo-IR) index (fasting plasma insulin-NEFA product; r=-0.526), the fasting plasma insulin-glycerol product (r=-0.467), the Insulin Resistance Index based on the fasting plasma insulin-basal lipolysis product (r=0.460), and the Quantitative Insulin Sensitivity Check Index (QUICKI)-NEFA index (r=0.621). The indices were able to detect adipose tissue insulin resistance (area under the curve ⩾0.801). Adipose tissue insulin sensitivity (that is, the antilipolytic action of insulin) can be reliably quantified in overweight and obese humans by simplified index methods. The sensitivity and specificity of the Adipo-IR index and the fasting plasma insulin-glycerol product, combined with their simplicity and acceptable agreement, suggest that these may be most useful in clinical practice.
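
    The simplest of the indices above are plain products of fasting measurements, as the short sketch below illustrates with invented values; the units follow common convention and are not taken from the study.

```python
# Minimal sketch of two simplified adipose tissue insulin resistance indices,
# computed from hypothetical fasting measurements (values are illustrative only).
def adipo_ir(insulin_pmol_l: float, nefa_mmol_l: float) -> float:
    """Adipo-IR index: fasting plasma insulin x fasting plasma NEFA."""
    return insulin_pmol_l * nefa_mmol_l

def insulin_glycerol_product(insulin_pmol_l: float, glycerol_umol_l: float) -> float:
    """Fasting plasma insulin x fasting plasma glycerol, the other simple index discussed."""
    return insulin_pmol_l * glycerol_umol_l

print(adipo_ir(insulin_pmol_l=90.0, nefa_mmol_l=0.65))                    # 58.5
print(insulin_glycerol_product(insulin_pmol_l=90.0, glycerol_umol_l=75.0))  # 6750.0
```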

  15. High-performance liquid chromatography analysis methods developed for quantifying enzymatic esterification of flavonoids in ionic liquids

    DEFF Research Database (Denmark)

    Lue, Bena-Marie; Guo, Zheng; Xu, X.B.

    2008-01-01

    Methods using reversed-phase high-performance liquid chromatography (RP-HPLC) with ELSD were investigated to quantify enzymatic reactions of flavonoids with fatty acids in the presence of diverse room temperature ionic liquids (RTILs). A buffered salt (preferably triethylamine-acetate) was found essential for separation of flavonoids from strongly polar RTILs, whereby RTILs were generally visible as two major peaks identified based on an ion-pairing/exchanging hypothesis. C8 and C12 stationary phases were optimal while mobile phase pH (3-7) had only a minor influence on separation. The method developed was successfully applied for primary screening of RTILs (>20), with in depth evaluation of substrates in 10 RTILs, for their evaluation as reaction media.

  16. Methods to quantify the impacts of water erosion on productivity of tropical soils

    International Nuclear Information System (INIS)

    Obando, Franco H

    2000-01-01

    A review on methods to quantify the impacts of water erosion on soil properties and crop yield is presented. On the basis of results of soil losses through plastic shading meshes on oxisols in the eastern plains of Colombia, the experimental design to quantify erosion-induced losses in soil productivity suggested by Stocking (1985) for tropical soils is modified. With the purpose of producing contrasting levels of natural erosion, simple 33% and 45% shading-rate meshes, and superposed 33% and 45% meshes, were used. These were stretched out on Stocking 5 m × 10 m run-off plots at 40 cm height above the soil surface. Annual soil losses produced under the above-mentioned shading mesh treatments did not present significant differences. It was demonstrated that 33%, 45%, as well as superposed 33% and 45% meshes produce an equivalent surface cover, CVE, greater than 90%, comparable to that produced by a zero-grazing Brachiaria decumbens pasture. Such results allowed presenting modifications to the Stocking design. It is recommended to use alternating strips of bare soil and shading meshes of different widths to produce contrasting levels of equivalent soil surface cover and consequently contrasting erosion rates. The design of the modified Stocking run-off plots, including collecting channels, collecting tanks and a Geib multibox divisor, is presented

  17. Time-to-event analysis as a framework for quantifying fish passage performance: Chapter 9.1

    Science.gov (United States)

    Castro-Santos, Theodore R.; Perry, Russell W.; Adams, Noah S.; Beeman, John W.; Eiler, John H.

    2012-01-01

    Fish passage is the result of a sequence of processes, whereby fish must approach, enter, and pass a structure. Each of these processes takes time, and fishway performance is best quantified in terms of the rates at which each process is completed. Optimal performance is achieved by maximizing the rates of approach, entry, and passage through safe and desirable routes. Sometimes, however, it is necessary to reduce rates of passage through less desirable routes in order to increase proportions passing through the preferred route. Effectiveness of operational or structural modifications for achieving either of these goals is best quantified by applying time-to-event analysis, commonly known as survival analysis methods, to telemetry data. This set of techniques allows for accurate estimation of passage rates and covariate effects on those rates. Importantly, it allows researchers to quantify rates that vary over time, as well as the effects of covariates that also vary over time. Finally, these methods are able to control for competing risks, i.e., the presence of alternate passage routes, failure to pass, or other fates that remove fish from the pool of candidates available to pass through a particular route. In this chapter, we present a model simulation of telemetered fish passing a hydroelectric dam, and provide step-by-step guidance and rationales for performing time-to-event analysis on the resulting data. We demonstrate how this approach removes bias from performance estimates that can result from using methods that focus only on proportions passing each route. Time-to-event analysis, coupled with multinomial models for measuring survival, provides a comprehensive set of techniques for quantifying fish passage, and a framework from which performance among different sites can be better understood.
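
    As a minimal illustration of the cause-specific framing described above, the sketch below estimates passage through a preferred route from hypothetical telemetry records, treating all other fates (alternate routes, failure to pass) as censoring; it is a hand-rolled Kaplan–Meier/Nelson–Aalen calculation, not the chapter's worked simulation.

```python
# Minimal sketch (hypothetical telemetry data) of cause-specific time-to-event
# estimation: fish with other fates are censored when estimating passage through
# the preferred route.
import numpy as np

# time to event (hours) and fate: 1 = passed preferred route, 0 = other fate/censored
time  = np.array([2.0, 3.5, 4.0, 6.0, 6.5, 8.0, 9.0, 12.0, 15.0, 24.0])
event = np.array([1,   0,   1,   1,   0,   1,   0,   1,    0,    0])

order = np.argsort(time)
time, event = time[order], event[order]
at_risk = np.arange(len(time), 0, -1)              # fish still available to pass

# Kaplan-Meier estimate of "not yet passed via the preferred route"
survival = np.cumprod(1.0 - event / at_risk)
# Nelson-Aalen estimate of the cumulative cause-specific passage hazard
cum_hazard = np.cumsum(event / at_risk)

for t, s, h in zip(time, survival, cum_hazard):
    print(f"t = {t:5.1f} h   S(t) = {s:.2f}   cumulative passage hazard = {h:.2f}")
```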

  18. Quantifying Evaporation and Evaluating Runoff Estimation Methods in a Permeable Pavement System - abstract

    Science.gov (United States)

    Studies on quantifying evaporation in permeable pavement systems are limited to few laboratory studies that used a scale to weigh evaporative losses and a field application with a tunnel-evaporation gauge. A primary objective of this research was to quantify evaporation for a la...

  19. Quantifying the sources of variability in equine faecal egg counts: implications for improving the utility of the method.

    Science.gov (United States)

    Denwood, M J; Love, S; Innocent, G T; Matthews, L; McKendrick, I J; Hillary, N; Smith, A; Reid, S W J

    2012-08-13

    The faecal egg count (FEC) is the most widely used means of quantifying the nematode burden of horses, and is frequently used in clinical practice to inform treatment and prevention. The statistical process underlying the FEC is complex, comprising a Poisson counting error process for each sample, compounded with an underlying continuous distribution of means between samples. Being able to quantify the sources of variability contributing to this distribution of means is a necessary step towards providing estimates of statistical power for future FEC and FECRT studies, and may help to improve the usefulness of the FEC technique by identifying and minimising unwanted sources of variability. Obtaining such estimates require a hierarchical statistical model coupled with repeated FEC observations from a single animal over a short period of time. Here, we use this approach to provide the first comparative estimate of multiple sources of within-horse FEC variability. The results demonstrate that a substantial proportion of the observed variation in FEC between horses occurs as a result of variation in FEC within an animal, with the major sources being aggregation of eggs within faeces and variation in egg concentration between faecal piles. The McMaster procedure itself is associated with a comparatively small coefficient of variation, and is therefore highly repeatable when a sufficiently large number of eggs are observed to reduce the error associated with the counting process. We conclude that the variation between samples taken from the same animal is substantial, but can be reduced through the use of larger homogenised faecal samples. Estimates are provided for the coefficient of variation (cv) associated with each within animal source of variability in observed FEC, allowing the usefulness of individual FEC to be quantified, and providing a basis for future FEC and FECRT studies. Copyright © 2012 Elsevier B.V. All rights reserved.
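
    A small simulation makes the compound error structure described above tangible: a continuous distribution of sample means within one horse, compounded with Poisson counting error from the McMaster chamber. All parameter values below are assumptions chosen for illustration, not estimates from the paper.

```python
# Minimal sketch of the compound FEC error process: gamma-distributed sample means
# within a horse, plus Poisson counting error scaled by the McMaster multiplier.
import numpy as np

rng = np.random.default_rng(0)
true_mean_epg = 300.0       # assumed true eggs-per-gram for one horse
cv_within = 0.5             # assumed variation between faecal samples/piles

for multiplier in (50, 25, 10):   # McMaster multiplication factor (eggs counted = EPG / multiplier)
    sample_means = rng.gamma(shape=1.0 / cv_within**2,
                             scale=true_mean_epg * cv_within**2,
                             size=20000)                       # per-sample true means
    counts = rng.poisson(sample_means / multiplier)            # eggs actually counted
    fec = counts * multiplier                                  # reported EPG
    print(f"multiplier {multiplier:2d}: observed cv = {fec.std() / fec.mean():.2f}")
```

    Counting more eggs (a smaller multiplier, i.e. a larger homogenised sample) shrinks the Poisson component, consistent with the paper's recommendation.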

  20. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Brandon C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Becker, Andrew C. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Sobolewska, Malgosia [Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716, Warsaw (Poland); Siemiginowska, Aneta [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Uttley, Phil [Astronomical Institute Anton Pannekoek, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands)

    2014-06-10

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.

  1. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia; Siemiginowska, Aneta; Uttley, Phil

    2014-01-01

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
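
    The CARMA power spectrum itself is compact enough to evaluate directly. The sketch below implements the standard CARMA PSD expression for an assumed CARMA(2,1) process; the coefficients are arbitrary stationary choices, not fits to any of the light curves mentioned.

```python
# Minimal sketch: evaluate the power spectral density of an assumed CARMA(2,1)
# process, P(f) = sigma^2 |sum_j beta_j (2*pi*i*f)^j|^2 / |sum_k alpha_k (2*pi*i*f)^k|^2.
import numpy as np

def carma_psd(freq, alpha, beta, sigma=1.0):
    """alpha = [alpha_0, ..., alpha_p] (alpha_p = 1), beta = [beta_0, ..., beta_q]."""
    w = 2j * np.pi * freq
    num = np.abs(np.polyval(beta[::-1], w)) ** 2   # np.polyval wants highest order first
    den = np.abs(np.polyval(alpha[::-1], w)) ** 2
    return sigma**2 * num / den

freq = np.logspace(-3, 1, 500)
alpha = [0.2, 1.1, 1.0]      # roots of s^2 + 1.1 s + 0.2 lie in the left half-plane
beta = [1.0, 0.5]
psd = carma_psd(freq, alpha, beta)
print(psd[:3])               # flat at low frequency, bending to a steeper slope at high f
```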

  2. A portable and inexpensive method for quantifying ambient intermediate volatility organic compounds

    Science.gov (United States)

    Bouvier-Brown, Nicole C.; Carrasco, Erica; Karz, James; Chang, Kylee; Nguyen, Theodore; Ruiz, Daniel; Okonta, Vivian; Gilman, Jessica B.; Kuster, William C.; de Gouw, Joost A.

    2014-09-01

    Volatile organic compounds (VOCs) and intermediate volatility VOCs (IVOCs) are gas-phase organic compounds which may participate in chemical reactions affecting air quality and climate. The development of an inexpensive, field-portable quantification method for higher molecular weight VOCs and IVOCs utilizing commercially available components could be used as a tool to survey aerosol precursors or identify and monitor air quality in various communities. We characterized the performance characteristics for the HayeSep-Q adsorbent with a representative selection of anthropogenic and biogenic VOC standards and optimized experimental conditions and procedures for field collections followed by laboratory analysis. All VOCs were analyzed using gas chromatography coupled with mass spectrometry. Precision (average 22%) and accuracy were reasonable and the limit of detection ranged from 10 to 80 pmol/mol (ppt) for the studied compounds. The method was employed at the Los Angeles site during the CalNex campaign in summer 2010 and ambient mixing ratios agreed well (slope 0.69-1.06, R2 0.67-0.71) with measurements made using an in-situ GC-MS - a distinctly different sampling and quantification method. This new technique can be applied to quantify ambient biogenic and anthropogenic C8-C15 VOCs and IVOCs.

  3. Evaluation of the quality consistency of powdered poppy capsule extractive by an averagely linear-quantified fingerprint method in combination with antioxidant activities and two compounds analyses.

    Science.gov (United States)

    Zhang, Yujing; Sun, Guoxiang; Hou, Zhifei; Yan, Bo; Zhang, Jing

    2017-12-01

    A novel averagely linear-quantified fingerprint method was proposed and successfully applied to monitor the quality consistency of alkaloids in powdered poppy capsule extractive. Averagely linear-quantified fingerprint method provided accurate qualitative and quantitative similarities for chromatographic fingerprints of Chinese herbal medicines. The stability and operability of the averagely linear-quantified fingerprint method were verified by the parameter r. The average linear qualitative similarity SL (improved based on conventional qualitative "Similarity") was used as a qualitative criterion in the averagely linear-quantified fingerprint method, and the average linear quantitative similarity PL was introduced as a quantitative one. PL was able to identify the difference in the content of all the chemical components. In addition, PL was found to be highly correlated to the contents of two alkaloid compounds (morphine and codeine). A simple flow injection analysis was developed for the determination of antioxidant capacity in Chinese Herbal Medicines, which was based on the scavenging of 2,2-diphenyl-1-picrylhydrazyl radical by antioxidants. The fingerprint-efficacy relationship linking chromatographic fingerprints and antioxidant activities was investigated utilizing orthogonal projection to latent structures method, which provided important pharmacodynamic information for Chinese herbal medicines quality control. In summary, quantitative fingerprinting based on averagely linear-quantified fingerprint method can be applied for monitoring the quality consistency of Chinese herbal medicines, and the constructed orthogonal projection to latent structures model is particularly suitable for investigating the fingerprint-efficacy relationship. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Enhanced computational methods for quantifying the effect of geographic and environmental isolation on genetic differentiation

    DEFF Research Database (Denmark)

    Botta, Filippo; Eriksen, Casper; Fontaine, Michaël C.

    2015-01-01

    1. In a recent paper, Bradburd et al. (Evolution, 67, 2013, 3258) proposed a model to quantify the relative effect of geographic and environmental distance on genetic differentiation. Here, we enhance this method in several ways. 2. We modify the covariance model so as to fit better with mainstre...... available as an R package called sunder. It takes as input georeferenced allele counts at the individual or population level for co-dominant markers. Program homepage: http://www2.imm.dtu.dk/~gigu/Sunder/....

  5. Simple mathematical method to quantify p53 mutations in occupational lung cancer

    International Nuclear Information System (INIS)

    Helal, N.L.

    2005-01-01

    Radon-222, a decay product of uranium-238 and a source of high linear energy transfer (LET) alpha-particles, has been implicated in the increased risk of lung cancer in uranium miners as well as non-miners. The p53 gene mutational spectrum reveals evidence for a direct causal effect of radon inhalation in lung cancer. This mutation has been proposed as a marker of radon exposure. The development of such markers may ultimately be of benefit in the reduction of morbidity and mortality from occupational cancer. One of the tasks in risk assessment of genotoxic occupational radiation exposure is to devise a simple numerical method. This method may be used to quantify the relationship between radiation dose and the effect on the genetic sequences. The tumor suppressor gene (TSG) p53 is an ideal biomarker addressing questions of exposure and risk. These proteins may be suitable for the design of more effective or less invasive cancer therapies. The clinical outcome of lung cancer patients may correlate with the normal regulation of these patients and, therefore, their identification may be used as a guideline for future therapy modalities. To investigate the association between radon exposure and p53 mutations in lung tumors, we have applied a mathematical method. This method has been developed from a 2-D graphical representational technique that enables easy visualization of base distributions. This is of special relevance to libraries of single nucleotide polymorphic (SNP) genes

  6. Quantifying the Effects of Biofilm on the Hydraulic Properties of Unsaturated Soils

    Science.gov (United States)

    Volk, E.; Iden, S.; Furman, A.; Durner, W.; Rosenzweig, R.

    2017-12-01

    Quantifying the effects of biofilms on hydraulic properties of unsaturated soils is necessary for predicting water and solute flow in soil with extensive microbial presence. This can be relevant to bioremediation processes, soil aquifer treatment and effluent irrigation. Previous works showed a reduction in the hydraulic conductivity and an increase in water content due to the addition of biofilm analogue materials. The objective of this research is to quantify soil hydraulic properties of unsaturated soil (water retention and hydraulic conductivity) using real soil biofilm. In this work, Hamra soil was incubated with Luria Broth (LB) and biofilm-producing bacteria (Pseudomonas Putida F1). Hydraulic conductivity and water retention were measured by the evaporation method, Dewpoint method and a constant head permeameter. Biofilm was quantified using viable counts and the deficit of TOC. The results show that the presence of biofilms increases soil retention in the 'dry' range of the curve and reduces the hydraulic conductivity. This research shows that biofilms may have a non-negligible effect on flow and transport in unsaturated soils. These findings contribute to modeling water flow in biofilm amended soil.

  7. Biogenic amine profile in unripe Arabica coffee beans processed according to dry and wet methods.

    Science.gov (United States)

    Dias, Eduardo C; Pereira, Rosemary G F A; Borém, Flávio M; Mendes, Eulália; de Lima, Renato R; Fernandes, José O; Casal, Susana

    2012-04-25

    Immature coffee fruit processing contributes to a high amount of defective beans, which determines a significant amount of low-quality coffee sold in the Brazilian internal market. Unripe bean processing was tested, taking the levels of bioactive amines as criteria for evaluating the extent of fermentation and establishing the differences between processing methods. The beans were processed by the dry method after being mechanically depulped immediately after harvest or after a 12 h resting period in a dry pile or immersed in water. Seven bioactive amines were quantified: putrescine, spermine, spermidine, serotonin, cadaverine, histamine, and tyramine, with global amounts ranging from 71.8 to 80.3 mg/kg. The levels of spermine and spermidine were lower in the unripe depulped coffee than in the natural coffee. The specific conditions of dry and wet processing also influenced cadaverine levels, and histamine was reduced in unripe depulped coffee. A resting period of 12 h does not induce significant alteration on the beans and can be improved if performed in water. These results confirm that peeling immature coffee can decrease fermentation processes while providing more uniform drying, thus reducing the number of defects and potentially increasing beverage quality.

  8. The Inconvenient Truth of Fresh Sediment: Insights from a New Method for Quantifying Subsidence in the Mississippi Delta

    Science.gov (United States)

    Chamberlain, E. L.; Shen, Z.; Tornqvist, T. E.; Kim, W.

    2017-12-01

    Knowing the rates and drivers of subsidence in deltas is essential to coastal management. There is a growing consensus that relatively shallow processes such as compaction and artificial drainage are primary contributors to subsidence, although deeper processes such as faulting may be locally important. Here we present a new method to quantify subsidence of a 6000 km2 relict bayhead delta of the Mississippi Delta, using the depth of the mouthbar-overbank stratigraphic boundary that formed near the low tide level in combination with OSL chronology. The contributions of isostatic processes are removed by subtracting a relative sea-level rise term previously obtained from basal peat. We find that displacement rates of the boundary, averaged over 750 to 1500 years, are on the order of a few mm/yr. Cumulative displacement is strongly correlated to overburden thickness, decreases coastward coincident with thinning of the bayhead delta deposit, and appears unrelated to the thickness of underlying Holocene strata or the occurrence of previously mapped faults. This supports compaction of shallow strata as a dominant driver of subsidence in the Mississippi Delta. We find that at least 50% of elevation gained through overbank deposition is ultimately lost to subsidence, significantly greater than the 35% loss previously estimated for inland localities underlain by peat. Our results demonstrate that bayhead deltas are especially vulnerable to subsidence. This finding has major relevance to coastal restoration in the Mississippi Delta through engineered river-sediment diversions. While inactive regions of the delta may be fairly stable if not perturbed by humans, the introduction of fresh sediment to the delta plain will inevitably accelerate subsidence. Values obtained with our method will be applied to a delta growth model that predicts the land-building potential of river-sediment diversions discharging into open bays under realistic scenarios of load-driven subsidence.

  9. Surface defect detection in tiling Industries using digital image processing methods: analysis and evaluation.

    Science.gov (United States)

    Karimi, Mohammad H; Asemani, Davud

    2014-05-01

    Ceramic and tile industries should indispensably include a grading stage to quantify the quality of products. In practice, grading is often carried out by human inspectors. An automatic grading system is essential to enhance the quality control and marketing of the products. Since there generally exist six different types of defects originating from various stages of tile manufacturing lines with distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey has been made on the pattern recognition and image processing algorithms which have been used to detect surface defects. Each method appears to be limited to detecting some subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction and texture/image classification. Methods such as the wavelet transform, filtering, morphology and contourlet transform are more effective for pre-processing tasks. Others including statistical methods, neural networks and model-based algorithms can be applied to extract the surface defects. Although statistical methods are often appropriate for identification of large defects such as Spots, techniques such as wavelet processing provide an acceptable response for detection of small defects such as Pinhole. A thorough survey is made in this paper on the existing algorithms in each subgroup. Also, the evaluation parameters are discussed including supervised and unsupervised parameters. Using various performance parameters, different defect detection algorithms are compared and evaluated. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
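
    As a minimal example of the statistical-thresholding-plus-morphology family the survey describes for large defects such as spots, the sketch below runs on a synthetic tile image; the threshold and structuring element are arbitrary choices, not an industrial configuration.

```python
# Minimal sketch (synthetic image, not an industrial pipeline): statistical
# thresholding followed by morphological cleanup and labelling of defect regions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
tile = rng.normal(loc=128.0, scale=5.0, size=(256, 256))   # plain tile texture
tile[100:112, 60:72] -= 40.0                               # dark "spot" defect

# Statistical threshold: pixels deviating more than k sigma from the tile mean.
k = 4.0
defect_mask = np.abs(tile - tile.mean()) > k * tile.std()

# Morphological opening removes isolated noise pixels before labelling.
defect_mask = ndimage.binary_opening(defect_mask, structure=np.ones((3, 3)))
labels, n_defects = ndimage.label(defect_mask)
sizes = ndimage.sum(defect_mask, labels, index=range(1, n_defects + 1))
print(f"{n_defects} defect region(s), sizes in pixels: {sizes}")
```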

  10. Tracking and Quantifying Developmental Processes in C. elegans Using Open-source Tools.

    Science.gov (United States)

    Dutta, Priyanka; Lehmann, Christina; Odedra, Devang; Singh, Deepika; Pohl, Christian

    2015-12-16

    Quantitatively capturing developmental processes is crucial to derive mechanistic models and key to identify and describe mutant phenotypes. Here protocols are presented for preparing embryos and adult C. elegans animals for short- and long-term time-lapse microscopy and methods for tracking and quantification of developmental processes. The methods presented are all based on C. elegans strains available from the Caenorhabditis Genetics Center and on open-source software that can be easily implemented in any laboratory independently of the microscopy system used. A reconstruction of a 3D cell-shape model using the modelling software IMOD, manual tracking of fluorescently-labeled subcellular structures using the multi-purpose image analysis program Endrov, and an analysis of cortical contractile flow using PIVlab (Time-Resolved Digital Particle Image Velocimetry Tool for MATLAB) are shown. It is discussed how these methods can also be deployed to quantitatively capture other developmental processes in different models, e.g., cell tracking and lineage tracing, tracking of vesicle flow.

  11. Quantifying the Relative Contributions of Forest Change and Climatic Variability to Hydrology in Large Watersheds: A Critical Review of Research Methods

    Directory of Open Access Journals (Sweden)

    Xiaohua Wei

    2013-06-01

    Full Text Available Forest change and climatic variability are two major drivers for influencing change in watershed hydrology in forest–dominated watersheds. Quantifying their relative contributions is important to fully understand their individual effects. This review paper summarizes the progress on quantifying the relative contributions of forest or land cover change and climatic variability to hydrology in large watersheds using available case studies. It compared pros and cons of various research methods, identified research challenges and proposed future research priorities. Our synthesis shows that the relative hydrological effects of forest changes and climatic variability are largely dependent on their own change magnitudes and watershed characteristics. In some severely disturbed watersheds, impacts of forest changes or land use changes can be as important as those from climatic variability. This paper provides a brief review on eight selected research methods for this type of research. Because each method or technique has its own strengths and weaknesses, combining two or more methods is a more robust approach than using any single method alone. Future research priorities include conducting more case studies, refining research methods, and considering mechanism-based research using landscape ecology and geochemistry approaches.

  12. Technical note: a method to quantify prolamin proteins in corn that are negatively related to starch digestibility in ruminants.

    Science.gov (United States)

    Larson, J; Hoffman, P C

    2008-12-01

    Compared with floury or high-moisture corns, dry corn with a greater percentage of vitreous endosperm has been demonstrated to be negatively related to starch digestibility and milk yield of lactating dairy cows. Starch granules in corn are encapsulated by hydrophobic prolamin proteins that are innately insoluble in the rumen environment. Corn prolamin proteins are named zein, and laboratory methods to quantify zein exist but are seldom employed in ruminant nutrition because of their arduous nature. In this study, advances in cereal chemistry were combined with rapid turbidimetric methods yielding a modified turbidimetric zein method (mTZM) to quantify zein in whole corn. Ten dry corns containing unique endosperms were evaluated using the mTZM. Corns with flint, dent, floury, or opaque endosperms were found to contain 19.3, 11.3, 5.8, and 4.9 g of zein/100 g of starch, respectively. The ability of mTZM to differentiate corn endosperm types as defined by least significant difference was 2.6 g of zein/100 g of starch. Ten high-moisture corns of varying moisture content were also evaluated using the mTZM. Zein content of high-moisture corns as defined by mTZM ranged from 8.3 to 2.8 g of zein/100 g of starch with a least significant difference of 1.2 g of zein/100 g of starch. The mTZM determined that zein contents of high-moisture, floury, and opaque corns were markedly less than those of flint and dent dry corns, indicating that mTZM has the ability to quantify starch granule encapsulation by hydrophobic prolamin proteins in whole corn.

  13. High-performance liquid chromatography analysis methods developed for quantifying enzymatic esterification of flavonoids in ionic liquids.

    Science.gov (United States)

    Lue, Bena-Marie; Guo, Zheng; Xu, Xuebing

    2008-07-11

    Methods using reversed-phase high-performance liquid chromatography (RP-HPLC) with ELSD were investigated to quantify enzymatic reactions of flavonoids with fatty acids in the presence of diverse room temperature ionic liquids (RTILs). A buffered salt (preferably triethylamine-acetate) was found essential for separation of flavonoids from strongly polar RTILs, whereby RTILs were generally visible as two major peaks identified based on an ion-pairing/exchanging hypothesis. C8 and C12 stationary phases were optimal while mobile phase pH (3-7) had only a minor influence on separation. The method developed was successfully applied for primary screening of RTILs (>20), with in depth evaluation of substrates in 10 RTILs, for their evaluation as reaction media.

  14. The method of educational assessment affects children's neural processing and performance: behavioural and fMRI Evidence

    Science.gov (United States)

    Howard, Steven J.; Burianová, Hana; Calleia, Alysha; Fynes-Clinton, Samuel; Kervin, Lisa; Bokosmaty, Sahar

    2017-08-01

    Standardised educational assessments are now widespread, yet their development has given comparatively more consideration to what to assess than how to optimally assess students' competencies. Existing evidence from behavioural studies with children and neuroscience studies with adults suggest that the method of assessment may affect neural processing and performance, but current evidence remains limited. To investigate the impact of assessment methods on neural processing and performance in young children, we used functional magnetic resonance imaging to identify and quantify the neural correlates during performance across a range of current approaches to standardised spelling assessment. Results indicated that children's test performance declined as the cognitive load of assessment method increased. Activation of neural nodes associated with working memory further suggests that this performance decline may be a consequence of a higher cognitive load, rather than the complexity of the content. These findings provide insights into principles of assessment (re)design, to ensure assessment results are an accurate reflection of students' true levels of competency.

  15. Validation of a rapid, non-radioactive method to quantify internalisation of G-protein coupled receptors.

    Science.gov (United States)

    Jongsma, Maikel; Florczyk, Urszula M; Hendriks-Balk, Mariëlle C; Michel, Martin C; Peters, Stephan L M; Alewijnse, Astrid E

    2007-07-01

    Agonist exposure can cause internalisation of G-protein coupled receptors (GPCRs), which may be a part of desensitisation but also of cellular signaling. Previous methods to study internalisation have been tedious or only poorly quantitative. Therefore, we have developed and validated a quantitative method using a sphingosine-1-phosphate (S1P) receptor as a model. Because of a lack of suitable binding studies, it has been difficult to study S1P receptor internalisation. Using an N-terminal HisG-tag, S1P(1) receptors on the cell membrane can be visualised via immunocytochemistry with a specific anti-HisG antibody. S1P-induced internalisation was concentration dependent and was quantified using a microplate reader, detecting either an absorbance, fluorescent or luminescent signal, depending on the antibodies used. Among those, the fluorescence detection method was the most convenient to use. The relative ease of this method makes it suitable for measuring a large number of data points, e.g. to compare the potency and efficacy of receptor ligands.

  16. The Quantified Characterization Method of the Micro-Macro Contacts of Three-Dimensional Granular Materials on the Basis of Graph Theory.

    Science.gov (United States)

    Guan, Yanpeng; Wang, Enzhi; Liu, Xiaoli; Wang, Sijing; Luan, Hebing

    2017-08-03

    We have attempted a multiscale and quantified characterization method of the contact in three-dimensional granular material made of spherical particles, particularly in cemented granular material. Particle contact is defined as a type of surface contact with voids in its surroundings, rather than a point contact. Macro contact is a particle contact set satisfying the restrictive condition of a two-dimensional manifold with a boundary. On the basis of graph theory, two dual geometrical systems are abstracted from the granular pack. The face and the face set, which satisfies the two-dimensional manifold with a boundary in the solid cell system, are extracted to characterize the particle contact and the macro contact, respectively. This characterization method is utilized to improve the post-processing in DEM (Discrete Element Method) from a micro perspective to describe the macro effect of the cemented granular material made of spherical particles. Since the crack has the same shape as its corresponding contact, this method is adopted to characterize the crack and realize its visualization. The integral failure route of the sample can be determined by a graph theory algorithm. The contact force is assigned to the weight value of the face characterizing the particle contact. Since the force vectors can be added, the macro contact force can be solved by adding the weight of its corresponding faces.
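
    As a rough illustration of the final step, the sketch below (not the authors' code) sums hypothetical per-face force-vector weights over a macro contact to obtain the macro contact force; the face labels, force values and grouping into a macro contact are assumptions made only for the example.

        # Hedged sketch: summing per-face contact forces to obtain a macro contact force.
        # The face list, force vectors and grouping into a "macro contact" are illustrative
        # assumptions, not the authors' data structures.
        import numpy as np

        # Each face characterizing a particle contact carries a force vector as its weight.
        faces = {
            "f1": np.array([0.0, 0.0, -1.2]),    # N, hypothetical values
            "f2": np.array([0.1, 0.0, -0.9]),
            "f3": np.array([-0.05, 0.02, -1.0]),
        }

        # A macro contact is a face set forming a two-dimensional manifold with boundary.
        macro_contact = ["f1", "f2", "f3"]

        # Because force vectors are additive, the macro contact force is the vector sum
        # of the weights of its constituent faces.
        macro_force = sum(faces[f] for f in macro_contact)
        print("macro contact force [N]:", macro_force)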

  17. A NEW METHOD TO QUANTIFY X-RAY SUBSTRUCTURES IN CLUSTERS OF GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Andrade-Santos, Felipe; Lima Neto, Gastao B.; Lagana, Tatiana F. [Departamento de Astronomia, Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Geofisica e Ciencias Atmosfericas, Rua do Matao 1226, Cidade Universitaria, 05508-090 Sao Paulo, SP (Brazil)

    2012-02-20

    We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done in a residual image that is the result of the subtraction of a surface brightness model, obtained by fitting a two-dimensional analytical model (β-model or Sersic profile) with elliptical symmetry, from the X-ray image. Our method is applied to 34 clusters observed by the Chandra X-ray Observatory that are in the redshift range z in [0.02, 0.2] and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities, such as the mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high- and low-substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, the analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure level clusters) are different (they present an offset, i.e., given a fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous), which is an important result for cosmological tests using the mass-luminosity relation to obtain the cluster mass function, since they rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
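
    A minimal sketch of the residual-image idea is given below, assuming a synthetic image, a circular β-model and a simple positive-residual statistic; none of these choices reproduce the authors' pipeline, and the starting values for the fit are illustrative.

        # Hedged sketch of the residual-image idea: fit a 2D beta-model to an X-ray
        # surface-brightness image, subtract it, and quantify the residual flux.
        # The synthetic image, fit setup and substructure statistic are illustrative only.
        import numpy as np
        from scipy.optimize import curve_fit

        def beta_model(coords, s0, x0, y0, rc, beta):
            # Circular beta-model for simplicity; the paper fits elliptical models.
            x, y = coords
            r2 = (x - x0) ** 2 + (y - y0) ** 2
            return s0 * (1.0 + r2 / rc**2) ** (0.5 - 3.0 * beta)

        ny, nx = 128, 128
        y, x = np.mgrid[0:ny, 0:nx]
        img = beta_model((x, y), 100.0, 64, 64, 10.0, 0.7) + np.random.poisson(2.0, (ny, nx))

        p0 = [img.max(), nx / 2, ny / 2, 8.0, 0.6]          # rough starting values
        popt, _ = curve_fit(beta_model, (x.ravel(), y.ravel()), img.ravel(), p0=p0)

        model = beta_model((x, y), *popt)
        residual = img - model
        # One possible substructure statistic: positive residual flux over model flux.
        substructure_level = residual[residual > 0].sum() / model.sum()
        print(f"substructure level ~ {substructure_level:.3f}")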

  18. Quantifying light pollution

    International Nuclear Information System (INIS)

    Cinzano, P.; Falchi, F.

    2014-01-01

    In this paper we review newly available indicators useful for quantifying and monitoring light pollution, defined as the alteration of the natural quantity of light in the night environment due to the introduction of manmade light. With the introduction of recent radiative transfer methods for the computation of light pollution propagation, several new indicators become available. These indicators represent a primary step in light pollution quantification, beyond the bare evaluation of the night sky brightness, which is an observational effect integrated along the line of sight and thus lacking the three-dimensional information. - Highlights: • We review newly available indicators useful for quantifying and monitoring light pollution. • These indicators are a primary step in light pollution quantification. • These indicators allow light pollution mapping to be improved from a 2D to a 3D grid. • These indicators make it possible to carry out a tomography of light pollution. • We show an application of this technique to an Italian region

  19. A New Method to Quantify the Isotopic Signature of Leaf Transpiration: Implications for Landscape-Scale Evapotranspiration Partitioning Studies

    Science.gov (United States)

    Wang, L.; Good, S. P.; Caylor, K. K.

    2010-12-01

    Characterizing the constituent components of evapotranspiration is crucial to better understand ecosystem-level water budgets and water use dynamics. Isotope-based evapotranspiration partitioning methods are promising, but their utility lies in the accurate estimation of the isotopic composition of the underlying transpiration and evaporation. Here we report a new method to quantify the isotopic signature of leaf transpiration under field conditions. This method utilizes a commercially available laser-based isotope analyzer and a transparent leaf chamber, modified from a Licor conifer leaf chamber. The method is based on the water mass balance between ambient air and leaf-transpired air. We verified the method using “artificial leaves” and glassline-extracted samples. The method provides a new and direct way to estimate leaf transpiration isotopic signatures and it has wide applications in ecology, hydrology and plant physiology.
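
    A minimal sketch of the kind of flow-through chamber mass balance involved is given below; the formulation and the inlet/outlet values are illustrative assumptions, not the authors' exact equations or data.

        # Minimal sketch of a flow-through chamber water/isotope mass balance, commonly
        # used to infer the isotope composition of transpiration (delta_T). Values and
        # the exact formulation are illustrative assumptions, not the paper's data.
        def transpiration_delta(w_in, d_in, w_out, d_out):
            """Isotopic signature of transpired water from inlet/outlet measurements.

            w_in, w_out : water vapour mixing ratio entering/leaving the chamber (mmol/mol)
            d_in, d_out : corresponding isotope ratios (per mil, e.g. d18O)
            """
            # Mass balance: w_out*d_out = w_in*d_in + (w_out - w_in)*d_T
            return (w_out * d_out - w_in * d_in) / (w_out - w_in)

        print(transpiration_delta(w_in=15.0, d_in=-14.0, w_out=18.5, d_out=-11.5), "per mil")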

  20. Processing module operating methods, processing modules, and communications systems

    Science.gov (United States)

    McCown, Steven Harvey; Derr, Kurt W.; Moore, Troy

    2014-09-09

    A processing module operating method includes using a processing module physically connected to a wireless communications device, requesting that the wireless communications device retrieve encrypted code from a web site and receiving the encrypted code from the wireless communications device. The wireless communications device is unable to decrypt the encrypted code. The method further includes using the processing module, decrypting the encrypted code, executing the decrypted code, and preventing the wireless communications device from accessing the decrypted code. Another processing module operating method includes using a processing module physically connected to a host device, executing an application within the processing module, allowing the application to exchange user interaction data communicated using a user interface of the host device with the host device, and allowing the application to use the host device as a communications device for exchanging information with a remote device distinct from the host device.

  1. Quantifying Matter

    CERN Document Server

    Angelo, Joseph A

    2011-01-01

    Quantifying Matter explains how scientists learned to measure matter and quantify some of its most fascinating and useful properties. It presents many of the most important intellectual achievements and technical developments that led to the scientific interpretation of substance. Complete with full-color photographs, this exciting new volume describes the basic characteristics and properties of matter. Chapters include: Exploring the Nature of Matter; The Origin of Matter; The Search for Substance; Quantifying Matter During the Scientific Revolution; Understanding Matter's Electromagnet

  2. Quantifier spreading in child eye movements: A case of the Russian quantifier kazhdyj ‘every'

    Directory of Open Access Journals (Sweden)

    Irina A. Sekerina

    2017-07-01

    Extensive cross-linguistic work has documented that children up to the age of 9–10 make errors when performing a sentence-picture verification task that pairs spoken sentences with the universal quantifier 'every' and pictures with entities in partial one-to-one correspondence. These errors stem from children's difficulties in restricting the domain of a universal quantifier to the appropriate noun phrase and are referred to in the literature as 'quantifier-spreading' (q-spreading). We adapted the task to be performed in conjunction with eye-movement recordings using the Visual World Paradigm. Russian-speaking 5-to-6-year-old children (N = 31) listened to sentences like 'Kazhdyj alligator lezhit v vanne' (‘Every alligator is lying in a bathtub’) and viewed pictures with three alligators, each in a bathtub, and two extra empty bathtubs. Non-spreader children (N = 12) were adult-like in their accuracy, whereas q-spreading children (N = 19) were only 43% correct in interpreting such sentences compared to the control sentences. Eye movements of q-spreading children revealed that more looks to the extra containers (two empty bathtubs) correlated with higher error rates, reflecting the processing pattern of q-spreading. In contrast, more looks to the distractors in control sentences did not lead to errors in interpretation. We argue that q-spreading errors are caused by interference from the extra entities in the visual context, and our results support the processing-difficulty account of the acquisition of quantification. Interference results in cognitive overload as children have to integrate multiple sources of information, i.e., the visual context with salient extra entities and the spoken sentence in which these entities are mentioned, in real-time processing. This article is part of the special collection: Acquisition of Quantification

  3. Quantifying the value of E and P technology

    International Nuclear Information System (INIS)

    Heinemann, R.F.; Donlon, W.P.; Hoefner, M.L.

    1996-01-01

    A quantitative value-to-cost analysis was performed for the upstream technology portfolio of Mobil Oil for the period 1993 to 1998, by quantifying the cost of developing and delivering various technologies as well as the net present value from technologies applied to thirty major assets. The value captured was classified into four general categories: (1) reduced capital costs, (2) reduced operating costs, (3) increased hydrocarbon production, and (4) increased proven reserves. The methodology used in quantifying the value-to-cost of upstream technologies and the results of the asset analysis were described, with examples of the value of technology to specific assets. A method to incorporate strategic considerations and business alignment to set overall program priorities was also discussed. Identifying and quantifying specific cases of technology application on an asset-by-asset basis was considered to be the principal advantage of using this method. figs

  4. Quantifying nitrous oxide emissions from Chinese grasslands with a process-based model

    Directory of Open Access Journals (Sweden)

    F. Zhang

    2010-06-01

    As one of the largest land cover types, grassland can potentially play an important role in the ecosystem services of natural resources in China. Nitrous oxide (N2O) is a major greenhouse gas emitted from grasslands. The current N2O inventory at a regional or national level in China relies on the emission factor method, which is based on limited measurements. To improve the accuracy of the inventory by capturing the spatial variability of N2O emissions under the diverse climate, soil and management conditions across China, we adopted an approach utilizing a process-based biogeochemical model, DeNitrification-DeComposition (DNDC), to quantify N2O emissions from Chinese grasslands. In the present study, DNDC was tested against datasets of N2O fluxes measured at eight grassland sites in China with encouraging results. The validated DNDC was then linked to a GIS database holding spatially differentiated information on climate, soil, vegetation and management at the county level for all the grasslands in the country. Daily weather data for 2000–2007 from 670 meteorological stations across the entire domain were employed to drive the simulations. The modelled results on a national scale showed a clear geographic pattern of N2O emissions. A high-emission strip stretched from northeast to central China, consistent with the eastern boundary between the temperate grassland region and the major agricultural regions of China. The grasslands in the western mountain regions, however, emitted much less N2O. The regionally averaged rates of N2O emission were 0.26, 0.14 and 0.38 kg nitrogen (N) ha−1 y−1 for the temperate, montane and tropical/subtropical grasslands, respectively. The annual mean N2O emission from the total 337 million ha of grasslands in China was 76.5 ± 12.8 Gg N for the simulated years.

  5. Stacking interactions between carbohydrate and protein quantified by combination of theoretical and experimental methods.

    Directory of Open Access Journals (Sweden)

    Michaela Wimmerová

    Carbohydrate-receptor interactions are an integral part of biological events. They play an important role in many cellular processes, such as cell-cell adhesion, cell differentiation and in-cell signaling. Carbohydrates can interact with a receptor by using several types of intermolecular interactions. One of the most important is the interaction of a carbohydrate's apolar part with aromatic amino acid residues, known as dispersion interaction or CH/π interaction. In the study presented here, we attempted for the first time to quantify how the CH/π interaction contributes to a more general carbohydrate-protein interaction. We used a combined experimental approach, creating single and double point mutants, with high-level computational methods, and applied both to Ralstonia solanacearum (RSL) lectin complexes with α-L-Me-fucoside. Experimentally measured binding affinities were compared with computed carbohydrate-aromatic amino acid residue interaction energies. Experimental binding affinities for the RSL wild type, phenylalanine and alanine mutants were -8.5, -7.1 and -4.1 kcal·mol−1, respectively. These affinities agree with the computed dispersion interaction energies between the carbohydrate and the aromatic amino acid residues for the RSL wild type and the phenylalanine mutant, with values of -8.8 and -7.9 kcal·mol−1, excluding the alanine mutant, where the interaction energy was -0.9 kcal·mol−1. Molecular dynamics simulations show that this discrepancy can be caused by creation of a new hydrogen bond between the α-L-Me-fucoside and RSL. The observed results suggest that in this and similar cases the carbohydrate-receptor interaction can be driven mainly by a dispersion interaction.

  6. A novel method for visualising and quantifying through-plane skin layer deformations.

    Science.gov (United States)

    Gerhardt, L-C; Schmidt, J; Sanz-Herrera, J A; Baaijens, F P T; Ansari, T; Peters, G W M; Oomens, C W J

    2012-10-01

    Skin is a multilayer composite and exhibits highly non-linear, viscoelastic, anisotropic material properties. In many consumer product and medical applications (e.g. during shaving, needle insertion, patient re-positioning), large tissue displacements and deformations are involved; consequently large local strains in the skin tissue can occur. Here, we present a novel imaging-based method to study skin deformations and the mechanics of interacting skin layers of full-thickness skin. Shear experiments and real-time video recording were combined with digital image correlation and strain field analysis to visualise and quantify skin layer deformations during dynamic mechanical testing. A global shear strain of 10% was applied to airbrush-patterned porcine skin (thickness: 1.2-1.6 mm) using a rotational rheometer. The recordings were analysed with ARAMIS image correlation software, and local skin displacement, strain and stiffness profiles through the skin layers determined. The results of this pilot study revealed inhomogeneous skin deformation, characterised by a gradual transition from a low (2.0-5.0%; epidermis) to high (10-22%; dermis) shear strain regime. Shear moduli ranged from 20 to 130 kPa. The herein presented method will be used for more extended studies on viable human skin, and is considered a valuable foundation for further development of constitutive models which can be used in advanced finite element analyses of skin. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. When things don't add up: quantifying impacts of multiple stressors from individual metabolism to ecosystem processing.

    Science.gov (United States)

    Galic, Nika; Sullivan, Lauren L; Grimm, Volker; Forbes, Valery E

    2018-04-01

    Ecosystems are exposed to multiple stressors which can compromise functioning and service delivery. These stressors often co-occur and interact in different ways which are not yet fully understood. Here, we applied a population model representing a freshwater amphipod feeding on leaf litter in forested streams. We simulated impacts of hypothetical stressors, individually and in pairwise combinations that target the individuals' feeding, maintenance, growth and reproduction. Impacts were quantified by examining responses at three levels of biological organisation: individual-level body sizes and cumulative reproduction, population-level abundance and biomass and ecosystem-level leaf litter decomposition. Interactive effects of multiple stressors at the individual level were mostly antagonistic, that is, less negative than expected. Most population- and ecosystem-level responses to multiple stressors were stronger than expected from an additive model, that is, synergistic. Our results suggest that across levels of biological organisation responses to multiple stressors are rarely only additive. We suggest methods for efficiently quantifying impacts of multiple stressors at different levels of biological organisation. © 2018 John Wiley & Sons Ltd/CNRS.
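
    The sketch below illustrates, with hypothetical effect sizes and a simple additive null model, how a combined-stressor response might be classified as additive, synergistic or antagonistic; it is not the population model used in the study.

        # Sketch of classifying a two-stressor interaction against an additive null model.
        # Effect sizes (proportional declines relative to a control) are hypothetical.
        def classify_interaction(effect_a, effect_b, effect_ab, tol=0.05):
            """effect_* are proportional declines (0-1) relative to an unstressed control."""
            expected = effect_a + effect_b          # simple additive expectation
            if effect_ab > expected + tol:
                return "synergistic (worse than additive)"
            if effect_ab < expected - tol:
                return "antagonistic (less than additive)"
            return "additive"

        # e.g. a feeding stressor reduces biomass by 20%, a maintenance stressor by 15%,
        # and the combination by 45%:
        print(classify_interaction(0.20, 0.15, 0.45))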

  8. Quantifying Diastolic Function: From E-Waves as Triangles to Physiologic Contours via the 'Geometric Method'.

    Science.gov (United States)

    Golman, Mikhail; Padovano, William; Shmuylovich, Leonid; Kovács, Sándor J

    2018-03-01

    Conventional echocardiographic diastolic function (DF) assessment approximates transmitral flow velocity contours (Doppler E-waves) as triangles, with the peak velocity (Epeak), acceleration time (AT), and deceleration time (DT) as indexes. These metrics have limited value because they are unable to characterize the underlying physiology. The parametrized diastolic filling (PDF) formalism provides a physiologic, kinematic-mechanism-based characterization of DF by extracting chamber stiffness (k), relaxation (c), and load (x0) from E-wave contours. We derive the mathematical relationship between the PDF parameters and Epeak, AT, and DT, and thereby introduce the geometric method (GM), which computes the PDF parameters using Epeak, AT, and DT as input. Numerical experiments validated GM by analysis of 208 E-waves from 31 datasets spanning the full range of clinical diastolic function. GM yielded average parameter values per subject that were indistinguishable from those of the gold-standard PDF method (k: R² = 0.94; c: R² = 0.95; x0: R² = 0.95), providing an alternative to the PDF method to quantify DF in terms of physiologic chamber properties.

  9. Approximate entropy—a new statistic to quantify arc and welding process stability in short-circuiting gas metal arc welding

    International Nuclear Information System (INIS)

    Cao Biao; Xiang Yuanpeng; Lü Xiaoqing; Zeng Min; Huang Shisheng

    2008-01-01

    Based on the phase space reconstruction of the welding current in short-circuiting gas metal arc welding using carbon dioxide as shielding gas, the approximate entropy of the welding current, as well as its standard deviation, has been calculated and analysed to investigate their relation to the stability of the electric arc and the welding process. The extensive experimental and calculated results show that the approximate entropy of the welding current is significantly and positively correlated with arc and welding process stability, whereas its standard deviation is correlated with them negatively. A larger approximate entropy and a smaller standard deviation imply a more stable arc and welding process, and vice versa. As a result, the approximate entropy of the welding current shows promise for assessing and quantifying the stability of the electric arc and welding process in short-circuiting gas metal arc welding
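
    For reference, a self-contained sketch of the standard approximate entropy statistic applied to a synthetic current trace is given below; the embedding dimension m = 2 and tolerance r = 0.2·SD are common defaults assumed here, and the signal is not from the paper.

        # Self-contained sketch of the standard approximate entropy (ApEn) statistic,
        # here applied to a synthetic "welding current" series. m and r follow common
        # defaults (m = 2, r = 0.2 * std); the data are not from the paper.
        import numpy as np

        def approximate_entropy(x, m=2, r_factor=0.2):
            x = np.asarray(x, dtype=float)
            n = len(x)
            r = r_factor * x.std()

            def phi(m):
                # Embed the series into overlapping template vectors of length m.
                emb = np.array([x[i:i + m] for i in range(n - m + 1)])
                # Chebyshev distance between all pairs of template vectors.
                dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                c = (dist <= r).mean(axis=1)      # includes the self-match, as in ApEn
                return np.log(c).mean()

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(0)
        current = 150 + 10 * np.sin(np.linspace(0, 40 * np.pi, 1000)) + rng.normal(0, 2, 1000)
        print("ApEn:", approximate_entropy(current))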

  10. Tracer methods to quantify nutrient uptake from plough layer, sub-soil and fertilizer: implications on sustainable nutrient management

    International Nuclear Information System (INIS)

    Haak, E.

    1996-01-01

    Two soil injection methods are presented. The first method consists of homogeneously labelling the whole plough layer with carrier-free tracers. This is done in two treatments: (1) a reference treatment without connection with the sub-soil and (2) an experimental treatment where the sub-soil is freely accessible for root penetration. The second method, which is now under development, consists of using isotope-labelled fertilizers instead of carrier-free tracers. By application of the A-value concept it is possible to quantify, by the first method, the plant uptake of nutrients from the plough layer and sub-soil, and, by the second method, the uptake of nutrients from the applied fertilizer. A fertilizer strategy for phosphorus is discussed based on data obtained from tracer experiments in the field and a soil survey of specific field sites. (author). 7 refs, 2 figs, 1 tab

  11. Tracer methods to quantify nutrient uptake from plough layer, sub-soil and fertilizer: implications on sustainable nutrient management

    Energy Technology Data Exchange (ETDEWEB)

    Haak, E [Swedish Univ. of Agricultural Sciences, Uppsala (Sweden). Dept. of Radioecology

    1996-07-01

    Two soil injection methods are presented. The first method consists of homogeneously labelling the whole plough layer with carrier-free tracers. This is done in two treatments: (1) a reference treatment without connection with the sub-soil and (2) an experimental treatment where the sub-soil is freely accessible for root penetration. The second method, which is now under development, consists of using isotope-labelled fertilizers instead of carrier-free tracers. By application of the A-value concept it is possible to quantify, by the first method, the plant uptake of nutrients from the plough layer and sub-soil, and, by the second method, the uptake of nutrients from the applied fertilizer. A fertilizer strategy for phosphorus is discussed based on data obtained from tracer experiments in the field and a soil survey of specific field sites. (author). 7 refs, 2 figs, 1 tab.

  12. Neutron radiography and X-ray computed tomography for quantifying weathering and water uptake processes inside porous limestone used as building material

    International Nuclear Information System (INIS)

    Dewanckele, J.; De Kock, T.; Fronteau, G.; Derluyn, H.; Vontobel, P.; Dierick, M.; Van Hoorebeke, L.; Jacobs, P.; Cnudde, V.

    2014-01-01

    Euville and Savonnières limestones were weathered by an acid test, which resulted in the formation of a gypsum crust. In order to characterize the crystallization pattern and the evolution of the pore structure below the crust, a combination of high resolution X-ray computed tomography and SEM–EDS was used. A time-lapse sequence of the changing pore structure in both stones was obtained and afterwards quantified using image analysis. The difference in weathering of the two stones under the same process could be explained by the underlying microstructure and texture. Because water and moisture play a crucial role in the weathering processes, water uptake in weathered and non-weathered samples was characterized based on neutron radiography. In this way the water uptake was both visualized and quantified as a function of the height of the sample and as a function of time. In general, the formation of a gypsum crust on limestone slows down the initial water uptake in the materials. - Highlights: • Time-lapse sequence in 3D of changing pore structures inside limestone • A combination of X-ray CT, SEM and neutron radiography was used. • Quantification of water content as a function of time, height and weathering • Characterization of weathering processes due to gypsum crystallization

  13. New methods to quantify the cracking performance of cementitious systems made with internal curing

    Science.gov (United States)

    Schlitter, John L.

    The use of high performance concretes that utilize low water-cement ratios has been promoted for infrastructure based on their potential to increase durability and service life, because they are stronger and less porous. Unfortunately, these benefits are not always realized due to the susceptibility of high performance concrete to undergo early age cracking caused by shrinkage. This problem is widespread and affects federal, state, and local budgets that must maintain or replace deterioration caused by cracking. As a result, methods to reduce or eliminate early age shrinkage cracking have been investigated. Internal curing is one such method, in which a prewetted lightweight sand is incorporated into the concrete mixture to provide internal water as the concrete cures. This action can significantly reduce or eliminate shrinkage and in some cases causes a beneficial early age expansion. Standard laboratory tests have been developed to quantify the shrinkage cracking potential of concrete. Unfortunately, many of these tests may not be appropriate for use with internally cured mixtures and only provide limited amounts of information. Most standard tests are not designed to capture the expansive behavior of internally cured mixtures. This thesis describes the design and implementation of two new testing devices that overcome the limitations of current standards. The first device discussed in this thesis is called the dual ring. The dual ring is a testing device that quantifies the early age restrained shrinkage performance of cementitious mixtures. The design of the dual ring is based on the current ASTM C 1581-04 standard test, which utilizes one steel ring to restrain a cementitious specimen. The dual ring overcomes two important limitations of the standard test. First, the standard single ring test cannot restrain the expansion that takes place at early ages, which is not representative of field conditions. The dual ring incorporates a second restraining ring

  14. Ochratoxin A reduction in meat sausages using processing methods practiced in households.

    Science.gov (United States)

    Pleadin, Jelka; Perši, Nina; Kovačević, Dragan; Vulić, Ana; Frece, Jadranka; Markov, Ksenija

    2014-01-01

    The aim of this study was to investigate the possibilities of ochratoxin A (OTA) reduction in home-made meat products. Meat sausages (n = 50) produced from raw materials coming from pigs exposed to OTA-contaminated feed were subjected to common heat processes practiced in households (cooking, frying and baking). Concentrations of OTA in pre- and post-processed products were quantified using a validated immunoassay method, enzyme-linked immunosorbent assay, and confirmed using high-performance liquid chromatography with fluorescence detection. In line with the differences in the recipes used and the degree of OTA accumulation in the raw materials, OTA concentrations established in Mediterranean and roast sausages were lower than those found in liver and blood sausages. Baking of contaminated sausages at temperatures of 190-220°C (for 60 min) resulted in a significant reduction of OTA levels (75.8%), while 30-min cooking (at 100°C) and frying (at 170°C) proved to be significantly less effective (yielding OTA reductions of 7.4% and 12.6%, respectively). The results indicate that, despite the high stability of OTA, heat processes are capable of reducing its concentration in home-made meat products, depending on the processing modality used.

  15. Power Curve Measurements, quantify the production increase

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    The purpose of this report is to quantify the production increase on a given turbine with respect to another given turbine. The used methodology is the “side by side” comparison method, provided by the client. This method involves the use of two neighboring turbines and it is based...

  16. Quantifiers and working memory

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.

    2010-01-01

    The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task to compare numerical quantifiers of low and high rank, parity quantifiers and proportional quantifiers. The results enrich and support the data obtained

  17. Quantifiers and working memory

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.

    2009-01-01

    The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task to compare numerical quantifiers of low and high rank, parity quantifiers and proportional quantifiers. The results enrich and support the data obtained

  18. A hybrid approach to quantify software reliability in nuclear safety systems

    International Nuclear Information System (INIS)

    Arun Babu, P.; Senthil Kumar, C.; Murali, N.

    2012-01-01

    Highlights: ► A novel method to quantify software reliability using software verification and mutation testing in nuclear safety systems. ► Contributing factors that influence the software reliability estimate. ► An approach to help regulators verify the reliability of safety-critical software during the software licensing process. -- Abstract: Technological advancements have led to the use of computer-based systems in safety-critical applications. As computer-based systems are being introduced in nuclear power plants, effective and efficient methods are needed to ensure dependability and compliance with the high reliability requirements of systems important to safety. Even after several years of research, quantification of software reliability remains a controversial and unresolved issue. Also, existing approaches have assumptions and limitations which are not acceptable for safety applications. This paper proposes a theoretical approach combining software verification and mutation testing to quantify software reliability in nuclear safety systems. The theoretical results obtained suggest that software reliability depends on three factors: the test adequacy, the amount of software verification carried out, and the reusability of verified code in the software. The proposed approach may help regulators in licensing computer-based safety systems in nuclear reactors.

  19. Resolving and quantifying overlapped chromatographic bands by transmutation

    Science.gov (United States)

    Malinowski

    2000-09-15

    A new chemometric technique called "transmutation" is developed for the purpose of sharpening overlapped chromatographic bands in order to quantify the components. The "transmutation function" is created from the chromatogram of the pure component of interest, obtained from the same instrument, operating under the same experimental conditions used to record the unresolved chromatogram of the sample mixture. The method is used to quantify mixtures containing toluene, ethylbenzene, m-xylene, naphthalene, and biphenyl from unresolved chromatograms previously reported. The results are compared to those obtained using window factor analysis, rank annihilation factor analysis, and matrix regression analysis. Unlike the latter methods, the transmutation method is not restricted to two-dimensional arrays of data, such as those obtained from HPLC/DAD, but is also applicable to chromatograms obtained from single detector experiments. Limitations of the method are discussed.

  20. Methods of information processing

    Energy Technology Data Exchange (ETDEWEB)

    Kosarev, Yu G; Gusev, V D

    1978-01-01

    Works are presented on automation systems for editing and publishing operations using methods for processing symbolic information and information contained in training samples (ranking of objects by promise, algorithms for classifying tones and noise). The book will be of interest to specialists in the automated processing of textual information, programming, and pattern recognition.

  1. Quantifying linguistic coordination

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Tylén, Kristian

    task (Bahrami et al 2010, Fusaroli et al. 2012) we extend to linguistic coordination dynamical measures of recurrence employed in the analysis of sensorimotor coordination (such as heart-rate (Konvalinka et al 2011), postural sway (Shockley 2005) and eye-movements (Dale, Richardson and Kirkham 2012)). We employ nominal recurrence analysis (Orsucci et al 2005, Dale et al 2011) on the decision-making conversations between the participants. We report strong correlations between various indexes of recurrence and collective performance. We argue this method allows us to quantify the qualities

  2. Developing and testing a computer vision method to quantify 3D movements of bottom-set gillnets on the seabed

    DEFF Research Database (Denmark)

    Savina, Esther; Krag, Ludvig Ahm; Madsen, Niels

    2018-01-01

    Gillnets are one of the most widely used fishing gears, but there is limited knowledge about their habitat effects, partly due to the lack of methodology to quantify such effects. A stereo imaging method was identified and adapted to quantify the dynamic behaviour of gillnets in-situ. Two cameras...... gillnets deployed in sandy habitats in the Danish coastal plaice fishery were assessed. The direct physical disruption of the seabed was minimal as the leadline was not penetrating into the seabed. Direct damage to the benthos could however originate from the sweeping movements of the nets, which were...... the general perception is that heavy gears are more destructive to the habitat, light nets were moving significantly more than heavy ones. The established methodology could be further applied to assess gear dynamic behaviour in situ of other static gears....

  3. The Rat Grimace Scale: A partially automated method for quantifying pain in the laboratory rat via facial expressions

    Directory of Open Access Journals (Sweden)

    Zhan Shu

    2011-07-01

    We recently demonstrated the utility of quantifying spontaneous pain in mice via the blinded coding of facial expressions. As the majority of preclinical pain research is in fact performed in the laboratory rat, we attempted to modify the scale for use in this species. We present herein the Rat Grimace Scale, and show its reliability, accuracy, and ability to quantify the time course of spontaneous pain in the intraplantar complete Freund's adjuvant, intraarticular kaolin-carrageenan, and laparotomy (post-operative pain) assays. The scale's ability to demonstrate the dose-dependent analgesic efficacy of morphine is also shown. In addition, we have developed software, Rodent Face Finder®, which successfully automates the most labor-intensive step in the process. Given the known mechanistic dissociations between spontaneous and evoked pain, and the primacy of the former as a clinical problem, we believe that widespread adoption of spontaneous pain measures such as the Rat Grimace Scale might lead to more successful translation of basic science findings into clinical application.

  4. Microencapsulation and Electrostatic Processing Method

    Science.gov (United States)

    Morrison, Dennis R. (Inventor); Mosier, Benjamin (Inventor)

    2000-01-01

    Methods are provided for forming spherical multilamellar microcapsules having alternating hydrophilic and hydrophobic liquid layers, surrounded by flexible, semi-permeable hydrophobic or hydrophilic outer membranes which can be tailored specifically to control the diffusion rate. The methods of the invention rely on low shear mixing and liquid-liquid diffusion process and are particularly well suited for forming microcapsules containing both hydrophilic and hydrophobic drugs. These methods can be carried out in the absence of gravity and do not rely on density-driven phase separation, mechanical mixing or solvent evaporation phases. The methods include the process of forming, washing and filtering microcapsules. In addition, the methods contemplate coating microcapsules with ancillary coatings using an electrostatic field and free fluid electrophoresis of the microcapsules. The microcapsules produced by such methods are particularly useful in the delivery of pharmaceutical compositions.

  5. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  6. Quantifying construction and demolition waste: an analytical review.

    Science.gov (United States)

    Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen

    2014-09-01

    Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.
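
    As a minimal illustration of one of the identified approaches, the waste generation rate method, the sketch below multiplies gross floor area by an empirical per-area rate; the rates shown are placeholders, not values reported in the review.

        # Minimal illustration of the "waste generation rate" method identified in the
        # review: waste = gross floor area x an empirical per-area generation rate.
        # The rates below are placeholders, not values reported in the paper.
        generation_rates_kg_per_m2 = {        # hypothetical regional averages
            "new_construction": 40.0,
            "demolition": 1100.0,
        }

        def estimate_cd_waste(activity, gross_floor_area_m2):
            """Return estimated C&D waste in tonnes for a given activity and floor area."""
            return gross_floor_area_m2 * generation_rates_kg_per_m2[activity] / 1000.0

        print(estimate_cd_waste("demolition", 2500.0), "t of C&D waste (illustrative)")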

  7. Finite Element Method in Machining Processes

    CERN Document Server

    Markopoulos, Angelos P

    2013-01-01

    Finite Element Method in Machining Processes provides a concise study on the way the Finite Element Method (FEM) is used in the case of manufacturing processes, primarily in machining. The basics of this kind of modeling are detailed to create a reference that will provide guidelines for those who start to study this method now, but also for scientists already involved in FEM and want to expand their research. A discussion on FEM, formulations and techniques currently in use is followed up by machining case studies. Orthogonal cutting, oblique cutting, 3D simulations for turning and milling, grinding, and state-of-the-art topics such as high speed machining and micromachining are explained with relevant examples. This is all supported by a literature review and a reference list for further study. As FEM is a key method for researchers in the manufacturing and especially in the machining sector, Finite Element Method in Machining Processes is a key reference for students studying manufacturing processes but al...

  8. Development of a Combined Trifluoroacetic Acid Hydrolysis and HPLC-ELSD Method to Identify and Quantify Inulin Recovered from Jerusalem artichoke Assisted by Ultrasound Extraction

    Directory of Open Access Journals (Sweden)

    Shuyi Li

    2018-05-01

    Over recent years, inulin, a fructan mixture consisting of oligosaccharides and polysaccharides, has attracted more and more attention from both the food industry and researchers, due to its unique functional properties as a natural resource. Therefore, there is an increased interest in the extraction and quantification of inulin for its valorization from inulin-rich plants, wastes and by-products. In this work, ultrasonic treatment was applied for inulin extraction, and a great impact of extraction temperature and ultrasonic power on the inulin content of the obtained extracts was observed. A combined process including trifluoroacetic acid (TFA)-assisted hydrolysis and analysis with high performance liquid chromatography equipped with an evaporative light scattering detector (HPLC-ELSD) was developed to quantify inulin content. The effect of the hydrolysis parameters was investigated, with the optimal conditions obtained using TFA at a concentration of 1 mg/mL, a hydrolysis temperature of 90 °C, and a hydrolysis duration of 60 min. The good linearity (>0.995), precision, recovery (100.27%), and stability obtained during the validation process showed that this method allows the quantification of total inulin content in the samples analyzed. This combined method may also contribute to the investigation of the functional properties of inulin (e.g., as a prebiotic).

  9. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    Science.gov (United States)

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
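
    A hedged sketch of one of the compared estimators (Welch's periodogram, via scipy.signal.welch) together with a log-domain least-squares fit of α(f) = α0 f^β is shown below; the RF segment and attenuation values are synthetic, and the reference-phantom ratio step used in the paper is omitted.

        # Hedged sketch: Welch PSD estimation (one of the compared estimators) plus a
        # log-domain least-squares fit of the power law alpha(f) = alpha0 * f**beta.
        # The RF signal and attenuation values are synthetic; the reference-phantom
        # ratio step used in the paper is omitted for brevity.
        import numpy as np
        from scipy.signal import welch

        fs = 40e6                                   # assumed RF sampling rate [Hz]
        rng = np.random.default_rng(1)
        rf_segment = rng.normal(size=4096)          # stand-in for backscattered RF data

        # Welch's periodogram of the echo segment.
        freqs, psd = welch(rf_segment, fs=fs, nperseg=256)
        print(f"{freqs.size} PSD bins estimated up to {freqs[-1] / 1e6:.1f} MHz")

        # Suppose attenuation alpha(f) [dB/cm] has been derived at these frequencies
        # (e.g. from the ratio of sample and reference PSDs); here it is simulated.
        f_mhz = np.linspace(2.0, 8.0, 20)
        alpha_meas = 0.5 * f_mhz**1.1 * rng.normal(1.0, 0.03, f_mhz.size)

        # Fit log(alpha) = log(alpha0) + beta*log(f) by linear least squares.
        beta, log_alpha0 = np.polyfit(np.log(f_mhz), np.log(alpha_meas), 1)
        print(f"alpha0 ~ {np.exp(log_alpha0):.2f} dB/cm/MHz^beta, beta ~ {beta:.2f}")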

  10. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    Energy Technology Data Exchange (ETDEWEB)

    Wen, N; Chetty, I [Henry Ford Health System, Detroit, MI (United States); Snyder, K [Henry Ford Hospital System, Detroit, MI (United States); Scheib, S [Varian Medical System, Barton (Switzerland); Qin, Y; Li, H [Henry Ford Health System, Detroit, Michigan (United States)

    2016-06-15

    Purpose: To implement a novel image analysis technique, the “center pixel method”, to quantify the end-to-end test accuracy of a frameless, image-guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness. The treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp=77, mAs=1022, slice thickness 1 mm) were acquired and registered to the reference CT images. 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston Lutz (WL) tests were performed to quantify the errors of the targeting accuracy of the system at 15 combinations of gantry, collimator and couch positions. The images were analyzed using two different methods. a) The classic method: the deviation was calculated by measuring the radial distance between the center of the central BB and the full width at half maximum of the radiation field. b) The center pixel method: since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance were 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw-, MLC- and cone-defined field sizes, respectively. When the center pixel method was used, the mean and standard deviation were 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm, respectively. Conclusion: Our results demonstrate that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American

  11. Evaluation of a Method for Quantifying Eugenol Concentrations in the Fillet Tissue from Freshwater Fish Species.

    Science.gov (United States)

    Meinertz, Jeffery R; Schreier, Theresa M; Porcher, Scott T; Smerud, Justin R

    2016-01-01

    AQUI-S 20E® (active ingredient, eugenol; AQUI-S New Zealand Ltd, Lower Hutt, New Zealand) is being pursued for approval as an immediate-release sedative in the United States. A validated method to quantify the primary residue (the marker residue) in fillet tissue from AQUI-S 20E-exposed fish was needed. A method was evaluated for determining concentrations of the AQUI-S 20E marker residue, eugenol, in freshwater fish fillet tissue. Method accuracies from fillet tissue fortified at nominal concentrations of 0.15, 1, and 60 μg/g from six fish species ranged from 88-102%. Within-day and between-day method precisions (% CV) from the fortified tissue were ≤8.4% CV. There were no coextracted compounds from the control fillet tissue of seven fish species that interfered with eugenol analyses. Six compounds used as aquaculture drugs did not interfere with eugenol analyses. The lower limit of quantitation (LLOQ) was 0.012 μg/g. The method was robust, i.e., in most cases, minor changes to the method did not impact method performance. Eugenol was stable in acetonitrile-water (3 + 7, v/v) for at least 14 days, in fillet tissue extracts for 4 days, and in fillet tissue stored at ~ -80°C for at least 84 days.
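
    For illustration, the short sketch below computes spike-recovery accuracy and precision (% CV) of the kind reported in such validations from a set of hypothetical fortified-sample results; the numbers are not from the study.

        # Sketch of computing spike-recovery accuracy and precision (%CV) as reported in
        # method validations; the measured concentrations below are hypothetical.
        import numpy as np

        nominal = 1.0                                                # fortified level, ug/g
        measured = np.array([0.95, 1.02, 0.98, 0.91, 1.00, 0.97])    # replicate results, ug/g

        accuracy_pct = measured.mean() / nominal * 100.0
        cv_pct = measured.std(ddof=1) / measured.mean() * 100.0
        print(f"accuracy = {accuracy_pct:.1f}%, precision = {cv_pct:.1f}% CV")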

  12. Quantifying the robustness of process manufacturing concept – A medical product case study

    DEFF Research Database (Denmark)

    Boorla, Srinivasa Murthy; Troldtoft, M.E.; Eifler, Tobias

    2017-01-01

    Product robustness refers to the consistency of performance of all of the units produced. It is often the case that process manufactured products are not designed concurrently, so by the end of the product design phase the Process Manufacturing Concept (PMC) has yet to be decided. Allocating process capable tolerances to the product during the design phase is therefore not possible. The robustness of the concept (how capable it is to achieve the product specification) only becomes clear at this late stage, and thus after testing and iteration. In this article, a method for calculating the unit-to-unit robustness of an early-stage PMC is proposed. The method uses variability and adjustability information from the manufacturing concept in combination with sensitivity information from the product's design to predict its functional performance variation. A Technology maturation factor

  13. Methods of quantifying circulating IgE

    International Nuclear Information System (INIS)

    Merrett, T.G.; Merrett, J.

    1978-01-01

    Four radioimmunoassay techniques, two conventional and two sandwich, have been used to measure circulating IgE levels in 100 sera. The test sera had IgE levels ranging from 1.0 to 20,000 u/ml, and each was measured at five dilutions, ranging from three-fold to 400-fold. The same IgE standards were used throughout, and the optimal range for each assay was determined by assessing data for quality control sera and the WHO standard 69/204. To be of general use in the United Kingdom an IgE test must measure accurately levels as low as 20-30 u IgE/ml. The Phadebas RIST method failed to meet this criterion, and of the remaining tests the double antibody method had the most useful operating range and produced the most reliable results. However, the double antibody method is not available commercially and so, for the majority of laboratories, the Phadebas PRIST technique should be the method chosen. (author)

  14. Quantifying Standing Dead Tree Volume and Structural Loss with Voxelized Terrestrial Lidar Data

    Science.gov (United States)

    Popescu, S. C.; Putman, E.

    2017-12-01

    Standing dead trees (SDTs) are an important forest component and impact a variety of ecosystem processes, yet the carbon pool dynamics of SDTs are poorly constrained in terrestrial carbon cycling models. The ability to model wood decay and carbon cycling in relation to detectable changes in tree structure and volume over time would greatly improve such models. The overall objective of this study was to provide automated aboveground volume estimates of SDTs and automated procedures to detect, quantify, and characterize structural losses over time with terrestrial lidar data. The specific objectives of this study were: 1) develop an automated SDT volume estimation algorithm providing accurate volume estimates for trees scanned in dense forests; 2) develop an automated change detection methodology to accurately detect and quantify SDT structural loss between subsequent terrestrial lidar observations; and 3) characterize the structural loss rates of pine and oak SDTs in southeastern Texas. A voxel-based volume estimation algorithm, "TreeVolX", was developed and incorporates several methods designed to robustly process point clouds of varying quality levels. The algorithm operates on horizontal voxel slices by segmenting the slice into distinct branch or stem sections then applying an adaptive contour interpolation and interior filling process to create solid reconstructed tree models (RTMs). TreeVolX estimated large and small branch volume with an RMSE of 7.3% and 13.8%, respectively. A voxel-based change detection methodology was developed to accurately detect and quantify structural losses and incorporated several methods to mitigate the challenges presented by shifting tree and branch positions as SDT decay progresses. The volume and structural loss of 29 SDTs, composed of Pinus taeda and Quercus stellata, were successfully estimated using multitemporal terrestrial lidar observations over elapsed times ranging from 71 - 753 days. Pine and oak structural loss rates
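
    A minimal voxel-counting sketch is given below to illustrate the general idea: points are snapped to a regular grid and volume is approximated as the number of occupied voxels times the voxel volume. TreeVolX additionally performs slice-wise contour interpolation and interior filling, which this sketch omits; the point cloud is synthetic.

        # Minimal sketch of a voxel-based volume estimate: snap lidar points to a regular
        # 3D grid and approximate volume as (occupied voxels) x (voxel volume). TreeVolX
        # additionally interpolates contours and fills interiors slice by slice, which is
        # omitted here; the point cloud is synthetic.
        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic points roughly on a vertical cylindrical stem (radius 0.15 m, height 4 m).
        theta = rng.uniform(0, 2 * np.pi, 50000)
        z = rng.uniform(0, 4.0, 50000)
        points = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta), z])

        voxel = 0.02  # 2 cm voxels
        idx = np.floor(points / voxel).astype(int)
        occupied = np.unique(idx, axis=0).shape[0]
        print(f"shell volume ~ {occupied * voxel**3:.4f} m^3 (hollow; interiors not filled)")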

  15. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    Science.gov (United States)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.
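
    As a toy illustration of the Bayesian idea (not the trans-dimensional sampler or the EM physics used in the study), the sketch below runs a fixed-dimension Metropolis-Hastings chain for a single conductivity value with a linear stand-in forward model, returning a posterior mean and spread rather than a single best estimate.

        # Toy fixed-dimension Metropolis-Hastings sampler for a single conductivity value,
        # illustrating how Bayesian inversion returns a distribution (hence uncertainty)
        # rather than one "best" model. The linear forward model and noise level are
        # stand-ins for the study's trans-dimensional MCMC and EM physics.
        import numpy as np

        rng = np.random.default_rng(3)

        def forward(conductivity):
            return 2.0 * conductivity            # placeholder forward model

        true_sigma, noise = 0.1, 0.01            # S/m, data noise std
        data = forward(true_sigma) + rng.normal(0, noise)

        def log_posterior(sigma):
            if not (1e-4 < sigma < 10.0):        # uniform prior bounds
                return -np.inf
            return -0.5 * ((data - forward(sigma)) / noise) ** 2

        samples, current = [], 1.0
        lp = log_posterior(current)
        for _ in range(20000):
            prop = current + rng.normal(0, 0.02)
            lp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                current, lp = prop, lp_prop
            samples.append(current)

        post = np.array(samples[5000:])          # discard burn-in
        print(f"posterior conductivity: {post.mean():.3f} +/- {post.std():.3f} S/m")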

  16. An unbiased stereological method for efficiently quantifying the innervation of the heart and other organs based on total length estimations

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela

    2010-01-01

    Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given...... reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...
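
    Although the record is truncated, length estimators of this kind typically rest on the classical stereological relation L_V = 2·Q_A (length density equals twice the number of profiles per unit test area); the worked example below assumes that relation, with hypothetical counts, test area and reference volume.

        # Minimal worked example of the classical stereological relation L_V = 2 * Q_A
        # (length density = 2 x profiles per unit test area), which underlies total-length
        # estimators of this kind. Counts, areas and the reference volume are hypothetical.
        profiles_counted = 180          # nerve fiber profiles hit by the test areas
        total_test_area_mm2 = 0.75      # summed area of all test frames, mm^2
        reference_volume_mm3 = 95.0     # e.g. left-ventricle volume from a Cavalieri estimate

        length_density = 2.0 * profiles_counted / total_test_area_mm2   # mm / mm^3
        total_length_mm = length_density * reference_volume_mm3
        print(f"total axon length ~ {total_length_mm / 1000:.1f} m")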

  17. A comparison between boat-based and diver-based methods for quantifying coral bleaching

    Science.gov (United States)

    Zawada, David G.; Ruzicka, Rob; Colella, Michael A.

    2015-01-01

    Recent increases in both the frequency and severity of coral bleaching events have spurred numerous surveys to quantify the immediate impacts and monitor the subsequent community response. Most of these efforts utilize conventional diver-based methods, which are inherently time-consuming, expensive, and limited in spatial scope unless they deploy large teams of scientifically-trained divers. In this study, we evaluated the effectiveness of the Along-Track Reef Imaging System (ATRIS), an automated image-acquisition technology, for assessing a moderate bleaching event that occurred in the summer of 2011 in the Florida Keys. More than 100,000 images were collected over 2.7 km of transects spanning four patch reefs in a 3-h period. In contrast, divers completed 18, 10-m long transects at nine patch reefs over a 5-day period. Corals were assigned to one of four categories: not bleached, pale, partially bleached, and bleached. The prevalence of bleaching estimated by ATRIS was comparable to the results obtained by divers, but only for corals > 41 cm in size. The coral size-threshold computed for ATRIS in this study was constrained by prevailing environmental conditions (turbidity and sea state) and, consequently, needs to be determined on a study-by-study basis. Both ATRIS and diver-based methods have innate strengths and weaknesses that must be weighed with respect to project goals.

  18. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  19. Present status of processing method

    Energy Technology Data Exchange (ETDEWEB)

    Kosako, Kazuaki [Sumitomo Atomic Energy Industries Ltd., Tokyo (Japan)

    1998-11-01

    The present status of processing methods for high-energy nuclear data files was examined. The NJOY94 code is the only one available for such processing. In Japan, present processing with NJOY94 is oriented toward the production of traditional cross-section libraries, because a high-energy transport code that would use a dedicated high-energy cross-section library is not yet clearly established. (author)

  20. Methods of digital image processing

    International Nuclear Information System (INIS)

    Doeler, W.

    1985-01-01

    The increasing use of computerized methods for diagnostic imaging of radiological problems will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to picture data storage, documentation and communication as the main points of interest for the application of digital image processing. As for purely radiological problems, the value of digital image processing lies in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation are very useful. The paper reviews the various methods available for a variety of problems and explains the hardware available for the tasks discussed. (orig.) [de

  1. Development and validation of an HPLC method to quantify camptothecin in polymeric nanocapsule suspensions.

    Science.gov (United States)

    Granada, Andréa; Murakami, Fabio S; Sartori, Tatiane; Lemos-Senna, Elenara; Silva, Marcos A S

    2008-01-01

    A simple, rapid, and sensitive reversed-phase column high-performance liquid chromatographic method was developed and validated to quantify camptothecin (CPT) in polymeric nanocapsule suspensions. The chromatographic separation was performed on a Supelcosil LC-18 column (15 cm x 4.6 mm id, 5 microm) using a mobile phase consisting of methanol-10 mM KH2PO4 (60 + 40, v/v; pH 2.8) at a flow rate of 1.0 mL/min and ultraviolet detection at 254 nm. The calibration graph was linear from 0.5 to 3.0 microg/mL with a correlation coefficient of 0.9979, and the limit of quantitation was 0.35 microg/mL. The assay recovery ranged from 97.3 to 105.0%. The intraday and interday relative standard deviation values were also evaluated, and the method was applied to determine the entrapment efficiency and drug content in polymeric nanocapsule suspensions during the early stage of formulation development.
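
    The routine linear-calibration arithmetic behind such a validation can be sketched as follows; the peak areas, the sample response and the spiked concentration are invented numbers used only to show how the calibration slope, intercept, back-calculated concentration and percent recovery are obtained.

```python
import numpy as np

# hypothetical calibration standards: concentration (ug/mL) vs. peak area
conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
area = np.array([1020, 2050, 3010, 4080, 5120, 6090])

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print(f"slope={slope:.1f}, intercept={intercept:.1f}, r={r:.4f}")

# back-calculate an unknown sample and a spiked-recovery check
sample_area = 2540.0
sample_conc = (sample_area - intercept) / slope
spiked_nominal = 1.25                       # ug/mL added (hypothetical)
recovery = 100.0 * sample_conc / spiked_nominal
print(f"measured {sample_conc:.2f} ug/mL, recovery {recovery:.1f}%")
```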

  2. Digital processing methods for bronchograms

    International Nuclear Information System (INIS)

    Mamilyaev, R.M.; Popova, N.P.; Matsulevich, T.V.

    1989-01-01

    The technique of digital processing of bronchograms, aimed at separating morphological details of the bronchi and increasing the clarity of the outlines of contrasted bronchi, is described. A block diagram of digital processing on an automated image processing system is given. It is shown that digital processing of bronchograms clearly outlines the bronchial walls and makes measurements of bronchial diameters easier and more reliable. Considerable advantages of digital image processing as compared with optical methods are shown

  3. Quantifying Anthropogenic Dust Emissions

    Science.gov (United States)

    Webb, Nicholas P.; Pierre, Caroline

    2018-02-01

    Anthropogenic land use and land cover change, including local environmental disturbances, moderate rates of wind-driven soil erosion and dust emission. These human-dust cycle interactions impact ecosystems and agricultural production, air quality, human health, biogeochemical cycles, and climate. While the impacts of land use activities and land management on aeolian processes can be profound, the interactions are often complex and assessments of anthropogenic dust loads at all scales remain highly uncertain. Here, we critically review the drivers of anthropogenic dust emission and current evaluation approaches. We then identify and describe opportunities to: (1) develop new conceptual frameworks and interdisciplinary approaches that draw on ecological state-and-transition models to improve the accuracy and relevance of assessments of anthropogenic dust emissions; (2) improve model fidelity and capacity for change detection to quantify anthropogenic impacts on aeolian processes; and (3) enhance field research and monitoring networks to support dust model applications to evaluate the impacts of disturbance processes on local to global-scale wind erosion and dust emissions.

  4. PATIENT-CENTERED DECISION MAKING: LESSONS FROM MULTI-CRITERIA DECISION ANALYSIS FOR QUANTIFYING PATIENT PREFERENCES.

    Science.gov (United States)

    Marsh, Kevin; Caro, J Jaime; Zaiser, Erica; Heywood, James; Hamed, Alaa

    2018-01-01

    Patient preferences should be a central consideration in healthcare decision making. However, stories of patients challenging regulatory and reimbursement decisions have led to questions on whether patient voices are being considered sufficiently during those decision making processes. This has led some to argue that it is necessary to quantify patient preferences before they can be adequately considered. This study considers the lessons from the use of multi-criteria decision analysis (MCDA) for efforts to quantify patient preferences. It defines MCDA and summarizes the benefits it can provide to decision makers, identifies examples of MCDAs that have involved patients, and summarizes good practice guidelines as they relate to quantifying patient preferences. The guidance developed to support the use of MCDA in healthcare provides some useful considerations for the quantification of patient preferences, namely that researchers should give appropriate consideration to: the heterogeneity of patient preferences, and its relevance to decision makers; the cognitive challenges posed by different elicitation methods; and the validity of the results they produce. Furthermore, it is important to consider how the relevance of these considerations varies with the decision being supported. The MCDA literature holds important lessons for how patient preferences should be quantified to support healthcare decision making.
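
    A weighted-sum MCDA is the simplest way to see how quantified preferences enter a decision; the sketch below scores two hypothetical treatments against criteria weighted by elicited patient preferences. The criteria, weights and scores are illustrative only and do not come from the study.

```python
# minimal weighted-sum MCDA sketch (illustrative values only)
criteria = ["efficacy", "side_effects", "convenience"]
weights = {"efficacy": 0.5, "side_effects": 0.3, "convenience": 0.2}  # sum to 1

# performance scores on a 0-100 scale for two hypothetical options
scores = {
    "treatment_A": {"efficacy": 80, "side_effects": 40, "convenience": 70},
    "treatment_B": {"efficacy": 65, "side_effects": 75, "convenience": 60},
}

for option, s in scores.items():
    total = sum(weights[c] * s[c] for c in criteria)
    print(option, round(total, 1))

# heterogeneity of preferences can be explored by re-running the scoring
# with different weight sets (e.g., one per patient subgroup)
```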

  5. In-vivo quantification of primary microRNA processing by Drosha with a luciferase based system

    International Nuclear Information System (INIS)

    Allegra, Danilo; Mertens, Daniel

    2011-01-01

    Research highlights: → Posttranscriptional regulation of miRNA processing is difficult to quantify. → Our in-vivo processing assay can quantify Drosha cleavage in live cells. → It is based on luciferase reporters fused with pri-miRNAs. → The assay validates the processing defect caused by a mutation in pri-16-1. → It is a sensitive method to quantify pri-miRNA cleavage by Drosha in live cells. -- Abstract: The RNAse III Drosha is responsible for the first step of microRNA maturation, the cleavage of the primary miRNA to produce the precursor miRNA. Processing by Drosha is finely regulated and influences the amount of mature microRNA in a cell. We describe in the present work a method to quantify Drosha processing activity in-vivo, which is applicable to any microRNA. Compared with other methods for measuring Drosha activity, our system is faster and scalable, can be used with any cellular system and does not require cell sorting or the use of radioactive isotopes. This system is useful for studying the regulation of Drosha activity under physiological and pathological conditions.

  6. a Borehole-Dilution Method for Quantifying Vertical Darcy Fluxes in the Hyporheic Zone

    Science.gov (United States)

    Augustine, S. D.; Annable, M. D.; Cho, J.

    2017-12-01

    The borehole dilution method has consistently and successfully been used for estimating local water fluxes; however, this method can be relatively labor intensive and expensive. This research is aimed at developing a low-cost borehole dilution method for quantifying vertical water fluxes in the hyporheic zone at the surface-groundwater interface. This would allow for the deployment of multiple units within a targeted surface water body and thus produce high-resolution, spatially distributed data on infiltration rates over a short period of time with minimal set-up requirements. The device consists of a 2-inch inner diameter PVC pipe containing short, screened sections in its upper and lower segments. The working unit is driven into the sediment and acts as a continuous flow reactor, creating a pathway between the subsurface pore-water and the overlying surface water where the presence of a hydraulic gradient facilitates vertical movement. We developed a simple electrode and tracer-injection system housed within the unit to inject and measure salt tracer concentrations at the desired intervals while monitoring and storing those measurements using open-source Arduino technology. Preliminary lab and field scale trials provided data that were fitted to both zero- and first-order reaction rate functions for analysis. The field test was conducted over approximately one day within a wet retention basin. The initial results estimated a vertical Darcy flux of 113.5 cm/d. Additional testing over a range of expected Darcy fluxes will be presented, along with an evaluation considering enhanced water flow due to the high hydraulic conductivity of the device.
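
    A common way to analyse point-dilution data is to fit the first-order decline of the tracer concentration and convert the decay constant to an apparent Darcy flux, as in the sketch below. The concentration record and the geometry values are invented placeholders rather than the authors' device parameters, and the simple relation q ≈ k·V/A ignores any borehole-distortion correction factor.

```python
import numpy as np

# hypothetical salt-tracer record: time (h) and relative concentration
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
c = np.array([1.00, 0.78, 0.61, 0.47, 0.37, 0.29])

# first-order fit: ln(C/C0) = -k * t
k_per_hour = -np.polyfit(t, np.log(c / c[0]), 1)[0]

# device geometry (placeholders, not the authors' values)
V = 1.5e-3   # mixing volume of the screened section, m^3
A = 5.0e-3   # cross-sectional area open to flow, m^2

# apparent Darcy flux, ignoring any borehole-distortion factor
q_m_per_h = k_per_hour * V / A
print(f"decay constant {k_per_hour:.3f} 1/h, "
      f"apparent Darcy flux {q_m_per_h * 24 * 100:.1f} cm/d")
```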

  7. Sprinkling experiments to simulate high and intense rainfall for process based investigations - a comparison of two methods

    Science.gov (United States)

    Müller, C.; Seeger, M.; Schneider, R.; Johst, M.; Casper, M.

    2009-04-01

    Land use and land management changes affect runoff and erosion dynamics, so measures within this scope are often directed towards the mitigation of natural hazards such as floods and landslides. However, the effects of these changes (e.g. in soil physics after reforestation or less extensive agriculture) are i) detectable only many years later or ii) hardly observable with conventional methods. Therefore, sprinkling experiments are frequently used for process-based investigations of the near-surface hydrological response as well as rill and interrill erosion. In this study, two different sprinkling systems have been applied under different land uses and at different scales to elucidate and quantify the dominant processes of runoff generation, as well as to relate them to the detachment and transport of solids. The studies took place in the micro-scale Zemmer basin and at Frankelbach in Germany. In the Zemmer basin the sprinkling experiments were performed on agricultural land, while the experiments at Frankelbach were performed at reforested sites. The experiments were carried out i) with a small mobile rainfall simulator delivering high rainfall intensities (40 mm h-1) and ii) with a larger one covering a slope segment and simulating high rainfall amounts (120 mm in 3 days). Both methods show basically comparable results. On the agricultural sites clear differences could be observed between different soil management types: in contrast to the conventionally tilled soils, deep-loosened soils (in combination with conservation tillage) do not produce overland flow, but tend to transfer more water by interflow processes, retaining large amounts in the subsoil. For the forested sites, runoff shows a high variability, as determined by both the larger and the smaller rainfall simulations. This variability is due to the different forest and soil types rather than to the methodologically different settings of the sprinkling systems. Both rainfall simulation systems characterized the runoff behavior in a

  8. BUSINESS PROCESS REENGINEERING AS THE METHOD OF PROCESS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    O. Honcharova

    2013-09-01

    Full Text Available The article is devoted to the analysis of the process management approach. The main understanding of the process management approach is reviewed, and definitions of a process and of process management are given. The methods of business process improvement are also analyzed, among them the fast-analysis solution technique (FAST), benchmarking, reprojecting and reengineering. The main results of applying business process improvement are described in terms of reduced cycle time, costs and errors. The tasks of business process reengineering are also noted, its main stages are outlined, and its main efficiency results and success factors are determined.

  9. Evaluation of Three Field-Based Methods for Quantifying Soil Carbon

    Energy Technology Data Exchange (ETDEWEB)

    Izaurralde, Roberto C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rice, Charles W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wielopolski, Lucien [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ebinger, Michael H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Reeves, James B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Thomson, Allison M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Harris, Ron [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Francis, Barry [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mitra, S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rappaport, Aaron [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Etchevers, Jorge [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sayre, Ken D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Govaerts, Bram [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCarty, G. W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-01-31

    Three advanced technologies to measure soil carbon (C) density (g C m⁻²) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at the Centro International para el Mejoramiento del Maiz y el Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m⁻³) and C concentration (g kg⁻¹) by DC, and the results were reported as soil C density (kg m⁻²). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure was employed at CIMMYT in Mexico, but only with the LIBS and DRIFTS instruments. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods and the standard DC require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities, while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performance, although further work is needed to improve calibration techniques, and (b) demonstrated their portability and their capacity to perform under field conditions.

  10. Quantifying pCO2 in biological ocean acidification experiments: A comparison of four methods.

    Science.gov (United States)

    Watson, Sue-Ann; Fabricius, Katharina E; Munday, Philip L

    2017-01-01

    Quantifying the amount of carbon dioxide (CO2) in seawater is an essential component of ocean acidification research; however, equipment for measuring CO2 directly can be costly and involve complex, bulky apparatus. Consequently, other parameters of the carbonate system, such as pH and total alkalinity (AT), are often measured and used to calculate the partial pressure of CO2 (pCO2) in seawater, especially in biological CO2-manipulation studies, including large ecological experiments and those conducted at field sites. Here we compare four methods of pCO2 determination that have been used in biological ocean acidification experiments: 1) Versatile INstrument for the Determination of Total inorganic carbon and titration Alkalinity (VINDTA) measurement of dissolved inorganic carbon (CT) and AT, 2) spectrophotometric measurement of pHT and AT, 3) electrode measurement of pHNBS and AT, and 4) the direct measurement of CO2 using a portable CO2 equilibrator with a non-dispersive infrared (NDIR) gas analyser. In this study, we found these four methods can produce very similar pCO2 estimates, and the three methods often suited to field-based application (spectrophotometric pHT, electrode pHNBS and CO2 equilibrator) produced estimated measurement uncertainties of 3.5-4.6% for pCO2. Importantly, we are not advocating the replacement of established methods to measure seawater carbonate chemistry, particularly for high-accuracy quantification of carbonate parameters in seawater such as open ocean chemistry, for real-time measures of ocean change, nor for the measurement of small changes in seawater pCO2. However, for biological CO2-manipulation experiments measuring differences of over 100 μatm pCO2 among treatments, we find the four methods described here can produce similar results with careful use.
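
    When several carbonate-system methods are run on the same water sample, a quick way to summarize their agreement is the spread of their pCO2 estimates relative to the mean; the sketch below does this for invented values standing in for the four methods compared in the study (it does not perform any carbonate-chemistry calculation itself).

```python
import numpy as np

# hypothetical pCO2 estimates (uatm) for one sample from the four methods
estimates = {
    "VINDTA_CT_AT": 712.0,
    "spectrophotometric_pHT_AT": 698.0,
    "electrode_pHNBS_AT": 725.0,
    "CO2_equilibrator_NDIR": 705.0,
}

values = np.array(list(estimates.values()))
mean = values.mean()
for name, v in estimates.items():
    print(f"{name:28s} {v:6.1f} uatm  ({100 * (v - mean) / mean:+.1f}% of mean)")

spread = values.max() - values.min()
rel_sd = 100 * values.std(ddof=1) / mean
print(f"mean {mean:.1f} uatm, spread (max-min) {spread:.1f} uatm, relative SD {rel_sd:.1f}%")
```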

  11. Quantifying emissions from spontaneous combustion

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-09-01

    Spontaneous combustion can be a significant problem in the coal industry, not only due to the obvious safety hazard and the potential loss of valuable assets, but also with respect to the release of gaseous pollutants, especially CO2, from uncontrolled coal fires. This report reviews methodologies for measuring emissions from spontaneous combustion and discusses methods for quantifying, estimating and accounting for the purpose of preparing emission inventories.

  12. Quantifying Optical Microangiography Images Obtained from a Spectral Domain Optical Coherence Tomography System

    Directory of Open Access Journals (Sweden)

    Roberto Reif

    2012-01-01

    Full Text Available The blood vessel morphology is known to correlate with several diseases, such as cancer, and is important for describing several tissue physiological processes, like angiogenesis. Therefore, a quantitative method for characterizing the angiography obtained from medical images would have several clinical applications. Optical microangiography (OMAG) is a method for obtaining three-dimensional images of blood vessels within a volume of tissue. In this study we propose to quantify OMAG images obtained with a spectral domain optical coherence tomography system. A technique for determining three measurable parameters (the fractal dimension, the vessel length fraction, and the vessel area density) is proposed and validated. Finally, the repeatability for acquiring OMAG images is determined, and a new method for analyzing small areas from these images is proposed.
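
    Two of the three proposed parameters can be sketched directly from a binary vessel mask: vessel area density as the fraction of vessel pixels, and fractal dimension by box counting. The third (vessel length fraction) would additionally require skeletonization. The mask below is synthetic and the implementation is a generic illustration, not the authors' processing code.

```python
import numpy as np

def vessel_area_density(mask: np.ndarray) -> float:
    """Fraction of the image occupied by vessel pixels."""
    return mask.sum() / mask.size

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Crude box-counting estimate of the fractal dimension of a binary mask."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())    # boxes containing vessel
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# synthetic "vessel" mask: a few lines on a 256x256 image
mask = np.zeros((256, 256), dtype=bool)
mask[128, :] = True
mask[:, 64] = True
idx = np.arange(256)
mask[idx, idx] = True

print("vessel area density:", round(vessel_area_density(mask), 4))
print("fractal dimension  :", round(box_counting_dimension(mask), 2))
```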

  13. Single-cell mechanics--An experimental-computational method for quantifying the membrane-cytoskeleton elasticity of cells.

    Science.gov (United States)

    Tartibi, M; Liu, Y X; Liu, G-Y; Komvopoulos, K

    2015-11-01

    The membrane-cytoskeleton system plays a major role in cell adhesion, growth, migration, and differentiation. F-actin filaments, cross-linkers, binding proteins that bundle F-actin filaments to form the actin cytoskeleton, and integrins that connect the actin cytoskeleton network to the cell plasma membrane and extracellular matrix are major cytoskeleton constituents. Thus, the cell cytoskeleton is a complex composite that can assume different shapes. Atomic force microscopy (AFM)-based techniques have been used to measure cytoskeleton material properties without much attention to cell shape. A recently developed surface chemical patterning method for long-term single-cell culture was used to seed individual cells on circular patterns. A continuum-based cell model, which uses as input the force-displacement response obtained with a modified AFM setup and relates the membrane-cytoskeleton elastic behavior to the cell geometry, while treating all other subcellular components suspended in the cytoplasmic liquid (gel) as an incompressible fluid, is presented and validated by experimental results. The developed analytical-experimental methodology establishes a framework for quantifying the membrane-cytoskeleton elasticity of live cells. This capability may have immense implications in cell biology, particularly in studies seeking to establish correlations between membrane-cytoskeleton elasticity and cell disease, mortality, differentiation, and migration, and provide insight into cell infiltration through nonwoven fibrous scaffolds. The present method can be further extended to analyze membrane-cytoskeleton viscoelasticity, examine the role of other subcellular components (e.g., nucleus envelope) in cell elasticity, and elucidate the effects of mechanical stimuli on cell differentiation and motility. This is the first study to decouple the membrane-cytoskeleton elasticity from cell stiffness and introduce an effective approach for measuring the elastic modulus. The

  14. A novel approach to quantify cybersecurity for electric power systems

    Science.gov (United States)

    Kaster, Paul R., Jr.

    Electric power grid cybersecurity is a topic gaining increased attention in academia, industry, and government circles, but a method of quantifying and evaluating a system's security is not yet commonly accepted. In order to be useful, a quantification scheme must be able to accurately reflect the degree to which a system is secure, simply determine the level of security in a system using real-world values, model a wide variety of attacker capabilities, be useful for planning and evaluation, allow a system owner to publish information without compromising the security of the system, and compare relative levels of security between systems. Published attempts at quantifying cybersecurity fail to meet one or more of these criteria. This document proposes a new method of quantifying cybersecurity that meets those objectives. This dissertation evaluates the current state of cybersecurity research, discusses the criteria mentioned previously, proposes a new quantification scheme, presents an innovative method of modeling cyber attacks, demonstrates that the proposed quantification methodology meets the evaluation criteria, and proposes a line of research for future efforts.

  15. Improved methods for signal processing in measurements of mercury by Tekran® 2537A and 2537B instruments

    Science.gov (United States)

    Ambrose, Jesse L.

    2017-12-01

    ) decrease by 31 to 88 % when the new methods are used in place of the Tekran® method. I recommend that signal processing uncertainties be quantified in future applications of the Tekran® 2537 instruments.

  16. A stochastic approach for quantifying immigrant integration: the Spanish test case

    Science.gov (United States)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
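
    The diffusive-versus-ballistic distinction reported here comes down to how the spread of a quantifier grows with immigrant density: a log-log slope near 0.5 indicates diffusion-like growth of the spread, while a slope near 1 indicates ballistic growth. The sketch below estimates that exponent for synthetic series; the generated data are placeholders, not the Spanish data set.

```python
import numpy as np

rng = np.random.default_rng(1)

density = np.linspace(0.01, 0.20, 40)      # local immigrant density, playing the role of "time"

# synthetic quantifier spreads: one diffusive (~sqrt), one ballistic (~linear)
social_spread = 0.5 * np.sqrt(density) * (1 + 0.05 * rng.normal(size=40))
economic_spread = 2.0 * density * (1 + 0.05 * rng.normal(size=40))

def scaling_exponent(x, y):
    slope, _ = np.polyfit(np.log(x), np.log(y), 1)
    return slope

print("social quantifier exponent   ~", round(scaling_exponent(density, social_spread), 2))
print("economic quantifier exponent ~", round(scaling_exponent(density, economic_spread), 2))
# ~0.5 -> diffusive growth of the spread; ~1.0 -> ballistic growth
```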

  17. A stochastic approach for quantifying immigrant integration: the Spanish test case

    International Nuclear Information System (INIS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-01-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999–2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build. (paper)

  18. Normalized Tritium Quantification Approach (NoTQA) a Method for Quantifying Tritium Contaminated Trash and Debris at LLNL

    International Nuclear Information System (INIS)

    Dominick, J.L.; Rasmussen, C.L.

    2008-01-01

    Several facilities and many projects at LLNL work exclusively with tritium. These operations have the potential to generate large quantities of Low-Level Radioactive Waste (LLW) with the same or similar radiological characteristics. A standardized documented approach to characterizing these waste materials for disposal as radioactive waste will enhance the ability of the Laboratory to manage them in an efficient and timely manner while ensuring compliance with all applicable regulatory requirements. This standardized characterization approach couples documented process knowledge with analytical verification and is very conservative, overestimating the radioactivity concentration of the waste. The characterization approach documented here is the Normalized Tritium Quantification Approach (NoTQA). This document will serve as a Technical Basis Document which can be referenced in radioactive waste characterization documentation packages such as the Information Gathering Document. In general, radiological characterization of waste consists of both developing an isotopic breakdown (distribution) of radionuclides contaminating the waste and using an appropriate method to quantify the radionuclides in the waste. Characterization approaches require varying degrees of rigor depending upon the radionuclides contaminating the waste and the concentration of the radionuclide contaminants as related to regulatory thresholds. Generally, as activity levels in the waste approach a regulatory or disposal facility threshold the degree of required precision and accuracy, and therefore the level of rigor, increases. In the case of tritium, thresholds of concern for control, contamination, transportation, and waste acceptance are relatively high. Due to the benign nature of tritium and the resulting higher regulatory thresholds, this less rigorous yet conservative characterization approach is appropriate. The scope of this document is to define an appropriate and acceptable

  19. Complexity Quantification for Overhead Transmission Line Emergency Repair Scheme via a Graph Entropy Method Improved with Petri Net and AHP Weighting Method

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2014-01-01

    Full Text Available According to the characteristics of emergency repair in overhead transmission line accidents, a complexity quantification method for emergency repair schemes is proposed based on the entropy method from software engineering, improved by using the group AHP (analytic hierarchy process) method and Petri nets. Firstly, an information structure chart model and a process control flowchart model are built with Petri nets. Then the factors affecting the complexity of the emergency repair scheme are quantified into corresponding entropy values. Finally, using the group AHP method, a weight coefficient is assigned to each entropy value before calculating the overall entropy value for the whole emergency repair scheme. Comparing the group AHP weighting method with an average weighting method, experimental results for the former showed a stronger correlation between the quantified complexity entropy values and the actual time consumed in repair, which indicates that this new method is more valid.
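
    The aggregation step of the proposed method, weighting per-factor entropy values by group-AHP weights into one complexity score, can be sketched as below. The factor names, probability distributions and weights are invented for illustration, and the AHP weights are taken as given rather than derived from pairwise-comparison matrices.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# hypothetical distributions describing complexity factors of a repair scheme
factors = {
    "information_structure": [0.4, 0.3, 0.2, 0.1],
    "process_control_flow":  [0.25, 0.25, 0.25, 0.25],
    "resource_allocation":   [0.7, 0.2, 0.1],
}

# group-AHP weight coefficients (assumed values, must sum to 1)
weights = {"information_structure": 0.5,
           "process_control_flow": 0.3,
           "resource_allocation": 0.2}

entropies = {name: shannon_entropy(dist) for name, dist in factors.items()}
overall = sum(weights[name] * h for name, h in entropies.items())
print(entropies)
print("overall complexity entropy:", round(overall, 3))
```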

  20. Understanding the issues around quantifying GHG emissions in the financial sector

    International Nuclear Information System (INIS)

    Jacono, Caline; Poivet, Romain; Havette, Didier; Maille, Catherine; Jaubert, Nathalie; Grandjean, Alain; Cottenceau, Jean-Baptiste; Finidori, Esther; Le Teno, Helene; Cochard, Eric; Sanchez, Thomas; Michaux, Elisabeth; Courcier, Jerome; Marie Lapalle; Guez, Herve; Mia, Ladislas; Agnes Guiral; Martinez, Emmanuel; Rose, Antoine; Breton, Herve; Meyssonier, Guillaume; Arndt, Matthew; Saichs, Nancy; Desfosses, Philippe; Bonnet, Olivier; Rouchon, Jean-Philippe; Smart, Lauren; Lenoel, Benjamin; Dupre, Stanislas; Chenet, Hugues; Lavaud, Patricia; Laviale, Michel; Lucas-Leclin, Valery; Bernasconi, Maxime; Merlin, Alexis; Delettang, Catherine; Gerardi, Anne

    2016-01-01

    In the face of climate change, the financial sector needs access to methods and tools for quantifying GHG emissions. This guide proposes to address the multiple needs of financial institutions (investment banks, insurers, retail banks, commercial banks, asset managers...) in terms of financed emissions quantification. It meets two objectives: to make formal methodological recommendations for financial institutions about their operations-related emissions, and to propose methodological recommendations to quantify financed emissions (Scope 3, Category 15 'Investments'). The guide is divided into three parts. Volume 1 gives background, identifies sectoral challenges related to climate change and offers an overview of the main existing quantification methods and tools. Volume 2 offers practical and operational guidance for estimating emissions from an organisation's back-office functions in the financial sector. Volume 3 (through case studies) offers methodological information to quantify financed emissions through a 'top-down' approach, with an Excel tool to calculate emission factors related to this method

  1. A comparative study of simple methods to quantify cerebral blood flow with acetazolamide challenge by using iodine-123-IMP SPECT with one-point arterial sampling

    Energy Technology Data Exchange (ETDEWEB)

    Ohkubo, Masaki [Niigata Univ. (Japan). School of Health Sciences; Odano, Ikuo

    2000-04-01

    The aim of this study was to compare the accuracy of simplified methods for quantifying rCBF with acetazolamide challenge by using {sup 123}I-N-isopropyl-p-iodoamphetamine (IMP) and SPECT with one-point arterial sampling. After acetazolamide administration we quantified rCBF in 12 subjects by the following three methods: (a) the modified microsphere method, (b) the IMP-autoradiographic (ARG) method based on a two-compartment one-parameter model, and (c) the simplified method based on a two-compartment two-parameter model (functional IMP method). The accuracy of these methods was validated by comparing rCBF values with those obtained by the standard method: the super-early microsphere method with continuous withdrawal of arterial blood. On analyzing rCBF in each flow range (0-0.25, 0.25-0.5, 0.5-0.75 and more than 0.75 ml/g/min), rCBF values obtained by both methods (a) and (c) showed significant correlations (p<0.01) with those obtained by the standard method in every range, but rCBF values obtained by method (b) did not correlate significantly in the high flow ranges (0.5-0.75 and more than 0.75 ml/g/min). Method (c) was found to be the most accurate, even though it needs two serial SPECT scans. When only one SPECT scan can be performed, method (a) is considered superior to method (b) because of its accuracy, especially in high flow regions under acetazolamide loading. (author)

  2. Ventilation in Sewers Quantified by Measurements of CO2

    DEFF Research Database (Denmark)

    Fuglsang, Emil Dietz; Vollertsen, Jes; Nielsen, Asbjørn Haaning

    2012-01-01

    Understanding and quantifying ventilation in sewer systems is a prerequisite to predict transport of odorous and corrosive gasses within the system as well as their interaction with the urban atmosphere. This paper studies ventilation in sewer systems quantified by measurements of the natural...... occurring compound CO2. Most often Danish wastewater is supersaturated with CO2 and hence a potential for stripping is present. A novel model was built based on the kinetics behind the stripping process. It was applied to simulate ventilation rates from field measurements of wastewater temperature, p...

  3. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years extensive studies have been carried out to apply coherent optics methods in real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present research methods for processing "digital holograms" for Internet transmission, together with results.

  4. Calcification–carbonation method for red mud processing

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ruibing [School of Metallurgy, Northeastern University, Shenyang 110819 (China); Laboratory for Simulation and Modelling of Particulate Systems, Department of Chemical Engineering, Monash University, Clayton, Victoria, 3800 (Australia); Zhang, Tingan, E-mail: zhangta@smm.neu.edu.cn [School of Metallurgy, Northeastern University, Shenyang 110819 (China); Liu, Yan; Lv, Guozhi; Xie, Liqun [School of Metallurgy, Northeastern University, Shenyang 110819 (China)

    2016-10-05

    Highlights: • A new approach named the calcification–carbonation method for red mud processing is proposed. • The method can prevent emission of red mud from alumina production and is good for the environment. • Thermodynamic characteristics were investigated. • The method was verified experimentally using a jet-flow reactor. - Abstract: Red mud, the Bayer process residue, is generated by the alumina industry and causes environmental problems. In this paper, a novel calcification–carbonation method that utilizes a large amount of the Bayer process residue is proposed. Using this method, the red mud was calcified with lime to transform the silicon phase into hydrogarnet, and the alkali in the red mud was recovered. Then, the resulting hydrogarnet was decomposed by CO{sub 2} carbonation, affording calcium silicate, calcium carbonate, and aluminum hydroxide. Alumina was recovered using an alkaline solution at a low temperature. The effects of the new process were analyzed by thermodynamic analysis and experiments. The extraction efficiencies of the alumina and soda obtained from the red mud reached 49.4% and 96.8%, respectively. The new red mud, with <0.3% alkali, can be used in cement production. Using a combination of this method and cement production, the Bayer process red mud can be completely utilized.

  5. Improving runoff risk estimates: Formulating runoff as a bivariate process using the SCS curve number method

    Science.gov (United States)

    Shaw, Stephen B.; Walter, M. Todd

    2009-03-01

    The Soil Conservation Service curve number (SCS-CN) method is widely used to predict storm runoff for hydraulic design purposes, such as sizing culverts and detention basins. As traditionally used, the probability of calculated runoff is equated to the probability of the causative rainfall event, an assumption that fails to account for the influence of variations in soil moisture on runoff generation. We propose a modification to the SCS-CN method that explicitly incorporates rainfall return periods and the frequency of different soil moisture states to quantify storm runoff risks. Soil moisture status is assumed to be correlated to stream base flow. Fundamentally, this approach treats runoff as the outcome of a bivariate process instead of dictating a 1:1 relationship between causative rainfall and resulting runoff volumes. Using data from the Fall Creek watershed in western New York and the headwaters of the French Broad River in the mountains of North Carolina, we show that our modified SCS-CN method improves frequency discharge predictions in medium-sized watersheds in the eastern United States in comparison to the traditional application of the method.
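
    The standard SCS-CN runoff equation, and the bivariate idea of averaging it over soil-moisture states weighted by their frequencies, can be sketched as follows. The curve numbers, state probabilities and rainfall depth are illustrative assumptions, not the Fall Creek or French Broad values.

```python
def scs_runoff(P_in, CN):
    """SCS curve number runoff depth (inches) for rainfall P_in (inches)."""
    S = 1000.0 / CN - 10.0          # potential maximum retention
    Ia = 0.2 * S                    # initial abstraction
    if P_in <= Ia:
        return 0.0
    return (P_in - Ia) ** 2 / (P_in - Ia + S)

# rainfall depth for a given return period (hypothetical 3-inch storm)
P = 3.0

# soil-moisture states tied to base-flow percentiles, with assumed frequencies
# and corresponding curve numbers (dry, average, wet antecedent conditions)
states = [("dry", 0.25, 60), ("average", 0.50, 75), ("wet", 0.25, 88)]

expected_Q = sum(freq * scs_runoff(P, CN) for _, freq, CN in states)
single_CN_Q = scs_runoff(P, 75)     # traditional 1:1 rainfall-runoff assumption

print(f"expected runoff over moisture states: {expected_Q:.2f} in")
print(f"runoff assuming average conditions only: {single_CN_Q:.2f} in")
```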

  6. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

    Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  7. A New Sensitive GC-MS-based Method for Analysis of Dipicolinic Acid and Quantifying Bacterial Endospores in Deep Marine Subsurface Sediment

    Science.gov (United States)

    Fang, J.

    2015-12-01

    Marine sediments cover more than two-thirds of the Earth's surface and represent a major part of the deep biosphere. Microbial cells and microbial activity appear to be widespread in these sediments. Recently, we reported the isolation of gram-positive anaerobic spore-forming piezophilic bacteria and the detection of bacterial endospores in marine subsurface sediment from the Shimokita coalbed, Japan. However, modern molecular microbiological methods (e.g., DNA-based microbial detection techniques) cannot detect bacterial endospores, because endospores are impermeable and are not stained by fluorescent DNA dyes or by ribosomal RNA staining techniques such as catalysed reporter deposition fluorescence in situ hybridization. Thus, the total microbial cell abundance in the deep biosphere may have been globally underestimated. This emphasizes the need for a new cultivation-independent approach for the quantification of bacterial endospores in the deep subsurface. Dipicolinic acid (DPA, pyridine-2,6-dicarboxylic acid) is a universal and specific component of bacterial endospores, representing 5-15 wt% of the dry spore, and is therefore a useful indicator and quantifier of bacterial endospores that permits estimation of total spore numbers in the subsurface biosphere. We developed a sensitive analytical method to quantify the DPA content of environmental samples using gas chromatography-mass spectrometry. The method is sensitive and more convenient to use than traditional methods. We applied this method to sediment samples from the South China Sea (obtained during IODP Exp. 349) to determine the abundance of spore-forming bacteria in the deep marine subsurface sediment. Our results suggest that gram-positive, endospore-forming bacteria may be the "unseen majority" in the deep biosphere.
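
    Converting a measured DPA concentration into an endospore abundance is a simple scaling by an assumed DPA content per spore; the sketch below shows the arithmetic with placeholder numbers, including the per-spore DPA value, which in practice must be taken from calibration or the literature rather than from this sketch.

```python
# convert a GC-MS dipicolinic acid (DPA) measurement to spore abundance
# all numbers below are placeholders for illustration only

dpa_measured_nmol = 0.8          # nmol DPA extracted from the sample
sediment_dry_mass_g = 2.0        # grams of dry sediment extracted

# assumed average DPA content per endospore (mol/spore); species- and
# environment-dependent, so treat it as a calibration parameter
dpa_per_spore_mol = 2.2e-16

spores_total = (dpa_measured_nmol * 1e-9) / dpa_per_spore_mol
spores_per_gram = spores_total / sediment_dry_mass_g
print(f"{spores_per_gram:.2e} endospores per gram dry sediment")
```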

  8. METHODS TO QUANTIFY THE UNDERGROUND ECONOMY

    Directory of Open Access Journals (Sweden)

    Oana Simona HUDEA

    2017-12-01

    Full Text Available The underground economy has raised miscellaneous discussions over time; it is a problem of broad interest that affects nations all over the world, without exception, and thereby the well-being of individuals. Although also treated in some previous works of the author, this topic is herein approached from a different perspective, namely that of the distinct methods to be used in order to capture, by quantification, this undesirable economic form. Such methods, whether empirically tested or simply argued for by the researchers who launched them, are presented together with their strengths and weaknesses in revealing, with reasonable accuracy, the level of the above-mentioned informal economy.

  9. Quantifying biopsychosocial aspects in everyday contexts: an integrative methodological approach from the behavioral sciences

    Science.gov (United States)

    Portell, Mariona; Anguera, M Teresa; Hernández-Mendo, Antonio; Jonsson, Gudberg K

    2015-01-01

    Contextual factors are crucial for evaluative research in psychology, as they provide insights into what works, for whom, in what circumstances, in what respects, and why. Studying behavior in context, however, poses numerous methodological challenges. Although a comprehensive framework for classifying methods seeking to quantify biopsychosocial aspects in everyday contexts was recently proposed, this framework does not contemplate contributions from observational methodology. The aim of this paper is to justify and propose a more general framework that includes observational methodology approaches. Our analysis is rooted in two general concepts: ecological validity and methodological complementarity. We performed a narrative review of the literature on research methods and techniques for studying daily life and describe their shared properties and requirements (collection of data in real time, on repeated occasions, and in natural settings) and classification criteria (eg, variables of interest and level of participant involvement in the data collection process). We provide several examples that illustrate why, despite their higher costs, studies of behavior and experience in everyday contexts offer insights that complement findings provided by other methodological approaches. We urge that observational methodology be included in classifications of research methods and techniques for studying everyday behavior and advocate a renewed commitment to prioritizing ecological validity in behavioral research seeking to quantify biopsychosocial aspects. PMID:26089708

  10. Uranium manufacturing process employing the electrolytic reduction method

    International Nuclear Information System (INIS)

    Oda, Yoshio; Kazuhare, Manabu; Morimoto, Takeshi.

    1986-01-01

    The present invention relates to a uranium manufacturing process that employs the electrolytic reduction method, and in particular to a uranium manufacturing process employing an electrolytic reduction method that requires only low voltage. The process in which uranium is obtained by means of the electrolytic method with uranyl acid as the raw material is prior art

  11. Using multiple linear regression techniques to quantify carbon ...

    African Journals Online (AJOL)

    Fallow ecosystems provide a significant carbon stock that can be quantified for inclusion in the accounts of global carbon budgets. Process and statistical models of productivity, though useful, are often technically rigid as the conditions for their application are not easy to satisfy. Multiple regression techniques have been ...

  12. Method and apparatus for processing algae

    Science.gov (United States)

    Chew, Geoffrey; Reich, Alton J.; Dykes, Jr., H. Waite; Di Salvo, Roberto

    2012-07-03

    Methods and apparatus for processing algae are described in which a hydrophilic ionic liquid is used to lyse algae cells. The lysate separates into at least two layers including a lipid-containing hydrophobic layer and an ionic liquid-containing hydrophilic layer. A salt or salt solution may be used to remove water from the ionic liquid-containing layer before the ionic liquid is reused. The used salt may also be dried and/or concentrated and reused. The method can operate at relatively low lysis, processing, and recycling temperatures, which minimizes the environmental impact of algae processing while providing reusable biofuels and other useful products.

  13. Quantifying differences in land use emission estimates implied by definition discrepancies

    Science.gov (United States)

    Stocker, B. D.; Joos, F.

    2015-11-01

    The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review differences of eLUC quantification methods and apply an Earth System Model (ESM) of Intermediate Complexity to quantify them. We find that the magnitude of effects due to merely conceptual differences between ESM and offline vegetation model-based quantifications is ~ 20 % for today. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate secondary component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.
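
    The conceptual difference between a bookkeeping-style estimate (constant environment) and a coupled-model estimate of eLUC can be viewed as a difference of paired simulations. The sketch below uses made-up land-carbon trajectories to show how a primary (bookkeeping-like) component, a total model-based flux, and the implied secondary component are separated; it illustrates the accounting only and is not the ESM or protocol used in the paper.

```python
import numpy as np

years = np.arange(2000, 2011)

# hypothetical cumulative land carbon stocks (PgC) from four runs:
# with/without land use change (LUC), under constant vs. transient environment
C_noLUC_const = 2000 + 0.00 * (years - 2000)
C_LUC_const   = 2000 - 1.20 * (years - 2000)   # constant environment (bookkeeping-like)
C_noLUC_trans = 2000 + 0.80 * (years - 2000)   # CO2/climate-driven sink included
C_LUC_trans   = 2000 - 0.55 * (years - 2000)

# primary eLUC: stock loss under constant environmental conditions
eLUC_primary = C_noLUC_const - C_LUC_const
# total model-based eLUC: stock difference under transient conditions
eLUC_total = C_noLUC_trans - C_LUC_trans
# secondary component implied by the difference between the two definitions
eLUC_secondary = eLUC_total - eLUC_primary

print("cumulative primary eLUC by 2010  :", eLUC_primary[-1], "PgC")
print("cumulative total eLUC by 2010    :", eLUC_total[-1], "PgC")
print("secondary (environment) component:", round(eLUC_secondary[-1], 2), "PgC")
```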

  14. Transport and mass exchange processes in sand and gravel aquifers (v.1)

    International Nuclear Information System (INIS)

    Moltyaner, G.

    1990-01-01

    The objectives of this conference were to exchange information on promising field measurement techniques used for the characterization of spatial variability of geologic formations and on new methods used for quantifying the effect of spatial variability on groundwater flow and transport of materials; to discuss novel developments in the theory of transport processes and simulation methods; and to present views and opinions on future initiatives and directions in the design of large-scale field tracer experiments and the development of conceptual and mathematical models of transport and mass exchange processes. The 46 papers presented in these proceedings are divided into six sections: field studies of transport processes; groundwater tracers and novel field measurement techniques; promising methods and field measurement techniques for quantifying the effect of geological heterogeneities on groundwater flow and transport; novel developments in the theory of transport processes; numerical modelling of transport and mass exchange processes; and field and modelling studies of mass exchange processes. (L.L.)

  15. Comparative analysis of accelerogram processing methods

    International Nuclear Information System (INIS)

    Goula, X.; Mohammadioun, B.

    1986-01-01

    The work described hereinafter is a brief account of an ongoing research project concerning high-quality processing of strong-motion recordings of earthquakes. Several processing procedures have been tested, applied to synthetic signals simulating ground motion that were designed for this purpose. The correction methods operating in the time domain are seen to be strongly dependent upon the sampling rate. Two methods of low-frequency filtering followed by an integration of accelerations yielded satisfactory results [fr

  16. Quantifying uncertainty and resilience on coral reefs using a Bayesian approach

    International Nuclear Information System (INIS)

    Van Woesik, R

    2013-01-01

    Coral reefs are rapidly deteriorating globally. The contemporary management option favors managing for resilience to provide reefs with the capacity to tolerate human-induced disturbances. Yet resilience is most commonly defined as the capacity of a system to absorb disturbances without changing fundamental processes or functionality. Quantifying no change, or the uncertainty of a null hypothesis, is nonsensical using frequentist statistics, but is achievable using a Bayesian approach. This study outlines a practical Bayesian framework that quantifies the resilience of coral reefs using two inter-related models. The first model examines the functionality of coral reefs in the context of their reef-building capacity, whereas the second model examines the recovery rates of coral cover after disturbances. Quantifying intrinsic rates of increase in coral cover and habitat-specific, steady-state equilibria are useful proxies of resilience. A reduction in the intrinsic rate of increase following a disturbance, or the slowing of recovery over time, can be useful indicators of stress; a change in the steady-state equilibrium suggests a phase shift. Quantifying the uncertainty of key reef-building processes and recovery parameters, and comparing these parameters against benchmarks, facilitates the detection of loss of resilience and provides signals of imminent change. (letter)
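
    The two resilience proxies named here, the intrinsic rate of increase and the steady-state (carrying-capacity) coral cover, are the parameters of a logistic recovery curve. The sketch below fits such a curve to invented post-disturbance cover data with a non-Bayesian least-squares routine, as a simple stand-in for the full Bayesian treatment described in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_recovery(t, r, K, c0):
    """Coral cover (%) recovering logistically from initial cover c0."""
    return K / (1 + (K / c0 - 1) * np.exp(-r * t))

# hypothetical coral-cover observations (years since disturbance, % cover)
t_obs = np.array([0, 1, 2, 3, 4, 5, 6, 8, 10], dtype=float)
cover = np.array([4, 6, 9, 13, 18, 23, 27, 32, 34], dtype=float)

(r, K, c0), cov = curve_fit(logistic_recovery, t_obs, cover, p0=[0.5, 35.0, 4.0])
err = np.sqrt(np.diag(cov))
print(f"intrinsic rate r     = {r:.2f} +/- {err[0]:.2f} per yr")
print(f"steady-state cover K = {K:.1f} +/- {err[1]:.1f} %")
# a drop in r after repeated disturbances, or a shift in K, would signal
# loss of resilience or a phase shift
```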

  17. Quantifying uncertainty and resilience on coral reefs using a Bayesian approach

    Science.gov (United States)

    van Woesik, R.

    2013-12-01

    Coral reefs are rapidly deteriorating globally. The contemporary management option favors managing for resilience to provide reefs with the capacity to tolerate human-induced disturbances. Yet resilience is most commonly defined as the capacity of a system to absorb disturbances without changing fundamental processes or functionality. Quantifying no change, or the uncertainty of a null hypothesis, is nonsensical using frequentist statistics, but is achievable using a Bayesian approach. This study outlines a practical Bayesian framework that quantifies the resilience of coral reefs using two inter-related models. The first model examines the functionality of coral reefs in the context of their reef-building capacity, whereas the second model examines the recovery rates of coral cover after disturbances. Quantifying intrinsic rates of increase in coral cover and habitat-specific, steady-state equilibria are useful proxies of resilience. A reduction in the intrinsic rate of increase following a disturbance, or the slowing of recovery over time, can be useful indicators of stress; a change in the steady-state equilibrium suggests a phase shift. Quantifying the uncertainty of key reef-building processes and recovery parameters, and comparing these parameters against benchmarks, facilitates the detection of loss of resilience and provides signals of imminent change.

  18. Detecting periodicities with Gaussian processes

    Directory of Open Access Journals (Sweden)

    Nicolas Durrande

    2016-04-01

    Full Text Available We consider the problem of detecting and quantifying the periodic component of a function given noise-corrupted observations of a limited number of input/output tuples. Our approach is based on Gaussian process regression, which provides a flexible non-parametric framework for modelling periodic data. We introduce a novel decomposition of the covariance function as the sum of periodic and aperiodic kernels. This decomposition allows for the creation of sub-models which capture the periodic nature of the signal and its complement. To quantify the periodicity of the signal, we derive a periodicity ratio which reflects the uncertainty in the fitted sub-models. Although the method can be applied to many kernels, we give a special emphasis to the Matérn family, from the expression of the reproducing kernel Hilbert space inner product to the implementation of the associated periodic kernels in a Gaussian process toolkit. The proposed method is illustrated by considering the detection of periodically expressed genes in the arabidopsis genome.

  19. An evaluation of indices for quantifying tuberculosis transmission using genotypes of pathogen isolates

    Directory of Open Access Journals (Sweden)

    Phong Renault

    2006-06-01

    Full Text Available Background: Infectious diseases are often studied by characterising the population structure of the pathogen using genetic markers. An unresolved problem is the effective quantification of the extent of transmission using genetic variation data from such pathogen isolates. Methods: It is important that transmission indices reflect the growth of the infectious population as well as account for the mutation rate of the marker and the effects of sampling. That is, while responding to this growth rate, indices should be unresponsive to the sample size and the mutation rate. We use simulation methods taking into account both the mutation and sampling processes to evaluate indices designed to quantify transmission of tuberculosis. Results: Previously proposed indices generally perform inadequately according to the above criteria, with the partial exception of the recently proposed Transmission-Mutation Index. Conclusion: Any transmission index needs to take into account mutation of the marker and the effects of sampling. Simple indices are unlikely to capture the full complexity of the underlying processes.

  20. Development of nondestructive measurement system for quantifying radioactivity from crud, liquids and gases in a contaminated pipe

    Energy Technology Data Exchange (ETDEWEB)

    Katagiri, Masaki; Ito, Hirokuni; Wakayama, Naoaki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1992-11-01

    A nondestructive measuring method was developed to quantify separately the radioisotope concentrations of crud, liquids and gases in a contaminated pipe. For applying this method to practical in-situ measurement, a nondestructive measurement system was developed. The measurement system consists of in-situ equipment for gamma-ray scanning measurements and data-processing equipment for analysis of radioactivity. Communication between the two units is performed by a wireless telemeter device. To construct the measurement system, a gas-cooled Ge detector of practical use, small-sized electronic circuits, a fast and reliable telemeter device and automatic measurement techniques using a computer were developed. Through performance tests, it was confirmed that the measurement system is effective for in-situ measurements of radioactivity in a contaminated pipe. The measurement accuracy of the system is 10-20%, as determined by comparison with solid and liquid radioisotope concentrations in a mock-up contaminated pipe that had been quantified in advance. (author).

  1. Tracing and quantifying groundwater inflow into lakes using a simple method for radon-222 analysis

    Directory of Open Access Journals (Sweden)

    T. Kluge

    2007-09-01

    Full Text Available Due to its high activities in groundwater, the radionuclide 222Rn is a sensitive natural tracer to detect and quantify groundwater inflow into lakes, provided the comparatively low activities in the lakes can be measured accurately. Here we present a simple method for radon measurements in the low-level range down to 3 Bq m−3, appropriate for groundwater-influenced lakes, together with a concept to derive inflow rates from the radon budget in lakes. The analytical method is based on a commercially available radon detector and combines the advantages of established procedures with regard to efficient sampling and sensitive analysis. Large volume (12 l water samples are taken in the field and analyzed in the laboratory by equilibration with a closed air loop and alpha spectrometry of radon in the gas phase. After successful laboratory tests, the method has been applied to a small dredging lake without surface in- or outflow in order to estimate the groundwater contribution to the hydrological budget. The inflow rate calculated from a 222Rn balance for the lake is around 530 m³ per day, which is comparable to the results of previous studies. In addition to the inflow rate, the vertical and horizontal radon distribution in the lake provides information on the spatial distribution of groundwater inflow to the lake. The simple measurement and sampling technique encourages further use of radon to examine groundwater-lake water interaction.
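
    The radon budget described above reduces, at steady state, to a simple mass balance: groundwater-borne 222Rn input equals the losses to radioactive decay and to degassing at the lake surface. The sketch below is a minimal illustration of that balance, not the authors' actual calculation; the variable names, the neglect of surface in- and outflow, and the example numbers are all assumptions.

```python
# Minimal steady-state 222Rn mass balance for a lake without surface in-/outflow.
# Hypothetical sketch: variable names and example values are assumptions, not the
# study's parameters. Source of radon: groundwater inflow; sinks: radioactive
# decay in the water column and degassing to the atmosphere.
import math

LAMBDA_RN = math.log(2) / 3.82  # 222Rn decay constant, 1/day (half-life 3.82 d)

def groundwater_inflow(c_lake, c_gw, volume, area, k_gas):
    """Return groundwater inflow (m3/day) from a steady-state 222Rn budget.

    c_lake : mean radon activity in the lake (Bq/m3)
    c_gw   : radon activity of inflowing groundwater (Bq/m3)
    volume : lake volume (m3)
    area   : lake surface area (m2)
    k_gas  : gas transfer velocity for radon degassing (m/day)
    """
    decay_loss = LAMBDA_RN * c_lake * volume   # Bq/day lost to decay
    atm_loss = k_gas * c_lake * area           # Bq/day lost to the atmosphere
    return (decay_loss + atm_loss) / c_gw      # m3/day of groundwater required

# Example with made-up numbers of plausible magnitude:
print(groundwater_inflow(c_lake=50.0, c_gw=5000.0, volume=2.0e5, area=4.0e4, k_gas=0.5))
```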

  2. Focal depth measurements of the vaginal wall: a new method to noninvasively quantify vaginal wall thickness in the diagnosis and treatment of vaginal atrophy

    NARCIS (Netherlands)

    Weber, Maaike A.; Diedrich, Chantal M.; Ince, Can; Roovers, Jan-Paul

    2016-01-01

    The aim of the study was to evaluate if vaginal focal depth measurement could be a noninvasive method to quantify vaginal wall thickness. Postmenopausal women undergoing topical estrogen therapy because of vaginal atrophy (VA) were recruited. VA was diagnosed based on the presence of symptoms and

  3. Quantifying human exposure to air pollution - moving from static monitoring to spatio-temporally resolved personal exposure assessment

    DEFF Research Database (Denmark)

    Steinle, Susanne; Reis, Stefan; Sabel, Clive E

    2013-01-01

    Quantifying human exposure to air pollutants is a challenging task. Ambient concentrations of air pollutants at potentially harmful levels are ubiquitous in urban areas and subject to high spatial and temporal variability. At the same time, every individual has unique activity-patterns. Exposure … results from multifaceted relationships and interactions between environmental and human systems, adding complexity to the assessment process. Traditionally, approaches to quantify human exposure have relied on pollutant concentrations from fixed air quality network sites and static population … Highlights: • … exposure studies to accurately assess human health risks. • We discuss potential and shortcomings of methods and tools with a focus on how their development influences study design. • We propose a novel conceptual model for integrated health impact assessment of human exposure to air pollutants.

  4. A Generalizable Methodology for Quantifying User Satisfaction

    Science.gov (United States)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times, rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
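
    Session-time modelling of this kind is commonly built on standard survival-analysis tooling. The sketch below is a generic illustration using the Python lifelines package; the data layout, column names and covariates (network delay and packet loss) are hypothetical and are not taken from the paper.

```python
# Generic survival-analysis sketch for session times (hypothetical data layout).
# Each row: one session, its duration, whether it ended normally (event observed),
# and candidate performance factors as covariates.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

sessions = pd.DataFrame({
    "duration_min": [12.0, 45.5, 3.2, 60.0, 27.8, 8.1, 33.0, 15.5],
    "ended":        [1,    1,    1,   0,    1,    1,   1,    1],   # 0 = censored
    "delay_ms":     [30,   25,   180, 20,   60,   150, 90,   35],
    "loss_pct":     [0.1,  0.0,  2.5, 0.0,  0.5,  1.8, 0.2,  1.0],
})

# Non-parametric view of how long users stay with the application.
kmf = KaplanMeierFitter()
kmf.fit(sessions["duration_min"], event_observed=sessions["ended"])
print(kmf.median_survival_time_)

# Proportional-hazards regression to rank performance factors by their impact
# on the "hazard" of a user leaving the application early (toy data only).
cph = CoxPHFitter()
cph.fit(sessions, duration_col="duration_min", event_col="ended")
cph.print_summary()
```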

  5. QUANTIFYING SUPPLIERS’ PRODUCT QUALITY: AN EXPLORATORY PRODUCT AUDIT METHOD

    Directory of Open Access Journals (Sweden)

    S. Avakh Darestani

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The quality of the raw material and supplied product from suppliers plays a critical role in the quality of the final product. It has become the norm that vehicle manufacturers require their suppliers to measure product quality and service with a product audit method. Measuring product quality is emphasised by QS9000, VDA6.5 and ISO/TS16949. From a competitive standpoint, and also to see continuous improvement in business, companies need to monitor their suppliers' performance. Quality and delivery are two very important indicators of supplier performance. This paper presents a statistical method for measuring the quality of supplied product. The method allocates different weights to variable and attribute characteristics. Moreover, following the normal distribution, the tolerance zone is divided into three regions with different scores. The quality of suppliers' products can therefore be monitored based on the Product Quality Audit Score (PQAS). The method may also be employed by organisations to monitor their raw material, work-in-process parts, and final product, and it can serve as an indicator of supplier quality behaviour.

    AFRIKAANS SUMMARY (translated): The quality of raw materials and products/components supplied by suppliers plays a critical role in the quality of the final product. It has become the norm in the motor manufacturing industry that suppliers are expected to measure their product quality and service by means of a product audit method. The measurement of product quality is emphasised by QS9000, VDA6.5 and ISO/TS16949. From a competitive point of view, and also to monitor continuous improvement, it is essential that supplier performance be measured. Quality and delivery are two of the most important indicators of supplier performance. In this article a statistical model is presented for measuring the quality of the delivered product. The method assigns different …

  6. Multi-block methods in multivariate process control

    DEFF Research Database (Denmark)

    Kohonen, J.; Reinikainen, S.P.; Aaljoki, K.

    2008-01-01

    In chemometric studies all predictor variables are usually collected in one data matrix X. This matrix is then analyzed by PLS regression or other methods. When data from several different sub-processes are collected in one matrix, there is a possibility that the effects of some sub-processes may … With multi-block (MB) methods the effect of a sub-process can be seen, and an example with two blocks, near infra-red (NIR) and process data, is shown. The results show improvements in the modelling task when a MB-based approach is used. This way of working with data gives more information on the process than if all data … are in one X-matrix. The procedure is demonstrated by an industrial continuous process, where knowledge about the sub-processes is available and the X-matrix can be divided into blocks between process variables and NIR spectra.

  7. Use of smartphones and portable media devices for quantifying human movement characteristics of gait, tendon reflex response, and Parkinson's disease hand tremor.

    Science.gov (United States)

    LeMoyne, Robert; Mastroianni, Timothy

    2015-01-01

    Smartphones and portable media devices are both equipped with sensor components, such as accelerometers. A software application enables these devices to function as a robust wireless accelerometer platform. The recorded accelerometer waveform can be transmitted wirelessly as an e-mail attachment through connectivity to the Internet. The implication of such devices as a wireless accelerometer platform is that the experimental and post-processing locations can be anywhere in the world. Gait was quantified by mounting a smartphone or portable media device proximal to the lateral malleolus of the ankle joint. Attributes of the gait cycle were quantified with considerable accuracy and reliability. The patellar tendon reflex response was quantified by using the device in tandem with a potential-energy impact pendulum to evoke the patellar tendon reflex. The maximum-acceleration feature of the reflex-response waveform displayed considerable accuracy and reliability. By mounting the smartphone or portable media device to the dorsum of the hand through a glove, Parkinson's disease hand tremor was quantified and contrasted with significance against a non-Parkinson's disease steady-hand control. With the methods advocated in this chapter, any aspect of human movement may be quantified through smartphones or portable media devices and post-processed anywhere in the world. These wearable devices are anticipated to substantially impact the biomedical and healthcare industry.
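
    As a rough illustration of how gait attributes can be extracted from such a device, the sketch below detects heel-strike-like peaks in an ankle-mounted accelerometer trace and derives stride times; the sampling rate, thresholds and synthetic signal are assumptions, not parameters from the chapter.

```python
# Generic sketch: estimate stride times from an ankle-mounted accelerometer trace.
# Sampling rate, thresholds and the synthetic signal are assumptions for
# illustration only.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                        # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)      # 10 s of data
# Synthetic "gait" signal: one dominant peak per stride (~1.1 s) plus noise.
acc = np.sin(2 * np.pi * t / 1.1) ** 7 + 0.05 * np.random.randn(t.size)

# Heel-strike-like events: prominent peaks separated by at least 0.6 s.
peaks, _ = find_peaks(acc, height=0.5, distance=int(0.6 * fs))
stride_times = np.diff(peaks) / fs

print("mean stride time [s]:", stride_times.mean())
print("stride-time variability (CV %):", 100 * stride_times.std() / stride_times.mean())
```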

  8. Comparative Study of Different Processing Methods for the ...

    African Journals Online (AJOL)

    The two processing methods reduced the cyanide concentration to the minimum level required by the World Health Organization (10 mg/kg). The mechanical pressing-fermentation method removed more cyanide than the fermentation processing method. Keywords: Cyanide, Fermentation, Manihot ...

  9. Coupling and quantifying resilience and sustainability in facilities management

    DEFF Research Database (Denmark)

    Cox, Rimante Andrasiunaite; Nielsen, Susanne Balslev; Rode, Carsten

    2015-01-01

    Purpose – The purpose of this paper is to consider how to couple and quantify resilience and sustainability, where sustainability refers not only to environmental impact, but also to economic and social impacts. The way a particular function of a building is provisioned may have significant repercussions beyond just resilience. The goal is to develop a decision support tool for facilities managers. Design/methodology/approach – A risk framework is used to quantify both resilience and sustainability in monetary terms. The risk framework makes it possible to couple resilience and sustainability, so that the provisioning of a particular building can be investigated with consideration of functional, environmental, economic and, possibly, social dimensions. Findings – The method of coupling and quantifying resilience and sustainability (CQRS) is illustrated with a simple example that highlights how very different …

  10. Quantifying Solar Cell Cracks in Photovoltaic Modules by Electroluminescence Imaging

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Hacke, Peter; Sera, Dezso

    2015-01-01

    This article proposes a method for quantifying the percentage of partially and totally disconnected solar cell cracks by analyzing electroluminescence images of the photovoltaic module taken under high- and low-current forward bias. The method is based on the analysis of the module’s electrolumin...

  11. A field comparison of multiple techniques to quantify groundwater - surface-water interactions

    Science.gov (United States)

    González-Pinzón, Ricardo; Ward, Adam S; Hatch, Christine E; Wlostowski, Adam N; Singha, Kamini; Gooseff, Michael N.; Haggerty, Roy; Harvey, Judson; Cirpka, Olaf A; Brock, James T

    2015-01-01

    Groundwater–surface-water (GW-SW) interactions in streams are difficult to quantify because of heterogeneity in hydraulic and reactive processes across a range of spatial and temporal scales. The challenge of quantifying these interactions has led to the development of several techniques, from centimeter-scale probes to whole-system tracers, including chemical, thermal, and electrical methods. We co-applied conservative and smart reactive solute-tracer tests, measurement of hydraulic heads, distributed temperature sensing, vertical profiles of solute tracer and temperature in the stream bed, and electrical resistivity imaging in a 450-m reach of a 3rd-order stream. GW-SW interactions were not spatially expansive, but were high in flux through a shallow hyporheic zone surrounding the reach. NaCl and resazurin tracers suggested different surface–subsurface exchange patterns in the upper ⅔ and lower ⅓ of the reach. Subsurface sampling of tracers and vertical thermal profiles quantified relatively high fluxes through a 10- to 20-cm deep hyporheic zone with chemical reactivity of the resazurin tracer indicated at 3-, 6-, and 9-cm sampling depths. Monitoring of hydraulic gradients along transects with MINIPOINT streambed samplers starting ∼40 m from the stream indicated that groundwater discharge prevented development of a larger hyporheic zone, which progressively decreased from the stream thalweg toward the banks. Distributed temperature sensing did not detect extensive inflow of ground water to the stream, and electrical resistivity imaging showed limited large-scale hyporheic exchange. We recommend choosing technique(s) based on: 1) clear definition of the questions to be addressed (physical, biological, or chemical processes), 2) explicit identification of the spatial and temporal scales to be covered and those required to provide an appropriate context for interpretation, and 3) maximizing generation of mechanistic understanding and reducing costs of

  12. Quantifying construction and demolition waste: An analytical review

    International Nuclear Information System (INIS)

    Wu, Zezhou; Yu, Ann T.W.; Shen, Liyin; Liu, Guiwen

    2014-01-01

    Highlights: • Prevailing C and D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attentions should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C and D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C and D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C and D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested
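
    Of the methodology categories identified above, the waste generation rate method is the simplest to illustrate: gross floor area is multiplied by an empirical per-area generation rate for each waste stream. The sketch below shows the shape of that calculation with made-up rates; it is not drawn from any of the reviewed studies.

```python
# Waste generation rate method, schematically: waste (t) = gross floor area (m2)
# x empirical generation rate (kg/m2). The rates below are illustrative
# placeholders, not values from the reviewed studies.
GENERATION_RATES_KG_PER_M2 = {   # hypothetical, per waste stream
    "concrete": 25.0,
    "brick":    12.0,
    "timber":    3.5,
    "metal":     1.2,
}

def estimate_waste(gross_floor_area_m2: float) -> dict:
    """Return estimated waste per stream in tonnes for a new-build project."""
    return {stream: rate * gross_floor_area_m2 / 1000.0
            for stream, rate in GENERATION_RATES_KG_PER_M2.items()}

estimate = estimate_waste(8_000)          # an 8,000 m2 building (example)
print(estimate, "total (t):", round(sum(estimate.values()), 1))
```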

  13. Combined use of thermal methods and seepage meters to efficiently locate, quantify, and monitor focused groundwater discharge to a sand-bed stream

    Science.gov (United States)

    Rosenberry, Donald O.; Briggs, Martin A.; Delin, Geoffrey N.; Hare, Danielle K.

    2016-01-01

    Quantifying flow of groundwater through streambeds often is difficult due to the complexity of aquifer-scale heterogeneity combined with local-scale hyporheic exchange. We used fiber-optic distributed temperature sensing (FO-DTS), seepage meters, and vertical temperature profiling to locate, quantify, and monitor areas of focused groundwater discharge in a geomorphically simple sand-bed stream. This combined approach allowed us to rapidly focus efforts at locations where prodigious amounts of groundwater discharged to the Quashnet River on Cape Cod, Massachusetts, northeastern USA. FO-DTS detected numerous anomalously cold reaches one to several m long that persisted over two summers. Seepage meters positioned upstream, within, and downstream of 7 anomalously cold reaches indicated that rapid groundwater discharge occurred precisely where the bed was cold; median upward seepage was nearly 5 times faster than seepage measured in streambed areas not identified as cold. Vertical temperature profilers deployed next to 8 seepage meters provided diurnal-signal-based seepage estimates that compared remarkably well with seepage-meter values. Regression slope and R2 values both were near 1 for seepage ranging from 0.05 to 3.0 m d−1. Temperature-based seepage model accuracy was improved with thermal diffusivity determined locally from diurnal signals. Similar calculations provided values for streambed sediment scour and deposition at subdaily resolution. Seepage was strongly heterogeneous even along a sand-bed river that flows over a relatively uniform sand and fine-gravel aquifer. FO-DTS was an efficient method for detecting areas of rapid groundwater discharge, even in a strongly gaining river, that can then be quantified over time with inexpensive streambed thermal methods.

  14. Method for pre-processing LWR spent fuel

    International Nuclear Information System (INIS)

    Otsuka, Katsuyuki; Ebihara, Hikoe.

    1986-01-01

    Purpose: To facilitate the decladding of spent fuel, cladding tube processing, and waste gas recovery, and to enable efficient execution of the main reprocessing steps that follow. Constitution: Spent fuel assemblies are sent to a cutting process where they are cut into chips of easy-to-process size. In a thermal decladding process, the chips undergo thermal cycling in air, with the processing temperature raised and lowered within the range of 700 deg C to 1200 deg C, oxidizing the zircaloy of the cladding tubes into zirconia. The oxidized cladding tubes develop a number of fine cracks and become very brittle, so that even a slight mechanical force loosens them from the fuel pellets and reduces them to powder. The processed products are then separated into zirconia sand and fuel pellets by gravitational selection or by sifting, the zirconia sand being sent to a waste processing process and the fuel pellets to a melting-refining process. (Yoshino, Y.)

  15. A method to quantify tritium inside waste drums: He{sup 3} ingrowth method

    Energy Technology Data Exchange (ETDEWEB)

    Godot, A.; Lepeytre, C.; Hubinois, J.C. [CEA Valduc, Dept. Traitement Materiaux Nucleaires, Service Analyses- Dechets, Lab. Chimie Analytique, 21 - Is-sur-Tille (France); Arseguel, A.; Daclin, J.P.; Douche, C. [CEA Valduc, Dept. Traitement Materiaux Nucleaires, Service Analyses- Dechets, Lab. de Gestion des Dechets Trities, 21 - Is-sur-Tille (France)

    2008-07-15

    This method enables an indirect, non-intrusive and non-destructive measurement of the tritium activity in waste drums. The amount of tritium enclosed inside a waste drum can be determined by measuring the drum's {sup 3}He leak rate. Simulation predicts that a few months are necessary to establish equilibrium between the {sup 3}He production inside the drum and the {sup 3}He leakage from it. In practice, after one year of storage, {sup 3}He can be sampled outside the drum by means of a confinement chamber that collects the {sup 3}He outflow. The apparatus, the experimental procedure and the calculation of tritium activity from mass-spectrometric {sup 3}He measurements are detailed. The industrial device, based on a confinement cell and an automated process to measure the {sup 3}He amount at the initial time and after the confinement time, is described. Firstly, reference drums containing a certified tritium activity (HTO) in addition to organic materials were measured to qualify the method and to evaluate its performance. Secondly, the tritium activity of organic waste drums from the storage building at Valduc was determined. Results of the qualification and optimised values of the experimental parameters are reported in order to establish the performance of this industrial device. In conclusion, the apparatus enables the measurement of an activity as low as 1 GBq of tritium in a 200 litre drum containing organic wastes. (authors)
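
    The ingrowth principle lends itself to a back-of-the-envelope check: once the drum has reached equilibrium, each tritium decay releases one 3He atom, so the drum's 3He leak rate in atoms per second approximates the enclosed tritium activity in becquerels. The sketch below is an illustrative calculation under that assumption; the confinement time and the measured amounts of 3He are invented.

```python
# Illustrative equilibrium calculation for the 3He ingrowth principle.
# Assumption: the drum is at steady state, so the 3He leak rate (atoms/s)
# equals the tritium decay rate (Bq). All numbers are invented.
N_AVOGADRO = 6.022e23

def tritium_activity_gbq(n_he3_start_mol, n_he3_end_mol, confinement_time_s):
    """Tritium activity (GBq) from the 3He accumulated in the confinement cell."""
    delta_atoms = (n_he3_end_mol - n_he3_start_mol) * N_AVOGADRO
    leak_rate_atoms_per_s = delta_atoms / confinement_time_s
    return leak_rate_atoms_per_s / 1e9   # 1 decay -> 1 atom of 3He, 1 Bq = 1/s

# Example: 1.5e-10 mol of 3He collected over 24 h of confinement (~1 GBq).
print(tritium_activity_gbq(0.0, 1.5e-10, 24 * 3600))
```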

  16. An extended diffraction tomography method for quantifying structural damage using numerical Green's functions.

    Science.gov (United States)

    Chan, Eugene; Rose, L R Francis; Wang, Chun H

    2015-05-01

    Existing damage imaging algorithms for detecting and quantifying structural defects, particularly those based on diffraction tomography, assume far-field conditions for the scattered field data. This paper presents a major extension of diffraction tomography that can overcome this limitation and utilises a near-field multi-static data matrix as the input data. This new algorithm, which employs numerical solutions of the dynamic Green's functions, makes it possible to quantitatively image laminar damage even in complex structures for which the dynamic Green's functions are not available analytically. To validate this new method, the numerical Green's functions and the multi-static data matrix for laminar damage in flat and stiffened isotropic plates are first determined using finite element models. Next, these results are time-gated to remove boundary reflections, followed by discrete Fourier transform to obtain the amplitude and phase information for both the baseline (damage-free) and the scattered wave fields. Using these computationally generated results and experimental verification, it is shown that the new imaging algorithm is capable of accurately determining the damage geometry, size and severity for a variety of damage sizes and shapes, including multi-site damage. Some aspects of minimal sensors requirement pertinent to image quality and practical implementation are also briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
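
    The pre-processing chain described above (time-gating the recorded signals, then taking a discrete Fourier transform of the baseline and damaged-case wave fields) can be sketched generically as follows; the array shapes, excitation frequency and gate settings are assumptions, and the imaging step itself is not shown.

```python
# Generic sketch of the pre-processing described above: time-gate the recorded
# signals, Fourier-transform them, and form the scattered-field multi-static
# matrix as (damaged - baseline). Shapes, frequency and gate are assumptions.
import numpy as np

fs = 1.0e6          # sampling rate, Hz (assumed)
f0 = 100.0e3        # excitation centre frequency, Hz (assumed)

def scattered_field_matrix(baseline, damaged, gate_start, gate_len):
    """baseline, damaged: arrays (n_tx, n_rx, n_samples) of time traces.
    Returns the complex multi-static data matrix at f0 for the scattered field."""
    window = np.zeros(baseline.shape[-1])
    window[gate_start:gate_start + gate_len] = np.hanning(gate_len)  # remove late boundary echoes

    freqs = np.fft.rfftfreq(baseline.shape[-1], d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))                 # frequency bin closest to f0

    base_f = np.fft.rfft(baseline * window, axis=-1)[..., k]
    dam_f = np.fft.rfft(damaged * window, axis=-1)[..., k]
    return dam_f - base_f      # amplitude and phase of the scattered field

rng = np.random.default_rng(0)
traces = rng.standard_normal((8, 8, 2048))            # placeholder "measurements"
M = scattered_field_matrix(traces, traces * 1.02, gate_start=200, gate_len=1024)
print(M.shape)     # (8, 8): one complex entry per transmit-receive pair
```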

  17. Quantifying the Accuracy of Digital Hemispherical Photography for Leaf Area Index Estimates on Broad-Leaved Tree Species.

    Science.gov (United States)

    Gilardelli, Carlo; Orlando, Francesca; Movedi, Ermes; Confalonieri, Roberto

    2018-03-29

    Digital hemispherical photography (DHP) has been widely used to estimate leaf area index (LAI) in forestry. Despite the advancement in the processing of hemispherical images with dedicated tools, several steps are still manual and thus easily affected by user's experience and sensibility. The purpose of this study was to quantify the impact of user's subjectivity on DHP LAI estimates for broad-leaved woody canopies using the software Can-Eye. Following the ISO 5725 protocol, we quantified the repeatability and reproducibility of the method, thus defining its precision for a wide range of broad-leaved canopies markedly differing in their structure. To get a complete evaluation of the method accuracy, we also quantified its trueness using artificial canopy images with known canopy cover. Moreover, the effect of the segmentation method was analysed. The best results for precision (restrained limits of repeatability and reproducibility) were obtained for high LAI values (>5), with limits corresponding to a variation of 22% in the estimated LAI values. Poorer results were obtained for medium and low LAI values, with a variation of the estimated LAI values that exceeded 40%. Regardless of the LAI range explored, satisfactory results were achieved for trees in row-structured plantations (limits almost equal to 30% of the estimated LAI). Satisfactory results were achieved for trueness, regardless of the canopy structure. The paired t-test revealed that the effect of the segmentation method on LAI estimates was significant. Despite a non-negligible user effect, the accuracy metrics for DHP are consistent with those determined for other indirect methods for LAI estimates, confirming the overall reliability of DHP in broad-leaved woody canopies.
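
    The ISO 5725 precision figures referred to above can be reproduced schematically from replicate estimates: the repeatability limit follows from the within-operator variance and the reproducibility limit adds the between-operator component, both scaled by the conventional 2.8 factor. The sketch below uses invented LAI replicates purely to show the calculation.

```python
# Sketch of ISO 5725-style repeatability (r) and reproducibility (R) limits for
# LAI estimates repeated by several operators. The replicate values below are
# invented; the 2.8 factor is the conventional 1.96*sqrt(2) of ISO 5725-2.
import numpy as np

# rows = operators, columns = replicate LAI estimates of the same canopy (hypothetical)
lai = np.array([
    [5.2, 5.4, 5.1],
    [5.8, 5.6, 5.9],
    [5.0, 5.3, 5.2],
])
p, n = lai.shape                              # operators, replicates per operator

ms_within = lai.var(axis=1, ddof=1).mean()    # repeatability variance s_r^2
ms_between = n * lai.mean(axis=1).var(ddof=1) # between-operator mean squares
s_r2 = ms_within
s_L2 = max(0.0, (ms_between - ms_within) / n) # between-operator variance component
s_R2 = s_r2 + s_L2                            # reproducibility variance

r_limit = 2.8 * np.sqrt(s_r2)
R_limit = 2.8 * np.sqrt(s_R2)
print(f"repeatability limit r = {r_limit:.2f} LAI, reproducibility limit R = {R_limit:.2f} LAI")
```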

  18. Quantifying the Accuracy of Digital Hemispherical Photography for Leaf Area Index Estimates on Broad-Leaved Tree Species

    Directory of Open Access Journals (Sweden)

    Carlo Gilardelli

    2018-03-01

    Full Text Available Digital hemispherical photography (DHP) has been widely used to estimate leaf area index (LAI) in forestry. Despite the advancement in the processing of hemispherical images with dedicated tools, several steps are still manual and thus easily affected by user's experience and sensibility. The purpose of this study was to quantify the impact of user's subjectivity on DHP LAI estimates for broad-leaved woody canopies using the software Can-Eye. Following the ISO 5725 protocol, we quantified the repeatability and reproducibility of the method, thus defining its precision for a wide range of broad-leaved canopies markedly differing in their structure. To get a complete evaluation of the method accuracy, we also quantified its trueness using artificial canopy images with known canopy cover. Moreover, the effect of the segmentation method was analysed. The best results for precision (restrained limits of repeatability and reproducibility) were obtained for high LAI values (>5), with limits corresponding to a variation of 22% in the estimated LAI values. Poorer results were obtained for medium and low LAI values, with a variation of the estimated LAI values that exceeded 40%. Regardless of the LAI range explored, satisfactory results were achieved for trees in row-structured plantations (limits almost equal to 30% of the estimated LAI). Satisfactory results were achieved for trueness, regardless of the canopy structure. The paired t-test revealed that the effect of the segmentation method on LAI estimates was significant. Despite a non-negligible user effect, the accuracy metrics for DHP are consistent with those determined for other indirect methods for LAI estimates, confirming the overall reliability of DHP in broad-leaved woody canopies.

  19. Parametric methods for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text is submitted for the volume 'A Handbook of Spatial Statistics' edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title 'Parametric methods'.) 1 Introduction: This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has through the last two decades been supplemented by likelihood-based methods for parametric spatial point process models … is studied in Section 4, and Bayesian inference in Section 5. On the one hand, as the development in computer technology and computational statistics continues, computationally intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial …

  20. Solar fuel processing efficiency for ceria redox cycling using alternative oxygen partial pressure reduction methods

    International Nuclear Information System (INIS)

    Lin, Meng; Haussener, Sophia

    2015-01-01

    Solar-driven non-stoichiometric thermochemical redox cycling of ceria for the conversion of solar energy into fuels shows promise in achieving high solar-to-fuel efficiency. This efficiency is significantly affected by the operating conditions, e.g. redox temperatures, reduction and oxidation pressures, solar irradiation concentration, or heat recovery effectiveness. We present a thermodynamic analysis of five redox cycle designs to investigate the effects of working conditions on the fuel production. We focused on the influence of approaches to reduce the partial pressure of oxygen in the reduction step, namely mechanical approaches (sweep gassing or vacuum pumping), chemical approaches (chemical scavenger), and combinations thereof. The results indicated that the sweep gas schemes work more efficiently at non-isothermal than at isothermal conditions, and that efficient gas-phase heat recovery and sweep gas recycling are important to ensure efficient fuel processing. The vacuum pump scheme achieved the best efficiencies at isothermal conditions, and at non-isothermal conditions heat recovery was less essential. The use of oxygen scavengers combined with sweep gas and vacuum pump schemes further increased the system efficiency. The present work can be used to predict the performance of solar-driven non-stoichiometric redox cycles and further offers quantifiable guidelines for system design and operation. - Highlights: • A thermodynamic analysis was conducted for ceria-based thermochemical cycles. • Five novel cycle designs and various operating conditions were proposed and investigated. • The pressure reduction method affects the optimal operating conditions for maximized efficiency. • Chemical oxygen scavengers prove to be promising in further increasing efficiency. • Formulation of quantifiable design guidelines for economically competitive solar fuel processing
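
    The headline quantity in analyses of this kind is the solar-to-fuel efficiency, i.e. the heating value of the fuel produced per cycle divided by the total energy input charged to the cycle. The sketch below shows that bookkeeping in its simplest form; the energy terms and example values are placeholders, not the paper's model.

```python
# Simplest form of the solar-to-fuel efficiency bookkeeping used in analyses of
# ceria redox cycles. Energy terms and example values are placeholders only.
HHV_H2 = 286e3          # J/mol, higher heating value of hydrogen

def solar_to_fuel_efficiency(n_fuel_mol, q_solar_J, w_pump_J=0.0, q_sweep_J=0.0):
    """eta = fuel heating value / (solar heat input + penalties charged to the
    cycle for vacuum pumping and sweep-gas handling)."""
    return n_fuel_mol * HHV_H2 / (q_solar_J + w_pump_J + q_sweep_J)

# Example: 1 mol H2 per cycle, 1.2 MJ of concentrated solar heat, 0.15 MJ of
# pumping work charged to the cycle -> roughly 21% efficiency.
print(solar_to_fuel_efficiency(1.0, 1.2e6, w_pump_J=0.15e6))
```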

  1. Quantifying the economic water savings benefit of water hyacinth ...

    African Journals Online (AJOL)

    Quantifying the economic water savings benefit of water hyacinth ... Value Method was employed to estimate the average production value of irrigation water, ... invasions of this nature, as they present significant costs to the economy and ...

  2. A novel method to quantify the activity of alcohol acetyltransferase using a SnO2-based sensor of electronic nose.

    Science.gov (United States)

    Hu, Zhongqiu; Li, Xiaojing; Wang, Huxuan; Niu, Chen; Yuan, Yahong; Yue, Tianli

    2016-07-15

    Alcohol acetyltransferase (AATFase) extensively catalyzes the reactions of alcohols to acetic esters in microorganisms and plants. In this work, a novel method is proposed to quantify the activity of AATFase using a SnO2-based sensor of an electronic nose, exploiting the sensor's higher sensitivity to the reducing alcohol than to the oxidizing ester. The maximum value of the first derivative of the signal from the SnO2-based sensor was found to be an eigenvalue of the isoamyl alcohol concentration. A quadratic polynomial regression perfectly fitted the correlation between this eigenvalue and the isoamyl alcohol concentration. The method was used to determine the AATFase activity in this type of reaction by calculating the conversion rate of isoamyl alcohol. The proposed method has been successfully applied to determine the AATFase activity of a cider yeast strain. Compared with GC-MS, the method shows promise, with ideal recovery and low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
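
    The calibration step described above (extract the maximum of the first derivative of the sensor response, fit a quadratic polynomial against known concentrations, then invert the fit to follow the alcohol consumed) can be sketched as follows; all data values are invented.

```python
# Sketch of the calibration step described above: extract the maximum of the
# first derivative of the SnO2 sensor response and fit a quadratic polynomial
# against known isoamyl alcohol concentrations. All data below are invented.
import numpy as np

def eigenvalue(signal, dt=1.0):
    """Feature used for calibration: maximum of the first derivative."""
    return np.max(np.gradient(signal, dt))

# Hypothetical calibration set: eigenvalues measured for known concentrations (mM).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
eig = np.array([0.11, 0.20, 0.37, 0.66, 1.10])

coeffs = np.polyfit(eig, conc, deg=2)        # quadratic fit: conc = f(eigenvalue)
predict_conc = np.poly1d(coeffs)

# Conversion rate of isoamyl alcohol after the enzymatic reaction:
c0, c_end = 4.0, predict_conc(0.42)          # initial vs. remaining alcohol (mM)
print(f"conversion rate: {100 * (c0 - c_end) / c0:.1f} %")
```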

  3. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  4. Methods Used to Deal with Peace Process Spoilers

    Directory of Open Access Journals (Sweden)

    MA. Bilbil Kastrati

    2014-06-01

    Full Text Available The conflicts of the past three decades have shown that the major problems which peace processes face are the spoilers. Spoilers are warring parties and their leaders who believe that peaceful settlement of disputes threatens their interests, power and their reputation; therefore, they use all means to undermine or completely spoil the process. Spoilers of peace processes can be inside or outside of the process and are characterized as limited, greedy or total spoilers. Their motives for spoiling can be different, such as: political, financial, ethnic, security, etc. Furthermore, it is important to emphasise that spoilers are not only rebels and insurgents, but can often be governments, diasporas, warlords, private military companies, etc. In order to counteract the spoilers, the international community has adopted and implemented three methods: inducement, socialization and coercion. Often all three methods are used to convince the spoilers to negotiate, accept and implement peace agreements. Hence, this paper will examine the methods used to deal with peace process spoilers through an assessment of the strategies employed, impact, success and failures. This paper will also argue that the success or failure of the peace process depends on the method(s used to deal with spoilers. If the right method is chosen, with a persistent engagement of the international community, the peace process will be successful; on the contrary, if they fail to do so, the consequences will be devastating.

  5. Entropy generation method to quantify thermal comfort

    Science.gov (United States)

    Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.

    2001-01-01

    The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining the human thermal responses to different environmental conditions. The simulation output, which includes the human thermal responses, and the input data, consisting of the environmental conditions, are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values; the PMV values are generated by using the corresponding air temperatures and vapor pressures from the computer simulation in the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study
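
    The core of the approach is a second-law balance on the combined human-environment system. As a rough, hedged illustration only: if the body rejects a steady heat load Q from skin temperature T_skin to ambient air at T_amb, the entropy generated is Q(1/T_amb - 1/T_skin). The sketch below implements just that simplified balance; it is not the KSU model or the paper's OTCI definition.

```python
# Highly simplified second-law balance for the human-environment system:
# steady heat load Q rejected from the skin (T_skin) to the ambient air (T_amb).
# This illustrates the entropy-generation idea only, not the KSU thermal model
# or the paper's OTCI formulation.
def entropy_generation(q_watts, t_skin_c, t_amb_c):
    """Entropy generation rate in W/K for steady heat transfer Q."""
    t_skin = t_skin_c + 273.15
    t_amb = t_amb_c + 273.15
    return q_watts * (1.0 / t_amb - 1.0 / t_skin)

for t_amb in (18.0, 24.0, 30.0):                       # cool, comfortable, warm
    s_gen = entropy_generation(q_watts=100.0, t_skin_c=34.0, t_amb_c=t_amb)
    print(f"T_amb = {t_amb:4.1f} C  ->  S_gen = {s_gen * 1000:.1f} mW/K")
```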

  6. Quantifying the ice-albedo feedback through decoupling

    Science.gov (United States)

    Kravitz, B.; Rasch, P. J.

    2017-12-01

    The ice-albedo feedback involves numerous individual components, whereby warming induces sea ice melt, inducing reduced surface albedo, inducing increased surface shortwave absorption, causing further warming. Here we attempt to quantify the sea ice albedo feedback using an analogue of the "partial radiative perturbation" method, but where the governing mechanisms are directly decoupled in a climate model. As an example, we can isolate the insulating effects of sea ice on surface energy and moisture fluxes by allowing sea ice thickness to change but fixing Arctic surface albedo, or vice versa. Here we present results from such idealized simulations using the Community Earth System Model in which individual components are successively fixed, effectively decoupling the ice-albedo feedback loop. We isolate the different components of this feedback, including temperature change, sea ice extent/thickness, and air-sea exchange of heat and moisture. We explore the interactions between these different components, as well as the strengths of the total feedback in the decoupled feedback loop, to quantify contributions from individual pieces. We also quantify the non-additivity of the effects of the components as a means of investigating the dominant sources of nonlinearity in the ice-albedo feedback.

  7. A novel method for quantifying the greenhouse gas emissions of biofuels based on historical land use change

    Science.gov (United States)

    Liu, X.; Rhodes, J.; Clarens, A. F.

    2012-12-01

    Land use change (LUC) emissions have been at the center of an ongoing debate about how the carbon footprint of biofuels compares to petroleum-based fuels over their entire life cycle. The debate about LUC has important implications in the US, the EU, and other countries that are working to deploy biofuel policies, informed by life cycle assessment, that promote carbon emission reductions, among other things. LUC calculations often distinguish between direct land use change (DLUC), those that occur onsite, and indirect land use change (ILUC), those that result from market mechanisms leading to emissions that are either spatially or temporally removed from the agricultural activity. These designations are intended to capture the fundamental connection between agricultural production of biofuel feedstock and its physical effects on the land, but both DLUC and ILUC can be difficult to measure and apply broadly. ILUC estimates are especially challenging to quantify because they rely on global economic models to assess how much land would be brought into production in other countries as a consequence of biofuel feedstock cultivation. As a result, ILUC estimates are inherently uncertain, are sensitive to complex assumptions, have limited transparency, and have precipitated sufficient controversy to delay development of coherent biofuel policies. To address these shortcomings of conventional LUC methodologies, we have developed a method for estimating land use change emissions that is based on historical emissions from a parcel of land. The method, which we call historical land use change (HLUC), can be readily quantified for any parcel of land in the world using open source datasets of historical emissions. HLUC is easy to use and is directly tied to the physical processes on land used for biofuel production. The emissions from the HLUC calculations are allocated between historical agricultural activity and proposed biofuel feedstock cultivation. This is compatible with

  8. Quantifying Transmission.

    Science.gov (United States)

    Woolhouse, Mark

    2017-07-01

    Transmissibility is the defining characteristic of infectious diseases. Quantifying transmission matters for understanding infectious disease epidemiology and designing evidence-based disease control programs. Tracing individual transmission events can be achieved by epidemiological investigation coupled with pathogen typing or genome sequencing. Individual infectiousness can be estimated by measuring pathogen loads, but few studies have directly estimated the ability of infected hosts to transmit to uninfected hosts. Individuals' opportunities to transmit infection are dependent on behavioral and other risk factors relevant given the transmission route of the pathogen concerned. Transmission at the population level can be quantified through knowledge of risk factors in the population or phylogeographic analysis of pathogen sequence data. Mathematical model-based approaches require estimation of the per capita transmission rate and basic reproduction number, obtained by fitting models to case data and/or analysis of pathogen sequence data. Heterogeneities in infectiousness, contact behavior, and susceptibility can have substantial effects on the epidemiology of an infectious disease, so estimates of only mean values may be insufficient. For some pathogens, super-shedders (infected individuals who are highly infectious) and super-spreaders (individuals with more opportunities to transmit infection) may be important. Future work on quantifying transmission should involve integrated analyses of multiple data sources.
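
    One of the model-based approaches mentioned above, fitting to early case counts, can be illustrated with the classic growth-rate relation: if cases grow exponentially at rate r and the generation interval is roughly exponentially distributed with mean Tg, then R0 is approximately 1 + r*Tg. The sketch below applies that textbook approximation to invented case counts; it is not a method from this review.

```python
# Textbook approximation: estimate the exponential growth rate r from early case
# counts, then R0 ~ 1 + r * Tg for an (assumed) exponentially distributed
# generation interval of mean Tg. Case counts below are invented.
import numpy as np

days = np.arange(0, 14)
cases = np.array([3, 4, 6, 8, 12, 16, 23, 31, 44, 60, 85, 118, 165, 230])

r = np.polyfit(days, np.log(cases), 1)[0]   # slope of the log-linear fit, per day
Tg = 5.0                                    # mean generation interval in days (assumed)
R0 = 1.0 + r * Tg

print(f"growth rate r = {r:.3f}/day, R0 ~ {R0:.2f}")
```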

  9. Processing methods for temperature-dependent MCNP libraries

    International Nuclear Information System (INIS)

    Li Songyang; Wang Kan; Yu Ganglin

    2008-01-01

    In this paper, the processing method of NJOY, which converts ENDF files to ACE (A Compact ENDF) files (the point-wise cross-section files used by the MCNP program), is discussed. Temperatures that cover the range for reactor design and operation are considered. Three benchmarks are used for testing the method: the Jezebel Benchmark, the 28 cm-thick Slab Core Benchmark and the LWR Benchmark with Burnable Absorbers. The calculation results show the precision of the neutron cross-section library and verify that the NJOY processing methods were applied correctly. (authors)

  10. A Computational Approach to Quantifiers as an Explanation for Some Language Impairments in Schizophrenia

    Science.gov (United States)

    Zajenkowski, Marcin; Styla, Rafal; Szymanik, Jakub

    2011-01-01

    We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only…

  11. Droplet digital PCR as a novel detection method for quantifying microRNAs in acute myocardial infarction.

    Science.gov (United States)

    Robinson, S; Follo, M; Haenel, D; Mauler, M; Stallmann, D; Tewari, M; Duerschmied, D; Peter, K; Bode, C; Ahrens, I; Hortmann, M

    2018-04-15

    micro-RNAs have shown promise as potential biomarkers for acute myocardial infarction and ischemia-reperfusion injury (I/R). Most recently, droplet digital polymerase chain reaction (ddPCR) has been introduced as a more reliable and reproducible method for detecting micro-RNAs. We aimed to demonstrate the improved technical performance and diagnostic potential of ddPCR by measuring micro-RNAs in ST-elevation myocardial infarction (STEMI). A dilution series was performed in duplicate on synthetic Caenorrhabditis elegans-miR-39, comparing quantitative real-time PCR (qRT-PCR) and ddPCR. We used ddPCR and qRT-PCR to quantify the serum levels of miR-21, miR-208a and miR-499 between STEMI patients (n=24) and stable coronary artery disease (CAD) patients (n=20). In STEMI, I/R injury was assessed via measurement of ST-segment resolution. In the dilution series, ddPCR demonstrated a superior coefficient of variation (12.1% vs. 32.9%) and limit of detection (0.9325 vs. 2.425 copies/μl). In the patient cohort, ddPCR demonstrated greater differences in miR-21 levels (2190.5 vs. 484.7 copies/μl; p=0.0004 for ddPCR and 136.4 vs. 122.8 copies/μl; p=0.2273 for qRT-PCR) and in miR-208a (0 vs. 24.1 copies/μl, p=0.0013 for ddPCR and 0 vs. 0 copies/μl, p=0.0032 for qRT-PCR), with similar differences observed in miR-499 levels (9.4 vs. 81.5 copies/μl, p<… for ddPCR and … for qRT-PCR). ddPCR also more accurately defined STEMI for all miRNAs (area under the curve (AUC) of 0.8021/0.7740/0.9063 for miR-21/208a/499 with ddPCR vs. AUC of 0.6083/0.6917/0.8417 with qRT-PCR). However, there was no association between miR-21/208a/499 levels and ischemia-reperfusion injury. ddPCR demonstrates superiority in both technical performance and diagnostic potential compared to qRT-PCR. Ultimately, this supports its use as a diagnostic method for quantifying micro-RNAs, particularly in large multi-center trials. Copyright © 2017 Elsevier B.V. All rights reserved.
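
    ddPCR concentrations in copies/μl are conventionally obtained from the fraction of positive droplets via Poisson statistics: the mean number of copies per droplet is -ln(1 - p), which is then divided by the droplet volume. The sketch below shows that standard calculation; the droplet volume and counts are example values, not data from this study.

```python
# Standard Poisson quantification behind ddPCR copies/ul: with p the fraction of
# positive droplets, the mean copies per droplet is -ln(1 - p), and dividing by
# the droplet volume gives copies per microlitre. Example numbers are invented.
import math

DROPLET_VOLUME_UL = 0.85e-3     # ~0.85 nL per droplet (typical value, assumed)

def copies_per_ul(positive_droplets, total_droplets, dilution_factor=1.0):
    p = positive_droplets / total_droplets
    lam = -math.log(1.0 - p)                    # mean copies per droplet
    return dilution_factor * lam / DROPLET_VOLUME_UL

# Example: 2,400 positive droplets out of 15,000 accepted droplets.
print(f"{copies_per_ul(2400, 15000):.1f} copies/ul")
```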

  12. Quantifying the ventilatory control contribution to sleep apnoea using polysomnography.

    Science.gov (United States)

    Terrill, Philip I; Edwards, Bradley A; Nemati, Shamim; Butler, James P; Owens, Robert L; Eckert, Danny J; White, David P; Malhotra, Atul; Wellman, Andrew; Sands, Scott A

    2015-02-01

    Elevated loop gain, consequent to hypersensitive ventilatory control, is a primary nonanatomical cause of obstructive sleep apnoea (OSA) but it is not possible to quantify this in the clinic. Here we provide a novel method to estimate loop gain in OSA patients using routine clinical polysomnography alone. We use the concept that spontaneous ventilatory fluctuations due to apnoeas/hypopnoeas (disturbance) result in opposing changes in ventilatory drive (response) as determined by loop gain (response/disturbance). Fitting a simple ventilatory control model (including chemical and arousal contributions to ventilatory drive) to the ventilatory pattern of OSA reveals the underlying loop gain. Following mathematical-model validation, we critically tested our method in patients with OSA by comparison with a standard (continuous positive airway pressure (CPAP) drop method), and by assessing its ability to detect the known reduction in loop gain with oxygen and acetazolamide. Our method quantified loop gain from baseline polysomnography (correlation versus CPAP-estimated loop gain: n=28; r=0.63, p<0.001), detected the known reduction in loop gain with oxygen (n=11; mean±sem change in loop gain (ΔLG) -0.23±0.08, p=0.02) and acetazolamide (n=11; ΔLG -0.20±0.06, p=0.005), and predicted the OSA response to loop gain-lowering therapy. We validated a means to quantify the ventilatory control contribution to OSA pathogenesis using clinical polysomnography, enabling identification of likely responders to therapies targeting ventilatory control. Copyright ©ERS 2015.

  13. Psychological distress as a factor in environmental impact assessment: Some methods and ideas for quantifying this intangible intangible

    International Nuclear Information System (INIS)

    Egna, H.S.

    1995-01-01

    A case study describing citizens' contentions that restarting Three Mile Island's nuclear reactor (TMI-1) would cause psychological distress provides historical and legislative impetus for federal agencies to consider psycho-social dimensions in their environmental impact statements (EISs). Although the Nuclear Regulatory Commission (NRC) rejected citizens' complaints on the grounds that distress is not easily quantifiable, experts associated with the case noted that the NRC's contention was not entirely valid and that the National Environmental Policy Act missed a golden opportunity to promote the development of methodology and models for incorporating psychosocial factors into the EIS. This study describes some of the methods that have subsequently been used for measuring distress in the context of technological hazards.

  14. Apparatus and method for locating and quantifying or directing a source of ionizing radiation

    International Nuclear Information System (INIS)

    Rogers, W.L.; Wainstock, M.A.

    1976-01-01

    An apparatus and method for locating or directing a source of ionizing radiation such as X-rays, gamma rays, alpha particles, beta particles, etc. are described. The preferred embodiment detects and locates abnormalities of the body such as ocular melanomas by detecting the emission of radiation from a melanoma which has absorbed a radioactive medium. The apparatus includes an ultrasound probe which emits ultrasonic waves along a first axis and detects a returned portion of the waves. The ultrasound probe is associated with a display which displays the returned portion of the waves in the time domain so that suspected abnormalities can be located. The ultrasound probe is used to guide a directional probe for detecting and quantifying ionizing radiation which is equipped with a focusing collimator having a focal point along a second axis. The two probes are supported so that the first and second axes converge at the focal point of the collimator. A range marker is associated with the ultrasonic detector which indicates the point of convergence of the axes on the ultrasonic display permitting guidance of the radiation detecting probe to the suspected abnormality

  15. Validation of an ultra-high-performance liquid chromatography-tandem mass spectrometry method to quantify illicit drug and pharmaceutical residues in wastewater using accuracy profile approach.

    Science.gov (United States)

    Hubert, Cécile; Roosen, Martin; Levi, Yves; Karolak, Sara

    2017-06-02

    The analysis of biomarkers in wastewater has become a common approach to assess community behavior. This method is an interesting way to estimate illicit drug consumption in a given population: by using a back-calculation method, it is possible to quantify the amount of a specific drug used in a community and to assess the variation in consumption at different times and locations. Such a method needs reliable analytical data, since the determination of a concentration in the ng L-1 range in a complex matrix is difficult and not easily reproducible. The best analytical approach is liquid chromatography coupled to mass spectrometry after solid-phase extraction or on-line pre-concentration. Quality criteria are not specifically defined for this kind of determination. In this context, it was decided to develop an UHPLC-MS/MS method to analyze 10 illicit drugs and pharmaceuticals in wastewater treatment plant influent or effluent using an on-line pre-concentration system. A validation process was then carried out using the accuracy profile concept as an innovative tool to estimate the probability of getting prospective results within specified acceptance limits. Influent and effluent samples were spiked with known amounts of the 10 compounds and analyzed three times a day for three days in order to estimate intra-day and inter-day variations. The matrix effect was estimated for each compound. The developed method can provide at least 80% of results within ±25% limits, except for compounds that are degraded in influent. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Application of finite-element-methods in food processing

    DEFF Research Database (Denmark)

    Risum, Jørgen

    2004-01-01

    Presentation of the possible use of finite-element-methods in food processing. Examples from diffusion studies are given.

  17. Digitally quantifying cerebral hemorrhage using Photoshop and Image J.

    Science.gov (United States)

    Tang, Xian Nan; Berman, Ari Ethan; Swanson, Raymond Alan; Yenari, Midori Anne

    2010-07-15

    A spectrophotometric hemoglobin assay is widely used to estimate the extent of brain hemorrhage by measuring the amount of hemoglobin in the brain. However, this method requires using the entire brain sample, leaving none for histology or other assays. Other widely used measures of gross brain hemorrhage are generally semi-quantitative and can miss subtle differences. Semi-quantitative brain hemorrhage scales may also be subject to bias. Here, we present a method to digitally quantify brain hemorrhage using Photoshop and Image J, and compared this method to the spectrophotometric hemoglobin assay. Male Sprague-Dawley rats received varying amounts of autologous blood injected into the cerebral hemispheres in order to generate different sized hematomas. 24h later, the brains were harvested, sectioned, photographed then prepared for the hemoglobin assay. From the brain section photographs, pixels containing hemorrhage were identified by Photoshop and the optical intensity was measured by Image J. Identification of hemorrhage size using optical intensities strongly correlated to the hemoglobin assay (R=0.94). We conclude that our method can accurately quantify the extent of hemorrhage. An advantage of this technique is that brain tissue can be used for additional studies. Published by Elsevier B.V.
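
    A generic, scriptable version of the workflow described above (select hemorrhage-coloured pixels, then sum an intensity measure over them) is sketched below using OpenCV-style HSV thresholding; the threshold values, file name and the optical-intensity proxy are assumptions, not the authors' Photoshop/Image J settings.

```python
# Generic sketch of the digital quantification workflow: identify
# hemorrhage-coloured (reddish) pixels, then sum an intensity measure over them.
# HSV thresholds and the optical-intensity proxy are assumptions only.
import cv2
import numpy as np

def quantify_hemorrhage(image_path):
    bgr = cv2.imread(image_path)                      # brain-section photograph
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Red hue wraps around 0/180 in OpenCV, so combine two ranges (values assumed).
    mask = cv2.inRange(hsv, (0, 80, 40), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 40), (180, 255, 255))

    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(float)
    darkness = 255.0 - gray                           # crude optical-intensity proxy

    area_px = int(np.count_nonzero(mask))
    intensity = float(darkness[mask > 0].sum())
    return area_px, intensity

# area, intensity = quantify_hemorrhage("section_01.png")   # hypothetical file name
```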

  18. DIGITALLY QUANTIFYING CEREBRAL HEMORRHAGE USING PHOTOSHOP® AND IMAGE J

    Science.gov (United States)

    Tang, Xian Nan; Berman, Ari Ethan; Swanson, Raymond Alan; Yenari, Midori Anne

    2010-01-01

    A spectrophotometric hemoglobin assay is widely used to estimate the extent of brain hemorrhage by measuring the amount of hemoglobin in the brain. However, this method requires using the entire brain sample, leaving none for histology or other assays. Other widely used measures of gross brain hemorrhage are generally semi-quantitative and can miss subtle differences. Semi-quantitative brain hemorrhage scales may also be subject to bias. Here, we present a method to digitally quantify brain hemorrhage using Photoshop and Image J, and compared this method to the spectrophotometric hemoglobin assay. Male Sprague-Dawley rats received varying amounts of autologous blood injected into the cerebral hemispheres in order to generate different sized hematomas. 24 hours later, the brains were harvested, sectioned, photographed then prepared for the hemoglobin assay. From the brain section photographs, pixels containing hemorrhage were identified by Photoshop® and the optical intensity was measured by Image J. Identification of hemorrhage size using optical intensities strongly correlated to the hemoglobin assay (R=0.94). We conclude that our method can accurately quantify the extent of hemorrhage. An advantage of this technique is that brain tissue can be used for additional studies. PMID:20452374
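
    As a rough illustration of the pixel-based quantification described above (which the authors carried out in Photoshop and Image J), the sketch below selects reddish (blood-containing) pixels by a simple colour threshold and integrates their optical intensity in Python. The threshold values and file name are assumptions, not parameters from the paper.

```python
# Illustrative analogue of the Photoshop/Image J workflow: select hemorrhage
# pixels with a crude colour threshold and sum their optical intensity.
# Threshold values and the file name are placeholders.
import numpy as np
from PIL import Image

def hemorrhage_intensity(path, red_min=100, rg_ratio=1.5):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # reddish pixels: strong red channel that dominates green and blue
    mask = (r > red_min) & (r > rg_ratio * g) & (r > rg_ratio * b)
    # optical intensity per pixel: 0 for white background, larger for darker/redder tissue
    optical_intensity = 255.0 - img.mean(axis=-1)
    return int(mask.sum()), float(optical_intensity[mask].sum())

n_pixels, total_intensity = hemorrhage_intensity("brain_section_01.png")
print(f"hemorrhage pixels: {n_pixels}, integrated optical intensity: {total_intensity:.1f}")
```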

  19. A method to quantify and value floodplain sediment and nutrient retention ecosystem services

    Science.gov (United States)

    Hopkins, Kristina G.; Noe, Gregory; Franco, Fabiano; Pindilli, Emily J.; Gordon, Stephanie; Metes, Marina J.; Claggett, Peter; Gellis, Allen; Hupp, Cliff R.; Hogan, Dianna

    2018-01-01

    Floodplains provide critical ecosystem services to local and downstream communities by retaining floodwaters, sediments, and nutrients. The dynamic nature of floodplains is such that these areas can both accumulate sediment and nutrients through deposition, and export material downstream through erosion. Therefore, estimating floodplain sediment and nutrient retention should consider the net flux of both depositional and erosive processes. An ecosystem services framework was used to quantify and value the sediment and nutrient ecosystem service provided by floodplains in the Difficult Run watershed, a small (151 km²) suburban watershed located in the Piedmont of Virginia (USA). A sediment balance was developed for Difficult Run and two nested watersheds. The balance included upland sediment delivery to streams, stream bank flux, floodplain flux, and stream load. Upland sediment delivery was estimated using geospatial datasets and a modified Revised Universal Soil Loss Equation. Predictive models were developed to extrapolate field measurements of the flux of sediment, sediment-bound nitrogen (N), and sediment-bound phosphorus (P) from stream banks and floodplains to 3232 delineated stream segments in the study area. A replacement cost approach was used to estimate the economic value of the sediment and nutrient retention ecosystem service based on estimated net stream bank and floodplain flux of sediment-bound N for all streams in the study area. Results indicated the net fluvial fluxes of sediment, sediment-bound N, and sediment-bound P were −10,439 Mg yr⁻¹ (net export), 57,300 kg-N yr⁻¹ (net trapping), and 98 kg-P yr⁻¹ (net trapping), respectively. For sediment, floodplain retention was offset by substantial losses from stream bank erosion, particularly in headwater catchments, resulting in a net export of sediment. Nutrient retention in the floodplain exceeded that lost through stream bank erosion resulting in net retention of nutrients (TN and

  20. Quantifying hyporheic exchange dynamics in a highly regulated large river reach.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Zhou, T; Huang, M; Hou, Z; Bao, J; Arntzen, E; Mackley, R; Harding, S; Titzler, S; Murray, C; Perkins, W; Chen, X; Stegen, J; Thorne, P; Zachara, J

    2017-03-01

    Hyporheic exchange is an important mechanism taking place in riverbanks and riverbed sediments, where river water and shallow groundwater mix and interact with each other. The direction, magnitude, and residence time of the hyporheic flux that penetrates the river bed are critical for biogeochemical processes such as carbon and nitrogen cycling, and biodegradation of organic contaminants. Many approaches including field measurements and numerical methods have been developed to quantify the hyporheic exchanges in relatively small rivers. However, the spatial and temporal distributions of hyporheic exchanges in a large, regulated river reach remain less explored due to the large spatial domains, complexity of geomorphologic features and subsurface properties, and the great pressure gradient variations at the riverbed created by dam operations.

  1. Quantifying Distributional Model Risk via Optimal Transport

    OpenAIRE

    Blanchet, Jose; Murthy, Karthyek R. A.

    2016-01-01

    This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms ...
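
    For orientation, the worst-case bound referred to in this abstract can be written as an optimization over all probability measures within a prescribed optimal-transport distance of a reference model; the notation below is assumed for illustration and is not quoted from the paper.

```latex
% Worst-case expected value over an optimal-transport neighbourhood of a
% reference model P_0 (illustrative notation):
\[
  \sup_{P \,:\, D_c(P, P_0) \le \delta} \mathbb{E}_P\!\left[\, f(X) \,\right],
  \qquad
  D_c(P, P_0) \;=\; \inf_{\pi \in \Pi(P, P_0)} \mathbb{E}_\pi\!\left[\, c(X, Y) \,\right],
\]
% where c is a transport cost, \Pi(P, P_0) the set of couplings, and \delta the
% prescribed model-misspecification tolerance.
```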

  2. A non-destructive method for quantifying small-diameter woody biomass in southern pine forests

    Science.gov (United States)

    D. Andrew Scott; Rick Stagg; Morris Smith

    2006-01-01

    Quantifying the impact of silvicultural treatments on woody understory vegetation largely has been accomplished by destructive sampling or through estimates of frequency and coverage. In studies where repeated measures of understory biomass across large areas are needed, destructive sampling and percent cover estimates are not satisfactory. For example, estimates of...

  3. Method for double-sided processing of thin film transistors

    Science.gov (United States)

    Yuan, Hao-Chih; Wang, Guogong; Eriksson, Mark A.; Evans, Paul G.; Lagally, Max G.; Ma, Zhenqiang

    2008-04-08

    This invention provides methods for fabricating thin film electronic devices with both front- and backside processing capabilities. Using these methods, high temperature processing steps may be carried out during both frontside and backside processing. The methods are well-suited for fabricating back-gate and double-gate field effect transistors, double-sided bipolar transistors and 3D integrated circuits.

  4. Biotype assessment and evaluation as a method to quantify the external costs related to surface brown coal mining

    International Nuclear Information System (INIS)

    Kabrna, M.; Peleska, O.

    2009-01-01

    Externalities express the uncompensated effects of human individuals on each other and on nature. Externalities include costs and benefits which impact human individuals and the environment and are not included in the costs and benefits of their creators; they are often defined as the differences between social costs or revenues from economic activity and private costs or revenues. Surface brown coal mining can be characterized by a large range of adverse environmental effects. In order to compensate for the environmental damage caused by mining activities in the Czech Republic, various environmental fees paid by mining companies were introduced into Czech legislation. The Hessian method of quantifying impacts on the environment is an expert method appropriate for evaluating environmental damage caused by large-scale impacts on the landscape. This paper described the methodology of the Hessian method and its application to a selected opencast mine in north-western Bohemia called the Vrsany-Sverma mine. The paper also discussed current environmental charges in the Czech Republic. It was concluded that the calculated amount of environmental damage caused by surface mining balances the amount of environmental fees currently paid by mining companies. 4 refs., 1 tab.

  5. Geophysical methods for monitoring soil stabilization processes

    Science.gov (United States)

    Saneiyan, Sina; Ntarlagiannis, Dimitrios; Werkema, D. Dale; Ustra, Andréa

    2018-01-01

    Soil stabilization involves methods used to turn unconsolidated and unstable soil into a stiffer, consolidated medium that could support engineered structures, alter permeability, change subsurface flow, or immobilize contamination through mineral precipitation. Among the variety of available methods carbonate precipitation is a very promising one, especially when it is being induced through common soil borne microbes (MICP - microbial induced carbonate precipitation). Such microbial mediated precipitation has the added benefit of not harming the environment as other methods can be environmentally detrimental. Carbonate precipitation, typically in the form of calcite, is a naturally occurring process that can be manipulated to deliver the expected soil strengthening results or permeability changes. This study investigates the ability of spectral induced polarization and shear-wave velocity for monitoring calcite driven soil strengthening processes. The results support the use of these geophysical methods as soil strengthening characterization and long term monitoring tools, which is a requirement for viable soil stabilization projects. Both tested methods are sensitive to calcite precipitation, with SIP offering additional information related to long term stability of precipitated carbonate. Carbonate precipitation has been confirmed with direct methods, such as direct sampling and scanning electron microscopy (SEM). This study advances our understanding of soil strengthening processes and permeability alterations, and is a crucial step for the use of geophysical methods as monitoring tools in microbial induced soil alterations through carbonate precipitation.

  6. Marine ecosystem acoustics (MEA): Quantifying processes in the sea at the spatio-temporal scales on which they occur

    KAUST Repository

    Godøl, Olav Rune

    2014-07-22

    Sustainable management of fisheries resources requires quantitative knowledge and understanding of species distribution, abundance, and productivity-determining processes. Conventional sampling by physical capture is inconsistent with the spatial and temporal scales on which many of these processes occur. In contrast, acoustic observations can be obtained on spatial scales from centimetres to ocean basins, and temporal scales from seconds to seasons. The concept of marine ecosystem acoustics (MEA) is founded on the basic capability of acoustics to detect, classify, and quantify organisms and biological and physical heterogeneities in the water column. Acoustics observations integrate operational technologies, platforms, and models and can generate information by taxon at the relevant scales. The gaps between single-species assessment and ecosystem-based management, as well as between fisheries oceanography and ecology, are thereby bridged. The MEA concept combines state-of-the-art acoustic technology with advanced operational capabilities and tailored modelling integrated into a flexible tool for ecosystem research and monitoring. Case studies are presented to illustrate application of the MEA concept in quantification of biophysical coupling, patchiness of organisms, predator-prey interactions, and fish stock recruitment processes. Widespread implementation of MEA will have a large impact on marine monitoring and assessment practices and it is to be hoped that they also promote and facilitate interaction among disciplines within the marine sciences.

  7. A user-oriented and quantifiable approach to irrigation design.

    NARCIS (Netherlands)

    Baars, E.; Bastiaansen, A.P.M.; Menenti, M.

    1995-01-01

    A new user-oriented approach is presented to apply marketing research techniques to quantify perceptions, preferences and utility values of farmers. This approach was applied to design an improved water distribution method for an irrigation scheme in Mendoza, Argentina. The approach comprises two

  8. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real-world example. The optimization methods are based on PLS block modeling as well as on the simple interval calculation (SIC) methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes correcting actions for quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC), as it also employs the historical process data.

  9. The statistical process control methods - SPC

    Directory of Open Access Journals (Sweden)

    Floreková Ľubica

    1998-03-01

    Full Text Available Methods of statistical evaluation of quality – SPC (item 20 of the documentation system of quality control of the ISO 9000 series of norms) applied to various processes, products and services belong amongst the basic qualitative methods that enable us to analyse and compare data pertaining to various quantitative parameters. They also enable, based on the latter, suitable interventions to be proposed with the aim of improving these processes, products and services. The theoretical basis and applicability of the principles of: - diagnostics of causes and effects, - Pareto analysis and the Lorenz curve, - number distributions and frequency curves of random variable distributions, - Shewhart control charts, are presented in the contribution.
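
    As a minimal illustration of one of the tools listed above, the sketch below computes the centre line and 3-sigma limits of a Shewhart individuals chart from an in-control baseline, estimating sigma from the mean moving range; the data are made up for the example.

```python
# Shewhart individuals chart: centre line and 3-sigma control limits, with
# sigma estimated from the mean moving range (d2 = 1.128 for subgroups of 2).
# The measurement values are illustrative only.
import numpy as np

def shewhart_limits(baseline):
    x = np.asarray(baseline, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    return x.mean() - 3 * sigma, x.mean(), x.mean() + 3 * sigma

baseline = [10.1, 9.8, 10.3, 10.0, 9.7, 10.4, 10.1, 10.2]   # in-control reference data
lcl, cl, ucl = shewhart_limits(baseline)

new_points = [10.2, 9.9, 11.6, 10.0]                        # later measurements to check
signals = [v for v in new_points if not lcl <= v <= ucl]
print(f"LCL={lcl:.2f}, CL={cl:.2f}, UCL={ucl:.2f}, out-of-control points: {signals}")
```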

  10. A new soil mechanics approach to quantify and predict land subsidence by peat compression

    NARCIS (Netherlands)

    Koster, K.; Erkens, G.; Zwanenburg, C.

    2016-01-01

    Land subsidence threatens many coastal areas. Quantifying current and predicting future subsidence are essential to sustain the viability of these areas with respect to rising sea levels. Despite its scale and severity, methods to quantify subsidence are scarce. In peat-rich subsidence hot spots,

  11. The Fallacy of Quantifying Risk

    Science.gov (United States)

    2012-09-01

    Defense AT&L, September–October 2012. The Fallacy of Quantifying Risk, by David E. Frick, Ph.D. Frick is a 35-year veteran of the Department of... a key to risk analysis was “choosing the right technique” of quantifying risk. The weakness in this argument stems not from the assertion that one... of information about the enemy), yet achieving great outcomes. Attempts at quantifying risk are not, in and of themselves, objectionable. Prudence...

  12. An Acoustic-Based Method to Detect and Quantify the Effect of Exhalation into a Dry Powder Inhaler.

    Science.gov (United States)

    Holmes, Martin S; Seheult, Jansen N; O'Connell, Peter; D'Arcy, Shona; Ehrhardt, Carsten; Healy, Anne Marie; Costello, Richard W; Reilly, Richard B

    2015-08-01

    Dry powder inhaler (DPI) users frequently exhale into their inhaler mouthpiece before the inhalation step. This error in technique compromises the integrity of the drug and results in poor bronchodilation. This study investigated the effect of four exhalation factors (exhalation flow rate, distance from mouth to inhaler, exhalation duration, and relative air humidity) on dry powder dose delivery. Given that acoustic energy can be related to the factors associated with exhalation sounds, we then aimed to develop a method of identifying and quantifying this critical inhaler technique error using acoustic-based methods. An in vitro test rig was developed to simulate this critical error. The effect of the four factors on subsequent drug delivery was investigated using multivariate regression models. In a further study we then used an acoustic monitoring device to unobtrusively record the sounds 22 asthmatic patients made whilst using a Diskus™ DPI. Acoustic energy was employed to automatically detect and analyze exhalation events in the audio files. All exhalation factors had a statistically significant effect on drug delivery (p < 0.05). The acoustic method detected exhalations with an accuracy of 89.1%. We were able to classify exhalations occurring 5 cm or less in the direction of the inhaler mouthpiece or recording device with a sensitivity of 72.2% and specificity of 85.7%. Exhaling into a DPI has a significant detrimental effect. Acoustic-based methods can be employed to objectively detect and analyze exhalations during inhaler use, thus providing a method of remotely monitoring inhaler technique and providing personalized inhaler technique feedback.

  13. A Mixed-Method Approach for Quantifying Illegal Fishing and Its Impact on an Endangered Fish Species.

    Science.gov (United States)

    Free, Christopher M; Jensen, Olaf P; Mendsaikhan, Bud

    2015-01-01

    Illegal harvest is recognized as a widespread problem in natural resource management. The use of multiple methods for quantifying illegal harvest has been widely recommended yet infrequently applied. We used a mixed-method approach to evaluate the extent, character, and motivations of illegal gillnet fishing in Lake Hovsgol National Park, Mongolia and its impact on the lake's fish populations, especially that of the endangered endemic Hovsgol grayling (Thymallus nigrescens). Surveys for derelict fishing gear indicate that gillnet fishing is widespread and increasing and that fishers generally use 3-4 cm mesh gillnet. Interviews with resident herders and park rangers suggest that many residents fish for subsistence during the spring grayling spawning migration and that some residents fish commercially year-round. Interviewed herders and rangers generally agree that fish population sizes are decreasing but are divided on the causes and solutions. Biological monitoring indicates that the gillnet mesh sizes used by fishers efficiently target Hovsgol grayling. Of the five species sampled in the monitoring program, only burbot (Lota lota) showed a significant decrease in population abundance from 2009-2013. However, grayling, burbot, and roach (Rutilus rutilus) all showed significant declines in average body size, suggesting a negative fishing impact. Data-poor stock assessment methods suggest that the fishing effort equivalent to each resident family fishing 50-m of gillnet 11-15 nights per year would be sufficient to overexploit the grayling population. Results from the derelict fishing gear survey and interviews suggest that this level of effort is not implausible. Overall, we demonstrate the ability for a mixed-method approach to effectively describe an illegal fishery and suggest that these methods be used to assess illegal fishing and its impacts in other protected areas.

  14. Validation of a simple and fast method to quantify in vitro mineralization with fluorescent probes used in molecular imaging of bone

    International Nuclear Information System (INIS)

    Moester, Martiene J.C.; Schoeman, Monique A.E.; Oudshoorn, Ineke B.; Beusekom, Mara M. van; Mol, Isabel M.; Kaijzel, Eric L.; Löwik, Clemens W.G.M.; Rooij, Karien E. de

    2014-01-01

    Highlights: •We validate a simple and fast method of quantification of in vitro mineralization. •Fluorescently labeled agents can detect calcium deposits in the mineralized matrix of cell cultures. •Fluorescent signals of the probes correlated with Alizarin Red S staining. -- Abstract: Alizarin Red S staining is the standard method to indicate and quantify matrix mineralization during differentiation of osteoblast cultures. KS483 cells are multipotent mouse mesenchymal progenitor cells that can differentiate into chondrocytes, adipocytes and osteoblasts and are a well-characterized model for the study of bone formation. Matrix mineralization is the last step of differentiation of bone cells and is therefore a very important outcome measure in bone research. Fluorescently labelled calcium chelating agents, e.g. BoneTag and OsteoSense, are currently used for in vivo imaging of bone. The aim of the present study was to validate these probes for fast and simple detection and quantification of in vitro matrix mineralization by KS483 cells and thus enabling high-throughput screening experiments. KS483 cells were cultured under osteogenic conditions in the presence of compounds that either stimulate or inhibit osteoblast differentiation and thereby matrix mineralization. After 21 days of differentiation, fluorescence of stained cultures was quantified with a near-infrared imager and compared to Alizarin Red S quantification. Fluorescence of both probes closely correlated to Alizarin Red S staining in both inhibiting and stimulating conditions. In addition, both compounds displayed specificity for mineralized nodules. We therefore conclude that this method of quantification of bone mineralization using fluorescent compounds is a good alternative for the Alizarin Red S staining

  15. Validation of a simple and fast method to quantify in vitro mineralization with fluorescent probes used in molecular imaging of bone

    Energy Technology Data Exchange (ETDEWEB)

    Moester, Martiene J.C. [Department of Radiology, Leiden University Medical Center (Netherlands); Schoeman, Monique A.E. [Department of Orthopedic Surgery, Leiden University Medical Center (Netherlands); Oudshoorn, Ineke B. [Department of Radiology, Leiden University Medical Center (Netherlands); Percuros BV, Leiden (Netherlands); Beusekom, Mara M. van [Department of Radiology, Leiden University Medical Center (Netherlands); Mol, Isabel M. [Department of Radiology, Leiden University Medical Center (Netherlands); Percuros BV, Leiden (Netherlands); Kaijzel, Eric L.; Löwik, Clemens W.G.M. [Department of Radiology, Leiden University Medical Center (Netherlands); Rooij, Karien E. de, E-mail: k.e.de_rooij@lumc.nl [Department of Radiology, Leiden University Medical Center (Netherlands); Percuros BV, Leiden (Netherlands)

    2014-01-03

    Highlights: •We validate a simple and fast method of quantification of in vitro mineralization. •Fluorescently labeled agents can detect calcium deposits in the mineralized matrix of cell cultures. •Fluorescent signals of the probes correlated with Alizarin Red S staining. -- Abstract: Alizarin Red S staining is the standard method to indicate and quantify matrix mineralization during differentiation of osteoblast cultures. KS483 cells are multipotent mouse mesenchymal progenitor cells that can differentiate into chondrocytes, adipocytes and osteoblasts and are a well-characterized model for the study of bone formation. Matrix mineralization is the last step of differentiation of bone cells and is therefore a very important outcome measure in bone research. Fluorescently labelled calcium chelating agents, e.g. BoneTag and OsteoSense, are currently used for in vivo imaging of bone. The aim of the present study was to validate these probes for fast and simple detection and quantification of in vitro matrix mineralization by KS483 cells and thus enabling high-throughput screening experiments. KS483 cells were cultured under osteogenic conditions in the presence of compounds that either stimulate or inhibit osteoblast differentiation and thereby matrix mineralization. After 21 days of differentiation, fluorescence of stained cultures was quantified with a near-infrared imager and compared to Alizarin Red S quantification. Fluorescence of both probes closely correlated to Alizarin Red S staining in both inhibiting and stimulating conditions. In addition, both compounds displayed specificity for mineralized nodules. We therefore conclude that this method of quantification of bone mineralization using fluorescent compounds is a good alternative for the Alizarin Red S staining.

  16. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images.

    Science.gov (United States)

    Kimori, Yoshitaka; Baba, Norio; Morone, Nobuhiro

    2010-07-08

    A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed the better performance of our method. The spots of real microscope images can be quantified to confirm that the method is applicable in a given practice. Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. The features of our method allow its broad application in biological and biomedical image information analysis.
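
    The rotation-and-opening idea described above can be sketched in a few lines: open the image with a straight-line structuring element at several rotations, take the union of the openings as the background, and subtract it from the original so that small spots remain. The line length, angle step, and use of grey-scale opening below are assumptions for illustration, not the authors' exact settings.

```python
# Sketch of extended morphological spot extraction: line openings at multiple
# rotations are unified (pixel-wise maximum) and subtracted from the original.
# Line length and angle step are illustrative choices.
import numpy as np
from scipy import ndimage

def extract_spots(image, line_length=15, angles=range(0, 180, 15)):
    image = image.astype(float)
    line = np.ones((1, line_length))                  # horizontal line structuring element
    background = np.zeros_like(image)
    for angle in angles:
        rotated = ndimage.rotate(image, angle, reshape=False, mode="nearest")
        opened = ndimage.grey_opening(rotated, footprint=line)
        restored = ndimage.rotate(opened, -angle, reshape=False, mode="nearest")
        background = np.maximum(background, restored)  # union of all openings
    return np.clip(image - background, 0.0, None)      # top-hat-like residue: the spots

spots = extract_spots(np.random.rand(128, 128))        # replace with a real microscope image
```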

  17. Development of continuous pharmaceutical production processes supported by process systems engineering methods and tools

    DEFF Research Database (Denmark)

    Gernaey, Krist; Cervera Padrell, Albert Emili; Woodley, John

    2012-01-01

    The pharmaceutical industry is undergoing a radical transition towards continuous production processes. Systematic use of process systems engineering (PSE) methods and tools forms the key to achieving this transition in a structured and efficient way.

  18. Development of QuEChERS-based extraction and liquid chromatography-tandem mass spectrometry method for quantifying flumethasone residues in beef muscle.

    Science.gov (United States)

    Park, Ki Hun; Choi, Jeong-Heui; Abd El-Aty, A M; Cho, Soon-Kil; Park, Jong-Hyouk; Kwon, Ki Sung; Park, Hee Ra; Kim, Hyung Soo; Shin, Ho-Chul; Kim, Mi Ra; Shim, Jae-Han

    2012-12-01

    A rapid, specific, and sensitive method based on liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS) in the positive ion mode using multiple reaction monitoring (MRM) was developed and validated to quantify flumethasone residues in beef muscle. The original and the EN versions of the quick, easy, cheap, effective, rugged, and safe (QuEChERS) extraction were compared. Good linearity was achieved at concentration levels of 5-30 μg/kg. Estimated recovery rates at spiking levels of 5 and 10 μg/kg ranged from 72.1 to 84.6%; the corresponding relative standard deviations (RSDs) and the limits of detection and quantification (based on signal-to-noise ratios, S/Ns, of 3 and 10, respectively) were also determined. The method was successfully applied to analyze real samples obtained from large markets throughout the Korean Peninsula. The method proved to be sensitive and reliable and, thus, rendered an appropriate means for residue analysis studies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Methods for Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte

    This thesis deals with the adaptation and implementation of various optimisation methods, in the field of experimental design, for the laser cutting process. The problem of optimising the laser cutting process has been defined, and a structure for a Decision Support System (DSS) for the optimisation of the laser cutting process has been suggested. The DSS consists of a database with the currently used and old parameter settings. One of the optimisation methods has also been implemented in the DSS in order to facilitate the optimisation procedure for the laser operator. The Simplex Method has been adapted in two versions: a qualitative one, which optimises the process by comparing the laser-cut items, and a quantitative one, which uses a weighted quality response in order to achieve a satisfactory quality and then maximises the cutting speed, thus increasing the productivity of the process.

  20. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
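
    As a minimal illustration of the localized straight-line fit discussed above, the sketch below regresses I-V points near V = 0 and reports the intercept (Isc) together with its ordinary-least-squares standard uncertainty; the Bayesian, evidence-based window selection of the paper is not reproduced, and the data are synthetic.

```python
# Straight-line fit of I-V points near short circuit: the intercept estimates
# Isc and the covariance matrix gives its (fit-only) standard uncertainty.
# Data are synthetic; the paper's Bayesian window selection is not shown.
import numpy as np

v = np.array([-0.02, -0.01, 0.00, 0.01, 0.02, 0.03])        # volts, near V = 0
i = np.array([5.012, 5.009, 5.006, 5.001, 4.999, 4.995])    # amperes

coeffs, cov = np.polyfit(v, i, deg=1, cov=True)
slope, isc = coeffs                       # intercept = current at V = 0, i.e. Isc
u_isc = np.sqrt(cov[1, 1])                # standard uncertainty of the intercept
print(f"Isc = {isc:.4f} A +/- {u_isc:.4f} A (k=1, fit uncertainty only)")
```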

  1. Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty

    Science.gov (United States)

    Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
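
    A minimal sketch of the root-sum-square combination described above is given below; the component values are placeholders, not figures from the paper.

```python
# Root-sum-square (RSS) combination of the uncertainty components named in the
# abstract (resolution, repeatability, hysteresis, drift, calibration standard).
# The numeric values are placeholders.
import math

components = {                      # relative standard uncertainties (fractions)
    "resolution":    0.02,
    "repeatability": 0.05,
    "hysteresis":    0.01,
    "drift":         0.03,
    "calibration":   0.04,
}
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined relative standard uncertainty: {u_combined:.3f}")
```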

  2. Quantifiers for quantum logic

    OpenAIRE

    Heunen, Chris

    2008-01-01

    We consider categorical logic on the category of Hilbert spaces. More generally, in fact, any pre-Hilbert category suffices. We characterise closed subobjects, and prove that they form orthomodular lattices. This shows that quantum logic is just an incarnation of categorical logic, enabling us to establish an existential quantifier for quantum logic, and conclude that there cannot be a universal quantifier.

  3. Using Fluorescence Intensity of Enhanced Green Fluorescent Protein to Quantify Pseudomonas aeruginosa

    Directory of Open Access Journals (Sweden)

    Erin Wilson

    2018-05-01

    Full Text Available A variety of direct and indirect methods have been used to quantify planktonic and biofilm bacterial cells. Direct counting methods to determine the total number of cells include plate counts, microscopic cell counts, Coulter cell counting, flow cytometry, and fluorescence microscopy. However, indirect methods are often used to supplement direct cell counting, as they are often more convenient, less time-consuming, and require less material, while providing a number that can be related to the direct cell count. Herein, an indirect method is presented that uses fluorescence emission intensity as a proxy marker for studying bacterial accumulation. A clinical strain of Pseudomonas aeruginosa was genetically modified to express a green fluorescent protein (PA14/EGFP). The fluorescence intensity of EGFP in live cells was used as an indirect measure of live cell density, and was compared with the traditional cell counting methods of optical density (OD600) and plate counting (colony-forming units, CFUs). While both OD600 and CFUs are well-established methods, the use of fluorescence spectroscopy to quantify bacteria is less common. This study demonstrates that EGFP intensity is a convenient reporter for bacterial quantification. In addition, we demonstrate the potential for fluorescence spectroscopy to be used to measure the quantity of PA14/EGFP biofilms, which have important human health implications due to their antimicrobial resistance. Therefore, fluorescence spectroscopy could serve as an alternative or complementary quick assay to quantify bacteria in planktonic cultures and biofilms.
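
    The kind of calibration implied above (relating fluorescence intensity to a direct count) can be sketched as a simple linear regression; all numbers below are hypothetical and serve only to show the shape of the calculation.

```python
# Hypothetical calibration of EGFP fluorescence intensity against plate counts
# (CFU/mL) as an indirect proxy for live-cell density. Values are illustrative.
import numpy as np

cfu  = np.array([1e6, 5e6, 1e7, 5e7, 1e8])          # plate counts (CFU/mL)
fluo = np.array([120, 610, 1180, 6050, 11900])      # EGFP emission (arbitrary units)

slope, intercept = np.polyfit(cfu, fluo, deg=1)
r = np.corrcoef(cfu, fluo)[0, 1]
print(f"fluorescence ~= {slope:.2e} * CFU + {intercept:.1f}  (r = {r:.3f})")

# Estimate the density of an unknown culture from its fluorescence reading:
unknown_fluo = 3000.0
print(f"estimated density: {(unknown_fluo - intercept) / slope:.2e} CFU/mL")
```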

  4. A method to quantify infection and colonization of holm oak (Quercus ilex) roots by Phytophthora cinnamomi

    Directory of Open Access Journals (Sweden)

    Ruiz-Gómez Francisco J

    2012-09-01

    Full Text Available Abstract Phytophthora cinnamomi Rands. is an important root rot pathogen widely distributed in the northern hemisphere, with a large host range. Among other diseases, it is known to be a principal factor in the decline of holm oak and cork oak, the most important tree species in the “dehesa” ecosystem of south-western Spain. Previously, the focus of studies on P. cinnamomi and holm oak has been on molecular tools for identification and functional responses of the host, together with other physiological and morphological host variables. However, a microscopic index to describe the degree of infection and colonization in the plant tissues has not yet been developed. A colonization or infection index would be a useful tool for studies that examine differences between individuals subjected to different treatments or belonging to different breeding accessions, together with their specific responses to the pathogen. This work presents a methodology based on the capture and digital treatment of microscopic images, using simple and accessible software, together with a range of variables that quantify the infection and colonization process.

  5. A method to quantify movement activity of groups of animals using automated image analysis

    Science.gov (United States)

    Xu, Jianyu; Yu, Haizhen; Liu, Ying

    2009-07-01

    Most physiological and environmental changes are capable of inducing variations in animal behavior. Behavioral parameters can be measured continuously in situ by a non-invasive and non-contact approach, and have the potential to be used in actual production settings to predict stress conditions. Most vertebrates tend to live in groups, herds, flocks, shoals, bands, or packs of conspecific individuals. Under culture conditions, livestock or fish live in groups and interact with each other, so the aggregate behavior of the group should be studied rather than that of individuals. This paper presents a method to calculate the movement speed of a group of animals in an enclosure or a tank, expressed as body-length speed, which corresponds to group activity, using computer vision techniques. Frame sequences captured at a specified time interval were subtracted in pairs after image segmentation and identification. By labeling the components caused by object movement in each difference frame, the projected area caused by the movement of every object in the capture interval was calculated; this projected area was divided by the projected area of every object in the later frame to obtain the body-length moving distance of each object, from which the relative body-length speed could be obtained. The average speed of all objects responds well to the activity of the group. The group activity of a tilapia (Oreochromis niloticus) school exposed to a high (2.65 mg/L) level of un-ionized ammonia (UIA) was quantified using these methods. The high UIA condition elicited a marked increase in school activity in the first hour (P<0.05), exhibiting an avoidance reaction (trying to flee from the high UIA condition), which then decreased gradually.
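
    A bare-bones version of the frame-differencing calculation described above might look as follows; the segmentation threshold and the random placeholder frames are assumptions, and real use would normalise by each object's body length as in the paper.

```python
# Sketch of group-activity estimation by frame differencing: segment two frames,
# subtract them, label the changed regions, and express movement relative to the
# objects' projected area. Threshold and placeholder frames are assumptions.
import numpy as np
from scipy import ndimage

def group_activity(frame_prev, frame_next, seg_threshold=0.5):
    prev_mask = frame_prev > seg_threshold          # binary segmentation of animals
    next_mask = frame_next > seg_threshold
    moved = prev_mask ^ next_mask                   # pixels that changed between frames
    labels, n_regions = ndimage.label(moved)        # connected regions of movement
    moved_area = np.count_nonzero(moved)
    object_area = max(np.count_nonzero(next_mask), 1)
    return moved_area / object_area, n_regions      # movement per unit projected area

activity, n_regions = group_activity(np.random.rand(240, 320), np.random.rand(240, 320))
print(f"relative movement: {activity:.2f} over {n_regions} moving regions")
```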

  6. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work-flows. The use of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.

  7. Quantifying offshore wind resources from satellite wind maps: Study area the North Sea

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Barthelmie, Rebecca Jane; Christiansen, Merete B.

    2006-01-01

    Offshore wind resources are quantified from satellite synthetic aperture radar (SAR) and satellite scatterometer observations at local and regional scale, respectively, at the Horns Rev site in Denmark. The method for wind resource estimation from satellite observations interfaces with the wind atlas analysis and application program (WAsP). An estimate of the wind resource at the new project site at Horns Rev is given based on satellite SAR observations. The comparison of offshore satellite scatterometer winds, global model data and in situ data shows good agreement. Furthermore, the wake effect of the Horns Rev wind farm is quantified from satellite SAR images and compared with state-of-the-art wake model results with good agreement. It is a unique method using satellite observations to quantify the spatial extent of the wake behind large offshore wind farms. Copyright © 2006 John Wiley & Sons, Ltd.

  8. Quantifying Electromagnetic Wave Propagation Environment Using Measurements From A Small Buoy

    Science.gov (United States)

    2017-06-01

    QUANTIFYING ELECTROMAGNETIC WAVE PROPAGATION ENVIRONMENT USING MEASUREMENTS FROM A SMALL BUOY, Master's thesis by Andrew E. Sweeney, June 2017 (thesis advisor: Qing Wang). ... the Coupled Air Sea Processes and Electromagnetic (EM) ducting Research (CASPER), to understand air-sea interaction processes and their representation...

  9. Quantifying Short-Chain Chlorinated Paraffin Congener Groups.

    Science.gov (United States)

    Yuan, Bo; Bogdal, Christian; Berger, Urs; MacLeod, Matthew; Gebbink, Wouter A; Alsberg, Tomas; de Wit, Cynthia A

    2017-09-19

    Accurate quantification of short-chain chlorinated paraffins (SCCPs) poses an exceptional challenge to analytical chemists. SCCPs are complex mixtures of chlorinated alkanes with variable chain length and chlorination level; congeners with a fixed chain length (n) and number of chlorines (m) are referred to as a "congener group" CnClm. Recently, we resolved individual CnClm by mathematically deconvolving soft ionization high-resolution mass spectra of SCCP mixtures. Here we extend the method to quantifying CnClm by introducing CnClm-specific response factors (RFs) that are calculated from 17 SCCP chain-length standards with a single carbon chain length and variable chlorination level. The signal pattern of each standard is measured on APCI-QTOF-MS. RFs of each CnClm are obtained by pairwise optimization of the normal distribution's fit to the signal patterns of the 17 chain-length standards. The method was verified by quantifying SCCP technical mixtures and spiked environmental samples with accuracies of 82-123% and 76-109%, respectively. The absolute differences between calculated and manufacturer-reported chlorination degrees were -0.9 to 1.0%Cl for SCCP mixtures of 49-71%Cl. The quantification method has been replicated with ECNI magnetic sector MS and ECNI-Q-Orbitrap-MS. CnClm concentrations determined with the three instruments were highly correlated (R2 > 0.90) with each other.

  10. On quantifying uncertainty for project selection: the case of renewable energy sources' investment

    International Nuclear Information System (INIS)

    Kirytopoulos, Konstantinos; Rentizelas, Athanassios; Tziralis, Georgios

    2006-01-01

    The selection of a project among different alternatives, considering the limited resources of a company (organisation), is an added-value process that determines the prosperity of an undertaken project (investment). This also applies to the booming Renewable Energy Sector, especially under the circumstances established by the recent activation of the Kyoto protocol and by the plethora of available choices for renewable energy sources (RES) projects. The need for a reliable project selection method among the various alternatives is, therefore, highlighted and, in this context, the paper proposes the NPV function as one of the possible criteria for the selection of a RES project. Furthermore, it differentiates from the typical NPV calculation process by adding the concept of a probabilistic NPV approach through Monte Carlo simulation. Reality is non-deterministic, so any attempt at modelling it by using a deterministic approach is by definition erroneous. The paper ultimately proposes a process of substituting the point estimation with a range estimation, capable of quantifying the various uncertainty factors and in this way elucidating the accomplishment possibilities of eligible scenarios. The paper is enhanced by a case study showing how the proposed method can be practically applied to support the investment decision, thus enabling the decision makers to judge its effectiveness and usefulness. (Author)
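
    The probabilistic NPV idea described above can be sketched with a simple Monte Carlo simulation in which uncertain cash flows and discount rates are sampled and the NPV distribution is summarised; every figure below is an illustrative placeholder, not data from the paper's case study.

```python
# Minimal Monte Carlo NPV sketch for a renewable-energy investment: sample
# uncertain annual cash flows and discount rates, then summarise the resulting
# NPV distribution. All figures are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_sims, lifetime, capex = 10_000, 20, 2_000_000.0            # years, initial investment (EUR)

annual_cash = rng.normal(250_000, 40_000, size=(n_sims, lifetime))   # EUR per year
disc_rate = rng.uniform(0.05, 0.09, size=(n_sims, 1))                # per year

years = np.arange(1, lifetime + 1)
npv = -capex + (annual_cash / (1.0 + disc_rate) ** years).sum(axis=1)

print(f"mean NPV: {npv.mean():,.0f} EUR")
print(f"P(NPV > 0): {(npv > 0).mean():.2%}")
print(f"5th-95th percentile: {np.percentile(npv, 5):,.0f} to {np.percentile(npv, 95):,.0f} EUR")
```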

  11. Quantifying the efficiency of river regulation

    Directory of Open Access Journals (Sweden)

    R. Rödel

    2005-01-01

    Full Text Available Dam-affected hydrologic time series give rise to uncertainties when they are used for calibrating large-scale hydrologic models or for analysing runoff records. It is therefore necessary to identify and to quantify the impact of impoundments on runoff time series. Two different approaches were employed. The first, classic approach compares the volume of the dams that are located upstream from a station with the annual discharge. The catchment areas of the stations are calculated and then related to geo-referenced dam attributes. The paper introduces a data set of geo-referenced dams linked with 677 gauging stations in Europe. Second, the intensity of the impoundment impact on runoff time series can be quantified more exactly and directly when long-term runoff records are available. Dams cause a change in the variability of flow regimes. This effect can be measured using a linear single-storage model. The dam-caused storage change ΔS can be assessed through the volume of the emptying process between two flow regimes. As an example, the storage change ΔS is calculated for the regulated long-term series of the Luleälven in northern Sweden.
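
    For readers unfamiliar with the linear single-storage (linear reservoir) idea invoked above, the standard relations are sketched below; this is background notation only, and the paper's exact estimation procedure for ΔS may differ.

```latex
% Linear single-storage (linear reservoir) relations, given as background only:
\[
  S = kQ, \qquad \frac{dS}{dt} = I - Q, \qquad Q(t) = Q_0\, e^{-t/k},
\]
\[
  \Delta S \;\approx\; k\,\bigl(Q_1 - Q_2\bigr),
\]
% where k is the storage (recession) constant, I the inflow, and Q_1, Q_2 are
% characteristic discharges of the two flow regimes being compared.
```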

  12. In situ analysis of thin film deposition processes using time-of-flight (TOF) ion beam analysis methods

    International Nuclear Information System (INIS)

    Im, J.; Lin, Y.; Schultz, J.A.; Auciello, O.H.; Chang, R.P.H.

    1995-05-01

    Non-destructive, in situ methods for characterization of thin film growth phenomena are key to understanding thin film growth processes and to developing more reliable deposition procedures, especially for complex layered structures involving multi-phase materials. However, surface characterization methods that use either electrons (e.g. AES or XPS) or low energy ions (SIMS) require a UHV environment and utilize instrumentation which obstructs line-of-sight access to the substrate; they are therefore incompatible with line-of-sight deposition methods and with thin film deposition processes that introduce gas, either as part of the deposition or in order to produce the desired phase. We have developed a means of differentially pumping both the ion beam source and detectors of a TOF ion beam surface analysis spectrometer that does not interfere with the deposition process and permits compositional and structural analysis of the growing film in the present system at pressures up to several mTorr. Higher pressures are feasible with a modified source-detector geometry. In order to quantify the sensitivity of Ion Scattering Spectroscopy (ISS) and Direct Recoil Spectroscopy (DRS), we have measured the signal intensity for stabilized clean metals in a variety of gas environments as a function of the ambient gas species and pressure, and of the ion beam species and kinetic energy. Results are interpreted in terms of collision cross sections, which are compared with known gas phase scattering data and provide an a priori basis for the evaluation of time-of-flight ion scattering and recoil spectroscopies (ToF-ISARS) for various industrial processing environments involving both inert and reactive gases. The cross section data for primary ion-gas molecule and recoiled atom-gas molecule interactions are also provided, from which the maximum operating pressure in any experimental configuration can be obtained.

  13. Ecosystem site description - an approach to quantify transport and accumulation of matter in a drainage area

    International Nuclear Information System (INIS)

    Soderback, B.; Kautsky, U.; Lindborg, T.

    2004-01-01

    The Swedish Nuclear Fuel and Waste Management Co. (SKB) presently performs site investigations at two sites in Sweden for a future repository of spent nuclear fuel. The safety assessment of a potential repository will, among other methods, use an approach where transport and accumulation of radionuclides are modelled by quantifying the pathways of carbon/nitrogen/phosphorus in the ecosystem. Since water is the most important medium for transportation of matter, the obvious delimitation of an area for quantification of matter transport is the drainage area. This study describes how site-specific data on surface water chemistry and hydrology, measured at several points along the flow paths of a drainage area, can be used to describe and quantify the flow of matter in terms of transport or accumulation. The approach was applied to the drainage area of Lake Eckarfjaerden, investigated as part of the site investigation programme at Forsmark in central Sweden. By using data from the inlet and outlet of the lake, together with data from the lake itself, we quantified the flow of matter in the drainage area, and also developed mass-balance budgets for important elements. The results were used to validate process-oriented terrestrial and aquatic ecosystem models developed for the same drainage area in parallel to the present study. In conclusion, applying this approach will contribute substantially to our understanding of the processes controlling transport and accumulation of matter in a drainage area, and thereby reduce the uncertainties in estimating radionuclide flow and consequences to humans and the environment. (author)

  14. Quantitative chemical shift-encoded MRI is an accurate method to quantify hepatic steatosis.

    Science.gov (United States)

    Kühn, Jens-Peter; Hernando, Diego; Mensel, Birger; Krüger, Paul C; Ittermann, Till; Mayerle, Julia; Hosten, Norbert; Reeder, Scott B

    2014-06-01

    To compare the accuracy of liver fat quantification using a three-echo chemical shift-encoded magnetic resonance imaging (MRI) technique, without and with correction for confounders, with spectroscopy (MRS) as the reference standard. Fifty patients (23 women, mean age 56.6 ± 13.2 years) with fatty liver disease were enrolled. Patients underwent T2-corrected single-voxel MRS and a three-echo chemical shift-encoded gradient echo (GRE) sequence at 3.0T. MRI fat fraction (FF) was calculated without and with T2* and T1 correction and multispectral modeling of fat, and compared with MRS-FF using linear regression. The spectroscopic range of liver fat was 0.11%-38.7%. Excellent correlation between MRS-FF and MRI-FF was observed when using T2* correction (R(2) = 0.96). With use of T2* correction alone, the slope was significantly different from 1 (1.16 ± 0.03, P < 0.05). When the remaining confounders of fat quantification were also addressed, the results showed equivalence between fat quantification using MRI and MRS (slope: 1.02 ± 0.03, P = 0.528; intercept: 0.26% ± 0.46%, P = 0.572). Complex three-echo chemical shift-encoded MRI is equivalent to MRS for quantifying liver fat, but only with correction for T2* decay and T1 recovery and use of spectral modeling of fat. This is necessary because T2* decay, T1 recovery, and the multispectral complexity of fat are processes which may otherwise bias the measurements. Copyright © 2013 Wiley Periodicals, Inc.
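
    For reference, the basic fat-fraction ratio that the confounder corrections feed into is shown below in simplified form; the paper's full signal model (T2* decay, T1 recovery, multi-peak fat spectrum) is applied before this ratio is formed.

```latex
% Simplified fat-fraction definition underlying chemical shift-encoded MRI:
\[
  \mathrm{FF} \;=\; \frac{S_{\mathrm{fat}}}{S_{\mathrm{fat}} + S_{\mathrm{water}}},
\]
% where S_fat and S_water are the (confounder-corrected) fat and water signal
% estimates from the multi-echo acquisition.
```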

  15. An alternative method for quantifying coronary artery calcification: the multi-ethnic study of atherosclerosis (MESA)

    Directory of Open Access Journals (Sweden)

    Liang C

    2012-07-01

    Full Text Available Abstract Background Extent of atherosclerosis measured by amount of coronary artery calcium (CAC) in computed tomography (CT) has been traditionally assessed using thresholded scoring methods, such as the Agatston score (AS). These thresholded scores have value in clinical prediction, but important information might exist below the threshold, which would have important advantages for understanding genetic, environmental, and other risk factors in atherosclerosis. We developed a semi-automated threshold-free scoring method, the spatially weighted calcium score (SWCS), for CAC in the Multi-Ethnic Study of Atherosclerosis (MESA). Methods Chest CT scans were obtained from 6814 participants in the Multi-Ethnic Study of Atherosclerosis (MESA). The SWCS and the AS were calculated for each of the scans. Cox proportional hazards models and linear regression models were used to evaluate the associations of the scores with CHD events and CHD risk factors. CHD risk factors were summarized using a linear predictor. Results Among all participants and participants with AS > 0, the SWCS and AS both showed similar strongly significant associations with CHD events (hazard ratios, 1.23 and 1.19 per doubling of SWCS and AS; 95% CI, 1.16 to 1.30 and 1.14 to 1.26) and CHD risk factors (slopes, 0.178 and 0.164; 95% CI, 0.162 to 0.195 and 0.149 to 0.179). Even among participants with AS = 0, an increase in the SWCS was still significantly associated with established CHD risk factors (slope, 0.181; 95% CI, 0.138 to 0.224). The SWCS appeared to be predictive of CHD events even in participants with AS = 0, though those events were rare as expected. Conclusions The SWCS provides a valid, continuous measure of CAC suitable for quantifying the extent of atherosclerosis without a threshold, which will be useful for examining novel genetic and environmental risk factors for atherosclerosis.

  16. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image-processing treatment of laser speckle interferometry is presented as an example for a postgraduate course. Several image processing methods are used together to handle the optoelectronic imaging system: partial differential equations (PDEs) reduce the effect of noise, thresholding segmentation is likewise based on a heat-equation PDE, the central line is extracted from the image skeleton with branches removed automatically, the phase level is calculated by spline interpolation, and the fringe phase is then unwrapped. Finally, the image processing method was used to automatically measure bubbles in rubber under negative pressure, which could be used in tire inspection.

  17. Copy alert : A method and metric to detect visual copycat brands

    NARCIS (Netherlands)

    Satomura, T.; Wedel, M.; Pieters, R.

    The authors propose a method and metric to quantify the consumer confusion between leading brands and copycat brands that results from the visual similarity of their packaging designs. The method has three components. First, image processing techniques establish the objective similarity of the

  18. Bare quantifier fronting as contrastive topicalization

    Directory of Open Access Journals (Sweden)

    Ion Giurgea

    2015-11-01

    Full Text Available I argue that indefinites (in particular bare quantifiers such as ‘something’, ‘somebody’, etc.), which are neither existentially presupposed nor in the restriction of a quantifier over situations, can undergo topicalization in a number of Romance languages (Catalan, Italian, Romanian, Spanish), but only if the sentence contains “verum” focus, i.e. focus on a high degree of certainty of the sentence. I analyze these indefinites as contrastive topics, using Büring’s (1999) theory (where the term ‘S-topic’ is used for what I call ‘contrastive topic’). I propose that the topic is evaluated in relation to a scalar set including generalized quantifiers such as {λP ∃x P(x), λP MANYx P(x), λP MOSTx P(x), λP ∀x P(x)} or {λP ∃x P(x), λP P(a), λP P(b), …}, and that the contrastive topic is the weakest generalized quantifier in this set. The verum focus, which is part of the “comment” that co-occurs with the “Topic”, introduces a set of alternatives including degrees of certainty of the assertion. The speaker asserts that his claim is certainly true or highly probable, contrasting it with stronger claims for which the degree of probability is unknown. This explains the observation that in downward entailing contexts, the fronted quantified DPs are headed by ‘all’ or ‘many’, whereas ‘some’, small numbers or ‘at least n’ appear in upward entailing contexts. Unlike other cases of non-specific topics, which are property topics, these are quantifier topics: the topic part is a generalized quantifier, the comment is a property of generalized quantifiers. This explains the narrow scope of the fronted quantified DP.

  19. Signal processing methods for MFE plasma diagnostics

    International Nuclear Information System (INIS)

    Candy, J.V.; Casper, T.; Kane, R.

    1985-02-01

    The application of various signal processing methods to extract energy storage information from plasma diamagnetism sensors during physics experiments on the Tandem Mirror Experiment-Upgrade (TMX-U) is discussed. We show how these processing techniques can be used to decrease the uncertainty in the corresponding sensor measurements. The algorithms suggested are implemented using SIG, an interactive signal processing package developed at LLNL.

  20. Quantifying in-stream nitrate reaction rates using continuously-collected water quality data

    Science.gov (United States)

    Matthew Miller; Anthony Tesoriero; Paul Capel

    2016-01-01

    High frequency in situ nitrate data from three streams of varying hydrologic condition, land use, and watershed size were used to quantify the mass loading of nitrate to streams from two sources – groundwater discharge and event flow – at a daily time step for one year. These estimated loadings were used to quantify temporally-variable in-stream nitrate processing ...

  1. A method for risk-informed safety significance categorization using the analytic hierarchy process and Bayesian belief networks

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2004-01-01

    A risk-informed safety significance categorization (RISSC) categorizes the structures, systems, or components (SSCs) of a nuclear power plant (NPP) into two or more groups according to their safety significance, using both probabilistic and deterministic insights. In the conventional methods for the RISSC, the SSCs are quantitatively categorized according to their importance measures for the initial categorization. The final decisions (categorizations) of SSCs, however, are made qualitatively by an expert panel through discussions and adjustments of opinions, using the probabilistic insights compiled in the initial categorization process and combining them with the deterministic insights. Therefore, owing to the qualitative and linear decision-making process, the conventional methods have the following demerits: (1) they are very costly in terms of time and labor, (2) it is not easy to reach a final decision when the opinions of the experts are in conflict, and (3) they have an overlapping process due to the linear paradigm (the categorization is performed twice - first, by the engineers who propose the method, and second, by the expert panel). In this work, a method for RISSC using the analytic hierarchy process (AHP) and Bayesian belief networks (BBN) is proposed to overcome the demerits of the conventional methods and to effectively arrive at a final decision (or categorization). By using the AHP and BBN, the expert panel takes part in the early stage of the categorization (that is, the quantification process) and the safety significance based on both probabilistic and deterministic insights is quantified. According to that safety significance, SSCs are quantitatively categorized into three categories: the high safety significant category (Hi), the potentially safety significant category (Po), or the low safety significant category (Lo). The proposed method was applied to components such as CC-V073, CV-V530, and SI-V644 in Ulchin Unit
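
    The abstract does not reproduce the paper's AHP or BBN structure, so the sketch below only illustrates the generic quantification step that AHP contributes: deriving priority weights from a pairwise-comparison matrix via its principal eigenvector and checking consistency. The criteria, matrix values and thresholds are invented placeholders, not the authors' data.

```python
import numpy as np

# Hypothetical pairwise comparisons of three safety-significance criteria
# (e.g. probabilistic importance, defence-in-depth, safety margin).
# A[i, j] = relative importance of criterion i over criterion j (Saaty scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])

# The principal eigenvector of A gives the AHP priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR) against Saaty's random index for n = 3.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58  # random index for n = 3
print("priority weights:", weights.round(3), " consistency ratio:", round(cr, 3))
```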

  2. Saturated excitation of Fluorescence to quantify excitation enhancement in aperture antennas

    KAUST Repository

    Aouani, Heykel

    2012-07-23

    Fluorescence spectroscopy is widely used to probe the electromagnetic intensity amplification on optical antennas, yet measuring the excitation intensity amplification is a challenge, as the detected fluorescence signal is an intricate combination of excitation and emission. Here, we describe a novel approach to quantify the electromagnetic amplification in aperture antennas by taking advantage of the intrinsic non linear properties of the fluorescence process. Experimental measurements of the fundamental f and second harmonic 2f amplitudes of the fluorescence signal upon excitation modulation are used to quantify the electromagnetic intensity amplification with plasmonic aperture antennas. © 2012 Optical Society of America.

  3. Saturated excitation of Fluorescence to quantify excitation enhancement in aperture antennas

    KAUST Repository

    Aouani, Heykel; Hostein, Richard; Mahboub, Oussama; Devaux, Eloïse; Rigneault, Hervé; Ebbesen, Thomas W.; Wenger, Jérôme

    2012-01-01

    Fluorescence spectroscopy is widely used to probe the electromagnetic intensity amplification on optical antennas, yet measuring the excitation intensity amplification is a challenge, as the detected fluorescence signal is an intricate combination of excitation and emission. Here, we describe a novel approach to quantify the electromagnetic amplification in aperture antennas by taking advantage of the intrinsic non linear properties of the fluorescence process. Experimental measurements of the fundamental f and second harmonic 2f amplitudes of the fluorescence signal upon excitation modulation are used to quantify the electromagnetic intensity amplification with plasmonic aperture antennas. © 2012 Optical Society of America.

  4. Multidominance, ellipsis, and quantifier scope

    NARCIS (Netherlands)

    Temmerman, Tanja Maria Hugo

    2012-01-01

    This dissertation provides a novel perspective on the interaction between quantifier scope and ellipsis. It presents a detailed investigation of the scopal interaction between English negative indefinites, modals, and quantified phrases in ellipsis. One of the crucial observations is that a negative

  5. A field method for soil erosion measurements in agricultural and natural lands

    Science.gov (United States)

    Y.P. Hsieh; K.T. Grant; G.C. Bugna

    2009-01-01

    Soil erosion is one of the most important watershed processes in nature, yet quantifying it under field conditions remains a challenge. The lack of soil erosion field data is a major factor hindering our ability to predict soil erosion in a watershed. We present here the development of a simple and sensitive field method that quantifies soil erosion and the resulting...

  6. Quantifying the retention of foam formulation components to sedimentary phases to enable predictions of mobility and treatment efficacy - 59369

    International Nuclear Information System (INIS)

    Ramirez, Rosa; Jansik, Danielle; Wellman, Dawn

    2012-01-01

    Document available in abstract form only. Full text of publication follows: Deep vadose zone remediation remains the most challenging remediation problem in the DOE Complex. Foam delivery technology is being developed as a method for delivering remedial amendments within vadose zone environments for in situ contaminant stabilization. Thus far, the physical propagation of foam within subsurface media has been evaluated and quantified. However, foam propagation is a product of surfactant sorption, which directly impacts foam stability. In order to predict the stability of foam during subsurface transport it is necessary to quantify the sorption of foam components as a function of concentration, competitive sorption, sediment mineralogy, and temperature. This investigation provides the results of standard static batch tests quantifying these relationships. High Performance Liquid Chromatography (HPLC) was used to measure surfactant concentrations. The results of this investigation provide the necessary understanding to predict foam stability during subsurface transport and to determine the remedial radius of influence. This study is part of a multiple-step process for demonstrating the feasibility of foam transport to distribute amendments within the vadose zone. (authors)

  7. Methods of process management in radiology

    International Nuclear Information System (INIS)

    Teichgraeber, U.K.M.; Gillessen, C.; Neumann, F.

    2003-01-01

    The main emphasis in health care has been on quality and availability but increasing cost pressure has made cost efficiency ever more relevant for nurses, technicians, and physicians. Within a hospital, the radiologist considerably influences the patient's length of stay through the availability of service and diagnostic information. Therefore, coordinating and timing radiologic examinations become increasingly more important. Physicians are not taught organizational management during their medical education and residency training, and the necessary expertise in economics is generally acquired through the literature or specialized courses. Beyond the medical service, the physicians are increasingly required to optimize their work flow according to economic factors. This review introduces various tools for process management and its application in radiology. By means of simple paper-based methods, the work flow of most processes can be analyzed. For more complex work flow, it is suggested to choose a method that allows for an exact qualitative and quantitative prediction of the effect of variations. This review introduces network planning technique and process simulation. (orig.) [de

  8. Quantifying rainfall-derived inflow and infiltration in sanitary sewer systems based on conductivity monitoring

    Science.gov (United States)

    Zhang, Mingkai; Liu, Yanchen; Cheng, Xun; Zhu, David Z.; Shi, Hanchang; Yuan, Zhiguo

    2018-03-01

    Quantifying rainfall-derived inflow and infiltration (RDII) in a sanitary sewer is difficult when RDII and overflow occur simultaneously. This study proposes a novel conductivity-based method for estimating RDII. The method separately decomposes rainfall-derived inflow (RDI) and rainfall-induced infiltration (RII) on the basis of conductivity data. Fast Fourier transform was adopted to analyze variations in the flow and water quality during dry weather. Nonlinear curve fitting based on the least squares algorithm was used to optimize parameters in the proposed RDII model. The method was successfully applied to real-life case studies, in which inflow and infiltration were successfully estimated for three typical rainfall events with total rainfall volumes of 6.25 mm (light), 28.15 mm (medium), and 178 mm (heavy). Uncertainties of model parameters were estimated using the generalized likelihood uncertainty estimation (GLUE) method and were found to be acceptable. Compared with traditional flow-based methods, the proposed approach exhibits distinct advantages in estimating RDII and overflow, particularly when the two processes happen simultaneously.
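
    The paper's exact RDII model is not given in the abstract; as a rough illustration of the kind of estimation step described (nonlinear least-squares fitting of decomposed wet-weather flow components), the sketch below fits a two-reservoir inflow/infiltration response to a synthetic flow series with SciPy. The unit-hydrograph form, parameter names and data are assumptions for illustration only, not the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0                      # time step (h)
t = np.arange(0, 72, dt)      # 3-day analysis window
rain = np.zeros_like(t)
rain[10:16] = 4.0             # synthetic 6 h storm (mm/h)

def reservoir(rainfall, k):
    """Linear-reservoir unit response h(t) = exp(-t/k)/k convolved with rainfall."""
    h = np.exp(-t / k) / k
    return np.convolve(rainfall, h)[: len(t)] * dt

def rdii_model(_, a_rdi, k_rdi, a_rii, k_rii):
    """Wet-weather flow = fast inflow (RDI) + slow infiltration (RII)."""
    return a_rdi * reservoir(rain, k_rdi) + a_rii * reservoir(rain, k_rii)

# Synthetic "observed" RDII series (flow minus dry-weather flow) with noise.
true = rdii_model(None, 2.0, 2.0, 0.8, 18.0)
obs = true + np.random.default_rng(1).normal(0, 0.05, len(t))

popt, pcov = curve_fit(rdii_model, t, obs, p0=[1.0, 1.0, 0.5, 10.0])
print("fitted [a_rdi, k_rdi, a_rii, k_rii]:", np.round(popt, 2))
```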

  9. Method and apparatus for lysing and processing algae

    Science.gov (United States)

    Chew, Geoffrey; Reich, Alton J.; Dykes, Jr., H. Waite H.; Di Salvo, Roberto

    2013-03-05

    Methods and apparatus for processing algae are described in which a hydrophilic ionic liquid is used to lyse algae cells at lower temperatures than existing algae processing methods. A salt or salt solution is used as a separation agent and to remove water from the ionic liquid, allowing the ionic liquid to be reused. The used salt may be dried or concentrated and reused. The relatively low lysis temperatures and recycling of the ionic liquid and salt reduce the environmental impact of the algae processing while providing biofuels and other useful products.

  10. The Method of Manufactured Universes for validating uncertainty quantification methods

    KAUST Repository

    Stripling, H.F.

    2011-09-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which experimental data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport universe, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new experiments within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies. © 2011 Elsevier Ltd. All rights reserved.
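
    As a hedged, toy-scale illustration of the framework described above (not the paper's particle-transport application), the sketch below manufactures a simple "universe", simulates it with a deliberately imperfect model, quantifies the model discrepancy with a Gaussian process, and then checks how often the quantified uncertainty actually covers the manufactured truth. All functions and settings are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def truth(x):                      # the manufactured "universe"
    return np.sin(3 * x) + 0.3 * x

def imperfect_model(x):            # deliberately biased simulation model
    return np.sin(3 * x)

# "Experiments" drawn from the manufactured reality, with measurement error.
x_obs = rng.uniform(0, 3, 15)
y_obs = truth(x_obs) + rng.normal(0, 0.05, x_obs.size)

# UQ step: model the discrepancy between simulation and experiment with a GP.
disc = y_obs - imperfect_model(x_obs)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x_obs.reshape(-1, 1), disc)

# Predict new "experiments" and assess whether the quantified uncertainty
# band actually contains the manufactured truth.
x_new = np.linspace(0, 3, 50)
d_mean, d_std = gp.predict(x_new.reshape(-1, 1), return_std=True)
pred = imperfect_model(x_new) + d_mean
covered = np.abs(pred - truth(x_new)) <= 2 * d_std
print(f"fraction of new points inside the 2-sigma band: {covered.mean():.2f}")
```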

  11. Connected Car: Quantified Self becomes Quantified Car

    Directory of Open Access Journals (Sweden)

    Melanie Swan

    2015-02-01

    Full Text Available The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber. The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals like heart rate and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics like speed and braking activity. The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.

  12. Quantifying facial paralysis using the Kinect v2.

    Science.gov (United States)

    Gaber, Amira; Taher, Mona F; Wahed, Manal Abdel

    2015-01-01

    Assessment of facial paralysis (FP) and quantitative grading of facial asymmetry are essential in order to quantify the extent of the condition as well as to follow its improvement or progression. As such, there is a need for an accurate quantitative grading system that is easy to use, inexpensive and has minimal inter-observer variability. A comprehensive automated system to quantify and grade FP is the main objective of this work. An initial prototype has been presented by the authors. The present research aims to enhance the accuracy and robustness of one of this system's modules: the resting symmetry module. This is achieved by including several modifications to the computation method of the symmetry index (SI) for the eyebrows, eyes and mouth. These modifications are the gamma correction technique, the area of the eyes, and the slope of the mouth. The system was tested on normal subjects and showed promising results. With the modified method, the mean SI of the eyebrows decreased slightly from 98.42% to 98.04%, while the mean SI for the eyes and mouth increased from 96.93% to 99.63% and from 95.6% to 98.11%, respectively. The system is easy to use, inexpensive, automated and fast, has no inter-observer variability and is thus well suited for clinical use.
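
    The paper's exact symmetry-index formulas are not given in the abstract; the sketch below shows one plausible way such indices could be computed from Kinect-style landmark coordinates and region areas. The formulas, landmark values and midline are illustrative assumptions, and the gamma-correction step (an image-level preprocessing operation) is omitted.

```python
import numpy as np

def symmetry_index(left, right, midline_x):
    """Generic symmetry index (%) comparing mirrored left/right landmark distances
    from the facial midline; 100% means perfect symmetry. Formula is illustrative."""
    d_left = np.abs(np.asarray(left, dtype=float)[:, 0] - midline_x)
    d_right = np.abs(np.asarray(right, dtype=float)[:, 0] - midline_x)
    return 100.0 * (1.0 - np.mean(np.abs(d_left - d_right) / (d_left + d_right)))

# Hypothetical Kinect landmark coordinates (x, y) in pixels.
midline = 256.0
eyebrow_left = [(200, 140), (215, 132)]
eyebrow_right = [(312, 141), (297, 133)]
print(f"eyebrow SI: {symmetry_index(eyebrow_left, eyebrow_right, midline):.2f}%")

# The paper uses eye areas as one modification; a comparable area-based index:
def area_symmetry_index(area_left, area_right):
    return 100.0 * min(area_left, area_right) / max(area_left, area_right)

print(f"eye SI: {area_symmetry_index(1520.0, 1495.0):.2f}%")
```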

  13. Process synthesis, design and analysis using a process-group contribution method

    DEFF Research Database (Denmark)

    Kumar Tula, Anjan; Eden, Mario R.; Gani, Rafiqul

    2015-01-01

    This paper describes the development and application of a process-group contribution method to model, simulate and synthesize chemical processes. Process flowsheets are generated in the same way as atoms or groups of atoms are combined to form molecules in computer aided molecular design (CAMD) techniques. The fundamental pillars of this framework are the definition and use of functional process-groups (building blocks) representing a wide range of process operations, flowsheet connectivity rules to join the process-groups to generate all the feasible flowsheet alternatives, and flowsheet property models like energy consumption, atom efficiency and environmental impact to evaluate the performance of the generated alternatives. In this way, a list of feasible flowsheets is quickly generated, screened and selected for further analysis. Since the flowsheet is synthesized and the operations...

  14. Quantifying the Pathway and Predicting Spontaneous Emulsification during Material Exchange in a Two Phase Liquid System.

    Science.gov (United States)

    Spooner, Stephen; Rahnama, Alireza; Warnett, Jason M; Williams, Mark A; Li, Zushu; Sridhar, Seetharaman

    2017-10-30

    Kinetic restriction of a thermodynamically favourable equilibrium is a common theme in materials processing. The interfacial instability in systems where the rate of material exchange is far greater than the mass transfer through the respective bulk phases is of specific interest when tracking the transient interfacial area, a parameter integral to short processing times for productivity streamlining in all manufacturing where interfacial reaction occurs. This is even more pertinent in high-temperature systems for energy and cost savings. Here the quantified physical pathway of interfacial area change due to material exchange in liquid metal-molten oxide systems is presented. In addition, the predicted growth regime and emulsification behaviour in relation to interfacial tension, as modelled using phase-field methodology, are shown. The observed in-situ emulsification behaviour quantitatively links the geometry of the perturbations to the model, serving as a validation method for the development of simulations of the phenomena. Thus a method is presented to both predict and engineer the formation of micro emulsions to a desired specification.

  15. Nuclear data for fusion: Validation of typical pre-processing methods for radiation transport calculations

    International Nuclear Information System (INIS)

    Hutton, T.; Sublet, J.C.; Morgan, L.; Leadbeater, T.W.

    2015-01-01

    Highlights: • We quantify the effect of processing nuclear data from ENDF to ACE format. • We consider the differences between fission and fusion angular distributions. • C-nat(n,el) at 2.0 MeV has a 0.6% deviation between original and processed data. • Fe-56(n,el) at 14.1 MeV has a 11.0% deviation between original and processed data. • Processed data do not accurately depict ENDF distributions for fusion energies. - Abstract: Nuclear data form the basis of the radiation transport codes used to design and simulate the behaviour of nuclear facilities, such as the ITER and DEMO fusion reactors. Typically these data and codes are biased towards fission and high-energy physics applications yet are still applied to fusion problems. With increasing interest in fusion applications, the lack of fusion specific codes and relevant data libraries is becoming increasingly apparent. Industry standard radiation transport codes require pre-processing of the evaluated data libraries prior to use in simulation. Historically these methods focus on speed of simulation at the cost of accurate data representation. For legacy applications this has not been a major concern, but current fusion needs differ significantly. Pre-processing reconstructs the differential and double differential interaction cross sections with a coarse binned structure, or more recently as a tabulated cumulative distribution function. This work looks at the validity of applying these processing methods to data used in fusion specific calculations in comparison to fission. The relative effects of applying this pre-processing mechanism to both fission and fusion relevant reaction channels are demonstrated, and as such the poor representation of these distributions for the fusion energy regime. For the C-nat(n,el) reaction at 2.0 MeV, the binned differential cross section deviates from the original data by 0.6% on average. For the Fe-56(n,el) reaction at 14.1 MeV, the deviation increases to 11.0%. We
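
    As a small, self-contained illustration of the kind of deviation being quantified (not the ENDF/ACE processing chain itself), the sketch below bins a toy forward-peaked angular distribution into a coarse histogram that preserves each bin's probability content and reports the mean relative deviation from the original curve. The distribution shape and bin count are arbitrary assumptions.

```python
import numpy as np

# Toy forward-peaked elastic scattering angular distribution over mu = cos(theta).
# This is an illustrative shape, not evaluated ENDF data.
mu = np.linspace(-1.0, 1.0, 2001)
pdf = np.exp(3.0 * mu)
pdf /= np.trapz(pdf, mu)

# Coarse equal-width binning, preserving the probability content of each bin,
# standing in for the simplified representations produced by legacy processing.
n_bins = 32
edges = np.linspace(-1.0, 1.0, n_bins + 1)
binned = np.empty_like(pdf)
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (mu >= lo) & (mu <= hi)
    binned[sel] = np.trapz(pdf[sel], mu[sel]) / (hi - lo)   # flat density per bin

rel_dev = np.mean(np.abs(binned - pdf) / pdf) * 100.0
print(f"mean relative deviation of the binned representation: {rel_dev:.1f}%")
```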

  16. Quantifying radionuclide signatures from a γ–γ coincidence system

    International Nuclear Information System (INIS)

    Britton, Richard; Jackson, Mark J.; Davies, Ashley V.

    2015-01-01

    A method for quantifying gamma coincidence signatures has been developed, and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ–γ system utilises fully digital electronics and list-mode acquisition to time–stamp each event, allowing coincidence matrices to be easily produced alongside typical ‘singles’ spectra. To quantify the coincidence signatures a software package has been developed to calculate efficiency and cascade summing corrected branching ratios. This utilises ENSDF records as an input, and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence summing corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ–γ system, and found to provide accurate results for the nuclides investigated. Given the flexibility of the method, (it only relies on evaluated nuclear data, and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. - Highlights: • Monte-Carlo based software developed to easily create/update a coincidence signal library for environmental radionuclides. • Coincidence library utilised to accurately quantify gamma coincidence signatures. • All coincidence signature probabilities are corrected for cascade summing, conversion electron emission and pair production. • Key CTBTO relevant radionuclides have been tested to verify the calculated correction factors. • Accurately quantifying coincidence signals during routine analysis will allow dramatically improved detection
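
    The library described above relies on evaluated nuclear data and full cascade-summing corrections; as a minimal illustration of the basic quantity involved, the sketch below computes the detection probability of a simple two-gamma coincidence signature in a two-detector system from assumed full-energy-peak efficiencies. The energies, efficiencies and cascade emission probability are placeholders, not evaluated data.

```python
def coincidence_probability(p_cascade, eff_a, eff_b):
    """Probability per decay that both gammas of a two-step cascade are detected
    in their full-energy peaks, one in each of two detectors A and B.
    eff_a / eff_b map gamma energy (keV) -> full-energy-peak efficiency."""
    e1, e2 = 1173.2, 1332.5          # placeholder cascade energies (keV)
    either_order = eff_a[e1] * eff_b[e2] + eff_a[e2] * eff_b[e1]
    return p_cascade * either_order

# Placeholder efficiencies and cascade emission probability.
eff_a = {1173.2: 0.021, 1332.5: 0.019}
eff_b = {1173.2: 0.020, 1332.5: 0.018}
print(f"coincidence signature probability per decay: "
      f"{coincidence_probability(0.999, eff_a, eff_b):.2e}")
```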

  17. PRO-QUEST: a rapid assessment method based on progressive saturation for quantifying exchange rates using saturation times in CEST.

    Science.gov (United States)

    Demetriou, Eleni; Tachrount, Mohamed; Zaiss, Moritz; Shmueli, Karin; Golay, Xavier

    2018-03-05

    To develop a new MRI technique to rapidly measure exchange rates in CEST MRI. A novel pulse sequence for measuring chemical exchange rates through a progressive saturation recovery process, called PRO-QUEST (progressive saturation for quantifying exchange rates using saturation times), has been developed. Using this method, the water magnetization is sampled under non-steady-state conditions, and off-resonance saturation is interleaved with the acquisition of images obtained through a Look-Locker type of acquisition. A complete theoretical framework has been set up, and simple equations to obtain the exchange rates have been derived. A reduction of scan time from 58 to 16 minutes has been obtained using PRO-QUEST versus the standard QUEST. Maps of both T1 of water and B1 can simply be obtained by repetition of the sequence without off-resonance saturation pulses. Simulations and calculated exchange rates from experimental data using amino acids such as glutamate, glutamine, taurine, and alanine were compared and found to be in good agreement. The PRO-QUEST sequence was also applied on healthy and infarcted rats after 24 hours, and revealed that imaging specificity to ischemic acidification during stroke was substantially increased relative to standard amide proton transfer-weighted imaging. Because of the reduced scan time and insensitivity to nonchemical exchange factors such as direct water saturation, PRO-QUEST can serve as an excellent alternative for researchers and clinicians interested in mapping pH changes in vivo. © 2018 International Society for Magnetic Resonance in Medicine.

  18. Radiology reports: a quantifiable and objective textual approach

    International Nuclear Information System (INIS)

    Scott, J.A.; Palmer, E.L.

    2015-01-01

    Aim: To examine the feasibility of using automated lexical analysis in conjunction with machine learning to create a means of objectively characterising radiology reports for quality improvement. Materials and methods: Twelve lexical parameters were quantified from the collected reports of four radiologists. These included the number of different words used, number of sentences, reading grade, readability, usage of the passive voice, and lexical metrics of concreteness, ambivalence, complexity, passivity, embellishment, communication and cognition. Each radiologist was statistically compared to the mean of the group for each parameter to determine outlying report characteristics. The reproducibility of these parameters in a given radiologist's reporting style was tested by using only these 12 parameters as input to a neural network designed to establish the authorship of 60 unknown reports. Results: Significant differences in report characteristics were observed between radiologists, quantifying and characterising deviations of individuals from the group reporting style. The 12 metrics employed in a neural network correctly identified the author in each of 60 unknown reports tested, indicating a robust parametric signature. Conclusion: Automated and quantifiable methods can be used to analyse reporting style and provide impartial and objective feedback as well as to detect and characterise significant differences from the group. The parameters examined are sufficiently specific to identify the authors of reports and can potentially be useful in quality improvement and residency training. - Highlights: • Radiology reports can be objectively studied based upon their lexical characteristics. • This analysis can help establish norms for reporting, resident training and authorship attribution. • This analysis can complement higher level subjective analysis in quality improvement efforts.
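
    Not all twelve parameters of the paper can be reproduced from the abstract; the sketch below computes a few of the simpler ones (distinct words, sentence count, a reading-grade estimate and a crude passive-voice count) for a report string, to indicate how such lexical features could be extracted before being fed to a classifier. The regular expressions, the syllable heuristic and the use of the Flesch-Kincaid grade are assumptions, not the authors' definitions.

```python
import re

def lexical_metrics(report: str) -> dict:
    """A few simple report metrics; the remaining parameters described above
    (concreteness, ambivalence, etc.) would need external lexicons."""
    sentences = [s for s in re.split(r"[.!?]+", report) if s.strip()]
    words = re.findall(r"[A-Za-z']+", report)
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    n_w, n_s = len(words), max(1, len(sentences))
    passive = len(re.findall(r"\b(?:is|are|was|were|be|been|being)\s+\w+ed\b", report))
    return {
        "distinct_words": len({w.lower() for w in words}),
        "sentences": n_s,
        # Flesch-Kincaid grade level as a stand-in for "reading grade".
        "reading_grade": 0.39 * n_w / n_s + 11.8 * syllables / n_w - 15.59,
        "passive_constructions": passive,
    }

print(lexical_metrics("The lungs are clear. No focal consolidation is seen. "
                      "A small pleural effusion was noted on the prior exam."))
```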

  19. Quantifying and Comparing Effects of Climate Engineering Methods on the Earth System

    Science.gov (United States)

    Sonntag, Sebastian; Ferrer González, Miriam; Ilyina, Tatiana; Kracher, Daniela; Nabel, Julia E. M. S.; Niemeier, Ulrike; Pongratz, Julia; Reick, Christian H.; Schmidt, Hauke

    2018-02-01

    To contribute to a quantitative comparison of climate engineering (CE) methods, we assess atmosphere-, ocean-, and land-based CE measures with respect to Earth system effects consistently within one comprehensive model. We use the Max Planck Institute Earth System Model (MPI-ESM) with prognostic carbon cycle to compare solar radiation management (SRM) by stratospheric sulfur injection and two carbon dioxide removal methods: afforestation and ocean alkalinization. The CE model experiments are designed to offset the effect of fossil-fuel burning on global mean surface air temperature under the RCP8.5 scenario to follow or get closer to the RCP4.5 scenario. Our results show the importance of feedbacks in the CE effects. For example, as a response to SRM the land carbon uptake is enhanced by 92 Gt by the year 2100 compared to the reference RCP8.5 scenario due to reduced soil respiration thus reducing atmospheric CO2. Furthermore, we show that normalizations allow for a better comparability of different CE methods. For example, we find that due to compensating processes such as biogeophysical effects of afforestation more carbon needs to be removed from the atmosphere by afforestation than by alkalinization to reach the same global warming reduction. Overall, we illustrate how different CE methods affect the components of the Earth system; we identify challenges arising in a CE comparison, and thereby contribute to developing a framework for a comparative assessment of CE.

  20. Quantifying short run cost-effectiveness during a gradual implementation process.

    Science.gov (United States)

    van de Wetering, Gijs; Woertman, Willem H; Verbeek, Andre L; Broeders, Mireille J; Adang, Eddy M M

    2013-12-01

    This paper examines the short run inefficiencies that arise during gradual implementation of a new cost-effective technology in healthcare. These inefficiencies arise when health gains associated with the new technology cannot be obtained immediately because the new technology does not yet supply all patients, and when there is overcapacity for the old technology in the short run because the supply of care is divided among two mutually exclusive technologies. Such efficiency losses are not taken into account in standard textbook cost-effectiveness analysis in which a steady state is presented where costs and effects are assumed to be unchanging over time. A model is constructed to quantify such short run inefficiencies as well as to inform the decision maker about the optimal implementation pattern for the new technology. The model operates by integrating the incremental net benefit equations for both the period of co-existence of mutually exclusive technologies and the period after complete substitution of the old technology. It takes into account the rate of implementation of the new technology, depreciation of capital of the old technology as well as the demand curves for both technologies. The model is applied to the real world case of converting from screen film to digital mammography in the Netherlands.
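
    A minimal numerical sketch of the idea, under invented parameters (willingness-to-pay, incremental cost and effect, implementation rate, old-technology fixed cost): incremental net benefit is accumulated over a gradual roll-out, subtracting a short-run loss for the unused capacity of the old technology while the two technologies coexist. This is not the authors' model, only an illustration of integrating net benefit over the implementation period.

```python
import numpy as np

# Illustrative parameters (not from the paper).
wtp = 20000.0            # willingness to pay per QALY (euro)
d_effect = 0.002         # incremental QALYs per patient with the new technology
d_cost = 25.0            # incremental cost per patient with the new technology
patients_per_year = 900000
fixed_cost_old = 2.0e6   # yearly capacity cost of the old technology
years = np.arange(0, 6)
share_new = np.minimum(1.0, 0.2 * years)   # gradual implementation: 20% per year

inb = 0.0
for t, s in zip(years, share_new):
    gained = s * patients_per_year * (wtp * d_effect - d_cost)
    # Short-run inefficiency: the fraction of old-technology capacity left idle
    # while both technologies coexist (dropped once substitution is complete).
    overcapacity_loss = fixed_cost_old * s if s < 1.0 else 0.0
    inb += (gained - overcapacity_loss) / (1.03 ** t)   # 3% discount rate

print(f"discounted incremental net benefit over the roll-out: {inb / 1e6:.1f} M euro")
```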

  1. An Automated Processing Method for Agglomeration Areas

    Directory of Open Access Journals (Sweden)

    Chengming Li

    2018-05-01

    Full Text Available Agglomeration operations are a core component of the automated generalization of aggregated area groups. However, because geographical elements that possess agglomeration features are relatively scarce, the current literature has not given sufficient attention to agglomeration operations. Furthermore, most reports on the subject are limited to the general conceptual level. Consequently, current agglomeration methods are highly reliant on subjective determinations and cannot support intelligent computer processing. This paper proposes an automated processing method for agglomeration areas. Firstly, the proposed method automatically identifies agglomeration areas based on the width of the striped bridging area, distribution pattern index (DPI), shape similarity index (SSI), and overlap index (OI). Next, the progressive agglomeration operation is carried out, including the computation of the external boundary outlines and the extraction of agglomeration lines. The effectiveness and rationality of the proposed method have been validated using actual census data on Chinese geographical conditions in Jiangsu Province.

  2. A method to evaluate process performance by integrating time and resources

    Science.gov (United States)

    Wang, Yu; Wei, Qingjie; Jin, Shuang

    2017-06-01

    The purpose of process mining is to improve the existing processes of an enterprise, so how to measure process performance is particularly important. However, current research on performance evaluation methods is still insufficient: existing approaches rely mainly on either time or resource statistics alone, and these basic statistics cannot evaluate process performance very well. In this paper, a method for evaluating process performance based on both the time and resource dimensions is proposed. This method can be used to measure the utilization and redundancy of resources in the process. The paper introduces the design principle and formula of the evaluation algorithm, followed by the design and implementation of the evaluation method. Finally, the evaluation method is used to analyse the event log of a telephone maintenance process and to propose an optimization plan.
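
    The paper's formula is not given in the abstract; the sketch below shows one simple way a combined time-and-resource view could be computed from an event log: per-resource utilization as busy time over the observed span, with redundancy as the idle fraction. The event log, attribute layout and metric definitions are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case id, activity, resource, start, end).
log = [
    ("c1", "register", "alice", "2017-06-01 09:00", "2017-06-01 09:20"),
    ("c1", "repair",   "bob",   "2017-06-01 09:30", "2017-06-01 11:00"),
    ("c2", "register", "alice", "2017-06-01 09:40", "2017-06-01 09:55"),
    ("c2", "repair",   "bob",   "2017-06-01 11:10", "2017-06-01 12:00"),
]

fmt = "%Y-%m-%d %H:%M"
busy = defaultdict(float)
first, last = {}, {}
for _case, _activity, res, start, end in log:
    s, e = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    busy[res] += (e - s).total_seconds()
    first[res] = min(first.get(res, s), s)
    last[res] = max(last.get(res, e), e)

for res in busy:
    span = (last[res] - first[res]).total_seconds()
    utilization = busy[res] / span if span else 1.0
    print(f"{res}: utilization={utilization:.2f}, redundancy={1 - utilization:.2f}")
```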

  3. Municipal solid waste processing methods: Technical-economic comparison

    International Nuclear Information System (INIS)

    Bertanza, G.

    1993-01-01

    This paper points out the advantages and disadvantages of municipal solid waste processing methods incorporating different energy and/or materials recovery techniques, i.e., those involving composting or incineration and those with a mix of composting and incineration. The various technologies employed are compared especially with regard to process reliability, flexibility, modularity, pollution control efficiency and cost effectiveness. With regard to composting, biodigestors are examined, while for incineration the paper analyzes systems using combustion with complete recovery of vapour, combustion with total recovery of available electric energy, and combustion with cogeneration. Each of the processing methods examined includes an iron recovery cycle.

  4. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, in the new method the image is first processed preliminarily at the macroscopic level and then analyzed thoroughly in the microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, the fuzzy regions containing image edges are detected, and the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  5. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
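
    The Bayesian window-selection procedure itself is not reproduced here; as a minimal sketch of the conventional starting point it improves on, the code below performs an ordinary least-squares straight-line fit to hypothetical I-V points near short circuit and propagates the fit covariance into a standard uncertainty for Isc (the intercept at V = 0). All data values are invented.

```python
import numpy as np

# Hypothetical I-V points near short circuit (V in volts, I in amps).
V = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
I = np.array([8.012, 8.009, 8.003, 7.998, 7.991, 7.986])

# Straight-line fit I = a + b*V; Isc is the intercept a (value at V = 0).
X = np.column_stack([np.ones_like(V), V])
coef, res_ss, *_ = np.linalg.lstsq(X, I, rcond=None)
a, b = coef

# Covariance of the fitted parameters from the residual variance.
dof = len(V) - 2
sigma2 = res_ss[0] / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
isc, u_isc = a, np.sqrt(cov[0, 0])
print(f"Isc = {isc:.4f} A, standard uncertainty = {u_isc:.4f} A")
```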

  6. Process-morphology scaling relations quantify self-organization in capillary densified nanofiber arrays.

    Science.gov (United States)

    Kaiser, Ashley L; Stein, Itai Y; Cui, Kehang; Wardle, Brian L

    2018-02-07

    Capillary-mediated densification is an inexpensive and versatile approach to tune the application-specific properties and packing morphology of bulk nanofiber (NF) arrays, such as aligned carbon nanotubes. While NF length governs elasto-capillary self-assembly, the geometry of cellular patterns formed by capillary densified NFs cannot be precisely predicted by existing theories. This originates from the recently quantified orders of magnitude lower than expected NF array effective axial elastic modulus (E), and here we show via parametric experimentation and modeling that E determines the width, area, and wall thickness of the resulting cellular pattern. Both experiments and models show that further tuning of the cellular pattern is possible by altering the NF-substrate adhesion strength, which could enable the broad use of this facile approach to predictably pattern NF arrays for high value applications.

  7. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  8. Method for modeling social care processes for national information exchange.

    Science.gov (United States)

    Miettinen, Aki; Mykkänen, Juha; Laaksonen, Maarit

    2012-01-01

    Finnish social services include 21 service commissions of social welfare including Adoption counselling, Income support, Child welfare, Services for immigrants and Substance abuse care. This paper describes the method used for process modeling in the National project for IT in Social Services in Finland (Tikesos). The process modeling in the project aimed to support common national target state processes from the perspective of national electronic archive, increased interoperability between systems and electronic client documents. The process steps and other aspects of the method are presented. The method was developed, used and refined during the three years of process modeling in the national project.

  9. A compact clinical instrument for quantifying suppression.

    Science.gov (United States)

    Black, Joanne M; Thompson, Benjamin; Maehara, Goro; Hess, Robert F

    2011-02-01

    We describe a compact and convenient clinical apparatus for the measurement of suppression based on a previously reported laboratory-based approach. In addition, we report and validate a novel, rapid psychophysical method for measuring suppression using this apparatus, which makes the technique more applicable to clinical practice. By using a Z800 dual pro head-mounted display driven by a MAC laptop, we provide dichoptic stimulation. Global motion stimuli composed of arrays of moving dots are presented to each eye. One set of dots move in a coherent direction (termed signal) whereas another set of dots move in a random direction (termed noise). To quantify performance, we measure the signal/noise ratio corresponding to a direction-discrimination threshold. Suppression is quantified by assessing the extent to which it matters which eye sees the signal and which eye sees the noise. A space-saving, head-mounted display using current video technology offers an ideal solution for clinical practice. In addition, our optimized psychophysical method provided results that were in agreement with those produced using the original technique. We made measures of suppression on a group of nine adult amblyopic participants using this apparatus with both the original and new psychophysical paradigms. All participants had measurable suppression ranging from mild to severe. The two different psychophysical methods gave a strong correlation for the strength of suppression (rho = -0.83, p = 0.006). Combining the new apparatus and new psychophysical method creates a convenient and rapid technique for parametric measurement of interocular suppression. In addition, this apparatus constitutes the ideal platform for suppressors to combine information between their eyes in a similar way to binocularly normal people. This provides a convenient way for clinicians to implement the newly proposed binocular treatment of amblyopia that is based on antisuppression training.

  10. Quantifying wetland–aquifer interactions in a humid subtropical climate region: An integrated approach

    Science.gov (United States)

    Mendoza-Sanchez, Itza; Phanikumar, Mantha S.; Niu, Jie; Masoner, Jason R.; Cozzarelli, Isabelle M.; McGuire, Jennifer T.

    2013-01-01

    Wetlands are widely recognized as sentinels of global climate change. Long-term monitoring data combined with process-based modeling has the potential to shed light on key processes and how they change over time. This paper reports the development and application of a simple water balance model based on long-term climate, soil, vegetation and hydrological dynamics to quantify groundwater–surface water (GW–SW) interactions at the Norman landfill research site in Oklahoma, USA. Our integrated approach involved model evaluation by means of the following independent measurements: (a) groundwater inflow calculation using stable isotopes of oxygen and hydrogen (16O, 18O, 1H, 2H); (b) seepage flux measurements in the wetland hyporheic sediment; and (c) pan evaporation measurements on land and in the wetland. The integrated approach was useful for identifying the dominant hydrological processes at the site, including recharge and subsurface flows. Simulated recharge compared well with estimates obtained using isotope methods from previous studies and allowed us to identify specific annual signatures of this important process during the period of study (1997–2007). Similarly, observations of groundwater inflow and outflow rates to and from the wetland using seepage meters and isotope methods were found to be in good agreement with simulation results. Results indicate that subsurface flow components in the system are seasonal and readily respond to rainfall events. The wetland water balance is dominated by local groundwater inputs and regional groundwater flow contributes little to the overall water balance.

  11. Quantifying Cancer Risk from Radiation.

    Science.gov (United States)

    Keil, Alexander P; Richardson, David B

    2017-12-06

    Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis.

  12. An Approach to quantify the Costs of Business Process Intelligence.

    NARCIS (Netherlands)

    Mutschler, B.B.; Bumiller, J.; Reichert, M.U.; Desel, J.; Frank, U.

    2005-01-01

    Today, enterprises are forced to continuously optimize their business as well as service processes. In this context the process-centered alignment of information systems is crucial. The use of business process intelligence (BPI) tools offers promising perspectives in this respect. However, when

  13. A New Enzyme-linked Sorbent Assay (ELSA) to Quantify Syncytiotrophoblast Extracellular Vesicles in Biological Fluids.

    Science.gov (United States)

    Göhner, Claudia; Weber, Maja; Tannetta, Dionne S; Groten, Tanja; Plösch, Torsten; Faas, Marijke M; Scherjon, Sicco A; Schleußner, Ekkehard; Markert, Udo R; Fitzgerald, Justine S

    2015-06-01

    The pregnancy-associated disease preeclampsia is related to the release of syncytiotrophoblast extracellular vesicles (STBEV) by the placenta. To improve functional research on STBEV, reliable and specific methods are needed to quantify them. However, only a few quantification methods are available and accepted, though imperfect. For this purpose, we aimed to provide an enzyme-linked sorbent assay (ELSA) to quantify STBEV in fluid samples based on their microvesicle characteristics and placental origin. Ex vivo placenta perfusion provided standards and samples for the STBEV quantification. STBEV were captured by binding of extracellular phosphatidylserine to immobilized annexin V. The membranous human placental alkaline phosphatase on the STBEV surface catalyzed a colorimetric detection reaction. The described ELSA is a rapid and simple method to quantify STBEV in diverse liquid samples, such as blood or perfusion suspension. The reliability of the ELSA was proven by comparison with nanoparticle tracking analysis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. A New Method to Simultaneously Quantify the Antioxidants: Carotenes, Xanthophylls, and Vitamin A in Human Plasma

    Directory of Open Access Journals (Sweden)

    Mariel Colmán-Martínez

    2016-01-01

    Full Text Available A simple and accurate reversed phase high-performance liquid chromatography coupled with diode array detector (HPLC-DAD) method for simultaneously determining and quantifying the antioxidants carotenes, xanthophylls, and retinol in human plasma is presented in this paper. Compounds were extracted with hexane; a C30 column and a mobile phase of methanol, methyl tert-butyl ether, and water were used for the separation of the compounds. A total of 8 carotenoids, 3 Z-β-carotene isomers, and 1 fat-soluble vitamin (retinol) were resolved within 72 min at a flow rate of 0.6 mL/min. Detection was achieved at 450 nm for carotenoids and 330 nm for retinol. To evaluate the effectiveness of the method, it was applied to an intervention study conducted on eight volunteers. Results. Limits of detection were between 0.1 μg/mL for lycopene and astaxanthin and 1.3 μg/mL for 15-Z-β-carotene. Recoveries ranged between 89% and 113% for α-carotene and astaxanthin, respectively. Accuracy was between 90.7% and 112.2% and precision was between 1% and 15% RSD. In human plasma samples, the compounds studied were identified, as well as three lycopene isomers, demonstrating that the method is suitable for application in dietary intervention studies. Conclusions. Due to its accuracy, precision, selectivity, and reproducibility, this method is suitable for studies of dietary habits and/or antioxidant status.

  15. A New Method to Simultaneously Quantify the Antioxidants: Carotenes, Xanthophylls, and Vitamin A in Human Plasma.

    Science.gov (United States)

    Colmán-Martínez, Mariel; Martínez-Huélamo, Miriam; Miralles, Esther; Estruch, Ramón; Lamuela-Raventós, Rosa M

    2015-01-01

    A simple and accurate reversed phase high-performance liquid chromatography coupled with diode array detector (HPLC-DAD) method for simultaneously determining and quantifying the antioxidants carotenes, xanthophylls, and retinol in human plasma is presented in this paper. Compounds were extracted with hexane; a C30 column and a mobile phase of methanol, methyl tert-butyl ether, and water were used for the separation of the compounds. A total of 8 carotenoids, 3 Z-β-carotene isomers, and 1 fat-soluble vitamin (retinol) were resolved within 72 min at a flow rate of 0.6 mL/min. Detection was achieved at 450 nm for carotenoids and 330 nm for retinol. To evaluate the effectiveness of the method, it was applied to an intervention study conducted on eight volunteers. Results. Limits of detection were between 0.1 μg/mL for lycopene and astaxanthin and 1.3 μg/mL for 15-Z-β-carotene. Recoveries ranged between 89% and 113% for α-carotene and astaxanthin, respectively. Accuracy was between 90.7% and 112.2% and precision was between 1% and 15% RSD. In human plasma samples, the compounds studied were identified, as well as three lycopene isomers, demonstrating that the method is suitable for application in dietary intervention studies. Conclusions. Due to its accuracy, precision, selectivity, and reproducibility, this method is suitable for studies of dietary habits and/or antioxidant status.

  16. Enhancing the science of the WFIRST coronagraph instrument with post-processing.

    Science.gov (United States)

    Pueyo, Laurent; WFIRST CGI data analysis and post-processing WG

    2018-01-01

    We summarize the results of a three-year effort investigating how to apply to the WFIRST coronagraph instrument (CGI) modern image analysis methods now routinely used with ground-based coronagraphs. Here we quantify the gain associated with post-processing for WFIRST-CGI observing scenarios simulated between 2013 and 2017. We also show, based on simulations, that the spectrum of a planet can be confidently retrieved using these processing tools with an Integral Field Spectrograph. We then discuss our work using CGI experimental data and quantify coronagraph post-processing testbed gains. We finally introduce stability metrics that are simple to define and measure, and place useful lower and upper bounds on the achievable RDI post-processing contrast gain. We show that our bounds hold in the case of the testbed data.

  17. Quantifying and mapping spatial variability in simulated forest plots

    Science.gov (United States)

    Gavin R. Corral; Harold E. Burkhart

    2016-01-01

    We used computer simulations to test the efficacy of multivariate statistical methods to detect, quantify, and map spatial variability of forest stands. Simulated stands were developed of regularly-spaced plantations of loblolly pine (Pinus taeda L.). We assumed no effects of competition or mortality, but random variability was added to individual tree characteristics...

  18. Apparatus and method for radiation processing of materials

    International Nuclear Information System (INIS)

    Neuberg, W.B.; Luniewski, R.

    1983-01-01

    A method and apparatus for radiation degradation processing of polytetrafluoroethylene make use of simultaneous irradiation, agitation and cooling. The apparatus is designed to make efficient use of radiation in the processing. (author)

  19. Method for processing spent nuclear reactor fuel

    International Nuclear Information System (INIS)

    Levenson, M.; Zebroski, E.L.

    1981-01-01

    A method and apparatus are claimed for processing spent nuclear reactor fuel wherein plutonium is continuously contaminated with radioactive fission products and diluted with uranium. Plutonium of sufficient purity to fabricate nuclear weapons cannot be produced by the process or in the disclosed reprocessing plant. Diversion of plutonium is prevented by radiation hazards and ease of detection

  20. Intelligent methods for the process parameter determination of plastic injection molding

    Science.gov (United States)

    Gao, Huang; Zhang, Yun; Zhou, Xundao; Li, Dequn

    2018-03-01

    Injection molding is one of the most widely used material processing methods in producing plastic products with complex geometries and high precision. The determination of process parameters is important in obtaining qualified products and maintaining product quality. This article reviews the recent studies and developments of the intelligent methods applied in the process parameter determination of injection molding. These intelligent methods are classified into three categories: Case-based reasoning methods, expert system-based methods, and data fitting and optimization methods. A framework of process parameter determination is proposed after comprehensive discussions. Finally, the conclusions and future research topics are discussed.

  1. Quantifying voids effecting delamination in carbon/epoxy composites: static and fatigue fracture behavior

    Science.gov (United States)

    Hakim, I.; May, D.; Abo Ras, M.; Meyendorf, N.; Donaldson, S.

    2016-04-01

    In the present work, samples of carbon fiber/epoxy composites with different void levels were fabricated using a hand layup vacuum bagging process by varying the pressure. Thermal nondestructive methods (thermal conductivity measurement, pulse thermography, pulse phase thermography and lock-in thermography) and mechanical testing (mode I and mode II interlaminar fracture toughness) were conducted. Comparing the parameters resulting from the thermal nondestructive testing revealed that voids lead to reductions in thermal properties in all directions of the composites. The results of mode I and mode II interlaminar fracture toughness showed that voids lead to reductions in interlaminar fracture toughness. The parameters resulting from thermal nondestructive testing were correlated with the results of mode I and mode II interlaminar fracture toughness, and the voids were quantified.

  2. Practical Markov Logic Containing First-Order Quantifiers With Application to Identity Uncertainty

    National Research Council Canada - National Science Library

    Culotta, Aron; McCallum, Andrew

    2005-01-01

    .... In this paper, we present approximate inference and training methods that incrementally instantiate portions of the network as needed to enable first-order existential and universal quantifiers in Markov logic networks...

  3. A Method to Quantify Plant Availability and Initiating Event Frequency Using a Large Event Tree, Small Fault Tree Model

    International Nuclear Information System (INIS)

    Kee, Ernest J.; Sun, Alice; Rodgers, Shawn; Popova, Elmira V.; Nelson, Paul; Moiseytseva, Vera; Wang, Eric

    2006-01-01

    South Texas Project uses a large fault tree to produce scenarios (minimal cut sets) used in quantification of plant availability and event frequency predictions. On the other hand, the South Texas Project probabilistic risk assessment model uses a large event tree, small fault tree for quantifying core damage and radioactive release frequency predictions. The South Texas Project is converting its availability and event frequency model to use a large event tree, small fault tree model in an effort to streamline application support and to provide additional detail in results. The availability and event frequency model as well as the applications it supports (maintenance and operational risk management, system engineering health assessment, preventive maintenance optimization, and RIAM) are briefly described. A methodology to perform availability modeling in a large event tree, small fault tree framework is described in detail. How the methodology can be used to support South Texas Project maintenance and operations risk management is described in detail. Differences with other fault tree methods and other recently proposed methods are discussed in detail. While the methods described are novel to the South Texas Project Risk Management program and to large event tree, small fault tree models, concepts in the area of application support and availability modeling have wider applicability to the industry. (authors)

  4. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)

  5. Non-filtration method of processing uranium ores

    International Nuclear Information System (INIS)

    Laskorin, B.N.; Vodolazov, L.I.; Tokarev, N.N.; Vyalkov, V.I.; Goldobina, V.A.; Gosudarstvennyj Komitet po Ispol'zovaniyu Atomnoj Ehnergii SSSR, Moscow)

    1977-01-01

    The development of the non-filtration sorption method has led to procedures for sorption leaching and extraction desorption, which have made it possible to intensify the processing of uranium ores and to greatly improve the technical and economic indexes by eliminating the complex procedure of multiple filtration and re-pulping of cakes. The method makes it possible to process lower-grade uranium raw materials while at the same time extracting valuable components such as molybdenum, vanadium and copper. Considerable industrial experience has been acquired in the sorption of dense pulp with a solid-to-liquid phase ratio of 1:1. This has led to a 1.5-3.0-fold increase in plant production, a 5-10% increase in uranium extraction, a two- to three-fold increase in the productivity of the main workers, and a several-fold decrease in the consumption of reagents, auxiliary materials, electric energy and steam. The non-filtration method is a continuous process in all its phases thanks to the use of high-yield, high-power equipment for high-density pulps. (author)

  6. PCB Food Web Dynamics Quantify Nutrient and Energy Flow in Aquatic Ecosystems.

    Science.gov (United States)

    McLeod, Anne M; Paterson, Gordon; Drouillard, Ken G; Haffner, G Douglas

    2015-11-03

    Measuring in situ nutrient and energy flows in spatially and temporally complex aquatic ecosystems represents a major ecological challenge. Food web structure and energy and nutrient budgets are difficult to measure, and it is becoming more important to quantify both energy and nutrient flow to determine how food web processes and structure are being modified by multiple stressors. We propose that polychlorinated biphenyl (PCB) congeners represent an ideal tracer to quantify in situ energy and nutrient flow between trophic levels. Here, we demonstrate how an understanding of PCB congener bioaccumulation dynamics provides multiple direct measurements of energy and nutrient flow in aquatic food webs. To demonstrate this novel approach, we quantified nitrogen (N), phosphorus (P) and caloric turnover rates for Lake Huron lake trout, and reveal how these processes are regulated by both growth rate and fish life history. Although minimal nutrient recycling was observed in young growing fish, slow-growing, older lake trout (>5 yr) recycled an average of 482 tonnes·yr⁻¹ of N and 45 tonnes·yr⁻¹ of P and assimilated 22 TJ·yr⁻¹ of energy. Compared to total P loading rates of 590 tonnes·yr⁻¹, the recycling of primarily bioavailable nutrients by fish plays an important role in regulating the nutrient states of oligotrophic lakes.

  7. Evaluation of processing methods for static radioisotope scan images

    International Nuclear Information System (INIS)

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for the mapping of a radioactive drug in the human body to produce maps (images) which prove useful in detecting abnormalities in vital organs. At best, radioisotope scanning methods produce images with poor counting statistics. One solution to improving the body scan images is using dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared

  8. Quantifying drug-protein binding in vivo

    International Nuclear Information System (INIS)

    Buchholz, B; Bench, G; Keating III, G; Palmblad, M; Vogel, J; Grant, P G; Hillegonds, D

    2004-01-01

    Accelerator mass spectrometry (AMS) provides precise quantitation of isotope labeled compounds that are bound to biological macromolecules such as DNA or proteins. The sensitivity is high enough to allow for sub-pharmacological ("micro-") dosing to determine macromolecular targets without inducing toxicities or altering the system under study, whether it is healthy or diseased. We demonstrated an application of AMS in quantifying the physiologic effects of one dosed chemical compound upon the binding level of another compound in vivo at sub-toxic doses [4]. We are using tissues left from this study to develop protocols for quantifying specific binding to isolated and identified proteins. We also developed a new technique to quantify nanogram to milligram amounts of isolated protein at precisions that are comparable to those for quantifying the bound compound by AMS

  9. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  10. Collaborative simulation method with spatiotemporal synchronization process control

    Science.gov (United States)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronic system, such as a high-speed train, it is difficult to simulate the entire system's dynamic behavior effectively because it involves multi-disciplinary subsystems. Currently, the most practical approach to multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupled simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupled simulation of a complex mechatronic system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism that defines the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm that realizes the coupled simulation in a spatiotemporally synchronized manner. Test results from a case study show that the proposed method 1) can be used to simulate subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupled simulation among multi-disciplinary subsystems. The method has been successfully applied in the design and development of Chinese high-speed trains, demonstrating that it can be applied to a wide range of engineering system design and simulation tasks with improved efficiency and effectiveness.

  11. A review of systematic and quantifiable methods of estimating the needs of a community for alcohol treatment services.

    Science.gov (United States)

    Crook, G M; Oei, T P

    1998-01-01

    The purpose of this paper was to review a variety of systematic and quantifiable methodologies for planning and evaluating the provision of alcohol treatment services for communities. These methods include: (a) developing and evaluating indicators of alcohol-related harm in and across defined geographic areas, to assess the relative need for services; (b) demand-oriented techniques that involve the prediction of future demand for services based on the previous utilisation of treatment facilities; (c) comprehensive systems approaches to planning services; and (d) the estimation of the prevalence of individuals who need or would benefit from an intervention for their alcohol problem. In practice, service planners may incorporate a combination of approaches that could be compared and contrasted to assess the convergent validity of results. These methodologies can also be used to provide information for planning and evaluating prevention/health promotion and early intervention initiatives.

  12. Metrology and process control: dealing with measurement uncertainty

    Science.gov (United States)

    Potzick, James

    2010-03-01

    Metrology is often used in designing and controlling manufacturing processes. A product sample is processed, some relevant property is measured, and the process adjusted to bring the next processed sample closer to its specification. This feedback loop can be remarkably effective for the complex processes used in semiconductor manufacturing, but there is some risk involved because measurements have uncertainty and product specifications have tolerances. There is finite risk that good product will fail testing or that faulty product will pass. Standard methods for quantifying measurement uncertainty have been presented, but the question arises: how much measurement uncertainty is tolerable in a specific case? Or, How does measurement uncertainty relate to manufacturing risk? This paper looks at some of the components inside this process control feedback loop and describes methods to answer these questions.
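
    The false-accept/false-reject risk the abstract alludes to can be illustrated with a small Monte Carlo sketch (all distributions and limits below are assumptions for illustration, not values from the paper): product variation and measurement uncertainty are sampled independently, and the fractions of good parts that fail the test and of faulty parts that pass are estimated.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000

        # Assumed process spread and specification limits (illustrative numbers)
        true_value = rng.normal(loc=10.00, scale=0.03, size=n)
        lower_spec, upper_spec = 9.95, 10.05

        # Assumed standard measurement uncertainty added to every reading
        measured = true_value + rng.normal(loc=0.0, scale=0.01, size=n)

        good   = (true_value >= lower_spec) & (true_value <= upper_spec)
        passed = (measured   >= lower_spec) & (measured   <= upper_spec)

        false_reject = np.mean(good & ~passed)   # good product failing the test
        false_accept = np.mean(~good & passed)   # faulty product passing the test
        print(f"false reject: {false_reject:.4%}, false accept: {false_accept:.4%}")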

  13. Quantifier Scope in Categorical Compositional Distributional Semantics

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Sadrzadeh

    2016-08-01

    Full Text Available In previous work with J. Hedges, we formalised a generalised quantifiers theory of natural language in categorical compositional distributional semantics with the help of bialgebras. In this paper, we show how quantifier scope ambiguity can be represented in that setting and how this representation can be generalised to branching quantifiers.

  14. Methods for the Evaluation of Waste Treatment Processes

    Directory of Open Access Journals (Sweden)

    Hans-Joachim Gehrmann

    2017-01-01

    Full Text Available Decision makers for waste management are confronted with the problem of selecting the most economic, environmental, and socially acceptable waste treatment process. This paper elucidates evaluation methods for waste treatment processes for the comparison of ecological and economic aspects such as material flow analysis, statistical entropy analysis, energetic and exergetic assessment, cumulative energy demand, and life cycle assessment. The work is based on the VDI guideline 3925. A comparison of two thermal waste treatment plants with different process designs and energy recovery systems was performed with the described evaluation methods. The results are mainly influenced by the type of energy recovery, where the waste-to-energy plant providing district heat and process steam emerged to be beneficial in most aspects. Material recovery options from waste incineration were evaluated according to sustainability targets, such as saving of resources and environmental protection.

  15. Quantifying the accuracy of the tumor motion and area as a function of acceleration factor for the simulation of the dynamic keyhole magnetic resonance imaging method.

    Science.gov (United States)

    Lee, Danny; Greer, Peter B; Pollock, Sean; Kim, Taeho; Keall, Paul

    2016-05-01

    The dynamic keyhole is a new MR image reconstruction method for thoracic and abdominal MR imaging. To date, this method has not been investigated with cancer patient magnetic resonance imaging (MRI) data. The goal of this study was to assess the dynamic keyhole method for the task of lung tumor localization using cine-MR images reconstructed in the presence of respiratory motion. The dynamic keyhole method utilizes a previously acquired library of peripheral k-space datasets at similar displacement and phase (where phase is simply used to determine whether the breathing is inhale to exhale or exhale to inhale) respiratory bins in conjunction with the acquired central k-space datasets (keyhole). External respiratory signals drive the process of sorting, matching, and combining the two k-space streams for each respiratory bin, thereby achieving faster image acquisition without substantial motion artifacts. This study was the first to investigate the impact of k-space undersampling on lung tumor motion and area assessment across clinically available techniques (zero-filling and conventional keyhole). In this study, the dynamic keyhole, conventional keyhole and zero-filling methods were compared to full k-space dataset acquisition by quantifying (1) the keyhole size required for central k-space datasets for constant image quality across sixty-four cine-MRI datasets from nine lung cancer patients, (2) the intensity difference between the original and reconstructed images at a constant keyhole size, and (3) the accuracy of tumor motion and area directly measured by tumor autocontouring. For constant image quality, the dynamic keyhole method, conventional keyhole, and zero-filling methods required 22%, 34%, and 49% of the keyhole size (P lung tumor monitoring applications. This study demonstrates that the dynamic keyhole method is a promising technique for clinical applications such as image-guided radiation therapy requiring the MR monitoring of thoracic tumors. Based
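
    A toy numpy sketch of the reconstruction idea (not the authors' implementation; the array shapes, keyhole width and FFT conventions are assumptions): freshly acquired central phase-encode lines replace the centre of a library k-space frame selected for the matching respiratory bin, and the combined k-space is inverse-Fourier-transformed.

        import numpy as np

        def dynamic_keyhole_recon(central_lines, library_frame, keyhole_width):
            """central_lines: (keyhole_width, nx) complex array of newly acquired central k-space rows.
            library_frame: (ny, nx) complex full k-space frame from the matched respiratory bin."""
            ny = library_frame.shape[0]
            start = ny // 2 - keyhole_width // 2
            combined = library_frame.copy()
            combined[start:start + keyhole_width, :] = central_lines  # swap in the fresh keyhole
            image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(combined)))
            return np.abs(image)

        # Illustrative use with random data standing in for real k-space
        rng = np.random.default_rng(1)
        library = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
        fresh = rng.standard_normal((56, 256)) + 1j * rng.standard_normal((56, 256))
        img = dynamic_keyhole_recon(fresh, library, keyhole_width=56)  # 56 rows ~ 22% of 256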

  16. Using OSL dating to quantify rates of Earth surface processes

    Science.gov (United States)

    Rhodes, E. J.; Rittenour, T. M.

    2010-12-01

    In Optically Stimulated Luminescence (OSL), the dating signal is reset when mineral grains are exposed to light or heat, and gradually rebuilds during subsequent burial by interaction with ionising radiation. Quartz and feldspar provide useful OSL signals demonstrating rapid signal reduction in only seconds of light exposure. Age estimates ranging from under 1 year to around 200,000 years can be determined for a wide range of sedimentary contexts, including dunes, marine deposits, fluvial and glacial environments, and recent developments provide the framework for low temperature thermochronometric applications on timescales comparable with rapid climate fluctuations. In this presentation, we explore the range of applications for determining rates of Earth surface processes using OSL. We examine technical limitations, and provide a framework for overcoming current difficulties experienced in several specific regions and contexts. We will focus on OSL dating applications to glacigenic and fluvial records, along with use of the technique in tectonic and paleoseismic contexts. In many ways, these represent the most challenging environments for OSL; rapid high energy deposition is associated with incomplete signal zeroing, and the characteristics of quartz in many of these environments make it difficult to derive precise age estimates using this mineral. We will introduce innovative methods to overcome these limitations, both existing and those under development.

  17. Mathematical methods for diffusion MRI processing

    International Nuclear Information System (INIS)

    Lenglet, C.; Lenglet, C.; Sapiro, G.; Campbell, J.S.W.; Pike, G.B.; Campbell, J.S.W.; Siddiqi, K.; Descoteaux, M.; Haro, G.; Wassermann, D.; Deriche, R.; Wassermann, D.; Anwander, A.; Thompson, P.M.

    2009-01-01

    In this article, we review recent mathematical models and computational methods for the processing of diffusion Magnetic Resonance Images, including state-of-the-art reconstruction of diffusion models, cerebral white matter connectivity analysis, and segmentation techniques. We focus on Diffusion Tensor Images (DTI) and Q-Ball Images (QBI). (authors)

  18. Final report: mathematical method for quantifying the effectiveness of management strategies.

    Energy Technology Data Exchange (ETDEWEB)

    Covan, John Morgan; Sena-Henderson, Lisa; Robinett, Rush D. III (.; ); Brewer, Jeffrey D.; Roginski, Robert J.; Cooper, James Arlin

    2005-10-01

    Large complex teams (e.g., DOE labs) must achieve sustained productivity in critical operations (e.g., weapons and reactor development) while maintaining safety for involved personnel, the public, and physical assets, as well as security for property and information. This requires informed management decisions that depend on tradeoffs of factors such as the mode and extent of personnel protection, potential accident consequences, the extent of information and physical asset protection, and communication with and motivation of involved personnel. All of these interact (and potentially interfere) with each other and must be weighed against financial resources and implementation time. Existing risk analysis tools can successfully treat physical response, component failure, and routine human actions. However, many "soft" factors involving human motivation and interaction among weakly related factors have proved analytically problematic. There has been a need for an effective software tool capable of quantifying these tradeoffs and helping make rational choices. This type of tool, developed during this project, facilitates improvements in safety, security, and productivity, and enables measurement of improvements as a function of resources expended. Operational safety, security, and motivation are significantly influenced by "latent effects", which are pre-occurring influences. One example of these is that an atmosphere of excessive fear can suppress open and frank disclosures, which can in turn hide problems, impede correction, and prevent lessons learned. Another is that a cultural mind-set of commitment, self-responsibility, and passion for an activity is a significant contributor to the activity's success. This project pursued an innovative approach for quantitatively analyzing latent effects in order to link the above types of factors, aggregating available information into quantitative metrics that can contribute to

  19. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

    Full Text Available In order to remove the disturbance caused by shadows and improve the robustness of computer-vision image processing, this paper studies the detection and removal of image shadows. It examines shadow-removal algorithms based on integration, on the illumination surface and on texture, introduces their working principles and implementation methods, and shows through experiments that shadows can be processed effectively.

  20. Quantifying sound quality in loudspeaker reproduction

    NARCIS (Netherlands)

    Beerends, John G.; van Nieuwenhuizen, Kevin; van den Broek, E.L.

    2016-01-01

    We present PREQUEL: Perceptual Reproduction Quality Evaluation for Loudspeakers. Instead of quantifying the loudspeaker system itself, PREQUEL quantifies the overall loudspeakers' perceived sound quality by assessing their acoustic output using a set of music signals. This approach introduces a

  1. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the realtime domain start to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability. That means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures....

  2. Different methods to quantify Listeria monocytogenes biofilm cells showed different profiles in their viability

    Directory of Open Access Journals (Sweden)

    Lizziane Kretli Winkelströter

    2015-03-01

    Full Text Available Listeria monocytogenes is a foodborne pathogen able to adhere to and form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to an abiotic surface, after treatment with sanitizers, by culture method, microscopy and Quantitative Real Time Polymerase Chain Reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion Broth, under agitation at 37 °C for 24 h. The methods selected for this study were based on plate counts, microscopic counts with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient in killing sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective in killing attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that the pre-treatment with EMA was not appropriate, since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, the use of CTC counts was the best method to count viable cells in biofilms.

  3. Quantifying uncertainty in Gulf of Mexico forecasts stemming from uncertain initial conditions

    KAUST Repository

    Iskandarani, Mohamed; Le Hé naff, Matthieu; Srinivasan, Ashwanth; Knio, Omar

    2016-01-01

    Polynomial Chaos (PC) methods are used to quantify the impacts of initial conditions uncertainties on oceanic forecasts of the Gulf of Mexico circulation. Empirical Orthogonal Functions are used as initial conditions perturbations with their modal

  4. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-01-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to "address uncertainties and increase confidence in the projected, full-scale mixing performance and operations" in the Waste Treatment and Immobilization Plant (WTP).

  5. Dendritic network models: Improving isoscapes and quantifying influence of landscape and in-stream processes on strontium isotopes in rivers

    Science.gov (United States)

    Brennan, Sean R.; Torgersen, Christian E.; Hollenbeck, Jeff P.; Fernandez, Diego P.; Jensen, Carrie K.; Schindler, Daniel E.

    2016-05-01

    A critical challenge for the Earth sciences is to trace the transport and flux of matter within and among aquatic, terrestrial, and atmospheric systems. Robust descriptions of isotopic patterns across space and time, called "isoscapes," form the basis of a rapidly growing and wide-ranging body of research aimed at quantifying connectivity within and among Earth's systems. However, isoscapes of rivers have been limited by conventional Euclidean approaches in geostatistics and the lack of a quantitative framework to apportion the influence of processes driven by landscape features versus in-stream phenomena. Here we demonstrate how dendritic network models substantially improve the accuracy of isoscapes of strontium isotopes and partition the influence of hydrologic transport versus local geologic features on strontium isotope ratios in a large Alaska river. This work illustrates the analytical power of dendritic network models for the field of isotope biogeochemistry, particularly for provenance studies of modern and ancient animals.

  6. [The application of Delphi method in improving the score table for the hygienic quantifying and classification of hotels].

    Science.gov (United States)

    Wang, Zi-yun; Liu, Yong-quan; Wang, Hong-bo; Zheng, Yang; Wu, Qi; Yang, Xia; Wu, Yong-wei; Zhao, Yi-ming

    2009-04-01

    The Delphi method and expert panel consultations were used to choose suitable indicators and improve the score table for classifying the hygienic condition of hotels so that it can be used nationwide. A two-round Delphi consultation among 78 experts from 18 provinces, municipalities and autonomous regions was held to choose suitable indicators, which were selected according to the importance assigned by the experts. The experts' average length of service in public health was (21.08 +/- 5.78) years and the average coefficient of expert authority C(r) was 0.89 +/- 0.07. The response rates of the two rounds were 98.72% (77/78) and 100.00% (77/77). The average feedback times were (8.49 +/- 4.48) d and (5.86 +/- 2.28) d, and the difference between the two rounds was statistically significant (t = 4.60, P < 0.05). The score table for hotels was composed of three first-class indicators (hygienic management, hygienic facilities and hygienic practices) and 36 second-class indicators. The weight coefficients of the three first-class indicators were 0.35, 0.34 and 0.31. The Delphi method can be used for large-scale consultation among experts and is well suited to improving the score table for hygienic quantifying and classification.
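
    A minimal sketch of how the reported first-class weights could be applied to produce a composite hygiene score (the sub-scores and the 0-100 scale are hypothetical; only the weights 0.35/0.34/0.31 come from the abstract):

        # Weight coefficients of the three first-class indicators (from the abstract)
        weights = {
            "hygienic management": 0.35,
            "hygienic facilities": 0.34,
            "hygienic practices":  0.31,
        }

        # Hypothetical 0-100 sub-scores aggregated from the 36 second-class indicators
        sub_scores = {
            "hygienic management": 82.0,
            "hygienic facilities": 74.0,
            "hygienic practices":  90.0,
        }

        composite = sum(weights[k] * sub_scores[k] for k in weights)
        print(f"composite hygiene score: {composite:.1f} / 100")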

  7. Activity – based costing method

    Directory of Open Access Journals (Sweden)

    Èuchranová Katarína

    2001-06-01

    Full Text Available Activity-based costing (ABC) is a method of identifying and tracking the operating costs directly associated with processing items. It is the practice of focusing on some unit of output, such as a purchase order or an assembled automobile, and attempting to determine its total cost as precisely as possible based on the fixed and variable costs of the inputs. ABC is used to identify, quantify and analyze the various cost drivers (such as labor, materials, administrative overhead and rework) and to determine which ones are candidates for reduction. A process is any activity that accepts inputs, adds value to these inputs for customers and produces outputs for these customers. The customer may be either internal or external to the organization. Every activity within an organization comprises one or more processes. Inputs, controls and resources are all supplied to the process. A process owner is the person responsible for performing and/or controlling the activity. Tracing costs to individual activities and processes in this way is a relatively recent development, and introducing the method entails significant changes in a firm's processes. The ABC method is an instrument that brings competitive advantages to the firm.
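
    A small sketch of the ABC calculation the abstract describes, under assumed cost pools and driver volumes (all figures are hypothetical): each activity's cost pool is divided by its driver volume to get a rate, and a cost object accumulates cost according to how much of each driver it consumes.

        # Hypothetical activity cost pools and the drivers that cause them
        activities = {
            "order handling": {"pool": 60_000.0,  "driver": "purchase orders", "volume": 4_000},
            "assembly":       {"pool": 250_000.0, "driver": "labor hours",     "volume": 12_500},
            "rework":         {"pool": 30_000.0,  "driver": "defects",         "volume": 600},
        }

        # Cost rate per unit of driver for each activity
        rates = {name: a["pool"] / a["volume"] for name, a in activities.items()}

        # Driver consumption of one cost object (e.g. one product batch) -- illustrative
        consumption = {"order handling": 12, "assembly": 340, "rework": 5}

        batch_cost = sum(rates[name] * qty for name, qty in consumption.items())
        print(rates, batch_cost)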

  8. Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks.

    Science.gov (United States)

    Valdez, Marc Andrew; Jaschke, Daniel; Vargas, David L; Carr, Lincoln D

    2017-12-01

    We quantify the emergent complexity of quantum states near quantum critical points on regular 1D lattices, via complex network measures based on quantum mutual information as the adjacency matrix, in direct analogy to quantifying the complexity of electroencephalogram or functional magnetic resonance imaging measurements of the brain. Using matrix product state methods, we show that network density, clustering, disparity, and Pearson's correlation obtain the critical point for both quantum Ising and Bose-Hubbard models to a high degree of accuracy in finite-size scaling for three classes of quantum phase transitions, Z_{2}, mean field superfluid to Mott insulator, and a Berzinskii-Kosterlitz-Thouless crossover.
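
    A hedged sketch of the network measures named in the abstract, computed from a mutual-information adjacency matrix; here a random symmetric matrix stands in for the quantum mutual information that would come from a matrix product state simulation, so only the bookkeeping, not the physics, is shown.

        import numpy as np
        import networkx as nx

        # Stand-in for a quantum mutual information matrix I_ij (symmetric, zero diagonal)
        rng = np.random.default_rng(2)
        a = rng.random((10, 10))
        mi = (a + a.T) / 2.0
        np.fill_diagonal(mi, 0.0)

        G = nx.from_numpy_array(mi)   # weighted graph, edge weight = mutual information

        n = mi.shape[0]
        density    = mi.sum() / (n * (n - 1))                     # weighted network density
        clustering = nx.average_clustering(G, weight="weight")    # weighted clustering coefficient
        strength   = mi.sum(axis=1)
        disparity  = ((mi / strength[:, None]) ** 2).sum(axis=1)  # per-node disparity Y_i
        assort     = nx.degree_pearson_correlation_coefficient(G, weight="weight")  # Pearson correlation

        print(density, clustering, disparity.mean(), assort)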

  9. Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks

    Science.gov (United States)

    Valdez, Marc Andrew; Jaschke, Daniel; Vargas, David L.; Carr, Lincoln D.

    2017-12-01

    We quantify the emergent complexity of quantum states near quantum critical points on regular 1D lattices, via complex network measures based on quantum mutual information as the adjacency matrix, in direct analogy to quantifying the complexity of electroencephalogram or functional magnetic resonance imaging measurements of the brain. Using matrix product state methods, we show that network density, clustering, disparity, and Pearson's correlation obtain the critical point for both quantum Ising and Bose-Hubbard models to a high degree of accuracy in finite-size scaling for three classes of quantum phase transitions, Z2, mean field superfluid to Mott insulator, and a Berzinskii-Kosterlitz-Thouless crossover.

  10. An alternative method for quantifying coronary artery calcification: the multi-ethnic study of atherosclerosis (MESA).

    Science.gov (United States)

    Liang, C Jason; Budoff, Matthew J; Kaufman, Joel D; Kronmal, Richard A; Brown, Elizabeth R

    2012-07-02

    Extent of atherosclerosis measured by amount of coronary artery calcium (CAC) in computed tomography (CT) has been traditionally assessed using thresholded scoring methods, such as the Agatston score (AS). These thresholded scores have value in clinical prediction, but important information might exist below the threshold, which would have important advantages for understanding genetic, environmental, and other risk factors in atherosclerosis. We developed a semi-automated threshold-free scoring method, the spatially weighted calcium score (SWCS) for CAC in the Multi-Ethnic Study of Atherosclerosis (MESA). Chest CT scans were obtained from 6814 participants in the Multi-Ethnic Study of Atherosclerosis (MESA). The SWCS and the AS were calculated for each of the scans. Cox proportional hazards models and linear regression models were used to evaluate the associations of the scores with CHD events and CHD risk factors. CHD risk factors were summarized using a linear predictor. Among all participants and participants with AS > 0, the SWCS and AS both showed similar strongly significant associations with CHD events (hazard ratios, 1.23 and 1.19 per doubling of SWCS and AS; 95% CI, 1.16 to 1.30 and 1.14 to 1.26) and CHD risk factors (slopes, 0.178 and 0.164; 95% CI, 0.162 to 0.195 and 0.149 to 0.179). Even among participants with AS = 0, an increase in the SWCS was still significantly associated with established CHD risk factors (slope, 0.181; 95% CI, 0.138 to 0.224). The SWCS appeared to be predictive of CHD events even in participants with AS = 0, though those events were rare as expected. The SWCS provides a valid, continuous measure of CAC suitable for quantifying the extent of atherosclerosis without a threshold, which will be useful for examining novel genetic and environmental risk factors for atherosclerosis.
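
    The "hazard ratio per doubling" reported in the abstract corresponds to a Cox model fitted on a log2-transformed score. A sketch with simulated data (requires the lifelines package; the scores, follow-up times and events below are synthetic, not MESA data):

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(3)
        n = 2000
        score = rng.lognormal(mean=3.0, sigma=1.5, size=n)          # synthetic calcium-like scores
        risk = 0.2 * np.log2(score + 1.0)                           # built-in per-doubling effect
        time = rng.exponential(scale=np.exp(3.0 - risk))            # synthetic follow-up times
        event = (rng.random(n) < 0.3).astype(int)                   # synthetic CHD event indicator

        df = pd.DataFrame({"log2_score": np.log2(score + 1.0), "time": time, "event": event})

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        hr_per_doubling = float(np.exp(cph.params_["log2_score"]))  # hazard ratio per doubling of score
        print(hr_per_doubling)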

  11. Quantifying export flows of used electronics: advanced methods to resolve used goods within trade data.

    Science.gov (United States)

    Duan, Huabo; Miller, T Reed; Gregory, Jeremy; Kirchain, Randolph

    2014-03-18

    There is limited convincing quantitative data on the export of used electronics from the United States (U.S.). Thus, we advance a methodology to quantify the export flows of whole units of used electronics from the U.S. using detailed export trade data, and demonstrate the methodology using laptops. Since used electronics are not explicitly identified in export trade data, we hypothesize that exports with a low unit value below a used-new threshold specific to a destination world region are used. The importance of using the most disaggregated trade data set available when resolving used and new goods is illustrated. Two detailed U.S. export trade data sets were combined to arrive at quantities and unit values for each port, mode of transport, month, trade partner country, and trade code. We add rigor to the determination of the used-new threshold by utilizing both the Neighborhood valley-emphasis method (NVEM) and published sales prices. This analysis found that 748 to 1199 thousand units of used laptops were exported from the U.S. in 2010, of which 78-81% are destined for non-OECD countries. Asia was found to be the largest destination of used laptop exports across all used-new threshold methods. Latin American and the Caribbean was the second largest recipient of these exports. North America and Europe also received used laptops from the U.S. Only a small fraction of used laptops was exported to Africa. However, these quantities are lower bound estimates because not all shipments of used laptops may be shipped using the proper laptop trade code. Still, this approach has the potential to give insight into the quantity and destinations of the exports if applied to all used electronics product types across a series of years.
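
    A condensed sketch of the used-new threshold idea (the trade records, unit values and thresholds below are illustrative, not the study's data): each shipment's unit value is compared with a destination-region threshold, and shipments below it are counted as used.

        import pandas as pd

        # Illustrative shipment-level export records (value in USD, quantity in units)
        shipments = pd.DataFrame({
            "region":   ["Asia", "Asia", "Africa", "Latin America", "Europe"],
            "value":    [12_000, 480_000, 3_500, 22_000, 150_000],
            "quantity": [100, 800, 40, 220, 300],
        })
        shipments["unit_value"] = shipments["value"] / shipments["quantity"]

        # Hypothetical used-new unit-value thresholds per destination region (USD/unit)
        thresholds = {"Asia": 250, "Africa": 250, "Latin America": 250, "Europe": 300}

        # Shipments below the regional threshold are classified as used
        shipments["used"] = shipments["unit_value"] < shipments["region"].map(thresholds)
        used_by_region = shipments.loc[shipments["used"]].groupby("region")["quantity"].sum()
        print(used_by_region)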

  12. Research on Comprehensive Evaluation Method for Heating Project Based on Analytic Hierarchy Processing

    Science.gov (United States)

    Han, Shenchao; Yang, Yanchun; Liu, Yude; Zhang, Peng; Li, Siwei

    2018-01-01

    Changing the distributed heat supply system is an effective way to reduce winter haze, so studies on a comprehensive index system and a scientific evaluation method for distributed heat supply projects are essential. Firstly, the influencing factors of different heating modes are studied, and a multi-dimensional index system covering economic, environmental, risk and flexibility aspects is built, with all indexes quantified. Secondly, a comprehensive evaluation method based on the Analytic Hierarchy Process (AHP) is put forward to analyze the proposed index system. Lastly, a case study suggests that supplying heat with electricity has great advantages and promotional value. The comprehensive index system and evaluation method presented in this paper can evaluate distributed heat supply projects effectively and provide scientific support for choosing among them.
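
    A minimal AHP sketch, assuming a hypothetical pairwise comparison matrix over the four dimensions named in the abstract (economic, environmental, risk, flexibility): the principal eigenvector gives the criterion weights and the consistency ratio checks the judgments.

        import numpy as np

        # Hypothetical pairwise comparisons (Saaty 1-9 scale); rows/cols:
        # economic, environmental, risk, flexibility
        A = np.array([
            [1.0, 3.0, 5.0, 7.0],
            [1/3, 1.0, 3.0, 5.0],
            [1/5, 1/3, 1.0, 3.0],
            [1/7, 1/5, 1/3, 1.0],
        ])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                      # criterion weights

        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)          # consistency index
        cr = ci / 0.90                                # random index for n = 4 is 0.90
        print(weights, cr)                            # CR < 0.1 is usually taken as acceptable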

  13. Apparatus and method for X-ray image processing

    International Nuclear Information System (INIS)

    1984-01-01

    The invention relates to a method for X-ray image processing. The radiation passed through the object is transformed into an electric image signal from which the logarithmic value is determined and displayed by a display device. Its main objective is to provide a method and apparatus that renders X-ray images or X-ray subtraction images with strong reduction of stray radiation. (Auth.)

  14. Processing method and device for radioactive liquid waste

    International Nuclear Information System (INIS)

    Matsuo, Toshiaki; Nishi, Takashi; Matsuda, Masami; Yukita, Atsushi.

    1997-01-01

    When radioactive washing liquid wastes contain only suspended particulate ingredients as COD components, the liquid wastes are heated in a first process, for example an adsorption step in which the suspended particulates are adsorbed onto activated carbon, and the suspended particulates are then separated and removed by filtration. When both suspended particulate ingredients and soluble organic ingredients are present, the suspended particulates are separated and removed by the first process and the soluble organic ingredients are then removed by another process, or both are removed by the first process. In the existing method of adding activated carbon and then filtering at normal temperature, the suspended particulates cover the layer of activated carbon formed on the filter paper or fabric and sometimes cause clogging. With the method of the present invention, since no disturbance by the suspended particulates occurs, the COD components can be separated and removed sufficiently without lowering the liquid-waste processing speed. (T.M.)

  15. Device and method for shortening reactor process tubes

    Science.gov (United States)

    Frantz, Charles E.; Alexander, William K.; Lander, Walter E. B.

    1980-01-01

    This disclosure describes a device and method for in situ shortening of nuclear reactor zirconium alloy process tubes which have grown as a result of radiation exposure. An upsetting technique is utilized which involves inductively heating a short band of a process tube with simultaneous application of an axial load sufficient to cause upsetting with an attendant decrease in length of the process tube.

  16. A new method for defining and managing process alarms and for correcting process operation when an alarm occurs

    International Nuclear Information System (INIS)

    Brooks, Robin; Thorpe, Richard; Wilson, John

    2004-01-01

    A new mathematical treatment of alarms that considers them as multi-variable interactions between process variables has provided the first-ever method to calculate values for alarm limits. This has resulted in substantial reductions in false alarms and hence in alarm annunciation rates in field trials. It has also unified alarm management, process control and product quality control into a single mathematical framework so that operations improvement and hence economic benefits are obtained at the same time as increased process safety. Additionally, an algorithm has been developed that advises what changes should be made to Manipulable process variables to clear an alarm. The multi-variable Best Operating Zone at the heart of the method is derived from existing historical data using equation-free methods. It does not require a first-principles process model or an expensive series of process identification experiments. Integral with the method is a new format Process Operator Display that uses only existing variables to fully describe the multi-variable operating space. This combination of features makes it an affordable and maintainable solution for small plants and single items of equipment as well as for the largest plants. In many cases, it also provides the justification for the investments about to be made or already made in process historian systems. Field Trials have been and are being conducted at IneosChlor and Mallinckrodt Chemicals, both in the UK, of the new geometric process control (GPC) method for improving the quality of both process operations and product by providing Process Alarms and Alerts of much higher quality than ever before. The paper describes the methods used, including a simple visual method for Alarm Rationalisation that quickly delivers large sets of Consistent Alarm Limits, and the extension to full Alert Management with highlights from the Field Trials to indicate the overall effectiveness of the method in practice
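
    The paper's multi-variable Best Operating Zone and GPC displays are not reproduced here, but the general idea of deriving alarm limits from historical operating data (rather than setting them by hand) can be illustrated with a deliberately simplified sketch; unlike the paper's method, this envelope treats variables independently, and the percentile choice and data are assumptions.

        import numpy as np

        # Historical data from periods judged to be good operation:
        # rows = time samples, columns = process variables (synthetic stand-in)
        rng = np.random.default_rng(4)
        history = rng.normal(loc=[50.0, 7.2, 115.0], scale=[2.0, 0.15, 4.0], size=(5000, 3))

        # Data-derived alarm limits: a simple percentile envelope per variable
        low_limit  = np.percentile(history, 1, axis=0)
        high_limit = np.percentile(history, 99, axis=0)

        def check_alarms(sample, names=("flow", "pH", "temperature")):
            """Return the variables of a new sample that fall outside the envelope."""
            out_low  = sample < low_limit
            out_high = sample > high_limit
            return [n for n, lo, hi in zip(names, out_low, out_high) if lo or hi]

        print(check_alarms(np.array([50.5, 7.8, 114.0])))   # pH flagged in this example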

  17. A new method for defining and managing process alarms and for correcting process operation when an alarm occurs

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, Robin [Curvaceous Software Limited, P.O. Box 43, Gerrards Cross, Bucks SL98UX (United Kingdom)]. E-mail: enquiries@curvaceous.com; Thorpe, Richard [Curvaceous Software Limited, P.O. Box 43, Gerrards Cross, Bucks SL98UX (United Kingdom); Wilson, John [Curvaceous Software Limited, P.O. Box 43, Gerrards Cross, Bucks SL98UX (United Kingdom)

    2004-11-11

    A new mathematical treatment of alarms that considers them as multi-variable interactions between process variables has provided the first-ever method to calculate values for alarm limits. This has resulted in substantial reductions in false alarms and hence in alarm annunciation rates in field trials. It has also unified alarm management, process control and product quality control into a single mathematical framework so that operations improvement and hence economic benefits are obtained at the same time as increased process safety. Additionally, an algorithm has been developed that advises what changes should be made to Manipulable process variables to clear an alarm. The multi-variable Best Operating Zone at the heart of the method is derived from existing historical data using equation-free methods. It does not require a first-principles process model or an expensive series of process identification experiments. Integral with the method is a new format Process Operator Display that uses only existing variables to fully describe the multi-variable operating space. This combination of features makes it an affordable and maintainable solution for small plants and single items of equipment as well as for the largest plants. In many cases, it also provides the justification for the investments about to be made or already made in process historian systems. Field Trials have been and are being conducted at IneosChlor and Mallinckrodt Chemicals, both in the UK, of the new geometric process control (GPC) method for improving the quality of both process operations and product by providing Process Alarms and Alerts of much higher quality than ever before. The paper describes the methods used, including a simple visual method for Alarm Rationalisation that quickly delivers large sets of Consistent Alarm Limits, and the extension to full Alert Management with highlights from the Field Trials to indicate the overall effectiveness of the method in practice.

  18. A new method for defining and managing process alarms and for correcting process operation when an alarm occurs.

    Science.gov (United States)

    Brooks, Robin; Thorpe, Richard; Wilson, John

    2004-11-11

    A new mathematical treatment of alarms that considers them as multi-variable interactions between process variables has provided the first-ever method to calculate values for alarm limits. This has resulted in substantial reductions in false alarms and hence in alarm annunciation rates in field trials. It has also unified alarm management, process control and product quality control into a single mathematical framework so that operations improvement and hence economic benefits are obtained at the same time as increased process safety. Additionally, an algorithm has been developed that advises what changes should be made to Manipulable process variables to clear an alarm. The multi-variable Best Operating Zone at the heart of the method is derived from existing historical data using equation-free methods. It does not require a first-principles process model or an expensive series of process identification experiments. Integral with the method is a new format Process Operator Display that uses only existing variables to fully describe the multi-variable operating space. This combination of features makes it an affordable and maintainable solution for small plants and single items of equipment as well as for the largest plants. In many cases, it also provides the justification for the investments about to be made or already made in process historian systems. Field Trials have been and are being conducted at IneosChlor and Mallinckrodt Chemicals, both in the UK, of the new geometric process control (GPC) method for improving the quality of both process operations and product by providing Process Alarms and Alerts of much higher quality than ever before. The paper describes the methods used, including a simple visual method for Alarm Rationalisation that quickly delivers large sets of Consistent Alarm Limits, and the extension to full Alert Management with highlights from the Field Trials to indicate the overall effectiveness of the method in practice.

  19. Development and application of an eDNA method to detect and quantify a pathogenic parasite in aquatic ecosystems.

    Science.gov (United States)

    Huver, J R; Koprivnikar, J; Johnson, P T J; Whyard, S

    2015-06-01

    Approaches based on organismal DNA found in the environment (eDNA) have become increasingly utilized for ecological studies and biodiversity inventories as an alternative to traditional field survey methods. Such DNA-based techniques have largely been used to establish the presence of free-living organisms, but have much potential for detecting and quantifying infectious agents in the environment, which is necessary to evaluate disease risk. We developed an eDNA method to examine the distribution and abundance of the trematode Ribeiroia ondatrae, a pathogenic parasite known to cause malformations in North American amphibians. In addition to comparing this eDNA approach to classical host necropsy, we examined the detectability of R. ondatrae in water samples subject to different degradation conditions (time and temperature). Our test exhibited high specificity and sensitivity to R. ondatrae, capable of detecting as little as 14 fg (femtograms) of this parasite's DNA (1/2500th of a single infectious stage) from field water samples. Compared to our results from amphibian host necropsy, quantitative PCR was ~90% concordant with respect to R. ondatrae detection from 15 field sites and was also a significant predictor of host infection abundance. DNA was still detectable in lab samples after 21 days at 25°C, indicating that our method is robust to field conditions. By comparing the advantages and disadvantages of eDNA vs. traditional survey methods for determining pathogen presence and abundance in the field, we found that the lower cost and effort associated with eDNA approaches provide many advantages. The development of alternative tools is critical for disease ecology, as wildlife management and conservation efforts require reliable establishment and monitoring of pathogens.
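
    Quantification from qPCR data of this kind typically runs through a standard curve; a hedged sketch (the standard dilutions and Cq values below are invented for illustration and are not the assay's actual calibration):

        import numpy as np

        # Hypothetical dilution series of parasite DNA standards
        copies_std = np.array([1e1, 1e2, 1e3, 1e4, 1e5])        # copies per reaction
        cq_std     = np.array([34.1, 30.6, 27.2, 23.8, 20.4])   # measured quantification cycles

        slope, intercept = np.polyfit(np.log10(copies_std), cq_std, 1)
        efficiency = 10.0 ** (-1.0 / slope) - 1.0               # amplification efficiency

        def copies_from_cq(cq):
            """Interpolate the copy number of an unknown sample from its Cq value."""
            return 10.0 ** ((cq - intercept) / slope)

        print(f"efficiency ~ {efficiency:.2%}, unknown at Cq 28.5 ~ {copies_from_cq(28.5):.0f} copies")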

  20. The accountability imperative for quantifying the uncertainty of emission forecasts: evidence from Mexico

    DEFF Research Database (Denmark)

    Puig, Daniel; Morales-Nápoles, Oswaldo; Bakhtiari, Fatemeh

    2017-01-01

    forecasting approaches can reflect prevailing uncertainties. We apply a transparent and replicable method to quantify the uncertainty associated with projections of gross domestic product growth rates for Mexico, a key driver of GHG emissions in the country. We use those projections to produce probabilistic forecasts of GHG emissions for Mexico. We contrast our probabilistic forecasts with Mexico's governmental deterministic forecasts. We show that, because they fail to reflect such key uncertainty, deterministic forecasts are ill-suited for use in target-setting processes. We argue that (i) guidelines should be agreed upon, to ensure that governmental forecasts meet certain minimum transparency and quality standards, and (ii) governments should be held accountable for the appropriateness of the forecasting approach applied to prepare governmental forecasts, especially when those forecasts are used to derive...

  1. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.
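
    Of the topics listed, sample entropy is compact enough to sketch; a straightforward (unoptimized) reference implementation under the usual defaults m = 2 and r = 0.2 times the signal's standard deviation:

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            """SampEn(m, r) of a 1-D series; r = r_frac * std(x). O(N^2) reference version."""
            x = np.asarray(x, dtype=float)
            r = r_frac * np.std(x)
            N = len(x)

            def match_count(length):
                # N - m templates for both lengths, self-matches excluded (standard convention)
                templates = np.array([x[i:i + length] for i in range(N - m)])
                count = 0
                for i in range(len(templates)):
                    dist = np.max(np.abs(templates - templates[i]), axis=1)
                    count += int(np.sum(dist <= r)) - 1
                return count

            B = match_count(m)
            A = match_count(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else float("inf")

        # A noisy sine is more regular (lower SampEn) than white noise
        t = np.linspace(0, 20 * np.pi, 1000)
        print(sample_entropy(np.sin(t) + 0.1 * np.random.randn(1000)),
              sample_entropy(np.random.randn(1000)))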

  2. Method for quantifying NSAIDs and clofibric acid in aqueous samples, lumpfish (Cyclopterus lumpus) roe, and zebrafish (Danio rerio) eleutheroembryos and evaluation of their bioconcentration in zebrafish eleutheroembryos.

    Science.gov (United States)

    Molina-Fernandez, N; Perez-Conde, C; Rainieri, S; Sanz-Landaluze, J

    2017-04-01

    Pharmaceuticals such as nonsteroidal anti-inflammatory drugs (NSAIDs) and lipid regulators are being repeatedly detected at low concentrations (pg·mL⁻¹ to ng·mL⁻¹) in the environment. A large fraction of these compounds are ionizable. Ionized compounds show different physico-chemical properties and environmental behavior in comparison to their neutral analogs; as a consequence, the quantification methods currently available, based on the neutral molecules, might not be suitable to detect the corresponding ionized compounds. To overcome this problem, we developed a specific analytical method to quantify NSAIDs and lipid regulators (i.e., ibuprofen, diclofenac, naproxen, and clofibric acid) and their ionized compounds. This method is based on three steps: (1) the extraction of the organic compounds with an organic solvent assisted with an ultrasonic probe, (2) the cleaning of the extracts with a dispersive SPE with C18, and (3) the determination of the chemical compounds by GC-MS (prior derivatization of the analytes). We demonstrated that the proposed method can successfully quantify the pharmaceuticals and their ionized compounds in aqueous samples, lumpfish eggs, and zebrafish eleutheroembryos. Additionally, it allows the extraction and the cleanup of extracts from small samples (0.010 g of wet weight in pools of 20 larvae) and complex matrixes (due to high lipid content) and can be used as a basis for bioaccumulation assays performed with zebrafish eleutheroembryos as an alternative to OECD test 305.

  3. Non-filtration method of processing of uranium ores

    International Nuclear Information System (INIS)

    Laskorin, B.N.; Vodolazov, L.I.; Tokarev, N.N.; Vyalkov, V.I.; Goldobina, V.A.; Gosudarstvennyj Komitet po Ispol'zovaniyu Atomnoj Ehnergii SSSR, Moscow)

    1977-01-01

    The development of the non-filtration sorption method has led to the sorption leaching process and the extraction desorption process, which have made it possible to intensify the processing of uranium ore and to greatly improve the technical and economic indexes by eliminating the complex procedure of multiple filtration and re-pulping of cakes. The method makes it possible to process lower-grade uranium raw materials and at the same time to extract valuable components such as molybdenum, vanadium and copper. Extensive industrial experience has been accumulated in the sorption of dense pulp with a solid-to-liquid phase ratio of 1:1. This has led to a 1.5-3.0-fold increase in plant productivity, a 5-10% increase in uranium extraction, a two- to three-fold increase in the productivity of the main workers, and a several-fold decrease in the consumption of reagents, auxiliary materials, electric energy and steam. The developed technology is continuous in all its steps, with full automation of the process using the simplest and most readily available means of regulation and control. The process uses high-productivity, high-power apparatus with mechanical and pneumatic mixing for high-density pulps, and the KDS, KDZS, KNSPR and PIK columns for counter-current regeneration of the saturated sorbent. The use of fine-grained hydrophilic ion-exchange resins in a hydrophobized state is envisaged [ru

  4. Advances in Packaging Methods, Processes and Systems

    Directory of Open Access Journals (Sweden)

    Nitaigour Premchand Mahalik

    2014-10-01

    Full Text Available The food processing and packaging industry is becoming a multi-trillion dollar global business. The reason is that the recent increase in incomes in traditionally less economically developed countries has led to a rise in standards of living that includes a significantly higher consumption of packaged foods. As a result, food safety guidelines have been more stringent than ever. At the same time, the number of research and educational institutions—that is, the number of potential researchers and stakeholders—has increased in the recent past. This paper reviews recent developments in food processing and packaging (FPP, keeping in view the aforementioned advancements and bearing in mind that FPP is an interdisciplinary area in that materials, safety, systems, regulation, and supply chains play vital roles. In particular, the review covers processing and packaging principles, standards, interfaces, techniques, methods, and state-of-the-art technologies that are currently in use or in development. Recent advances such as smart packaging, non-destructive inspection methods, printing techniques, application of robotics and machineries, automation architecture, software systems and interfaces are reviewed.

  5. Quantitative methods for studying design protocols

    CERN Document Server

    Kan, Jeff WT

    2017-01-01

    This book is aimed at researchers and students who would like to engage in and deepen their understanding of design cognition research. The book presents new approaches for analyzing design thinking and proposes methods of measuring design processes. These methods seek to quantify design issues and design processes that are defined based on notions from the Function-Behavior-Structure (FBS) design ontology and from linkography. A linkograph is a network of linked design moves or segments. FBS ontology concepts have been used in both design theory and design thinking research and have yielded numerous results. Linkography is one of the most influential and elegant design cognition research methods. In this book Kan and Gero provide novel and state-of-the-art methods of analyzing design protocols that offer insights into design cognition by integrating segmentation with linkography by assigning FBS-based codes to design moves or segments and treating links as FBS transformation processes. They propose and test ...

  6. A comparative study of 2 computer-assisted methods of quantifying brightfield microscopy images.

    Science.gov (United States)

    Tse, George H; Marson, Lorna P

    2013-10-01

    Immunohistochemistry continues to be a powerful tool for the detection of antigens. There are several commercially available software packages that allow image analysis; however, these can be complex, require a relatively high level of computer skills, and can be expensive. We compared 2 commonly available software packages, Adobe Photoshop CS6 and ImageJ, in their ability to quantify percentage positive area after picrosirius red (PSR) staining and 3,3'-diaminobenzidine (DAB) staining. On analysis of DAB-stained B cells in the mouse spleen, with a biotinylated primary rat anti-mouse-B220 antibody, there was no significant difference between converting brightfield microscopy images to binary images and measuring black and white pixels using ImageJ, and measuring a range of brown pixels with Photoshop (Student t test, P=0.243, correlation r=0.985). When analyzing mouse kidney allografts stained with PSR, Photoshop achieved a greater interquartile range while maintaining a lower 10th percentile value compared with analysis with ImageJ. A lower 10th percentile reflects that Photoshop analysis is better at analyzing tissues with low levels of positive pixels, which is particularly relevant for control tissues or negative controls, whereas ImageJ analysis of the same images would result in spuriously high levels of positivity. Furthermore, comparing the 2 methods by Bland-Altman plot revealed that the 2 methodologies did not agree when measuring images with a higher percentage of positive staining, and correlation was poor (r=0.804). We conclude that for computer-assisted analysis of images of DAB-stained tissue there is no difference between using Photoshop and ImageJ. However, for analysis of color images where differentiation into a binary pattern is not easy, such as with PSR, Photoshop is superior at identifying higher levels of positivity while maintaining differentiation of low levels of positive staining.
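
    The two measurements being compared (a binary black/white count versus a brown-pixel range) reduce to simple array operations; a sketch with assumed thresholds (the cut-off values are illustrative, not those used in Photoshop or ImageJ):

        import numpy as np

        def percent_positive_binary(gray, threshold=128):
            """ImageJ-style: binarize a grayscale image and report % of 'positive' (dark) pixels."""
            positive = gray < threshold
            return 100.0 * positive.sum() / positive.size

        def percent_positive_brown(rgb):
            """Photoshop-style: count pixels falling in a crude 'brown' DAB range (assumed cut-offs)."""
            r = rgb[..., 0].astype(int)
            g = rgb[..., 1].astype(int)
            b = rgb[..., 2].astype(int)
            brown = (r > 60) & (r > g + 20) & (g > b)
            return 100.0 * brown.sum() / brown.size

        # Illustrative use with a random image standing in for a scanned section
        rng = np.random.default_rng(5)
        img = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
        print(percent_positive_binary(img.mean(axis=2)), percent_positive_brown(img))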

  7. Quantifying intervertebral disc mechanics: a new definition of the neutral zone

    NARCIS (Netherlands)

    Smit, Theodoor H.; van Tunen, Manon Slm; van der Veen, Albert J.; Kingma, Idsart; van Dieën, Jaap H.

    2011-01-01

    The neutral zone (NZ) is the range over which a spinal motion segment (SMS) moves with minimal resistance. Clear as this may seem, the various methods to quantify NZ described in the literature depend on rather arbitrary criteria. Here we present a stricter, more objective definition. To

  8. GRAPH THEORY APPROACH TO QUANTIFY UNCERTAINTY OF PERFORMANCE MEASURES

    Directory of Open Access Journals (Sweden)

    Sérgio D. Sousa

    2015-03-01

    Full Text Available In this work, the performance measurement process is studied to quantify the uncertainty induced in the resulting performance measure (PM). To that end, the causes of uncertainty are identified by analysing the activities undertaken in the three following stages of the performance measurement process: design and implementation, data collection and recording, and determination and analysis. A quantitative methodology based on graph theory and on the sources of uncertainty of the performance measurement process is used to calculate an uncertainty index to evaluate the level of uncertainty of a given PM (or key performance indicator). An application example is presented. The quantification of PM uncertainty could contribute to a better representation of the risk associated with a given decision and also help to improve the PM, increasing its precision and reliability.

  9. Processing of low-quality bauxite feedstock by thermochemistry-Bayer method

    Directory of Open Access Journals (Sweden)

    О. А. Дубовиков

    2016-11-01

    The modern production of aluminum, which by its global output ranks first among the non-ferrous metals, includes three main stages: ore extraction, its processing into alumina and, finally, the production of primary aluminum. Alumina production from bauxites, the primary raw material in the alumina industry, is based on two main methods: the Bayer method and the sintering method developed in Russia under the lead of academician Nikolay Semenovich Kurnakov. Alumina production by the Bayer method is more cost effective, but imposes higher requirements on the quality of the bauxite feedstock. A great deal of research has been carried out on low-quality bauxites, focusing firstly on finding ways to enrich the feedstock, secondly on improving the combined sequential Bayer-sintering method and thirdly on developing new hydrometallurgical ways of processing bauxites. Mechanical methods of bauxite enrichment have not yet brought any positive outcome, and the development of a new hydrometallurgical high-alkaline autoclave process has faced significant hardware difficulties that have not been addressed so far. For efficient processing of such low-quality bauxite feedstock it is suggested to use the universal thermochemistry-Bayer method, developed in St. Petersburg Mining University under the lead of Nikolay Ivanovich Eremin, which allows different substandard bauxite feedstocks to be processed at a cost competitive with the sintering method and combined methods. The main stages of the thermochemistry-Bayer method are thermal activation of the feedstock, its further desiliconization with alkaline solution and leaching of the resultant bauxite product by the Bayer method. Despite high energy consumption at the baking stage, it allows the low-quality bauxite feedstock to be conditioned by neutralizing a variety of technologically harmful impurities such as organic matter, sulfide sulfur, carbonates, and at the

  10. Quantified social and aesthetic values in environmental decision making

    International Nuclear Information System (INIS)

    Burnham, J.B.; Maynard, W.S.; Jones, G.R.

    1975-01-01

    A method has been devised for quantifying the social criteria to be considered when selecting a nuclear design and/or site option. Community judgement of social values is measured directly and indirectly on eight siting factors. These same criteria are independently analysed by experts using techno-economic methods. The combination of societal and technical indices yields a weighted score for each alternative. The aesthetic impact was selected as the first to be quantified. A visual quality index was developed to measure the change in the visual quality of a viewscape caused by construction of a facility. Visual quality was measured by reducing it to its component parts - intactness, vividness and unity - and rating each part with and without the facility. Urban planners and landscape architects used the technique to analyse three viewscapes, testing three different methods on each viewscape. The three methods used the same aesthetic elements but varied in detail and depth. As expected, the technique with the greatest analytical detail (and least subjective judgement) was the most reliable method. Social value judgements were measured by social psychologists applying a questionnaire technique, using a number of design and site options to illustrate the range of criteria. Three groups of predictably different respondents - environmentalists, high-school students and businessmen - were selected. The three groups' response patterns were remarkably similar, though businessmen were consistently more biased towards nuclear power than were environmentalists. Correlational and multiple regression analyses provided indirect estimates of the relative importance of each impact category. Only the environmentalists showed a high correlation between the two methods. This is partially explained by their interest and knowledge. Also, the regression analysis encounters problems when small samples are used, and the environmental sample was considerably larger than the other two

  11. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows...... updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens’s data are used to demonstrate performance of this method in updating parameters...... of the chicken processing line model....
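
    The sketch below illustrates the general idea of Bayesian updating of an expert-elicited parameter with microbiological count data, using a simple Beta-Binomial conjugate pair; the actual chicken processing line model and the Berrang and Dickens data are far richer than this toy example.

        # Minimal illustration of Bayesian updating of a single model parameter
        # (a transfer probability) with count data; the actual chicken-processing
        # model and data are far richer than this Beta-Binomial sketch.
        from scipy import stats

        # Expert judgment encoded as a Beta prior on a transfer probability p
        alpha_prior, beta_prior = 2.0, 8.0          # prior mean 0.2

        # Hypothetical microbiological data: k positive transfers in n trials
        k, n = 7, 20

        # Conjugate update: Beta(alpha + k, beta + n - k)
        posterior = stats.beta(alpha_prior + k, beta_prior + (n - k))

        print(f"posterior mean of p: {posterior.mean():.3f}")
        print(f"95% credible interval: {posterior.interval(0.95)}")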

  12. Method of parallel processing in SANPO real time system

    International Nuclear Information System (INIS)

    Ostrovnoj, A.I.; Salamatin, I.M.

    1981-01-01

    A method of parallel processing in the SANPO real-time system is described. Algorithms for data accumulation and preliminary processing, implemented in this system as parallel processes using a specialized high-level programming language, are described, as is the hierarchy of elementary processes. The approach provides synchronization of concurrent processes without semaphores. The developed means are applied to experiment automation systems based on SM-3 minicomputers [ru

  13. Global approach for the validation of an in-line Raman spectroscopic method to determine the API content in real-time during a hot-melt extrusion process.

    Science.gov (United States)

    Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E

    2017-08-15

    Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have been expanding rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real-time. The method was validated based on a univariate and a multivariate approach, and the analytical performances of the obtained models were compared. Moreover, on the one hand, in-line data were correlated with the real API concentration present in the sample, quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the concentration based on the weighed amounts of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at 5%. This method meets the requirements of the European Pharmacopoeia for the uniformity of content of single-dose preparations. The validation proves that future results will be within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its ability to be used in routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
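
    The following sketch shows what a multivariate calibration of the kind compared in the study looks like in code: a partial least squares model predicting API content from spectra. The spectra, concentrations and number of latent variables are synthetic assumptions, not the validated HME data set.

        # Sketch of a multivariate (PLS) calibration predicting API content from
        # synthetic Raman-like spectra; the real method was validated on measured
        # spectra against an off-line reference method.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n_samples, n_wavenumbers = 60, 500
        api = rng.uniform(5, 25, n_samples)                    # % API (hypothetical)
        peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 250) / 10) ** 2)
        spectra = np.outer(api, peak) + rng.normal(0, 0.5, (n_samples, n_wavenumbers))

        pls = PLSRegression(n_components=3)
        pls.fit(spectra[:40], api[:40])                        # calibration set
        pred = pls.predict(spectra[40:]).ravel()               # validation set
        rmse = np.sqrt(np.mean((pred - api[40:]) ** 2))
        print(f"RMSE on held-out samples: {rmse:.2f} %")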

  14. A Comparison of Advanced Regression Algorithms for Quantifying Urban Land Cover

    Directory of Open Access Journals (Sweden)

    Akpona Okujeni

    2014-07-01

    Quantitative methods for mapping sub-pixel land cover fractions are gaining increasing attention, particularly with regard to upcoming hyperspectral satellite missions. We evaluated five advanced regression algorithms combined with synthetically mixed training data for quantifying urban land cover from HyMap data at 3.6 and 9 m spatial resolution. Methods included support vector regression (SVR), kernel ridge regression (KRR), artificial neural networks (NN), random forest regression (RFR) and partial least squares regression (PLSR). Our experiments demonstrate that both kernel methods, SVR and KRR, yield high accuracies for mapping complex urban surface types, i.e., rooftops, pavements, grass- and tree-covered areas. SVR and KRR models proved to be stable with regard to the spatial and spectral differences between both images and effectively utilized the higher complexity of the synthetic training mixtures for improving estimates for coarser resolution data. Observed deficiencies mainly relate to known problems arising from spectral similarities or shadowing. The remaining regressors either revealed erratic (NN) or limited (RFR and PLSR) performances when comprehensively mapping urban land cover. Our findings suggest that the combination of kernel-based regression methods, such as SVR and KRR, with synthetically mixed training data is well suited for quantifying urban land cover from imaging spectrometer data at multiple scales.
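
    The sketch below reproduces the general experimental idea with synthetic data: kernel regressors (SVR and KRR) trained on synthetically mixed spectra to estimate sub-pixel cover fractions. Endmember spectra, noise levels and hyperparameters are placeholders rather than the HyMap setup.

        # Sketch of kernel regressors trained on synthetically mixed spectra to
        # estimate sub-pixel cover fractions; synthetic data only, not the HyMap
        # imagery or urban endmember libraries used in the study.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(2)
        n_bands, n_train, n_test = 100, 300, 100
        endmembers = rng.random((2, n_bands))        # e.g. roof vs. vegetation spectra

        def mix(n):
            frac = rng.random(n)                     # target: fraction of class 1
            x = np.outer(frac, endmembers[0]) + np.outer(1 - frac, endmembers[1])
            return x + rng.normal(0, 0.01, x.shape), frac

        x_tr, y_tr = mix(n_train)
        x_te, y_te = mix(n_test)

        for name, model in [("SVR", SVR(kernel="rbf", C=10, epsilon=0.01)),
                            ("KRR", KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1))]:
            mae = mean_absolute_error(y_te, model.fit(x_tr, y_tr).predict(x_te))
            print(f"{name} MAE on cover fractions: {mae:.3f}")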

  15. Integration of UAV and ground-based Structure from Motion with Multi-View Stereo photogrammetry and hydrological data to quantify hillslope gully erosion processes in tropical savanna

    Science.gov (United States)

    Koci, J.; Jarihani, B.; Sidle, R. C.; Wilkinson, S. N.; Bartley, R.

    2017-12-01

    Structure from Motion with Multi-View Stereo (SfM-MVS) photogrammetry provides a cost-effective method of rapidly acquiring high resolution (sub-meter) topographic data, but is rarely used in hydrogeomorphic investigations of gully erosion. This study integrates high resolution topographic and land cover data derived from an unmanned aerial vehicle (UAV) and ground-based SfM-MVS photogrammetry, with rainfall and gully discharge data, to elucidate hydrogeomorphic processes driving hillslope gully erosion. The study is located within a small (13 km2) dry-tropical savanna catchment within the Burdekin River Basin, northeast Australia, which is a major contributor of sediments and nutrients to the Great Barrier Reef World Heritage Area. A pre-wet season UAV survey covered an entire hillslope gully system (0.715 km2), and was used to derive topography, ground cover and hydrological flow pathways in the gully contributing area. Ground-based surveys of a single active gully (650 m2) within the broader hillslope are compared between pre- and post-wet season conditions to quantify gully geomorphic change. Rainfall, recorded near the head of the gully, is related to gully discharge during sporadic storm events. The study provides valuable insights into the relationships among hydrological flow pathways, ground cover, rainfall and runoff, and spatial patterns of gully morphologic change. We demonstrate how UAV and ground-based SfM-MVS photogrammetry can be used to improve hydrogeomorphic process understanding and aid in the modelling and management of hillslope gully systems.

  16. Three-dimensional laser scanning technique to quantify aggregate and ballast shape properties

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2013-06-01

    methods towards more accurate and automated techniques to quantify aggregate shape properties. This paper validates a new flakiness index equation using three-dimensional (3-D) laser scanning data of aggregate and ballast materials obtained from...

  17. A simple and sensitive approach to quantify methyl farnesoate in whole arthropods by matrix-solid phase dispersion and gas chromatography-mass spectrometry.

    Science.gov (United States)

    Montes, Rosa; Rodil, Rosario; Neuparth, Teresa; Santos, Miguel M; Cela, Rafael; Quintana, José Benito

    2017-07-28

    Methyl farnesoate (MF) is an arthropod hormone that plays a key role in the physiology of several arthropod classes, being implicated in biological processes such as molting and reproduction. The development of an analytical technique to quantify the levels of this compound in biological tissues can be of major importance for the field of aquaculture/apiculture conservation and in endocrine disruption studies. Therefore, the aim of this study was to develop a simple and sensitive method to measure native levels of MF in the tissue of three representative species from different arthropod classes with environmental and/or economic importance. Thus, a new approach using whole organisms and the combination of matrix solid-phase dispersion with gas chromatography coupled to mass spectrometry was developed. This method allows quantifying endogenous MF at low levels (LOQs in the 1.2-3.1 ng/g range) in three arthropod species, and could be expanded to additional arthropod classes. The levels found ranged between 2 and 12 ng/g depending on the studied species and gender. The overall recovery of the method was evaluated and ranged between 69 and 96%. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. The quantified relationship

    NARCIS (Netherlands)

    Danaher, J.; Nyholm, S.R.; Earp, B.

    2018-01-01

    The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this

  19. Systems analysis as a tool for optimal process strategy

    International Nuclear Information System (INIS)

    Ditterich, K.; Schneider, J.

    1975-09-01

    For the description and the optimal treatment of complex processes, the methods of Systems Analysis are used as the most promising approach in recent times. In general every process should be optimised with respect to reliability, safety, economy and environmental pollution. In this paper the complex relations between these general optimisation postulates are established in qualitative form. These general trend relations have to be quantified for every particular system studied in practice

  20. Quantifying the uncertainty in heritability.

    Science.gov (United States)

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.

  1. Quantifying greenhouse gas emissions from waste treatment facilities

    DEFF Research Database (Denmark)

    Mønster, Jacob

    to be installed in any vehicle, thereby enabling measurements wherever there were roads. The validation of the measurement method was done by releasing a controlled amount of methane and quantifying the emission using the release of tracer gas. The validation test showed that even in areas with large...... treatment plants. The PhD study reviewed and evaluated previously used methane measurement methods and found the tracer dispersion method promising. The method uses release of tracer gas and mobile equipment with high analytical sensitivity to measure the downwind plumes of methane and tracer...... ranged from 10 to 92 kg per hour and was found to change on even short timescales of a few hours. The periods with large emissions correlated with a drop in methane utilization, indicating that emissions came from the digester tanks or gas storage/use. The measurements indicated that the main emissions...
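
    The tracer dispersion calculation itself is compact: the facility emission is the known tracer release rate scaled by the ratio of the cross-plume integrated methane and tracer mixing ratios and by their molar mass ratio. A minimal sketch with hypothetical transect values follows; it is not the instrumentation or data handling used in the thesis.

        # Sketch of the tracer-dispersion calculation: facility methane emission equals
        # the known tracer release rate scaled by the ratio of cross-plume integrated
        # methane to tracer mixing ratios and by the molar mass ratio. Values are
        # hypothetical; the tracer gas and molar mass are assumptions.
        import numpy as np

        M_CH4, M_TRACER = 16.04, 102.0      # g/mol (tracer molar mass assumed)
        q_tracer_kg_h = 2.0                 # known, controlled tracer release rate

        # Background-corrected mixing ratios (ppb) sampled while crossing the plume
        ch4_ppb    = np.array([0, 5, 30, 80, 120, 90, 40, 10, 0], float)
        tracer_ppb = np.array([0, 1,  6, 15,  22, 17,  8,  2, 0], float)
        distance_m = np.linspace(0, 400, ch4_ppb.size)   # position along the transect

        ratio = np.trapz(ch4_ppb, distance_m) / np.trapz(tracer_ppb, distance_m)
        q_ch4_kg_h = q_tracer_kg_h * ratio * (M_CH4 / M_TRACER)
        print(f"estimated methane emission: {q_ch4_kg_h:.1f} kg/h")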

  2. Post-Processing in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars Vabbersgaard

    The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has been shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools...... such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method......-point method. The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes, each represented by a simple shape; here quadrilaterals are chosen for the presented

  3. Effect of thermal processing methods on the proximate composition ...

    African Journals Online (AJOL)

    The nutritive value of raw and thermally processed castor oil seed (Ricinus communis) was investigated using the following parameters: proximate composition, gross energy, mineral constituents and ricin content. Three thermal processing methods, namely toasting, boiling and soaking-and-boiling, were used in the processing of the ...

  4. Quantifying the effect of sorption and bioavailability of hydrophobic organic contaminants

    International Nuclear Information System (INIS)

    Zhang, W.; Bouwer, E.; Cunningham, A.

    1994-01-01

    In-situ bioremediation has been applied successfully at a few sites. Several restrictions presently exist which could greatly limit the effectiveness of this promising technology. Hydrophobic organic contaminants tend to sorb onto soil. However, microorganisms are most effective in utilizing substrates from the aqueous phase. Sorption tends to separate the direct contact between microorganisms and contaminants necessary for biodegradation to occur. A series of experiments, which represented scenarios with fast sorption/desorption, slow sorption/desorption, mass transfer across boundary layer and mass transfer within attached microorganisms (biofilm), was conducted to demonstrate the concentration effect and the mass transfer effect. A method has been developed to quantify bioavailability of organic contaminants in aquatic environments. The Bioavailability Factor (Bf), a dimensionless parameter derived from mathematical models and verified by experimental results, has been formulated to describe the impact of equilibrium sorption, nonequilibrium sorption, and mass transfer processes on the rate and extent of biodegradation of petroleum hydrocarbons

  5. Evaluation of sampling methods to quantify abundance of hardwoods and snags within conifer-dominated riparian zones

    Science.gov (United States)

    Theresa Marquardt; Hailemariam Temesgen; Paul D. Anderson; Bianca. Eskelson

    2012-01-01

    Six sampling alternatives were examined for their ability to quantify selected attributes of snags and hardwoods in conifer-dominated riparian areas of managed headwater forests in western Oregon. Each alternative was simulated 500 times at eight headwater forest locations based on a 0.52-ha square stem map. The alternatives were evaluated based on how well they...

  6. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Finally, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  7. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    Science.gov (United States)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

    Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also to perform specific behavioral tracking when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, in particular the data reduction algorithms and logic used to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation
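
    One common reduction step for such data is mapping raw gaze samples to areas of interest (AOIs) and reporting dwell-time percentages. The sketch below shows that step only; the AOI boxes, sample rate and gaze data are hypothetical and do not represent the NASA Langley flight deck layout or algorithms.

        # Sketch of a typical eye-tracking data reduction step: map raw gaze samples
        # to areas of interest (AOIs) and report dwell-time percentages. AOI boxes,
        # sample rate and data are hypothetical.
        import numpy as np

        AOIS = {                       # (x_min, y_min, x_max, y_max) in screen pixels
            "HUD":         (400, 100, 880, 500),
            "PFD":         (100, 600, 500, 900),
            "Nav display": (600, 600, 1000, 900),
        }

        def dwell_percentages(gaze_xy):
            """Percentage of gaze samples falling inside each AOI."""
            counts = {name: 0 for name in AOIS}
            for x, y in gaze_xy:
                for name, (x0, y0, x1, y1) in AOIS.items():
                    if x0 <= x <= x1 and y0 <= y <= y1:
                        counts[name] += 1
                        break
            n = len(gaze_xy)
            return {name: 100.0 * c / n for name, c in counts.items()}

        rng = np.random.default_rng(3)
        gaze = rng.uniform(0, 1000, size=(6000, 2))    # e.g. 100 s of 60 Hz samples
        print(dwell_percentages(gaze))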

  8. Development of key indicators to quantify the health impacts of climate change on Canadians

    OpenAIRE

    Cheng, June J.; Berry, Peter

    2013-01-01

    Objectives This study aimed at developing a list of key human health indicators for quantifying the health impacts of climate change in Canada. Methods A literature review was conducted in OVID Medline to identify health morbidity and mortality indicators currently used to quantify climate change impacts. Public health frameworks and other studies of climate change indicators were reviewed to identify criteria with which to evaluate the list of proposed key indicators and a rating scale was d...

  9. An object-oriented description method of EPMM process

    Science.gov (United States)

    Jiang, Zuo; Yang, Fan

    2017-06-01

    In order to use mature object-oriented tools and languages in software process modelling, and to make software process models accord more closely with industrial standards, it is necessary to study object-oriented modelling of software processes. Based on the formal process definition in EPMM, and considering that Petri nets are primarily a formal modelling tool, this paper combines Petri net modelling with object-oriented modelling ideas and provides an implementation method to convert Petri-net-based EPMM models into object models based on object-oriented description.
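
    The sketch below shows, in generic form, how Petri-net concepts (places, transitions, firing rules) map naturally onto object classes; the EPMM metamodel described in the paper will differ in its details.

        # Generic sketch of mapping Petri-net concepts onto object classes, in the
        # spirit of the conversion described (the paper's EPMM metamodel differs in detail).
        class Place:
            def __init__(self, name, tokens=0):
                self.name, self.tokens = name, tokens

        class Transition:
            def __init__(self, name, inputs, outputs):
                self.name, self.inputs, self.outputs = name, inputs, outputs

            def enabled(self):
                return all(p.tokens > 0 for p in self.inputs)

            def fire(self):
                if not self.enabled():
                    raise RuntimeError(f"{self.name} is not enabled")
                for p in self.inputs:
                    p.tokens -= 1
                for p in self.outputs:
                    p.tokens += 1

        # Tiny software-process fragment: design done -> review -> ready to code
        design, coding = Place("design done", tokens=1), Place("ready to code")
        review = Transition("review", inputs=[design], outputs=[coding])
        review.fire()
        print(coding.tokens)   # 1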

  10. Validation of a new device to quantify groundwater-surface water exchange

    Science.gov (United States)

    Cremeans, Mackenzie M.; Devlin, J. F.

    2017-11-01

    Distributions of flow across the groundwater-surface water interface should be expected to be as complex as the geologic deposits associated with stream or lake beds and their underlying aquifers. In these environments, the conventional Darcy-based method of characterizing flow systems (near streams) has significant limitations, including reliance on parameters with high uncertainties (e.g., hydraulic conductivity), the common use of drilled wells in the case of streambank investigations, and potentially lengthy measurement times for aquifer characterization and water level measurements. Less logistically demanding tools for quantifying exchanges across streambeds have been developed and include drive-point mini-piezometers, seepage meters, and temperature profiling tools. This project adds to that toolbox by introducing the Streambed Point Velocity Probe (SBPVP), a reusable tool designed to quantify groundwater-surface water interactions (GWSWI) at the interface with high density sampling, which can effectively, rapidly, and accurately complement conventional methods. The SBPVP is a direct push device that measures in situ water velocities at the GWSWI with a small-scale tracer test on the probe surface. Tracer tests do not rely on hydraulic conductivity or gradient information, nor do they require long equilibration times. Laboratory testing indicated that the SBPVP has an average accuracy of ± 3% and an average precision of ± 2%. Preliminary field testing, conducted in the Grindsted Å in Jutland, Denmark, yielded promising agreement between groundwater fluxes determined by conventional methods and those estimated from the SBPVP tests executed at similar scales. These results suggest the SBPVP is a viable tool to quantify groundwater-surface water interactions in high definition in sandy streambeds.

  11. Identifying and quantifying main components of physiological noise in functional near infrared spectroscopy on prefrontal cortex

    Directory of Open Access Journals (Sweden)

    Evgeniya eKirilina

    2013-12-01

    Functional Near-Infrared Spectroscopy (fNIRS) is a promising method to study the functional organization of the prefrontal cortex. However, in order to realize the high potential of fNIRS, effective discrimination between physiological noise originating from forehead skin haemodynamics and cerebral signals is required. The main sources of physiological noise are global and local blood flow regulation processes on multiple time scales. The goal of the present study was to identify the main physiological noise contributions in fNIRS forehead signals and to develop a method for physiological de-noising of fNIRS data. To achieve this goal we combined concurrent time-domain fNIRS and peripheral physiology recordings with wavelet coherence analysis. Depth selectivity was achieved by analyzing moments of photon time-of-flight distributions provided by time-domain fNIRS. Simultaneously, mean arterial blood pressure (MAP), heart rate (HR), and skin blood flow (SBF) on the forehead were recorded. Wavelet coherence analysis was employed to quantify the impact of physiological processes on fNIRS signals separately for different time scales. We identified three main processes contributing to physiological noise in fNIRS signals on the forehead. The first process, with a period of about 3 s, is induced by respiration. The second process is highly correlated with time-lagged MAP and HR fluctuations with a period of about 10 s, often referred to as Mayer waves. The third process is local regulation of the facial skin blood flow, time-locked to the task-evoked fNIRS signals. All processes affect oxygenated haemoglobin concentration more strongly than that of deoxygenated haemoglobin. Based on these results we developed a set of physiological regressors, which were used for physiological de-noising of fNIRS signals. Our results demonstrate that the proposed de-noising method can significantly improve the sensitivity of fNIRS to cerebral signals.
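
    A minimal sketch of regression-based physiological de-noising is given below: nuisance regressors (Mayer-wave, respiration and skin blood flow proxies) and a task regressor are fitted to the fNIRS time series by least squares, and the fitted physiological part is subtracted. All signals are synthetic; the study derived its regressors from the wavelet coherence results rather than from idealised sinusoids.

        # Sketch of regression-based physiological de-noising: fit nuisance regressors
        # (Mayer wave, respiration, skin blood flow proxies) plus the task regressor by
        # least squares and subtract the fitted physiological part. Synthetic signals only.
        import numpy as np

        rng = np.random.default_rng(4)
        t = np.arange(0, 300, 0.1)                                # 300 s at 10 Hz
        task = (np.sin(2 * np.pi * t / 40) > 0).astype(float)     # block design
        mayer = np.sin(2 * np.pi * t / 10)                        # ~0.1 Hz proxy
        resp = np.sin(2 * np.pi * t / 3)                          # ~0.33 Hz proxy
        skin = np.convolve(rng.normal(size=t.size), np.ones(50) / 50, mode="same")

        fnirs = (0.5 * task + 0.8 * mayer + 0.3 * resp + 0.6 * skin
                 + rng.normal(0, 0.2, t.size))

        X = np.column_stack([task, mayer, resp, skin, np.ones_like(t)])
        beta, *_ = np.linalg.lstsq(X, fnirs, rcond=None)
        nuisance = X[:, 1:4] @ beta[1:4]                          # physiological part
        denoised = fnirs - nuisance
        print("task beta estimate:", round(float(beta[0]), 2))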

  12. DEVELOPMENT AND VALIDATION OF AN HPLC-DAD ANALYTICAL METHOD TO QUANTIFY 5-METHOXYFLAVONES IN METHANOLIC EXTRACTS OF Vochysia divergens POHL CULTURED UNDER STRESS CONDITIONS

    Directory of Open Access Journals (Sweden)

    Letícia Pereira Pimenta

    Vochysia divergens Pohl, known as "Cambara" in Brazil, is an invasive species that is expanding throughout the Pantanal in Brazil, forming mono-dominant communities. This expansion is affecting the agricultural areas that support the typical seasonal flood and drought conditions of this biome. This article describes the development and validation of an HPLC-DAD analytical method to quantify 5-methoxyflavones in methanolic extracts of greenhouse-grown V. divergens associated with one of two endophytic fungal species, Zopfiella tetraspora (Zt) or Melanconiella elegans (Me), and later subjected to water stress. The developed method gave good validation parameters and was successfully applied to quantify the flavones 3',5-dimethoxy luteolin-7-O-β-glucopyranoside (1), 5-methoxy luteolin (2), and 3',5-dimethoxy luteolin (3) in the target extracts. Inoculation of the plant with Zt decreased the concentration of flavone 1 in the extract by 2.69-fold as compared to the control. Inoculation of the plant with Zt or Me did not significantly alter the contents of flavones 2 and 3 in the extracts as compared to the control. Therefore, the aerial parts of germinated V. divergens plants inoculated with either Zt or Me responded differently in terms of the production of flavones. These results can shed light on the symbiosis between fungal microorganisms and V. divergens, which most likely influences the response of V. divergens to changes in the availability of water in the Pantanal.

  13. Method of processing low-level radioactive liquid wastes

    International Nuclear Information System (INIS)

    Matsunaga, Ichiro; Sugai, Hiroshi.

    1984-01-01

    Purpose: To effectively reduce the radioactivity density of low-level radioactive liquid wastes discharged from enriched uranium conversion processing steps or the like. Method: Hydrazine is added to low-level radioactive liquid wastes, which are brought into contact with iron hydroxide-cation exchange resins prepared by treating strongly acidic cation exchange resins with ferric chloride and aqueous ammonia to form hydrolyzates of ferric ions in the resin. The hydrazine added here may be any of hydrazine hydrate, hydrazine hydrochloride and hydrazine sulfate. The preferred addition amount is more than 100 mg per liter of the liquid wastes. If it is less than 100 mg, the reduction rate for the radioactivity density (processed liquid density/original liquid density) is decreased. This method makes it possible to effectively reduce the radioactivity density of low-level radioactive liquid wastes containing trace amounts of radioactive nuclides. (Yoshihara, H.)

  14. A method for manufacturing a tool part for an injection molding process, a hot embossing process, a nano-imprint process, or an extrusion process

    DEFF Research Database (Denmark)

    2013-01-01

    The present invention relates to a method for manufacturing a tool part for an injection molding process, a hot embossing process, nano-imprint process or an extrusion process. First, there is provided a master structure (10) with a surface area comprising nanometre-sized protrusions (11...

  15. Current challenges in quantifying preferential flow through the vadose zone

    Science.gov (United States)

    Koestel, John; Larsbo, Mats; Jarvis, Nick

    2017-04-01

    In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  16. Quantifying brain microstructure with diffusion MRI

    DEFF Research Database (Denmark)

    Novikov, Dmitry S.; Jespersen, Sune N.; Kiselev, Valerij G.

    2016-01-01

    the potential to quantify the relevant length scales for neuronal tissue, such as the packing correlation length for neuronal fibers, the degree of neuronal beading, and compartment sizes. The second avenue corresponds to the long-time limit, when the observed signal can be approximated as a sum of multiple non-exchanging anisotropic Gaussian components. Here the challenge lies in parameter estimation and in resolving its hidden degeneracies. The third avenue employs multiple diffusion encoding techniques, able to access information not contained in the conventional diffusion propagator. We conclude with our outlook...... on the future research directions which can open exciting possibilities for developing markers of pathology and development based on methods of studying mesoscopic transport in disordered systems....

  17. Quantifying population genetic differentiation from next-generation sequencing data

    DEFF Research Database (Denmark)

    Fumagalli, Matteo; Garrett Vieira, Filipe Jorge; Korneliussen, Thorfinn Sand

    2013-01-01

    method for quantifying population genetic differentiation from next-generation sequencing data. In addition, we present a strategy to investigate population structure via Principal Components Analysis. Through extensive simulations, we compare the new method herein proposed to approaches based...... on genotype calling and demonstrate a marked improvement in estimation accuracy for a wide range of conditions. We apply the method to a large-scale genomic data set of domesticated and wild silkworms sequenced at low coverage. We find that we can infer the fine-scale genetic structure of the sampled......Over the last few years, new high-throughput DNA sequencing technologies have dramatically increased speed and reduced sequencing costs. However, the use of these sequencing technologies is often challenged by errors and biases associated with the bioinformatical methods used for analyzing the data...

  18. Studies of neutron methods for process control and criticality surveillance of fissile material processing facilities

    International Nuclear Information System (INIS)

    Zoltowski, T.

    1988-01-01

    The development of radiochemical processes for fissile material processing and spent fuel handling calls for new control procedures enabling an improvement of plant throughput. This is strictly related to the implementation of a continuous criticality control policy and to the development of reliable methods for monitoring the reactivity of radiochemical plant operations in the presence of process perturbations. Neutron methods seem to be applicable for fissile material control in some technological facilities. The measurement of epithermal neutron source multiplication with heuristic evaluation of the measured data enables surveillance of anomalous reactivity enhancement leading to unsafe states. 80 refs., 47 figs., 33 tabs. (author)

  19. Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity

    Science.gov (United States)

    Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.

    2018-05-01

    We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.

  20. Method of processing liquid wastes

    International Nuclear Information System (INIS)

    Naba, Katsumi; Oohashi, Takeshi; Kawakatsu, Ryu; Kuribayashi, Kotaro.

    1980-01-01

    Purpose: To process radioactive liquid wastes safely by distilling them while passing gases, treating the distillation fractions appropriately, adding a combustible, liquid synthetic resin material to the distillation residues, polymerizing it to solidify them and then burning them. Method: Radioactive substance-containing liquid wastes are distilled while passing gases, and the distillation fractions containing no substantial radioactive substances are treated by an adequate method. Synthetic resin material, which may be a mixture of polymer and monomer, is added together with a catalyst to the distillation residues containing almost all of the radioactive substances, to polymerize and solidify them. Water or solvent may be allowed to remain to such an extent as not to hinder the solidification. The solidification products are burnt to facilitate the treatment of the radioactive substances. The resin material can be selected suitably, methacrylate syrup (mainly a solution of polymethylmethacrylate and methylmethacrylate) being preferred. (Seki, T.)

  1. Quantifying the morphodynamics of river restoration schemes using Unmanned Aerial Vehicles (UAVs)

    Science.gov (United States)

    Williams, Richard; Byrne, Patrick; Gilles, Eric; Hart, John; Hoey, Trevor; Maniatis, George; Moir, Hamish; Reid, Helen; Ves, Nikolas

    2017-04-01

    River restoration schemes are particularly sensitive to morphological adjustment during the first set of high-flow events that they are subjected to. Quantifying elevation change associated with morphological adjustment can contribute to improved adaptive decision making to ensure river restoration scheme objectives are achieved. To date, the relatively high cost, technical demands and challenging logistics associated with acquiring repeat, high-resolution topographic surveys have been a significant barrier to monitoring the three-dimensional morphodynamics of river restoration schemes. The availability of low-cost, consumer-grade Unmanned Aerial Vehicles that are capable of acquiring imagery for processing using Structure-from-Motion Multi-View Stereo (SfM MVS) photogrammetry has the potential to transform surveys of the morphodynamics of river restoration schemes. Application guidance does, however, need to be developed to fully exploit the advances of UAV technology and SfM MVS processing techniques. In particular, there is a need to quantify the effect of the number and spatial distribution of ground targets on vertical error. This is particularly significant because vertical errors propagate when mapping morphological change, and thus determine the evidence that is available for decision making. This presentation presents results from a study that investigated how the number and spatial distribution of targets influenced vertical error, and then used the findings to determine survey protocols for a monitoring campaign that has quantified morphological change across a number of restoration schemes. At the Swindale river restoration scheme, Cumbria, England, 31 targets were distributed across a 700 m long reach and the centre of each target was surveyed using RTK-GPS. Using the targets as Ground Control Points (GCPs) or checkpoints, they were divided into three different spatial patterns (centre, edge and random) and used for processing images acquired
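
    A sketch of the error bookkeeping implied here is shown below: checkpoint residuals under a given GCP configuration are summarised as a vertical RMSE, and two survey errors are propagated into a minimum level of detection for DEM differencing. The t*sqrt(rmse1^2 + rmse2^2) convention and all residual values are assumptions, not results from the Swindale surveys.

        # Sketch of turning checkpoint residuals (surveyed RTK-GPS height minus SfM DEM
        # height) into a vertical RMSE per GCP configuration, and propagating two survey
        # errors into a minimum level of detection (LoD) for DEM differencing. The
        # convention and the residual values are assumptions, not the study's results.
        import numpy as np

        def vertical_rmse(residuals_m):
            r = np.asarray(residuals_m, float)
            return float(np.sqrt(np.mean(r ** 2)))

        checkpoint_residuals = {                      # metres, hypothetical
            "centre": [0.04, -0.06, 0.09, -0.03, 0.07],
            "edge":   [0.03, -0.04, 0.05, -0.02, 0.04],
            "random": [0.02, -0.03, 0.04, -0.02, 0.03],
        }
        rmse = {k: vertical_rmse(v) for k, v in checkpoint_residuals.items()}
        print({k: round(v, 3) for k, v in rmse.items()})

        # Propagate the errors of two surveys into a change-detection threshold
        t = 1.96                                      # ~95% confidence
        lod = t * np.sqrt(rmse["random"] ** 2 + rmse["random"] ** 2)
        print(f"minimum level of detection: {lod:.3f} m")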

  2. A seismic processing approach dedicated to quantitative characterization of landfill heterogeneities

    NARCIS (Netherlands)

    Konstantaki, L.A.; Ghose, R.; Draganov, D.S.; Diaferia, G.; Heimovaara, T.J.

    2014-01-01

    The ability to image and quantify the heterogeneity in municipal landfills is crucial for improving the landfill treatment methods, for predicting the behaviour of processes that take place inside the landfills and hence, for estimating the after-care period. Our aim is to image the flow paths

  3. First-order Convex Optimization Methods for Signal and Image Processing

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm

    2012-01-01

    In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we make a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques, which can...... be used with first-order methods such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations with an emphasis on inverse problems and sparse signal processing. We also describe the multiple...
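
    As a concrete example of the class of first-order techniques discussed, the sketch below implements the proximal gradient (ISTA) iteration for l1-regularised least squares; it is a generic textbook example rather than code taken from the thesis.

        # Minimal proximal-gradient (ISTA) sketch for l1-regularised least squares,
        # min_x 0.5*||Ax - b||^2 + lam*||x||_1, a standard sparse-recovery problem in
        # signal processing; generic example, not code from the thesis.
        import numpy as np

        def soft_threshold(v, thresh):
            return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

        def ista(A, b, lam, n_iter=500):
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)
                x = soft_threshold(x - grad / L, lam / L)   # proximal step for the l1 term
            return x

        rng = np.random.default_rng(5)
        A = rng.normal(size=(80, 200))
        x_true = np.zeros(200)
        x_true[:5] = rng.normal(size=5)
        b = A @ x_true + 0.01 * rng.normal(size=80)
        x_hat = ista(A, b, lam=0.1)
        print("nonzeros recovered:", int(np.count_nonzero(np.abs(x_hat) > 1e-3)))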

  4. Comparison between dot-immunoblotting assay and clinical sign determination method for quantifying avian infectious bronchitis virus vaccine by titration in embryonated eggs.

    Science.gov (United States)

    Yuk, Seong-Su; Kwon, Jung-Hoon; Noh, Jin-Yong; Hong, Woo-Tack; Gwon, Gyeong-Bin; Jeong, Jei-Hyun; Jeong, Sol; Youn, Ha-Na; Heo, Yong-Hwan; Lee, Joong-Bok; Park, Seung-Yong; Choi, In-Soo; Song, Chang-Seon

    2016-04-01

    A sensitive and specific method for measuring the vaccine titer of infectious bronchitis virus (IBV) is important to commercial manufacturers for improving vaccine quality. Typically, IBV is titrated in embryonated chicken eggs, and the infectivity of the virus dilutions is determined by assessing clinical signs in the embryos as evidence of viral propagation. In this study, we used a dot-immunoblotting assay (DIA) to measure the titers of IBV vaccines that originated from different pathogenic strains or attenuation methods in embryonated eggs, and we compared this assay to the currently used method, clinical sign evaluation. To compare the two methods, we used real-time reverse transcription-PCR, which had the lowest limit of detection for propagated IBV. As a clinical sign of infection, dwarfism of the embryo was quantified using the embryo: egg (EE) index. The DIA showed 9.41% higher sensitivity and 15.5% higher specificity than the clinical sign determination method. The DIA was particularly useful for measuring the titer of IBV vaccine that did not cause apparent stunting but propagated in embryonated chicken eggs such as a heat-adapted vaccine strain. The results of this study indicate that the DIA is a rapid, sensitive, reliable method for determining IBV vaccine titer in embryonated eggs at a relatively low cost. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Digital processing method for monitoring the radioactivity of stack releases

    International Nuclear Information System (INIS)

    Vialettes, H.; Leblanc, P.; Perotin, J.P.; Lazou, J.P.

    1978-01-01

    The digital processing method proposed is adapted for data supplied by a fixed-filter detector normally used for analogue processing (integrator system). On the basis of the raw data (pulses) from the detector, the technique makes it possible to determine the rate of activity released whereas analogue processing gives only the released activity. Furthermore, the method can be used to develop alarm systems on the basis of a possible exposure rate at the point of fall-out, and by including in the program a coefficient which allows for atmospheric diffusion conditions at any given time one can improve the accuracy of the results. In order to test the digital processing method and demonstrate its advantages over analogue processing, various atmospheric contamination situations were simulated in a glove-box and analysed simultaneously, using both systems, from the pulses transmitted by the same sampling and fixed-filter detection unit. The experimental results confirm the advantages foreseen in the theoretical research. (author)

  6. A New Enzyme-linked Sorbent Assay (ELSA) to Quantify Syncytiotrophoblast Extracellular Vesicles in Biological Fluids

    NARCIS (Netherlands)

    Goehner, Claudia; Weber, Maja; Tannetta, Dionne S.; Groten, Tanja; Ploesch, Torsten; Faas, Marijke M.; Scherjon, Sicco A.; Schleussner, Ekkehard; Markert, Udo R.; Fitzgerald, Justine S.

    Problem: The pregnancy-associated disease preeclampsia is related to the release of syncytiotrophoblast extracellular vesicles (STBEV) by the placenta. To improve functional research on STBEV, reliable and specific methods are needed to quantify them. However, only a few quantification methods are

  7. Gear hot forging process robust design based on finite element method

    International Nuclear Information System (INIS)

    Xuewen, Chen; Won, Jung Dong

    2008-01-01

    During the hot forging process, the shaping properties and forging quality will fluctuate because of die wear, manufacturing tolerances, dimensional variation caused by temperature and differing friction conditions, etc. In order to control this variation in performance and to optimize the process parameters, a robust design method based on the finite element method is proposed in this paper for the hot forging process. During the robust design process, the Taguchi method provides the basic robust design theory, and finite element analysis is incorporated in order to simulate the hot forging process. In addition, in order to calculate the objective function value, an orthogonal design method is selected to arrange experiments and collect sample points. The ANOVA method is employed to analyze the relationships between the design parameters and design objectives and to find the best parameters. Finally, a case study for the gear hot forging process is conducted. With the objective of reducing the forging force and its variation, the robust design mathematical model is established. The optimal design parameters obtained from this study indicate that the forging force has been reduced and its variation has been controlled
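
    A sketch of the Taguchi-style bookkeeping described is given below: a smaller-the-better signal-to-noise ratio is computed for each orthogonal-array run (where the response would come from the finite element forging simulations), and factor-level means are compared. Factors, levels and response values are hypothetical.

        # Sketch of a Taguchi-style analysis: smaller-the-better signal-to-noise ratio
        # per orthogonal-array run (forging force as the response), then factor-level
        # means. Factor names, levels and responses are hypothetical.
        import numpy as np

        # L4 orthogonal array: two factors (e.g. billet temperature, friction factor),
        # two levels each; each run repeated under noise conditions.
        runs = [
            {"temp": 1, "friction": 1, "force": [510, 530]},
            {"temp": 1, "friction": 2, "force": [560, 575]},
            {"temp": 2, "friction": 1, "force": [470, 485]},
            {"temp": 2, "friction": 2, "force": [520, 540]},
        ]

        def sn_smaller_is_better(y):
            y = np.asarray(y, float)
            return -10.0 * np.log10(np.mean(y ** 2))

        for run in runs:
            run["sn"] = sn_smaller_is_better(run["force"])

        for factor in ("temp", "friction"):
            for level in (1, 2):
                sn = np.mean([r["sn"] for r in runs if r[factor] == level])
                print(f"{factor} level {level}: mean S/N = {sn:.2f} dB")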

  8. Terrestrial laser scanning to quantify above-ground biomass of structurally complex coastal wetland vegetation

    Science.gov (United States)

    Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.

    2018-05-01

    Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and quantifying saltmarsh biomass in quadrats. However, broad-scale application of these methods may not capture structural variability in vegetation, resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high-resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare two volumetric modelling techniques, 3-D surface reconstruction and rasterised volume, and a point cloud elevation histogram modelling technique to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r2 = 0.95) and saltmarsh (r2 > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.
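
    The general idea behind an elevation-histogram model can be sketched as follows: bin the heights of all returns into a normalised histogram per plot and regress measured biomass on the bin fractions. The data below are synthetic and the regression is deliberately simple; the published model will differ in detail.

        # Sketch of an elevation-histogram biomass model: per plot, bin point-cloud
        # heights into a normalised histogram and regress measured biomass on the bin
        # fractions. Synthetic data; not the published model.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(6)
        bins = np.linspace(0, 3, 13)                      # 0-3 m canopy height bins

        def plot_features(mean_height, n_points=5000):
            z = np.clip(rng.normal(mean_height, 0.4, n_points), 0, 3)
            hist, _ = np.histogram(z, bins=bins)
            return hist / n_points                        # fraction of points per bin

        heights = rng.uniform(0.3, 2.5, 40)               # 40 hypothetical plots
        X = np.vstack([plot_features(h) for h in heights])
        biomass = 2.0 * heights + rng.normal(0, 0.2, 40)  # synthetic "harvested" biomass

        model = LinearRegression().fit(X, biomass)
        print(f"R^2 on training plots: {model.score(X, biomass):.2f}")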

  9. Nonaqueous processing methods

    International Nuclear Information System (INIS)

    Coops, M.S.; Bowersox, D.F.

    1984-09-01

    A high-temperature process utilizing molten salt extraction from molten metal alloys has been developed for purification of spent power reactor fuels. Experiments with laboratory-scale processing operations show that purification and throughput parameters comparable to the Barnwell Purex process can be achieved by pyrochemical processing in equipment one-tenth the size, with all wastes being discharged as stable metal alloys at greatly reduced volume and disposal cost. This basic technology can be developed for large-scale processing of spent reactor fuels. 13 references, 4 figures

  10. Recent developments in analytical detection methods for radiation processed foods

    International Nuclear Information System (INIS)

    Wu Jilan

    1993-01-01

    A short summary of the programmes of 'ADMIT' (FAO/IAEA) and the developments in analytical detection methods for radiation processed foods has been given. It is suggested that for promoting the commercialization of radiation processed foods and controlling its quality, one must pay more attention to the study of analytical detection methods of irradiated food

  11. A method to quantify mechanobiologic forces during zebrafish cardiac development using 4-D light sheet imaging and computational modeling.

    Directory of Open Access Journals (Sweden)

    Vijay Vedula

    2017-10-01

    Blood flow and mechanical forces in the ventricle are implicated in cardiac development and trabeculation. However, the mechanisms of mechanotransduction remain elusive. This is due in part to the challenges associated with accurately quantifying mechanical forces in the developing heart. We present a novel computational framework to simulate cardiac hemodynamics in developing zebrafish embryos by coupling 4-D light sheet imaging with a stabilized finite element flow solver, and extract time-dependent mechanical stimuli data. We employ deformable image registration methods to segment the motion of the ventricle from high resolution 4-D light sheet image data. This results in a robust and efficient workflow, as segmentation need only be performed at one cardiac phase, while wall position in the other cardiac phases is found by image registration. Ventricular hemodynamics are then quantified by numerically solving the Navier-Stokes equations in the moving wall domain with our validated flow solver. We demonstrate the applicability of the workflow in wild type zebrafish and three treated fish types that disrupt trabeculation: (a) chemical treatment using AG1478, an ErbB2 signaling inhibitor that inhibits proliferation and differentiation of cardiac trabeculation; (b) injection of gata1a morpholino oligomer (gata1aMO) suppressing hematopoiesis and resulting in attenuated trabeculation; (c) weak-atrium m58 mutant (wea) with inhibited atrial contraction leading to a highly undeveloped ventricle and poor cardiac function. Our simulations reveal elevated wall shear stress (WSS) in wild type and AG1478 compared to gata1aMO and wea. High oscillatory shear index (OSI) in the grooves between trabeculae, compared to lower values on the ridges, in the wild type suggests oscillatory forces as a possible regulatory mechanism of cardiac trabeculation development. The framework has broad applicability for future cardiac developmental studies focused on quantitatively
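
    The wall shear stress metrics mentioned have standard definitions; the sketch below applies the time-averaged WSS and the oscillatory shear index, OSI = 0.5*(1 - |∫τ dt| / ∫|τ| dt), to a synthetic WSS time series at a single wall point. In the study these quantities were computed from the finite element flow solution on the moving ventricle wall.

        # Standard definitions of time-averaged wall shear stress (TAWSS) and the
        # oscillatory shear index applied to a synthetic WSS vector time series at one
        # wall point; the study computed these from the finite element flow solution.
        import numpy as np

        t = np.linspace(0, 1, 200)                       # one cardiac cycle (normalised)
        # Synthetic WSS vector (2 components) that oscillates in direction over the cycle
        wss = np.column_stack([np.sin(2 * np.pi * t), 0.3 * np.sin(4 * np.pi * t)])

        mean_vec = np.trapz(wss, t, axis=0)              # integral of the WSS vector
        mean_mag = np.trapz(np.linalg.norm(wss, axis=1), t)  # integral of its magnitude

        tawss = mean_mag                                 # per unit cycle length (T = 1)
        osi = 0.5 * (1.0 - np.linalg.norm(mean_vec) / mean_mag)
        print(f"TAWSS = {tawss:.3f}, OSI = {osi:.3f}")   # OSI near 0.5 = purely oscillatory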

  12. Air/surface exchange processes of mercury and their linkage to atmospheric pools

    International Nuclear Information System (INIS)

    Bahlmann, Enno; Ebinghaus, Ralf

    2001-01-01

    The atmospheric mercury cycle is strongly linked to the terrestrial, aquatic and biologic cycle of mercury via air/surface exchange processes. In order to quantify mercury fluxes from and to the atmosphere to predict local and regional source contributions the methods for flux measurements as well as the physicochemical factors controlling air/surface exchange processes must be assessed. We will describe methods for the determination of mercury and mercury species in ambient air which are basic for investigation of air/surface exchange processes. Further on we will describe approaches for studying the physicochemical factors controlling this processes by using a new laboratory flux measurement system. (author)

  13. A copula-based sampling method for data-driven prognostics

    International Nuclear Information System (INIS)

    Xi, Zhimin; Jing, Rong; Wang, Pingfeng; Hu, Chao

    2014-01-01

    This paper develops a Copula-based sampling method for data-driven prognostics. The method essentially consists of an offline training process and an online prediction process: (i) the offline training process builds a statistical relationship between the failure time and the time realizations at specified degradation levels on the basis of off-line training data sets; and (ii) the online prediction process identifies probable failure times for online testing units based on the statistical model constructed in the offline process and the online testing data. Our contributions in this paper are three-fold, namely the definition of a generic health index system to quantify the health degradation of an engineering system, the construction of a Copula-based statistical model to learn the statistical relationship between the failure time and the time realizations at specified degradation levels, and the development of a simulation-based approach for the prediction of remaining useful life (RUL). Two engineering case studies, namely the electric cooling fan health prognostics and the 2008 IEEE PHM challenge problem, are employed to demonstrate the effectiveness of the proposed methodology. - Highlights: • We develop a novel mechanism for data-driven prognostics. • A generic health index system quantifies health degradation of engineering systems. • Off-line training model is constructed based on the Bayesian Copula model. • Remaining useful life is predicted from a simulation-based approach
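
    A minimal sketch of Gaussian-copula sampling is shown below: correlated normals are mapped to uniforms with the normal CDF and then through chosen marginal inverse CDFs. The marginals, correlation and interpretation of the two variables are illustrative assumptions; the paper's model linking failure time to times at specified degradation levels is richer.

        # Minimal Gaussian-copula sampling sketch: draw correlated normals, map them to
        # uniforms with the normal CDF, then through chosen marginal inverse CDFs.
        # Marginals and correlation are hypothetical, not the paper's fitted model.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        corr = np.array([[1.0, 0.8],
                         [0.8, 1.0]])              # dependence between the two variables

        z = rng.multivariate_normal(mean=[0, 0], cov=corr, size=10_000)
        u = stats.norm.cdf(z)                      # copula samples: correlated uniforms

        # Marginals: time to reach a degradation level ~ lognormal, failure time ~ Weibull
        t_level = stats.lognorm(s=0.3, scale=100).ppf(u[:, 0])
        t_fail = stats.weibull_min(c=2.0, scale=150).ppf(u[:, 1])

        print("sample correlation:", round(float(np.corrcoef(t_level, t_fail)[0, 1]), 2))
        print("median sampled failure time:", round(float(np.median(t_fail)), 1))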

  14. Adoption of the Creative Process According to the Immersive Method

    Directory of Open Access Journals (Sweden)

    Sonja Vuk

    2015-09-01

    The immersive method is a new concept of visual education that is better suited to the needs of students in contemporary post-industrial society. The features of the immersive method are: (1) it emerges from interaction with visual culture; (2) it encourages understanding of contemporary art (as an integral part of visual culture); and (3) it implements the strategies and processes of the dominant tendencies in contemporary art (new media art and relational art) with the goal of adopting the creative process, expressing one’s thoughts and emotions, and communicating with the environment. The immersive method transfers the creative process from art to the process of creation by the students themselves. This occurs with the mediation of an algorithmic scheme that enables students to adopt ways to solve problems, to express thoughts and emotions, to develop ideas and to transfer these ideas to form, medium and material. The immersive method uses transfer in classes, the therapeutic aspect of art and “flow state” (the optimal experience of being immersed in an activity)/aesthetic experience (a total experience that has a beginning, a process and a conclusion)/immersive experience (comprehensive immersion in the present moment). This is a state leading to the sublimative effect of creation (identification with what has been expressed), as well as to self-actualisation. The immersive method teaches one to connect the context, social relations and the artwork as a whole in which one lives as an individual. The adopted creative process is implemented in a critical manner on one’s surroundings through analysis, aesthetic interventions, and ecologically and socially aware inclusion in the life of a community. The students gain the crucial meta-competence of a creative thinking process.

  15. Research of Monte Carlo method used in simulation of different maintenance processes

    International Nuclear Information System (INIS)

    Zhao Siqiao; Liu Jingquan

    2011-01-01

    The paper introduces two kinds of Monte Carlo methods used in equipment life process simulation under the minimal-maintenance condition: the method of generating lifetime intervals and the method of time scale conversion. The paper also analyzes the characteristics and scope of application of the two methods. Using the concept of a service age reduction factor, a model of the equipment's life process under the incomplete-maintenance condition is established, and a life process simulation method applicable to this situation is developed. (authors)
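
    The service-age-reduction idea mentioned above can be illustrated with a small Monte Carlo sketch based on a Kijima-type virtual-age model under Weibull lifetimes; this is a generic textbook construction, not the authors' exact simulation, and all parameters (shape, scale, reduction factor, horizon) are hypothetical.

    import numpy as np

    rng = np.random.default_rng(42)

    # Weibull failure model (hypothetical parameters) and a service age
    # reduction factor q: each repair rewinds the accumulated virtual age
    # to q * (previous virtual age + time since last repair), 0 <= q <= 1.
    # q = 1 reproduces minimal repair ("as bad as old"), q = 0 perfect repair.
    BETA, ETA = 2.5, 1000.0     # Weibull shape / scale, in hours
    Q = 0.6                     # service age reduction factor
    HORIZON = 5000.0            # simulated operating horizon per history
    N_RUNS = 5000

    def draw_time_to_failure(v, rng):
        """Sample the next failure time given current virtual age v,
        by inverting the conditional Weibull survival function."""
        u = rng.uniform()
        # P(T > v + x | T > v) = exp(-((v+x)/eta)^beta + (v/eta)^beta) = u
        return ETA * ((v / ETA) ** BETA - np.log(u)) ** (1.0 / BETA) - v

    failures_per_run = np.empty(N_RUNS)
    for i in range(N_RUNS):
        t, v, n_fail = 0.0, 0.0, 0
        while True:
            x = draw_time_to_failure(v, rng)
            if t + x > HORIZON:
                break
            t += x
            n_fail += 1
            v = Q * (v + x)          # imperfect repair: rewind virtual age
        failures_per_run[i] = n_fail

    print(f"expected failures over {HORIZON:.0f} h (q={Q}): "
          f"{failures_per_run.mean():.2f} "
          f"+/- {failures_per_run.std(ddof=1)/np.sqrt(N_RUNS):.2f}")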

  16. [Influence of different processing methods on Angelica sinensis polysaccharides from same origin].

    Science.gov (United States)

    Lv, Jieli; Chen, Hongli; Duan, Jinao; Yan, Hui; Tang, Yuping; Song, Bingsheng

    2011-04-01

    To study the influence of different processing methods on the content of Angelica sinensis polysaccharides (APS) from the same origin. The contents of neutral polysaccharides and acidic polysaccharides in various samples of A. sinensis were determined by the phenol-sulfuric acid and carbazole-sulfuric acid methods, respectively. The proliferation ability of lymphocytes was detected by the MTT method after the cells were cultured with different concentrations of APS from two samples processed by different methods. The different processing methods had different effects on the contents of polysaccharide. The maximum content of APS (26.03%) was found in the sample processed by microwave drying at medium power, but the minimum content of APS (2.25%) was found in the sample processed by vacuum drying at 50 °C. Furthermore, the APS (high concentration group, P … Different processing methods have different effects on the contents of APS and the proliferation ability of lymphocytes.

  17. Method of electrolytic processing for radioactive liquid waste

    International Nuclear Information System (INIS)

    Otsuka, Katsuyuki; Takahashi, Yoshiharu; Tamai, Hideaki.

    1989-01-01

    Radioactive liquid wastes containing sodium compounds are electrolyzed using mercury as a cathode. As a result, they are separated into a sodium-containing metal amalgam and residues. Metals containing sodium are separated from the amalgam, purified and re-utilized, while the mercury is recycled to the electrolysis vessel. The foregoing method provides advantageous effects such as: (1) the volume of the wastes to be processed can be reduced; (2) since processing can be carried out at a relatively low temperature, low-boiling elements can be handled without evaporation; (3) useful elements can be recovered; and (4) methods other than glass solidification can easily be employed, and a remarkable volume reduction of the solidification products can be expected. (K.M.)

  18. Processing Methods of Alkaline Hydrolysate from Rice Husk

    Directory of Open Access Journals (Sweden)

    Olga D. Arefieva

    2017-07-01

    Full Text Available This paper is devoted to finding processing methods for the alkaline hydrolysate produced by rice husk pre-extraction, and discusses alkaline hydrolysate processing schemes and the separation of several products: amorphous silica of various quality, alkaline lignin, and polysaccharides from water and alkaline extraction. Silica samples were characterized as: crude (air-dried), burnt (no preliminary water treatment), washed in distilled water, and washed in distilled water and burnt. Waste water parameters after the extraction of solids from the alkaline hydrolysate dropped by factors of a few dozen to a few thousand, depending on the applied processing method. Color decreased a few thousand times, turbidity was virtually eliminated, chemical oxygen demand decreased about 20–136 times, and polyphenol content might decrease by 50% or be virtually eliminated. The most promising scheme yielded the two following solid products from rice husk alkaline hydrolysate: amorphous silica and an alkaline-extraction polysaccharide. The chemical oxygen demand of the remaining waste water decreased about 140 times compared to the silica-free solution.

  19. Integrated process development-a robust, rapid method for inclusion body harvesting and processing at the microscale level.

    Science.gov (United States)

    Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid

    2017-10-21

    Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step to optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected, due to laboratory-scale methods requiring large amounts of materials and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of up- and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing regarding impurity depletion and product loss. Furthermore, with this method, we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.

  20. Quantifying Evaporation in a Permeable Pavement System

    Science.gov (United States)

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  1. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are reviewed, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. Through experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation, although they are very time-consuming and thus have lower time efficiency. As for the general partial volume interpolation methods, judging from the total error of image self-registration, the symmetrical interpolations provide a certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.
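
    As a rough illustration of the kind of comparison described above (interpolation quality versus running time), the sketch below upsamples a synthetic image with nearest-neighbour, linear and cubic B-spline interpolation via SciPy and reports PSNR and timing; it does not reproduce the paper's test images, registration experiments or full metric set.

    import time
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)

    # Synthetic "medical" image: smooth blobs plus mild noise.
    y, x = np.mgrid[0:256, 0:256]
    image = (np.exp(-((x - 96) ** 2 + (y - 128) ** 2) / 800.0)
             + 0.6 * np.exp(-((x - 180) ** 2 + (y - 80) ** 2) / 300.0))
    image += 0.01 * rng.standard_normal(image.shape)

    def psnr(reference, test):
        mse = np.mean((reference - test) ** 2)
        return 10.0 * np.log10(reference.max() ** 2 / mse)

    # Downsample by 2, then interpolate back up with different spline orders:
    # order 0 = nearest neighbour, 1 = linear, 3 = cubic B-spline.
    small = image[::2, ::2]
    for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic B-spline")]:
        t0 = time.perf_counter()
        up = ndimage.zoom(small, 2.0, order=order)
        dt = time.perf_counter() - t0
        up = up[: image.shape[0], : image.shape[1]]   # guard against off-by-one sizes
        print(f"{name:>15s}: PSNR = {psnr(image, up):5.1f} dB, time = {dt*1e3:.1f} ms")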

  2. Radiometric method for the characterization of particulate processes in colloidal suspensions. II. Experimental verification of the method

    Energy Technology Data Exchange (ETDEWEB)

    Subotic, B. [Institut Rudjer Boskovic, Zagreb (Yugoslavia)

    1979-09-15

    A radiometric method for the characterization of particulate processes is verified using stable hydrosols of silver iodide. Silver iodide hydrosols satisfy the conditions required for application of the proposed method. Comparison shows that the values of the change in particle size measured in silver iodide hydrosols by the proposed method are in excellent agreement with the values obtained by other methods on the same systems (electron microscopy, sedimentation analysis, light scattering). This shows that the proposed method is suitable for the characterization of particulate processes in colloidal suspensions. (Auth.).

  3. Method of Quantifying Size of Retinal Hemorrhages in Eyes with Branch Retinal Vein Occlusion Using 14-Square Grid: Interrater and Intrarater Reliability

    Directory of Open Access Journals (Sweden)

    Yuko Takashima

    2016-01-01

    Full Text Available Purpose. To describe a method of quantifying the size of the retinal hemorrhages in branch retinal vein occlusion (BRVO) and to determine the interrater and intrarater reliabilities of these measurements. Methods. Thirty-five fundus photographs from 35 consecutive eyes with BRVO were studied. The fundus images were analyzed with PowerPoint® software, and a grid of 14 squares was laid over the fundus image. Raters were asked to judge the percentage of each of the 14 squares that was covered by the hemorrhages, and the average of the 14 squares was taken to be the relative size of the retinal hemorrhage. Results. Interrater reliability between three raters was higher when a grid with 14 squares was used (intraclass correlation coefficient (ICC), 0.96) than when a box with no grid was used (ICC, 0.78). Intrarater reliability, calculated from the retinal hemorrhage area measured on two different days, was also higher (ICC, 0.97) than with no grid (ICC, 0.86). Interrater reliability for five fundus pictures with poor image quality was also good when a grid with 14 squares was used (ICC, 0.88). Conclusions. Although our method is subjective, the excellent interrater and intrarater reliabilities indicate that this method can be adapted for clinical use.
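
    A small sketch of the grid-averaging step and of a standard interrater statistic is given below: the eye-level hemorrhage size is the mean of the 14 per-square percentages, and agreement between raters is summarized with a two-way random-effects, single-rater ICC (Shrout–Fleiss ICC(2,1)). The ratings are synthetic, and the exact ICC variant and software used in the study are not specified in the record.

    import numpy as np

    # Each rater judges, for every one of the 14 grid squares, the percentage
    # of the square covered by hemorrhage; the eye-level score is the mean of
    # the 14 squares.  Synthetic ratings for 10 eyes x 3 raters (hypothetical).
    rng = np.random.default_rng(7)
    true_cover = rng.uniform(0, 60, size=(10, 14))                 # "true" per-square %
    ratings = np.stack([true_cover + rng.normal(0, 4, true_cover.shape)
                        for _ in range(3)], axis=-1).clip(0, 100)  # eyes x squares x raters

    eye_scores = ratings.mean(axis=1)        # eyes x raters: mean of the 14 squares

    def icc_2_1(scores):
        """Two-way random effects, absolute agreement, single rater: ICC(2,1)."""
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)      # per subject (eye)
        col_means = scores.mean(axis=0)      # per rater
        ss_rows = k * np.sum((row_means - grand) ** 2)
        ss_cols = n * np.sum((col_means - grand) ** 2)
        ss_total = np.sum((scores - grand) ** 2)
        ss_err = ss_total - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                     + k * (ms_cols - ms_err) / n)

    print("per-eye hemorrhage size (%), averaged over raters:",
          eye_scores.mean(axis=1).round(1))
    print(f"interrater ICC(2,1) = {icc_2_1(eye_scores):.2f}")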

  4. SKOCh modified parameters and data processing method

    International Nuclear Information System (INIS)

    Abramov, V.V.; Baldin, B.Yu.; Vasil'chenko, V.G.

    1986-01-01

    Characteristics of a modified variant of the Cherenkov radiation ring spectrometer (SKOCH) are presented, and methods of experimental data processing are described. Different SKOCH optics variants are investigated. Multi-particle registering electronics for data read-out from SKOCH, which improves the conditions for registering multiparticle events, was applied in the course of measurements using proton beams. A system of SKOCH spectrometer data processing programs has been developed and experimentally tested, and an effective algorithm for calibrating Cherenkov radiation ring spectrometers with quite a large angular and radial aperture has been developed. The on-line and off-line processing program complex provides complete control of SKOCH operation during statistics collection and particle (π, K, P) identification within the 5.5–30 GeV/c range.

  5. Development of rupture process analysis method for great earthquakes using Direct Solution Method

    Science.gov (United States)

    Yoshimoto, M.; Yamanaka, Y.; Takeuchi, N.

    2010-12-01

    Conventional rupture process analysis methods using teleseismic body waves were based on ray theory. Therefore, these methods have the following problems when applied to great earthquakes such as the 2004 Sumatra earthquake: (1) difficulty in computing all later phases such as the PP reflection phase; (2) impossibility of computing the so-called “W phase”, the long-period phase arriving before the S wave; and (3) implausibility of the hypothesis that the distance from the observation points to the hypocenter is large compared to the fault length. To solve the above problems, we have developed a new method which uses synthetic seismograms computed by the Direct Solution Method (DSM, e.g. Kawai et al. 2006) as Green’s functions. We used the DSM software (http://www.eri.u-tokyo.ac.jp/takeuchi/software/) to compute the Green’s functions up to 1 Hz for the IASP91 (Kennett and Engdahl, 1991) model, and determined the final slip distributions using the waveform inversion method (Kikuchi et al. 2003). First we confirmed that the Green’s functions computed by DSM were accurate at frequencies up to 1 Hz. Next we applied the new method to the rupture process analysis of the Mw 8.0 (GCMT) Solomon Islands earthquake of April 1, 2007. We found that this earthquake consisted of two asperities and that the rupture propagated across the subducting Simbo ridge. The obtained slip distribution correlates better with the aftershock distribution than that of the existing method. Furthermore, the new method keeps the same accuracy as the existing method with respect to the direct P wave and reflection phases near the source, and also accurately calculates later phases such as the PP wave.

  6. Quantifying Multiscale Habitat Structural Complexity: A Cost-Effective Framework for Underwater 3D Modelling

    Directory of Open Access Journals (Sweden)

    Renata Ferrari

    2016-02-01

    Full Text Available Coral reef habitat structural complexity influences key ecological processes, ecosystem biodiversity, and resilience. Measuring structural complexity underwater is not trivial, and researchers have been searching for accurate and cost-effective methods that can be applied across spatial extents for over 50 years. This study integrated a set of existing multi-view image-processing algorithms to accurately compute metrics of structural complexity (e.g., the ratio of surface to planar area) underwater solely from images. This framework resulted in accurate, high-speed 3D habitat reconstructions at scales ranging from small corals to reef-scapes (10s of km2). Structural complexity was accurately quantified from both contemporary and historical image datasets across three spatial scales: (i) branching coral colony (Acropora spp.); (ii) reef area (400 m2); and (iii) reef transect (2 km). At small scales, our method delivered models with <1 mm error over 90% of the surface area, while the accuracy at transect scale was 85.3% ± 6% (CI). Its advantages are: no a priori requirement for image size or resolution, no invasive techniques, cost-effectiveness, and utilization of existing imagery taken with off-the-shelf cameras (either monocular or stereo). This remote sensing method can be integrated into reef monitoring and improve our knowledge of key aspects of coral reef dynamics, from reef accretion to habitat provisioning and productivity, by measuring and up-scaling estimates of structural complexity.
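
    One of the metrics named above, the ratio of surface to planar area (rugosity), can be sketched numerically from a gridded elevation model; the snippet below uses a synthetic surface and a simple per-cell triangulation rather than the study's multi-view photogrammetric reconstructions, so the grid spacing and relief are hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic 1 cm digital surface model of a small reef patch (hypothetical):
    # heights in metres on a regular grid with spacing dx = dy = 0.01 m.
    dx = dy = 0.01
    z = 0.05 * rng.standard_normal((200, 200))
    z = np.cumsum(np.cumsum(z, axis=0), axis=1) * 0.01   # smooth, correlated relief

    def surface_to_planar_ratio(z, dx, dy):
        """Split every grid cell into two triangles and sum their 3D areas."""
        z00, z10 = z[:-1, :-1], z[1:, :-1]     # corner heights of each cell
        z01, z11 = z[:-1, 1:], z[1:, 1:]

        def tri_area(pa, pb, pc):
            ab, ac = pb - pa, pc - pa
            return 0.5 * np.linalg.norm(np.cross(ab, ac), axis=-1)

        def corners(zc, ix, iy):
            # 3D corner points in cell-local coordinates (enough for areas)
            return np.stack([np.full_like(zc, ix * dx),
                             np.full_like(zc, iy * dy), zc], axis=-1)

        p00 = corners(z00, 0, 0); p10 = corners(z10, 1, 0)
        p01 = corners(z01, 0, 1); p11 = corners(z11, 1, 1)
        area3d = tri_area(p00, p10, p11).sum() + tri_area(p00, p11, p01).sum()
        planar = dx * dy * z00.size
        return area3d / planar

    print(f"surface-to-planar area ratio (rugosity) = "
          f"{surface_to_planar_ratio(z, dx, dy):.2f}")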

  7. 87Sr/86Sr as a quantitative geochemical proxy for 14C reservoir age in dynamic, brackish waters: assessing applicability and quantifying uncertainties.

    Science.gov (United States)

    Lougheed, Bryan; van der Lubbe, Jeroen; Davies, Gareth

    2016-04-01

    Accurate geochronologies are crucial for reconstructing the sensitivity of brackish and estuarine environments to rapidly changing past external impacts. A common geochronological method used for such studies is radiocarbon (14C) dating, but its application in brackish environments is severely limited by an inability to quantify spatiotemporal variations in 14C reservoir age, or R(t), due to dynamic interplay between river runoff and marine water. Additionally, old carbon effects and species-specific behavioural processes also influence 14C ages. Using the world's largest brackish water body (the estuarine Baltic Sea) as a test-bed, combined with a comprehensive approach that objectively excludes both old carbon and species-specific effects, we demonstrate that it is possible to use 87Sr/86Sr ratios to quantify R(t) in ubiquitous mollusc shell material, leading to almost one order of magnitude increase in Baltic Sea 14C geochronological precision over the current state-of-the-art. We propose that this novel proxy method can be developed for other brackish water bodies worldwide, thereby improving geochronological control in these climate sensitive, near-coastal environments.

  8. SELECTION OF NON-CONVENTIONAL MACHINING PROCESSES USING THE OCRA METHOD

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-04-01

    Full Text Available Selection of the most suitable non-conventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, the operational competitiveness ratings analysis (OCRA) method, for solving NCMP selection problems. The applicability, suitability and computational procedure of the OCRA method are demonstrated by solving three case studies dealing with selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the OCRA method correlate well with those derived by past researchers, which validates the usefulness of this method for solving complex NCMP selection problems.
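
    For orientation, the sketch below implements one common formulation of the OCRA procedure found in the MCDM literature (aggregate ratings for cost-type and benefit-type criteria, shifted so the worst alternative scores zero); the decision matrix, criteria and weights are hypothetical and do not reproduce the record's case studies.

    import numpy as np

    # Hypothetical decision matrix: 4 candidate non-conventional machining
    # processes rated on 2 non-beneficial (cost-type) and 2 beneficial criteria.
    alternatives = ["EDM", "ECM", "LBM", "USM"]
    cost = np.array([[3.0, 40.0],     # e.g. surface roughness, cost per part
                     [2.5, 55.0],
                     [4.0, 30.0],
                     [3.5, 45.0]])
    benefit = np.array([[0.8, 5.0],   # e.g. material removal rate, flexibility
                        [1.2, 4.0],
                        [0.6, 6.0],
                        [0.9, 5.5]])
    w_cost = np.array([0.3, 0.2])     # criteria weights (all four sum to 1)
    w_benefit = np.array([0.3, 0.2])

    # Aggregate ratings for non-beneficial criteria (lower is better) ...
    I_bar = ((cost.max(axis=0) - cost) / cost.min(axis=0) * w_cost).sum(axis=1)
    I_hat = I_bar - I_bar.min()
    # ... and for beneficial criteria (higher is better).
    O_bar = ((benefit - benefit.min(axis=0)) / benefit.min(axis=0) * w_benefit).sum(axis=1)
    O_hat = O_bar - O_bar.min()

    # Overall operational competitiveness rating; a higher rating ranks better.
    P = I_hat + O_hat - (I_hat + O_hat).min()
    for name, score in sorted(zip(alternatives, P), key=lambda t: -t[1]):
        print(f"{name}: {score:.3f}")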

  9. FIRST USE OF STEREOLOGY TO QUANTIFY THE SURVIVAL OF FAT AUTOGRAFTS

    Directory of Open Access Journals (Sweden)

    Eduardo Serna Cuéllar

    2011-05-01

    Full Text Available It is not usual to perform quantitative analyses on surgical materials. Rather, they are evaluated clinically, through qualitative methods, and if quantitation is done, it is on a 2-dimensional basis. In this study, the long-term survival of fat autografts (FAG) in 40 subjects with facial soft tissue defects is quantified. An adipose tissue preparation from the abdomen, obtained through liposuction and centrifugation, is injected subcutaneously. Approximately 14 months later, the treated area is biopsied. Extensive computer-based histological analyses were performed using the stereological method in order to directly obtain three parameters: the volume fraction of adipocytes in the fat tissue (VV), the density (number per volume) of adipocytes in the fat tissue (NV), and the mean cell volume of adipocytes (VA) in each tissue sample. A set of equations based on these three quantitative parameters is produced for evaluation of the volumetric survival fraction (VSF) of FAG. The presented data evidenced a 66% survival fraction at the 14-month follow-up. In routine practice, it would be sufficient to perform this volumetric analysis on the injected and biopsied fat samples to know what fraction of the FAG has survived. This is an objective method for quantifying FAG survival and will allow a standardized comparison between different research series and authors.

  10. An allometric approach to quantify the extinction vulnerability of birds and mammals.

    Science.gov (United States)

    Hilbers, J P; Schipper, A M; Hendriks, A J; Verones, F; Pereira, H M; Huijbregts, M A J

    2016-03-01

    Methods to quantify the vulnerability of species to extinction are typically limited by the availability of species-specific input data pertaining to life-history characteristics and population dynamics. This lack of data hampers global biodiversity assessments and conservation planning. Here, we developed a new framework that systematically quantifies extinction risk based on allometric relationships between various wildlife demographic parameters and body size. These allometric relationships have a solid theoretical and ecological foundation. The extinction risk indicators included are (1) the probability of extinction, (2) the mean time to extinction, and (3) the critical patch size. We applied our framework to assess the global extinction vulnerability of terrestrial carnivorous and non-carnivorous birds and mammals. Irrespective of the indicator used, large-bodied species were found to be more vulnerable to extinction than their smaller counterparts. The patterns with body size were confirmed for all species groups by a comparison with IUCN data on the proportion of extant threatened species: the models correctly predicted a multimodal distribution with body size for carnivorous birds and a monotonic distribution for mammals and non-carnivorous birds. Carnivorous mammals were found to have higher extinction risks than non-carnivores, while birds were more prone to extinction than mammals. These results are explained by the allometric relationships, which predict that the vulnerable species groups have lower intrinsic population growth rates, smaller population sizes, lower carrying capacities, or larger dispersal distances, which, in turn, increase the importance of losses due to environmental stochasticity and dispersal. Our study is the first to integrate population viability analysis and allometry into a novel, process-based framework that is able to quantify the extinction risk of a large number of species without requiring data-intensive, species-specific input data.

  11. Method of processing radioactive wastes

    International Nuclear Information System (INIS)

    Nomura, Ichiro; Hashimoto, Yasuo.

    1984-01-01

    Purpose: To improve the volume-reduction effect, enable simultaneous processing of wastes such as burnable solid wastes, waste resins and sludges, and further convert the processed materials into glass-solidified products which are far less burnable and are chemically and thermally stable. Method: Auxiliaries mainly composed of SiO2, such as clays, and wastes such as burnable solid wastes, waste resins and sludges are charged through a waste hopper into an incinerating-melting furnace comprising an incinerating furnace and a melting furnace, while radioactive concentrated liquid wastes are sprayed from a spray nozzle. The wastes are burnt by the heat from the melting furnace and combustion air, and the sprayed concentrated wastes are dried into solid components by the hot gas after combustion. The solids from the concentrated liquid wastes and the incineration ashes of the wastes are melted together with the auxiliaries in the melting furnace and converted into glass-like material. The glass-like material thus formed is poured into a vessel and gradually cooled to solidify. (Horiuchi, T.)

  12. Methods and systems for the processing of physiological signals

    International Nuclear Information System (INIS)

    Cosnac, B. de; Gariod, R.; Max, J.; Monge, V.

    1975-01-01

    This note is a general survey of the processing of physiological signals. After an introduction on electrodes and their limitations, the physiological nature of the main signals is briefly recalled. Different methods (signal averaging, spectral analysis, morphological shape analysis) are described through applications in magnetocardiography, electro-encephalography, cardiography and electronystagmography. Processing means (single portable instruments and programmable systems) are described through the example of their application to rheography and to the Plurimat'S general system. In conclusion, the methods of signal processing are dominated by the morphological analysis of curves and by the need for a broader introduction of statistical classification. As for the instruments, microprocessors will appear, but specific operators linked to computers will certainly grow in importance. [fr]

  13. Extrusion Process by Finite Volume Method Using OpenFoam Software

    International Nuclear Information System (INIS)

    Matos Martins, Marcelo; Tonini Button, Sergio; Divo Bressan, Jose; Ivankovic, Alojz

    2011-01-01

    Computational codes are very important tools for solving engineering problems. In the analysis of metal forming processes, such as extrusion, this is no different, because computational codes allow the process to be analyzed at reduced cost. Traditionally, the Finite Element Method is used to solve solid mechanics problems; however, the Finite Volume Method (FVM) has been gaining ground in this field of application. This paper presents the velocity field and friction coefficient variation results obtained by numerical simulation using the OpenFOAM software and the FVM to solve an aluminum direct cold extrusion process.

  14. Trunk sway analysis to quantify the warm-up phenomenon in myotonia congenita patients.

    NARCIS (Netherlands)

    Horlings, G.C.; Drost, G.; Bloem, B.R.; Trip, J.; Pieterse, A.J.; Engelen, B.G.M. van; Allum, J.H.J.

    2009-01-01

    OBJECTIVE: Patients with autosomal recessive myotonia congenita display myotonia and transient paresis that diminish with repetitive muscle contractions (warm-up phenomenon). A new approach is presented to quantify this warm-up phenomenon under clinically relevant gait and balance tasks. METHODS:

  15. Changing perspective on tissue processing - comparison of microwave histoprocessing method with the conventional method

    Directory of Open Access Journals (Sweden)

    G Shrestha

    2015-09-01

    Full Text Available Background: Histopathological examination of tissues requires a sliver of formalin-fixed tissue that has been chemically processed and then stained with Haematoxylin and Eosin. The time-honored conventional method of tissue processing, which requires 12 to 13 hours for completion, is employed at the majority of laboratories but is now seeing the

  16. Processing method of radioactive metal wastes

    International Nuclear Information System (INIS)

    Uetake, Naoto; Urata, Megumu; Sato, Masao.

    1985-01-01

    Purpose: To easily reduce the volume and increase the density of radioactive metal wastes while preventing the scattering of radioactivity, and to process them into a form suitable for storage and treatment. Method: Metal wastes mainly composed of zirconium are discharged from nuclear power plants or fuel reprocessing plants; metals such as zirconium and titanium react vigorously with hydrogen, which rapidly diffuses into them to form hydrides. Since the hydrides are extremely brittle and can be pulverized easily, their volume can be reduced. However, since metal hydrides have no ductility, dehydrogenation is applied before molding and fabrication in view of the subsequent storage and processing. Dehydrogenation is as easy as hydrogenation, and fine metal pieces can be molded in a small compression device. For the dehydrogenation, the temperature is slightly increased compared with that used for hydrogenation, the pressure is reduced through the vacuum evacuation system, and the removed hydrogen is purified for reuse. The upper temperature limit for hydrogenation is 680 °C, in order to prevent the scattering of radioactivity. (Kamimura, M.)

  17. Quantifying ground impact fatality rate for small unmanned aircraft

    DEFF Research Database (Denmark)

    La Cour-Harbo, Anders

    2018-01-01

    One of the major challenges of conducting operations of unmanned aircraft, especially operations beyond visual line-of-sight (BVLOS), is to make a realistic and sufficiently detailed risk assessment. An important part of such an assessment is to identify the risk of fatalities, preferably in a quantitative way, since this allows for comparison with manned aviation to determine whether an equivalent level of safety is achievable. This work presents a method for quantifying the probability of fatalities resulting from an uncontrolled descent of an unmanned aircraft conducting a BVLOS flight. The method is based on a standard stochastic model and employs a parameterized high-fidelity ground impact distribution model that accounts for aircraft specifications, parameter uncertainties, and wind. The method also samples the flight path to create an almost continuous quantification of the risk…

  18. siMS Score: Simple Method for Quantifying Metabolic Syndrome

    OpenAIRE

    Soldatovic, Ivan; Vukovic, Rade; Culafic, Djordje; Gajic, Milan; Dimitrijevic-Sreckovic, Vesna

    2016-01-01

    Objective To evaluate the siMS score and siMS risk score, novel continuous metabolic syndrome scores, as methods for quantification of metabolic status and risk. Materials and Methods The siMS score was calculated using the formula: siMS score = 2*Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130 − HDL/1.02 or 1.28 (for male or female subjects, respectively). The siMS risk score was calculated using the formula: siMS risk score = siMS score * age/45 or 50 (for male or female subjects, respectively) * famil...
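
    The quoted formulas translate directly into code; in the sketch below the garbled operator before the HDL term is assumed to be a subtraction, the truncated family-history factor of the risk score is omitted, and the example inputs are hypothetical.

    def sims_score(waist_cm, height_cm, glucose_mmol, tg_mmol,
                   systolic_mmHg, hdl_mmol, male):
        """siMS score as quoted in the record; the operator in front of the
        HDL term is assumed to be a subtraction (garbled in the source)."""
        hdl_ref = 1.02 if male else 1.28
        return (2.0 * waist_cm / height_cm
                + glucose_mmol / 5.6
                + tg_mmol / 1.7
                + systolic_mmHg / 130.0
                - hdl_mmol / hdl_ref)

    def sims_risk_score(score, age_years, male):
        """siMS risk score = siMS score * age / 45 (men) or 50 (women);
        the family-history factor is truncated in the record and omitted here."""
        return score * age_years / (45.0 if male else 50.0)

    # Hypothetical subject
    s = sims_score(waist_cm=102, height_cm=178, glucose_mmol=5.9,
                   tg_mmol=1.9, systolic_mmHg=138, hdl_mmol=1.1, male=True)
    print(f"siMS score      = {s:.2f}")
    print(f"siMS risk score = {sims_risk_score(s, age_years=52, male=True):.2f}")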

  19. Performance Analysis of Entropy Methods on K Means in Clustering Process

    Science.gov (United States)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K Means is a non-hierarchical data clustering method that partitions data into one or more clusters/groups, so that data with the same characteristics are grouped into the same cluster and data with different characteristics are grouped into other clusters. The purpose of this clustering is to minimize an objective function defined for the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, the main disadvantage of this method is that the number k is often not known beforehand. Furthermore, randomly chosen starting points may place two initial centroids very close to each other. Therefore, for the determination of the starting points in K Means, the entropy method is used; it is a method that can be used to determine weights and take a decision from a set of alternatives. Entropy is able to investigate the degree of discrimination among a multitude of data sets: using the entropy criterion, attributes with the highest variation receive the highest weight. This entropy method can thus help the K Means process by determining the starting points, which are usually chosen at random, and clustering with K Means converges more quickly, since the iteration process is faster than in standard K Means. Using the postoperative patient dataset from the UCI Machine Learning Repository, with only 12 records as a worked example of the calculations, the entropy method reached the desired end result in only two iterations.
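
    A compact sketch of the general idea, entropy-based attribute weights feeding a deterministic choice of initial centroids for a plain NumPy K Means, is shown below; it illustrates the principle rather than the authors' exact algorithm, and the data are synthetic stand-ins for the postoperative patient records.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic data: two groups in 3 attributes (hypothetical stand-in for
    # the post-operative patient records mentioned in the record).
    X = np.vstack([rng.normal([36.5, 80, 10], 0.5, size=(20, 3)),
                   rng.normal([38.5, 95, 30], 0.5, size=(20, 3))])

    # --- Entropy weighting of the attributes --------------------------------
    P = X / X.sum(axis=0)                              # column-wise proportions
    entropy = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
    weights = (1.0 - entropy) / (1.0 - entropy).sum()  # high variation -> high weight

    # --- Weighted K Means with entropy-guided initial centroids -------------
    Xw = X * np.sqrt(weights)                          # scale features by weight
    k = 2
    # Initial centroids: the points with the lowest / highest weighted scores,
    # instead of a purely random draw.
    scores = Xw.sum(axis=1)
    centroids = Xw[[scores.argmin(), scores.argmax()]].copy()

    for iteration in range(100):
        labels = np.argmin(((Xw[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        new_centroids = np.array([Xw[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids

    print("attribute weights:", weights.round(3))
    print("converged after", iteration + 1, "iterations")
    print("cluster sizes:", np.bincount(labels))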

  20. New Principles of Process Control in Geotechnics by Acoustic Methods

    OpenAIRE

    Leššo, I.; Flegner, P.; Pandula, B.; Horovčák, P.

    2007-01-01

    The contribution describes a new solution for the control of the rotary drilling process, an elementary process in geotechnics. The article presents the first results of research on the utilization of acoustic methods in the identification process for optimal control of rotary drilling.

  1. Information theoretical methods as discerning quantifiers of the equations of state of neutron stars

    Energy Technology Data Exchange (ETDEWEB)

    Avellar, M.G.B. de, E-mail: mgb.avellar@iag.usp.br [Instituto de Astronomia, Geofísica e Ciências Atmosféricas – Universidade de São Paulo, Rua do Matão 1226, Cidade Universitária, 05508-090, São Paulo, SP (Brazil); Souza, R.A. de, E-mail: rodrigo.souza@usp.br [Instituto de Astronomia, Geofísica e Ciências Atmosféricas – Universidade de São Paulo, Rua do Matão 1226, Cidade Universitária, 05508-090, São Paulo, SP (Brazil); Horvath, J.E., E-mail: foton@iag.usp.br [Instituto de Astronomia, Geofísica e Ciências Atmosféricas – Universidade de São Paulo, Rua do Matão 1226, Cidade Universitária, 05508-090, São Paulo, SP (Brazil); Paret, D.M., E-mail: dmanreza@fisica.uh.cu [Facultad de Física, Universidad de la Habana, San Lázaro y L, Vedado La Habana, 10400 (Cuba)

    2014-11-07

    In this work we use the statistical measures of information entropy, disequilibrium and complexity to discriminate among different approaches and parametrizations of equations of state for quark stars. We confirm the usefulness of such quantities to quantify the role of interactions in such stars. We find that within this approach, a quark matter equation of state such as SU(2) NJL with vectorial coupling and phase transition is slightly favoured and deserves deeper studies. - Highlights: • We used information theory tools to discern different compositions for compact stars. • Hadronic and quark star analogues behave differently when analyzed with these tools. • The effects of different equations of state are singled out in this work.
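
    The three measures named in the record can be illustrated on a discrete probability distribution: a normalized Shannon entropy H, a disequilibrium D (squared distance from equiprobability) and an LMC-style statistical complexity C = H·D. The distributions below are synthetic and are not derived from any equation of state.

    import numpy as np

    def information_measures(p):
        """Normalized Shannon entropy H, disequilibrium D and complexity C = H * D
        for a discrete probability distribution p (LMC-style measures)."""
        p = np.asarray(p, dtype=float)
        p = p / p.sum()
        n = len(p)
        nonzero = p[p > 0]
        H = -(nonzero * np.log(nonzero)).sum() / np.log(n)   # in [0, 1]
        D = ((p - 1.0 / n) ** 2).sum()                       # distance from uniform
        return H, D, H * D

    # Two synthetic distributions standing in for two "equations of state":
    # one nearly uniform, one strongly peaked (both hypothetical).
    nearly_uniform = np.full(50, 1.0) + 0.1 * np.sin(np.arange(50))
    peaked = np.exp(-0.5 * ((np.arange(50) - 25) / 3.0) ** 2)

    for name, dist in [("nearly uniform", nearly_uniform), ("peaked", peaked)]:
        H, D, C = information_measures(dist)
        print(f"{name:>15s}: H = {H:.3f}, D = {D:.4f}, C = {C:.4f}")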

  2. Quantifying forecast quality of IT business value

    NARCIS (Netherlands)

    Eveleens, J.L.; van der Pas, M.; Verhoef, C.

    2012-01-01

    This article discusses how to quantify the forecasting quality of IT business value. We address a common economic indicator often used to determine the business value of project proposals, the Net Present Value (NPV). To quantify the forecasting quality of IT business value, we develop a generalized
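
    As a minimal sketch of the indicator the record refers to, the snippet below computes the NPV of forecast and realized cash flows for a hypothetical IT project and reports their ratio as a crude forecast-quality measure; the generalized measure developed by the authors is not reproduced here, and the cash flows and discount rate are invented.

    def npv(rate, cash_flows):
        """Net present value; the year-0 investment is the first (negative)
        element, followed by the cash flows of years 1..n."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    discount_rate = 0.08                       # hypothetical cost of capital

    # Hypothetical IT project: forecast at proposal time vs. realized outcome.
    forecast_cash_flows = [-500_000, 180_000, 220_000, 240_000, 250_000]
    actual_cash_flows   = [-560_000, 120_000, 190_000, 210_000, 230_000]

    npv_forecast = npv(discount_rate, forecast_cash_flows)
    npv_actual = npv(discount_rate, actual_cash_flows)

    print(f"forecast NPV: {npv_forecast:12,.0f}")
    print(f"actual NPV:   {npv_actual:12,.0f}")
    print(f"forecast / actual ratio: {npv_forecast / npv_actual:.2f}")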

  3. Ethnographic methods for process evaluations of complex health behaviour interventions.

    Science.gov (United States)

    Morgan-Trimmer, Sarah; Wood, Fiona

    2016-05-04

    This article outlines the contribution that ethnography could make to process evaluations for trials of complex health-behaviour interventions. Process evaluations are increasingly used to examine how health-behaviour interventions operate to produce outcomes and often employ qualitative methods to do this. Ethnography shares commonalities with the qualitative methods currently used in health-behaviour evaluations but has a distinctive approach over and above these methods. It is an overlooked methodology in trials of complex health-behaviour interventions that has much to contribute to the understanding of how interventions work. These benefits are discussed here with respect to three strengths of ethnographic methodology: (1) producing valid data, (2) understanding data within social contexts, and (3) building theory productively. The limitations of ethnography within the context of process evaluations are also discussed.

  4. A new decomposition method for parallel processing multi-level optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Min Soo; Choi, Dong Hoon

    2002-01-01

    In practical design, most multidisciplinary problems have a large and complicated design system. Since multidisciplinary problems involve hundreds of analyses and thousands of variables, the grouping of analyses and the order of the analyses within a group affect the speed of the total design cycle. Therefore, it is very important to reorder and regroup the original design processes in order to minimize the total computational cost, by decomposing large multidisciplinary problems into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. In this study, a new decomposition method is proposed for parallel processing of multidisciplinary design optimization, such as Collaborative Optimization (CO) and the Individual Discipline Feasible (IDF) method. Numerical results for two example problems are presented to show the feasibility of the proposed method.

  5. Quantifying potential recharge in mantled sinkholes using ERT.

    Science.gov (United States)

    Schwartz, Benjamin F; Schreiber, Madeline E

    2009-01-01

    Potential recharge through thick soils in mantled sinkholes was quantified using differential electrical resistivity tomography (ERT). Conversion of time series two-dimensional (2D) ERT profiles into 2D volumetric water content profiles using a numerically optimized form of Archie's law allowed us to monitor temporal changes in water content in soil profiles up to 9 m in depth. Combining Penman-Monteith daily potential evapotranspiration (PET) and daily precipitation data with potential recharge calculations for three sinkhole transects indicates that potential recharge occurred only during brief intervals over the study period and ranged from 19% to 31% of cumulative precipitation. Spatial analysis of ERT-derived water content showed that infiltration occurred both on sinkhole flanks and in sinkhole bottoms. Results also demonstrate that mantled sinkholes can act as regions of both rapid and slow recharge. Rapid recharge is likely the result of flow through macropores (such as root casts and thin gravel layers), while slow recharge is the result of unsaturated flow through fine-grained sediments. In addition to developing a new method for quantifying potential recharge at the field scale in unsaturated conditions, we show that mantled sinkholes are an important component of storage in a karst system.
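
    The conversion step described above, turning inverted ERT resistivity into volumetric water content through Archie's law, can be sketched as follows; the petrophysical parameters, porosity and resistivity profile are hypothetical placeholders, not the study's numerically optimized values.

    import numpy as np

    def water_content_from_resistivity(rho_bulk, rho_water, porosity,
                                       a=1.0, m=2.0, n=2.0):
        """Volumetric water content from bulk resistivity via Archie's law:
            rho_bulk = a * rho_water * porosity**(-m) * Sw**(-n)
        so  Sw = (a * rho_water / (rho_bulk * porosity**m)) ** (1/n)
        and theta = porosity * Sw.  Saturation is capped at 1."""
        sw = (a * rho_water / (rho_bulk * porosity ** m)) ** (1.0 / n)
        sw = np.clip(sw, 0.0, 1.0)
        return porosity * sw

    # Hypothetical 1D profile of inverted bulk resistivities (ohm-m) with depth.
    rho_bulk = np.array([400.0, 250.0, 150.0, 90.0, 60.0])
    theta = water_content_from_resistivity(rho_bulk, rho_water=20.0, porosity=0.35)

    for depth, t in zip(range(1, 6), theta):
        print(f"depth {depth} m: volumetric water content = {t:.2f}")

    # Potential recharge over a wetting event could then be estimated from the
    # change in depth-integrated water content between two ERT surveys.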

  6. Psychometric Properties of a Standardized Observation Protocol to Quantify Pediatric Physical Therapy Actions

    NARCIS (Netherlands)

    Sonderer, Patrizia; Ziegler, Schirin Akhbari; Oertle, Barbara Gressbach; Meichtry, Andre; Hadders-Algra, Mijna

    Purpose: Pediatric physical therapy (PPT) is characterized by heterogeneity. This blurs the evaluation of effective components of PPT. The Groningen Observation Protocol (GOP) was developed to quantify contents of PPT. This study assesses the reliability and completeness of the GOP. Methods: Sixty

  7. New Principles of Process Control in Geotechnics by Acoustic Methods

    Directory of Open Access Journals (Sweden)

    Leššo, I.

    2007-01-01

    Full Text Available The contribution describes the new solution of the control of rotary drilling process as some elementary process in geotechnics. The article presents the first results of research on the utilization of acoustic methods in identification process by optimal control of rotary drilling.

  8. Photoelastic method to quantitatively visualise the evolution of whole-field stress in 3D printed models subject to continuous loading processes

    Science.gov (United States)

    Ju, Yang; Ren, Zhangyu; Wang, Li; Mao, Lingtao; Chiang, Fu-Pen

    2018-01-01

    The combination of three-dimensional (3D) printing techniques and photoelastic testing is a promising way to quantitatively determine the continuous whole-field stress distributions in solids that are characterized by complex structures. However, photoelastic testing produces wrapped isoclinic and isochromatic phase maps, and unwrapping these maps has always been a significant challenge. To realize the visualization and transparentization of the stress fields in complex structures, we report a new approach to quantify the continuous evolution of the whole-field stress in photosensitive material that is applicable to the fabrication of complex structures using 3D printing technology. The stress fringe orders are determined by analyzing a series of continuous frames extracted from a video recording of the fringe changes over the entire loading process. The integer portion of the fringe orders at a specific point on the model can be determined by counting the valleys of the light intensity change curve over the whole loading process, and the fractional portion can be calculated based on the cosine function between the light intensity and retardation. This method allows the fringe orders to be determined from the video itself, which significantly improves characterization accuracy and simplifies the experimental operation over the entire process. To validate the proposed method, we compare the results of theoretical calculations to those of experiments based on the diametric compression of a circular disc prepared by a 3D printer with photosensitive resin. The results indicate that the method can accurately determine the stress fringe order, except for points where the deformation is too large to differentiate the fringes pertaining to photoplasticity.
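
    A simplified sketch of the fringe-order bookkeeping described above: in an idealized dark-field circular polariscope the point intensity varies as I = (Ia/2)(1 − cos 2πN), so the integer part of the fringe order N can be taken from the number of intensity valleys recorded during loading and the fractional part recovered from the inverse cosine of the final intensity. The intensity trace below is synthetic, and the optical model is a textbook idealization rather than the authors' full procedure.

    import numpy as np
    from scipy.signal import find_peaks

    # Synthetic intensity history at one point while the load (and hence the
    # fringe order N) ramps linearly from 0 to N_true.  Dark-field model:
    #   I(N) = (Ia / 2) * (1 - cos(2*pi*N))
    N_true, Ia = 3.35, 1.0
    N_ramp = np.linspace(0.0, N_true, 2000)
    intensity = 0.5 * Ia * (1.0 - np.cos(2.0 * np.pi * N_ramp))
    intensity += 0.01 * np.random.default_rng(0).standard_normal(N_ramp.size)

    # Integer part: count the intensity valleys (dark fringes) passed during
    # loading; the valley at zero load is an endpoint and is not counted.
    valleys, _ = find_peaks(-intensity, prominence=0.3 * Ia)
    integer_part = len(valleys)

    # Fractional part from the final intensity via the inverse cosine; the sign
    # of the final slope tells us which half of the fringe cycle we are in.
    I_end = float(np.clip(intensity[-10:].mean(), 0.0, Ia))
    frac = np.arccos(1.0 - 2.0 * I_end / Ia) / (2.0 * np.pi)
    if intensity[-10:].mean() < intensity[-20:-10].mean():   # intensity falling
        frac = 1.0 - frac

    print(f"estimated fringe order = {integer_part + frac:.2f} (true {N_true})")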

  9. A Multi-Objective Optimization Method to integrate Heat Pumps in Industrial Processes

    OpenAIRE

    Becker, Helen; Spinato, Giulia; Maréchal, François

    2011-01-01

    Aim of process integration methods is to increase the efficiency of industrial processes by using pinch analysis combined with process design methods. In this context, appropriate integrated utilities offer promising opportunities to reduce energy consumption, operating costs and pollutants emissions. Energy integration methods are able to integrate any type of predefined utility, but so far there is no systematic approach to generate potential utilities models based on their technology limit...

  10. Methods of control the machining process

    Directory of Open Access Journals (Sweden)

    Yu.V. Petrakov

    2017-12-01

    Full Text Available Presents control methods differentiated by the time at which the information used is obtained: a priori, a posteriori and current. When a priori information is used, the cutting mode is determined by simulating the process of cutting the allowance, where the shapes of the workpiece and the part are represented as wireframes. Control based on current information requires an adaptive control system and modernization of the CNC machine, where the input settings of the unit are computed using the developed optimization software. For control based on a posteriori information, a method is proposed for correcting the shape-generating trajectory in the second pass from measurements of the workpiece surface formed by the first pass. Programs have been developed that automatically design the corrected file for machining.

  11. Quantifying temporal glucose variability in diabetes via continuous glucose monitoring: mathematical methods and clinical application.

    Science.gov (United States)

    Kovatchev, Boris P; Clarke, William L; Breton, Marc; Brayman, Kenneth; McCall, Anthony

    2005-12-01

    Continuous glucose monitors (CGMs) collect detailed blood glucose (BG) time series, which carry significant information about the dynamics of BG fluctuations. In contrast, the methods for analysis of CGM data remain those developed for infrequent BG self-monitoring. As a result, important information about the temporal structure of the data is lost during the translation of raw sensor readings into clinically interpretable statistics and images. The following mathematical methods are introduced into the field of CGM data interpretation: (1) analysis of BG rate of change; (2) risk analysis using previously reported Low/High BG Indices and Poincare (lag) plot of risk associated with temporal BG variability; and (3) spatial aggregation of the process of BG fluctuations and its Markov chain visualization. The clinical application of these methods is illustrated by analysis of data of a patient with Type 1 diabetes mellitus who underwent islet transplantation and with data from clinical trials. Normative data [12,025 reference (YSI device, Yellow Springs Instruments, Yellow Springs, OH) BG determinations] in patients with Type 1 diabetes mellitus who underwent insulin and glucose challenges suggest that the 90%, 95%, and 99% confidence intervals of BG rate of change that could be maximally sustained over 15-30 min are [-2,2], [-3,3], and [-4,4] mg/dL/min, respectively. BG dynamics and risk parameters clearly differentiated the stages of transplantation and the effects of medication. Aspects of treatment were clearly visualized by graphs of BG rate of change and Low/High BG Indices, by a Poincare plot of risk for rapid BG fluctuations, and by a plot of the aggregated Markov process. Advanced analysis and visualization of CGM data allow for evaluation of dynamical characteristics of diabetes and reveal clinical information that is inaccessible via standard statistics, which do not take into account the temporal structure of the data. The use of such methods improves the
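
    Two of the listed analyses can be sketched on a synthetic CGM trace: the BG rate of change over 15-minute windows, and the Low/High BG Indices based on the symmetrizing risk transform commonly associated with this group's work, f(BG) = 1.509[(ln BG)^1.084 − 5.381] with risk = 10·f², plus the data pairs for a Poincaré (lag) plot. The trace is simulated and the constants should be checked against the original publications before any clinical use.

    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic 24 h CGM trace, one reading every 5 minutes (mg/dL).
    t_min = np.arange(0, 24 * 60, 5)
    bg = (130 + 45 * np.sin(2 * np.pi * t_min / (24 * 60) * 3)
          + rng.normal(0, 8, t_min.size)).clip(40, 400)

    # (1) BG rate of change over 15-minute windows (mg/dL/min).
    window = 3                                    # 3 samples x 5 min = 15 min
    rate = (bg[window:] - bg[:-window]) / (window * 5.0)
    print(f"rate of change: mean {rate.mean():+.2f}, "
          f"95% of values within [{np.percentile(rate, 2.5):.2f}, "
          f"{np.percentile(rate, 97.5):.2f}] mg/dL/min")

    # (2) Low/High BG Indices via the symmetrizing risk transform
    #     f(BG) = 1.509 * ((ln BG)^1.084 - 5.381), risk = 10 * f(BG)^2.
    f = 1.509 * (np.log(bg) ** 1.084 - 5.381)
    risk = 10.0 * f ** 2
    lbgi = risk[f < 0].sum() / len(bg)            # average risk from low readings
    hbgi = risk[f > 0].sum() / len(bg)            # average risk from high readings
    print(f"LBGI = {lbgi:.2f}, HBGI = {hbgi:.2f}")

    # (3) Data for a Poincare (lag) plot of consecutive readings: BG(t) vs BG(t+1).
    lag_pairs = np.column_stack([bg[:-1], bg[1:]])
    print("first lag pairs:", lag_pairs[:3].round(1))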

  12. An accurate method for quantifying and analyzing copy number variation in porcine KIT by an oligonucleotide ligation assay

    Directory of Open Access Journals (Sweden)

    Cho In-Cheol

    2007-11-01

    Full Text Available Abstract Background Aside from single nucleotide polymorphisms, copy number variations (CNVs) are the most important factors in susceptibility to genetic disorders because they affect expression levels of genes. In previous studies, pyrosequencing, mini-sequencing, real-time PCR, invader assays and other techniques have been used to detect CNVs. However, the higher the copy number in a genome, the more difficult it is to resolve the copies, so a more accurate method for measuring CNVs and assigning genotype is needed. Results PCR followed by a quantitative oligonucleotide ligation assay (qOLA) was developed for quantifying CNVs. The accuracy and precision of the assay were evaluated for porcine KIT, which was selected as a model locus. Overall, the root mean squares of bias and standard deviation of qOLA were 2.09 and 0.45, respectively. These values are less than half of those in the published pyrosequencing assay for analyzing CNV in porcine KIT. Using a combined method of qOLA and another pyrosequencing for quantitative analysis of KIT copies with spliced forms, we confirmed the segregation of KIT alleles in 145 F1 animals with pedigree information and verified the correct assignment of genotypes. In a diagnostic test on 100 randomly sampled commercial pigs, there was perfect agreement between the genotypes obtained by grouping observations on a scatter plot and by clustering using the nearest centroid sorting method implemented in PROC FASTCLUS of the SAS package. In a test on 159 Large White pigs, there were only two discrepancies between genotypes assigned by the two clustering methods (98.7% agreement), confirming that the quantitative ligation assay established here makes genotyping possible through the accurate measurement of high KIT copy numbers (>4 per diploid genome). Moreover, the assay is sensitive enough for use on DNA from hair follicles, indicating that DNA from various sources could be used. Conclusion We have established a high

  13. Comparison of different methods to quantify fat classes in bakery products.

    Science.gov (United States)

    Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook

    2013-01-15

    The definition of fat differs in different countries; thus whether fat is listed on food labels depends on the country. Some countries list crude fat content in the 'Fat' section on the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method. The results using these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was gravimetrically determined and assessed by fat class using capillary gas chromatography (GC). In most samples, fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, monounsaturated fat or saturated fat content determined by the AOAC 996.06 method was lowest. Almost no difference was observed between fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), monounsaturated fat, saturated fat, and trans fat content obtained by the automated Soxhlet method was higher than that obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. A Situational Implementation Method for Business Process Management Systems

    NARCIS (Netherlands)

    R.L. Jansen; J.P.P. Ravensteyn

    For the integrated implementation of Business Process Management and supporting information systems, many methods are available. Most of these methods, however, apply a one-size-fits-all approach and do not take into account the specific situation of the organization in which an information system is

  15. Using effect size to quantify plantar pressure asymmetry of gait of nondisabled adults and patients with hemiparesis.

    Science.gov (United States)

    Potdevin, François J; Femery, Virginie G; Decatoire, Aurélien; Bosquet, Laurent; Coello, Yann; Moretto, Pierre

    2007-01-01

    In the literature, numerous statistical analyses are used to quantify asymmetry in gait. This study tested the effect size (ES) statistic for quantifying asymmetry in nondisabled and pathological populations. The plantar pressure peaks on eight footprint locations of 27 nondisabled subjects and 18 patients with hemiparesis were bilaterally compared. Asymmetry quantifications were performed with ES and standard statistical tests (index of asymmetry, symmetry index, and ratio index). The results show an advantage in using ES to quantify asymmetry when confidence limits are also calculated. Conversely, traditional asymmetry indexes immediately implied asymmetry without statistical basis. These findings should be considered when one is attempting to diagnose pathological walking patterns or guide rehabilitation processes.
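
    A minimal sketch of the ES approach with confidence limits: Cohen's d between left and right plantar pressure peaks with the usual large-sample approximation for its confidence interval; the pressures are synthetic, the two sides are treated as independent samples for simplicity (a paired analysis would use within-subject differences), and the study's exact procedure is not reproduced.

    import numpy as np

    rng = np.random.default_rng(21)

    # Synthetic plantar pressure peaks (kPa) at one footprint location for the
    # left and right foot of a group of walkers (hypothetical values).
    left = rng.normal(320, 40, size=27)
    right = rng.normal(300, 40, size=27)

    def cohens_d_with_ci(a, b, z=1.96):
        """Cohen's d with an approximate 95% confidence interval
        (normal approximation to the sampling variance of d)."""
        na, nb = len(a), len(b)
        pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                            / (na + nb - 2))
        d = (a.mean() - b.mean()) / pooled_sd
        se = np.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
        return d, d - z * se, d + z * se

    d, lo, hi = cohens_d_with_ci(left, right)
    verdict = "asymmetric" if (lo > 0 or hi < 0) else "no detectable asymmetry"
    print(f"Cohen's d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}] -> {verdict}")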

  16. Psychophysical "blinding" methods reveal a functional hierarchy of unconscious visual processing.

    Science.gov (United States)

    Breitmeyer, Bruno G

    2015-09-01

    Numerous non-invasive experimental "blinding" methods exist for suppressing the phenomenal awareness of visual stimuli. Not all of these suppressive methods occur at, and thus index, the same level of unconscious visual processing. This suggests that a functional hierarchy of unconscious visual processing can in principle be established. The empirical results of extant studies that have used a number of different methods and additional reasonable theoretical considerations suggest the following tentative hierarchy. At the highest levels in this hierarchy is unconscious processing indexed by object-substitution masking. The functional levels indexed by crowding, the attentional blink (and other attentional blinding methods), backward pattern masking, metacontrast masking, continuous flash suppression, sandwich masking, and single-flash interocular suppression, fall at progressively lower levels, while unconscious processing at the lowest levels is indexed by eye-based binocular-rivalry suppression. Although unconscious processing levels indexed by additional blinding methods is yet to be determined, a tentative placement at lower levels in the hierarchy is also given for unconscious processing indexed by Troxler fading and adaptation-induced blindness, and at higher levels in the hierarchy indexed by attentional blinding effects in addition to the level indexed by the attentional blink. The full mapping of levels in the functional hierarchy onto cortical activation sites and levels is yet to be determined. The existence of such a hierarchy bears importantly on the search for, and the distinctions between, neural correlates of conscious and unconscious vision. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. SU-D-207A-06: Pediatric Abdominal Organ Motion Quantified Via a Novel 4D MRI Method

    Energy Technology Data Exchange (ETDEWEB)

    Uh, J; Krasin, MJ; Lucas, JT; Tinkle, C; Merchant, TE; Hua, C [St. Jude Children’s Research Hospital, Memphis, TN (United States)

    2016-06-15

    Purpose: To develop a 4D MRI method for assessing respiration-induced abdominal organ motion in children receiving radiation therapy. Methods: A 4D MRI method using an internal image-based respiratory surrogate has been developed and implemented on a clinical scanner (1.5T Siemens Avanto). Ten patients (younger group: N=6, 2–5 years, anesthetized; older group: N=4, 11–15 years) with neuroblastoma, Wilms tumor, rhabdomyosarcoma, or desmoplastic small round cell tumor received free-breathing 4D MRI scans for treatment planning. Coronal image slices of the entire abdomen were retrospectively constructed in 10 respiratory phases. A B-spline deformable registration (Metz et al. 2011) was performed on 4D datasets to automatically derive motion trajectories of selected anatomical landmarks, including the dome and the center of the liver, and the superior edges of the kidneys and spleen. The extents of the motion in three dimensions (anteroposterior, AP; mediolateral, ML; superoinferior, SI) and the correlations between organ motion trajectories were quantified. Results: The 4D MRI scans were successfully performed in <20 minutes for all patients without the use of any external device. Organ motion extents were larger in adolescents (kidneys: 3–13 mm SI, liver and spleen: 6–18 mm SI) than in younger children (kidneys: <3 mm in all directions; liver and spleen: 1–8 mm SI, 1–5 mm ML and AP). The magnitude of respiratory motion in some adolescents may warrant special motion management. Motion trajectories were not synchronized across selected anatomical landmarks, particularly in the ML and AP directions, indicating inter- and intra-organ variations of the respiratory-induced motion. Conclusion: The developed 4D MRI acquisition and motion analysis methods provide a non-ionizing, non-invasive approach to automatically measure the organ motion trajectory in the pediatric abdomen. It is useful for defining ITV and PRV, monitoring changes in target motion patterns during the

  18. Quantifying the thermodynamic interactions of polyhedral boranes in solution to guide nanocomposite fabrication

    Energy Technology Data Exchange (ETDEWEB)

    Mutz, M. [University of Tennessee, Department of Chemistry (United States); Eastwood, Eric [Honeywell Kansas City Plant (United States); Lee, Mark E. [University of Missouri (United States); Bowen, Daniel E. [Honeywell Kansas City Plant (United States); Dadmun, M. D., E-mail: dad@utk.edu [University of Tennessee, Department of Chemistry (United States)

    2012-11-15

    The solubility of boron containing nanoparticles in a variety of solvents is quantified using static light scattering in conjunction with refractometry. Four polyhedral boranes were tested in this work, using refractometry to obtain dn/dc, while static light scattering quantifies A2. A2 obtained from these measurements was then used to calculate χ, the solute-solvent interaction parameter, and the Hildebrand solubility parameter, δ, which provides a quantifiable method to identify good solvents. Of the nanoparticles studied, 1,3-di-o-carboranylpropane is thermodynamically stable in toluene, with a χ less than 0.5, a solubility limit of 2.47 mg/mL, and all solutions remaining clear with no visible particle settling. For all of the particles tested, there was good correlation between the physical observations of the solutions, χ, and δ. For instance, lower values of χ correspond to a smaller radius of gyration (Rg). A list of suitable solvents based on δ is also presented.

  19. Quantifying the thermodynamic interactions of polyhedral boranes in solution to guide nanocomposite fabrication

    International Nuclear Information System (INIS)

    Mutz, M.; Eastwood, Eric; Lee, Mark E.; Bowen, Daniel E.; Dadmun, M. D.

    2012-01-01

    The solubility of boron containing nanoparticles in a variety of solvents is quantified using static light scattering in conjunction with refractometry. Four polyhedral boranes were tested in this work, using refractometry to obtain dn/dc, while static light scattering quantifies A2. A2 obtained from these measurements was then used to calculate χ, the solute–solvent interaction parameter, and the Hildebrand solubility parameter, δ, which provides a quantifiable method to identify good solvents. Of the nanoparticles studied, 1,3-di-o-carboranylpropane is thermodynamically stable in toluene, with a χ less than 0.5, a solubility limit of 2.47 mg/mL, and all solutions remaining clear with no visible particle settling. For all of the particles tested, there was good correlation between the physical observations of the solutions, χ, and δ. For instance, lower values of χ correspond to a smaller radius of gyration (Rg). A list of suitable solvents based on δ is also presented.
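
    A numerical sketch of the relationships mentioned in the two records above, using the common dilute-solution relation A2 = (v̄²/V1)(1/2 − χ) and the approximate Flory–Huggins/Hildebrand link χ ≈ χs + V1(δ1 − δ2)²/(RT) with χs ≈ 0.34; every numerical input below is hypothetical and the fitted values reported by the authors are not reproduced.

    import numpy as np

    R = 8.314          # J / (mol K)
    T = 298.15         # K

    def chi_from_A2(A2_ml_mol_g2, v_bar_ml_g, V1_ml_mol):
        """Flory-Huggins interaction parameter from the second virial
        coefficient via A2 = (v_bar^2 / V1) * (1/2 - chi)."""
        return 0.5 - A2_ml_mol_g2 * V1_ml_mol / v_bar_ml_g ** 2

    def delta_gap_from_chi(chi, V1_ml_mol, chi_entropic=0.34):
        """|delta_solvent - delta_solute| implied by
        chi ~ chi_s + V1 * (d1 - d2)^2 / (R T); returned in MPa^0.5."""
        residual = max(chi - chi_entropic, 0.0)
        V1_m3 = V1_ml_mol * 1e-6                          # mL/mol -> m^3/mol
        return np.sqrt(residual * R * T / V1_m3) * 1e-3   # Pa^0.5 -> MPa^0.5

    # Hypothetical light-scattering result for a borane in toluene:
    A2 = 2.0e-4        # mL mol / g^2
    v_bar = 0.95       # mL / g, partial specific volume of the solute
    V1 = 106.3         # mL / mol, molar volume of toluene

    chi = chi_from_A2(A2, v_bar, V1)
    print(f"chi = {chi:.3f} ({'good' if chi < 0.5 else 'poor'} solvent by this criterion)")
    print(f"implied |delta_solvent - delta_solute| ~ "
          f"{delta_gap_from_chi(chi, V1):.1f} MPa^0.5")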

  20. An optimized process flow for rapid segmentation of cortical bones of the craniofacial skeleton using the level-set method.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M

    2013-01-01

    Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design and in image-guided surgery applications. The complex morphology of thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge with respect to accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrencies with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger data sets will enable the determination of the method's sensitivity to differences in image quality and CFS morphology.
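
    The abstract does not define "volumetric concurrency" precisely; assuming it behaves like a Dice-style overlap between binary masks, a minimal sketch of the comparison step is:

```python
import numpy as np

def volumetric_concurrency(auto_mask, manual_mask):
    """Overlap ratio between two binary segmentations.

    The abstract does not define 'volumetric concurrency'; a Dice coefficient,
    2*|A∩B| / (|A| + |B|), is assumed here purely for illustration.
    """
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy 3D masks standing in for automated and manual cortical-bone segmentations.
auto = np.zeros((50, 50, 50), dtype=bool)
manual = np.zeros_like(auto)
auto[10:40, 10:40, 20:25] = True
manual[12:40, 10:38, 20:26] = True
print(f"volumetric concurrency = {volumetric_concurrency(auto, manual):.3f}")
```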

  1. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

    Full Text Available Low-altitude Unmanned Aerial Vehicle (UAV) images, which include distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangulation (AT) network, a parallel inner orientation algorithm, a ground control points (GCPs) predicting method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed, reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for photogrammetric processing of low-altitude UAV images and 3D visualization of products.
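
    As a rough illustration of the tie-point step above, the sketch below runs the baseline OpenCV SIFT pipeline on a synthetic image pair; the paper's improved SIFT method, strip management, and bundle adjustment are not reproduced, and the image pair is generated rather than real UAV imagery.

```python
import cv2
import numpy as np

# Two synthetic "overlapping frames": a blurred random texture and a shifted/rotated
# copy of it, standing in for a pair of low-altitude UAV images.
rng = np.random.default_rng(6)
img1 = rng.uniform(0, 255, (480, 640)).astype(np.uint8)
img1 = cv2.GaussianBlur(img1, (7, 7), 0)                     # smooth noise so SIFT finds blobs
M = cv2.getRotationMatrix2D((320, 240), angle=5, scale=1.0)  # small rotation + shift
M[:, 2] += (15, -10)
img2 = cv2.warpAffine(img1, M, (640, 480))

# Baseline SIFT tie-point extraction (not the improved SIFT method of the paper).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching; surviving matches become candidate tie points for bundle adjustment.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
tie_points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
              for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(tie_points)} candidate tie points")
```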

  2. Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods

    Science.gov (United States)

    Soroush, Masoud; Weinberger, Charles B.

    2010-01-01

    This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…

  3. Research on the raw data processing method of the hydropower construction project

    Science.gov (United States)

    Tian, Zhichao

    2018-01-01

    Based on the characteristics of the fixed data, this paper compares various mathematical-statistical analysis methods and chooses the improved Grubbs criterion to analyze the data, identifying data for which the processing method is not suitable through the analysis. It is proved that this method can be applied to the processing of fixed raw data. This paper provides a reference for reasonably determining the effective quota analysis data.

  4. Effect of processing methods on the mechanical properties of engineered bamboo

    OpenAIRE

    Sharma, Bhavna; Gatóo, Ana; Ramage, Michael H.

    2015-01-01

    Engineered bamboo is increasingly explored as a material with significant potential for structural applications. The material is comprised of raw bamboo processed into a laminated composite. Commercial methods vary due to the current primary use as an architectural surface material, with processing used to achieve different colours in the material. The present work investigates the effect of two types of processing methods, bleaching and caramelisation, to determine the effect on the mechanic...

  5. Minimal processing - preservation methods of the future: an overview

    International Nuclear Information System (INIS)

    Ohlsson, T.

    1994-01-01

    Minimal-processing technologies are modern techniques that provide sufficient shelf life to foods to allow their distribution, while also meeting the demands of the consumers for convenience and fresh-like quality. Minimal-processing technologies can be applied at various stages of the food distribution chain, in storage, in processing and/or in packaging. Examples of methods will be reviewed, including modified-atmosphere packaging, high-pressure treatment, sous-vide cooking and active packaging

  6. Operating cost budgeting methods: quantitative methods to improve the process

    Directory of Open Access Journals (Sweden)

    José Olegário Rodrigues da Silva

    Full Text Available Operating cost forecasts are used in economic feasibility studies of projects and in the budgeting process. Studies have pointed out that some companies are not satisfied with the budgeting process and chief executive officers want updates more frequently. In these cases, the main problem lies in the costs versus benefits. Companies seek simple and cheap forecasting methods without, at the same time, conceding in terms of quality of the resulting information. This study aims to compare operating cost forecasting models to identify the ones that are relatively easy to implement and produce less deviation. For this purpose, we applied ARIMA (autoregressive integrated moving average) and distributed dynamic lag models to data from a Brazilian petroleum company. The results suggest that the models have potential application, and that multivariate models fitted better and proved to be a better way to forecast costs than univariate models.
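
    The univariate side of that comparison can be sketched with statsmodels; the monthly cost series, ARIMA order, and forecast horizon below are placeholders, not the company data or models used in the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly operating-cost series standing in for the company data.
rng = np.random.default_rng(1)
idx = pd.date_range("2015-01", periods=60, freq="MS")
costs = pd.Series(100 + 0.5 * np.arange(60) + rng.normal(0, 3, 60), index=idx)

# ARIMA order chosen arbitrarily for the sketch; in practice it would be selected
# by information criteria or residual diagnostics.
model = ARIMA(costs, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=12)
print(forecast.round(1))
```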

  7. Quantifying the impacts of landscape heterogeneity and model resolution on dust emissions in the Arabian Peninsula

    KAUST Repository

    Shi, Mingjie; Yang, Zong-Liang; Stenchikov, Georgiy L.; Parajuli, Sagar P.; Tao, Weichun; Kalenderski, Stoitchko

    2016-01-01

    This study evaluates the spatiotemporal variability of dust emission in the Arabian Peninsula and quantifies the emission sensitivity to land-cover heterogeneity by using the Community Land Model version 4 (CLM4) at three different spatial resolutions. The land-cover heterogeneity is represented by the CLM4-default plant functional types (PFTs) and the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover types, respectively, at different grids. We area-average surface vegetation data and use the default nearest neighbor method to interpolate meteorological variables. We find that using MODIS data leads to a slightly higher coverage of vegetated land than the default PFT data; the former also gives more dust emission than the latter at 25- and 50-km grids, as the default PFT data have more gridcells favoring less dust emission. The research highlights the importance of using proper data-processing methods or dust emission thresholds to preserve the dust emission accuracy in land models. © 2016 Elsevier Ltd.

  8. Quantifying the impacts of landscape heterogeneity and model resolution on dust emissions in the Arabian Peninsula

    KAUST Repository

    Shi, Mingjie

    2016-01-11

    This study evaluates the spatiotemporal variability of dust emission in the Arabian Peninsula and quantifies the emission sensitivity to land-cover heterogeneity by using the Community Land Model version 4 (CLM4) at three different spatial resolutions. The land-cover heterogeneity is represented by the CLM4-default plant functional types (PFTs) and the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover types, respectively, at different grids. We area-average surface vegetation data and use the default nearest neighbor method to interpolate meteorological variables. We find that using MODIS data leads to a slightly higher coverage of vegetated land than the default PFT data; the former also gives more dust emission than the latter at 25- and 50-km grids, as the default PFT data have more gridcells favoring less dust emission. The research highlights the importance of using proper data-processing methods or dust emission thresholds to preserve the dust emission accuracy in land models. © 2016 Elsevier Ltd.

  9. Possibilities of implementing nonthermal processing methods in the dairy industry

    OpenAIRE

    Irena Jeličić

    2010-01-01

    In the past two decades a lot of research in the field of food science has focused on new, non-thermal processing methods. This article describes the most intensively investigated new processing methods for implementation in the dairy industry, like microfiltration, high hydrostatic pressure, ultrasound and pulsed electric fields. For each method an overview is given of the principle of microbial inactivation, the obtained results regarding reduction of microorganisms as well as the positive ...

  10. System and method for deriving a process-based specification

    Science.gov (United States)

    Hinchey, Michael Gerard (Inventor); Rash, James Larry (Inventor); Rouff, Christopher A. (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  11. Efficient point cloud data processing in shipbuilding: Reformative component extraction method and registration method

    Directory of Open Access Journals (Sweden)

    Jingyu Sun

    2014-07-01

    Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the ship components’ accuracy evaluated efficiently during most of the manufacturing steps. Evaluating components’ accuracy by comparing each component’s point cloud data scanned by laser scanners and the ship’s design data formatted in CAD cannot be processed efficiently when (1) the components extracted from the point cloud data include irregular obstacles endogenously, or when (2) registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data speeds up neighbor searching for each point. A region growing method performed on the neighbor points of the seed point extracts the continuous part of the component, while curved surface fitting and B-spline curved line fitting at the edge of the continuous part recognize the neighbor domains of the same component divided by obstacles’ shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two sets of data after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the methods proposed in this paper efficiently support accuracy-evaluation-targeted point cloud data processing in practice.
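
    A compact sketch of the building blocks named above (k-d tree neighbor search, region growing from a seed, and one ICP-style rigid alignment step) is given below. It is a generic illustration on synthetic points, not the paper's reformative methods; the surface/B-spline fitting and the PCA direction step are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
scan = rng.uniform(0, 1, (2000, 3))                 # stand-in for scanned plate points
design = scan + np.array([0.05, -0.02, 0.01])       # shifted copy standing in for CAD points

# 1) A k-d tree speeds up neighbour queries on the scanned points.
tree = cKDTree(scan)

# 2) Naive region growing from a seed point: collect all points reachable through
#    radius neighbourhoods (a simplified stand-in for the paper's extraction step).
def region_grow(points, tree, seed_idx, radius=0.05):
    visited = {seed_idx}
    frontier = [seed_idx]
    while frontier:
        idx = frontier.pop()
        for nbr in tree.query_ball_point(points[idx], radius):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
    return np.fromiter(visited, dtype=int)

component = region_grow(scan, tree, seed_idx=0)
print(f"region grown to {component.size} points")

# 3) One ICP-style rigid alignment step: nearest-neighbour correspondences + Kabsch/SVD.
_, nn = tree.query(design)
src, dst = design, scan[nn]
src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
R = Vt.T @ U.T
tvec = dst.mean(0) - R @ src.mean(0)
aligned = design @ R.T + tvec
print(f"mean residual after one step: {np.linalg.norm(aligned - dst, axis=1).mean():.4f}")
```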

  12. Quantifying bleaching for zero-age fluvial sediment: A Bayesian approach

    International Nuclear Information System (INIS)

    Cunningham, Alastair C.; Evans, Mary; Knight, Jasper

    2015-01-01

    Luminescence dating of sediment requires the sand grains to have been exposed to sunlight prior to their most recent burial. Under fluvial transport, the amount of sunlight exposure may not always be sufficient to reset the luminescence signal, a phenomenon known as ‘partial bleaching’. The extent of bleaching is dependent on a combination of geomorphic, sedimentological and fluvial processes. If bleaching can be quantified, and the relationship with these processes understood, it could potentially be used as a new environmental proxy for changes in the dynamics of river systems. Here, we use a recently developed statistical model to evaluate the extent of bleaching, by inferring the proportion of well-bleached grains in the small-aliquot population. We sampled low-flow and flood deposits at a single site on the River Sabie, South Africa. We show that the low-flow sediment is almost perfectly bleached (>80% of grains well bleached), while sediment at flood elevations is partially bleached (20–70% of grains well bleached). The degree of bleaching may show a relationship with flood magnitude as defined by elevation above normal river level, and we speculate on the causes of variability in bleaching between flood samples. - Highlights: • We sampled modern river sediment from low-flow and flood elevations. • The unbleached OSL dose was measured. • Bayesian methods can estimate the proportion of well-bleached grains. • Low-flow sediments are well bleached; flood deposits are poorly bleached.

  13. Quantifying the Lateral Bracing Provided by Standing Seam Roof Systems

    OpenAIRE

    Sorensen, Taylor J.

    2016-01-01

    One of the major challenges of engineering is finding the proper balance between economical and safe. Currently engineers at Nucor Corporation have ignored the additional lateral bracing provided by standing seam roofing systems to joists because of the lack of methods available to quantify the amount of bracing provided. Based on the results of testing performed herein, this bracing is significant, potentially resulting in excessively conservative designs and unnecessary costs. This proje...

  14. Development of X-ray radiography examination technology by image processing method

    Energy Technology Data Exchange (ETDEWEB)

    Min, Duck Kee; Koo, Dae Seo; Kim, Eun Ka [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-06-01

    To measure the dimensions of nuclear fuel rods rapidly and accurately by X-ray radiography examination, an image processing system composed of a 979 CCD-L camera, an image processing card and fluorescent lighting was set up, enabling image processing to be performed. X-ray radiography examination technology enabling dimension measurement of nuclear fuel rods was developed by the image processing method. Dimension measurement of a standard fuel rod by the image processing method showed a 2% reduction in relative measuring error compared with X-ray radiography film, and was better by approximately 100–200 μm in measuring accuracy. (author). 9 refs., 22 figs., 3 tabs.

  15. Active voltammetric microsensors with neural signal processing.

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, M. C.

    1998-12-11

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical 'signatures' from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can
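
    The pattern-recognition step can be pictured with a small classifier trained on whole voltammogram signatures. The sketch below uses a scikit-learn multilayer perceptron on synthetic sweeps; the laboratory's actual network architecture and real voltammograms are not described in the record, so everything here is a placeholder.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic voltammogram "signatures": each sample is a full current-vs-voltage sweep
# (200 points) whose peak position depends on a made-up analyte class.
rng = np.random.default_rng(3)
v = np.linspace(-1, 1, 200)
peak_values = np.array([-0.4, 0.0, 0.5])     # one characteristic peak per class

classes = rng.integers(0, 3, size=300)
X = np.array([np.exp(-((v - peak_values[c]) ** 2) / 0.01) + rng.normal(0, 0.05, v.size)
              for c in classes])
y = classes

# Train a small neural network on the whole signatures; the architecture is arbitrary.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```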

  16. Active voltammetric microsensors with neural signal processing

    Science.gov (United States)

    Vogt, Michael C.; Skubal, Laura R.

    1999-02-01

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical 'signatures' from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can detect, recognize, and

  17. Methods for nondestructive assay holdup measurements in shutdown uranium enrichment facilities

    International Nuclear Information System (INIS)

    Hagenauer, R.C.; Mayer, R.L. II.

    1991-09-01

    Measurement surveys of uranium holdup using nondestructive assay (NDA) techniques are being conducted for shutdown gaseous diffusion facilities at the Oak Ridge K-25 Site (formerly the Oak Ridge Gaseous Diffusion Plant). When in operation, these facilities processed UF6 with enrichments ranging from 0.2 to 93 wt% 235U. Following final shutdown of all process facilities, NDA surveys were initiated to provide process holdup data for the planning and implementation of decontamination and decommissioning activities. A three-step process is used to locate and quantify deposits: (1) high-resolution gamma-ray measurements are performed to generally define the relative abundances of radioisotopes present, (2) sizable deposits are identified using gamma-ray scanning methods, and (3) the deposits are quantified using neutron measurement methods. Following initial quantitative measurements, deposit sizes are calculated; high-resolution gamma-ray measurements are then performed on the items containing large deposits. The quantitative estimates for the large deposits are refined on the basis of these measurements. Facility management is using the results of the survey to support a variety of activities including isolation and removal of large deposits; performing health, safety, and environmental analyses; and improving facility nuclear material control and accountability records. 3 refs., 1 tab

  18. Project ES3: attempting to quantify and measure the level of stress.

    Science.gov (United States)

    Aguiló, Jordi; Ferrer-Salvans, Pau; García-Rozo, Antonio; Armario, Antonio; Corbí, Ángel; Cambra, Francisco J; Bailón, Raquel; González-Marcos, Ana; Caja, Gerardo; Aguiló, Sira; López-Antón, Raúl; Arza-Valdés, Adriana; Garzón-Rey, Jorge M

    2015-11-01

    The WHO has qualified stress as a 'world epidemic' due to its increasingly greater incidence on health. The work described in this paper represents an attempt to objectively quantify the level of stress. The aim of the method developed here is to measure how close or how far a subject is from a situation that can be considered 'normal' in medical and social terms. The literature on the pathophysiology of stress and its methods of study in experiments on both animals and humans was reviewed. Nine prospective observational studies were undertaken with different types of subjects and stressors covering the different types of stress. The results of the literature review made it possible to identify the different types of stress, the indicators that yield significant results, the psychometric tests and the well-documented 'stressors'. This material was then used to design the general method and the details of the nine clinical trials. The preliminary results obtained in some of the studies were used to validate the indicators as well as the efficacy of the techniques used experimentally to diminish stress or to produce it. The early results obtained in the experimental trials show that we are on the right path towards defining and validating multivariable markers for quantifying levels of stress and also suggest that the method can be applied in a similar way to the study of mental disorders.

  19. Diagnosis of osteoarthritis by cartilage surface smoothness quantified automatically from knee MRI

    DEFF Research Database (Denmark)

    Tummala, Sudhakar; Bay-Jensen, Anne-Christine; Karsdal, Morten A.

    2011-01-01

    Objective: We investigated whether surface smoothness of articular cartilage in the medial tibiofemoral compartment quantified from magnetic resonance imaging (MRI) could be appropriate as a diagnostic marker of osteoarthritis (OA). Method: At baseline, 159 community-based subjects aged 21 to 81...... with normal or OA-affected knees were recruited to provide a broad range of OA states. Smoothness was quantified using an automatic framework from low-field MRI in the tibial, femoral, and femoral subcompartments. Diagnostic ability of smoothness was evaluated by comparison with conventional OA markers......, correlations between smoothness and pain values and smoothness loss and cartilage loss supported a link to progression of OA. Thereby, smoothness markers may allow detection and monitoring of OA, supplementing currently accepted markers....

  20. Effective updating process of seismic fragilities using Bayesian method and information entropy

    International Nuclear Information System (INIS)

    Kato, Masaaki; Takata, Takashi; Yamaguchi, Akira

    2008-01-01

    Seismic probabilistic safety assessment (SPSA) is an effective method for evaluating the overall seismic safety performance of a plant. Seismic fragilities are estimated to quantify the seismically induced accident sequences. It is a great concern that the SPSA results involve uncertainties, a part of which comes from the uncertainty in the seismic fragility of equipment and systems. A straightforward approach to reduce the uncertainty is to perform a seismic qualification test and to reflect the results in the seismic fragility estimate. In this paper, we propose a figure of merit to find the most cost-effective condition of the seismic qualification tests in terms of the acceleration level and the number of components tested. Then a mathematical method to reflect the test results in the fragility update is developed. A Bayesian method is used for the fragility update procedure. Since the lognormal distribution used for the fragility model does not have a conjugate prior, a parameterization method is proposed so that the posterior distribution expresses the characteristics of the fragility. The information entropy is used as the figure of merit to express the importance of the obtained evidence. It is found that the information entropy is strongly associated with the uncertainty of the fragility. (author)
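
    The paper's exact parameterization is not reproduced here; the sketch below only illustrates the generic ingredients it combines: a lognormal fragility curve, a grid-based Bayesian update of the median capacity from hypothetical pass/fail qualification-test outcomes, and the entropy of the posterior as a rough figure of merit.

```python
import numpy as np
from scipy.stats import norm

# Lognormal fragility: P(failure | a) = Phi( ln(a / Am) / beta ), Am = median capacity.
beta = 0.4                              # assumed composite log-standard deviation
Am_grid = np.linspace(0.2, 3.0, 400)    # grid over the uncertain median capacity (g)
prior = norm.pdf(np.log(Am_grid), loc=np.log(1.0), scale=0.3)  # assumed prior shape, normalized below
prior /= np.trapz(prior, Am_grid)

def p_fail(a, Am):
    return norm.cdf(np.log(a / Am) / beta)

# Hypothetical qualification-test evidence: (test acceleration in g, survived?).
tests = [(0.8, True), (1.2, True), (1.6, False)]

likelihood = np.ones_like(Am_grid)
for a, survived in tests:
    pf = p_fail(a, Am_grid)
    likelihood *= (1.0 - pf) if survived else pf

posterior = prior * likelihood
posterior /= np.trapz(posterior, Am_grid)

def entropy(pdf, x):
    p = np.clip(pdf, 1e-12, None)
    return -np.trapz(p * np.log(p), x)

print(f"prior entropy:     {entropy(prior, Am_grid):.3f}")
print(f"posterior entropy: {entropy(posterior, Am_grid):.3f}  (lower = less uncertainty)")
```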

  1. Methods to Quantify Uncertainty in Human Health Risk Assessment

    National Research Council Canada - National Science Library

    Aurelius, Lea

    1998-01-01

    ...) and other health professionals, such as the Bioenvironmental Engineer, to identify the appropriate use of probabilistic techniques for a site, and the methods by which probabilistic risk assessment...

  2. Method of processing radioactive rare gases

    International Nuclear Information System (INIS)

    Tagusagawa, Atsushi; Tuda, Kazuaki.

    1988-01-01

    Purpose: To obtain a safe processing method, without using mechanical pumps or pressure-proof containers and, accordingly, with no risk of leakage of radioactive rare gas. Method: A container filled with zeolite is inserted, with its cover open, into an autoclave. Meanwhile, krypton-containing gases are supplied to an adsorption tower filled with adsorbents, cooled so that the krypton is adsorbed, and then heated to desorb the adsorbed krypton. The krypton-containing gases are introduced into the autoclave due to the pressure difference, thereby causing krypton to adsorb onto the zeolite at ambient temperature. Then, the inside of the autoclave is heated to desorb krypton and adsorbed moisture from the zeolite and the pressure is elevated. After sending the gases under pressure to the adsorption tower, the zeolite-filled container is taken out of the autoclave, tightly closed and then transferred to a predetermined site. (Takahashi, M.)

  3. Recovery process of elite athletes: A review of contemporary methods

    Directory of Open Access Journals (Sweden)

    Veljović Draško

    2012-01-01

    Full Text Available Numerous training stimuli, as well as competition, can reduce the level of abilities among athletes. This decline in performance can be a temporary phenomenon, lasting several minutes or several hours after a workout, or take much longer, even several days. The lack of an adequate recovery process can mean that athletes are not able to train at the desired intensity or do not fully meet the tasks of the next training session. Chronic fatigue can lead to injuries, and therefore full recovery is necessary for achieving the optimal level of abilities that will ensure better athletic performance. For this reason, athletes often carry out a variety of techniques and methods aimed at recovery after training or a match. These have become a part of the training process, and their purpose is the reduction of stress and fatigue incurred as a result of daily exposure to intense training stimuli. There are numerous methods and techniques today that can accelerate the recovery process of athletes. For this reason it is necessary to know the efficiency of an adequate method to be applied in the training process. The aim of this review article is to point to those currently used and their effects on the process of recovery after physical activity in elite sport.

  4. Quantifying requirements volatility effects

    NARCIS (Netherlands)

    Kulk, G.P.; Verhoef, C.

    2008-01-01

    In an organization operating in the bancassurance sector we identified a low-risk IT subportfolio of 84 IT projects comprising together 16,500 function points, each project varying in size and duration, for which we were able to quantify its requirements volatility. This representative portfolio

  5. Thermosensory reversal effect quantified

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of 'cold' and 'warm' materials are reversed. In this paper, this effect is quantified by

  6. Automatic MRI Quantifying Methods in Behavioral-Variant Frontotemporal Dementia Diagnosis

    DEFF Research Database (Denmark)

    Cajanus, Antti; Hall, Anette; Koikkalainen, Juha

    2018-01-01

    genetic status in the differentiation sensitivity. Methods: The MRI scans of 50 patients with bvFTD (17 C9ORF72 expansion carriers) were analyzed using 6 quantification methods as follows: voxel-based morphometry (VBM), tensor-based morphometry, volumetry (VOL), manifold learning, grading, and white...

  7. Bridging Technometric Method and Innovation Process: An Initial Study

    Science.gov (United States)

    Rumanti, A. A.; Reynaldo, R.; Samadhi, T. M. A. A.; Wiratmadja, I. I.; Dwita, A. C.

    2018-03-01

    The process of innovation is one of the ways utilized to increase the capability of a technology component that reflects the needs of an SME. The technometric method can be used to identify the level of technology advancement in an SME, and also which technology component needs to be maximized in order to significantly deliver an innovation. This paper serves as an early study, which lays out a conceptual framework that identifies and elaborates the principles of the innovation process from a well-established innovation model by Martin together with the technometric method, based on initial background research conducted at the SME Ira Silver in Jogjakarta, Indonesia.

  8. Method of processing radioactive liquid wastes

    International Nuclear Information System (INIS)

    Kurumada, Norimitsu; Shibata, Setsuo; Wakabayashi, Toshikatsu; Kuribayashi, Hiroshi.

    1984-01-01

    Purpose: To facilitate the processing of liquid wastes containing insoluble salts of boric acid and calcium in a process for solidifying, under volume reduction, radioactive liquid wastes containing boron. Method: A soluble calcium compound (such as calcium hydroxide, calcium oxide or calcium nitrate) is added to liquid wastes whose pH value is adjusted to neutral or alkaline, such that the molar ratio of calcium to boron in the liquid wastes is at least 0.2. Then, they are agitated at a temperature between 40 and 70°C to form an insoluble calcium salt containing boron. Thereafter, the liquid is maintained at a temperature below the above-mentioned forming temperature to age the products and is then evaporated into a liquid concentrate containing 30–80% by weight of solid components. The concentrated liquid is mixed with cement to solidify. (Ikeda, J.)

  9. The OptD-multi method in LiDAR processing

    International Nuclear Information System (INIS)

    Błaszczak-Bąk, Wioleta; Sobieraj-Żłobińska, Anna; Kowalik, Michał

    2017-01-01

    New and constantly developing technology for acquiring spatial data, such as LiDAR (light detection and ranging), is a source of large volumes of data. However, such an amount of data is not always needed for developing the most popular LiDAR products: the digital terrain model (DTM) or digital surface model. Therefore, in many cases, the number of contained points is reduced in the pre-processing stage. The degree of reduction is determined by the algorithm used, which should enable the user to obtain a dataset appropriate and optimal for the planned purpose. The aim of this article is to propose a new Optimum Dataset method (OptD method) for the processing of LiDAR point clouds. The OptD method can reduce the number of points in a dataset for specified optimization criteria concerning the characteristics of the generated DTM. The OptD method can be used in two variants: OptD-single (one criterion for optimization) and OptD-multi (two or more optimization criteria). The OptD-single method has been thoroughly tested and presented by Błaszczak-Bąk (2016 Acta Geodyn. Geomater. 13/4 379–86). In this paper the authors discuss the OptD-multi method. (paper)

  10. OPTIMAL SIGNAL PROCESSING METHODS IN GPR

    Directory of Open Access Journals (Sweden)

    Saeid Karamzadeh

    2014-01-01

    Full Text Available In the past three decades, many different applications of Ground Penetrating Radar (GPR) have appeared in real life. There are important challenges for this radar in civil applications and also in military applications. In this paper, the fundamentals of GPR systems will be covered and three important signal processing methods (Wavelet Transform, Matched Filter and Hilbert-Huang) will be compared to each other in order to get the most accurate information about objects which are in the subsurface or behind a wall.
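
    One of the three methods named above, the matched filter, can be sketched in a few lines of NumPy: correlate a received trace with the transmitted wavelet and read reflector arrival times off the correlation peaks. The Ricker-style pulse, sampling rate, and reflector delays below are invented for the illustration.

```python
import numpy as np

# Synthetic GPR A-scan: a Ricker-style wavelet reflected at two delays, buried in noise.
rng = np.random.default_rng(4)
fs = 1e9                                    # 1 GS/s sampling (arbitrary)
t = np.arange(0, 2e-7, 1 / fs)

def ricker(t0, f0=4e8):
    arg = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * arg) * np.exp(-arg)

trace = 1.0 * ricker(6e-8) + 0.5 * ricker(1.3e-7) + rng.normal(0, 0.2, t.size)

# Matched filter: correlate the trace with the transmitted wavelet (template centred
# in a short window so correlation peaks line up with reflector arrival times).
template = ricker(1e-8)[: int(2e-8 * fs)]
mf_out = np.correlate(trace, template, mode="same")

peak_idx = int(np.argmax(mf_out))
print(f"strongest reflector estimated at t = {t[peak_idx]:.1e} s (true value 6.0e-08 s)")
```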

  11. USING ANALYTIC HIERARCHY PROCESS (AHP) METHOD IN RURAL DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tülay Cengiz

    2003-04-01

    Full Text Available Rural development is a body of economic and social policies aimed at improving living conditions in rural areas by enabling the rural population to utilize the economic, social, cultural and technological blessings of city life in place, without migrating. As is understood from this description, rural development is a very broad concept. Therefore, in development efforts the problem should be stated clearly and analyzed, and many criteria should be evaluated by experts. The Analytic Hierarchy Process (AHP) method can be utilized at these stages of development efforts. The AHP method is one of the multi-criteria decision methods. After breaking a problem down into smaller pieces, the relative importance and level of importance of two compared elements are determined. It allows evaluation of qualitative and quantitative factors. At the same time, it permits the ideas of many experts to be gathered and used in the decision process. Because of these features of the AHP method, it can be used in rural development work. In this article, cultural factors, one of the important components of rural development that is often ignored in many studies, were evaluated as an example. As a result of these applications and evaluations, it is concluded that the AHP method can be helpful in rural development efforts.
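
    As a concrete illustration of the pairwise-comparison step described above, the sketch below computes AHP priority weights from a hypothetical 3x3 comparison matrix via the principal eigenvector, together with the standard consistency ratio; the criteria and judgments are invented for the example.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three rural-development criteria
# (e.g. economic, social, cultural); entry [i, j] = how much more important i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()
print("priority weights:", np.round(w, 3))

# Consistency ratio: CI = (lambda_max - n) / (n - 1), divided by the random index RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.58                      # Saaty's random index for n = 3
print(f"consistency ratio = {CI / RI:.3f} (acceptable if < 0.10)")
```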

  12. Implementation of a new rapid tissue processing method--advantages and challenges

    DEFF Research Database (Denmark)

    Munkholm, Julie; Talman, Maj-Lis; Hasselager, Thomas

    2008-01-01

    Conventional tissue processing of histologic specimens has been carried out in the same manner for many years. It is a time-consuming process involving batch production, resulting in a 1-day delay of the diagnosis. Microwave-assisted tissue processing enables a continuous high flow of histologic...... specimens through the processor with a processing time of as low as 1h. In this article, we present the effects of the automated microwave-assisted tissue processor on the histomorphologic quality and the turnaround time (TAT) for histopathology reports. We present a blind comparative study regarding...... the histomorphologic quality of microwave-processed and conventionally processed tissue samples. A total of 333 specimens were included. The microwave-assisted processing method showed a histomorphologic quality comparable to the conventional method for a number of tissue types, including skin and specimens from...

  13. Extension of moment projection method to the fragmentation process

    International Nuclear Information System (INIS)

    Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro; Mosbach, Sebastian; Xu, Rong; Yang, Wenming; Kraft, Markus

    2017-01-01

    The method of moments is a simple but efficient method of solving the population balance equation which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested for 13 different test cases for different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA) and advantages of MPM are drawn.

  14. Extension of moment projection method to the fragmentation process

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Shaohua [Department of Mechanical Engineering, National University of Singapore, Engineering Block EA, Engineering Drive 1, 117576 (Singapore); Yapp, Edward K.Y.; Akroyd, Jethro; Mosbach, Sebastian [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge, CB2 3RA (United Kingdom); Xu, Rong [School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459 (Singapore); Yang, Wenming [Department of Mechanical Engineering, National University of Singapore, Engineering Block EA, Engineering Drive 1, 117576 (Singapore); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge, CB2 3RA (United Kingdom); School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459 (Singapore)

    2017-04-15

    The method of moments is a simple but efficient method of solving the population balance equation which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested for 13 different test cases for different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA) and advantages of MPM are drawn.

  15. Quantum-Mechanical Methods for Quantifying Incorporation of Contaminants in Proximal Minerals

    Directory of Open Access Journals (Sweden)

    Lindsay C. Shuller-Nickles

    2014-07-01

    Full Text Available Incorporation reactions play an important role in dictating immobilization and release pathways for chemical species in low-temperature geologic environments. Quantum-mechanical investigations of incorporation seek to characterize the stability and geometry of incorporated structures, as well as the thermodynamics and kinetics of the reactions themselves. For a thermodynamic treatment of incorporation reactions, a source of the incorporated ion and a sink for the released ion are necessary. These sources/sinks in a real geochemical system can be solids, but more commonly, they are charged aqueous species. In this contribution, we review the current methods for ab initio calculations of incorporation reactions, many of which do not consider incorporation from aqueous species. We detail a recently-developed approach for the calculation of incorporation reactions, expand on the part modeling the interaction of periodic solids with aqueous source and sink phases, and present new research using this approach. To model these interactions, a systematic series of calculations must be done to transform periodic solid source and sink phases to aqueous-phase clusters. Examples of this process are provided for three case studies: (1) neptunyl incorporation into studtite and boltwoodite: for the layered boltwoodite, the incorporation energies are smaller (more favorable) for reactions using environmentally relevant source and sink phases (i.e., ΔErxn(oxides) > ΔErxn(silicates) > ΔErxn(aqueous)). Estimates of the solid-solution behavior of Np5+/P5+- and U6+/Si4+-boltwoodite and Np5+/Ca2+- and U6+/K+-boltwoodite solid solutions are used to predict the limit of Np-incorporation into boltwoodite (172 and 768 ppm at 300 °C, respectively); (2) uranyl and neptunyl incorporation into carbonates and sulfates: for both carbonates and sulfates, it was found that actinyl incorporation into a defect site is more favorable than incorporation into defect-free periodic

  16. Statistical methods in spatial genetics

    DEFF Research Database (Denmark)

    Guillot, Gilles; Leblois, Raphael; Coulon, Aurelie

    2009-01-01

    The joint analysis of spatial and genetic data is rapidly becoming the norm in population genetics. More and more studies explicitly describe and quantify the spatial organization of genetic variation and try to relate it to underlying ecological processes. As it has become increasingly difficult...... to keep abreast with the latest methodological developments, we review the statistical toolbox available to analyse population genetic data in a spatially explicit framework. We mostly focus on statistical concepts but also discuss practical aspects of the analytical methods, highlighting not only...

  17. Standard CMMIsm Appraisal Method for Process Improvement (SCAMPIsm), Version 1.1: Method Definition Document

    National Research Council Canada - National Science Library

    2001-01-01

    The Standard CMMI Appraisal Method for Process Improvement (SCAMPI℠) is designed to provide benchmark-quality ratings relative to Capability Maturity Model® Integration (CMMI℠) models...

  18. Quantifying mast cells in bladder pain syndrome by immunohistochemical analysis

    DEFF Research Database (Denmark)

    Larsen, M.S.; Mortensen, S.; Nordling, J.

    2008-01-01

    OBJECTIVES To evaluate a simple method for counting mast cells, thought to have a role in the pathophysiology of bladder pain syndrome (BPS, formerly interstitial cystitis, a syndrome of pelvic pain perceived to be related to the urinary bladder and accompanied by other urinary symptoms, e.g.... frequency and nocturia), as > 28 mast cells/mm² is defined as mastocytosis and correlated with clinical outcome. PATIENTS AND METHODS The current enzymatic staining method (naphtolesterase) on 10 μm sections for quantifying mast cells is complicated. In the present study, 61 patients had detrusor... sections between, respectively. Mast cells were counted according to a well-defined procedure. RESULTS The old and the new methods, on 10 and 3 μm sections, showed a good correlation between mast cell counts. When using tryptase staining and 3 μm sections, the mast cell number correlated well...

  19. A digital processing method for the analysis of complex nuclear spectra

    International Nuclear Information System (INIS)

    Madan, V.K.; Abani, M.C.; Bairi, B.R.

    1994-01-01

    This paper describes a digital processing method using frequency power spectra for the analysis of complex nuclear spectra. The power spectra were estimated by employing a modified discrete Fourier transform. The method was applied to observed spectral envelopes. The results for separating closely-spaced doublets in nuclear spectra of low statistical precision compared favorably with those obtained by using the popular peak-fitting program SAMPO. The paper also describes limitations of the peak-fitting methods. It describes the advantages of digital processing techniques for type II digital signals, including nuclear spectra. A compact computer program occupying less than 2.5 kByte of memory space was written in BASIC for the processing of observed spectral envelopes. (orig.)
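
    The frequency-power-spectrum idea can be pictured in a few lines of NumPy: compute the power spectrum of an observed spectral envelope via the DFT. The synthetic envelope below (two overlapping Gaussian peaks on a flat background) is only an illustration; the paper's modified discrete Fourier transform is not reproduced.

```python
import numpy as np

# Synthetic spectral envelope: two closely spaced Gaussian peaks on a flat background,
# standing in for a low-statistics doublet region of a nuclear spectrum.
rng = np.random.default_rng(5)
ch = np.arange(256)
envelope = (50
            + 400 * np.exp(-((ch - 120) ** 2) / (2 * 4.0 ** 2))
            + 300 * np.exp(-((ch - 132) ** 2) / (2 * 4.0 ** 2)))
counts = rng.poisson(envelope)

# Frequency power spectrum of the (mean-subtracted) envelope via the DFT.
spectrum = np.fft.rfft(counts - counts.mean())
power = np.abs(spectrum) ** 2
freqs = np.fft.rfftfreq(counts.size, d=1.0)   # cycles per channel

for f, p in list(zip(freqs, power))[:8]:
    print(f"f = {f:.4f} cyc/ch   power = {p:.3e}")
```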

  20. Validity and reliability of the session-RPE method for quantifying training in Australian football: a comparison of the CR10 and CR100 scales.

    Science.gov (United States)

    Scott, Tannath J; Black, Cameron R; Quinn, John; Coutts, Aaron J

    2013-01-01

    The purpose of this study was to examine and compare the criterion validity and test-retest reliability of the CR10 and CR100 rating of perceived exertion (RPE) scales for team sport athletes that undertake high-intensity, intermittent exercise. Twenty-one male Australian football (AF) players (age: 19.0 ± 1.8 years, body mass: 83.92 ± 7.88 kg) participated in the first part (part A) of this study, which examined the construct validity of the session-RPE (sRPE) method for quantifying training load in AF. Ten male athletes (age: 16.1 ± 0.5 years) participated in the second part of the study (part B), which compared the test-retest reliability of the CR10 and CR100 RPE scales. In part A, the validity of the sRPE method was assessed by examining the relationships between sRPE and objective measures of internal (i.e., heart rate) and external training load (i.e., distance traveled), collected from AF training sessions. Part B of the study assessed the reliability of sRPE by examining the test-retest reliability of sRPE during 3 different intensities of controlled intermittent running (10, 11.5, and 13 km·h⁻¹). Results from part A demonstrated strong correlations for CR10- and CR100-derived sRPE with measures of internal training load (Banister's TRIMP and Edwards' TRIMP) (CR10: r = 0.83 and 0.83; CR100: r = 0.80 and 0.81). Correlations with measures of external training load (distance, higher speed running and player load) for both the CR10 (r = 0.81, 0.71, and 0.83) and CR100 (r = 0.78, 0.69, and 0.80) were also significant. Poor test-retest reliability was found for both the CR10 (31.9% CV) and CR100 (38.6% CV) RPE scales after short bouts of intermittent running. Collectively, these results suggest both CR10- and CR100-derived sRPE methods have good construct validity for assessing training load in AF. The poor levels of reliability revealed under field testing indicate that the sRPE method may not be sensitive to detecting small changes in exercise intensity during brief intermittent running bouts. Despite this limitation
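
    For context, the sRPE load itself is computed as the session RPE multiplied by the session duration in minutes; the short sketch below applies that calculation to an invented week of sessions and correlates it with an equally invented heart-rate-based load, mirroring the kind of comparison reported above.

```python
import numpy as np

# Hypothetical training week: (duration in minutes, CR10 session RPE, HR-based TRIMP).
sessions = [
    (60, 5.0, 140.0),
    (45, 7.0, 155.0),
    (90, 4.0, 160.0),
    (30, 8.5, 120.0),
    (75, 6.0, 190.0),
]
durations, rpes, trimps = map(np.array, zip(*sessions))

# Session-RPE training load = RPE x duration (arbitrary units).
srpe_load = rpes * durations
print("sRPE loads:", srpe_load)

# Construct-validity style check: correlation between sRPE load and a HR-based load.
r = np.corrcoef(srpe_load, trimps)[0, 1]
print(f"r(sRPE, TRIMP) = {r:.2f}")
```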