WorldWideScience

Sample records for iterative-aggregation disaggregation technique

  1. Aggregating and Disaggregating Flexibility Objects

    DEFF Research Database (Denmark)

    Siksnys, Laurynas; Valsomatzis, Emmanouil; Hose, Katja

    2015-01-01

    In many scientific and commercial domains we encounter flexibility objects, i.e., objects with explicit flexibilities in a time and an amount dimension (e.g., energy or product amount). Applications of flexibility objects require novel and efficient techniques capable of handling large amounts...... and aiming at energy balancing during aggregation. In more detail, this paper considers the complete life cycle of flex-objects: aggregation, disaggregation, associated requirements, efficient incremental computation, and balance aggregation techniques. Extensive experiments based on real-world data from...

  2. Effect of natural antioxidants on the aggregation and disaggregation ...

    African Journals Online (AJOL)

    Conclusion: High antioxidant activities were positively correlated with the inhibition of Aβ aggregation, although not with the disaggregation of pre-formed Aβ aggregates. Nevertheless, potent antioxidants may be helpful in treating Alzheimer's disease. Keywords: Alzheimer's disease, β-Amyloid, Aggregation, Disaggregation ...

  3. Cellular Handling of Protein Aggregates by Disaggregation Machines.

    Science.gov (United States)

    Mogk, Axel; Bukau, Bernd; Kampinga, Harm H

    2018-01-18

    Both acute proteotoxic stresses that unfold proteins and expression of disease-causing mutant proteins that expose aggregation-prone regions can promote protein aggregation. Protein aggregates can interfere with cellular processes and deplete factors crucial for protein homeostasis. To cope with these challenges, cells are equipped with diverse folding and degradation activities to rescue or eliminate aggregated proteins. Here, we review the different chaperone disaggregation machines and their mechanisms of action. In all these machines, the coating of protein aggregates by Hsp70 chaperones represents the conserved, initializing step. In bacteria, fungi, and plants, Hsp70 recruits and activates Hsp100 disaggregases to extract aggregated proteins. In the cytosol of metazoa, Hsp70 is empowered by a specific cast of J-protein and Hsp110 co-chaperones allowing for standalone disaggregation activity. Both types of disaggregation machines are supported by small Hsps that sequester misfolded proteins. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. An Iterative Load Disaggregation Approach Based on Appliance Consumption Pattern

    Directory of Open Access Journals (Sweden)

    Huijuan Wang

    2018-04-01

    Non-intrusive load monitoring (NILM), which monitors single-appliance consumption levels by decomposing the aggregated energy consumption, is a novel and economical technology that benefits energy utilities and the development of energy demand management strategies. The hardware costs of high-frequency sampling and the computational complexity of the algorithms have hampered large-scale application of NILM, while low-sampling-rate data perform poorly in event detection when multiple appliances are turned on simultaneously. In this paper, we contribute an iterative load disaggregation approach based on appliance consumption patterns (ILDACP). Our approach combines the fuzzy C-means clustering algorithm, which provides an initial appliance operating status, with sub-sequence-searching dynamic time warping, which retrieves individual energy consumption based on the typical power consumption pattern. Results show that the proposed approach accurately disaggregates power consumption and is suitable for situations where different appliances operate simultaneously. The approach also has lower computational complexity than the hidden Markov model method and is easy to implement in a household without installing special equipment.
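
    The record above combines fuzzy C-means clustering with sub-sequence-searching dynamic time warping (DTW) to pick individual appliance patterns out of the aggregate load. Below is a minimal Python sketch of the sub-sequence DTW matching step only; the load values, the kettle-like pattern and the helper names are illustrative assumptions, not data or code from the paper.

    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic-time-warping distance between two 1-D sequences."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    def best_subsequence_match(aggregate, pattern):
        """Slide the appliance pattern over the aggregate load and return the
        start index and DTW distance of the best-matching window."""
        w = len(pattern)
        scores = [dtw_distance(aggregate[s:s + w], pattern)
                  for s in range(len(aggregate) - w + 1)]
        return int(np.argmin(scores)), min(scores)

    # Illustrative data: a flat base load with a kettle-like pulse embedded in it.
    aggregate = np.r_[np.full(20, 150.0), np.full(5, 2150.0), np.full(20, 150.0)]
    kettle_pattern = np.full(5, 2000.0)          # typical consumption pattern (W)
    # Subtract the base load before matching, then search for the pattern.
    start, score = best_subsequence_match(aggregate - 150.0, kettle_pattern)
    print(f"best match starts at sample {start}, DTW distance {score:.1f}")

    In the full ILDACP approach, the cluster centres produced by fuzzy C-means would supply the candidate operating states that such patterns are matched against.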

  5. Carbon emissions, energy consumption and economic growth: An aggregate and disaggregate analysis of the Indian economy

    International Nuclear Information System (INIS)

    Ahmad, Ashfaq; Zhao, Yuhuan; Shahbaz, Muhammad; Bano, Sadia; Zhang, Zhonghua; Wang, Song; Liu, Ya

    2016-01-01

    This study investigates the long and short run relationships among carbon emissions, energy consumption and economic growth in India at the aggregated and disaggregated levels during 1971–2014. The autoregressive distributed lag model is employed for the cointegration analyses and the vector error correction model is applied to determine the direction of causality between variables. Results show that a long run cointegration relationship exists and that the environmental Kuznets curve is validated at the aggregated and disaggregated levels. Furthermore, energy (total energy, gas, oil, electricity and coal) consumption has a positive relationship with carbon emissions and a feedback effect exists between economic growth and carbon emissions. Thus, energy-efficient technologies should be used in domestic production to mitigate carbon emissions at the aggregated and disaggregated levels. The present study provides policy makers with new directions in drafting comprehensive policies with lasting impacts on the economy, energy consumption and environment towards sustainable development. - Highlights: •Relationships among carbon emissions, energy consumption and economic growth are investigated. •The EKC exists at aggregated and disaggregated levels for India. •All energy resources have positive effects on carbon emissions. •Gas energy consumption is less polluting than other energy sources in India.
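
    The study above relies on an autoregressive distributed lag (ARDL) model for its cointegration analysis. As a rough illustration of how such a model can be estimated, the hedged sketch below fits an ARDL on synthetic log-level series; it assumes a recent statsmodels release (0.13 or later) that exposes statsmodels.tsa.ardl.ARDL, and the variable names and data are purely illustrative.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.ardl import ARDL  # assumes statsmodels >= 0.13

    rng = np.random.default_rng(0)
    n = 44  # roughly the 1971-2014 annual sample length in the record above
    gdp = np.cumsum(rng.normal(0.03, 0.02, n))           # log real GDP (synthetic)
    energy = 0.8 * gdp + rng.normal(0, 0.02, n)          # log energy use (synthetic)
    co2 = 0.6 * gdp + 0.3 * energy + rng.normal(0, 0.02, n)
    data = pd.DataFrame({"co2": co2, "gdp": gdp, "energy": energy})

    # ARDL model: co2 explained by its own lags and lags of gdp and energy.
    model = ARDL(data["co2"], lags=2, exog=data[["gdp", "energy"]], order=2, trend="c")
    res = model.fit()
    print(res.summary())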

  6. Long-run relationship between sectoral productivity and energy consumption in Malaysia: An aggregated and disaggregated viewpoint

    International Nuclear Information System (INIS)

    Rahman, Md Saifur; Junsheng, Ha; Shahari, Farihana; Aslam, Mohamed; Masud, Muhammad Mehedi; Banna, Hasanul; Liya, Ma

    2015-01-01

    This paper investigates the causal relationship between energy consumption and economic productivity in Malaysia at both aggregated and disaggregated levels. The investigation utilises total and sectoral (industrial and manufacturing) productivity growth during the 1971–2012 period, using the modified Granger causality test proposed by Toda and Yamamoto [1] within a multivariate framework. The economy of Malaysia was found to be energy dependent at aggregated and disaggregated levels of national and sectoral economic growth. However, at the disaggregated level, inefficient energy use is particularly identified with electricity and coal consumption patterns, which Granger-cause negative effects on GDP (Gross Domestic Product) and manufacturing growth. These findings suggest that policies should focus more on improving energy efficiency and energy saving. Furthermore, since emissions are found to have a close relationship with economic output at national and sectoral levels, green technologies are of the highest necessity. - Highlights: • At the aggregate level, energy consumption significantly influences GDP (Gross Domestic Product). • At the disaggregate level, electricity & coal consumption does not help output growth. • Mineral and waste are found to positively Granger cause GDP. • The results reveal strong interactions between emissions and economic growth
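
    The causality analysis above uses the Toda-Yamamoto modification of the Granger test, in which the model is estimated in levels with d_max extra lags and only the first p lags of the candidate cause are restricted. The sketch below illustrates that idea in a simplified single-equation, bivariate form with statsmodels OLS; the series, lag orders and function name are assumptions for illustration, not the paper's multivariate specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def toda_yamamoto_test(y, x, p=2, d_max=1):
        """Single-equation sketch of the Toda-Yamamoto causality test: regress y on
        p + d_max lags of both series in levels and F-test only the first p lags of
        x (the candidate cause); the extra d_max lag stays unrestricted."""
        df = pd.DataFrame({"y": y, "x": x})
        lags = p + d_max
        for k in range(1, lags + 1):
            df[f"y_l{k}"] = df["y"].shift(k)
            df[f"x_l{k}"] = df["x"].shift(k)
        df = df.dropna()
        X = sm.add_constant(df.drop(columns=["y", "x"]))
        res = sm.OLS(df["y"], X).fit()
        hypothesis = ", ".join(f"x_l{k} = 0" for k in range(1, p + 1))
        return res.f_test(hypothesis)

    # Synthetic series in which energy "causes" gdp with a one-period lag.
    rng = np.random.default_rng(1)
    energy = np.cumsum(rng.normal(size=200))
    gdp = 0.5 * np.roll(energy, 1) + np.cumsum(rng.normal(size=200))
    print(toda_yamamoto_test(gdp, energy, p=2, d_max=1))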

  7. Energy consumption and economic growth: Evidence from China at both aggregated and disaggregated levels

    International Nuclear Information System (INIS)

    Yuan Jiahai; Kang Jiangang; Zhao Changhong; Hu Zhaoguang

    2008-01-01

    Using a neo-classical aggregate production model where capital, labor and energy are treated as separate inputs, this paper tests for the existence and direction of causality between output growth and energy use in China at both the aggregated total energy level and the disaggregated levels of coal, oil and electricity consumption. Using the Johansen cointegration technique, the empirical findings indicate that long-run cointegration exists among output, labor, capital and energy use in China at the aggregated level and at all three disaggregated levels. Then, using a VEC specification, the short-run dynamics of the variables of interest are tested, indicating that Granger causality runs from electricity and oil consumption to GDP, but not from coal and total energy consumption to GDP. On the other hand, short-run Granger causality exists from GDP to total energy, coal and oil consumption, but not from GDP to electricity consumption. We thus propose policy suggestions to resolve China's energy and sustainable development dilemma: enhancing energy supply security and guaranteeing energy supply, especially, in the short run, providing adequate electric power supply and setting up a national strategic oil reserve; enhancing energy efficiency to save energy; diversifying energy sources, vigorously exploiting renewable energy and drawing up corresponding policies and measures; and finally, in the long run, transforming the development pattern and cutting reliance on resource- and energy-dependent industries
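
    The record above applies the Johansen cointegration test and then a VEC model to read off short-run Granger causality. A minimal sketch of that two-step workflow with statsmodels (coint_johansen followed by VECM) on synthetic trending series is given below; the data and the assumed cointegration rank of 1 are illustrative only.

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

    rng = np.random.default_rng(2)
    n = 200
    trend = np.cumsum(rng.normal(size=n))                 # shared stochastic trend
    data = np.column_stack([
        trend + rng.normal(scale=0.5, size=n),            # output (GDP)
        0.8 * trend + rng.normal(scale=0.5, size=n),      # capital
        0.5 * trend + rng.normal(scale=0.5, size=n),      # labor
        0.6 * trend + rng.normal(scale=0.5, size=n),      # electricity use
    ])

    # Step 1: Johansen trace test for the cointegration rank.
    joh = coint_johansen(data, det_order=0, k_ar_diff=1)
    print("trace statistics:   ", joh.lr1)
    print("5% critical values: ", joh.cvt[:, 1])

    # Step 2: fit a VECM with the chosen rank (rank 1 assumed here) and inspect the
    # short-run dynamics underlying the Granger-causality statements above.
    res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
    print(res.summary())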

  8. The influence of energy consumption of China on its real GDP from aggregated and disaggregated viewpoints

    International Nuclear Information System (INIS)

    Zhang, Wei; Yang, Shuyun

    2013-01-01

    This paper investigated the causal relationship between energy consumption and gross domestic product (GDP) in China at both aggregated and disaggregated levels during the period 1978–2009, using a modified version of the Granger (1969) causality test proposed by Toda and Yamamoto (1995) within a multivariate framework. The empirical results suggested the existence of a negative bi-directional Granger causality between aggregated energy consumption and real GDP. At the disaggregated level of energy consumption, the results were more complicated. For coal, the empirical findings suggested a negative bi-directional Granger causality between coal consumption and real GDP. However, for oil and gas, the empirical findings suggested a positive bi-directional Granger causality between oil as well as gas consumption and real GDP. Though these results supported the feedback hypothesis, the negative relationship might be attributed to production in the growing economy shifting towards less energy-intensive sectors and to excessive energy consumption in relatively unproductive sectors. The results indicated that policies that reduce aggregated energy consumption and promote energy conservation may boost China's economic growth. - Highlights: ► A negative bi-directional Granger causality exists between energy consumption and real GDP. ► The same holds for coal consumption, but not for oil and gas. ► The results partly derive from excessive energy consumption in unproductive sectors. ► Reducing aggregated energy consumption probably promotes the development of China's economy

  9. Energy consumption, carbon emissions and economic growth in Saudi Arabia: An aggregate and disaggregate analysis

    International Nuclear Information System (INIS)

    Alkhathlan, Khalid; Javid, Muhammad

    2013-01-01

    The objective of this study is to examine the relationship among economic growth, carbon emissions and energy consumption at the aggregate and disaggregate levels. For the aggregate energy consumption model, we use total energy consumption per capita and CO₂ emissions per capita based on the total energy consumption. For the disaggregate analysis, we used oil, gas and electricity consumption models along with their respective CO₂ emissions. The long-term income elasticities of carbon emissions in three of the four models are positive and higher than their estimated short-term income elasticities. These results suggest that carbon emissions increase with the increase in per capita income which supports the belief that there is a monotonically increasing relationship between per capita carbon emissions and per capita income for the aggregate model and for the oil and electricity consumption models. The long- and short-term income elasticities of carbon emissions are negative for the gas consumption model. This result indicates that if the Saudi Arabian economy switched from oil to gas consumption, then an increase in per capita income would reduce carbon emissions. The results also suggest that electricity is less polluting than other sources of energy. - Highlights: • Carbon emissions increase with the increase in per capita income in Saudi Arabia. • The income elasticity of CO₂ is negative for the gas consumption model. • The income elasticity of CO₂ is positive for the oil consumption model. • The results suggest that electricity is less polluting than oil and gas

  10. Specific effect of the linear charge density of the acid polysaccharide on thermal aggregation/disaggregation processes in complex carrageenan/lysozyme systems

    NARCIS (Netherlands)

    Antonov, Y.; Zhuravleva, I.; Cardinaels, R.M.; Moldenaers, P.

    2017-01-01

    We study thermal aggregation and disaggregation processes in complex carrageenan/lysozyme systems with a different linear charge density of the sulphated polysaccharide. To this end, we determine the temperature dependency of the turbidity and the intensity size distribution functions in complex

  11. Photoinduced disaggregation of TiO₂ nanoparticles enables transdermal penetration.

    Directory of Open Access Journals (Sweden)

    Samuel W Bennett

    Under many aqueous conditions, metal oxide nanoparticles attract other nanoparticles and grow into fractal aggregates as the result of a balance between electrostatic and van der Waals interactions. Although particle coagulation has been studied for over a century, the effect of light on the state of aggregation is not well understood. Since nanoparticle mobility and toxicity have been shown to be a function of aggregate size, and generally increase as size decreases, photo-induced disaggregation may have significant effects. We show that ambient light and other light sources can partially disaggregate nanoparticles from the aggregates and increase the dermal transport of nanoparticles, such that small nanoparticle clusters can readily diffuse into and through the dermal profile, likely via the interstitial spaces. The discovery of photoinduced disaggregation presents a new phenomenon that has not been previously reported or considered in coagulation theory or transdermal toxicological paradigms. Our results show that after just a few minutes of light, the hydrodynamic diameter of TiO₂ aggregates is reduced from ∼280 nm to ∼230 nm. We exposed pigskin to the nanoparticle suspension and found 200 mg kg⁻¹ of TiO₂ for skin that was exposed to nanoparticles in the presence of natural sunlight and only 75 mg kg⁻¹ for skin exposed to dark conditions, indicating the influence of light on NP penetration. These results suggest that photoinduced disaggregation may have important health implications.

  12. Erosion of atmospherically deposited radionuclides as affected by soil disaggregation mechanisms

    International Nuclear Information System (INIS)

    Claval, D.; Garcia-Sanchez, L.; Real, J.; Rouxel, R.; Mauger, S.; Sellier, L.

    2004-01-01

    The interactions of soil disaggregation with radionuclide erosion were studied under controlled conditions in the laboratory on samples from a loamy silty-sandy soil. The fate of ¹³⁴Cs and ⁸⁵Sr was monitored on soil aggregates and on small plots, with time resolution ranging from minutes to hours after contamination. Analytical experiments reproducing disaggregation mechanisms on aggregates showed that disaggregation controls both erosion and sorption. Compared to differential swelling, air explosion mobilized the most, producing finer particles and increasing sorption five-fold. For all the mechanisms studied, a significant part of the contamination was still unsorbed on the aggregates after an hour. Global experiments on contaminated sloping plots subjected to artificial rainfall showed radionuclide erosion fluctuations and their origin. Wet radionuclide deposition increased short-term erosion by 50% compared to dry deposition. A developed soil crust, when contaminated, decreased radionuclide erosion by a factor of 2 compared to other initial soil states. These erosion fluctuations were more significant for ¹³⁴Cs than for ⁸⁵Sr, which is known to have a better affinity for the soil matrix. These findings confirm the role of disaggregation in radionuclide erosion. Our data support a conceptual model of radionuclide erosion at the small plot scale in two steps: (1) radionuclide non-equilibrium sorption on mobile particles, resulting from simultaneous sorption and disaggregation during wet deposition and (2) later radionuclide transport by runoff with suspended matter

  13. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature...... disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified...... are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is generally imperfect, uncertainty arises in disaggregation. This paper therefore proposes a probabilistic...
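
    The model above disaggregates an aggregated exposure count probabilistically, using an auxiliary indicator and a scaled Dirichlet distribution. The sketch below illustrates the idea with a standard Dirichlet as a simplified stand-in for the scaled Dirichlet used in the paper; the building counts and land-cover shares are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(3)

    # Aggregated exposure: total number of residential buildings in a municipality.
    total_buildings = 1200

    # Auxiliary indicator per hazard-resolution cell (e.g. residential land-cover share).
    land_cover_share = np.array([0.05, 0.30, 0.15, 0.40, 0.10])

    # Deterministic disaggregation simply allocates proportionally to the indicator.
    deterministic = total_buildings * land_cover_share

    # Probabilistic disaggregation: draw allocation weights around the indicator from a
    # Dirichlet distribution (a standard Dirichlet here, as a simplified stand-in for the
    # scaled Dirichlet of the record), then distribute the buildings multinomially.
    concentration = 50.0  # larger -> disaggregation concentrates closer to the indicator
    samples = []
    for _ in range(1000):
        weights = rng.dirichlet(concentration * land_cover_share)
        samples.append(rng.multinomial(total_buildings, weights))
    samples = np.array(samples)

    print("deterministic allocation:", deterministic)
    print("probabilistic mean:      ", samples.mean(axis=0))
    print("probabilistic std:       ", samples.std(axis=0))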

  14. Amyloid formation and disaggregation of α-synuclein and its tandem repeat (α-TR)

    International Nuclear Information System (INIS)

    Bae, Song Yi; Kim, Seulgi; Hwang, Heejin; Kim, Hyun-Kyung; Yoon, Hyun C.; Kim, Jae Ho; Lee, SangYoon; Kim, T. Doohun

    2010-01-01

    Research highlights: → Formation of α-synuclein amyloid fibrils by [BIMbF₃Im]. → Disaggregation of amyloid fibrils by epigallocatechin gallate (EGCG) and baicalein. → Amyloid formation of the α-synuclein tandem repeat (α-TR). -- Abstract: The aggregation of α-synuclein is clearly related to the pathogenesis of Parkinson's disease. Therefore, a detailed understanding of the mechanism of fibril formation is highly valuable for the development of clinical treatments and diagnostic tools. Here, we have investigated the interaction of α-synuclein with ionic liquids using several biochemical techniques, including Thioflavin T assays and transmission electron microscopy (TEM). Our data show that rapid formation of α-synuclein amyloid fibrils was stimulated by 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [BIMbF₃Im], and that these fibrils could be disaggregated by polyphenols such as epigallocatechin gallate (EGCG) and baicalein. Furthermore, the effect of [BIMbF₃Im] on the α-synuclein tandem repeat (α-TR) in the aggregation process was studied.

  15. New Insight into the Finance-Energy Nexus: Disaggregated Evidence from Turkish Sectors

    Directory of Open Access Journals (Sweden)

    Mert Topcu

    2017-01-01

    As the reshaped energy economics literature has adopted new variables in the energy demand function, the number of papers looking into the relationship between financial development and energy consumption at the aggregate level has been increasing over the last few years. This paper, however, proposes a new framework using disaggregated data and investigates the nexus between financial development and sectoral energy consumption in Turkey. To this end, panel time series regression and causality techniques are adopted over the period 1989–2011. Empirical results confirm that financial development does have a significant impact on energy consumption, even with disaggregated data. It is also shown that the impact of financial development is larger in energy-intensive industries than in less energy-intensive ones.

  16. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    OpenAIRE

    Custer, Rocco; Nishijima, Kazuyoshi

    2012-01-01

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is ...

  17. Analysis of aggregation and disaggregation effects for grid-based hydrological models and the development of improved precipitation disaggregation procedures for GCMs

    Directory of Open Access Journals (Sweden)

    H. S. Wheater

    1999-01-01

    Appropriate representation of hydrological processes within atmospheric General Circulation Models (GCMs) is important with respect to internal model dynamics (e.g. surface feedback effects on atmospheric fluxes and continental runoff production) and to the simulation of terrestrial impacts of climate change. However, at the scale of a GCM grid-square, several methodological problems arise. Spatial disaggregation of grid-square average climatological parameters is required, in particular to produce appropriate point intensities from average precipitation. Conversely, aggregation of land surface heterogeneity is necessary for grid-scale or catchment-scale application. The performance of grid-based hydrological models is evaluated for two large (10⁴ km²) UK catchments. Simple schemes, using sub-grid averages of individual land use at the 40 km scale and with no calibration, perform well at the annual time-scale and, with the addition of a (calibrated) routing component, at the daily and monthly time-scales. Decoupling of hillslope and channel routing does not necessarily improve performance or identifiability. Scale dependence is investigated through the application of distribution functions for rainfall and soil moisture at the 100 km scale. The results depend on climate, but show interdependence of the representation of sub-grid rainfall and soil moisture distributions. Rainfall distribution is analysed directly using radar rainfall data from the UK and the Arkansas Red River, USA. Among other properties, the scale dependence of spatial coverage upon radar pixel resolution and GCM grid-scale, as well as the serial correlation of coverages, are investigated. This leads to a revised methodology for GCM application, as a simple extension of current procedures. A new location-based approach using an image processing technique is then presented, to allow for the preservation of the spatial memory of the process.

  18. Command Disaggregation Attack and Mitigation in Industrial Internet of Things

    Directory of Open Access Journals (Sweden)

    Peng Xun

    2017-10-01

    A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators such as programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to the wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed the components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate different levels of impact from various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.

  19. Command Disaggregation Attack and Mitigation in Industrial Internet of Things.

    Science.gov (United States)

    Xun, Peng; Zhu, Pei-Dong; Hu, Yi-Fan; Cui, Peng-Shuai; Zhang, Yan

    2017-10-21

    A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators such as programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to the wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed the components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate different levels of impact from various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.

  20. Equity in health care financing in Palestine: the value-added of the disaggregate approach.

    Science.gov (United States)

    Abu-Zaineh, Mohammad; Mataria, Awad; Luchini, Stéphane; Moatti, Jean-Paul

    2008-06-01

    This paper analyzes the redistributive effect and progressivity associated with the current health care financing schemes in the Occupied Palestinian Territory, using data from the first Palestinian Household Health Expenditure Survey conducted in 2004. The paper goes beyond the commonly used "aggregate summary index approach" to apply a more detailed "disaggregate approach". Such an approach is borrowed from the general economic literature on taxation, and examines redistributive and vertical effects over specific parts of the income distribution, using the dominance criterion. In addition, the paper employs a bootstrap method to test for the statistical significance of the inequality measures. While both the aggregate and disaggregate approaches confirm the pro-rich and regressive character of out-of-pocket payments, the aggregate approach does not ascertain the potential progressive feature of any of the available insurance schemes. The disaggregate approach, however, significantly reveals a progressive aspect, for over half of the population, of the government health insurance scheme, and demonstrates that the regressivity of the out-of-pocket payments is most pronounced among the worst-off classes of the population. Recommendations are advanced to improve the performance of the government insurance scheme and enhance its capacity to limit inequalities in health care financing in the Occupied Palestinian Territory.
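
    The "aggregate summary index approach" that the paper contrasts with its disaggregate approach typically boils down to concentration and Kakwani indices computed over the whole income distribution. A hedged sketch of that aggregate index computation is given below; the household data are synthetic, and the formula shown is the standard covariance form of the concentration index, not code from the study.

    import numpy as np

    def concentration_index(payment, income):
        """Concentration index of `payment` when households are ranked by `income`
        (reduces to the Gini coefficient when payment == income)."""
        order = np.argsort(income)
        y = np.asarray(payment, dtype=float)[order]
        n = len(y)
        rank = (np.arange(1, n + 1) - 0.5) / n          # fractional rank
        return 2.0 * np.cov(y, rank, bias=True)[0, 1] / y.mean()

    # Illustrative household data (not from the survey in the record above).
    rng = np.random.default_rng(4)
    income = rng.lognormal(mean=8.0, sigma=0.6, size=500)
    out_of_pocket = 120 + 0.01 * income * rng.uniform(0.5, 1.5, size=500)

    gini_income = concentration_index(income, income)
    ci_payment = concentration_index(out_of_pocket, income)
    kakwani = ci_payment - gini_income  # < 0 indicates a regressive payment source
    print(f"Gini(income)={gini_income:.3f}  CI(payments)={ci_payment:.3f}  Kakwani={kakwani:.3f}")

    The disaggregate approach of the record would instead compare such curves over specific parts of the income distribution using dominance tests, rather than a single summary number.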

  1. Statistical Models for Disaggregation and Reaggregation of Natural Gas Consumption Data

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Konár, Ondřej; Malý, Marek; Kasanický, Ivan; Pelikán, Emil

    2015-01-01

    Vol. 42, No. 5 (2015), pp. 921-937. ISSN 0266-4763. Institutional support: RVO:67985807. Keywords: natural gas consumption * semiparametric model * standardized load profiles * aggregation * disaggregation * 62P30. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.419, year: 2015
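
    Standardized load profiles (SLPs) are typically used to split a measured network total across customer classes and to reaggregate it as a consistency check. The sketch below shows a purely proportional SLP allocation as a simplified stand-in for the semiparametric model of the record above; all factors and customer counts are invented.

    # Aggregate natural-gas consumption measured for a whole network area (one day,
    # arbitrary units) and standardized load profile factors for two customer classes.
    aggregate_daily = 10_000.0
    slp_factor = {"household": 0.8, "small_business": 1.6}   # illustrative SLP factors
    customers = {"household": 4_000, "small_business": 500}

    # Disaggregate the measured total to the classes in proportion to
    # (number of customers x class SLP factor), then reaggregate as a check.
    raw = {c: customers[c] * slp_factor[c] for c in customers}
    total_weight = sum(raw.values())
    disaggregated = {c: aggregate_daily * raw[c] / total_weight for c in raw}
    reaggregated = sum(disaggregated.values())

    print(disaggregated)
    print("reaggregation error:", abs(reaggregated - aggregate_daily))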

  2. Evolution of an intricate J-protein network driving protein disaggregation in eukaryotes.

    Science.gov (United States)

    Nillegoda, Nadinath B; Stank, Antonia; Malinverni, Duccio; Alberts, Niels; Szlachcic, Anna; Barducci, Alessandro; De Los Rios, Paolo; Wade, Rebecca C; Bukau, Bernd

    2017-05-15

    Hsp70 participates in a broad spectrum of protein folding processes extending from nascent chain folding to protein disaggregation. This versatility in function is achieved through a diverse family of J-protein cochaperones that select substrates for Hsp70. Substrate selection is further tuned by transient complexation between different classes of J-proteins, which expands the range of protein aggregates targeted by metazoan Hsp70 for disaggregation. We assessed the prevalence and evolutionary conservation of J-protein complexation and cooperation in disaggregation. We find the emergence of a eukaryote-specific signature for interclass complexation of canonical J-proteins. Consistently, complexes exist in yeast and human cells, but not in bacteria, and correlate with cooperative action in disaggregation in vitro. Signature alterations exclude some J-proteins from networking, which ensures correct J-protein pairing, functional network integrity and J-protein specialization. This fundamental change in J-protein biology during the prokaryote-to-eukaryote transition allows for increased fine-tuning and broadening of Hsp70 function in eukaryotes.

  3. Probabilistic disaggregation of a spatial portfolio of exposure for natural hazard risk assessment

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    2018-01-01

    In natural hazard risk assessment, situations are encountered where information on the portfolio of exposure is only available in a spatially aggregated form, hindering a precise risk assessment. Recourse might be found in the spatial disaggregation of the portfolio of exposure to the resolution...... of a portfolio of buildings in two communes in Switzerland and the results are compared to sample observations. The relevance of probabilistic disaggregation uncertainty in natural hazard risk assessment is illustrated with the example of a simple flood risk assessment....

  4. Disaggregated energy consumption and GDP in Taiwan: A threshold co-integration analysis

    International Nuclear Information System (INIS)

    Hu, J.-L.; Lin, C.-H.

    2008-01-01

    Energy consumption growth is much higher than economic growth for Taiwan in recent years, worsening its energy efficiency. This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test developed with asymmetric dynamic adjusting processes proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integrations between GDP and disaggregated energy consumptions are confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment process of energy consumption toward equilibrium is highly persistent when an appropriate threshold is reached. There is mean-reverting behavior when the threshold is reached, making aggregate and disaggregated energy consumptions grow faster than GDP in Taiwan

  5. An economic analysis of disaggregation of space assets: Application to GPS

    Science.gov (United States)

    Hastings, Daniel E.; La Tour, Paul A.

    2017-05-01

    New ideas, technologies and architectural concepts are emerging with the potential to reshape the space enterprise. One of those new architectural concepts is the idea that rather than aggregating payloads onto large very high performance buses, space architectures should be disaggregated with smaller numbers of payloads (as small as one) per bus and the space capabilities spread across a correspondingly larger number of systems. The primary rationale is increased survivability and resilience. The concept of disaggregation is examined from an acquisition cost perspective. A mixed system dynamics and trade space exploration model is developed to look at long-term trends in the space acquisition business. The model is used to examine the question of how different disaggregated GPS architectures compare in cost to the well-known current GPS architecture. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The assumptions that are allowed to vary are: design lives, production quantities, non-recurring engineering and time between generations. The model shows that there is always a premium in the first generation to be paid to disaggregate the GPS payloads. However, it is possible to construct survivable architectures where the premium after two generations is relatively low.
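
    The generation-over-generation cost comparison described above hinges on learning effects in production. A minimal sketch of a Wright-type learning-curve cost model comparing an aggregated with a disaggregated constellation is given below; the unit costs, quantities and learning rate are illustrative assumptions, not figures from the paper's GPS analysis.

    import math

    def production_cost(first_unit_cost, quantity, learning_rate=0.85):
        """Wright learning curve: unit n costs first_unit_cost * n**b with b = log2(learning_rate)."""
        b = math.log2(learning_rate)
        return sum(first_unit_cost * n ** b for n in range(1, quantity + 1))

    # Aggregated architecture: few large, expensive buses; disaggregated: many small ones.
    aggregated = production_cost(first_unit_cost=500.0, quantity=8)       # $M, 8 large buses
    disaggregated = production_cost(first_unit_cost=120.0, quantity=32)   # $M, 32 small buses

    print(f"aggregated constellation:    {aggregated:8.1f} $M")
    print(f"disaggregated constellation: {disaggregated:8.1f} $M")

    Larger production runs move the disaggregated option further down the learning curve, which is one mechanism behind the paper's finding that the first-generation premium can shrink in later generations.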

  6. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require the use of long rainfall data at fine time scales varying from daily down to 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent, rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall over a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
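
    The core of the scheme above is an adjusting procedure that makes the fine-scale synthetic rainfall consistent with the observed coarse-scale total. The sketch below shows the simplest such adjustment, proportional rescaling, applied to a stand-in synthetic sequence; the Bartlett-Lewis generator itself and the HyetosMinute package are not reproduced here, and all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    # Daily observed rainfall depth (mm) to be disaggregated into 288 five-minute steps.
    daily_total = 23.4
    n_steps = 288

    # Lower-level sequence from some synthetic generator (a Bartlett-Lewis model in the
    # record above; a sparse random draw is used here purely as a stand-in).
    synthetic = rng.gamma(shape=0.2, scale=1.0, size=n_steps)
    synthetic[rng.random(n_steps) < 0.8] = 0.0          # most 5-min intervals are dry

    # Proportional adjusting procedure: rescale the fine-scale values so that they
    # aggregate exactly to the observed daily total.
    adjusted = synthetic * (daily_total / synthetic.sum())

    print("sum after adjustment:", adjusted.sum())       # equals daily_total up to rounding
    print("wet 5-min intervals: ", int((adjusted > 0).sum()))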

  7. Electromagnetic scattering using the iterative multi-region technique

    CERN Document Server

    Al Sharkawy, Mohamed H

    2007-01-01

    In this work, an iterative approach using the finite difference frequency domain method is presented to solve the problem of scattering from large-scale electromagnetic structures. The idea of the proposed iterative approach is to divide one computational domain into smaller subregions and solve each subregion separately. Then the subregion solutions are combined iteratively to obtain a solution for the complete domain. As a result, a considerable reduction in the computation time and memory is achieved. This procedure is referred to as the iterative multi-region (IMR) technique. Different enhan...

  8. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10 mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce that short-time-resolution information from rough rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10 mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10 mn rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated thanks to different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5 mm) were extracted from the 21-year chronological rainfall series (10 mn time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the

  9. AIR Tools - A MATLAB Package of Algebraic Iterative Reconstruction Techniques

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    This collection of MATLAB software contains implementations of several Algebraic Iterative Reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods...... are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
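
    As an illustration of the row-action methods collected in the package, the sketch below implements a basic Kaczmarz-type ART iteration in Python on a small random system; it is not a port of the AIR Tools code, and the relaxation parameter and test problem are illustrative assumptions.

    import numpy as np

    def art(A, b, iterations=10, relaxation=1.0, x0=None):
        """Kaczmarz-type Algebraic Reconstruction Technique (ART): sweep the rows of A
        and project the current iterate onto each row's hyperplane."""
        m, n = A.shape
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        row_norms = (A ** 2).sum(axis=1)
        for _ in range(iterations):
            for i in range(m):
                if row_norms[i] == 0:
                    continue
                residual = b[i] - A[i] @ x
                x += relaxation * residual / row_norms[i] * A[i]
        return x

    # Tiny illustrative "tomography" system (not one of the package's test problems).
    rng = np.random.default_rng(6)
    A = rng.random((40, 20))
    x_true = rng.random(20)
    b = A @ x_true
    x_rec = art(A, b, iterations=50, relaxation=0.8)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))

    For noisy data, such row-action methods are usually stopped early, which is the semi-convergence behaviour the record refers to.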

  10. Development of sampling techniques for ITER Type B radwaste

    International Nuclear Information System (INIS)

    Hong, Kwon Pyo; Kim, Sung Geun; Jung, Sang Hee; Oh, Wan Ho; Park, Myung Chul; Kim, Hee Moon; Ahn, Sang Bok

    2016-01-01

    There are several difficulties and limitations in sampling activities. As the Type B radwaste components are mostly metallic (mostly stainless steel) and bulky (∼1 m in size and ∼100 mm in thickness), it is difficult to take samples from the surface of Type B radwaste by remote operation. In addition, sampling should be performed without the use of any liquid coolant to avoid the spread of contamination, and all sampling procedures are carried out in the hot cell red zone by remote operation. Three kinds of sampling techniques are being developed: core sampling, chip sampling, and wedge sampling, which are the candidate sampling techniques to be applied in the ITER hot cell. The object materials for sampling are stainless steel or Cu alloy blocks, in order to simulate ITER Type B radwaste. The best sampling technique for ITER Type B radwaste among the three will be suggested in several months, after the related experiments are finished

  11. Development of sampling techniques for ITER Type B radwaste

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Kwon Pyo; Kim, Sung Geun; Jung, Sang Hee; Oh, Wan Ho; Park, Myung Chul; Kim, Hee Moon; Ahn, Sang Bok [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    There are several difficulties and limitations in sampling activities. As the Type B radwaste components are mostly metallic (mostly stainless steel) and bulky (∼1 m in size and ∼100 mm in thickness), it is difficult to take samples from the surface of Type B radwaste by remote operation. In addition, sampling should be performed without the use of any liquid coolant to avoid the spread of contamination, and all sampling procedures are carried out in the hot cell red zone by remote operation. Three kinds of sampling techniques are being developed: core sampling, chip sampling, and wedge sampling, which are the candidate sampling techniques to be applied in the ITER hot cell. The object materials for sampling are stainless steel or Cu alloy blocks, in order to simulate ITER Type B radwaste. The best sampling technique for ITER Type B radwaste among the three will be suggested in several months, after the related experiments are finished.

  12. Viral Aggregation: Impact on Virus Behavior in the Environment.

    Science.gov (United States)

    Gerba, Charles P; Betancourt, Walter Q

    2017-07-05

    Aggregates of viruses can have a significant impact on the quantification and behavior of viruses in the environment. Viral aggregates may be formed in numerous ways. Viruses may form crystal-like structures and aggregates in the host cell during replication, or aggregates may form due to changes in environmental conditions after virus particles are released from the host cells. Aggregates tend to form near the isoelectric point of the virus, under the influence of certain salts and salt concentrations in solution, cationic polymers, and suspended organic matter. The conditions under which aggregates form in the environment are highly dependent on the type of virus, the type of salts in solution (cation, anion; monovalent, divalent) and pH. However, virus type greatly influences the conditions under which aggregation/disaggregation will occur, making predictions difficult under any given set of water quality conditions. Most studies have shown that viral aggregates increase the survival of viruses in the environment and their resistance to disinfectants, especially the more reactive disinfectants. The presence of viral aggregates may also result in overestimation of removal by filtration processes. Virus aggregation-disaggregation is a complex process, and predicting the behavior of any individual virus under a given set of environmental circumstances is difficult without actual experimental data.

  13. Behaviour of humic-bentonite aggregates in diluted suspensions ...

    African Journals Online (AJOL)

    Formation and disaggregation of micron-size aggregates in a diluted suspension made up of HSs and bentonite (B) were studied by tracing the distribution of aggregate sizes and their counts in freshly prepared and aged suspensions, and at high (10 000) and low (1.0) [HS]/[B] ratios. Diluted HSB suspensions are unstable ...

  14. Core-size regulated aggregation/disaggregation of citrate-coated gold nanoparticles (5-50 nm) and dissolved organic matter: Extinction, emission, and scattering evidence

    Science.gov (United States)

    Esfahani, Milad Rabbani; Pallem, Vasanta L.; Stretz, Holly A.; Wells, Martha J. M.

    2018-01-01

    Knowledge of the interactions between gold nanoparticles (GNPs) and dissolved organic matter (DOM) is significant in the development of detection devices for environmental sensing, studies of environmental fate and transport, and advances in antifouling water treatment membranes. The specific objective of this research was to spectroscopically investigate the fundamental interactions between citrate-stabilized gold nanoparticles (CT-GNPs) and DOM. Studies indicated that 30 and 50 nm diameter GNPs promoted disaggregation of the DOM. This result, the disaggregation of an environmentally important polyelectrolyte, will be quite useful with regard to antifouling properties in water treatment and water-based sensing applications. Furthermore, resonance Rayleigh scattering results showed significant enhancement in the UV range, which can be useful for characterizing DOM and can be exploited as an analytical tool to better sense and improve our comprehension of nanomaterial interactions with environmental systems. CT-GNPs having core size diameters of 5, 10, 30, and 50 nm were studied in the absence and presence of added DOM at 2 and 8 ppm, at low ionic strength and near-neutral pH (6.0-6.5) approximating surface water conditions. Interactions were monitored by cross-interpretation among ultraviolet (UV)-visible extinction spectroscopy, excitation-emission matrix (EEM) spectroscopy (emission and Rayleigh scattering), and dynamic light scattering (DLS). This comprehensive combination of spectroscopic analyses lends new insights into the antifouling behavior of GNPs. The CT-GNP-5 and -10 controls emitted light and aggregated. In contrast, the CT-GNP-30 and CT-GNP-50 controls scattered light intensely, but did not aggregate and did not emit light. The presence of any CT-GNP did not affect the extinction spectra of DOM, and the presence of DOM did not affect the extinction spectra of the CT-GNPs. The emission spectra (visible range) differed only slightly between calculated and actual

  15. Is disaggregation the holy grail of energy efficiency? The case of electricity

    International Nuclear Information System (INIS)

    Carrie Armel, K.; Gupta, Abhay; Shrimali, Gireesh; Albert, Adrian

    2013-01-01

    This paper aims to address two timely energy problems. First, significant low-cost energy reductions can be made in the residential and commercial sectors, but these savings have not been achievable to date. Second, billions of dollars are being spent to install smart meters, yet without careful consideration of the human element, the energy saving and financial benefits of this infrastructure will not reach their full potential. We believe that we can address these problems by strategically marrying them, using disaggregation. Disaggregation refers to a set of statistical approaches for extracting end-use and/or appliance-level data from an aggregate, or whole-building, energy signal. In this paper, we explain how appliance-level data afford numerous benefits, and why using the algorithms in conjunction with smart meters is the most cost-effective and scalable solution for getting this data. We review disaggregation algorithms and their requirements, and evaluate the extent to which smart meters can meet those requirements. Research, technology, and policy recommendations are also outlined. - Highlights: ► Appliance energy use data can produce many consumer, industry, and policy benefits. ► Disaggregating smart meter data is the most cost-effective and scalable solution. ► We review algorithm requirements, and the ability of smart meters to meet them. ► Current technology identifies ∼10 appliances; minor upgrades could identify more. ► Research, technology, and policy recommendations for moving forward are outlined.

  16. Cellular strategies to cope with protein aggregation

    NARCIS (Netherlands)

    Scior, Annika; Juenemann, Katrin; Kirstein, Janine

    2016-01-01

    Nature has evolved several mechanisms to detoxify intracellular protein aggregates that arise upon proteotoxic challenges. These include the controlled deposition of misfolded proteins at distinct cellular sites, the protein disaggregation and refolding by molecular chaperones and/or degradation of

  17. Development of the armoring technique for ITER Divertor Dome

    Energy Technology Data Exchange (ETDEWEB)

    Litunovsky, Nikolay, E-mail: nlitunovsky@sintez.niiefa.spb.su [D.V. Efremov Research Institute, 3, Doroga na Metallostroy, Saint Petersburg (Russian Federation); Alekseenko, Evgeny; Makhankov, Alexey; Mazul, Igor [D.V. Efremov Research Institute, 3, Doroga na Metallostroy, Saint Petersburg (Russian Federation)

    2011-10-15

    This paper describes the current status of the technique for armoring the Plasma Facing Units (PFUs) of the ITER Divertor Dome with flat tungsten tiles, planned for application at the procurement stage. Application of high-temperature vacuum brazing for armoring High Heat Flux (HHF) plasma facing components was traditionally developed at the Efremov Institute and successfully tried out at the ITER R&D stage by manufacturing and HHF testing of a number of W- and Be-armored mock-ups. Nevertheless, the so-called 'fast brazing' technique successfully applied in the past was abandoned at the stage of manufacturing the Dome Qualification Prototypes (Dome QPs), as it failed to retain the mechanical properties of the CuCrZr heat sink of the substrate. Another problem was the substantially increased number of armoring tiles brazed onto one substrate. Stringent ITER requirements for joint quality have forced us to abandon the production of W/Cu joints by brazing in favor of casting. These modifications have allowed us to produce ITER Divertor Dome QPs with high-quality tungsten armor, which then successfully passed HHF testing. Further preparation for the procurement stage is in progress.

  18. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

    This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space to represent the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
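
    The approach above embeds the coarse streamflow series in a multi-dimensional phase space and uses a nearest-neighbour local approximation to estimate how each coarse value splits into finer ones. The sketch below illustrates that idea for a 2-day-to-daily split on synthetic flows; the embedding dimension, neighbour count and data are illustrative assumptions, not the Mississippi series used in the study.

    import numpy as np

    def disaggregate_nn(coarse, fine_history, coarse_history, m=3, k=10):
        """Disaggregate each value of `coarse` (e.g. 2-day flows) into two finer values
        (e.g. daily) using a phase-space embedding of the coarse series and the
        disaggregation weights of its k nearest historical neighbours."""
        # Embed the historical coarse series in an m-dimensional phase space.
        states = np.array([coarse_history[i - m + 1:i + 1]
                           for i in range(m - 1, len(coarse_history))])
        # Historical split of each coarse value into its two fine components.
        weights = fine_history[m - 1:] / coarse_history[m - 1:, None]

        out = []
        for i in range(m - 1, len(coarse)):
            query = coarse[i - m + 1:i + 1]
            dist = np.linalg.norm(states - query, axis=1)
            nn = np.argsort(dist)[:k]
            w = weights[nn].mean(axis=0)          # local approximation: neighbour average
            out.append(coarse[i] * w / w.sum())   # preserve the coarse total exactly
        return np.array(out)

    # Illustrative synthetic flows (not the Mississippi data used in the record).
    rng = np.random.default_rng(7)
    daily_hist = rng.gamma(2.0, 50.0, size=2000)
    fine_history = daily_hist.reshape(-1, 2)            # pairs of daily flows
    coarse_history = fine_history.sum(axis=1)           # corresponding 2-day flows
    coarse_new = rng.gamma(2.0, 50.0, size=200).reshape(-1, 2).sum(axis=1)
    print(disaggregate_nn(coarse_new, fine_history, coarse_history, m=3, k=10)[:3])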

  19. Disaggregation of sectors in Social Accounting Matrices using a customized Wolsky method

    OpenAIRE

    BARRERA-LOZANO Margarita; MAINAR CAUSAPÉ ALFREDO; VALLÉS FERRER José

    2014-01-01

    The aim of this work is to enable the implementation of disaggregation processes for specific and homogeneous sectors in Social Accounting Matrices (SAMs), while taking into account the difficulties in data collection from these types of sectors. The method proposed is based on the Wolsky technique, customized for the disaggregation of Social Accounting Matrices, within the current-facilities framework. The Spanish Social Accounting Matrix for 2008 is used as a benchmark for the analysis, and...

  20. A New Iteration Multivariate Padé Approximation Technique for ...

    African Journals Online (AJOL)

    In this paper, the Laplace transform, the new iteration method and the multivariate Padé approximation technique are employed to solve nonlinear fractional partial differential equations whose fractional derivatives are described in the sense of Caputo. The Laplace transform is used to "fully" determine the initial iteration ...
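
    The multivariate Padé step of the method above replaces a truncated series by a rational approximant. As a hedged illustration of the univariate case only, the sketch below builds a [2/3] Padé approximant of exp(x) from its Taylor coefficients with scipy.interpolate.pade; it does not implement the Laplace-transform or new-iteration-method parts of the paper.

    import numpy as np
    from math import factorial
    from scipy.interpolate import pade

    # Taylor coefficients of exp(x) about 0: 1, 1, 1/2!, 1/3!, ...
    taylor = [1.0 / factorial(k) for k in range(6)]

    # [2/3] Padé approximant built from those coefficients (numerator degree 2, denominator degree 3).
    p, q = pade(taylor, 3)

    x = 1.5
    print("exp(x)          :", np.exp(x))
    print("truncated Taylor:", np.polyval(taylor[::-1], x))
    print("Pade [2/3]      :", p(x) / q(x))

    The rational form typically extends the region where the truncated series stays accurate, which is the role the multivariate Padé approximant plays in the record's solution procedure.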

  1. Spatial and temporal disaggregation of transport-related carbon dioxide emissions in Bogota - Colombia

    Science.gov (United States)

    Hernandez-Gonzalez, L. A.; Jimenez Pizarro, R.; Rojas, N. Y.

    2011-12-01

    As a result of rapid urbanization during the last 60 years, 75% of the Colombian population now lives in cities. Urban areas are net sources of greenhouse gases (GHG) and contribute significantly to national GHG emission inventories. The development of scientifically-sound GHG mitigation strategies requires accurate GHG source and sink estimations. Disaggregated inventories are effective mitigation decision-making tools. The disaggregation process renders detailed information on the distribution of emissions by transport mode, and the resulting a priori emissions map allows for optimal definition of sites for GHG flux monitoring, either by eddy covariance or inverse modeling techniques. Fossil fuel use in transportation is a major source of carbon dioxide (CO2) in Bogota. We present estimates of CO2 emissions from road traffic in Bogota using the Intergovernmental Panel on Climate Change (IPCC) reference method, and a spatial and temporal disaggregation method. Aggregated CO2 emissions from mobile sources were estimated from monthly and annual fossil fuel (gasoline, diesel and compressed natural gas - CNG) consumption statistics, and estimations of bio-ethanol and bio-diesel use. Although bio-fuel CO2 emissions are considered balanced over annual (or multi-annual) agricultural cycles, we included them since CO2 generated by their combustion would be measurable by a net flux monitoring system. For the disaggregation methodology, we used information on Bogota's road network classification, mean travel speed and trip length for each vehicle category and road type. The CO2 emission factors were taken from recent in-road measurements for gasoline- and CNG-powered vehicles and also estimated from COPERT IV. We estimated emission factors for diesel from surveys on average trip length and fuel consumption. Using IPCC's reference method, we estimate Bogota's total transport-related CO2 emissions for 2008 (reference year) at 4.8 Tg CO2. The disaggregation method estimation is
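
    The two steps described above, an aggregate fuel-based estimate followed by a spatial/modal split, can be sketched very simply. The snippet below uses IPCC-style default emission factors and invented fuel sales and vehicle-kilometre shares; none of the numbers are the Bogota statistics used in the study.

    # (1) Aggregate, IPCC-reference-method-style estimate: fuel consumption x emission factor.
    fuel_sales_tj = {"gasoline": 60_000, "diesel": 40_000, "cng": 8_000}   # TJ per year (illustrative)
    emission_factor = {"gasoline": 69.3, "diesel": 74.1, "cng": 56.1}      # t CO2 / TJ (typical defaults)

    total_co2_t = sum(fuel_sales_tj[f] * emission_factor[f] for f in fuel_sales_tj)
    print(f"aggregate transport emissions: {total_co2_t / 1e6:.2f} Tg CO2")

    # (2) Disaggregate the total across road types by their share of vehicle-kilometres.
    vkm_share = {"arterial": 0.45, "secondary": 0.35, "local": 0.20}       # illustrative shares
    by_road_type = {r: total_co2_t * s for r, s in vkm_share.items()}
    print({r: round(v / 1e6, 2) for r, v in by_road_type.items()})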

  2. Technological shape and size: A disaggregated perspective on sectoral innovation systems in renewable electrification pathways

    DEFF Research Database (Denmark)

    Hansen, Ulrich Elmer; Gregersen, Cecilia; Lema, Rasmus

    2018-01-01

    The sectoral innovation system perspective has been developed as an analytical framework to analyse and understand innovation dynamics within and across various sectors. Most of the research conducted on sectoral innovation systems has focused on an aggregate-level analysis of entire sectors...... This paper argues that a disaggregated (sub-sectoral) focus is more suited to policy-oriented work on the development and diffusion of renewable energy, particularly in countries with rapidly developing energy systems and open technology choices. It focuses on size, distinguishing between small-scale (mini...... important analytical implications because the disaggregated perspective allows us to identify trajectories that cut across conventionally defined core technologies. This is important for ongoing discussions of electrification pathways in developing countries. We conclude the paper by distilling......

  3. Aggregation control of quantum dots through ion-mediated hydrogen bonding shielding.

    Science.gov (United States)

    Liu, Jianbo; Yang, Xiaohai; Wang, Kemin; He, Xiaoxiao; Wang, Qing; Huang, Jin; Liu, Yan

    2012-06-26

    Nanoparticle stabilization against detrimental aggregation is a critical parameter that needs to be well controlled. Herein, we present a facile and rapid ion-mediated dispersing technique that leads to hydrophilic aggregate-free quantum dots (QDs). Because of the shielding of the hydrogen bonds between cysteamine-capped QDs, the presence of F(-) ions disassembled the aggregates of QDs and afforded their high colloidal stability. The F(-) ions also greatly eliminated the nonspecific adsorption of the QDs on glass slides and cells. Unlike conventional colloidal stabilization methods, which require an organic ligand and/or polymer to passivate the nanoparticle surface, the proposed approach exploits the small size and large diffusion coefficient of inorganic ions as a dispersant, giving the disaggregation fast reaction dynamics and a negligible influence on the intrinsic surface functional properties of the QDs. Therefore, the ion-mediated dispersing strategy showed great potential in chemosensing and biomedical applications.

  4. Dose reduction in pediatric abdominal CT: use of iterative reconstruction techniques across different CT platforms

    International Nuclear Information System (INIS)

    Khawaja, Ranish Deedar Ali; Singh, Sarabjeet; Otrakji, Alexi; Padole, Atul; Lim, Ruth; Nimkin, Katherine; Westra, Sjirk; Kalra, Mannudeep K.; Gee, Michael S.

    2015-01-01

    Dose reduction in children undergoing CT scanning is an important priority for the radiology community and public at large. Drawbacks of radiation reduction are increased image noise and artifacts, which can affect image interpretation. Iterative reconstruction techniques have been developed to reduce noise and artifacts from reduced-dose CT examinations, although reconstruction algorithm, magnitude of dose reduction and effects on image quality vary. We review the reconstruction principles, radiation dose potential and effects on image quality of several iterative reconstruction techniques commonly used in clinical settings, including 3-D adaptive iterative dose reduction (AIDR-3D), adaptive statistical iterative reconstruction (ASIR), iDose, sinogram-affirmed iterative reconstruction (SAFIRE) and model-based iterative reconstruction (MBIR). We also discuss clinical applications of iterative reconstruction techniques in pediatric abdominal CT. (orig.)

  5. Dose reduction in pediatric abdominal CT: use of iterative reconstruction techniques across different CT platforms

    Energy Technology Data Exchange (ETDEWEB)

    Khawaja, Ranish Deedar Ali; Singh, Sarabjeet; Otrakji, Alexi; Padole, Atul; Lim, Ruth; Nimkin, Katherine; Westra, Sjirk; Kalra, Mannudeep K.; Gee, Michael S. [MGH Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA (United States)

    2015-07-15

    Dose reduction in children undergoing CT scanning is an important priority for the radiology community and public at large. Drawbacks of radiation reduction are increased image noise and artifacts, which can affect image interpretation. Iterative reconstruction techniques have been developed to reduce noise and artifacts from reduced-dose CT examinations, although reconstruction algorithm, magnitude of dose reduction and effects on image quality vary. We review the reconstruction principles, radiation dose potential and effects on image quality of several iterative reconstruction techniques commonly used in clinical settings, including 3-D adaptive iterative dose reduction (AIDR-3D), adaptive statistical iterative reconstruction (ASIR), iDose, sinogram-affirmed iterative reconstruction (SAFIRE) and model-based iterative reconstruction (MBIR). We also discuss clinical applications of iterative reconstruction techniques in pediatric abdominal CT. (orig.)

  6. Application of interferometry and Faraday rotation techniques for density measurements on ITER

    International Nuclear Information System (INIS)

    Snider, R.T.; Carlstrom, T.N.; Ma, C.H.; Peebles, W.A.

    1995-01-01

    There is a need for a real-time, reliable density measurement for density control, compatible with the restricted access and radiation environment on ITER. Line average density measurements using microwave or laser interferometry techniques have proven to be robust and reliable for density control on contemporary tokamaks. In ITER, the large path length, high density and density gradients limit the wavelength of a probing beam to shorter than about 50 μm due to refraction effects. In this paper the authors consider the design of short wavelength vibration-compensated interferometers and Faraday rotation techniques for density measurements on ITER. These techniques allow operation of the diagnostics without a prohibitively large vibration-isolated structure and permit the optics to be mounted directly on the radial port plugs on ITER. A beam path designed for 10.6 μm (CO2 laser) with a tangential path through the plasma allows both an interferometer and a Faraday rotation measurement of the line average density with good density resolution while avoiding refraction problems. Plasma effects on the probing beams and design tradeoffs will be discussed along with radiation and long pulse issues. A proposed layout of the diagnostic for ITER will be presented
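
    For orientation, the standard expressions connecting these two measurements to the line-integrated density are sketched below; the numerical constants are textbook values for plasma interferometry and polarimetry, not figures taken from this paper.

```latex
% Interferometer phase shift and Faraday rotation angle (SI units):
%   n_e in m^{-3}, B_\parallel in T, wavelength \lambda in m, r_e = 2.82 \times 10^{-15}\,\mathrm{m}
\Delta\phi = r_e\,\lambda \int n_e \, dl
\qquad
\alpha\;[\mathrm{rad}] \approx 2.62\times 10^{-13}\,\lambda^{2} \int n_e\, B_{\parallel} \, dl
```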

  7. Nanoblock aggregation-disaggregation of zeolite nanoparticles: Temperature control on crystallinity

    KAUST Repository

    Gao, Feifei

    2011-04-21

    During the induction period of silicalite-1 formation at 80 °C, primary nanoblocks of 8-11 nm self-assemble together into fragile nanoflocculates of ca. 60 nm that dislocate and reappear according to a slow pseudoperiodical process. Between 22 and 32 h, the nanoflocculates grow up to 350 nm and contain ill- and well-oriented aggregates of ca. 40 nm. After 48 h, only ill-faceted monodomains of ca. 90 nm remain, which self-assemble into larger flocculates of ca. 450 nm. For crystal growth performed at 90 °C, most of the final aggregates exhibit ill-oriented assembly. This is consistent with a trial-and-error block-by-block building mechanism that turns into an irreversible and apparently faster process at 90 °C, causing a definitively ill-oriented product. The nanoblocks, aggregates, and flocculates were characterized in nondiluted, nondiluted and ultrasonicated, or diluted and ultrasonicated solutions, using mainly dynamic light scattering and cryo-high-resolution transmission electron microscopy at various tilted angles. © 2011 American Chemical Society.

  8. Disaggregate energy consumption and industrial production in South Africa

    International Nuclear Information System (INIS)

    Ziramba, Emmanuel

    2009-01-01

    This paper tries to assess the relationship between disaggregate energy consumption and industrial output in South Africa by undertaking a cointegration analysis using annual data from 1980 to 2005. We also investigate the causal relationships between the various disaggregate forms of energy consumption and industrial production. Our results imply that industrial production and employment are long-run forcing variables for electricity consumption. Applying the [Toda, H.Y., Yamamoto, T., 1995. Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics 66, 225-250] technique to Granger-causality, we find bi-directional causality between oil consumption and industrial production. For the other forms of energy consumption, there is evidence in support of the energy neutrality hypothesis. There is also evidence of causality between employment and electricity consumption as well as coal consumption causing employment.

  9. Enhanced nonlinear iterative techniques applied to a nonequilibrium plasma flow

    International Nuclear Information System (INIS)

    Knoll, D.A.

    1998-01-01

    The authors study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. They use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. They investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, mesh sequencing, and a pseudotransient continuation technique is used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with incomplete lower-upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a mesh sequencing implementation provides significant CPU savings for fine grid calculations. Performance comparisons of modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented
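
    A minimal matrix-free Newton-Krylov loop of the kind described above is sketched below, assuming a user-supplied residual function F(u). The finite-difference step, fixed damping factor and GMRES settings are illustrative placeholders, and the ILU preconditioning, mesh sequencing and pseudotransient continuation used in the paper are omitted.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, newton_tol=1e-8, max_newton=30, fd_eps=1e-7, damping=0.5):
    """Hedged sketch of a Jacobian-free Newton-Krylov iteration.

    F  : residual function, F(u) -> array with the same shape as u
    u0 : initial guess
    """
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < newton_tol:
            break

        def jv(v, u=u, r=r):
            # Jacobian-vector product by directional finite difference:
            # J(u) v ~ (F(u + h v) - F(u)) / h
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            h = fd_eps * (1.0 + np.linalg.norm(u)) / nv
            return (F(u + h * v) - r) / h

        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r)      # unpreconditioned GMRES; an ILU preconditioner would go here
        u = u + damping * du      # fixed damping in place of a line search or pseudotransient step
    return u
```

    Passing a preconditioner through the M argument of gmres and replacing the fixed damping with a proper globalization strategy would bring this skeleton closer to the solver structure described in the abstract.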

  10. A Novel Magnetic Actuation Scheme to Disaggregate Nanoparticles and Enhance Passage across the Blood–Brain Barrier

    Directory of Open Access Journals (Sweden)

    Ali Kafash Hoshiar

    2017-12-01

    Full Text Available The blood–brain barrier (BBB) hinders drug delivery to the brain. Despite various efforts to develop preprogrammed actuation schemes for magnetic drug delivery, the unmodeled aggregation phenomenon limits drug delivery performance. This paper proposes a novel scheme with an aggregation model for a feed-forward magnetic actuation design. A simulation platform for aggregated particle delivery is developed and an actuation scheme is proposed to deliver aggregated magnetic nanoparticles (MNPs) using a discontinuous asymmetrical magnetic actuation. The experimental results with a Y-shaped channel indicated the success of the proposed scheme in steering and disaggregation. The delivery performance of the developed scheme was examined using a realistic, three-dimensional (3D) vessel simulation. Furthermore, the proposed scheme enhanced the transport and uptake of MNPs across the BBB in mice. The scheme presented here facilitates the passage of particles across the BBB to the brain using an electromagnetic actuation scheme.

  11. Descriptive parameters of the erythrocyte aggregation phenomenon using a laser transmission optical chip

    Science.gov (United States)

    Toderi, Martín A.; Castellini, Horacio V.; Riquelme, Bibiana D.

    2017-01-01

    The study of red blood cell (RBC) aggregation is of great interest because of its implications for human health. Altered RBC aggregation can lead to microcirculatory problems as in vascular pathologies, such as hypertension and diabetes, due to a decrease in the erythrocyte surface electric charge and an increase in the ligands present in plasma. The process of erythrocyte aggregation was studied under stasis conditions (free of shear stresses), using an optical chip based on the laser transmission technique. Kinetic curves of erythrocyte aggregation under different conditions were obtained, allowing evaluation and characterization of this process. Two main characteristics of blood that influence erythrocyte aggregation were analyzed: the erythrocyte surface anionic charge (EAC), modified by digestion with the enzyme trypsin, and the plasma protein concentration of the suspending medium, varied using plasma dilutions in physiological saline with human albumin. A theoretical approach was evaluated to obtain aggregation and disaggregation ratios by fitting syllectogram data. Parameters sensitive to a reduced erythrocyte EAC (Amp100, t) were identified, while other parameters (AI, M-Index) proved representative of variations in the plasma protein content of the suspending medium. These results are very useful for further applications in biomedicine.

  12. A GIS-based disaggregate spatial watershed analysis using RADAR data

    International Nuclear Information System (INIS)

    Al-Hamdan, M.

    2002-01-01

    Hydrology is the study of water in all its forms, origins, and destinations on the earth. This paper develops a novel modeling technique using a geographic information system (GIS) to facilitate watershed hydrological routing using RADAR data. The RADAR rainfall data, segmented to 4 km by 4 km blocks, divides the watershed into several sub-basins which are modeled independently. A case study for the GIS-based disaggregate spatial watershed analysis using RADAR data is provided for South Fork Cowikee Creek near Batesville, Alabama. All the data necessary to complete the analysis is maintained in the ArcView GIS software. This paper concludes that the GIS-based disaggregate spatial watershed analysis using RADAR data is a viable method to calculate hydrological routing for large watersheds. (author)

  13. Decision Aggregation in Distributed Classification by a Transductive Extension of Maximum Entropy/Improved Iterative Scaling

    Directory of Open Access Journals (Sweden)

    George Kesidis

    2008-06-01

    Full Text Available In many ensemble classification paradigms, the function which combines local/base classifier decisions is learned in a supervised fashion. Such methods require common labeled training examples across the classifier ensemble. However, in some scenarios, where an ensemble solution is necessitated, common labeled data may not exist: (i) legacy/proprietary classifiers, and (ii) spatially distributed and/or multiple modality sensors. In such cases, it is standard to apply fixed (untrained) decision aggregation such as voting, averaging, or naive Bayes rules. In recent work, an alternative transductive learning strategy was proposed. There, decisions on test samples were chosen aiming to satisfy constraints measured by each local classifier. This approach was shown to reliably correct for class prior mismatch and to robustly account for classifier dependencies. Significant gains in accuracy over fixed aggregation rules were demonstrated. There are two main limitations of that work. First, feasibility of the constraints was not guaranteed. Second, heuristic learning was applied. Here, we overcome these problems via a transductive extension of maximum entropy/improved iterative scaling for aggregation in distributed classification. This method is shown to achieve improved decision accuracy over the earlier transductive approach and fixed rules on a number of UC Irvine datasets.

  14. Disaggregate energy consumption and industrial production in South Africa

    Energy Technology Data Exchange (ETDEWEB)

    Ziramba, Emmanuel [Department of Economics, University of South Africa, P.O Box 392, UNISA 0003 (South Africa)

    2009-06-15

    This paper tries to assess the relationship between disaggregate energy consumption and industrial output in South Africa by undertaking a cointegration analysis using annual data from 1980 to 2005. We also investigate the causal relationships between the various disaggregate forms of energy consumption and industrial production. Our results imply that industrial production and employment are long-run forcing variables for electricity consumption. Applying the [Toda, H.Y., Yamamoto, T., 1995. Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics 66, 225-250] technique to Granger-causality, we find bi-directional causality between oil consumption and industrial production. For the other forms of energy consumption, there is evidence in support of the energy neutrality hypothesis. There is also evidence of causality between employment and electricity consumption as well as coal consumption causing employment. (author)

  15. Disaggregate energy consumption and industrial output in the United States

    International Nuclear Information System (INIS)

    Ewing, Bradley T.; Sari, Ramazan; Soytas, Ugur

    2007-01-01

    This paper investigates the effect of disaggregate energy consumption on industrial output in the United States. Most of the related research utilizes aggregate data which may not indicate the relative strength or explanatory power of various energy inputs on output. We use monthly data and employ the generalized variance decomposition approach to assess the relative impacts of energy and employment on real output. Our results suggest that unexpected shocks to coal, natural gas and fossil fuel energy sources have the highest impacts on the variation of output, while several renewable sources exhibit considerable explanatory power as well. However, none of the energy sources explain more of the forecast error variance of industrial output than employment

  16. Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems

    Science.gov (United States)

    Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming

    2018-06-01

    Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates aggregates on every iterative cycle to keep the coarse-level corrections highly accurate. Accordingly, its fast convergence rate is well guaranteed, but often a large proportion of the run time is consumed by the aggregation processes. In this paper, we show that the aggregates on each level in this method can be utilized to transform the probability equation of that level into a block linear system. We then propose a Block-Jacobi relaxation that deals with the block system on each level to smooth the error. Some theoretical analysis of this technique is presented, and it is also adapted to solve PageRank problems. The purpose of this technique is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems. It also attempts to shed some light on new ways of making aggregation processes more cost-effective in aggregation multigrid methods. Numerical experiments are presented to illustrate the effectiveness of this technique.
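
    For reference, the sketch below sets up the baseline fine-level iteration that such multigrid schemes aim to accelerate: plain power iteration for the PageRank stationary vector of a column-stochastic matrix P. It is not the paper's Block-Jacobi/aggregation multigrid method, and the damping factor and tolerance are illustrative choices.

```python
import numpy as np

def pagerank_power(P, alpha=0.85, tol=1e-10, max_iter=1000):
    """Baseline power iteration for the PageRank stationary vector.

    P     : column-stochastic link matrix (n x n)
    alpha : damping (teleportation) factor, a conventional illustrative value
    """
    n = P.shape[0]
    x = np.full(n, 1.0 / n)            # initial probability vector
    v = np.full(n, 1.0 / n)            # uniform teleportation vector
    for _ in range(max_iter):
        x_new = alpha * (P @ x) + (1.0 - alpha) * v
        x_new /= x_new.sum()           # keep the iterate a probability vector
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x
```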

  17. FDI spillovers at different levels of industrial and spatial aggregation: Evidence from the electricity sector

    International Nuclear Information System (INIS)

    Del Bo, Chiara F.

    2013-01-01

    The European electricity sector has undergone significant reforms in recent years, in the direction of market opening, integration and privatization. National and regional markets are now characterized by the presence of domestic and foreign firms, both privately and publicly owned. Did foreign entry induce positive productivity spillovers to domestic firms in the electricity sector, both at the aggregate and disaggregated level, while also controlling for domestic firms' ownership? This paper examines this issue by focusing on regional foreign direct investment (FDI) spillovers in the aggregated electricity sector and in the disaggregated sub-sectors of generation and distribution. The results show the importance of industry aggregation in determining the existence and sign of regional FDI spillovers for domestic firms. FDI spillovers are then calculated based on a purely geographic scale, by considering the distance between each firm's city of location and firms in neighboring cities. The importance and sign of FDI spillovers is different with respect to the analysis based on regional administrative boundaries, suggesting that spatial aggregation, along with industrial aggregation, is relevant in accounting for productivity spillover effects of foreign presence in the EU electricity sector. - Highlights: • Has the post-reform entry of foreign firms in the EU electricity sector induced spillover effects? • Spatial and industrial disaggregation are important when evaluating foreign direct investment (FDI) spillovers. • Positive horizontal spillovers are found only in the distribution segment of the industry. • Vertical spillovers in generation are negative; positive in distribution. • Spillover intensity in distribution decreasing with distance; regional dimension relevant in generation

  18. Nanoblock aggregation-disaggregation of zeolite nanoparticles: Temperature control on crystallinity

    KAUST Repository

    Gao, Feifei; Sougrat, Rachid; Albela, Belé n; Bonneviot, Laurent

    2011-01-01

    performed at 90 °C, most of the final aggregates exhibit ill-oriented assembly. This is consistent with a trial-and-error block-by-block building mechanism that turns into an irreversible and apparently faster process at 90 °C, causing definitively ill

  19. Capital-energy complementarity in aggregate energy-economic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, W.W.

    1979-10-01

    The interplay between capital and energy will affect the outcome of energy-policy initiatives. A static model clarifies the interpretation of the conflicting empirical evidence on the nature of this interplay. This resolves an apparent conflict between engineering and economic interpretations and points to an additional ambiguity that can be resolved by distinguishing between policy issues at aggregated and disaggregated levels. Restrictions on aggregate energy use should induce reductions in the demand for capital and exacerbate the economic impacts of the energy policy. 32 references.

  20. Iterative Reconstruction Techniques in Abdominopelvic CT: Technical Concepts and Clinical Implementation.

    Science.gov (United States)

    Patino, Manuel; Fuentes, Jorge M; Singh, Sarabjeet; Hahn, Peter F; Sahani, Dushyant V

    2015-07-01

    This article discusses the clinical challenge of low-radiation-dose examinations, the commonly used approaches for dose optimization, and their effect on image quality. We emphasize practical aspects of the different iterative reconstruction techniques, along with their benefits, pitfalls, and clinical implementation. The widespread use of CT has raised concerns about potential radiation risks, motivating diverse strategies to reduce the radiation dose associated with CT. CT manufacturers have developed alternative reconstruction algorithms intended to improve image quality on dose-optimized CT studies, mainly through noise and artifact reduction. Iterative reconstruction techniques take unique approaches to noise reduction and provide distinct strength levels or settings.

  1. Light scattering method to measure red blood cell aggregation during incubation

    Science.gov (United States)

    Grzegorzewski, B.; Szołna-Chodór, A.; Baryła, J.; Dreżek, D.

    2018-01-01

    Red blood cell (RBC) aggregation can be observed both in vivo and in vitro. This process is a cause of alterations of blood flow in the microvascular network. Enhanced RBC aggregation makes oxygen and nutrient delivery difficult. Measurements of RBC aggregation usually describe the process for a sample in which the state of the solution and cells is well defined and the system has reached equilibrium. Incubation of RBCs in various solutions is frequently used to study the effects of the solutions on RBC aggregation. The aggregation parameters are compared before and after incubation, while the detailed changes of the parameters during incubation remain unknown. In this paper we propose a method to measure red blood cell aggregation during incubation, based on the well-known technique in which backscattered light is used to assess the parameters of RBC aggregation. A Couette system consisting of two cylinders is adopted in the method, and the incubation is observed in this system. The following sequence of rotations is applied: two minutes of rotation followed by a two-minute stop. In this way we obtain a time series of backscattered intensity consisting of signals corresponding to disaggregation and aggregation, respectively. It is shown that the temporal changes of the intensity reflect changes of RBC aggregation during incubation. To demonstrate the ability of the method to assess the effect of incubation time on RBC aggregation, results are shown for solutions that increase RBC aggregation as well as for a case where the aggregation is decreased.

  2. Marine snow microbial communities: scaling of abundances with aggregate size

    DEFF Research Database (Denmark)

    Kiørboe, Thomas

    2003-01-01

    Marine aggregates are inhabited by diverse microbial communities, and the concentration of attached microbes typically exceeds concentrations in the ambient water by orders of magnitude. An extension of the classical Lotka-Volterra model, which includes 3 trophic levels (bacteria, flagellates...... are controlled by flagellate grazing, while flagellate and ciliate populations are governed by colonization and detachment. The model also suggests that microbial populations are turned over rapidly (1 to 20 times d-1) due to continued colonization and detachment. The model overpredicts somewhat the scaling...... of microbial abundances with aggregate size observed in field-collected aggregates. This may be because it disregards the aggregation/disaggregation dynamics of aggregates, as well as interspecific interactions between bacteria....

  3. Enhanced nonlinear iterative techniques applied to a non-equilibrium plasma flow

    Energy Technology Data Exchange (ETDEWEB)

    Knoll, D.A.; McHugh, P.R. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1996-12-31

    We study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially-ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales, and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. We use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. We investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, one-way multigrid and a pseudo-transient continuation technique are used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with Incomplete Lower-Upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a one-way multigrid implementation provides significant CPU savings for fine grid calculations. Performance comparisons of the modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented.

  4. Multisite rainfall downscaling and disaggregation in a tropical urban area

    Science.gov (United States)

    Lu, Y.; Qin, X. S.

    2014-02-01

    A systematic downscaling-disaggregation study was conducted over Singapore Island, with the aim of generating high spatial and temporal resolution rainfall data under future climate-change conditions. The study consisted of two major components. The first part was an inter-comparison of various alternative downscaling and disaggregation methods based on observed data. This included (i) single-site generalized linear model (GLM) plus K-nearest neighbor (KNN) (S-G-K) vs. multisite GLM (M-G) for spatial downscaling, (ii) HYETOS vs. KNN for single-site disaggregation, and (iii) KNN vs. MuDRain (Multivariate Rainfall Disaggregation tool) for multisite disaggregation. The results revealed that, for multisite downscaling, M-G performs better than S-G-K in covering the observed data with a lower RMSE value; for single-site disaggregation, KNN preserved the basic statistics (i.e. standard deviation, lag-1 autocorrelation and probability of wet hour) better than HYETOS; for multisite disaggregation, MuDRain outperformed KNN in fitting interstation correlations. In the second part of the study, an integrated downscaling-disaggregation framework based on M-G, KNN, and MuDRain was used to generate hourly rainfall at multiple sites. The results indicated that the downscaled and disaggregated rainfall data based on multiple ensembles from HadCM3 for the period from 1980 to 2010 covered the observed mean rainfall amounts and extremes well, and also reasonably preserved the spatial correlations at both daily and hourly timescales. The framework was also used to project future rainfall conditions under the HadCM3 SRES A2 and B2 scenarios. It was indicated that the annual rainfall amount could decrease by up to 5% by the end of this century, while wet-season rainfall and extreme hourly rainfall could increase notably.
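
    The sketch below illustrates the basic idea behind KNN-type temporal disaggregation at a single site: resample the hourly profile of one of the k historical days whose daily totals are closest to the target, then rescale it to the target total. It is a simplified stand-in for the single-site KNN step and does not reproduce the multisite (MuDRain) correlation handling; the function and variable names are hypothetical.

```python
import numpy as np

def knn_disaggregate_day(daily_total, hist_daily_totals, hist_hourly, k=5, rng=None):
    """Disaggregate one daily rainfall total into 24 hourly values.

    hist_daily_totals : array of historical daily totals
    hist_hourly       : array of shape (n_days, 24) with the matching hourly profiles
    """
    rng = rng or np.random.default_rng()
    hist_daily_totals = np.asarray(hist_daily_totals, dtype=float)
    dist = np.abs(hist_daily_totals - daily_total)
    nearest = np.argsort(dist)[:k]          # indices of the k most similar days
    chosen = rng.choice(nearest)             # sample one analogue day
    profile = np.asarray(hist_hourly[chosen], dtype=float)
    if profile.sum() <= 0:
        return np.zeros_like(profile)
    return daily_total * profile / profile.sum()   # rescale to the target daily total
```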

  5. Foreign labor and regional labor markets: aggregate and disaggregate impact on growth and wages in Danish regions

    DEFF Research Database (Denmark)

    Schmidt, Torben Dall; Jensen, Peter Sandholt

    2013-01-01

    non-negative effects on the job opportunities for Danish workers in regional labor markets, whereas the evidence of a regional wage growth effect is mixed. We also present disaggregated results focusing on regional heterogeneity of business structures, skill levels and backgrounds of foreign labor....... The results are interpreted within a specific Danish labor market context and the associated regional outcomes. This adds to previous findings and emphasizes the importance of labor market institutions for the effect of foreign labor on regional employment growth....

  6. Oscillations in deviating difference equations using an iterative technique

    Directory of Open Access Journals (Sweden)

    George E Chatzarakis

    2017-07-01

    Full Text Available The paper deals with the oscillation of first-order linear difference equations with deviating argument and nonnegative coefficients. New sufficient oscillation conditions, involving limsup and obtained via an iterative technique, are given, which essentially improve all known results. We illustrate the results and the improvement over other known oscillation criteria by examples, numerically solved in Matlab.
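
    For orientation, a standard setting for such results is sketched below, assuming the usual retarded first-order linear difference equation; the limsup condition shown is the classical (non-iterated) sufficient criterion that iterative techniques of this kind sharpen, not the paper's own condition.

```latex
% retarded first-order linear difference equation with nonnegative coefficients
\Delta x(n) + p(n)\, x(n-k) = 0, \qquad p(n) \ge 0, \; k \in \mathbb{N},
% classical limsup-type sufficient condition for oscillation of all solutions
\limsup_{n \to \infty} \sum_{i=n-k}^{n} p(i) > 1 .
```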

  7. Conditions for the Occurrence of Slaking and Other Disaggregation Processes under Rainfall

    Directory of Open Access Journals (Sweden)

    Frédéric Darboux

    2016-07-01

    Full Text Available Under rainfall conditions, aggregates may suffer breakdown by different mechanisms. Slaking is a very efficient breakdown mechanism. However, its occurrence under rainfall conditions has not been demonstrated. Therefore, the aim of this study was to evaluate the occurrence of slaking under rain. Two soils with silt loam (SL) and clay loam (CL) textures were analyzed. Two classes of aggregates were utilized: 1–3 mm and 3–5 mm. The aggregates were submitted to stability tests and to high-intensity (90 mm·h−1) and low-intensity (28 mm·h−1) rainfalls with different kinetic energy impacts (large and small raindrops), using a rainfall simulator. The fragment size distributions were determined both after the stability tests and after the rainfall simulations, with calculation of the mean weighted diameter (MWD). After the stability tests, the SL presented smaller MWDs than the CL for all tests. In both soils the lowest MWD was obtained with the fast wetting test, showing that they were sensitive to slaking. For both soils and the two aggregate classes evaluated, the MWDs were recorded from the very beginning of the rainfall event under the four rainfall conditions. The occurrence of slaking in the evaluated soils was not verified under the simulated rainfall conditions studied. The early disaggregation was strongly related to the cumulative kinetic energy, pointing to the occurrence of mechanical breakdown. Because slaking requires a very high wetting rate on initially dry aggregates, it seems unlikely to occur under field conditions, except perhaps for furrow irrigation.

  8. Weighted thinned linear array design with the iterative FFT technique

    CSIR Research Space (South Africa)

    Du Plessis, WP

    2011-09-01

    Full Text Available techniques utilise simulated annealing [3]-[5], [10], mixed integer linear programming [7], genetic algorithms [9], and a hybrid approach combining a genetic algorithm and a local optimiser [8]. The iterative Fourier technique (IFT) developed by Keizer [2... algorithm being well-suited to obtaining low CTRs. Test problems from the literature are considered, and the results obtained with the IFT considerably exceed those achieved with other algorithms.
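
    A compact sketch of an iterative-FFT thinning loop in the spirit of Keizer's technique follows. The FFT size, main-beam exclusion width, sidelobe target and iteration count are illustrative assumptions rather than values from the paper, and the weighted-excitation aspects the paper adds are not reproduced here.

```python
import numpy as np

def ift_thin(n_elem=100, fill=0.7, sll_db=-22.0, n_fft=8192, n_iter=100, mb_bins=8):
    """Iterative-FFT style thinning of a linear array (half-wavelength spacing assumed).

    fill    : fraction of elements left ON
    sll_db  : desired peak sidelobe level relative to the main beam
    mb_bins : FFT bins on each side of the peak treated as main beam (an assumption)
    """
    n_on = int(round(fill * n_elem))
    w = np.ones(n_elem)                          # start from a fully filled array
    for _ in range(n_iter):
        af = np.fft.fft(w, n_fft)                # array factor samples over u-space
        mag = np.abs(af)
        k0 = int(np.argmax(mag))                 # main-beam peak bin
        sl_mask = np.ones(n_fft, dtype=bool)
        sl_mask[np.arange(k0 - mb_bins, k0 + mb_bins + 1) % n_fft] = False
        limit = mag[k0] * 10 ** (sll_db / 20.0)  # allowed sidelobe amplitude
        over = sl_mask & (mag > limit)
        af[over] *= limit / mag[over]            # clip offending sidelobes, keep phase
        w_new = np.real(np.fft.ifft(af)[:n_elem])
        on = np.argsort(w_new)[-n_on:]           # keep the n_on largest excitations ON
        w = np.zeros(n_elem)
        w[on] = 1.0
    return w
```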

  9. Three-dimensional shape analysis of coarse aggregates: New techniques for and preliminary results on several different coarse aggregates and reference rocks

    International Nuclear Information System (INIS)

    Erdogan, S.T.; Quiroga, P.N.; Fowler, D.W.; Saleh, H.A.; Livingston, R.A.; Garboczi, E.J.; Ketcham, P.M.; Hagedorn, J.G.; Satterfield, S.G.

    2006-01-01

    The shape of aggregates used in concrete is an important parameter that helps determine many concrete properties, especially the rheology of fresh concrete and early-age mechanical properties. This paper discusses the sample preparation and image analysis techniques necessary for obtaining an aggregate particle image in 3-D, using X-ray computed tomography, which is then suitable for spherical harmonic analysis. The shapes of three reference rocks are analyzed for uncertainty determination via direct comparison to the geometry of their reconstructed images. A Virtual Reality Modeling Language technique is demonstrated that can give quick and accurate 3-D views of aggregates. Shape data on several different kinds of coarse aggregates are compared and used to illustrate potential mathematical shape analyses made possible by the spherical harmonic information

  10. Contextualising Water Use in Residential Settings: A Survey of Non-Intrusive Techniques and Approaches

    Directory of Open Access Journals (Sweden)

    Davide Carboni

    2016-05-01

    Full Text Available Water monitoring in households is important to ensure the sustainability of fresh water reserves on our planet. It provides stakeholders with the statistics required to formulate optimal strategies in residential water management. However, monitoring should not be cost-prohibitive, and appliance-level water monitoring cannot practically be achieved by deploying sensors on every faucet or water-consuming device of interest, due to the hardware costs and complexity involved, not to mention the risk of accidental leakage that can derive from the extra plumbing needed. Machine learning and data mining are promising techniques for analysing monitored data to obtain non-intrusive water usage disaggregation, because they can discern water usage from the aggregated data acquired at a single point of observation. This paper provides an overview of water usage disaggregation systems and the related techniques adopted for water event classification. The state of the art in algorithms and testbeds used for fixture recognition is reviewed, and a discussion of the prominent challenges and future research is also included.

  11. Three-dimensional laser scanning technique to quantify aggregate and ballast shape properties

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2013-06-01

    Full Text Available methods towards more accurate and automated techniques to quantify aggregate shape properties. This paper validates a new flakiness index equation using three-dimensional (3-D) laser scanning data of aggregate and ballast materials obtained from...

  12. Load Disaggregation Technologies: Real World and Laboratory Performance

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.; Butner, Ryan S.; Johnson, Erica M.

    2016-09-28

    Low-cost interval metering and communication technology improvements over the past ten years have enabled the maturity of load disaggregation (or non-intrusive load monitoring) technologies to better estimate and report energy consumption of individual end-use loads. With the appropriate performance characteristics, these technologies have the potential to enable many utility and customer facing applications such as billing transparency, itemized demand and energy consumption, appliance diagnostics, commissioning, energy efficiency savings verification, load shape research, and demand response measurement. However, there has been much skepticism concerning the ability of load disaggregation products to accurately identify and estimate energy consumption of end-uses, which has hindered widespread market adoption. A contributing factor is that common test methods and metrics are not available to evaluate performance without having to perform large-scale field demonstrations and pilots, which can be costly when developing such products. Without common and cost-effective methods of evaluation, more developed disaggregation technologies will continue to be slow to market and potential users will remain uncertain about their capabilities. This paper reviews recent field studies and laboratory tests of disaggregation technologies. Several factors are identified that are important to consider in test protocols, so that the results reflect real world performance. Potential metrics are examined to highlight their effectiveness in quantifying disaggregation performance. This analysis is then used to suggest performance metrics that are meaningful and of value to potential users and that will enable researchers/developers to identify beneficial ways to improve their technologies.
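
    As one example of the kind of metric such evaluations rely on, the sketch below computes a total-energy-assignment accuracy of the type commonly reported in the disaggregation literature; it is offered only for illustration and is not necessarily a metric recommended in this report.

```python
import numpy as np

def energy_assignment_accuracy(y_true, y_pred):
    """One commonly used disaggregation accuracy convention:
    1 - sum_t sum_i |yhat_i(t) - y_i(t)| / (2 * sum_t sum_i y_i(t)).

    y_true, y_pred : arrays of shape (T, n_appliances) with per-appliance power/energy.
    Returns 1.0 for a perfect estimate; lower values indicate misassigned energy.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    num = np.abs(y_pred - y_true).sum()
    den = 2.0 * y_true.sum()
    return 1.0 - num / den
```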

  13. Measurement of blood coagulation with considering RBC aggregation through a microchip-based light transmission aggregometer.

    Science.gov (United States)

    Lim, Hyunjung; Nam, Jeonghun; Xue, Shubin; Shin, Sehyun

    2011-01-01

    Even though blood coagulation can be tested by various methods and techniques, the effect of RBC aggregation on blood coagulation is not fully understood. The present study monitored clot formation in a microchip-based light transmission aggregometer. Citrated blood samples with and without the addition of calcium ion solution were initially disaggregated by rotating a stirrer in the microchip. After abrupt stop of the rotating stirrer, the transmitted light intensity over time was recorded. The syllectogram (light intensity vs. time graph) manifested a rapid increase that is associated with RBC aggregation followed by a decrease that is associated with blood coagulation. The time to reach the peak point was used as a new index of coagulation time (CT) and ranged from 200 to 500 seconds in the present measurements. The CT was inversely proportional to the concentration of fibrinogen, which enhances RBC aggregation. In addition, the CT was inversely proportional to the hematocrit, which is similar to the case of the prothrombin time (PT), as measured by a commercial coagulometer. Thus, we carefully concluded that RBC aggregation should be considered in tests of blood coagulation.

  14. A novel method for soil aggregate stability measurement by laser granulometry with sonication

    Science.gov (United States)

    Rawlins, B. G.; Lark, R. M.; Wragg, J.

    2012-04-01

    Regulatory authorities need to establish rapid, cost-effective methods to measure soil physical indicators - such as aggregate stability - which can be applied to large numbers of soil samples to detect changes of soil quality through monitoring. Limitations of sieve-based methods to measure the stability of soil macro-aggregates include: i) the mass of stable aggregates is measured only for a few discrete sieve/size fractions; ii) no account is taken of the fundamental particle size distribution of the sub-sampled material; and iii) they are labour intensive. These limitations could be overcome by measurements with a Laser Granulometer (LG) instrument, but this technology has not been widely applied to the quantification of aggregate stability of soils. We present a novel method to quantify macro-aggregate (1-2 mm) stability. We measure the difference between the mean weight diameter (MWD; μm) of aggregates that are stable in circulating water of low ionic strength, and the MWD of the fundamental particles of the soil to which these aggregates are reduced by sonication. The suspension is circulated rapidly through a LG analytical cell from a connected vessel for ten seconds; during this period hydrodynamic forces associated with the circulating water lead to the destruction of unstable aggregates. The MWD of stable aggregates is then measured by LG. In the next step, the aggregates - which are kept in the vessel at a minimal water circulation speed - are subjected to sonication (18 W for ten minutes) so that the vast majority of the sample is broken down into its fundamental particles. The suspension is then recirculated rapidly through the LG and the MWD measured again. We refer to the difference between these two measurements as disaggregation reduction (DR) - the reduction in MWD on disaggregation by sonication. Soil types with more stable aggregates have larger values of DR. The stable aggregates - which are resistant to both slaking and mechanical breakdown by the
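
    A minimal sketch of the DR computation follows, assuming the granulometer reports a binned, volume-weighted size distribution; the bin sizes and fractions shown are placeholders, not measurements from the study.

```python
import numpy as np

def mean_weight_diameter(sizes_um, volume_fractions):
    """MWD approximated as the volume-weighted mean of the bin mid-diameters
    (sizes in micrometres, fractions summing to 1)."""
    return float(np.sum(np.asarray(sizes_um) * np.asarray(volume_fractions)))

# Disaggregation reduction: MWD of water-stable aggregates minus MWD after sonication.
# The distributions below are illustrative placeholders only.
mwd_stable    = mean_weight_diameter([10, 50, 200, 800], [0.1, 0.2, 0.3, 0.4])
mwd_sonicated = mean_weight_diameter([10, 50, 200, 800], [0.4, 0.3, 0.2, 0.1])
dr = mwd_stable - mwd_sonicated   # larger DR -> more stable aggregates
print(f"DR = {dr:.1f} um")
```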

  15. A Peltier-based freeze-thaw device for meteorite disaggregation

    Science.gov (United States)

    Ogliore, R. C.

    2018-02-01

    A Peltier-based freeze-thaw device for the disaggregation of meteorite or other rock samples is described. Meteorite samples are kept in six water-filled cavities inside a thin-walled Al block. This block is held between two Peltier coolers that are automatically cycled between cooling and warming. One cycle takes approximately 20 min. The device can run unattended for months, allowing for ˜10 000 freeze-thaw cycles that will disaggregate meteorites even with relatively low porosity. This device was used to disaggregate ordinary and carbonaceous chondrite regolith breccia meteorites to search for micrometeoroid impact craters.

  16. Iterative reconstruction techniques for computed tomography Part 1: Technical principles

    International Nuclear Information System (INIS)

    Willemink, Martin J.; Jong, Pim A. de; Leiner, Tim; Nievelstein, Rutger A.J.; Schilham, Arnold M.R.; Heer, Linda M. de; Budde, Ricardo P.J.

    2013-01-01

    To explain the technical principles of and differences between commercially available iterative reconstruction (IR) algorithms for computed tomography (CT) in non-mathematical terms for radiologists and clinicians. Technical details of the different proprietary IR techniques were distilled from available scientific articles and manufacturers' white papers and were verified by the manufacturers. Clinical results were obtained from a literature search spanning January 2006 to January 2012, including only original research papers concerning IR for CT. IR for CT iteratively reduces noise and artefacts in either image space or raw data, or both. Reported dose reductions ranged from 23 % to 76 % compared to locally used default filtered back-projection (FBP) settings, with similar noise, artefacts, subjective, and objective image quality. IR has the potential to allow reducing the radiation dose while preserving image quality. Disadvantages of IR include blotchy image appearance and longer computational time. Future studies need to address differences between IR algorithms for clinical low-dose CT. • Iterative reconstruction technology for CT is presented in non-mathematical terms. (orig.)

  17. Fusion alpha loss diagnostic for ITER using activation technique

    Czech Academy of Sciences Publication Activity Database

    Bonheure, G.; Hult, M.; González de Orduña, R.; Vermaercke, P.; Murari, A.; Popovichev, S.; Mlynář, Jan

    2011-01-01

    Roč. 86, 6-8 (2011), s. 1298-1301 ISSN 0920-3796. [Symposium on Fusion Technology (SOFT) /26th./. Porto, 27.09.2010-01.10.2010] Institutional research plan: CEZ:AV0Z20430508 Keywords: ITER * fusion product * burning plasma diagnostics * alpha losses * activation technique Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.490, year: 2011 http://www.sciencedirect.com/science/article/pii/S0920379611002778

  18. Shear-induced aggregation or disaggregation in edible oils: Models, computer simulation, and USAXS measurements

    Science.gov (United States)

    Townsend, B.; Peyronel, F.; Callaghan-Patrachar, N.; Quinn, B.; Marangoni, A. G.; Pink, D. A.

    2017-12-01

    The effects of shear upon the aggregation of solid objects formed from solid triacylglycerols (TAGs) immersed in liquid TAG oils were modeled using Dissipative Particle Dynamics (DPD) and the predictions compared to experimental data using Ultra-Small Angle X-ray Scattering (USAXS). The solid components were represented by spheres interacting via attractive van der Waals forces and short range repulsive forces. A velocity was applied to the liquid particles nearest to the boundary, and Lees-Edwards boundary conditions were used to transmit this motion to non-boundary layers via dissipative interactions. The shear was created through the dissipative forces acting between liquid particles. Translational diffusion was simulated, and the Stokes-Einstein equation was used to relate DPD length and time scales to SI units for comparison with USAXS results. The SI values depended on how large the spherical particles were (250 nm vs. 25 nm). Aggregation was studied by (a) computing the Structure Function and (b) quantifying the number of pairs of solid spheres formed. Solid aggregation was found to be enhanced by low shear rates. As the shear rate was increased, a transition shear region was manifested in which aggregation was inhibited and shear banding was observed. Aggregation was inhibited, and eventually eliminated, by further increases in the shear rate. The magnitude of the transition region shear, γ̇_t, depended on the size of the solid particles, which was confirmed experimentally.
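
    The Stokes-Einstein step mentioned above can be sketched as follows; the temperature and oil viscosity are placeholder values used only to show how a diffusive time scale maps to SI units for the two particle sizes, which are treated here as radii for illustration.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(radius_m, temperature_K=298.15, viscosity_Pa_s=0.05):
    """Translational diffusion coefficient D = kT / (6*pi*eta*r).
    The viscosity is a placeholder for a liquid TAG oil, not a value from the paper."""
    return K_B * temperature_K / (6.0 * np.pi * viscosity_Pa_s * radius_m)

# Diffusive time scale tau = r**2 / D for each particle size (illustrative radii)
for r in (25e-9, 250e-9):
    D = stokes_einstein_D(r)
    print(f"r = {r*1e9:.0f} nm, D = {D:.3e} m^2/s, tau = {r**2 / D:.3e} s")
```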

  19. Normal and system lupus erythematosus red blood cell interactions studied by double trap optical tweezers: direct measurements of aggregation forces

    Science.gov (United States)

    Khokhlova, Maria D.; Lyubin, Eugeny V.; Zhdanov, Alexander G.; Rykova, Sophia Yu.; Sokolova, Irina A.; Fedyanin, Andrey A.

    2012-02-01

    Direct measurements of aggregation forces in the piconewton range between two red blood cells in a pair rouleau are performed under physiological conditions using double trap optical tweezers. Aggregation and disaggregation properties of healthy and pathological (systemic lupus erythematosus, SLE) blood samples are analyzed. A strong difference in aggregation speed and behavior is revealed using the proposed method, which is suggested as a promising tool for SLE monitoring at the single-cell level.

  20. Iterative reconstruction technique with reduced volume CT dose index: diagnostic accuracy in pediatric acute appendicitis

    International Nuclear Information System (INIS)

    Didier, Ryne A.; Vajtai, Petra L.; Hopkins, Katharine L.

    2015-01-01

    Iterative reconstruction technique has been proposed as a means of reducing patient radiation dose in pediatric CT. Yet, the effect of such reductions on diagnostic accuracy has not been thoroughly evaluated. This study compares accuracy of diagnosing pediatric acute appendicitis using contrast-enhanced abdominopelvic CT scans performed with traditional pediatric weight-based protocols and filtered back projection reconstruction vs. a filtered back projection/iterative reconstruction technique blend with reduced volume CT dose index (CTDIvol). Results of pediatric contrast-enhanced abdominopelvic CT scans done for pain and/or suspected appendicitis were reviewed in two groups: A, 192 scans performed with the hospital's established weight-based CT protocols and filtered back projection reconstruction; B, 194 scans performed with iterative reconstruction technique and reduced CTDIvol. Reduced CTDIvol was achieved primarily by reductions in effective tube current-time product (mAseff) and tube peak kilovoltage (kVp). CT interpretation was correlated with clinical follow-up and/or surgical pathology. CTDIvol, size-specific dose estimates (SSDE) and performance characteristics of the two CT techniques were then compared. Between groups A and B, mean CTDIvol was reduced by 45%, and mean SSDE was reduced by 46%. Sensitivity, specificity and diagnostic accuracy were 96%, 97% and 96% in group A vs. 100%, 99% and 99% in group B. Accuracy in diagnosing pediatric acute appendicitis was maintained in contrast-enhanced abdominopelvic CT scans that incorporated iterative reconstruction technique, despite reductions in mean CTDIvol and SSDE by nearly half as compared to the hospital's traditional weight-based protocols. (orig.)

  1. Bolted Ribs Analysis for the ITER Vacuum Vessel using Finite Element Submodelling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Zarzalejos, José María, E-mail: jose.zarzalejos@ext.f4e.europa.eu [External at F4E, c/Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019, Barcelona (Spain); Fernández, Elena; Caixas, Joan; Bayón, Angel [F4E, c/Josep Pla, n.2, Torres Diagonal Litoral, Edificio B3, E-08019, Barcelona (Spain); Polo, Joaquín [Iberdrola Ingeniería y Construcción, Avenida de Manoteras 20, 28050 Madrid (Spain); Guirao, Julio [Numerical Analysis Technologies, S L., Marqués de San Esteban 52, Entlo, 33209 Gijon (Spain); García Cid, Javier [Iberdrola Ingeniería y Construcción, Avenida de Manoteras 20, 28050 Madrid (Spain); Rodríguez, Eduardo [Mechanical Engineering Department EPSIG, University of Oviedo, Gijon (Spain)

    2014-10-15

    Highlights: • The ITER Vacuum Vessel Bolted Ribs assemblies are modelled using Finite Elements. • Finite Element submodelling techniques are used. • Stress results are obtained for all the assemblies and a post-processing is performed. • All the elements of the assemblies are compliant with the regulatory provisions. • Submodelling is a time-efficient solution to verify the structural integrity of this type of structures. - Abstract: The ITER Vacuum Vessel (VV) primary function is to enclose the plasmas produced by the ITER Tokamak. Since it acts as the first radiological barrier of the plasma, it is classified as a class 2 welded box structure, according to RCC-MR 2007. The VV is made of an inner and an outer D-shape, 60 mm-thick double shell connected through thick massive bars (housings) and toroidal and poloidal structural stiffening ribs. In order to provide neutronic shielding to the ex-vessel components, the space between shells is filled with borated steel plates, called In-Wall Shielding (IWS) blocks, and water. In general, these blocks are connected to the IWS ribs which are connected to adjacent housings. The development of a Finite Element model of the ITER VV including all its components in detail is unaffordable from the computational point of view due to the large number of degrees of freedom it would require. This limitation can be overcome by using submodelling techniques to simulate the behaviour of the bolted ribs assemblies. Submodelling is a Finite Element technique which allows getting more accurate results in a given region of a coarse model by generating an independent, finer model of the region under study. In this paper, the methodology and several simulations of the VV bolted ribs assemblies using submodelling techniques are presented. A stress assessment has been performed for the elements involved in the assembly considering possible types of failure and including stress classification and categorization techniques to analyse

  2. Development of core sampling technique for ITER Type B radwaste

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. G.; Hong, K. P.; Oh, W. H.; Park, M. C.; Jung, S. H.; Ahn, S. B. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    Type B radwaste (intermediate-level and long-lived radioactive waste) coming from the ITER vacuum vessel is to be treated and stored in the basement of the hot cell building. The Type B radwaste treatment process is composed of buffer storage, cutting, sampling/tritium measurement, tritium removal, characterization, pre-packaging, inspection/decontamination, and storage, etc. The cut slices of Type B radwaste components generated in the cutting process undergo a sampling process before and after the tritium removal process. The purpose of sampling is to obtain small pieces of samples in order to investigate the tritium content and concentration of the Type B radwaste. Core sampling, one of the candidate sampling techniques to be applied in the ITER hot cell, is suitable for metal that is not thick (less than 50 mm) and requires no coolant. The materials tested were SS316L and CuCrZr, in order to simulate ITER Type B radwaste. In core sampling, substantial secondary waste in the form of cutting chips will unavoidably be produced. Thus, the core sampling machine will have to be equipped with a disposal system such as suction equipment. Core sampling is considered unfavorable in terms of tool wear compared to conventional drilling.

  3. Self-assembled lipoprotein based gold nanoparticles for detection and photothermal disaggregation of β-amyloid aggregates

    KAUST Repository

    Martins, P. A. T.; Alsaiari, Shahad K.; Julfakyan, Khachatur; Nie, Z.; Khashab, Niveen M.

    2017-01-01

    We present a reconstituted lipoprotein-based nanoparticle platform comprising a curcumin fluorescent motif and an NIR responsive gold core. This multifunctional nanosystem is successfully used for aggregation-dependent fluorescence detection and photothermal disassembly of insoluble amyloid aggregates.

  4. Self-assembled lipoprotein based gold nanoparticles for detection and photothermal disaggregation of β-amyloid aggregates

    KAUST Repository

    Martins, P. A. T.

    2017-01-10

    We present a reconstituted lipoprotein-based nanoparticle platform comprising a curcumin fluorescent motif and an NIR responsive gold core. This multifunctional nanosystem is successfully used for aggregation-dependent fluorescence detection and photothermal disassembly of insoluble amyloid aggregates.

  5. Disaggregated Futures-Only Commitments of Traders

    Data.gov (United States)

    Commodity Futures Trading Commission — The Disaggregated Futures-Only Commitments of Traders dataset provides a breakdown of each week's open interest for agriculture, energy, metals, lumber, and...

  6. Iterative reconstruction technique with reduced volume CT dose index: diagnostic accuracy in pediatric acute appendicitis

    Energy Technology Data Exchange (ETDEWEB)

    Didier, Ryne A. [Oregon Health and Science University, Department of Diagnostic Radiology, DC7R, Portland, OR (United States); Vajtai, Petra L. [Oregon Health and Science University, Department of Pediatrics, Portland, OR (United States); Oregon Health and Science University, Department of Diagnostic Radiology, DC7R, Portland, OR (United States); Hopkins, Katharine L. [Oregon Health and Science University, Department of Diagnostic Radiology, DC7R, Portland, OR (United States); Oregon Health and Science University, Department of Pediatrics, Portland, OR (United States)

    2014-07-05

    Iterative reconstruction technique has been proposed as a means of reducing patient radiation dose in pediatric CT. Yet, the effect of such reductions on diagnostic accuracy has not been thoroughly evaluated. This study compares accuracy of diagnosing pediatric acute appendicitis using contrast-enhanced abdominopelvic CT scans performed with traditional pediatric weight-based protocols and filtered back projection reconstruction vs. a filtered back projection/iterative reconstruction technique blend with reduced volume CT dose index (CTDI{sub vol}). Results of pediatric contrast-enhanced abdominopelvic CT scans done for pain and/or suspected appendicitis were reviewed in two groups: A, 192 scans performed with the hospital's established weight-based CT protocols and filtered back projection reconstruction; B, 194 scans performed with iterative reconstruction technique and reduced CTDI{sub vol}. Reduced CTDI{sub vol} was achieved primarily by reductions in effective tube current-time product (mAs{sub eff}) and tube peak kilovoltage (kVp). CT interpretation was correlated with clinical follow-up and/or surgical pathology. CTDI{sub vol}, size-specific dose estimates (SSDE) and performance characteristics of the two CT techniques were then compared. Between groups A and B, mean CTDI{sub vol} was reduced by 45%, and mean SSDE was reduced by 46%. Sensitivity, specificity and diagnostic accuracy were 96%, 97% and 96% in group A vs. 100%, 99% and 99% in group B. Accuracy in diagnosing pediatric acute appendicitis was maintained in contrast-enhanced abdominopelvic CT scans that incorporated iterative reconstruction technique, despite reductions in mean CTDI{sub vol} and SSDE by nearly half as compared to the hospital's traditional weight-based protocols. (orig.)

  7. Schemes for aggregating preferential tariffs in agriculture,export volume effects and African LDCs

    DEFF Research Database (Denmark)

    Yu, Wusheng

    Trade-weighted aggregated tariffs (TWPT) are often used in analyzing the issues of erosion of non-reciprocal preferences. This paper argues that commonly used TWPTs under-estimate the true protection on imports originating from preference-receiving countries, including LDCs. When used in numerical simulations of preference erosion and expansion scenarios, the TWPTs tend to incorrectly downplay the preference erosion effect of MFN tariff cuts, and understate the export promotion effect of expanding preferences. In light of these claims, an alternative aggregation scheme is developed to calculate aggregated preferential tariffs imposed by a number of developed countries on African LDCs. These are shown to be higher than the TWPTs aggregated from the same disaggregated tariff data set. Numerical simulations conducted with the two sets of aggregated tariffs confirm the two claims and suggest that TWPTs may lead...

  8. Tomographic reconstruction by using FPSIRT (Fast Particle System Iterative Reconstruction Technique)

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, Icaro Valgueiro M.; Melo, Silvio de Barros; Dantas, Carlos; Lima, Emerson Alexandre; Silva, Ricardo Martins; Cardoso, Halisson Alberdan C., E-mail: ivmm@cin.ufpe.br, E-mail: sbm@cin.ufpe.br, E-mail: rmas@cin.ufpe.br, E-mail: hacc@cin.ufpe.br, E-mail: ccd@ufpe.br, E-mail: eal@cin.ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)

    2015-07-01

    The PSIRT (Particle System Iterative Reconstruction Technique) is a method of tomographic image reconstruction primarily designed to work with configurations suitable for industrial applications. A particle system is an optimization technique, inspired by real physical systems, that associates with the reconstructing material a set of particles with certain physical features, subject to a force field, which can produce movement. The system constantly updates the set of particles by repositioning them in such a way as to approach equilibrium. The elastic potential along a trajectory is a function of the difference between the attenuation coefficient in the current configuration and the corresponding input data. PSIRT has been successfully used to reconstruct simulated and real objects subject to sets of parallel and fan-beam lines at different angles, representing typical gamma-ray tomographic arrangements. One of PSIRT's limitations was its performance, too slow for real-time scenarios. This work presents a reformulation of PSIRT's computational model, which grants the new algorithm, the FPSIRT - Fast Particle System Iterative Reconstruction Technique, a performance up to 200 times faster than PSIRT's. A comparison of their application to real and simulated data from the HSGT, the High Speed Gamma Tomograph, is also presented. (author)

  9. Tomographic reconstruction by using FPSIRT (Fast Particle System Iterative Reconstruction Technique)

    International Nuclear Information System (INIS)

    Moreira, Icaro Valgueiro M.; Melo, Silvio de Barros; Dantas, Carlos; Lima, Emerson Alexandre; Silva, Ricardo Martins; Cardoso, Halisson Alberdan C.

    2015-01-01

    The PSIRT (Particle System Iterative Reconstruction Technique) is a method of tomographic image reconstruction primarily designed to work with configurations suitable for industrial applications. A particle system is an optimization technique, inspired by real physical systems, that associates with the reconstructing material a set of particles with certain physical features, subject to a force field, which can produce movement. The system constantly updates the set of particles by repositioning them in such a way as to approach equilibrium. The elastic potential along a trajectory is a function of the difference between the attenuation coefficient in the current configuration and the corresponding input data. PSIRT has been successfully used to reconstruct simulated and real objects subject to sets of parallel and fan-beam lines at different angles, representing typical gamma-ray tomographic arrangements. One of PSIRT's limitations was its performance, too slow for real-time scenarios. This work presents a reformulation of PSIRT's computational model, which grants the new algorithm, the FPSIRT - Fast Particle System Iterative Reconstruction Technique, a performance up to 200 times faster than PSIRT's. A comparison of their application to real and simulated data from the HSGT, the High Speed Gamma Tomograph, is also presented. (author)

  10. ESTIMATION OF COBB-DOUGLAS AND TRANSLOG PRODUCTION FUNCTIONS WITH CAPITAL AND GENDER DISAGGREGATED LABOR INPUTS IN THE USA

    Directory of Open Access Journals (Sweden)

    Gertrude Sebunya Muwanga

    2018-01-01

    Full Text Available This is an empirical investigation of the homogeneity of gender disaggregated labor using the Cobb-Douglas and single/multi-factor translog production functions, and labor productivity functions, for the USA. The results based on the single-factor translog model indicated that: an increase in the capital/female labor ratio increases aggregate output; male labor is more productive than female labor, which is more productive than capital; a simultaneous increase in the quantity allocated and the productivity of an input leads to an increase in output; female labor productivity has grown more slowly than male labor productivity; it is much easier to substitute male labor for capital compared to female labor; and the three inputs are neither perfect substitutes nor perfect complements. As a consequence, male and female labor are not homogeneous inputs. Efforts to investigate the factors influencing gender disaggregated labor productivity, and to design policies that achieve gender parity in numbers/productivity in the labor force and increase the ease of substitutability between male and female labor, are required.

  11. Current status on detail design and fabrication techniques development of ITER blanket shield block in Korea

    International Nuclear Information System (INIS)

    Kim, Duck Hoi; Cho, Seungyon; Ahn, Mu-Young; Lee, Eun-Seok; Jung, Ki Jung

    2007-01-01

    The allocation of components and systems to be delivered to ITER on an in-kind basis was agreed between the ITER Parties. Among the parties, Korea agreed to procure inboard blanket modules 1, 2 and 6, which consist of the FW and the shield block. For the shield block, the detailed design and the development of fabrication techniques have been undertaken in Korea. In particular, a manufacturing feasibility study on the shield block was performed and some technical issues for the fabrication were identified. Based on these results, fabrication techniques using EB welding are being developed. Meanwhile, the detailed design of the inboard standard module has been carried out: the flow driver design was optimized to improve the cooling performance, and a thermo-hydraulic analysis on a half block of the inboard standard module was performed. In this study, the current status and some results from the development of fabrication techniques for the ITER blanket shield block are described. The detailed design activity and results on the shield block are also introduced herein. (orig.)

  12. Criticality calculations by source-collision iteration technique for cylindrical systems

    International Nuclear Information System (INIS)

    Sundaram, V.K.; Gopinath, D.V.

    1977-01-01

    A fast-converging iterative technique using first collision probabilities is presented for obtaining criticality parameters in two-region cylindrical systems with a multigroup structure in neutron energy. The space transmission matrix is obtained partly analytically and partly numerically through evaluation of a single-fold integral. Critical dimensions for condensed systems of uranium and plutonium computed using this method are presented and compared with published values.
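
    The collision-probability details are not reproduced in the abstract; as a generic illustration of the kind of fission-source iteration used in criticality calculations, the sketch below applies plain power iteration to a small production matrix to obtain the dominant eigenvalue (the multiplication-factor analogue). The matrix values are arbitrary placeholders, not the paper's two-region cylindrical model.

        import numpy as np

        # Generic fission-source (power) iteration for the dominant eigenvalue k.
        # H is a placeholder "production" operator; it does not reproduce the
        # collision-probability formulation of the paper.
        H = np.array([[0.6, 0.3],
                      [0.2, 0.7]])

        def source_iteration(H, tol=1e-10, max_iter=1000):
            s = np.ones(H.shape[0])             # initial source guess
            k = 1.0
            for _ in range(max_iter):
                s_new = H @ s
                k_new = s_new.sum() / s.sum()   # eigenvalue estimate
                s_new /= s_new.sum()            # normalise the source
                if abs(k_new - k) < tol:
                    return k_new, s_new
                k, s = k_new, s_new
            return k, s

        k_eff, source = source_iteration(H)
        print(f"dominant eigenvalue ~ {k_eff:.6f}")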

  13. Curcumin Inhibits Tau Aggregation and Disintegrates Preformed Tau Filaments in vitro.

    Science.gov (United States)

    Rane, Jitendra Subhash; Bhaumik, Prasenjit; Panda, Dulal

    2017-01-01

    The pathological aggregation of tau is a common feature of most of the neuronal disorders including frontotemporal dementia, Parkinson's disease, and Alzheimer's disease. The inhibition of tau aggregation is considered to be one of the important strategies for treating these neurodegenerative diseases. Curcumin, a natural polyphenolic molecule, has been reported to have neuroprotective ability. In this work, curcumin was found to bind to adult tau and fetal tau with dissociation constants of 3.3±0.4 and 8±1 μM, respectively. Molecular docking studies indicated a putative binding site of curcumin in the microtubule-binding region of tau. Using several complementary techniques, including dynamic light scattering, thioflavin S fluorescence, 90° light scattering, electron microscopy, and atomic force microscopy, curcumin was found to inhibit the aggregation of tau. The dynamic light scattering analysis and atomic force microscopic images revealed that curcumin inhibits the oligomerization of tau. Curcumin also disintegrated preformed tau oligomers. Using Far-UV circular dichroism, curcumin was found to inhibit β-sheet formation in tau, indicating that curcumin inhibits an initial step of tau aggregation. In addition, curcumin inhibited tau fibril formation. Furthermore, the effect of curcumin on the preformed tau filaments was analyzed by atomic force microscopy, transmission electron microscopy, and 90° light scattering. Curcumin treatment disintegrated preformed tau filaments. The results indicated that curcumin inhibited the oligomerization of tau and could disaggregate tau filaments.

  14. Development of an Asset Value Map for Disaster Risk Assessment in China by Spatial Disaggregation Using Ancillary Remote Sensing Data.

    Science.gov (United States)

    Wu, Jidong; Li, Ying; Li, Ning; Shi, Peijun

    2018-01-01

    The extent of economic losses due to a natural hazard and disaster depends largely on the spatial distribution of asset values in relation to the hazard intensity distribution within the affected area. Given that statistical data on asset value are collected by administrative units in China, generating spatially explicit asset exposure maps remains a key challenge for rapid postdisaster economic loss assessment. The goal of this study is to introduce a top-down (or downscaling) approach to disaggregate administrative-unit level asset value to grid-cell level. To do so, finding the highly correlated "surrogate" indicators is the key. A combination of three data sets (nighttime light grid, LandScan population grid, and road density grid) is used as ancillary asset density distribution information for spatializing the asset value. As a result, a high spatial resolution asset value map of China for 2015 is generated. The spatial data set contains aggregated economic value at risk at 30 arc-second spatial resolution. Accuracy of the spatial disaggregation reflects redistribution errors introduced by the disaggregation process as well as errors from the original ancillary data sets. The overall accuracy of the results proves to be promising. The example of using the developed disaggregated asset value map in exposure assessment of watersheds demonstrates that the data set offers immense analytical flexibility for overlay analysis according to the hazard extent. This product will help current efforts to analyze spatial characteristics of exposure and to uncover the contributions of both physical and social drivers of natural hazard and disaster across space and time. © 2017 Society for Risk Analysis.
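
    The top-down disaggregation described above can be illustrated with a small numeric sketch: an administrative unit's total asset value is redistributed to its grid cells in proportion to a combined weight built from the ancillary layers. The equal-weight combination and the numbers below are simplified assumptions for illustration, not the paper's calibrated procedure.

        import numpy as np

        # Hypothetical ancillary layers for the grid cells of one administrative unit.
        night_light  = np.array([5.0, 20.0, 60.0, 15.0])    # nighttime light intensity
        population   = np.array([100., 800., 2500., 600.])  # LandScan-style counts
        road_density = np.array([0.1, 0.5, 1.2, 0.4])       # km of road per cell

        unit_asset_value = 1_000.0    # total asset value reported for the unit

        # Combine the normalised layers into a single proxy weight (equal weights
        # assumed here; a real application would calibrate this combination).
        layers = [night_light, population, road_density]
        weight = sum(layer / layer.sum() for layer in layers) / len(layers)

        cell_asset_value = unit_asset_value * weight / weight.sum()
        print(cell_asset_value, cell_asset_value.sum())   # sums back to the unit total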

  15. Analytical and laser scanning techniques to determine shape properties of aggregates used in pavements

    CSIR Research Space (South Africa)

    Komba, Julius J

    2013-06-01

    Full Text Available and volume of an aggregate particle, the sphericity computed by using orthogonal dimensions of an aggregate particle, and the flat and elongated ratio computed by using longest and smallest dimensions of an aggregate particle. The second approach employed.... Further validation of the laser-based technique was achieved by correlating the laser-based aggregate form indices with the results from two current standard tests; the flakiness index and the flat and elongated particles ratio tests. The laser...

  16. Solution of the fully fuzzy linear systems using iterative techniques

    International Nuclear Information System (INIS)

    Dehghan, Mehdi; Hashemi, Behnam; Ghatee, Mehdi

    2007-01-01

    This paper mainly intends to discuss the iterative solution of fully fuzzy linear systems which we call FFLS. We employ Dubois and Prade's approximate arithmetic operators on LR fuzzy numbers for finding a positive fuzzy vector x-tilde which satisfies A-tilde x-tilde = b-tilde, where A-tilde and b-tilde are a fuzzy matrix and a fuzzy vector, respectively. Please note that the positivity assumption is not so restrictive in applied problems. We transform FFLS and propose iterative techniques such as Richardson, Jacobi, Jacobi overrelaxation (JOR), Gauss-Seidel, successive overrelaxation (SOR), accelerated overrelaxation (AOR), symmetric and unsymmetric SOR (SSOR and USSOR) and extrapolated modified Aitken (EMA) for solving FFLS. In addition, the methods of Newton, quasi-Newton and conjugate gradient are proposed from nonlinear programming for solving a fully fuzzy linear system. Various numerical examples are also given to show the efficiency of the proposed schemes
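
    The fuzzy (LR) arithmetic itself is beyond a short example, but the classical iterations the paper adapts (Jacobi, Gauss-Seidel, SOR) have the familiar crisp form sketched below. This is a generic Jacobi solver for Ax = b, not the fuzzy version developed in the paper, and the small test matrix is an arbitrary diagonally dominant example.

        import numpy as np

        def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
            """Plain (crisp) Jacobi iteration for Ax = b."""
            n = len(b)
            x = np.zeros(n) if x0 is None else x0.astype(float)
            D = np.diag(A)
            R = A - np.diagflat(D)             # off-diagonal part of A
            for _ in range(max_iter):
                x_new = (b - R @ x) / D        # one Jacobi sweep
                if np.linalg.norm(x_new - x, np.inf) < tol:
                    return x_new
                x = x_new
            return x

        A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant -> convergent
        b = np.array([1.0, 2.0])
        print(jacobi(A, b), np.linalg.solve(A, b))   # both give the same solution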

  17. Software for the grouped optimal aggregation technique

    Science.gov (United States)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.

  18. Value of time determination for the city of Alexandria based on a disaggregate binary mode choice model

    Directory of Open Access Journals (Sweden)

    Mounir Mahmoud Moghazy Abdel-Aal

    2017-12-01

    Full Text Available In the travel demand modeling field, mode choice is the most important decision that affects the resulting road congestion. The behavioral nature of disaggregate models and their advantages over aggregate models have led to their extensive use. This paper proposes a framework to determine the value of time (VoT) for the city of Alexandria through calibrating a disaggregate linear-in-parameters utility-based binary logit mode choice model of the city. The mode attributes (travel time and travel cost) along with traveler attributes (car ownership and income) were selected as the utility attributes of the basic model formulation, which included 5 models. Three additional alternative utility formulations based on transformations of the mode attributes, including relative travel cost (cost divided by income), log(travel time), and the combination of the two transformations together, were introduced. The parameter estimation procedure was based on the likelihood maximization technique and was performed in EXCEL. Out of the 20 models estimated, only 2 are considered successful in terms of the correct signs of the parameter estimates and the magnitude of their significance (t-statistic values). The determination of the VoT also serves in the model validation. The best two models estimated the value of time at LE 11.30/hr and LE 14.50/hr, with relative errors of +3.7% and +33.0%, respectively, of the hourly salary of LE 10.9/hr. The two proposed models prove to be sensitive to trip time and income levels as factors affecting the choice mechanism. A sensitivity analysis was performed and showed that the model with the higher relative error is marginally more robust. Keywords: Transportation modeling, Binary mode choice, Parameter estimation, Value of time, Likelihood maximization, Sensitivity analysis
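
    A minimal sketch of the estimation idea: a binary logit with linear-in-parameters utility is fitted by maximising the log-likelihood, after which the value of time follows as the ratio of the time and cost coefficients. The synthetic data, coefficient values and variable names below are illustrative assumptions, not the Alexandria survey or the paper's spreadsheet procedure.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n = 500
        # Synthetic level-of-service differences (mode 1 minus mode 0), hypothetical units.
        d_time = rng.normal(10, 5, n)     # extra travel time of mode 1 (min)
        d_cost = rng.normal(2, 1, n)      # extra travel cost of mode 1 (LE)
        true_beta = np.array([-0.4, -0.10, -0.05])     # constant, cost, time
        util = true_beta[0] + true_beta[1] * d_cost + true_beta[2] * d_time
        choice = (rng.random(n) < 1 / (1 + np.exp(-util))).astype(float)

        def neg_loglik(beta):
            v = beta[0] + beta[1] * d_cost + beta[2] * d_time
            p = 1 / (1 + np.exp(-v))
            return -np.sum(choice * np.log(p) + (1 - choice) * np.log(1 - p))

        res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
        b0, b_cost, b_time = res.x
        vot_per_hour = 60 * b_time / b_cost    # ratio of marginal utilities, LE per hour
        print(res.x, vot_per_hour)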

  19. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography.

    Science.gov (United States)

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz

    2013-09-01

    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.

  20. Decomposing the variation of aggregate electricity intensity in Spanish industry

    International Nuclear Information System (INIS)

    Gonzalez, P.F.; Suarez, R.P.

    2003-01-01

    Several papers have dealt with methodological and application issues related to techniques for decomposing changes in environmental indicators. This paper aims to decompose changes in electricity intensity in Spanish industry and to explain the factors that contribute to these changes. Focusing on an energy intensity approach based on Divisia indices, we began by reviewing the two general parametric Divisia methods and six specific cases. In order to avoid obtaining significantly different results by using differing methods, all of them have been applied to Spanish data. Also two different disaggregation levels have been taken into consideration. Combined with electricity price analysis, the results of this paper indicate the poor contribution of structural change to substantial reductions in aggregate electricity intensity, and underline the role of innovation, development, diffusion and access to more efficient technologies as main contributors to the reduction of the energy/production ratio. (author)
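
    A small sketch of the kind of index decomposition involved, using the additive log-mean Divisia (LMDI) form to split a change in aggregate electricity intensity into a structural effect and a sectoral-intensity effect. The two-sector numbers are invented for illustration, and the paper's exact parametric Divisia variants are not reproduced.

        import numpy as np

        def logmean(a, b):
            """Logarithmic mean used in LMDI decomposition."""
            return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

        # Two hypothetical industrial sectors, base year (0) and final year (T).
        share0, share_T = np.array([0.6, 0.4]), np.array([0.5, 0.5])      # output shares
        intens0, intens_T = np.array([2.0, 0.8]), np.array([1.6, 0.7])    # kWh per unit output

        w0, wT = share0 * intens0, share_T * intens_T   # sector contributions to aggregate intensity
        L = np.array([logmean(a, b) for a, b in zip(wT, w0)])

        structural = np.sum(L * np.log(share_T / share0))    # effect of the changing output mix
        intensity  = np.sum(L * np.log(intens_T / intens0))  # effect of sectoral intensity change

        total_change = wT.sum() - w0.sum()
        print(structural + intensity, total_change)          # the two effects add up exactly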

  1. Iterative categorization (IC): a systematic technique for analysing qualitative data

    Science.gov (United States)

    2016-01-01

    Abstract The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155

  2. Tritium absorption and desorption in ITER relevant materials: comparative study of tungsten dust and massive samples

    Energy Technology Data Exchange (ETDEWEB)

    Grisolia, C., E-mail: christian.grisolia@cea.fr [CEA, IRFM, F-13108 Saint Paul lez Durance (France); Hodille, E. [CEA, IRFM, F-13108 Saint Paul lez Durance (France); Chene, J.; Garcia-Argote, S.; Pieters, G.; El-Kharbachi, A. [CEA Saclay, SCBM, iBiTec-S, PC n° 108, 91191 Gifsur-Yvette (France); Marchetti, L.; Martin, F.; Miserque, F. [CEA Saclay, DEN/DPC/SCCME/LECA, F-91191 Gif-sur-Yvette (France); Vrel, D.; Redolfi, M. [LSPM, Université Paris 13, Sorbonne Paris Cité, UPR 3407 CNRS, 93430 Villetaneuse (France); Malard, V. [CEA, DSV, IBEB, Lab Biochim System Perturb, Bagnols-sur-Cèze F-30207 (France); Dinescu, G.; Acsente, T. [NILPRP, 409 Atomistilor Street, 77125 Magurele, Bucharest (Romania); Gensdarmes, F.; Peillon, S. [IRSN, PSN-RES/SCA/LPMA, Saclay, Gif-sur-Yvette, 91192 (France); Pegourié, B. [CEA, IRFM, F-13108 Saint Paul lez Durance (France); Rousseau, B. [CEA Saclay, SCBM, iBiTec-S, PC n° 108, 91191 Gifsur-Yvette (France)

    2015-08-15

    Tritium adsorption and desorption from well-characterized tungsten dust are presented. The dust samples used are of different types, prepared by planetary milling and by an aggregation technique in plasma. For the milled powder, the surface specific area (SSA) is 15.5 m{sup 2}/g. The particles are polydisperse, with a maximum size of 200 nm for the milled powder and 100 nm for the aggregation one. Prior to tritiation the particles are carefully de-oxidized. Both samples exhibit a high tritium inventory, from 5 GBq/g to 35 GBq/g. From comparison with massive samples, and considering that the tritium inventory increases with SSA, it is shown that surface effects are predominant in the tritium trapping process. Extrapolation to the ITER environment is undertaken with the help of a Macroscopic Rate Equation model. It is shown that, during the lifetime of ITER, these particles can rapidly exceed 1 GBq/g.

  3. Disaggregated Futures and Options Commitments of Traders

    Data.gov (United States)

    Commodity Futures Trading Commission — The Disaggregated Futures and Options Commitments of Traders dataset provides a breakdown of each week's open interest for agriculture, energy, metals, lumber, and...

  4. Amorphous Calcium Phosphate Formation and Aggregation Process Revealed by Light Scattering Techniques

    Directory of Open Access Journals (Sweden)

    Vida Čadež

    2018-06-01

    Full Text Available Amorphous calcium phosphate (ACP) attracts attention as a precursor of crystalline calcium phosphates (CaPs) formation in vitro and in vivo, as well as due to its excellent biological properties. Its formation can be considered to be an aggregation process. Although aggregation of ACP is of interest both for gaining a fundamental understanding of biomineral formation and for the synthesis of novel materials, it has still not been investigated in detail. In this work, ACP aggregation was followed by two widely applied techniques suitable for following nanoparticle aggregation in general: dynamic light scattering (DLS) and laser diffraction (LD). In addition, ACP formation was followed by potentiometric measurements, and the formed precipitates were characterized by Fourier transform infrared spectroscopy (FTIR), powder X-ray diffraction (PXRD), transmission electron microscopy (TEM), and atomic force microscopy (AFM). The results showed that aggregation of ACP particles is a process which, from the earliest stages, simultaneously takes place over a wide range of length scales, from nanometers to micrometers, leading to a highly polydisperse precipitation system, with the polydispersity and the vol. % of larger aggregates increasing with concentration. The obtained results provide insight into ways of regulating ACP, and consequently CaP, formation by controlling aggregation on the scale of interest.

  5. A Unique Technique to get Kaprekar Iteration in Linear Programming Problem

    Science.gov (United States)

    Sumathi, P.; Preethy, V.

    2018-04-01

    This paper explores a frivolous number popularly known as the Kaprekar constant, together with Kaprekar numbers. A large number of courses and different classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. An approach for obtaining the minimum and maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers is developed through linear programming techniques.
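
    A short sketch of the Kaprekar routine itself, counting the iterations needed for a four-digit number (with at least two distinct digits) to reach the constant 6174; the linear-programming treatment in the paper is not reproduced here.

        def kaprekar_steps(n):
            """Number of Kaprekar iterations taking a four-digit number to 6174."""
            steps = 0
            while n != 6174:
                digits = f"{n:04d}"
                if len(set(digits)) == 1:
                    raise ValueError("repdigits such as 1111 never reach 6174")
                big = int("".join(sorted(digits, reverse=True)))
                small = int("".join(sorted(digits)))
                n = big - small
                steps += 1
            return steps

        print(kaprekar_steps(3524))   # 3 iterations: 3087 -> 8352 -> 6174
        print(max(kaprekar_steps(i) for i in range(1000, 10000)
                  if len(set(f"{i:04d}")) > 1))   # maximum number of iterations (7)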

  6. Study on assembly techniques and procedures for ITER tokamak device

    International Nuclear Information System (INIS)

    Obara, Kenjiro; Kakudate, Satoshi; Shibanuma, Kiyoshi; Sago, Hiromi; Ue, Koichi; Shimizu, Katsusuke; Onozuka, Masanori

    2006-06-01

    The International Thermonuclear Experimental Reactor (ITER) tokamak is mainly composed of a doughnut-shaped vacuum vessel (VV), four types of superconducting coils, such as the toroidal field coils (TF coils) arranged around the VV, and in-vessel components such as the blanket and divertor. The dimensions and weights of the respective components are of the order of a few tens of meters and several hundred tons. In addition, the whole tokamak assembly composed of these components is roughly 26 m in diameter, 18 m in height and over 16,500 tons in total weight. On the other hand, the positioning and assembly tolerances of the VV and the TF coils are required to achieve a high accuracy of ±3 mm in spite of the large size and heavy weight. The assembly procedures and techniques for the ITER tokamak are therefore studied, taking account of the tolerance requirements as well as the configuration of the tokamak with its large size and heavy weight. Based on the above background, assembly procedures and techniques that are able to assemble the tokamak with high accuracy are described in the present report. The tokamak assembly operations are categorized into six work breakdown structures (WBS), i.e., (1) preparation for assembly operations, (2) sub-assembly of the 40deg sector, composed of the 40deg VV sector, two TF coils and the thermal shield between the VV and TF coil, at the assembly hall, (3) completion of the doughnut-shaped tokamak assembly composed of nine 40deg sectors in the cryostat at the tokamak pit, (4) measurement of positioning and accuracy after the completion of the tokamak assembly, (5) installation of the ex-vessel components, and (6) installation of in-vessel components. In the present report, the two assembly operations (2) and (3) of the above six WBS, which are the most critical in the tokamak assembly, are mainly described. The report describes the following newly developed tokamak assembly procedures and techniques, jigs and tools for assembly and metrology

  7. Conformational Analysis of Misfolded Protein Aggregation by FRET and Live-Cell Imaging Techniques

    Directory of Open Access Journals (Sweden)

    Akira Kitamura

    2015-03-01

    Full Text Available Cellular homeostasis is maintained by several types of protein machinery, including molecular chaperones and proteolysis systems. Dysregulation of the proteome disrupts homeostasis in cells, tissues, and the organism as a whole, and has been hypothesized to cause neurodegenerative disorders, including amyotrophic lateral sclerosis (ALS) and Huntington’s disease (HD). A hallmark of neurodegenerative disorders is formation of ubiquitin-positive inclusion bodies in neurons, suggesting that the aggregation process of misfolded proteins changes during disease progression. Hence, high-throughput determination of soluble oligomers during the aggregation process, as well as the conformation of sequestered proteins in inclusion bodies, is essential for elucidation of physiological regulation mechanism and drug discovery in this field. To elucidate the interaction, accumulation, and conformation of aggregation-prone proteins, in situ spectroscopic imaging techniques, such as Förster/fluorescence resonance energy transfer (FRET), fluorescence correlation spectroscopy (FCS), and bimolecular fluorescence complementation (BiFC), have been employed. Here, we summarize recent reports in which these techniques were applied to the analysis of aggregation-prone proteins (in particular their dimerization, interactions, and conformational changes), and describe several fluorescent indicators used for real-time observation of physiological states related to proteostasis.

  8. Convective aggregation in realistic convective-scale simulations

    Science.gov (United States)

    Holloway, Christopher E.

    2017-06-01

    To investigate the real-world relevance of idealized-model convective self-aggregation, five 15-day cases of real organized convection in the tropics are simulated. These include multiple simulations of each case to test sensitivities of the convective organization and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. These simulations are compared to self-aggregation seen in the same model configured to run in idealized radiative-convective equilibrium. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy shows that control runs have significant positive contributions to organization from radiation and negative contributions from surface fluxes and transport, similar to idealized runs once they become aggregated. Despite identical lateral boundary conditions for all experiments in each case, systematic differences in mean column water vapor (CWV), CWV distribution shape, and CWV autocorrelation length scale are found between the different sensitivity runs, particularly for those without interactive radiation, showing that there are at least some similarities in sensitivities to these feedbacks in both idealized and realistic simulations (although the organization of precipitation shows less sensitivity to interactive radiation). The magnitudes and signs of these systematic differences are consistent with a rough equilibrium between (1) equalization due to advection from the lateral boundaries and (2) disaggregation due to the absence of interactive radiation, implying disaggregation rates comparable to those in idealized runs with aggregated initial conditions and noninteractive radiation. This points to a plausible similarity in the way that radiation feedbacks maintain aggregated convection in both idealized simulations and the real world. Plain Language Summary: Understanding the processes that lead to the organization of tropical rainstorms is an important challenge for weather

  9. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)

  10. Iterative categorization (IC): a systematic technique for analysing qualitative data.

    Science.gov (United States)

    Neale, Joanne

    2016-06-01

    The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  11. Iterative Method of Regularization with Application of Advanced Technique for Detection of Contours

    International Nuclear Information System (INIS)

    Niedziela, T.; Stankiewicz, A.

    2000-01-01

    This paper proposes a novel iterative method of regularization with application of an advanced technique for detection of contours. To eliminate noise, the properties of convolution of functions are utilized. The method can be accomplished in a simple neural cellular network, which creates the possibility of extraction of contours by automatic image recognition equipment. (author)

  12. Localization of SDGs through Disaggregation of KPIs

    Directory of Open Access Journals (Sweden)

    Manohar Patole

    2018-03-01

    Full Text Available The United Nations’ Agenda 2030 and Sustainable Development Goals (SDGs) pick up where the Millennium Development Goals (MDGs) left off. The SDGs set forth a formidable task for the global community and international sustainable development over the next 15 years. Learning from the successes and failures of the MDGs, government officials, development experts, and many other groups understood that localization is necessary to accomplish the SDGs, but how and what to localize remain questions to be answered. The UN Inter-Agency and Expert Group on Sustainable Development Goals (UN IAEG-SDGs) sought to answer these questions through development of metadata behind the 17 goals, 169 associated targets and corresponding indicators of the SDGs. Data management is key to understanding how and what to localize, but, to do it properly, the data and metadata need to be properly disaggregated. This paper reviews the utilization of disaggregation analysis for localization and demonstrates the process of identifying opportunities for subnational interventions to achieve multiple targets and indicators through the formation of new integrated key performance indicators. A case study on SDG 6: Clean Water and Sanitation is used to elucidate these points. The examples presented here are only illustrative—future research and the development of an analytical framework for localization and disaggregation of the SDGs would be a valuable tool for national and local governments, implementing partners and other interested parties.

  13. Comparative Analysis of Rank Aggregation Techniques for Metasearch Using Genetic Algorithm

    Science.gov (United States)

    Kaur, Parneet; Singh, Manpreet; Singh Josan, Gurpreet

    2017-01-01

    Rank Aggregation techniques have found wide applications for metasearch along with other streams such as Sports, Voting System, Stock Markets, and Reduction in Spam. This paper presents the optimization of rank lists for web queries submitted by the user to different metasearch engines. A metaheuristic approach such as Genetic algorithm based rank…

  14. A new iterative triclass thresholding technique in image segmentation.

    Science.gov (United States)

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset threshold. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
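
    A minimal sketch of the iterative triclass idea described above, built on a basic histogram Otsu threshold; it illustrates the algorithm on a synthetic image rather than reproducing the authors' implementation, and the stopping tolerance and test image are arbitrary.

        import numpy as np

        def otsu_threshold(values):
            """Classic Otsu threshold for integer grey levels in 0..255."""
            hist, _ = np.histogram(values, bins=256, range=(0, 256))
            p = hist / hist.sum()
            omega = np.cumsum(p)                      # probability of the low class
            mu = np.cumsum(p * np.arange(256))        # cumulative mean
            mu_t = mu[-1]
            with np.errstate(divide="ignore", invalid="ignore"):
                sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
            return int(np.nanargmax(sigma_b))

        def iterative_triclass(image, eps=1.0, max_iter=20):
            """Sketch of iterative triclass thresholding: split each TBD region into
            foreground, background and a smaller TBD region until convergence."""
            image = np.round(image).astype(int)
            foreground = np.zeros(image.shape, dtype=bool)
            tbd = np.ones(image.shape, dtype=bool)    # to-be-determined region
            t_prev = None
            for _ in range(max_iter):
                t = otsu_threshold(image[tbd])
                mu_low = image[tbd & (image <= t)].mean()
                mu_high = image[tbd & (image > t)].mean()
                foreground |= tbd & (image > mu_high)            # confidently foreground
                new_tbd = tbd & (image > mu_low) & (image <= mu_high)
                if (t_prev is not None and abs(t - t_prev) < eps) or not new_tbd.any():
                    break
                t_prev, tbd = t, new_tbd
            return foreground | (tbd & (image > t))   # split the last TBD region by the last threshold

        rng = np.random.default_rng(1)
        img = rng.normal(80, 10, (64, 64))
        img[20:40, 20:40] += 90                       # a brighter square as the object
        mask = iterative_triclass(np.clip(img, 0, 255))
        print(mask.sum(), "pixels labelled as foreground")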

  15. Spatial Disaggregation of Areal Rainfall Using Two Different Artificial Neural Networks Models

    Directory of Open Access Journals (Sweden)

    Sungwon Kim

    2015-06-01

    Full Text Available The objective of this study is to develop artificial neural network (ANN) models, including multilayer perceptron (MLP) and Kohonen self-organizing feature map (KSOFM), for spatial disaggregation of areal rainfall in the Wi-stream catchment, an International Hydrological Program (IHP) representative catchment, in South Korea. A three-layer MLP model, using three training algorithms, was used to estimate areal rainfall. The Levenberg–Marquardt training algorithm was found to be more sensitive to the number of hidden nodes than were the conjugate gradient and quickprop training algorithms using the MLP model. Results showed that the network structures of 11-5-1 (conjugate gradient and quickprop) and 11-3-1 (Levenberg-Marquardt) were the best for estimating areal rainfall using the MLP model. The network structures of 1-5-11 (conjugate gradient and quickprop) and 1-3-11 (Levenberg–Marquardt), which are the inverse networks of the best MLP models for estimating areal rainfall, were identified for spatial disaggregation of areal rainfall using the MLP model. The KSOFM model was compared with the MLP model for spatial disaggregation of areal rainfall. The MLP and KSOFM models could disaggregate areal rainfall into individual point rainfall with spatial concepts.
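
    As an illustration of the network shapes reported above (11 point gauges to 1 areal value, and a 1-to-11 inverse network for disaggregation), the sketch below uses scikit-learn's MLPRegressor on synthetic data; it does not reproduce the study's catchment data or training algorithms (scikit-learn offers L-BFGS/Adam rather than Levenberg-Marquardt or quickprop), and the gamma-distributed rainfall is an assumption.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_events, n_gauges = 300, 11
        point_rain = rng.gamma(2.0, 5.0, size=(n_events, n_gauges))   # synthetic point rainfall
        areal_rain = point_rain.mean(axis=1, keepdims=True)           # stand-in areal value

        # 11-5-1 style network: estimate areal rainfall from the 11 point gauges.
        estimator = MLPRegressor(hidden_layer_sizes=(5,), solver="lbfgs",
                                 max_iter=5000, random_state=0)
        estimator.fit(point_rain, areal_rain.ravel())

        # 1-5-11 style "inverse" network: disaggregate areal rainfall back to the gauges.
        disaggregator = MLPRegressor(hidden_layer_sizes=(5,), solver="lbfgs",
                                     max_iter=5000, random_state=0)
        disaggregator.fit(areal_rain, point_rain)

        new_areal = np.array([[12.0]])
        print(disaggregator.predict(new_areal))   # 11 disaggregated point values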

  16. Protein aggregation in bacteria: the thin boundary between functionality and toxicity.

    Science.gov (United States)

    Bednarska, Natalia G; Schymkowitz, Joost; Rousseau, Frederic; Van Eldere, Johan

    2013-09-01

    Misfolding and aggregation of proteins have a negative impact on all living organisms. In recent years, aggregation has been studied in detail due to its involvement in neurodegenerative diseases, including Alzheimer's, Parkinson's and Huntington's diseases, and type II diabetes--all associated with accumulation of amyloid fibrils. This research highlighted the central importance of protein homeostasis, or proteostasis for short, defined as the cellular state in which the proteome is both stable and functional. It implicates an equilibrium between synthesis, folding, trafficking, aggregation, disaggregation and degradation. In accordance with the eukaryotic systems, it has been documented that protein aggregation also reduces fitness of bacterial cells, but although our understanding of the cellular protein quality control systems is perhaps most detailed in bacteria, the use of bacterial proteostasis as a drug target remains little explored. Here we describe protein aggregation as a normal physiological process and its role in bacterial virulence and we shed light on how bacteria defend themselves against the toxic threat of aggregates. We review the impact of aggregates on bacterial viability and look at the ways that bacteria use to maintain a balance between aggregation and functionality. The proteostasis in bacteria can be interrupted via overexpression of proteins, certain antibiotics such as aminoglycosides, as well as antimicrobial peptides--all leading to loss of cell viability. Therefore intracellular protein aggregation and disruption of proteostatic balance in bacteria open up another strategy that should be explored towards the discovery of new antimicrobials.

  17. Pre-Saturation Technique of the Recycled Aggregates: Solution to the Water Absorption Drawback in the Recycled Concrete Manufacture.

    Science.gov (United States)

    García-González, Julia; Rodríguez-Robles, Desirée; Juan-Valdés, Andrés; Morán-Del Pozo, Julia Mª; Guerra-Romero, M Ignacio

    2014-09-01

    The replacement of natural aggregates by recycled aggregates in concrete manufacturing has been spreading worldwide as a recycling method to counteract the large amount of construction and demolition waste. Although legislation in this field is still not well developed, many investigations demonstrate the possibilities of success of this trend given that concrete with satisfactory mechanical and durability properties could be achieved. However, recycled aggregates present a lower quality than natural aggregates, the water absorption being their main drawback. When used untreated in the concrete mix, the recycled aggregates absorb part of the water initially calculated for cement hydration, which will adversely affect some characteristics of the recycled concrete. This article seeks to demonstrate that the technique of pre-saturation is able to solve the aforementioned problem. In order to do so, the water absorption of the aggregates was tested to determine the necessary period of soaking to bring the recycled aggregates into a state of suitable humidity for their incorporation into the mixture. Moreover, several concrete mixes were made with different replacement percentages of natural aggregate and various periods of pre-saturation. The consistency and compressive strength of the concrete mixes were tested to verify the feasibility of the proposed technique.

  18. Conductivity-Dependent Flow Field-Flow Fractionation of Fulvic and Humic Acid Aggregates

    Directory of Open Access Journals (Sweden)

    Martha J. M. Wells

    2015-09-01

    Full Text Available Fulvic acids (FAs) and humic acids (HAs) are chemically fascinating. In water, they have a strong propensity to aggregate, but this research reveals that tendency is regulated by ionic strength. In the environment, conductivity extremes occur naturally—freshwater to seawater—warranting consideration at low and high values. The flow field flow fractionation (flow FFF) of FAs and HAs is observed to be concentration dependent in low ionic strength solutions whereas the corresponding flow FFF fractograms in high ionic strength solutions are concentration independent. Dynamic light scattering (DLS) also reveals insight into the conductivity-dependent behavior of humic substances (HSs). Four particle size ranges for FAs and humic acid aggregates are examined: (1) <10 nm; (2) 10 nm–6 µm; (3) 6–100 µm; and (4) >100 µm. Representative components of the different size ranges are observed to dynamically coexist in solution. The character of the various aggregates observed—such as random-extended-coiled macromolecules, hydrogels, supramolecular, and micellar—as influenced by electrolytic conductivity, is discussed. The disaggregation/aggregation of HSs is proposed to be a dynamic equilibrium process for which the rate of aggregate formation is controlled by the electrolytic conductivity of the solution.

  19. Measurement of the temperature-dependent threshold shear-stress of red blood cell aggregation.

    Science.gov (United States)

    Lim, Hyun-Jung; Nam, Jeong-Hun; Lee, Yong-Jin; Shin, Sehyun

    2009-09-01

    Red blood cell (RBC) aggregation is becoming an important hemorheological parameter, which typically exhibits temperature dependence. Quite recently, a critical shear-stress was proposed as a new dimensional index to represent the aggregative and disaggregative behaviors of RBCs. The present study investigated the effect of the temperature on the critical shear-stress that is required to keep RBC aggregates dispersed. The critical shear-stress was measured at various temperatures (4, 10, 20, 30, and 37 degrees C) through the use of a transient microfluidic aggregometry. The critical shear-stress significantly increased as the blood temperature lowered, which accorded with the increase in the low-shear blood viscosity with the lowering of the temperature. Furthermore, the critical shear-stress also showed good agreement with the threshold shear-stress, as measured in a rotational Couette flow. These findings assist in rheologically validating the critical shear-stress, as defined in the microfluidic aggregometry.

  20. Volatility spillover from world oil spot markets to aggregate and electricity stock index returns in Turkey

    International Nuclear Information System (INIS)

    Soytas, Ugur; Oran, Adil

    2011-01-01

    This study examines the inter-temporal links between world oil prices, ISE 100 and ISE electricity index returns, unadjusted and adjusted for market effects. The traditional approaches could not detect a causal relationship running from oil returns to any of the stock returns. However, when we examine the causality using the Cheung-Ng approach, we discover that world oil prices Granger-cause electricity index and adjusted electricity index returns in variance, but not the aggregate market index returns. Hence, our results show that the Cheung-Ng procedure with the use of disaggregated stock index returns can uncover new information that went unnoticed with the traditional causality tests using aggregated market indices. (author)

  1. Principles, techniques and recent advances in fine particle aggregation for solid-liquid separation

    International Nuclear Information System (INIS)

    Somasundaran, P.; Vasudevan, T.V.

    1993-01-01

    Waste water discharged from various chemical and nuclear processing operations contains dissolved metal species that are highly toxic and, in some cases, radioactive. When the waste is acidic in nature, neutralization using reagents such as lime is commonly practiced to reduce both the acidity and the amount of waste (Kuyucak et al.). The sludge that results from the neutralization process contains metal oxide or hydroxide precipitates that are colloidal in nature and is highly stable. Destabilization of colloidal suspensions can be achieved by aggregation of fines into larger sized agglomerates. Aggregation of fines is a complex phenomenon involving a multitude of forces that control the interparticle interaction. In order to understand the colloidal behavior of suspensions, a fundamental knowledge of the physicochemical properties that determine the various forces is essential. In this review, the basic principles governing the aggregation of colloidal fines, the various ways in which interparticle forces can be manipulated to achieve the desired aggregation response, and recent advances in experimental techniques to probe the interfacial characteristics that control flocculation behavior are discussed.

  2. Application of the iterative probe correction technique for a high-order probe in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Pivnenko, Sergey; Breinbjerg, Olav

    2006-01-01

    An iterative probe-correction technique for spherical near-field antenna measurements is examined. This technique has previously been shown to be well-suited for non-ideal first-order probes. In this paper, its performance in the case of a high-order probe (a dual-ridged horn) is examined....

  3. Iterative methods used in overlap astrometric reduction techniques do not always converge

    Science.gov (United States)

    Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.

    1993-04-01

    In this paper we prove that the classical Gauss-Seidel type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge. We exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent while the Gauss-Seidel method is divergent. We conjecture the convergence of the Wang method for the solution of astrometric problems using overlap techniques.
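
    The convergence failure discussed above can be checked numerically: Gauss-Seidel converges if and only if the spectral radius of its iteration matrix is below one. The small sketch below builds that matrix for arbitrary example systems (not the astrometric normal equations of the paper) and tests the condition.

        import numpy as np

        def gauss_seidel_spectral_radius(A):
            """Spectral radius of the Gauss-Seidel iteration matrix for A."""
            L = np.tril(A)                # diagonal plus strictly lower part
            U = A - L                     # strictly upper part
            M = -np.linalg.solve(L, U)    # iteration matrix
            return max(abs(np.linalg.eigvals(M)))

        # A symmetric positive definite example (Gauss-Seidel always converges) ...
        A_good = np.array([[4.0, 1.0], [1.0, 3.0]])
        # ... and a non-diagonally-dominant example for which the iteration diverges.
        A_bad = np.array([[1.0, 2.0], [3.0, 1.0]])

        for A in (A_good, A_bad):
            rho = gauss_seidel_spectral_radius(A)
            print(f"spectral radius = {rho:.3f} ->", "converges" if rho < 1 else "diverges")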

  4. Coronary artery plaques: Cardiac CT with model-based and adaptive-statistical iterative reconstruction technique

    International Nuclear Information System (INIS)

    Scheffel, Hans; Stolzmann, Paul; Schlett, Christopher L.; Engel, Leif-Christopher; Major, Gyöngi Petra; Károlyi, Mihály; Do, Synho; Maurovich-Horvat, Pál; Hoffmann, Udo

    2012-01-01

    Objectives: To compare image quality of coronary artery plaque visualization at CT angiography with images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model based iterative reconstruction (MBIR) techniques. Methods: The coronary arteries of three ex vivo human hearts were imaged by CT and reconstructed with FBP, ASIR and MBIR. Coronary cross-sectional images were co-registered between the different reconstruction techniques and assessed for qualitative and quantitative image quality parameters. Readers were blinded to the reconstruction algorithm. Results: A total of 375 triplets of coronary cross-sectional images were co-registered. Using MBIR, 26% of the images were rated as having excellent overall image quality, which was significantly better as compared to ASIR and FBP (4% and 13%, respectively, all p < 0.001). Qualitative assessment of image noise demonstrated a noise reduction by using ASIR as compared to FBP (p < 0.01) and further noise reduction by using MBIR (p < 0.001). The contrast-to-noise-ratio (CNR) using MBIR was better as compared to ASIR and FBP (44 ± 19, 29 ± 15, 26 ± 9, respectively; all p < 0.001). Conclusions: Using MBIR improved image quality, reduced image noise and increased CNR as compared to the other available reconstruction techniques. This may further improve the visualization of coronary artery plaque and allow radiation reduction.

  5. Ultrasonic techniques for quality assessment of ITER Divertor plasma facing component

    International Nuclear Information System (INIS)

    Martinez-Ona, Rafael; Garcia, Monica; Medrano, Mercedes

    2009-01-01

    The divertor is one of the most challenging components of ITER machine. Its plasma facing components contain thousands of joints that should be assessed to demonstrate their integrity during the required lifetime. Ultrasonic (US) techniques have been developed to study the capability of defect detection and to control the quality and degradation of these interfaces after the manufacturing process. Three types of joints made of carbon fibre composite to copper alloy, tungsten to copper alloy, and copper-to-copper alloy with two types of configurations have been studied. More than 100 samples representing these configurations and containing implanted flaws of different sizes have been examined. US techniques developed are detailed and results of validation samples examination before and after high heat flux (HHF) tests are presented. The results show that for W monoblocks the US technique is able to detect, locate and size the degradations in the two sample joints; for CFC monoblocks, the US technique is also able to detect, locate and size the calibrated defects in the two joints before the HHF, however after the HHF test the technique is not able to reliably detect defects in the CFC/Cu joint; finally, for the W flat tiles the US technique is able to detect, locate and size the calibrated defects in the two joints before HHF test, nevertheless defect location and sizing are more difficult after the HHF test.

  6. Pre-Saturation Technique of the Recycled Aggregates: Solution to the Water Absorption Drawback in the Recycled Concrete Manufacture †

    Science.gov (United States)

    García-González, Julia; Rodríguez-Robles, Desirée; Juan-Valdés, Andrés; Morán-del Pozo, Julia Mª; Guerra-Romero, M. Ignacio

    2014-01-01

    The replacement of natural aggregates by recycled aggregates in the concrete manufacturing has been spreading worldwide as a recycling method to counteract the large amount of construction and demolition waste. Although legislation in this field is still not well developed, many investigations demonstrate the possibilities of success of this trend given that concrete with satisfactory mechanical and durability properties could be achieved. However, recycled aggregates present a low quality compared to natural aggregates, the water absorption being their main drawback. When used untreated in concrete mix, the recycled aggregate absorb part of the water initially calculated for the cement hydration, which will adversely affect some characteristics of the recycled concrete. This article seeks to demonstrate that the technique of pre-saturation is able to solve the aforementioned problem. In order to do so, the water absorption of the aggregates was tested to determine the necessary period of soaking to bring the recycled aggregates into a state of suitable humidity for their incorporation into the mixture. Moreover, several concrete mixes were made with different replacement percentages of natural aggregate and various periods of pre-saturation. The consistency and compressive strength of the concrete mixes were tested to verify the feasibility of the proposed technique. PMID:28788188

  7. Application of dynamic programming for the analysis of complex water resources systems : a case study on the Mahaweli River basin development in Sri Lanka

    NARCIS (Netherlands)

    Kularathna, M.D.U.P.

    1992-01-01

    The technique of Stochastic Dynamic Programming (SDP) is ideally suited for operation policy analyses of water resources systems. However, SDP has a major drawback, appropriately termed its "curse of dimensionality".

    Aggregation/Disaggregation techniques based on SDP and
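
    To make the "curse of dimensionality" concrete, the sketch below runs a toy backward SDP recursion for a single reservoir with a handful of storage and inflow states; real multi-reservoir systems multiply these state dimensions together, which is what aggregation/disaggregation techniques are meant to tame. All numbers, and the square-root benefit function, are invented for illustration.

        import numpy as np

        # Toy SDP for one reservoir: states are discrete storages, inflows are stochastic.
        storages = np.arange(0, 5)                    # 0..4 storage units
        releases = np.arange(0, 5)                    # candidate releases per stage
        inflows, p_inflow = np.array([0, 1, 2]), np.array([0.3, 0.5, 0.2])
        n_stages, s_max = 12, storages.max()

        def benefit(release):
            return np.sqrt(release)                   # concave benefit of water released

        value = np.zeros(len(storages))               # terminal value function
        policy = np.zeros((n_stages, len(storages)), dtype=int)

        for t in reversed(range(n_stages)):           # backward recursion over stages
            new_value = np.full(len(storages), -np.inf)
            for s in storages:
                for r in releases[releases <= s]:     # cannot release more than stored
                    # expected immediate benefit plus value of the next storage over inflows
                    nxt = np.clip(s - r + inflows, 0, s_max)
                    q = benefit(r) + np.dot(p_inflow, value[nxt])
                    if q > new_value[s]:
                        new_value[s], policy[t, s] = q, r
            value = new_value

        print(policy[0])   # optimal release for each storage state at the first stage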

  8. Performance of an iterative two-stage bayesian technique for population pharmacokinetic analysis of rich data sets

    NARCIS (Netherlands)

    Proost, Johannes H.; Eleveld, Douglas J.

    2006-01-01

    Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with

  9. Multicore Performance of Block Algebraic Iterative Reconstruction Methods

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik B.; Hansen, Per Christian

    2014-01-01

    Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner...... a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable....
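
    A compact sketch of the two iteration families named above: Kaczmarz-style ART sweeps the rows sequentially, while a simple SIRT (Landweber-type) update uses all rows simultaneously. This is a generic illustration on a tiny consistent system, not the block-partitioned multicore variants studied in the paper.

        import numpy as np

        def art(A, b, n_sweeps=50, relax=1.0):
            """Kaczmarz ART: sequential row projections."""
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    a_i = A[i]
                    x += relax * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
            return x

        def sirt(A, b, n_iter=500, relax=None):
            """Simple SIRT (Landweber-type): simultaneous update from all rows."""
            if relax is None:
                relax = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x += relax * A.T @ (b - A @ x)
            return x

        A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
        b = A @ np.array([2.0, -1.0])                     # consistent right-hand side
        print(art(A, b), sirt(A, b))                      # both approach [2, -1]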

  10. Disaggregating Qualitative Data from Asian American College Students in Campus Racial Climate Research and Assessment

    Science.gov (United States)

    Museus, Samuel D.; Truong, Kimberly A.

    2009-01-01

    This article highlights the utility of disaggregating qualitative research and assessment data on Asian American college students. Given the complexity of and diversity within the Asian American population, scholars have begun to underscore the importance of disaggregating data in the empirical examination of Asian Americans, but most of those…

  11. Post-convergence automatic differentiation of iterative schemes

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1997-01-01

    A new approach for performing automatic differentiation (AD) of computer codes that embody an iterative procedure, based on differentiating a single additional iteration upon achieving convergence, is described and implemented. This post-convergence automatic differentiation (PAD) technique results in better accuracy of the computed derivatives, as it eliminates part of the derivatives convergence error, and a large reduction in execution time, especially when many iterations are required to achieve convergence. In addition, it provides a way to compute derivatives of the converged solution without having to repeat the entire iterative process every time new parameters are considered. These advantages are demonstrated and the PAD technique is validated via a set of three linear and nonlinear codes used to solve neutron transport and fluid flow problems. The PAD technique reduces the execution time over direct AD by a factor of up to 30 and improves the accuracy of the derivatives by up to two orders of magnitude. The PAD technique's biggest disadvantage lies in the necessity to compute the iterative map's Jacobian, which for large problems can be prohibitive. Methods are discussed to alleviate this difficulty
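
The core idea — differentiating one additional iteration at the converged point rather than the whole iteration history — can be sketched on a scalar fixed-point map. The snippet below uses finite differences as a stand-in for automatic differentiation and recovers the converged derivative from the implicit-function relation; it only illustrates the principle, not the PAD implementation described in the paper.

```python
import numpy as np

def fixed_point(f, x0, p, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = f(x_k, p) until convergence."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x, p)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x, p: np.cos(p * x)          # toy contraction mapping
p = 0.8
x_star = fixed_point(f, 0.5, p)

# "One additional iteration" differentiated at the converged point;
# finite differences stand in for automatic differentiation here.
eps = 1e-6
df_dx = (f(x_star + eps, p) - f(x_star - eps, p)) / (2 * eps)
df_dp = (f(x_star, p + eps) - f(x_star, p - eps)) / (2 * eps)

# Derivative of the converged solution w.r.t. the parameter,
# obtained without re-differentiating the whole iteration history.
dxstar_dp = df_dp / (1.0 - df_dx)

# Brute-force check: perturb p and re-converge the entire iteration.
dxstar_dp_check = (fixed_point(f, 0.5, p + eps) - x_star) / eps
print(dxstar_dp, dxstar_dp_check)
```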

  12. Neutron scattering techniques in the examination of recycled aggregate concrete

    International Nuclear Information System (INIS)

    Krezel, A.; Alabaster, P.; Bakshi, E.; McManus, K.

    1999-01-01

    Full text: Researchers at Swinburne University of Technology (SUT) have undertaken a research project aimed initially at better understanding the effects of any chemical impurities in Recycled Concrete Aggregate (RCA) on the microstructure development of Recycled Aggregate Concrete (RAC). Furthermore, the porosity of RCA and RAC and its effect on acoustic performance and mechanical properties are being investigated. A number of conventional tests have been employed to examine the porosity of the aggregate and of concrete made from RCA, ranging from the Volume of Permeable Voids test, through nitrogen adsorption, to scanning electron microscopy. These tests are performed at SUT to characterise the pore structure, including pore size and volume as well as surface area. The preparation of samples differs for the various tests, and this is a main reason contributing to inconsistencies in the results. Nonetheless, the results indicate a strong positive correlation between the inherent and purposely introduced porosity in RAC and its sound absorption capacity. Some inconsistency in the results is also due to the complexity of concrete itself, compounded by the use of recycled material. The research has been awarded a grant from the Australian Institute of Nuclear Science and Engineering (AINSE), which allows RAC to be examined using Small Angle Neutron Scattering (SANS). This neutron scattering technique characterises pore structure in a non-destructive manner. The results from this method should augment those obtained from conventional methods

  13. Aggregation by Provenance Types: A Technique for Summarising Provenance Graphs

    Directory of Open Access Journals (Sweden)

    Luc Moreau

    2015-04-01

    Full Text Available As users become confronted with a deluge of provenance data, dedicated techniques are required to make sense of this kind of information. We present Aggregation by Provenance Types, a provenance graph analysis that is capable of generating provenance graph summaries. It proceeds by converting provenance paths up to some length k to attributes, referred to as provenance types, and by grouping nodes that have the same provenance types. The summary also includes numeric values representing the frequency of nodes and edges in the original graph. A quantitative evaluation and a complexity analysis show that this technique is tractable; with small values of k, it can produce useful summaries and can help detect outliers. We illustrate how the generated summaries can further be used for conformance checking and visualization.
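
A minimal sketch of the grouping step follows: each node's provenance type is the set of labelled paths of length up to k leaving it, and nodes sharing a type are collapsed into one summary group with a count. The toy graph, node names and relation labels are invented for illustration.

```python
from collections import defaultdict

# Toy provenance graph: node -> list of (relation, predecessor).
# Names and relations are made up for illustration only.
graph = {
    "report":    [("wasDerivedFrom", "table"), ("wasGeneratedBy", "compile")],
    "table":     [("wasGeneratedBy", "aggregate")],
    "chart":     [("wasGeneratedBy", "aggregate")],
    "compile":   [("used", "table")],
    "aggregate": [("used", "raw1"), ("used", "raw2")],
    "raw1": [], "raw2": [],
}

def provenance_type(node, k):
    """Provenance type = the sorted set of outgoing labelled paths of length <= k."""
    if k == 0 or not graph[node]:
        return ()
    return tuple(sorted((rel, provenance_type(pred, k - 1))
                        for rel, pred in graph[node]))

def summarise(k):
    """Group nodes sharing the same provenance type and record their frequency."""
    groups = defaultdict(list)
    for node in graph:
        groups[provenance_type(node, k)].append(node)
    return groups

for ptype, nodes in summarise(k=1).items():
    print(len(nodes), nodes, ptype)   # e.g. "table" and "chart" fall in one group
```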

  14. Parallel Sₙ iteration schemes

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.

    1986-01-01

    The iterative, multigroup, discrete ordinates (Sₙ) technique for solving the linear transport equation enjoys widespread usage and appeal. Serial iteration schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, the authors investigate three parallel iteration schemes for solving the one-dimensional Sₙ transport equation. The multigroup representation and serial iteration methods are also reviewed. This analysis represents a first attempt to extend serial Sₙ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. The authors examine ordered and chaotic versions of these strategies, with and without concurrent rebalance and diffusion acceleration. Two strategies efficiently support high degrees of parallelization and appear to be robust parallel iteration techniques. The third strategy is a weaker parallel algorithm. Chaotic iteration, difficult to simulate on serial machines, holds promise and converges faster than ordered versions of the schemes. Actual parallel speedup and efficiency are high and the payoff appears substantial.

  15. Improving the Communication Pattern in Matrix-Vector Operations for Large Scale-Free Graphs by Disaggregation

    Energy Technology Data Exchange (ETDEWEB)

    Kuhlemann, Verena [Emory Univ., Atlanta, GA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-10-28

    Matrix-vector multiplication is the key operation in any Krylov-subspace iteration method. We are interested in Krylov methods applied to problems associated with the graph Laplacian arising from large scale-free graphs. Furthermore, computations with graphs of this type on parallel distributed-memory computers are challenging, because scale-free graphs have a degree distribution that follows a power law, and currently available graph partitioners are not efficient for such an irregular degree distribution. The lack of a good partitioning leads to excessive interprocessor communication requirements during every matrix-vector product. Here, we present an approach to alleviate this problem based on embedding the original irregular graph into a more regular one by disaggregating (splitting up) vertices in the original graph. The matrix-vector operations for the original graph are performed via a factored triple matrix-vector product involving the embedding graph. Even though the latter graph is larger, we are able to decrease the communication requirements considerably and improve the performance of the matrix-vector product.
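
The factored triple matrix-vector product can be illustrated on a toy hub-and-spoke graph: the hub is split into two copies, its edges are partitioned between them, and the original product A·x is recovered as Sᵀ(A_big(S·x)), where S maps each copy back to its original vertex. This is only a small sketch of the identity under the simplest possible splitting; the paper's partitioning and performance machinery are not reproduced.

```python
import numpy as np
import scipy.sparse as sp

# Toy "scale-free" pattern: vertex 0 is a hub connected to all others.
n = 6
rows, cols = [], []
for j in range(1, n):
    rows += [0, j]
    cols += [j, 0]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))

# Disaggregate the hub into two copies (vertices 0 and 6 of the embedding
# graph); the hub's incident edges are split between the copies.
n_big = n + 1
S = sp.lil_matrix((n_big, n))            # S[c, i] = 1 if copy c represents vertex i
for i in range(n):
    S[i, i] = 1
S[n, 0] = 1                              # extra copy of the hub
S = S.tocsr()

rows_b, cols_b = [], []
for j in range(1, n):
    hub_copy = 0 if j <= n // 2 else n   # assign each hub edge to one copy
    rows_b += [hub_copy, j]
    cols_b += [j, hub_copy]
A_big = sp.csr_matrix((np.ones(len(rows_b)), (rows_b, cols_b)), shape=(n_big, n_big))

# Factored triple product: A @ x  ==  S.T @ (A_big @ (S @ x))
x = np.random.rand(n)
print(np.allclose(A @ x, S.T @ (A_big @ (S @ x))))   # True
```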

  16. Context-Based Energy Disaggregation in Smart Homes

    Directory of Open Access Journals (Sweden)

    Francesca Paradiso

    2016-01-01

    Full Text Available In this paper, we address the problem of energy conservation and optimization in residential environments by providing users with useful information to solicit a change in consumption behavior. Taking care to strictly limit the costs of installation and management, our work proposes a Non-Intrusive Load Monitoring (NILM) approach, which consists of disaggregating the whole-house power consumption into the individual portions associated with each device. State-of-the-art NILM algorithms need monitoring data sampled at high frequency, thus requiring high costs for data collection and management. In this paper, we propose an NILM approach that relaxes the requirements on monitoring data, since it uses total active power measurements gathered at low frequency (about 1 Hz). The proposed approach is based on the use of Factorial Hidden Markov Models (FHMM) in conjunction with context information related to the user's presence in the house and the hourly utilization of appliances. Through a set of tests, we investigated how the use of these additional context-awareness features could improve disaggregation results with respect to the basic FHMM algorithm. The tests were performed using Tracebase, an open dataset made of data gathered from real home environments.
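
As a much-simplified stand-in for the FHMM inference (not the algorithm used in the paper), the sketch below brute-forces the on/off combination of a handful of hypothetical appliances that best explains one low-frequency aggregate sample, with a context term that penalises appliances unlikely to be running at that hour. Appliance powers, hourly priors and the weighting constant are all invented for illustration.

```python
import itertools
import numpy as np

# Hypothetical appliance power draws (W) and an hourly usage prior per appliance.
appliances = {"fridge": 120.0, "kettle": 1800.0, "tv": 90.0}
hour_prior = {"fridge": np.full(24, 0.9),                          # almost always plausible
              "kettle": np.r_[np.full(6, 0.05), np.full(18, 0.4)], # unlikely at night
              "tv":     np.r_[np.full(17, 0.2), np.full(7, 0.8)]}  # likely in the evening

def disaggregate(total_power, hour):
    """Pick the on/off combination that best explains one aggregate sample."""
    names = list(appliances)
    best, best_cost = None, np.inf
    for states in itertools.product([0, 1], repeat=len(names)):
        model = sum(s * appliances[n] for s, n in zip(states, names))
        # Residual term plus a context penalty for appliances that are
        # unlikely to be on at this hour of the day.
        penalty = sum(s * -np.log(hour_prior[n][hour]) for s, n in zip(states, names))
        cost = abs(total_power - model) + 50.0 * penalty
        if cost < best_cost:
            best, best_cost = dict(zip(names, states)), cost
    return best

print(disaggregate(total_power=1920.0, hour=8))   # fridge + kettle expected
```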

  17. A semi-analytical iterative technique for solving chemistry problems

    Directory of Open Access Journals (Sweden)

    Majeed Ahmed AL-Jawary

    2017-07-01

    Full Text Available The main aim and contribution of the current paper is to implement a semi-analytical iterative method suggested by Temimi and Ansari in 2011, namely the TAM, to solve two chemical problems. The approximate solution obtained by the TAM converges quickly. The chemical problems considered are the absorption of carbon dioxide into phenyl glycidyl ether and a chemical kinetics problem. These problems are represented by systems of nonlinear ordinary differential equations with boundary conditions and initial conditions. Error analysis of the approximate solutions is studied using the error remainder and the maximal error remainder, and an exponential rate of convergence is observed. For both problems the results of the TAM are compared with results obtained by previous methods available in the literature. The results demonstrate that the method has many merits: it is derivative-free, and it overcomes the difficulty of calculating Adomian polynomials to handle the nonlinear terms in the Adomian Decomposition Method (ADM). It does not require calculating the Lagrange multiplier as in the Variational Iteration Method (VIM), in which the terms of the sequence become complex after several iterations so that analytical evaluation of the terms becomes very difficult or impossible. Nor does it require constructing a homotopy and solving the corresponding algebraic equations as in the Homotopy Perturbation Method (HPM). The MATHEMATICA® 9 software was used to evaluate terms in the iterative process.
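
For a flavour of the successive-solve structure of the TAM, the sketch below applies it to a toy initial value problem dy/dt = -k·y², where the linear operator is just d/dt; in this simple case the TAM iteration coincides with Picard-style repeated integration. It is only meant to convey the iteration pattern, not to reproduce the boundary-value computations of the paper.

```python
import numpy as np

# TAM-style successive solves for dy/dt = -k*y**2, y(0) = 1
# (a toy second-order-kinetics problem; the linear part is L(y) = y').
k = 0.5
t = np.linspace(0.0, 2.0, 401)

def cumtrapz(f, t):
    """Cumulative trapezoidal integral with value 0 at t[0]."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))

y = np.ones_like(t)                      # u0 solves L(u0) = 0 with y(0) = 1
for _ in range(8):                       # each pass solves L(u_{n+1}) = -N(u_n)
    y = 1.0 - k * cumtrapz(y ** 2, t)

exact = 1.0 / (1.0 + k * t)
print(np.max(np.abs(y - exact)))         # small after a few iterations
```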

  18. SU-D-12A-05: Iterative Reconstruction Techniques to Enable Intrinsic Respiratory Gated CT in Mice

    Energy Technology Data Exchange (ETDEWEB)

    Sun, T; Sun, N; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Liu, Y; Mistry, N [University of Maryland School of Medicine, Baltimore, MD (United States)

    2014-06-01

    Purpose: Longitudinal studies of lung function in mice need the ability to image different phases of ventilation in free-breathing mice using retrospective gating. However, retrospective gating often produces under-sampled and unevenly distributed angular samples, resulting in severe reconstruction artifacts when using traditional FDK-based reconstruction algorithms. We wanted to demonstrate the utility of iterative reconstruction methods to enable intrinsic respiratory gating in small-animal CT. Methods: Free-breathing mice were imaged using a Siemens Inveon PET/micro-CT system. Evenly distributed projection images were acquired at 360 angles. Retrospective respiratory gating was performed using an intrinsic marker based on the average intensity in a region covering the diaphragm. Projections were classified into 4 and 6 phases (finer temporal resolution), resulting in 138 and 67 projections respectively. Reconstruction was carried out using three methods: conventional FDK, iterative penalized weighted least-squares (PWLS) with total variation (TV), and PWLS with an edge-preserving penalty. The performance of the methods was compared using the contrast-to-noise ratio (CNR) in a region of interest (ROI). A line profile through a specific region was plotted to evaluate the preservation of edges. Results: In both the 4-phase and 6-phase cases, inadequate and non-uniform angular sampling results in artifacts with conventional FDK. However, such artifacts are minimized using both iterative methods. For both 4 and 6 phases, the iterative techniques outperformed FDK in terms of CNR and maintaining sharp edges; this is especially evident for 6 phases, where FDK shows increased artifacts. Conclusion: This work indicates fewer artifacts and better image details can be achieved with iterative reconstruction methods in non-uniform under-sampled reconstruction. Using iterative methods can enable free-breathing intrinsic respiratory gating in small-animal CT. Further studies are needed to compare the

  19. Defining the spatial scale in modern regional analysis new challenges from data at local level

    CERN Document Server

    Fernández Vázquez, Esteban

    2014-01-01

    This book discusses the concept of region, including techniques of ecological inference applied to estimating disaggregated data from observable aggregates. The final part presents applications in line with the functional areas definition in regional analysis.

  20. Differential stress response of Saccharomyces hybrids revealed by monitoring Hsp104 aggregation and disaggregation.

    Science.gov (United States)

    Kempf, Claudia; Lengeler, Klaus; Wendland, Jürgen

    2017-07-01

    Proteotoxic stress may occur upon exposure of yeast cells to different stress conditions. The induction of stress response mechanisms is important for cells to adapt to changes in the environment and ensure survival. For example, during exposure to elevated temperatures the expression of heat shock proteins such as Hsp104 is induced in yeast. Hsp104 extracts misfolded proteins from aggregates to promote their refolding. We used an Hsp104-GFP reporter to analyze the stress profiles of Saccharomyces species hybrids. To this end a haploid S. cerevisiae strain, harboring a chromosomal HSP104-GFP under control of its endogenous promoter, was mated with stable haploids of S. bayanus, S. cariocanus, S. kudriavzevii, S. mikatae, S. paradoxus and S. uvarum. Stress response behaviors in these hybrids were followed over time by monitoring the appearance and dissolution of Hsp104-GFP foci upon heat shock. General stress tolerance of these hybrids was related to the growth rate detected during exposure to e.g. ethanol and oxidizing agents. We observed that hybrids were generally more resistant to high temperature and ethanol stress compared to their parental strains. Amongst the hybrids differential responses regarding the appearance of Hsp104-foci and the time required for dissolving these aggregates were observed. The S. cerevisiae/S. paradoxus hybrid, combining the two most closely related strains, performed best under these conditions. Copyright © 2017 Elsevier GmbH. All rights reserved.

  1. Monoblock Obturation Technique for Non-Vital Immature Permanent Maxillary Incisors Using Mineral Trioxide Aggregate: Results from Case Series

    International Nuclear Information System (INIS)

    Iqbal, Z.; Qureshi, A. H.

    2014-01-01

    Ten patients presented with non-vital immature teeth for root canal treatment. In all these cases the pre-operative clinical examination revealed apical periodontitis with a buccal sinus tract of endodontic origin. These cases were treated with a mineral trioxide aggregate (MTA) monoblock obturation technique. Follow-up evaluations were performed 1 - 2 years after treatment. Eight of the 10 cases showed periradicular healing at follow-up evaluation. The mineral trioxide aggregate monoblock obturation technique appears to be a valid approach for obtaining periradicular healing in teeth with open apices and necrotic pulps. (author)

  2. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Okariz, Ana, E-mail: ana.okariz@ehu.es [eMERG, Fisika Aplikatua I Saila, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Guraya, Teresa [eMERG, Departamento de Ingeniería Minera y Metalúrgica y Ciencia de los Materiales, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Iturrondobeitia, Maider [eMERG, Departamento de Expresión Gráfica y Proyectos de Ingeniería, Faculty of Engineering, University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 3, 48013 Bilbao (Spain); Ibarretxe, Julen [eMERG, Fisika Aplikatua I Saila, Faculty of Engineering,University of the Basque Country, UPV/EHU, Rafael Moreno “Pitxitxi” Pasealekua 2, 48013 Bilbao (Spain)

    2017-02-15

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. - Highlights: • The non-uniformity of the resolution in electron tomography reconstructions has been demonstrated. • An overall resolution for the evaluation of the quality of electron tomography reconstructions has been defined. • Parameters for estimating an overall resolution across the reconstructed volume have been proposed. • The overall resolution of the reconstructions of a phantom has been estimated from the probability density functions. • It has been proven that reconstructions with the best overall resolutions have provided the most accurate segmentations.

  3. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography

    International Nuclear Information System (INIS)

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-01-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. - Highlights: • The non-uniformity of the resolution in electron tomography reconstructions has been demonstrated. • An overall resolution for the evaluation of the quality of electron tomography reconstructions has been defined. • Parameters for estimating an overall resolution across the reconstructed volume have been proposed. • The overall resolution of the reconstructions of a phantom has been estimated from the probability density functions. • It has been proven that reconstructions with the best overall resolutions have provided the most accurate segmentations.
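
A generic SIRT iteration (not the TOMOJ or TOMO3D implementations) can be sketched in a few lines: x ← x + C·Aᵀ·R·(b − A·x), with R and C the inverse row and column sums. The example below reconstructs a small synthetic noisy system and simply reports which iteration minimises the error against the known ground truth, as a crude stand-in for the edge-profile-based selection proposed in the paper; the system and noise level are invented.

```python
import numpy as np

def sirt(A, b, n_iter, x0=None):
    """Basic SIRT/SART-style iteration: x <- x + C A^T R (b - A x)."""
    m, n = A.shape
    row_sums = np.abs(A).sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = np.abs(A).sum(axis=0); col_sums[col_sums == 0] = 1.0
    R = 1.0 / row_sums            # acts as diag(R)
    C = 1.0 / col_sums            # acts as diag(C)
    x = np.zeros(n) if x0 is None else x0.copy()
    history = []
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
        history.append(x.copy())
    return x, history

# Tiny overdetermined, noisy "tomography-like" system with known ground truth.
rng = np.random.default_rng(0)
A = rng.random((40, 10))
x_true = rng.random(10)
b = A @ x_true + 0.01 * rng.standard_normal(40)

x_rec, hist = sirt(A, b, n_iter=50)
errors = [np.linalg.norm(xk - x_true) for xk in hist]
# Index of the smallest reconstruction error: the "optimal" iteration number
# for this phantom (with noisier data the error typically rises again later).
print(int(np.argmin(errors)) + 1, "iterations give the smallest error")
```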

  4. Liposomes bi-functionalized with phosphatidic acid and an ApoE-derived peptide affect Aβ aggregation features and cross the blood-brain-barrier: implications for therapy of Alzheimer disease

    NARCIS (Netherlands)

    Bana, Laura; Minniti, Stefania; Salvati, Elisa; Sesana, Silvia; Zambelli, Vanessa; Cagnotto, Alfredo; Orlando, Antonina; Cazzaniga, Emanuela; Zwart, Rob; Scheper, Wiep; Masserini, Massimo; Re, Francesca

    2014-01-01

    Targeting amyloid-β peptide (Aβ) within the brain is a strategy actively sought for therapy of Alzheimer's disease (AD). We investigated the ability of liposomes bi-functionalized with phosphatidic acid and with a modified ApoE-derived peptide (mApoE-PA-LIP) to affect Aβ aggregation/disaggregation

  5. An iterative homogenization technique that preserves assembly core exchanges

    International Nuclear Information System (INIS)

    Mondot, Ph.; Sanchez, R.

    2003-01-01

    A new iterative homogenization procedure for reactor core calculations is proposed that requires iterative transport assembly and diffusion core calculations. At each iteration the transport solution of every assembly type is used to produce homogenized cross sections for the core calculation. The converged solution gives assembly fine multigroup transport fluxes that preserve macro-group assembly exchanges in the core. This homogenization avoids the periodic lattice-leakage model approximation and gives detailed assembly transport fluxes without the need for an approximate flux reconstruction. Preliminary results are given for a one-dimensional core model. (authors)

  6. Advances in iterative methods

    International Nuclear Information System (INIS)

    Beauwens, B.; Arkuszewski, J.; Boryszewicz, M.

    1981-01-01

    Results obtained in the field of linear iterative methods within the Coordinated Research Program on Transport Theory and Advanced Reactor Calculations are summarized. The general convergence theory of linear iterative methods is essentially based on the properties of nonnegative operators on ordered normed spaces. The following aspects of this theory have been improved: new comparison theorems for regular splittings, generalization of the notions of M- and H-matrices, new interpretations of classical convergence theorems for positive-definite operators. The estimation of asymptotic convergence rates was developed with two purposes: the analysis of model problems and the optimization of relaxation parameters. In the framework of factorization iterative methods, model problem analysis is needed to investigate whether the increased computational complexity of higher-order methods does not offset their increased asymptotic convergence rates, as well as to appreciate the effect of standard relaxation techniques (polynomial relaxation). On the other hand, the optimal use of factorization iterative methods requires the development of adequate relaxation techniques and their optimization. The relative performances of a few possibilities have been explored for model problems. Presently, the best results have been obtained with optimal diagonal-Chebyshev relaxation

  7. Results on the ITER Technology R and D

    International Nuclear Information System (INIS)

    1999-01-01

    The ITER Engineering Design Activities (EDA) have passed their originally planned six years with the approval of the ITER Final Design Report at a meeting of the ITER Council held in July 1998. The four Parties (EU, Japan, Russia, and USA) had hoped to reach a construction decision by the end of the EDA. However, the financial environment of these Parties did not allow construction of the device scoped in the Report to start directly. The ITER Technology R and D has been conducted in cooperation by these four Parties to provide the database for, and demonstrate the technical feasibility of, the ITER design. It covers not only component technologies for the tokamak reactor core, but also peripheral system technologies such as heating and current drive, remote maintenance, tritium handling, fuelling and exhaust, diagnostic elements, and safety. Above all, seven large R and D projects were identified to demonstrate the technical feasibility of manufacturing and to carry out system tests. They were planned at scales that can be extrapolated to ITER and were to be carried out by the joint efforts of several Parties. These projects concern superconducting magnet technology, vacuum vessel technology, blanket technology, divertor technology and remote maintenance technology, three of which were led by Japan. This report was prepared to give an accessible outline of the results obtained under the seven ITER Technology R and D projects. (G.K.)

  8. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
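
The difference between a single mean multiplicative anomaly and a rank-based correction can be sketched on synthetic series for one cell. The rank-based variant below is essentially empirical quantile mapping and is only meant to illustrate why modelling anomalies by rank preserves the low and high extremes better; it is not the rank-BCSD implementation itself, and all series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly series at one fine-scale cell: "observed" fine-scale
# precipitation and the coarse GCM value interpolated to that cell.
obs_fine = rng.gamma(shape=2.0, scale=30.0, size=300)
coarse   = 0.7 * obs_fine + rng.gamma(shape=2.0, scale=10.0, size=300)

# Standard BCSD-style correction: one mean multiplicative anomaly.
mean_anomaly = obs_fine.mean() / coarse.mean()
bcsd_mean = coarse * mean_anomaly

# Rank-based variant: model the anomaly as a function of rank, so low and
# high extremes each get their own correction factor.
order = np.argsort(coarse)
rank_anomaly = np.sort(obs_fine) / np.sort(coarse)     # anomaly per rank
bcsd_rank = np.empty_like(coarse)
bcsd_rank[order] = np.sort(coarse) * rank_anomaly       # apply anomaly by rank

for name, series in [("mean-anomaly", bcsd_mean), ("rank-anomaly", bcsd_rank)]:
    print(name, "95th percentile:", np.percentile(series, 95).round(1),
          "vs observed:", np.percentile(obs_fine, 95).round(1))
```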

  9. Hyperforin prevents beta-amyloid neurotoxicity and spatial memory impairments by disaggregation of Alzheimer's amyloid-beta-deposits.

    Science.gov (United States)

    Dinamarca, M C; Cerpa, W; Garrido, J; Hancke, J L; Inestrosa, N C

    2006-11-01

    The major protein constituent of amyloid deposits in Alzheimer's disease (AD) is the amyloid beta-peptide (Abeta). In the present work, we have determined the effect of hyperforin, an acylphloroglucinol compound isolated from Hypericum perforatum (St John's Wort), on Abeta-induced spatial memory impairments and on Abeta neurotoxicity. We report here that hyperforin: (1) decreases amyloid deposit formation in rats injected with amyloid fibrils in the hippocampus; (2) decreases the neuropathological changes and behavioral impairments in a rat model of amyloidosis; (3) prevents Abeta-induced neurotoxicity in hippocampal neurons from both amyloid fibrils and Abeta oligomers, avoiding the increase in reactive oxidative species associated with amyloid toxicity. Both effects could be explained by the capacity of hyperforin to disaggregate amyloid deposits in a dose- and time-dependent manner and to decrease Abeta aggregation and amyloid formation. Altogether these findings suggest that hyperforin may be useful to decrease amyloid burden and toxicity in AD patients, and may be a putative therapeutic agent to fight the disease.

  10. Anti-aggregation-based spectrometric detection of Hg(II) at physiological pH using gold nanorods

    Energy Technology Data Exchange (ETDEWEB)

    Rajeshwari, A.; Karthiga, D.; Chandrasekaran, Natarajan; Mukherjee, Amitava, E-mail: amit.mookerjea@gmail.com

    2016-10-01

    An efficient detection method for Hg(II) ions at physiological pH (pH 7.4) was developed using Tween 20-modified gold nanorods (NRs) in the presence of dithiothreitol (DTT). Thiol groups (-SH) at the ends of DTT have a high affinity towards gold atoms; they can covalently interact with gold NRs, leading to their aggregation. The addition of Hg(II) ions prevents the aggregation of gold NRs due to covalent bond formation between the -SH group of DTT and Hg(II) ions in the buffer system. The changes in the longitudinal surface plasmon resonance peak of gold NRs were characterized using a UV–visible spectrophotometer. The absorption intensity peak of gold NRs at 679 nm was observed to decrease after interaction with DTT, and the absorption intensity was noted to increase with increasing concentration of Hg(II) ions. TEM analysis confirms the morphological changes of gold NRs before and after addition of Hg(II) ions in the presence of DTT. Further, the aggregation and disaggregation of gold NRs were confirmed by particle size and zeta potential analysis. The developed method shows excellent linearity (y = 0.001 x + 0.794) for the graph plotted between the absorption ratio and Hg(II) concentration (1 to 100 pM) under the optimized conditions. The limit of detection was noted to be 0.42 pM in the buffer system. The developed method was tested in simulated body fluid and was found to have a good recovery rate. - Highlights: • Tween 20-modified gold NRs used as a probe for Hg(II) at physiological pH. • TEM, particle size and surface charge analyses confirm the aggregation and disaggregation of NRs. • The sensitivity of the probe for Hg(II) ion detection was 0.42 pM. • Hg(II) estimation in simulated body fluids with good recovery.

  11. A Novel Method to Quantify Soil Aggregate Stability by Measuring Aggregate Bond Energies

    Science.gov (United States)

    Efrat, Rachel; Rawlins, Barry G.; Quinton, John N.; Watts, Chris W.; Whitmore, Andy P.

    2016-04-01

    Soil aggregate stability is a key indicator of soil quality because it controls physical, biological and chemical functions important in cultivated soils. Micro-aggregates are responsible for the long-term sequestration of carbon in soil and therefore determine soil's role in the carbon cycle. It is thus vital that techniques to measure aggregate stability are accurate, consistent and reliable, in order to appropriately manage and monitor soil quality, and to develop our understanding and estimates of soil as a carbon store for appropriate incorporation into carbon cycle models. Practices used to assess the stability of aggregates vary in sample preparation, operational technique and unit of results; they use proxies and lack quantification, so conflicting results are drawn between projects that offer no methodological or resultant comparability. Typical modern stability tests suspend aggregates in water and monitor fragmentation upon exposure to an unquantified amount of ultrasonic energy, utilising a laser granulometer to measure the change in mean weight diameter. In this project a novel approach has been developed, based on that of Zhu et al. (2009), to accurately quantify the stability of aggregates by specifically measuring their bond energies. The bond energies are measured using a combination of calorimetry and a high-powered ultrasonic probe with a computable output function. Temperature change during sonication is monitored by an array of probes, which enables calculation of the energy spent heating the system (Ph). Our novel technique suspends aggregates in the heavy liquid lithium heteropolytungstate, as opposed to water, to avoid exposing aggregates to an immeasurable disruptive energy source due to cavitation, collisions and clay swelling. Mean weight diameter is measured by a laser granulometer to monitor aggregate breakdown after successive periods of calculated ultrasonic energy input (Pi), until complete dispersion is achieved and bond

  12. Spatial and temporal disaggregation of anthropogenic CO2 emissions from the City of Cape Town

    Directory of Open Access Journals (Sweden)

    Alecia Nickless

    2015-11-01

    Full Text Available This paper describes the methodology used to spatially and temporally disaggregate carbon dioxide emission estimates for the City of Cape Town, to be used for a city-scale atmospheric inversion estimating carbon dioxide fluxes. Fossil fuel emissions were broken down into emissions from road transport, domestic emissions, industrial emissions, and airport and harbour emissions. Using spatially explicit information on vehicle counts, and an hourly scaling factor, vehicle emissions estimates were obtained for the city. Domestic emissions from fossil fuel burning were estimated from household fuel usage information and spatially disaggregated population data from the 2011 national census. Fuel usage data were used to derive industrial emissions from listed activities, which included emissions from power generation, and these were distributed spatially according to the source point locations. The emissions from the Cape Town harbour and the international airport were determined from vessel and aircraft count data, respectively. For each emission type, error estimates were determined through error propagation techniques. The total fossil fuel emission field for the city was obtained by summing the spatial layers for each emission type, accumulated for the period of interest. These results will be used in a city-scale inversion study, and this method implemented in the future for a national atmospheric inversion study.
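
The final accumulation step — summing the sector layers cell by cell and propagating their uncertainties — can be sketched as follows. Grid shape, sector magnitudes and the per-sector relative errors are invented, and independence between sectors (and cells) is assumed for the quadrature sum.

```python
import numpy as np

# Hypothetical 1 km grid (units: kt CO2 per cell per year) for each source sector.
shape = (50, 50)
rng = np.random.default_rng(42)
sectors = {
    "road_transport":  rng.gamma(2.0, 0.5, shape),
    "domestic":        rng.gamma(1.5, 0.3, shape),
    "industry":        rng.gamma(1.0, 1.0, shape),
    "airport_harbour": rng.gamma(0.5, 0.2, shape),
}
# Assumed relative (1-sigma) uncertainty per sector (illustrative values).
rel_err = {"road_transport": 0.20, "domestic": 0.35,
           "industry": 0.15, "airport_harbour": 0.25}

# Total emission field: sum the sector layers cell by cell.
total = sum(sectors.values())

# Error propagation assuming independent sectors: add variances in quadrature.
total_var = sum((rel_err[name] * layer) ** 2 for name, layer in sectors.items())
total_err = np.sqrt(total_var)

print("city total:", total.sum().round(1), "kt/yr,",
      "propagated 1-sigma (independent cells):",
      np.sqrt((total_err ** 2).sum()).round(1), "kt/yr")
```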

  13. Coronary stent on coronary CT angiography: Assessment with model-based iterative reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2016-05-15

    To assess the performance of the model-based iterative reconstruction (MBIR) technique for evaluation of coronary artery stents on coronary CT angiography (CCTA), twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in the coronary stents, we additionally measured the standard deviation of the measured attenuation and the intra-luminal stent diameters of a total of 35 stents with dedicated software. All image noise measurements in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stent (all p < 0.001) were significantly lower with MBIR than with FBP or ASIR. Intraluminal stent diameter was significantly higher with MBIR, as compared with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifact from the stent, leading to better in-stent assessment in patients with coronary artery stents.

  14. Effects of Aggregation on Blood Sedimentation and Conductivity

    Science.gov (United States)

    Zhbanov, Alexander; Yang, Sung

    2015-01-01

    The erythrocyte sedimentation rate (ESR) test has been used for over a century. The Westergren method is routinely used in a variety of clinics. However, the mechanism of erythrocyte sedimentation remains unclear, and the 60 min required for the test seems excessive. We investigated the effects of cell aggregation during blood sedimentation and electrical conductivity at different hematocrits. A sample of blood was drop cast into a small chamber with two planar electrodes placed on the bottom. The measured blood conductivity increased slightly during the first minute and decreased thereafter. We explored various methods of enhancing or retarding the erythrocyte aggregation. Using experimental measurements and theoretical calculations, we show that the initial increase in blood conductivity was indeed caused by aggregation, while the subsequent decrease in conductivity resulted from the deposition of erythrocytes. We present a method for calculating blood conductivity based on effective medium theory. Erythrocytes are modeled as conducting spheroids surrounded by a thin insulating membrane. A digital camera was used to investigate the erythrocyte sedimentation behavior and the distribution of the cell volume fraction in a capillary tube. Experimental observations and theoretical estimations of the settling velocity are provided. We experimentally demonstrate that the disaggregated cells settle much slower than the aggregated cells. We show that our method of measuring the electrical conductivity credibly reflected the ESR. The method was very sensitive to the initial stage of aggregation and sedimentation, while the sedimentation curve for the Westergren ESR test has a very mild slope in the initial time. We tested our method for rapid estimation of the Westergren ESR. We show a correlation between our method of measuring changes in blood conductivity and standard Westergren ESR method. In the future, our method could be examined as a potential means of accelerating

  15. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in quantifying coronary calcium.

    Science.gov (United States)

    Takahashi, Masahiro; Kimura, Fumiko; Umezawa, Tatsuya; Watanabe, Yusuke; Ogawa, Harumi

    2016-01-01

    Adaptive statistical iterative reconstruction (ASIR) has been used to reduce radiation dose in cardiac computed tomography. However, changes in image parameters introduced by ASIR, as compared to filtered back projection (FBP), may influence the quantification of coronary calcium. To investigate the influence of ASIR on calcium quantification in comparison to FBP, CT images from 352 patients were reconstructed using FBP alone, FBP combined with ASIR 30%, 50%, 70%, and ASIR 100%, based on the same raw data. Image noise, plaque density, Agatston scores and calcium volumes were compared among the techniques. Image noise, Agatston score, and calcium volume decreased significantly with ASIR compared to FBP; ASIR reduced Agatston scores by 10.5% to 31.0%. In calcified plaques of both patients and a phantom, ASIR decreased maximum CT values and calcified plaque size. In comparison to FBP, adaptive statistical iterative reconstruction (ASIR) may significantly decrease Agatston scores and calcium volumes. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  16. Iteration and Prototyping in Creating Technical Specifications.

    Science.gov (United States)

    Flynt, John P.

    1994-01-01

    Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)

  17. Effect of dextran-induced changes in refractive index and aggregation on optical properties of whole blood

    International Nuclear Information System (INIS)

    Xu Xiangqun; Wang, Ruikang K; Elder, James B; Tuchin, Valery V

    2003-01-01

    The purpose of the present study is to investigate systematically the mechanisms of alterations in the optical properties of whole blood immersed in the biocompatible agent dextran, and to define the optimal concentration of dextrans required for blood optical clearing in order to enhance the light penetration depth for optical imaging applications. In the experiments, dextrans with different molecular weights and various concentrations were employed and investigated using the optical coherence tomography technique. Changes in light attenuation, refractive index and aggregation properties of blood immersed in dextrans were studied. It was concluded from the results that the mechanisms of blood optical clearing depend on the types of dextrans employed, their concentrations and the application stages. Among the substances applied, Dx500 at a concentration of 0.5 g dl⁻¹ gives the best result in improving light penetration depth through the blood. The increase of light transmission at the beginning of the addition of dextrans is mainly attributed to refractive index matching between the scattering centres and the ground matter. Thereafter, the transmission change is probably due to a dextran-induced aggregation-disaggregation effect. Overall, light scattering in the blood can be effectively reduced by the application of dextrans. This represents a promising approach to increasing the imaging depth for in vivo optical imaging of biological tissue, for example optical coherence tomography

  18. Use Residual Correction Method and Monotone Iterative Technique to Calculate the Upper and Lower Approximate Solutions of Singularly Perturbed Non-linear Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Chi-Chang Wang

    2013-09-01

    Full Text Available This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into a mathematical programming problem involving inequalities; and finally, based on the residual correction concept, the complex constrained problem is transformed into a simpler iteration of equations. As verified by the four examples given in this paper, the proposed method can be used to quickly obtain the upper and lower solutions of problems of this kind, and to easily identify the error range between mean approximate solutions and exact solutions.

  19. Disaggregating Assessment to Close the Loop and Improve Student Learning

    Science.gov (United States)

    Rawls, Janita; Hammons, Stacy

    2015-01-01

    This study examined student learning outcomes for accelerated degree students as compared to conventional undergraduate students, disaggregated by class levels, to develop strategies for then closing the loop with assessment. Using the National Survey of Student Engagement, critical thinking and oral and written communication outcomes were…

  20. GIS aided spatial disaggregation of emission inventories

    International Nuclear Information System (INIS)

    Orthofer, R.; Loibl, W.

    1995-10-01

    We have applied our method to produce detailed NMVOC and NOₓ emission density maps for Austria. While theoretical average emission densities for the whole country would be only 5 t NMVOC and 2.5 t NOₓ per km², the actual emission densities range from zero in the many uninhabited areas up to more than 3,000 t/km² along major highways. In Austria, small scale disaggregation is necessary particularly for the differentiated topography and population patterns in alpine valleys. (author)
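
A toy version of proxy-based spatial disaggregation is sketched below: a national total is split between source groups, and each group's share is spread over the grid in proportion to a proxy layer (population for area sources, road length for traffic). The totals, split fractions and proxy layers are invented for illustration.

```python
import numpy as np

# National emission total to be disaggregated onto a grid (illustrative number).
national_nmvoc_t = 420_000.0          # t/yr

# Proxy layers on a toy grid: population and road length per cell.
rng = np.random.default_rng(7)
population = rng.integers(0, 5_000, size=(20, 20)).astype(float)
road_km    = rng.random((20, 20)) * 3.0

# Split the national total between source groups, then distribute each group
# proportionally to its proxy (the weights of each layer sum to 1).
shares = {"solvents_domestic": (0.6, population), "traffic": (0.4, road_km)}
emission_density = np.zeros((20, 20))
for share, proxy in shares.values():
    weights = proxy / proxy.sum()
    emission_density += share * national_nmvoc_t * weights

print("cell densities range from", emission_density.min().round(2),
      "to", emission_density.max().round(1), "t per cell")
```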

  1. Iterative approach as alternative to S-matrix in modal methods

    Science.gov (United States)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M³ operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M², by applying iterative techniques is discussed. Numerical results are presented to discuss the validity and potential of the proposed approaches.

  2. ITER...ation

    International Nuclear Information System (INIS)

    Troyon, F.

    1997-01-01

    Recurrent attacks against ITER, the new generation of tokamak, are a mix of political and scientific arguments. This short article gives a historical review of the European fusion program. This program has allowed several installations to be built and operated with the aim of obtaining the experimental results necessary to move the program forward. ITER will bring together a fusion reactor core with technologies such as materials, superconducting coils, heating devices and instrumentation in order to validate and delimit the operating range. ITER will be a logical and decisive step towards the use of controlled fusion. (A.C.)

  3. The ITER Thomson scattering core LIDAR diagnostic

    NARCIS (Netherlands)

    Naylor, G.A.; Scannell, R.; Beurskens, M.; Walsh, M.J.; Pastor, I.; Donné, A.J.H.; Snijders, B.; Biel, W.; Meszaros, B.; Giudicotti, L.; Pasqualotto, R.; Marot, L.

    2012-01-01

    The central electron temperature and density of the ITER plasma may be determined by Thomson scattering. A LIDAR topology is proposed in order to minimize the port access required of the ITER vacuum vessel. By using a LIDAR technique, a profile of the electron temperature and density can be

  4. Free-boundary simulations of ITER advanced scenarios

    International Nuclear Information System (INIS)

    Besseghir, K.

    2013-06-01

    The successful operation of ITER advanced scenarios is likely to be a major step forward in the development of controlled fusion as a power production source. ITER advanced scenarios raise specific challenges that are not encountered in presently-operated tokamaks. In this thesis, it is argued that ITER advanced operation may benefit from optimal control techniques. Optimal control ensures high performance operation while guaranteeing tokamak integrity. The application of optimal control techniques for ITER operation is assessed and it is concluded that robust optimisation is appropriate for ITER operation of advanced scenarios. Real-time optimisation schemes are discussed and it is concluded that the necessary conditions of optimality tracking approach may potentially be appropriate for ITER operation, thus offering a viable closed-loop optimal control approach. Simulations of ITER advanced operation are necessary in order to assess the present ITER design and uncover the main difficulties that may be encountered during advanced operation. The DINA-CH and CRONOS full tokamak simulator is used to simulate the operation of the ITER hybrid and steady-state scenarios. It is concluded that the present ITER design is appropriate for performing a hybrid scenario pulse lasting more than 1000 sec, with a flat-top plasma current of 12 MA, and a fusion gain of Q ≅ 8. Similarly, a steady-state scenario without internal transport barrier, with a flat-top plasma current of 10 MA, and with a fusion gain of Q ≅ 5 can be realised using the present ITER design. The sensitivity of the advanced scenarios with respect to transport models and physical assumption is assessed using CRONOS. It is concluded that the hybrid scenario and the steady-state scenario are highly sensitive to the L-H transition timing, to the value of the confinement enhancement factor, to the heating and current drive scenario during ramp-up, and, to a lesser extent, to the density peaking and pedestal

  5. Free-boundary simulations of ITER advanced scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Besseghir, K.

    2013-06-15

    The successful operation of ITER advanced scenarios is likely to be a major step forward in the development of controlled fusion as a power production source. ITER advanced scenarios raise specific challenges that are not encountered in presently-operated tokamaks. In this thesis, it is argued that ITER advanced operation may benefit from optimal control techniques. Optimal control ensures high performance operation while guaranteeing tokamak integrity. The application of optimal control techniques for ITER operation is assessed and it is concluded that robust optimisation is appropriate for ITER operation of advanced scenarios. Real-time optimisation schemes are discussed and it is concluded that the necessary conditions of optimality tracking approach may potentially be appropriate for ITER operation, thus offering a viable closed-loop optimal control approach. Simulations of ITER advanced operation are necessary in order to assess the present ITER design and uncover the main difficulties that may be encountered during advanced operation. The DINA-CH and CRONOS full tokamak simulator is used to simulate the operation of the ITER hybrid and steady-state scenarios. It is concluded that the present ITER design is appropriate for performing a hybrid scenario pulse lasting more than 1000 sec, with a flat-top plasma current of 12 MA, and a fusion gain of Q ≅ 8. Similarly, a steady-state scenario without internal transport barrier, with a flat-top plasma current of 10 MA, and with a fusion gain of Q ≅ 5 can be realised using the present ITER design. The sensitivity of the advanced scenarios with respect to transport models and physical assumption is assessed using CRONOS. It is concluded that the hybrid scenario and the steady-state scenario are highly sensitive to the L-H transition timing, to the value of the confinement enhancement factor, to the heating and current drive scenario during ramp-up, and, to a lesser extent, to the density peaking and pedestal

  6. Aggregation factor analysis for protein formulation by a systematic approach using FTIR, SEC and design of experiments techniques.

    Science.gov (United States)

    Feng, Yan Wen; Ooishi, Ayako; Honda, Shinya

    2012-01-05

    A simple systematic approach using Fourier transform infrared (FTIR) spectroscopy, size exclusion chromatography (SEC) and design of experiments (DOE) techniques was applied to the analysis of aggregation factors for protein formulations in stress and accelerated testing. FTIR and SEC were used to evaluate protein conformational and storage stabilities, respectively. DOE was used to determine a suitable formulation and to analyze both the main effects of single factors and the interaction effects of combined factors on aggregation. Our results indicated that (i) analysis at a low protein concentration is not always applicable to high-concentration formulations; (ii) investigating the interaction effects of combined factors as well as the main effects of single factors is effective for improving the conformational stability of proteins; (iii) with the exception of pH, the results of stress testing with regard to aggregation factors can inform a suitable formulation without performing time-consuming accelerated testing; (iv) a suitable pH condition should be determined not in stress testing but in accelerated testing, because of the inconsistent effects of pH on conformational and storage stabilities. In summary, we propose a three-step strategy, using FTIR, SEC and DOE techniques, to effectively analyze the aggregation factors and perform a rapid screening for suitable protein formulation conditions. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Daily disaggregation of simulated monthly flows using different rainfall datasets in southern Africa

    Directory of Open Access Journals (Sweden)

    D.A. Hughes

    2015-09-01

    New hydrological insights for the region: There are substantial regional differences in the success of the monthly hydrological model, which inevitably affects the success of the daily disaggregation results. There are also regional differences in the success of using global rainfall data sets (Climatic Research Unit (CRU) datasets for monthly flows; National Oceanic and Atmospheric Administration African Rainfall Climatology, version 2 (ARC2) satellite data for daily flows). The overall conclusion is that the disaggregation method presents a parsimonious approach to generating daily flow simulations from existing monthly simulations and that these daily flows are likely to be useful for some purposes (e.g. water quality modelling), but less so for others (e.g. peak flow analysis).
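
A minimal sketch of the disaggregation idea: within each month, a daily shape is built from the daily rainfall signal (plus a flat baseflow component) and rescaled so that the days sum back to the simulated monthly volume. The response lag, baseflow fraction and data are invented, and the paper's actual procedure is more elaborate.

```python
import numpy as np

def disaggregate_month(monthly_flow, daily_rainfall, baseflow_frac=0.3):
    """Spread one month's simulated flow volume across the days of the month.

    The daily shape follows a lightly smoothed daily rainfall signal plus a
    flat baseflow component, then is rescaled so the days sum back to the
    monthly total, which is the property the disaggregation must preserve.
    """
    rain = np.asarray(daily_rainfall, dtype=float)
    # Simple two-day smoothing as a crude proxy for catchment response lag.
    response = np.convolve(rain, [0.6, 0.4])[: len(rain)]
    if response.sum() == 0:
        shape = np.ones(len(rain))
    else:
        shape = baseflow_frac / len(rain) + (1 - baseflow_frac) * response / response.sum()
    return monthly_flow * shape / shape.sum()

rain = np.array([0, 0, 12, 30, 5, 0, 0, 0, 8, 0] + [0] * 20)            # 30 days, mm
daily_flows = disaggregate_month(monthly_flow=45.0, daily_rainfall=rain) # e.g. Mm3
print(daily_flows.round(2), daily_flows.sum())                           # sums back to 45.0
```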

  8. A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes

    OpenAIRE

    Zhang, Nevin Lianwen; Lee, Stephen S.; Zhang, Weihong

    2013-01-01

    We present a technique for speeding up the convergence of value iteration for partially observable Markov decisions processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithms. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that th...
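
For background, the fully observable case that the POMDP algorithms generalise can be written in a few lines. The sketch below is plain value iteration for a small MDP; the POMDP algorithms in the paper operate on belief states instead, which this does not attempt, and the toy transition and reward numbers are invented.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Standard value iteration for a fully observable MDP.

    P[a, s, t] = transition probability, R[a, s] = expected immediate reward.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("ast,t->as", P, V)   # Q[a, s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Two-state, two-action toy problem.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.5, 0.5], [0.0, 1.0]]])    # action 1
R = np.array([[1.0, 0.0],                   # reward of action 0 in each state
              [2.0, 0.5]])                  # reward of action 1 in each state
V, policy = value_iteration(P, R)
print(V.round(3), policy)
```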

  9. Fps/Fes and Fer non-receptor protein-tyrosine kinases regulate collagen- and ADP-induced platelet aggregation.

    Science.gov (United States)

    Senis, Y A; Sangrar, W; Zirngibl, R A; Craig, A W B; Lee, D H; Greer, P A

    2003-05-01

    Fps/Fes and Fer proto-oncoproteins are structurally related non-receptor protein-tyrosine kinases implicated in signaling downstream from cytokines, growth factors and immune receptors. We show that Fps/Fes and Fer are expressed in human and mouse platelets, and are activated following stimulation with collagen and collagen-related peptide (CRP), suggesting a role in GPVI receptor signaling. Fer was also activated following stimulation with thrombin and a protease-activated receptor 4 (PAR4)-activating peptide, suggesting a role in signaling downstream from the G protein-coupled PAR4. There were no detectable perturbations in CRP-induced activation of Syk, PLCgamma2, cortactin, Erk, Jnk, Akt or p38 in platelets from mice lacking Fps/Fes, Fer, or both kinases. Platelets lacking Fps/Fes, from a targeted fps/fes null strain of mice, showed increased rates and amplitudes of collagen-induced aggregation, relative to wild-type platelets. P-Selectin expression was also elevated on the surface of Fps/Fes-null platelets in response to CRP. Fer-deficient platelets, from mice targeted with a kinase-inactivating mutation, disaggregated more rapidly than wild-type platelets in response to ADP. This report provides the first evidence that Fps/Fes and Fer are expressed in platelets and become activated downstream from the GPVI collagen receptor, and that Fer is activated downstream from a G protein-coupled receptor. Furthermore, using targeted mouse models we show that deficiency in Fps/Fes or Fer resulted in dysregulated platelet aggregation and disaggregation, demonstrating a role for these kinases in regulating platelet functions.

  10. Disaggregation of small, cohesive rubble pile asteroids due to YORP

    Science.gov (United States)

    Scheeres, D. J.

    2018-04-01

    The implication of small amounts of cohesion within relatively small rubble pile asteroids is investigated with regard to their evolution under the persistent presence of the YORP effect. We find that below a characteristic size, which is a function of cohesive strength, density and other properties, rubble pile asteroids can enter a "disaggregation phase" in which they are subject to repeated fissions after which the formation of a stabilizing binary system is not possible. Once this threshold is passed rubble pile asteroids may be disaggregated into their constituent components within a finite time span. These constituent components will have their own spin limits - albeit potentially at a much higher spin rate due to the greater strength of a monolithic body. The implications of this prediction are discussed and include modification of size distributions, prevalence of monolithic bodies among meteoroids and the lifetime of small rubble pile bodies in the solar system. The theory is then used to place constraints on the strength of binary asteroids characterized as a function of their type.

  11. Asphaltene Aggregation and Fouling Behavior

    Science.gov (United States)

    Derakhshesh, Marzie

    This thesis explored the properties of asphaltene nano-aggregates in crude oil and toluene based solutions and fouling at process furnace temperatures, and the links between these two phenomena. The link between stability of asphaltenes at ambient conditions and fouling at the conditions of a delayed coker furnace, at over 450 °C, was examined by blending crude oil with an aliphatic diluent in different ratios. The stability of the blends was measured using an S-value analyzer, then fouling rates were measured on electrically heated stainless steel 316 wires in an autoclave reactor. The less stable the blend, the greater the rate and extent of fouling. The most severe fouling occurred with the unstable asphaltenes. SEM imaging of the foulant illustrates very different textures, with the structure becoming more porous with lower stability. Under cross-polarized light, the coke shows the presence of mesophase in the foulant layer. These data suggest a correlation between the fouling rate at high temperature furnace conditions and the stability index of the crude oil. Three organic polysulfides were introduced to the crude oil to examine their effect on fouling. The polysulfides are able to reduce coking and carbon monoxide generation in steam crackers. The fouling results demonstrated that the polysulfide with more sulfur content increased the amount of corrosion-fouling of the wire. Various additives, solvents, ultrasound, and heat were employed to attempt to completely disaggregate the asphaltene nano-aggregates in solution at room temperature. The primary analytical technique used to monitor the nano-aggregation state of the asphaltenes in solution was UV-visible spectroscopy. The results indicate that stronger solvents, such as pyridine and quinoline, combined with ionic liquids yield a slight reduction in the apparent absorbance at longer wavelengths, indicative of a decrease in the nano-aggregate size, although the magnitude of the decrease is not significant.

  12. Analysis of an aggregation-based algebraic two-grid method for a rotated anisotropic diffusion problem

    KAUST Repository

    Chen, Meng-Huo; Greenbaum, Anne

    2015-01-01

    Summary: A two-grid convergence analysis based on the paper [Algebraic analysis of aggregation-based multigrid, by A. Napov and Y. Notay, Numer. Lin. Alg. Appl. 18 (2011), pp. 539-564] is derived for various aggregation schemes applied to a finite element discretization of a rotated anisotropic diffusion equation. As expected, it is shown that the best aggregation scheme is one in which aggregates are aligned with the anisotropy. In practice, however, this is not what automatic aggregation procedures do. We suggest approaches for determining appropriate aggregates based on eigenvectors associated with small eigenvalues of a block splitting matrix or based on minimizing a quantity related to the spectral radius of the iteration matrix. © 2015 John Wiley & Sons, Ltd.

  13. Analysis of an aggregation-based algebraic two-grid method for a rotated anisotropic diffusion problem

    KAUST Repository

    Chen, Meng-Huo

    2015-03-18

    Summary: A two-grid convergence analysis based on the paper [Algebraic analysis of aggregation-based multigrid, by A. Napov and Y. Notay, Numer. Lin. Alg. Appl. 18 (2011), pp. 539-564] is derived for various aggregation schemes applied to a finite element discretization of a rotated anisotropic diffusion equation. As expected, it is shown that the best aggregation scheme is one in which aggregates are aligned with the anisotropy. In practice, however, this is not what automatic aggregation procedures do. We suggest approaches for determining appropriate aggregates based on eigenvectors associated with small eigenvalues of a block splitting matrix or based on minimizing a quantity related to the spectral radius of the iteration matrix. © 2015 John Wiley & Sons, Ltd.
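
    To make the aggregation idea concrete, the sketch below runs a two-grid cycle in which each aggregate of fine nodes is mapped to one coarse unknown via a piecewise-constant prolongation P, and the coarse operator is formed as the Galerkin product P^T A P. The 1D Laplacian, pairwise aggregates and damped-Jacobi smoother are illustrative assumptions; the paper's rotated anisotropic problem and its anisotropy-aligned aggregates are not reproduced.

```python
import numpy as np

def laplacian_1d(n):
    # standard tridiagonal (2, -1, -1) stencil
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def pairwise_prolongation(n):
    # aggregate nodes {0,1}, {2,3}, ... into one coarse unknown each
    nc = (n + 1) // 2
    P = np.zeros((n, nc))
    for i in range(n):
        P[i, i // 2] = 1.0
    return P

def two_grid(A, b, x, P, nu=2, omega=0.8):
    D_inv = 1.0 / np.diag(A)
    for _ in range(nu):                      # damped-Jacobi pre-smoothing
        x = x + omega * D_inv * (b - A @ x)
    Ac = P.T @ A @ P                         # Galerkin coarse-grid operator
    rc = P.T @ (b - A @ x)
    x = x + P @ np.linalg.solve(Ac, rc)      # coarse-grid correction
    for _ in range(nu):                      # post-smoothing
        x = x + omega * D_inv * (b - A @ x)
    return x

n = 32
A, b, x = laplacian_1d(n), np.ones(n), np.zeros(n)
P = pairwise_prolongation(n)
for _ in range(20):
    x = two_grid(A, b, x, P)
print(np.linalg.norm(b - A @ x))             # residual after 20 cycles
```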

  14. Adaptive Statistical Iterative Reconstruction-V Versus Adaptive Statistical Iterative Reconstruction: Impact on Dose Reduction and Image Quality in Body Computed Tomography.

    Science.gov (United States)

    Gatti, Marco; Marchisio, Filippo; Fronda, Marco; Rampado, Osvaldo; Faletti, Riccardo; Bergamasco, Laura; Ropolo, Roberto; Fonio, Paolo

    The aim of this study was to evaluate the impact on dose reduction and image quality of the new iterative reconstruction technique: adaptive statistical iterative reconstruction V (ASIR-V). Fifty consecutive oncologic patients acted as case controls, undergoing during their follow-up a computed tomography scan both with ASIR and ASIR-V. Each study was analyzed in a double-blinded fashion by 2 radiologists. Both quantitative and qualitative analyses of image quality were conducted. Computed tomography scanner radiation output was 38% (29%-45%) lower for the ASIR-V examinations than for the ASIR ones. The quantitative image noise was significantly lower with ASIR-V. Adaptive statistical iterative reconstruction-V had a higher performance for the subjective image noise (P = 0.01 for 5 mm and P = 0.009 for 1.25 mm), the other parameters (image sharpness, diagnostic acceptability, and overall image quality) being similar (P > 0.05). Adaptive statistical iterative reconstruction-V is a new iterative reconstruction technique that has the potential to provide image quality equal to or greater than ASIR, with a dose reduction around 40%.

  15. The methods for generating tomographic images using transmission, emission and nuclear magnetic resonance techniques. II. Fourier method and iterative methods

    International Nuclear Information System (INIS)

    Ursu, I.; Demco, D.E.; Gligor, T.D.; Pop, G.; Dollinger, R.

    1987-01-01

    In a wide variety of applications it is necessary to infer the structure of a multidimensional object from a set of its projections. Computed tomography is at present widely used in the medical field, but its industrial applications may ultimately far exceed the medical ones. Two techniques for reconstructing objects from their projections are presented: Fourier methods and iterative techniques. The paper also contains a brief comparative study of the reconstruction algorithms. (authors)

  16. Towards constraint-based aggregation of energy flexibilities

    DEFF Research Database (Denmark)

    Valsomatzis, Emmanouil; Pedersen, Torben Bach; Abello, Alberto

    2016-01-01

    The aggregation of energy flexibilities enables individual producers and/or consumers with small loads to directly participate in the emerging energy markets. On the other hand, aggregation of such flexibilities might also create problems for the operation of the electrical grid. In this paper, we present the problem of aggregating energy flexibilities taking into account grid capacity limitations and introduce a heuristic aggregation technique. We show through an experimental setup that our proposed technique, compared to a baseline approach, not only leads to a valid unit commitment result...

  17. Dynamic re-weighted total variation technique and statistic Iterative reconstruction method for x-ray CT metal artifact reduction

    Science.gov (United States)

    Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming

    2017-07-01

    Over the years, the X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the image reconstructed would be polluted by severe metal artifacts, which affect the doctor's diagnosis of disease. In this work, we proposed a dynamic re-weighted total variation (DRWTV) technique combined with the statistic iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects the tissue details better than RWTV. Besides, the DRWTV can suppress the artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both simulated phantom dataset and clinical dataset, which are the teeth phantom with two metal implants and the skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, which are SIR and SIR constrained by RWTV regulation (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.

  18. Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques

    NARCIS (Netherlands)

    Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.

    2015-01-01

    AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were

  19. The ITER remote maintenance system

    International Nuclear Information System (INIS)

    Tesini, A.; Palmer, J.

    2007-01-01

    ITER is a joint international research and development project that aims to demonstrate the scientific and technological feasibility of fusion power. As soon as the plasma operation begins using tritium, the replacement of the vacuum vessel internal components will need to be done with remote handling techniques. To accomplish these operations ITER has equipped itself with a Remote Maintenance System; this includes the Remote Handling equipment set and the Hot Cell facility. Both need to work in a cooperative way, with the aim of minimizing the machine shutdown periods and maximizing the machine availability. The ITER Remote Handling equipment set is required to be available, robust, reliable and retrievable. The machine components, to be remotely handleable, are required to be designed simply so as to ease their maintenance. The baseline ITER Remote Handling equipment is described. The ITER Hot Cell Facility is required to provide a controlled and shielded area for the execution of repair operations (carried out using dedicated remote handling equipment) on those activated components which need to be returned to service inside the vacuum vessel. The Hot Cell also provides the equipment and space for the processing and temporary storage of the operational and decommissioning radwaste. A conceptual ITER Hot Cell Facility is described. (orig.)

  20. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.
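
    As a rough software analogue of such an array, the sketch below simulates an n x n systolic array for matrix multiplication: rows of A enter from the left and columns of B from the top with the usual input skew, and each processing element multiply-accumulates and forwards its operands one hop per cycle. The cycle-accurate details of the architectures in the paper are not reproduced; this is only an illustrative simulation.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle simulation of an n x n systolic array computing C = A B."""
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))      # operand each PE will forward to its right neighbour
    b_reg = np.zeros((n, n))      # operand each PE will forward to the PE below
    for t in range(3 * n - 2):    # total number of cycles needed
        new_a = np.zeros((n, n))
        new_b = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                # boundary PEs read the skewed input streams, inner PEs read neighbours
                a_in = a_reg[i, j - 1] if j > 0 else (A[i, t - i] if 0 <= t - i < n else 0.0)
                b_in = b_reg[i - 1, j] if i > 0 else (B[t - j, j] if 0 <= t - j < n else 0.0)
                C[i, j] += a_in * b_in                 # multiply-accumulate
                new_a[i, j], new_b[i, j] = a_in, b_in
        a_reg, b_reg = new_a, new_b                    # operands advance one hop per cycle
    return C

A = np.arange(9, dtype=float).reshape(3, 3)
B = np.eye(3) + 1.0
print(np.allclose(systolic_matmul(A, B), A @ B))       # True
```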

  1. A comparison in the reconstruction of neutron spectrums using classical iterative techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego, E.

    2009-10-01

    One of the key drawbacks of the BUNKI code is that the reconstruction of the spectrum must start from a priori knowledge as close as possible to the solution that is sought. The user has to specify the initial spectrum or obtain it through a subroutine called MAXIET, which calculates a Maxwellian and a 1/E spectrum as the initial guess. Because iterative procedures for reconstructing a neutron spectrum need an initial spectrum, new proposals for its selection are necessary. Based on the experience gained with BUNKI, a widely used reconstruction method, a new computational tool for neutron spectrometry and dosimetry has been developed, which operates by means of an iterative algorithm for the reconstruction of neutron spectra. The main feature of this tool is that, unlike the existing iterative codes, the choice of the initial spectrum is performed automatically by the program through a catalog of neutron spectra. To develop the code, the iterative routine SPUNIT was selected as the algorithm to be used in the computing tool, together with the response matrix UTA4 for 31 energy groups. (author)
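
    For illustration, the sketch below applies a generic multiplicative (MLEM-style) unfolding update to a synthetic few-channel measurement, in the spirit of iterative spectrum-reconstruction codes. The response matrix, the number of detectors and the flat initial guess are all made-up assumptions; this is not the SPUNIT routine or the UTA4 response matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_det, n_groups = 7, 31                               # e.g. a few detectors, 31 energy groups
R = rng.uniform(0.0, 1.0, size=(n_det, n_groups))     # made-up response matrix
true_phi = np.exp(-0.5 * ((np.arange(n_groups) - 10) / 4.0) ** 2)   # synthetic spectrum
counts = R @ true_phi                                 # simulated detector readings

def multiplicative_unfold(R, counts, phi0, iters=500):
    """Generic MLEM-style multiplicative update for counts = R @ phi."""
    phi = phi0.astype(float).copy()
    sens = R.sum(axis=0)                              # column sums, sum_i R_ij
    for _ in range(iters):
        est = R @ phi                                 # predicted readings
        ratio = np.where(est > 0, counts / est, 0.0)
        phi *= (R.T @ ratio) / np.where(sens > 0, sens, 1.0)
    return phi

phi0 = np.ones(n_groups)          # flat initial guess; BUNKI-like codes would instead
                                  # start from a Maxwellian or 1/E shape
phi = multiplicative_unfold(R, counts, phi0)
print(np.linalg.norm(R @ phi - counts))
```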

  2. Study of the influence of ultraviolet radiation on aggregative properties of blood red cell by light backscattering

    International Nuclear Information System (INIS)

    Azhnaj, L.; Chueltehm, D.

    1988-01-01

    A method based on measuring the intensity of a backscattered laser beam, resulting from the angular distribution of the scattered light, is investigated. The method permits study of the mechanisms of aggregation and disaggregation processes induced by ultraviolet radiation and by the action of some inductors. The ultraviolet light acting directly on erythrocyte rouleaux of 10 x 100 μm causes scattering of a laser beam of wavelength 632.8 nm. Accordingly, the light intensity is measured at an angle of approximately 180°. A stabilized blood sample is exposed to the laser beam by means of fiber optics. The backscattered light, passed through a photomultiplier and direct-current supply, is recorded. A quantitative measure of the erythrocyte aggregation process is calculated from the plot. The blood sample is mixed by a magnetic mixer and the measuring temperature is kept constant at 37 °C. Accordingly, the present model can adequately reproduce the complex kinetics of red blood cells. The influence of ultraviolet radiation and of different kinds of inductors on erythrocyte aggregation is studied experimentally as a function of time. 2 figs. (B.Sh.)

  3. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim

    2011-11-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  4. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim; El Sayed, Tamer S.; Ramadan, Hany E.

    2011-01-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  5. HIV/AIDS National Strategic Plans of Sub-Saharan African countries: an analysis for gender equality and sex-disaggregated HIV targets.

    Science.gov (United States)

    Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan

    2017-12-01

    National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0-92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women's access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve an equitable

  6. HIV/AIDS National Strategic Plans of Sub-Saharan African countries: an analysis for gender equality and sex-disaggregated HIV targets

    Science.gov (United States)

    Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan

    2017-01-01

    Abstract National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0–92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women’s access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve

  7. Model-based iterative reconstruction technique for radiation dose reduction in chest CT: comparison with the adaptive statistical iterative reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni [University of Tokyo, Department of Radiology, Graduate School of Medicine, Bunkyo-ku, Tokyo (Japan)

    2012-08-15

    To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)

  8. A Replication of "Using self-esteem to disaggregate psychopathy, narcissism, and aggression (2013)"

    Directory of Open Access Journals (Sweden)

    Durand, Guillaume

    2016-09-01

    Full Text Available The present study is a replication of Falkenbach, Howe, and Falki (2013). Using self-esteem to disaggregate psychopathy, narcissism, and aggression. Personality and Individual Differences, 54(7), 815-820.

  9. Concrete Waste Recycling Process for High Quality Aggregate

    International Nuclear Information System (INIS)

    Ishikura, Takeshi; Fujii, Shin-ichi

    2008-01-01

    A large amount of concrete waste is generated during nuclear power plant (NPP) dismantling. Non-contaminated concrete waste is assumed to be disposed of in a landfill site, but that will not be a solution, especially in the future, because of the decreasing availability of sites and natural resources. Concerning concrete recycling, demand for roadbeds and backfill tends to be less than the amount of dismantled concrete generated in a single rural site, and conventional recycled aggregate is limited in its use to non-structural concrete because its quality is inferior to that of ordinary natural aggregate. Therefore, it is vital to develop high-quality recycled aggregate so that dismantled concrete can be put to general use. If recycled aggregate suitable for structural concrete is available, the dismantled concrete is recyclable as aggregate for industry, including the nuclear field. The authors developed techniques for reclaiming high-quality aggregate from the large amount of concrete generated during NPP decommissioning. Concrete from NPP buildings has good features for aggregate recycling: a large quantity of high-quality aggregate from the same origin, record keeping of the aggregate origin, and few impurities such as wood and plastics in the dismantled concrete. The target of the recycled aggregate in this development is to meet the quality criteria for NPP concrete as prescribed in JASS 5N 'Specification for Nuclear Power Facility Reinforced Concrete' and JASS 5 'Specification for Reinforced Concrete Work'. The target for recycled aggregate concrete is performance comparable to that of ordinary aggregate concrete. The high-quality recycled aggregate production techniques are assumed to apply to the recycling of large amounts of non-contaminated concrete. These techniques can also be applied to slightly contaminated concrete dismantled from the radiological control area (RCA), together with a free-release survey. In conclusion, a technology for recycling dismantled concrete into high-quality aggregate was developed

  10. AIR Tools - A MATLAB package of algebraic iterative reconstruction methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    2012-01-01

    We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
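
    The two method classes named in the abstract can be sketched in a few lines; the NumPy code below shows a Kaczmarz sweep (ART) and a Landweber-type simultaneous update from the SIRT family, applied to a toy linear system standing in for a discretized tomography problem. It is an illustration only, not the AIR Tools package or its relaxation-parameter strategies.

```python
import numpy as np

def art_kaczmarz(A, b, x0=None, sweeps=20, relax=1.0):
    """Row-action ART: one Kaczmarz projection per equation, swept cyclically."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norm2 = (A ** 2).sum(axis=1)
    for _ in range(sweeps):
        for i in range(m):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

def sirt_landweber(A, b, x0=None, iters=200, relax=1.0):
    """A simple simultaneous (SIRT-family) update of Landweber type."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    step = relax / np.linalg.norm(A, 2) ** 2     # keeps the iteration convergent
    for _ in range(iters):
        x += step * A.T @ (b - A @ x)
    return x

# toy overdetermined system standing in for a discretized tomography problem
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
print(np.linalg.norm(art_kaczmarz(A, b) - x_true))
print(np.linalg.norm(sirt_landweber(A, b) - x_true))
```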

  11. A sparse electromagnetic imaging scheme using nonlinear landweber iterations

    KAUST Repository

    Desmal, Abdulla

    2015-10-26

    Development and use of electromagnetic inverse scattering techniques for imaging sparse domains have been on the rise following the recent advancements in solving sparse optimization problems. Existing techniques rely on iteratively converting the nonlinear forward scattering operator into a sequence of linear ill-posed operations (for example using the Born iterative method) and applying sparsity constraints to the linear minimization problem of each iteration through the use of L0/L1-norm penalty term (A. Desmal and H. Bagci, IEEE Trans. Antennas Propag, 7, 3878–3884, 2014, and IEEE Trans. Geosci. Remote Sens., 3, 532–536, 2015). It has been shown that these techniques produce more accurate and sharper images than their counterparts which solve a minimization problem constrained with smoothness promoting L2-norm penalty term. But these existing techniques are only applicable to investigation domains involving weak scatterers because the linearization process breaks down for high values of dielectric permittivity.
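
    A hedged, purely linear illustration of sparsity-constrained Landweber iterations is given below: the classic iterative soft-thresholding (ISTA) update, in which a Landweber step is followed by an L1 shrinkage. The random operator and the sparse contrast vector are synthetic, and the nonlinear inverse-scattering formulation of the paper is not reproduced.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_landweber(A, y, lam=0.05, iters=500):
    """Landweber iterations with an L1 soft-thresholding step (ISTA)."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size small enough for stability
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + tau * A.T @ (y - A @ x), tau * lam)
    return x

# toy linearised problem: a sparse contrast vector observed through a random operator
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[[7, 40, 91]] = [1.0, -0.5, 2.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_rec = sparse_landweber(A, y)
print(np.flatnonzero(np.abs(x_rec) > 0.1))       # indices of the recovered support
```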

  12. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. The second best program using iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
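
    The core solver mentioned in the abstract can be illustrated with a standard Jacobi-preconditioned conjugate gradient routine, sketched below on a small synthetic symmetric positive definite system standing in for the mixed model equations. The three-step iteration-on-data reordering that the paper introduces is not reproduced here.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, maxit=1000):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner."""
    M_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD stand-in for the mixed-model equations
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = jacobi_pcg(A, b)
print(np.linalg.norm(A @ x - b))
```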

  13. Multivariate exploration of non-intrusive load monitoring via spatiotemporal pattern network

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chao; Akintayo, Adedotun; Jiang, Zhanhong; Henze, Gregor P.; Sarkar, Soumik

    2018-02-01

    Non-intrusive load monitoring (NILM) of electrical demand for the purpose of identifying load components has thus far mostly been studied using univariate data, e.g., using only whole building electricity consumption time series to identify a certain type of end-use such as lighting load. However, using additional variables in the form of multivariate time series data may provide more information in terms of extracting distinguishable features in the context of energy disaggregation. In this work, a novel probabilistic graphical modeling approach, namely the spatiotemporal pattern network (STPN) is proposed for energy disaggregation using multivariate time-series data. The STPN framework is shown to be capable of handling diverse types of multivariate time-series to improve the energy disaggregation performance. The technique outperforms the state of the art factorial hidden Markov models (FHMM) and combinatorial optimization (CO) techniques in multiple real-life test cases. Furthermore, based on two homes' aggregate electric consumption data, a similarity metric is defined for the energy disaggregation of one home using a trained model based on the other home (i.e., out-of-sample case). The proposed similarity metric allows us to enhance scalability via learning supervised models for a few homes and deploying such models to many other similar but unmodeled homes with significantly high disaggregation accuracy.
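
    For context, the combinatorial optimisation (CO) baseline that the abstract compares against can be written in a few lines: for each aggregate reading, choose the on/off combination of known appliance ratings that minimises the mismatch. The appliance names and wattages below are illustrative assumptions, and the spatiotemporal pattern network itself is not shown.

```python
from itertools import product

# assumed appliance ratings in watts (illustrative only)
ratings = {"fridge": 120.0, "kettle": 2000.0, "lighting": 300.0}

def co_disaggregate(aggregate_watts, ratings):
    """Exhaustive CO baseline: best on/off combination for one aggregate reading."""
    names = list(ratings)
    best_states, best_err = None, float("inf")
    for states in product((0, 1), repeat=len(names)):
        est = sum(s * ratings[n] for s, n in zip(states, names))
        err = abs(aggregate_watts - est)
        if err < best_err:
            best_states, best_err = states, err
    return dict(zip(names, best_states)), best_err

print(co_disaggregate(2130.0, ratings))   # kettle + fridge explains the reading best
```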

  14. Verifying large modular systems using iterative abstraction refinement

    International Nuclear Information System (INIS)

    Lahtinen, Jussi; Kuismin, Tuomas; Heljanko, Keijo

    2015-01-01

    Digital instrumentation and control (I&C) systems are increasingly used in the nuclear engineering domain. The exhaustive verification of these systems is challenging, and the usual verification methods such as testing and simulation are typically insufficient. Model checking is a formal method that is able to exhaustively analyse the behaviour of a model against a formally written specification. If the model checking tool detects a violation of the specification, it will give out a counter-example that demonstrates how the specification is violated in the system. Unfortunately, sometimes real life system designs are too big to be directly analysed by traditional model checking techniques. We have developed an iterative technique for model checking large modular systems. The technique uses abstraction based over-approximations of the model behaviour, combined with iterative refinement. The main contribution of the work is the concrete abstraction refinement technique based on the modular structure of the model, the dependency graph of the model, and a refinement sampling heuristic similar to delta debugging. The technique is geared towards proving properties, and outperforms BDD-based model checking, the k-induction technique, and the property directed reachability algorithm (PDR) in our experiments. - Highlights: • We have developed an iterative technique for model checking large modular systems. • The technique uses BDD-based model checking, k-induction, and PDR in parallel. • We have tested our algorithm by verifying two models with it. • The technique outperforms classical model checking methods in our experiments

  15. Iterative algorithm for the volume integral method for magnetostatics problems

    International Nuclear Information System (INIS)

    Pasciak, J.E.

    1980-11-01

    Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double layer dipole magnet are given. Error estimates for the linearized problem are also derived.

  16. Model-based iterative reconstruction technique for radiation dose reduction in chest CT: comparison with the adaptive statistical iterative reconstruction technique

    International Nuclear Information System (INIS)

    Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni

    2012-01-01

    To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)

  17. Disaggregating Orders of Water Scarcity - The Politics of Nexus in the Wami-Ruvu River Basin, Tanzania

    Directory of Open Access Journals (Sweden)

    Anna Mdee

    2017-02-01

    Full Text Available This article considers the dilemma of managing competing uses of surface water in ways that respond to social, ecological and economic needs. Current approaches to managing competing water use, such as Integrated Water Resources Management (IWRM) and the concept of the water-energy-food nexus, do not adequately disaggregate the political nature of water allocations. This is analysed using Mehta's (2014) framework on orders of scarcity to disaggregate narratives of water scarcity in two ethnographic case studies in the Wami-Ruvu River Basin in Tanzania: one of a mountain river that provides water to urban Morogoro, and another of a large donor-supported irrigation scheme on the Wami River. These case studies allow us to explore different interfaces in the food-water-energy nexus. The article makes two points: that disaggregating water scarcity is essential for analysing the nexus; and that current institutional frameworks (such as IWRM) mask the political nature of the nexus, and therefore do not provide an adequate platform for adjudicating the interfaces of competing water use.

  18. Development of in situ cleaning techniques for diagnostic mirrors in ITER

    International Nuclear Information System (INIS)

    Litnovsky, A.; Laengner, M.; Matveeva, M.; Schulz, Ch.; Marot, L.; Voitsenya, V.S.; Philipps, V.; Biel, W.; Samm, U.

    2011-01-01

    Mirrors will be used in all optical and laser-based diagnostic systems of ITER. In the severe environment, the optical characteristics of mirrors will be degraded, hampering the entire performance of the respective diagnostics. A minute impurity deposition of 20 nm of carbon on the mirror is sufficient to decrease the mirror reflectivity by tens of percent outlining the necessity of the mirror cleaning in ITER. The results of R and D on plasma cleaning of molybdenum diagnostic mirrors are reported. The mirrors contaminated with amorphous carbon films in the laboratory conditions and in the tokamaks were cleaned in steady-state hydrogenic plasmas. The maximum cleaning efficiency of 4.2 nm/min was reached for the laboratory and soft tokamak hydrocarbon films, whereas for the hard tokamak films the carbidization of mirrors drastically decreased the cleaning efficiency down to 0.016 nm/min. This implies the necessity of sputtering cleaning of contaminated mirrors as the only reliable tool to remove the deposits by plasma cleaning. An overview of R and D program on mirror cleaning is provided along with plans for further studies and the recommendations for ITER mirror-based diagnostics.

  19. Wall conditioning for ITER: Current experimental and modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Douai, D., E-mail: david.douai@cea.fr [CEA, IRFM, Association Euratom-CEA, 13108 St. Paul lez Durance (France); Kogut, D. [CEA, IRFM, Association Euratom-CEA, 13108 St. Paul lez Durance (France); Wauters, T. [LPP-ERM/KMS, Association Belgian State, 1000 Brussels (Belgium); Brezinsek, S. [FZJ, Institut für Energie- und Klimaforschung Plasmaphysik, 52441 Jülich (Germany); Hagelaar, G.J.M. [Laboratoire Plasma et Conversion d’Energie, UMR5213, Toulouse (France); Hong, S.H. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Lomas, P.J. [CCFE, Culham Science Centre, OX14 3DB Abingdon (United Kingdom); Lyssoivan, A. [LPP-ERM/KMS, Association Belgian State, 1000 Brussels (Belgium); Nunes, I. [Associação EURATOM-IST, Instituto de Plasmas e Fusão Nuclear, 1049-001 Lisboa (Portugal); Pitts, R.A. [ITER International Organization, F-13067 St. Paul lez Durance (France); Rohde, V. [Max-Planck-Institut für Plasmaphysik, 85748 Garching (Germany); Vries, P.C. de [ITER International Organization, F-13067 St. Paul lez Durance (France)

    2015-08-15

    Wall conditioning will be required in ITER to control fuel and impurity recycling, as well as tritium (T) inventory. Analysis of conditioning cycle on the JET, with its ITER-Like Wall is presented, evidencing reduced need for wall cleaning in ITER compared to JET–CFC. Using a novel 2D multi-fluid model, current density during Glow Discharge Conditioning (GDC) on the in-vessel plasma-facing components (PFC) of ITER is predicted to approach the simple expectation of total anode current divided by wall surface area. Baking of the divertor to 350 °C should desorb the majority of the co-deposited T. ITER foresees the use of low temperature plasma based techniques compatible with the permanent toroidal magnetic field, such as Ion (ICWC) or Electron Cyclotron Wall Conditioning (ECWC), for tritium removal between ITER plasma pulses. Extrapolation of JET ICWC results to ITER indicates removal comparable to estimated T-retention in nominal ITER D:T shots, whereas GDC may be unattractive for that purpose.

  20. Characteristics and Performance of Existing Load Disaggregation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Butner, Ryan S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hao, He [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baechler, Michael C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-04-10

    Non-intrusive load monitoring (NILM) or non-intrusive appliance load monitoring (NIALM) is an analytic approach to disaggregate building loads based on a single metering point. This advanced load monitoring and disaggregation technique has the potential to provide an alternative solution to high-priced traditional sub-metering and enable innovative approaches for energy conservation, energy efficiency, and demand response. However, since the inception of the concept in the 1980s, evaluations of these technologies have focused on reporting performance accuracy without investigating sources of inaccuracies or fully understanding and articulating the meaning of the metrics used to quantify performance. As a result, the market for, as well as advances in, these technologies has been slowly maturing. To improve the market for these NILM technologies, there has to be confidence that the deployment will lead to benefits. In reality, not every end-user and application that this technology may enable requires the highest levels of performance accuracy to produce benefits. Also, there are other important characteristics that need to be considered, which may affect the appeal of NILM products to certain market targets (i.e. residential and commercial building consumers) and the suitability for particular applications. These characteristics include the following: 1) ease of use, the level of expertise/bandwidth required to properly use the product; 2) ease of installation, the level of expertise required to install along with hardware needs that impact product cost; and 3) ability to inform decisions and actions, whether the energy outputs received by end-users (e.g. third party applications, residential users, building operators, etc.) empower decisions and actions to be taken at time frames required for certain applications. Therefore, stakeholders, researchers, and other interested parties should be kept abreast of the evolving capabilities, uses, and characteristics

  1. Role of the disaggregase ClpB in processing of proteins aggregated as inclusion bodies.

    Science.gov (United States)

    Zblewska, Kamila; Krajewska, Joanna; Zolkiewski, Michal; Kędzierska-Mieszkowska, Sabina

    2014-08-01

    Overproduction of heterologous proteins in bacterial systems often results in the formation of insoluble inclusion bodies (IBs), which is a major impediment in biochemical research and biotechnology. In principle, the activity of molecular chaperones could be employed to gain control over the IB formation and to improve the recombinant protein yields, but the potential of each of the major bacterial chaperones (DnaK/J, GroEL/ES, and ClpB) to process IBs has not been fully established yet. We investigated the formation of inclusion bodies (IBs) of two aggregation-prone proteins, VP1LAC and VP1GFP, overproduced in Escherichia coli in the presence and absence of the chaperone ClpB. We found that both ClpB isoforms, ClpB95 and ClpB80, accumulated in E. coli cells during the production of IBs. The amount of IB proteins increased in the absence of ClpB. ClpB supported the resolubilization and reactivation of the aggregated VP1LAC and VP1GFP in E. coli cells. The IB disaggregation was optimal in the presence of both ClpB95 and ClpB80. Our results indicate an essential role of ClpB in controlling protein aggregation and inclusion body formation in bacteria. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Preparation of Amyloid Fibrils Seeded from Brain and Meninges.

    Science.gov (United States)

    Scherpelz, Kathryn P; Lu, Jun-Xia; Tycko, Robert; Meredith, Stephen C

    2016-01-01

    Seeding of amyloid fibrils into fresh solutions of the same peptide or protein in disaggregated form leads to the formation of replicate fibrils, with close structural similarity or identity to the original fibrillar seeds. Here we describe procedures for isolating fibrils composed mainly of β-amyloid (Aβ) from human brain and from leptomeninges, a source of cerebral blood vessels, for investigating Alzheimer's disease and cerebral amyloid angiopathy. We also describe methods for seeding isotopically labeled, disaggregated Aβ peptide solutions for study using solid-state NMR and other techniques. These methods should be applicable to other types of amyloid fibrils, to Aβ fibrils from mice or other species, tissues other than brain, and to some non-fibrillar aggregates. These procedures allow for the examination of authentic amyloid fibrils and other protein aggregates from biological tissues without the need for labeling the tissue.

  3. A Robust Threshold for Iterative Channel Estimation in OFDM Systems

    Directory of Open Access Journals (Sweden)

    A. Kalaycioglu

    2010-04-01

    Full Text Available A novel threshold computation method for pilot symbol assisted iterative channel estimation in OFDM systems is considered. As the bits are transmitted in packets, the proposed technique is based on calculating a particular threshold for each data packet in order to select the reliable decoder output symbols to improve the channel estimation performance. Iteratively, additional pilot symbols are established according to the threshold and the channel is re-estimated with the new pilots inserted to the known channel estimation pilot set. The proposed threshold calculation method for selecting additional pilots performs better than non-iterative channel estimation, no threshold and fixed threshold techniques in poor HF channel simulations.
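
    A simplified, hedged sketch of the general idea is given below: least-squares channel estimates at comb-type pilots are interpolated across the subcarriers, data symbols are equalised and hard-decided, and only decisions whose error metric beats a per-packet threshold are promoted to extra pilots for re-estimation. The channel model, QPSK mapping, noise level and mean-based threshold rule are all illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # subcarriers in one OFDM symbol
pilot_idx = np.arange(0, N, 8)            # comb-type pilot positions
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# synthetic 4-tap channel, transmitted symbols and noisy received symbols
h_time = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
H = np.fft.fft(h_time, N)
tx = qpsk[rng.integers(0, 4, N)]
tx[pilot_idx] = qpsk[0]                   # pilots carry a known symbol
rx = H * tx + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def interp_channel(idx, H_at_idx):
    re = np.interp(np.arange(N), idx, H_at_idx.real)
    im = np.interp(np.arange(N), idx, H_at_idx.imag)
    return re + 1j * im

# initial least-squares estimate from the pilots only
H_est = interp_channel(pilot_idx, rx[pilot_idx] / tx[pilot_idx])

for _ in range(3):                        # iterative refinement
    eq = rx / H_est
    decided = qpsk[np.argmin(np.abs(eq[:, None] - qpsk), axis=1)]
    metric = np.abs(eq - decided)         # decision reliability per subcarrier
    threshold = metric.mean()             # per-packet threshold (illustrative rule)
    reliable = np.flatnonzero(metric < threshold)
    ref = np.zeros(N, dtype=complex)
    ref[reliable] = decided[reliable]     # reliable decisions become virtual pilots
    ref[pilot_idx] = tx[pilot_idx]        # true pilots always take precedence
    idx = np.union1d(pilot_idx, reliable)
    H_est = interp_channel(idx, rx[idx] / ref[idx])

print(np.mean(np.abs(H_est - H)))         # average channel-estimation error
```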

  4. Disaggregating and mapping crop statistics using hypertemporal remote sensing

    Science.gov (United States)

    Khan, M. R.; de Bie, C. A. J. M.; van Keulen, H.; Smaling, E. M. A.; Real, R.

    2010-02-01

    Governments compile their agricultural statistics in tabular form by administrative area, which gives no clue to the exact locations where specific crops are actually grown. Such data are poorly suited for early warning and assessment of crop production. 10-Daily satellite image time series of Andalucia, Spain, acquired since 1998 by the SPOT Vegetation Instrument in combination with reported crop area statistics were used to produce the required crop maps. Firstly, the 10-daily (1998-2006) 1-km resolution SPOT-Vegetation NDVI-images were used to stratify the study area in 45 map units through an iterative unsupervised classification process. Each unit represents an NDVI-profile showing changes in vegetation greenness over time which is assumed to relate to the types of land cover and land use present. Secondly, the areas of NDVI-units and the reported cropped areas by municipality were used to disaggregate the crop statistics. Adjusted R-squares were 98.8% for rainfed wheat, 97.5% for rainfed sunflower, and 76.5% for barley. Relating statistical data on areas cropped by municipality with the NDVI-based unit map showed that the selected crops were significantly related to specific NDVI-based map units. Other NDVI-profiles did not relate to the studied crops and represented other types of land use or land cover. The results were validated by using primary field data. These data were collected by the Spanish government from 2001 to 2005 through grid sampling within agricultural areas; each grid (block) contains three 700 m × 700 m segments. The validation showed 68%, 31% and 23% variability explained (adjusted R-squares) between the three produced maps and the thousands of segment data. Mainly variability within the delineated NDVI-units caused relatively low values; the units are internally heterogeneous. Variability between units is properly captured. The maps must accordingly be considered "small scale maps". These maps can be used to monitor crop performance of
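
    The disaggregation step can be illustrated with a small regression sketch: reported crop area per municipality is explained as a non-negative combination of the areas of the NDVI units inside it, and the fitted coefficients act as per-unit crop fractions. All numbers below are synthetic placeholders, and the actual SPOT-Vegetation processing chain is not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_municipalities, n_units = 20, 5

# area (km^2) of each NDVI unit inside each municipality (assumed known from the unit map)
unit_areas = rng.uniform(0.0, 100.0, size=(n_municipalities, n_units))

# hidden "crop share" per unit, used only to generate synthetic statistics
true_fractions = np.array([0.8, 0.1, 0.0, 0.4, 0.05])
reported_crop_area = unit_areas @ true_fractions + rng.normal(0.0, 2.0, n_municipalities)

# non-negative least squares: crop area per municipality explained by unit areas
fractions, residual = nnls(unit_areas, reported_crop_area)
crop_area_map = unit_areas * fractions      # disaggregated crop area per unit and municipality
print(np.round(fractions, 2))
```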

  5. Modelling OAIS Compliance for Disaggregated Preservation Services

    Directory of Open Access Journals (Sweden)

    Gareth Knight

    2007-07-01

    Full Text Available The reference model for the Open Archival Information System (OAIS is well established in the research community as a method of modelling the functions of a digital repository and as a basis in which to frame digital curation and preservation issues. In reference to the 5th anniversary review of the OAIS, it is timely to consider how it may be interpreted by an institutional repository. The paper examines methods of sharing essential functions and requirements of an OAIS between two or more institutions, outlining the practical considerations of outsourcing. It also details the approach taken by the SHERPA DP Project to introduce a disaggregated service model for institutional repositories that wish to implement preservation services.

  6. Qualification tests and facilities for the ITER superconductors

    International Nuclear Information System (INIS)

    Bruzzone, P.; Wesche, R.; Stepanov, B.; Cau, F.; Bagnasco, M.; Calvi, M.; Herzog, R.; Vogel, M.

    2009-01-01

    All the ITER superconductors are tested as short length samples in the SULTAN test facility at CRPP. Twenty-four TF conductor samples with small layout variations were tested since February 2007 with the aim of verifying the design and qualification of the manufacturers. The sample assembly and the measurement techniques at CRPP are discussed. Starting in 2010, another test facility for ITER conductors, named EDIPO, will be operating at CRPP to share with SULTAN the load of the samples for the acceptance tests during the construction of ITER.

  7. Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling

    Directory of Open Access Journals (Sweden)

    Hyun-Joo Oh

    2017-01-01

    Full Text Available This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times in total and each time it was applied to the training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences repeatedly ten times. The ten landslide susceptibility maps were obtained from the integration of causative factors that assigned FR scores. The landslide susceptibility maps were validated by using each validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%; the landslide susceptibility accuracy of the FR method is therefore higher than 89%. Moreover, the ten iterations of FR modeling may contribute to a better understanding of a regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into the landslide susceptibility analysis, and the approach can also be extended to other areas.
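
    The frequency ratio score and the iterative random sampling loop are simple enough to sketch; below, a single synthetic categorical factor and a synthetic landslide inventory are used, the FR of each class is the ratio of its landslide share to its area share, and the per-cell scores are averaged over repeated random training samples. The thirteen real causative factors and the aerial-photograph inventory are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10_000
slope_class = rng.integers(0, 4, n_cells)                        # one categorical causative factor
landslide = rng.random(n_cells) < (0.02 + 0.03 * slope_class)    # synthetic landslide inventory

def frequency_ratio(factor, slides):
    """FR per class: landslide share divided by area share."""
    fr = {}
    for c in np.unique(factor):
        in_class = factor == c
        slide_share = slides[in_class].sum() / slides.sum()
        area_share = in_class.sum() / len(factor)
        fr[c] = slide_share / area_share
    return fr

scores = []
slide_idx = np.flatnonzero(landslide)
for _ in range(10):                                              # iterative random sampling
    train = rng.choice(slide_idx, size=len(slide_idx) // 2, replace=False)
    train_mask = np.zeros(n_cells, dtype=bool)
    train_mask[train] = True
    fr = frequency_ratio(slope_class, train_mask)
    scores.append(np.array([fr[c] for c in slope_class]))        # per-cell FR score

susceptibility = np.mean(scores, axis=0)                         # averaged over the ten runs
print(np.round(np.unique(susceptibility), 2))                    # one value per slope class
```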

  8. Repair of manufacturing defects in the armor of plasma facing units of the ITER Divertor Dome

    International Nuclear Information System (INIS)

    Litunovsky, Nikolay; Alekseenko, Evgeny; Kuznetsov, Vladimir; Lyanzberg, Dmitriy; Makhankov, Aleksey; Rulev, Roman

    2013-01-01

    Highlights: • Sporadic manufacturing defects in ITER Divertor Dome PFUs may be repaired. • We have developed a repair technique for ITER Divertor Dome PFUs. • Armor repair technique for ITER Divertor Dome PFUs is successfully tested. -- Abstract: The paper describes the repair procedure developed for removal of manufacturing defects occurring sporadically during armoring of plasma facing units (PFUs) of the ITER Divertor Dome. Availability of armor repair technique is prescribed by the procurement arrangement for the ITER Divertor Dome concluded in 2009 between the ITER Organization and the ITER Domestic Agency of Russia. The paper presents the detailed description of the procedure, data on its effect on the joints of the rest part of the armor and on the grain structure of the PFU heat sink. The results of thermocycling of large-scale Dome PFU mock-ups manufactured with demonstration of armor repair are also given

  9. Repair of manufacturing defects in the armor of plasma facing units of the ITER Divertor Dome

    Energy Technology Data Exchange (ETDEWEB)

    Litunovsky, Nikolay, E-mail: nlitunovsky@sintez.niiefa.spb.su; Alekseenko, Evgeny; Kuznetsov, Vladimir; Lyanzberg, Dmitriy; Makhankov, Aleksey; Rulev, Roman

    2013-10-15

    Highlights: • Sporadic manufacturing defects in ITER Divertor Dome PFUs may be repaired. • We have developed a repair technique for ITER Divertor Dome PFUs. • Armor repair technique for ITER Divertor Dome PFUs is successfully tested. -- Abstract: The paper describes the repair procedure developed for removal of manufacturing defects occurring sporadically during armoring of plasma facing units (PFUs) of the ITER Divertor Dome. Availability of armor repair technique is prescribed by the procurement arrangement for the ITER Divertor Dome concluded in 2009 between the ITER Organization and the ITER Domestic Agency of Russia. The paper presents the detailed description of the procedure, data on its effect on the joints of the rest part of the armor and on the grain structure of the PFU heat sink. The results of thermocycling of large-scale Dome PFU mock-ups manufactured with demonstration of armor repair are also given.

  10. Disaggregating radar-derived rainfall measurements in East Azarbaijan, Iran, using a spatial random-cascade model

    Science.gov (United States)

    Fouladi Osgouei, Hojjatollah; Zarghami, Mahdi; Ashouri, Hamed

    2017-07-01

    The availability of spatial, high-resolution rainfall data is one of the most essential needs in the study of water resources. These data are extremely valuable in providing flood awareness for dense urban and industrial areas. The first part of this paper applies an optimization-based method to the calibration of radar data based on ground rainfall gauges. Then, the climatological Z-R relationship for the Sahand radar, located in the East Azarbaijan province of Iran, is obtained with the help of three adjacent rainfall stations. The new climatological Z-R relationship with a power-law form shows acceptable statistical performance, making it suitable for radar-rainfall estimation by the Sahand radar outputs. The second part of the study develops a new heterogeneous random-cascade model for spatially disaggregating the rainfall data resulting from the power-law model. This model is applied to the radar-rainfall image data to disaggregate rainfall data with a coverage area of 512 × 512 km2 to a resolution of 32 × 32 km2. Results show that the proposed model has a good ability to disaggregate rainfall data, which may lead to improvement in precipitation forecasting, and ultimately better water-resources management in this arid region, including Urmia Lake.
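
    To illustrate the general idea of spatial disaggregation by a multiplicative random cascade (not the specific heterogeneous model of this record), the sketch below splits each coarse cell into 2 × 2 sub-cells with random weights whose mean is one, so the coarse-cell rainfall depth is preserved; four cascade levels take a 512 km cell down to 32 km cells. The gamma weight generator and all numbers are illustrative assumptions.

```python
import numpy as np

def cascade_step(field, rng):
    """Split every cell of a 2-D depth field into 2x2 sub-cells whose random
    weights average one, so the parent cell's mean depth is preserved."""
    ny, nx = field.shape
    fine = np.empty((2 * ny, 2 * nx))
    for j in range(ny):
        for i in range(nx):
            w = rng.gamma(shape=1.0, size=4)   # illustrative weight generator
            w *= 4.0 / w.sum()                 # micro-canonical normalisation
            fine[2 * j:2 * j + 2, 2 * i:2 * i + 2] = field[j, i] * w.reshape(2, 2)
    return fine

rng = np.random.default_rng(0)
field = np.full((1, 1), 12.0)                  # 12 mm over one 512-km cell
for _ in range(4):                             # 512 -> 256 -> 128 -> 64 -> 32 km
    field = cascade_step(field, rng)
print(field.shape, field.mean())               # (16, 16) cells, mean depth still 12 mm
```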

  11. Project management techniques used in the European Vacuum Vessel sectors procurement for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Losasso, Marcello, E-mail: marcello.losasso@f4e.europa.eu [Fusion for Energy (F4E), Barcelona (Spain); Ortiz de Zuniga, Maria; Jones, Lawrence; Bayon, Angel; Arbogast, Jean-Francois; Caixas, Joan; Fernandez, Jose; Galvan, Stefano; Jover, Teresa [Fusion for Energy (F4E), Barcelona (Spain); Ioki, Kimihiro [ITER Organisation, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Lewczanin, Michal; Mico, Gonzalo; Pacheco, Jose Miguel [Fusion for Energy (F4E), Barcelona (Spain); Preble, Joseph [ITER Organisation, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Stamos, Vassilis; Trentea, Alexandru [Fusion for Energy (F4E), Barcelona (Spain)

    2012-08-15

    Highlights: • File name contains the directory tree structure with a string of three-letter acronyms, thereby enabling parent directory location when confronted with orphan files. • The management of the procurement procedure was carried out in an efficient and timely manner, achieving precisely the contract placement date foreseen at the start of the process. • The contract start-up has been effectively implemented and a flexible project management system has been put in place for an efficient monitoring of the contract. - Abstract: The contract for the seven European Sectors of the ITER Vacuum Vessel (VV) was placed at the end of 2010 with a consortium of three Italian companies. The task of placing and the initial take-off of this large and complex contract, one of the largest placed by F4E, the European Domestic Agency for ITER, is described. A stringent quality controlled system with a bespoke Vacuum Vessel Project Lifecycle Management system to control the information flow, based on ENOVIA SmarTeam, was developed to handle the storage and approval of Documentation including links to the F4E Vacuum Vessel system and ITER International Organization System interfaces. The VV Sector design and manufacturing schedule is based on Primavera software, which is cost loaded thus allowing F4E to carry out performance measurement with respect to its payments and commitments. This schedule is then integrated into the overall Vacuum Vessel schedule, which includes ancillary activities such as instruments, preliminary design and analysis. The VV Sector Risk Management included three separate risk analyses from F4E and the bidders, utilizing two different methodologies. These efforts will lead to an efficient and effective implementation of this contract, vital to the success of the ITER machine, since the Vacuum Vessel is the biggest single work package of Europe's contribution to ITER and

  12. Neutronics experiments, radiation detectors and nuclear techniques development in the EU in support of the TBM design for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Angelone, M., E-mail: maurizio.angelone@enea.it [ENEA UT-FUS C.R. Frascati, via E. Fermi, 45-00044 Frascati (Italy); Fischer, U. [Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Flammini, D. [ENEA UT-FUS C.R. Frascati, via E. Fermi, 45-00044 Frascati (Italy); Jodlowski, P. [AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakow (Poland); Klix, A. [Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Kodeli, I. [Jožef Stefan Institute, Ljubljana (Slovenia); Kuc, T. [AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakow (Poland); Leichtle, D. [Fusion for Energy, C/Josep Pla 2, Torres Diagonal Litoral B3, 08019 Barcelona (Spain); Lilley, S. [Culham Centre for Fusion Energy, Culham, OX14 3DB (United Kingdom); Majerle, M.; Novák, J. [Nuclear Physics Institute of the ASCR, Řež 130, 250 68 Řež (Czech Republic); Ostachowicz, B. [AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakow (Poland); Packer, L.W. [Culham Centre for Fusion Energy, Culham, OX14 3DB (United Kingdom); Pillon, M. [ENEA UT-FUS C.R. Frascati, via E. Fermi, 45-00044 Frascati (Italy); Pohorecki, W. [AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakow (Poland); Radulović, V. [Jožef Stefan Institute, Ljubljana (Slovenia); Šimečková, E. [Nuclear Physics Institute of the ASCR, Řež 130, 250 68 Řež (Czech Republic); and others

    2015-10-15

    Highlights: • A number of experiments and tests are ongoing to develop detectors and methods for the HCLL and HCPM ITER-TBM. • Experiments for measuring gas production relevant to IFMIF are also performed using a cyclotron. • A benchmark experiment with a Cu block is performed to validate copper cross sections. • Experimental techniques to measure tritium in the TBM are presented. • Experimental verification of activation cross sections for a Neutron Activation System for the TBM is addressed. - Abstract: The development of high quality nuclear data, radiation detectors and instrumentation techniques for fusion technology applications in Europe is supported by Fusion for Energy (F4E) and conducted in a joint and collaborative effort by several European research associations (ENEA, KIT, JSI, NPI, AGH, and CCFE) joined to form the “Consortium on Nuclear Data Studies/Experiments in Support of TBM Activities”. This paper presents the neutronics activities carried out by the Consortium, and a selection of available results is presented. Among them are a benchmark experiment on a pure copper block to study the Cu cross sections at neutron energies relevant to fusion; the fabrication of prototype neutron detectors (artificial diamond and self-powered detectors) able to withstand harsh environments and temperatures >200 °C, developed for operation in the ITER-TBM; measurements of relevant activation and integral gas-production cross sections, the latter at neutron energies relevant to IFMIF (>14 MeV); and the development of innovative experimental techniques for tritium measurement in the TBM.

  13. The Behaviour of Disaggregated Public Expenditures and Income in Malaysia

    OpenAIRE

    Tang, Chor-Foon; Lau, Evan

    2011-01-01

    The present study attempts to re-investigate the behaviour of disaggregated public expenditure data and national income for Malaysia. This study covers the sample period of annual data from 1960 to 2007. The Bartlett-corrected trace tests proposed by Johansen (2002) were used to ascertain the presence of a long-run equilibrium relationship between public expenditures and national income. The results show one cointegrating vector for each specification of public expenditures. The relatively new...

  14. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Science.gov (United States)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
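
    As a generic illustration of the model-based iterative reconstruction idea (not the acoustic implementation described in this record), the sketch below minimises a quadratic data-fidelity term plus a first-difference smoothness prior by gradient descent; the linear operator A stands in for the forward model, and the step size, prior weight and iteration count are illustrative assumptions.

```python
import numpy as np

def mbir(A, y, beta=0.1, step=1e-3, iters=500):
    """Minimise 0.5*||A x - y||^2 + 0.5*beta*||D x||^2 by gradient descent,
    where D is a first-difference operator acting as a simple smoothness prior."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference matrix
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + beta * (D.T @ (D @ x))
        x -= step * grad
    return x

# Example with a random forward operator standing in for the acoustic model.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
x_true = np.linspace(0.0, 1.0, 20)
x_hat = mbir(A, A @ x_true, beta=0.05, step=1e-3, iters=2000)
print(np.round(x_hat - x_true, 2))          # small residual error
```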

  15. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  16. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  17. Invariances and diversities in the evolution of manufacturing industries

    NARCIS (Netherlands)

    Bottazzi, Giulio; Cefis, E.; Dosi, Giovanni; Secchi, Angelo

    2003-01-01

    In this work we explore some basic properties of the size distributions of firms and of their growth processes both at aggregate and disaggregate levels. First, we investigate which properties of firms' size distributions and growth dynamics are robust under disaggregation. Second, at a disaggregate

  18. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions… and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate… multidimensional data analysis that is considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions)…

  19. Threshold partitioning of sparse matrices and applications to Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hwajeong; Szyld, D.B. [Temple Univ., Philadelphia, PA (United States)

    1996-12-31

    It is well known that the order of the variables and equations of a large, sparse linear system influences the performance of classical iterative methods. In particular if, after a symmetric permutation, the blocks in the diagonal have more nonzeros, classical block methods have a faster asymptotic rate of convergence. In this paper, different ordering and partitioning algorithms for sparse matrices are presented. They are modifications of PABLO. In the new algorithms, in addition to the location of the nonzeros, the values of the entries are taken into account. The matrix resulting after the symmetric permutation has dense blocks along the diagonal, and small entries in the off-diagonal blocks. Parameters can be easily adjusted to obtain, for example, denser blocks, or blocks with elements of larger magnitude. In particular, when the matrices represent Markov chains, the permuted matrices are well suited for block iterative methods that find the corresponding probability distribution. Applications to three types of methods are explored: (1) Classical block methods, such as Block Gauss Seidel. (2) Preconditioned GMRES, where a block diagonal preconditioner is used. (3) Iterative aggregation method (also called aggregation/disaggregation) where the partition obtained from the ordering algorithm with certain parameters is used as an aggregation scheme. In all three cases, experiments are presented which illustrate the performance of the methods with the new orderings. The complexity of the new algorithms is linear in the number of nonzeros and the order of the matrix, and thus adding little computational effort to the overall solution.
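
    For readers unfamiliar with the aggregation/disaggregation scheme mentioned in item (3), the sketch below shows one common form of the iteration for a row-stochastic matrix P: build a small coupling matrix between the groups weighted by the current estimate, solve the aggregated chain, rescale the within-group probabilities, and apply one power-iteration smoothing step. The partition, tolerances and the power iteration used on the coupled chain are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def iad_stationary(P, groups, tol=1e-12, max_iter=500):
    """Iterative aggregation/disaggregation sketch for pi = pi P, sum(pi) = 1.
    `groups` is a list of index lists partitioning the states."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        # Aggregation: coupling matrix between groups, weighted by the current pi.
        K = len(groups)
        C = np.zeros((K, K))
        for I, gI in enumerate(groups):
            w = pi[gI] / pi[gI].sum()
            for J, gJ in enumerate(groups):
                C[I, J] = w @ P[np.ix_(gI, gJ)].sum(axis=1)
        # Stationary distribution of the small aggregated chain (power iteration).
        xi = np.full(K, 1.0 / K)
        for _ in range(200):
            xi = xi @ C
            xi /= xi.sum()
        # Disaggregation plus one smoothing step on the full chain.
        z = np.empty(n)
        for I, gI in enumerate(groups):
            z[gI] = xi[I] * pi[gI] / pi[gI].sum()
        new_pi = z @ P
        new_pi /= new_pi.sum()
        if np.abs(new_pi - pi).sum() < tol:
            return new_pi
        pi = new_pi
    return pi

P = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.3, 0.7]])
print(iad_stationary(P, groups=[[0, 1], [2]]))   # approx. [0.6, 0.3, 0.1]
```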

  20. Overview of physics basis for ITER

    International Nuclear Information System (INIS)

    Mukhovatov, V; Shimada, M; Chudnovskiy, A N; Costley, A E; Gribov, Y; Federici, G; Kardaun, O; Kukushkin, A S; Polevoi, A; Pustovitov, V D; Shimomura, Y; Sugie, T; Sugihara, M; Vayakis, G

    2003-01-01

    ITER will be the first magnetic confinement device with burning DT plasma and fusion power of about 0.5 GW. Parameters of ITER plasma have been predicted using methodologies summarized in the ITER Physics Basis (1999 Nucl. Fusion 39 2175). During the past few years, new results have been obtained that substantiate confidence in achieving Q>=10 in ITER with inductive H-mode operation. These include achievement of a good H-mode confinement near the Greenwald density at high triangularity of the plasma cross section; improvements in theory-based confinement projections for the core plasma, even though further studies are needed for understanding the transport near the plasma edge; improvement in helium ash removal due to the elastic collisions of He atoms with D/T ions in the divertor predicted by modelling; demonstration of feedback control of neoclassical tearing modes and resultant improvement in the achievable beta values; better understanding of edge localized mode (ELM) physics and development of ELM mitigation techniques; and demonstration of mitigation of plasma disruptions. ITER will have the flexibility to operate also in steady-state and intermediate (hybrid) regimes. The 'advanced tokamak' regimes with weak or negative central magnetic shear and internal transport barriers are considered as potential scenarios for steady-state operation. The paper concentrates on inductively driven plasma performance and discusses requirements for steady-state operation in ITER.

  1. The added value of stochastic spatial disaggregation for short-term rainfall forecasts currently available in Canada

    Science.gov (United States)

    Gagnon, Patrick; Rousseau, Alain N.; Charron, Dominique; Fortin, Vincent; Audet, René

    2017-11-01

    Several businesses and industries rely on rainfall forecasts to support their day-to-day operations. To deal with the uncertainty associated with rainfall forecasts, some meteorological organisations have developed products, such as ensemble forecasts. However, due to the intensive computational requirements of ensemble forecasts, the spatial resolution remains coarse. For example, Environment and Climate Change Canada's (ECCC) Global Ensemble Prediction System (GEPS) data is freely available on a 1-degree grid (about 100 km), while those of the so-called High Resolution Deterministic Prediction System (HRDPS) are available on a 2.5-km grid (about 40 times finer). Potential users are then left with the option of using either a high-resolution rainfall forecast without uncertainty estimation and/or an ensemble with a spectrum of plausible rainfall values, but at a coarser spatial scale. The objective of this study was to evaluate the added value of coupling the Gibbs Sampling Disaggregation Model (GSDM) with ECCC products to provide accurate, precise and consistent rainfall estimates at a fine spatial resolution (10 km) within a forecast framework (6 h). For 30 6-h rainfall events occurring within a 40,000-km2 area (Québec, Canada), results show that, using 100-km aggregated reference rainfall depths as input, statistics of the rainfall fields generated by GSDM were close to those of the 10-km reference field. However, in forecast mode, GSDM outcomes inherit the ECCC forecast biases, resulting in poor performance when GEPS data were used as input, mainly due to the inherent rainfall depth distribution of the latter product. Better performance was achieved when the Regional Deterministic Prediction System (RDPS), available on a 10-km grid and aggregated to 100 km, was used as input to GSDM. Nevertheless, most of the analyzed ensemble forecasts were weakly consistent. Some areas of improvement are identified herein.

  2. Soil map disaggregation improved by soil-landscape relationships, area-proportional sampling and random forest implementation

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan

    Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach… is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according… …implementation generally improved the algorithm's ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful…

  3. Synthesis of non-aggregated nicotinic acid coated magnetite nanorods via hydrothermal technique

    Energy Technology Data Exchange (ETDEWEB)

    Attallah, Olivia A., E-mail: olivia.adly@hu.edu.eg [Center of Nanotechnology, Nile University, 12677 Giza (Egypt); Pharmaceutical Chemistry Department, Heliopolis University, 11777 El Salam, Cairo (Egypt); Girgis, E. [Solid State Physics Department, National Research Center, 12622 Dokki, Giza (Egypt); Advanced Materials and Nanotechnology Lab, CEAS, National Research Center, 12622 Dokki, Giza (Egypt); Abdel-Mottaleb, Mohamed M.S.A. [Center of Nanotechnology, Nile University, 12677 Giza (Egypt)

    2016-02-01

    Non-aggregated magnetite nanorods with average diameters of 20–30 nm and lengths of up to 350 nm were synthesized via an in situ, template-free hydrothermal technique. These nanorods, capped with different concentrations (1, 1.5, 2 and 2.5 g) of nicotinic acid (vitamin B3), possessed good magnetic properties and easy dispersion in aqueous solutions. Our new synthesis technique maintained the uniform shape of the nanorods even when the coating material concentration was increased. The effect of nicotinic acid on the shape, particle size, chemical structure and magnetic properties of the prepared nanorods was evaluated using different characterization methods. The length of the nanorods increased from 270 nm to 350 nm in the nicotinic acid coated nanorods. Goethite and magnetite phases in different ratios were the dominant phases in the coated samples, while a pure magnetite phase was observed in the uncoated one. Nicotinic acid coated magnetic nanorods showed a significant decrease in saturation magnetization compared with the uncoated samples (55 emu/g), reaching 4 emu/g in the 2.5 g nicotinic acid coated sample. The novel synthesis technique demonstrated its potential for preparing coated metal oxides with a one-dimensional nanostructure which can function effectively in different biological applications. - Highlights: • We synthesize nicotinic acid coated magnetite nanorods via a hydrothermal technique. • The effect of nicotinic acid concentration on the nanorod properties was significant. • The nanorods maintained a uniform shape with increased concentrations of nicotinic acid. • Alterations occurred in the particle size, mineral phases and magnetic properties of the coated samples.

  4. Synthesis of non-aggregated nicotinic acid coated magnetite nanorods via hydrothermal technique

    International Nuclear Information System (INIS)

    Attallah, Olivia A.; Girgis, E.; Abdel-Mottaleb, Mohamed M.S.A.

    2016-01-01

    Non-aggregated magnetite nanorods with average diameters of 20–30 nm and lengths of up to 350 nm were synthesized via an in situ, template-free hydrothermal technique. These nanorods, capped with different concentrations (1, 1.5, 2 and 2.5 g) of nicotinic acid (vitamin B3), possessed good magnetic properties and easy dispersion in aqueous solutions. Our new synthesis technique maintained the uniform shape of the nanorods even when the coating material concentration was increased. The effect of nicotinic acid on the shape, particle size, chemical structure and magnetic properties of the prepared nanorods was evaluated using different characterization methods. The length of the nanorods increased from 270 nm to 350 nm in the nicotinic acid coated nanorods. Goethite and magnetite phases in different ratios were the dominant phases in the coated samples, while a pure magnetite phase was observed in the uncoated one. Nicotinic acid coated magnetic nanorods showed a significant decrease in saturation magnetization compared with the uncoated samples (55 emu/g), reaching 4 emu/g in the 2.5 g nicotinic acid coated sample. The novel synthesis technique demonstrated its potential for preparing coated metal oxides with a one-dimensional nanostructure which can function effectively in different biological applications. - Highlights: • We synthesize nicotinic acid coated magnetite nanorods via a hydrothermal technique. • The effect of nicotinic acid concentration on the nanorod properties was significant. • The nanorods maintained a uniform shape with increased concentrations of nicotinic acid. • Alterations occurred in the particle size, mineral phases and magnetic properties of the coated samples.

  5. Iterative solution of large linear systems

    CERN Document Server

    Young, David Matheson

    1971-01-01

    This self-contained treatment offers a systematic development of the theory of iterative methods. Its focal point resides in an analysis of the convergence properties of the successive overrelaxation (SOR) method, as applied to a linear system with a consistently ordered matrix. The text explores the convergence properties of the SOR method and related techniques in terms of the spectral radii of the associated matrices as well as in terms of certain matrix norms. Contents include a review of matrix theory and general properties of iterative methods; the SOR method and stationary modified SOR methods...
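
    Since the successive overrelaxation method is central to this text, a minimal sketch of the iteration follows; the relaxation factor, tolerance and the small example system are assumptions chosen only for illustration (omega = 1 recovers Gauss-Seidel).

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Solve A x = b with the successive overrelaxation (SOR) sweep."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Example: a small symmetric positive-definite system.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(sor(A, b))
```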

  6. A sparse electromagnetic imaging scheme using nonlinear Landweber iterations

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2015-01-01

    Development and use of electromagnetic inverse scattering techniques for imaging sparse domains have been on the rise following the recent advancements in solving sparse optimization problems. Existing techniques rely on iteratively converting

  7. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)

    2014-11-15

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  8. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide; Sakamoto, Makoto; Watanabe, Takashi; Iwata, Naoki; Kishimoto, Junichi; Kaminou, Toshio

    2014-01-01

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  9. Comparison of iterative model, hybrid iterative, and filtered back projection reconstruction techniques in low-dose brain CT: impact of thin-slice imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nakaura, Takeshi; Iyama, Yuji; Kidoh, Masafumi; Yokoyama, Koichi [Amakusa Medical Center, Diagnostic Radiology, Amakusa, Kumamoto (Japan); Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Oda, Seitaro; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Tokuyasu, Shinichi [Philips Electronics, Kumamoto (Japan); Harada, Kazunori [Amakusa Medical Center, Department of Surgery, Kumamoto (Japan)

    2016-03-15

    The purpose of this study was to evaluate the utility of iterative model reconstruction (IMR) in brain CT especially with thin-slice images. This prospective study received institutional review board approval, and prior informed consent to participate was obtained from all patients. We enrolled 34 patients who underwent brain CT and reconstructed axial images with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and IMR with 1 and 5 mm slice thicknesses. The CT number, image noise, contrast, and contrast noise ratio (CNR) between the thalamus and internal capsule, and the rate of increase of image noise in 1 and 5 mm thickness images between the reconstruction methods, were assessed. Two independent radiologists assessed image contrast, image noise, image sharpness, and overall image quality on a 4-point scale. The CNRs in 1 and 5 mm slice thickness were significantly higher with IMR (1.2 ± 0.6 and 2.2 ± 0.8, respectively) than with FBP (0.4 ± 0.3 and 1.0 ± 0.4, respectively) and HIR (0.5 ± 0.3 and 1.2 ± 0.4, respectively) (p < 0.01). The mean rate of increasing noise from 5 to 1 mm thickness images was significantly lower with IMR (1.7 ± 0.3) than with FBP (2.3 ± 0.3) and HIR (2.3 ± 0.4) (p < 0.01). There were no significant differences in qualitative analysis of unfamiliar image texture between the reconstruction techniques. IMR offers significant noise reduction and higher contrast and CNR in brain CT, especially for thin-slice images, when compared to FBP and HIR. (orig.)

  10. New developments in iterated rounding

    NARCIS (Netherlands)

    Bansal, N.; Raman, V.; Suresh, S.P.

    2014-01-01

    Iterated rounding is a relatively recent technique in algorithm design, that despite its simplicity has led to several remarkable new results and also simpler proofs of many previous results. We will briefly survey some applications of the method, including some recent developments and giving a high

  11. Load Disaggregation via Pattern Recognition: A Feasibility Study of a Novel Method in Residential Building

    Directory of Open Access Journals (Sweden)

    Younghoon Kwak

    2018-04-01

    Full Text Available In response to the need to improve energy-saving processes in older buildings, especially residential ones, this paper describes the potential of a novel method of disaggregating loads based on the load patterns of household appliances determined in residential buildings. Experiments were designed to be applicable to general residential buildings, and four types of commonly used appliances were selected to verify the method. The method assumes that the loads to be disaggregated are measured by a single primary meter. Following the metering of household appliances and an analysis of the usage patterns of each type, values of electric current were entered into a Hidden Markov Model (HMM) to formulate predictions. Thereafter, the HMM was run repeatedly to bring the predicted data close to the measured data, while the errors between the predicted and measured data were evaluated to determine whether they met the tolerance. When the method was examined over 4 days, the matching rates of the load disaggregation outcomes for the household appliances (i.e., laptop, refrigerator, TV, and microwave) were 0.994, 0.992, 0.982, and 0.988, respectively. The proposed method can provide insights into how and where energy is consumed within such buildings. As a result, effective and systematic energy-saving measures can be derived even in buildings in which monitoring sensors and measurement equipment are not installed.
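
    As a toy illustration of decoding a single appliance's on/off state from an aggregate current trace with a hidden Markov model (not the authors' implementation or their tolerance-checking loop), the sketch below uses a two-state Gaussian-emission HMM and Viterbi decoding; the current levels, transition probabilities and baseline are assumed values for demonstration only.

```python
import numpy as np

states = ["off", "on"]
level = np.array([0.0, 1.2])                    # assumed appliance current draw (A)
log_start = np.log([0.9, 0.1])
log_trans = np.log([[0.95, 0.05], [0.10, 0.90]])

def log_emission(obs, baseline=0.3, sigma=0.2):
    # Gaussian likelihood of the aggregate reading given baseline + state level.
    mu = baseline + level
    return -0.5 * ((obs - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def viterbi(observations):
    T, S = len(observations), len(states)
    delta = np.zeros((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_start + log_emission(observations[0])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emission(observations[t])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                    # backtrack the best path
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi(np.array([0.3, 0.35, 1.5, 1.6, 0.4])))  # off, off, on, on, off
```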

  12. Differential Targeting of Hsp70 Heat Shock Proteins HSPA6 and HSPA1A with Components of a Protein Disaggregation/Refolding Machine in Differentiated Human Neuronal Cells following Thermal Stress

    Directory of Open Access Journals (Sweden)

    Ian R. Brown

    2017-04-01

    Full Text Available Heat shock proteins (Hsps) co-operate in multi-protein machines that counter protein misfolding and aggregation and involve DNAJ (Hsp40), HSPA (Hsp70), and HSPH (Hsp105α). The HSPA family is a multigene family composed of inducible and constitutively expressed members. Inducible HSPA6 (Hsp70B') is found in the human genome but not in the genomes of mouse and rat. To advance knowledge of this little studied HSPA member, the targeting of HSPA6 to stress-sensitive neuronal sites with components of a disaggregation/refolding machine was investigated following thermal stress. HSPA6 targeted the periphery of nuclear speckles (perispeckles) that have been characterized as sites of transcription. However, HSPA6 did not co-localize at perispeckles with DNAJB1 (Hsp40-1) or HSPH1 (Hsp105α). At 3 h after heat shock, HSPA6 co-localized with these members of the disaggregation/refolding machine at the granular component (GC) of the nucleolus. Inducible HSPA1A (Hsp70-1) and constitutively expressed HSPA8 (Hsc70) co-localized at nuclear speckles with components of the machine immediately after heat shock, and at the GC layer of the nucleolus at 1 h with DNAJA1 and BAG-1. These results suggest that HSPA6 exhibits targeting features that are not apparent for HSPA1A and HSPA8.

  13. Iterative Decomposition of Water and Fat with Echo Asymmetric and Least-Squares Estimation (IDEAL) (Reeder et al. 2005) Automated Spine Survey Iterative Scan Technique (ASSIST) (Weiss et al. 2006)

    Directory of Open Access Journals (Sweden)

    Kenneth L. Weiss

    2008-01-01

    Full Text Available Background and Purpose: Multi-parametric MRI of the entire spine is technologist-dependent, time consuming, and often limited by inhomogeneous fat suppression. We tested a technique to provide rapid automated total spine MRI screening with improved tissue contrast through optimized fat-water separation. Methods: The entire spine was auto-imaged in two contiguous 35 cm field of view (FOV) sagittal stations, utilizing out-of-phase fast gradient echo (FGRE) and T1 and/or T2 weighted fast spin echo (FSE) IDEAL (Iterative Decomposition of Water and Fat with Echo Asymmetric and Least-squares Estimation) sequences. 18 subjects were studied, one twice at 3.0T (pre and post contrast) and one at both 1.5T and 3.0T, for a total of 20 spine examinations (8 at 1.5T and 12 at 3.0T). Images were independently evaluated by two neuroradiologists and run through Automated Spine Survey Iterative Scan Technique (ASSIST) analysis software for automated vertebral numbering. Results: In all 20 total spine studies, neuroradiologist and computer ASSIST labeling were concordant. In all cases, IDEAL provided uniform fat and water separation throughout the entire 70 cm FOV imaged. Two subjects demonstrated breast metastases and one had a large presumptive schwannoma. 14 subjects demonstrated degenerative disc disease with associated Modic Type I or II changes at one or more levels. FGRE ASSIST afforded subminute, submillimeter in-plane resolution of the entire spine with high contrast between discs and vertebrae at both 1.5 and 3.0T. Marrow signal abnormalities could be particularly well characterized with IDEAL-derived images and parametric maps. Conclusion: IDEAL ASSIST is a promising MRI technique affording a rapid automated high resolution, high contrast survey of the entire spine with optimized tissue characterization.

  14. Influence of iterative image reconstruction on CT-based calcium score measurements

    NARCIS (Netherlands)

    van Osch, Jochen A. C.; Mouden, Mohamed; van Dalen, Jorn A.; Timmer, Jorik R.; Reiffers, Stoffer; Knollema, Siert; Greuter, Marcel J. W.; Ottervanger, Jan Paul; Jager, Piet L.

    Iterative reconstruction techniques for coronary CT angiography have been introduced as an alternative to traditional filtered back projection (FBP) to reduce image noise, allowing improved image quality and a potential for dose reduction. However, the impact of iterative reconstruction on the

  15. iHadoop: Asynchronous Iterations Support for MapReduce

    KAUST Repository

    Elnikety, Eslam

    2011-08-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This thesis also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  16. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in brain CT

    International Nuclear Information System (INIS)

    Ren, Qingguo; Dewan, Sheilesh Kumar; Li, Ming; Li, Jianying; Mao, Dingbiao; Wang, Zhenglei; Hua, Yanqing

    2012-01-01

    Purpose: To compare image quality and visualization of normal structures and lesions in brain computed tomography (CT) with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) reconstruction techniques at different X-ray tube current–time products. Materials and methods: In this IRB-approved prospective study, forty patients (nineteen men, twenty-one women; mean age 69.5 ± 11.2 years) received brain scans at different tube current–time products (300 and 200 mAs) on a 64-section multi-detector CT (GE, Discovery CT750 HD). Images were reconstructed with FBP and four levels of ASIR-FBP blending. Two radiologists (please note that our hospital is renowned for its geriatric medicine department, and these two radiologists are more experienced in chronic cerebral vascular disease than in neoplastic disease, so this research did not include cerebral tumors; this limitation is addressed in the discussion) assessed all the reconstructed images for visibility of normal structures, lesion conspicuity, image contrast and diagnostic confidence in a blinded and randomized manner. Volume CT dose index (CTDIvol) and dose-length product (DLP) were recorded. All the data were analyzed by using SPSS 13.0 statistical analysis software. Results: There was no statistically significant difference between the image quality at 200 mAs with the 50% ASIR blending technique and at 300 mAs with the FBP technique (p > .05), while a statistically significant difference (p < .05) was found between the image quality at 200 mAs with FBP and at 300 mAs with FBP. Conclusion: ASIR provided the same image quality and diagnostic ability in brain imaging with a greater than 30% dose reduction compared with the FBP reconstruction technique.

  17. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in brain CT

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Qingguo, E-mail: renqg83@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Dewan, Sheilesh Kumar, E-mail: sheilesh_d1@hotmail.com [Department of Geriatrics, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Ming, E-mail: minli77@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Jianying, E-mail: Jianying.Li@med.ge.com [CT Imaging Research Center, GE Healthcare China, Beijing (China); Mao, Dingbiao, E-mail: maodingbiao74@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Wang, Zhenglei, E-mail: Williswang_doc@yahoo.com.cn [Department of Radiology, Shanghai Electricity Hospital, Shanghai 200050 (China); Hua, Yanqing, E-mail: cjr.huayanqing@vip.163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China)

    2012-10-15

    Purpose: To compare image quality and visualization of normal structures and lesions in brain computed tomography (CT) with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) reconstruction techniques at different X-ray tube current–time products. Materials and methods: In this IRB-approved prospective study, forty patients (nineteen men, twenty-one women; mean age 69.5 ± 11.2 years) received brain scans at different tube current–time products (300 and 200 mAs) on a 64-section multi-detector CT (GE, Discovery CT750 HD). Images were reconstructed with FBP and four levels of ASIR-FBP blending. Two radiologists (please note that our hospital is renowned for its geriatric medicine department, and these two radiologists are more experienced in chronic cerebral vascular disease than in neoplastic disease, so this research did not include cerebral tumors; this limitation is addressed in the discussion) assessed all the reconstructed images for visibility of normal structures, lesion conspicuity, image contrast and diagnostic confidence in a blinded and randomized manner. Volume CT dose index (CTDIvol) and dose-length product (DLP) were recorded. All the data were analyzed by using SPSS 13.0 statistical analysis software. Results: There was no statistically significant difference between the image quality at 200 mAs with the 50% ASIR blending technique and at 300 mAs with the FBP technique (p > .05), while a statistically significant difference (p < .05) was found between the image quality at 200 mAs with FBP and at 300 mAs with FBP. Conclusion: ASIR provided the same image quality and diagnostic ability in brain imaging with a greater than 30% dose reduction compared with the FBP reconstruction technique.

  18. Development of a Disaggregation Framework toward the Estimation of Subdaily Reference Evapotranspiration: 2- Estimation of Subdaily Reference Evapotranspiration Using Disaggregated Weather Data

    Directory of Open Access Journals (Sweden)

    F. Parchami Araghi

    2016-09-01

    Full Text Available Introduction: Subdaily estimates of reference evapotranspiration (ETo) are needed in many applications such as dynamic agro-hydrological modeling. However, in many regions, the lack of subdaily weather data availability has hampered efforts to quantify subdaily ETo. In the first presented paper, a physically based framework was developed to disaggregate the daily weather data needed for the estimation of subdaily ETo, including air temperature, wind speed, dew point, actual vapour pressure, relative humidity, and solar radiation. The main purpose of this study was to estimate subdaily ETo using disaggregated daily data derived from the disaggregation framework developed in the first presented paper. Materials and Methods: Subdaily ETo estimates were made using the ASCE and FAO-56 Penman–Monteith models (ASCE-PM and FAO56-PM, respectively) and subdaily weather data derived from the developed daily-to-subdaily weather data disaggregation framework. To this end, long-term daily weather data from the Abadan (59 years) and Ahvaz (50 years) synoptic weather stations were collected. A sensitivity analysis of the Penman–Monteith model to the different meteorological variables (including daily air temperature, wind speed at 2 m height, actual vapor pressure, and solar radiation) was carried out using partial derivatives of the Penman–Monteith equation. The capability of the two models for retrieving the daily ETo was evaluated using the root mean square error RMSE (mm), the mean error ME (mm), the mean absolute error MAE (mm), the Pearson correlation coefficient r (-), and the Nash–Sutcliffe model efficiency coefficient EF (-). Different contributions to the overall error were decomposed using a regression-based method. Results and Discussion: The results of the sensitivity analysis showed that the daily air temperature and the actual vapor pressure are the most significant meteorological variables affecting the ETo estimates. In contrast, low sensitivity

  19. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
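
    A toy sketch of the idea in this record follows: many small files are packed into one aggregated file while an index of per-file offsets and lengths is kept so individual files can be unpacked later. The JSON side-car format and all names are illustrative assumptions, not the patented implementation.

```python
import json, os

def aggregate(paths, out_path):
    """Pack the given small files into one aggregated file and write a JSON
    side-car holding each file's offset and length."""
    metadata, offset = {}, 0
    with open(out_path, "wb") as out:
        for p in paths:
            with open(p, "rb") as f:
                data = f.read()
            out.write(data)
            metadata[os.path.basename(p)] = {"offset": offset, "length": len(data)}
            offset += len(data)
    with open(out_path + ".meta.json", "w") as m:
        json.dump(metadata, m)

def unpack(out_path, name):
    """Read one original file back out of the aggregated file."""
    with open(out_path + ".meta.json") as m:
        entry = json.load(m)[name]
    with open(out_path, "rb") as f:
        f.seek(entry["offset"])
        return f.read(entry["length"])
```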

  20. Counteracting 16-QAM Optical Fiber Transmission Impairments With Iterative Turbo Equalization

    DEFF Research Database (Denmark)

    Arlunno, Valeria; Caballero Jambrina, Antonio; Borkowski, Robert

    2013-01-01

    A turbo equalization (TE) scheme based on convolutional code and normalized least mean square equalizer for coherent optical communication links is proposed and experimentally demonstrated. The proposed iterative TE technique is proved effective for counteracting polarization-division-multiplexin...

  1. Validating CDIAC's population-based approach to the disaggregation of within-country CO2 emissions

    International Nuclear Information System (INIS)

    Cushman, R.M.; Beauchamp, J.J.; Brenkert, A.L.

    1998-01-01

    The Carbon Dioxide Information Analysis Center produces and distributes a data base of CO2 emissions from fossil-fuel combustion and cement production, expressed as global, regional, and national estimates. CDIAC also produces a companion data base, expressed on a one-degree latitude-longitude grid. To do this gridding, emissions within each country are spatially disaggregated according to the distribution of population within that country. Previously, the lack of within-country emissions data prevented a validation of this approach. But emissions inventories are now becoming available for most US states. An analysis of these inventories confirms that population distribution explains most, but not all, of the variance in the distribution of CO2 emissions within the US. Additional sources of variance (coal production, non-carbon energy sources, and interstate electricity transfers) are explored, with the hope that the spatial disaggregation of emissions can be improved.
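
    A toy sketch of the population-proportional gridding described above follows: a national emissions total is spread over grid cells in proportion to each cell's share of the national population. The array contents, mask and units are illustrative assumptions, not CDIAC's actual data or code.

```python
import numpy as np

def disaggregate_emissions(national_total, population_grid, country_mask):
    """Assign emissions to grid cells in proportion to population share."""
    pop = np.where(country_mask, population_grid, 0.0)
    share = pop / pop.sum()                   # population share of each cell
    return national_total * share             # emissions assigned to each cell

population = np.array([[10.0, 40.0], [30.0, 20.0]])   # thousands of people
mask = np.array([[True, True], [True, False]])        # cells inside the country
grid = disaggregate_emissions(1000.0, population, mask)  # e.g. kt CO2
print(grid, grid.sum())                       # cell values sum to the national total
```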

  2. Iterative Overlap FDE for Multicode DS-CDMA

    Science.gov (United States)

    Takeda, Kazuaki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Recently, a new frequency-domain equalization (FDE) technique, called overlap FDE, that requires no GI insertion was proposed. However, the residual inter/intra-block interference (IBI) cannot completely be removed. In addition to this, for multicode direct sequence code division multiple access (DS-CDMA), the presence of residual interchip interference (ICI) after FDE distorts orthogonality among the spreading codes. In this paper, we propose an iterative overlap FDE for multicode DS-CDMA to suppress both the residual IBI and the residual ICI. In the iterative overlap FDE, joint minimum mean square error (MMSE)-FDE and ICI cancellation is repeated a sufficient number of times. The bit error rate (BER) performance with the iterative overlap FDE is evaluated by computer simulation.
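
    To make the per-iteration equalisation step concrete, the sketch below shows a plain one-tap MMSE frequency-domain equaliser for a single received block; the overlap processing, despreading and ICI cancellation of the proposed scheme are omitted, and the channel response, noise variance and example signal are assumed values for illustration only.

```python
import numpy as np

def mmse_fde(received_block, H, noise_var):
    """One-tap MMSE frequency-domain equalisation of a single block,
    given the channel frequency response H on the same FFT grid."""
    R = np.fft.fft(received_block)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)   # per-subcarrier MMSE weight
    return np.fft.ifft(W * R)

# Example: a two-path channel and a random QPSK-like block (circular model).
rng = np.random.default_rng(2)
block = (rng.integers(0, 2, 64) * 2 - 1) + 1j * (rng.integers(0, 2, 64) * 2 - 1)
h = np.array([1.0, 0.5])                             # channel impulse response
H = np.fft.fft(h, 64)
received = np.fft.ifft(H * np.fft.fft(block))        # circular convolution model
print(np.round(mmse_fde(received, H, noise_var=1e-3)[:4], 2))
```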

  3. Iterating skeletons

    DEFF Research Database (Denmark)

    Dieterle, Mischa; Horstmeyer, Thomas; Berthold, Jost

    2012-01-01

    Skeleton-based programming is an area of increasing relevance with upcoming highly parallel hardware, since it substantially facilitates parallel programming and separates concerns. When parallel algorithms expressed by skeletons involve iterations – applying the same algorithm repeatedly… a particular skeleton ad-hoc for repeated execution turns out to be considerably complicated, and raises general questions about introducing state into a stateless parallel computation. In addition, one would strongly prefer an approach which leaves the original skeleton intact, and only uses it as a building block inside a bigger structure. In this work, we present a general framework for skeleton iteration and discuss requirements and variations of iteration control and iteration body. Skeleton iteration is expressed by synchronising a parallel iteration body skeleton with a (likewise parallel) state…

  4. Converged photonic data storage and switch platform for exascale disaggregated data centers

    Science.gov (United States)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  5. Analytical and laser scanning techniques to determine shape properties of mineral aggregates

    CSIR Research Space (South Africa)

    Komba, Julius J

    2013-01-01

    Full Text Available processed to reconstruct 3-D models of the aggregate particles. The models were further analyzed to determine the form properties. In this paper, two analysis approaches, based on aggregate physical properties and spherical harmonic analysis, were employed...

  6. Study on the interaction between oxolinic acid aggregates and protein and its analytical application

    International Nuclear Information System (INIS)

    Wu Xia; Zheng Jinhua; Ding Honghong; Ran Dehuan; Xu Wei; Song Yuanyuan; Yang Jinghe

    2007-01-01

    It was found that oxolinic acid (OA) at high concentration can self-assemble into nano- to micrometer-scale OA aggregates in Tris-HCl (pH 7.48) buffer solution. The nanoparticles of OA were adopted as fluorescence probes in the quantitative analysis of proteins. Under optimum conditions, the fluorescence quenching extent of the nanometer-scale OA aggregates was in proportion to the concentration of albumins in the range of 3.0 × 10⁻⁸ to 3.0 × 10⁻⁵ g mL⁻¹ for bovine serum albumin (BSA) and 8.0 × 10⁻⁸ to 8.0 × 10⁻⁶ g mL⁻¹ for human serum albumin (HSA). The detection limits (S/N = 3) were 3.4 × 10⁻⁹ g mL⁻¹ for BSA and 2.6 × 10⁻⁸ g mL⁻¹ for HSA, respectively. Samples were satisfactorily determined. The interaction mechanism of the system was studied using fluorescence, UV-vis, resonance light scattering (RLS) and transmission electron microscopy (TEM), etc., indicating that a nonluminescent complex was formed between serum albumin molecules and OA, disaggregating the self-association of OA, which resulted in predominantly static fluorescence quenching in the system.

  7. Quantitative evaluation of ASiR image quality: an adaptive statistical iterative reconstruction technique

    Science.gov (United States)

    Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan

    2012-03-01

    Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation, as we know it from the standard Filtered Back Projection (FBP) algorithm, and the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It studies how noise propagates through the reconstruction steps, feeds this model back into the loop and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast to noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast to noise ratio the images from ASiR can be obtained using 60% less current, leading to a reduction in dose of the same amount.

  8. Adaptive algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Lu Wenkai; Yin Fangfang

    2004-01-01

    Algebraic reconstruction techniques (ART) are iterative procedures for reconstructing objects from their projections. It is proven that ART can be computationally efficient by carefully arranging the order in which the collected data are accessed during the reconstruction procedure and adaptively adjusting the relaxation parameters. In this paper, an adaptive algebraic reconstruction technique (AART), which adopts the same projection access scheme in multilevel scheme algebraic reconstruction technique (MLS-ART), is proposed. By introducing adaptive adjustment of the relaxation parameters during the reconstruction procedure, one-iteration AART can produce reconstructions with better quality, in comparison with one-iteration MLS-ART. Furthermore, AART outperforms MLS-ART with improved computational efficiency
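
    For orientation, a minimal Python sketch of the underlying ART (Kaczmarz-type) update with a relaxation parameter is given below. The adaptive relaxation rule and the MLS-ART projection-access order described in the abstract are not reproduced; the decaying 'lam' schedule and the tiny test system are only illustrative.

        import numpy as np

        def art_sweep(A, b, x, lam):
            """One ART sweep: project onto one measurement (row) at a time."""
            for a_i, b_i in zip(A, b):
                denom = a_i @ a_i
                if denom > 0:
                    x = x + lam * (b_i - a_i @ x) / denom * a_i
            return x

        def art(A, b, n_sweeps=10, lam0=1.0, decay=0.8):
            x = np.zeros(A.shape[1])
            lam = lam0
            for _ in range(n_sweeps):
                x = art_sweep(A, b, x, lam)
                lam *= decay            # crude stand-in for adaptive adjustment
            return x

        A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(art(A, b))                # approaches the solution [1, 2]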

  9. Reusing recycled aggregates in structural concrete

    Science.gov (United States)

    Kou, Shicong

    The utilization of recycled aggregates in concrete can minimize environmental impact and reduce the consumption of natural resources in concrete applications. The aim of this thesis is to provide a scientific basis for the possible use of recycled aggregates in structural concrete by conducting a comprehensive programme of laboratory study to gain a better understanding of the mechanical, microstructure and durability properties of concrete produced with recycled aggregates. The study also explored possible techniques to improve the properties of recycled aggregate concrete that is produced with high percentages (≧ 50%) of recycled aggregates. These techniques included: (a) using lower water-to-cement ratios in the concrete mix design; (b) using fly ash as a cement replacement or as an additional mineral admixture in the concrete mixes, and (c) precasting recycled aggregate concrete with steam curing regimes. The characteristics of the recycled aggregates produced both in the laboratory and at a commercially operated pilot construction and demolition (C&D) waste recycling plant were first studied. A mix proportioning procedure was then established to produce six series of concrete mixtures using different percentages of recycled coarse aggregates with and without the use of fly ash. Water-to-cement (binder) ratios of 0.55, 0.50, 0.45 and 0.40 were used. The fresh properties (including slump and bleeding) of recycled aggregate concrete (RAC) were then quantified. The effects of fly ash on the fresh and hardened properties of RAC were then studied and compared with those of RAC prepared without fly ash addition. Furthermore, the effects of steam curing on the hardened properties of RAC were investigated. For micro-structural properties, the interfacial transition zones between the aggregates and the mortar/cement paste were analyzed by SEM and EDX-mapping. Moreover, a detailed set of results on the fracture properties of RAC was obtained. Based on the experimental

  10. Navigating between Disaggregating Nation States and Entrenching Processes of Globalisation

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2007-01-01

    ...on the international community for its economic survival, this dependency on the global has the consequence that it rolls back aspects of national sovereignty, thus opening up the national hinterland to further international influences. These developments initiate a process of disaggregating state and nation, meaning that a gradual disarticulation of the relationship between state and nation produces new societal spaces, which are contested by non-statist interest groups and by transnational, more or less deterritorialised, ethnically affiliated groups and networks. The argument forwarded in this article is that the ethnic Chinese...

  11. ITER-FEAT - outline design report. Report by the ITER Director. ITER meeting, Tokyo, January 2000

    International Nuclear Information System (INIS)

    2001-01-01

    It is now possible to define the key elements of ITER-FEAT. This report provides the results, to date, of the joint work of the Special Working Group in the form of an Outline Design Report on the ITER-FEAT design which, subject to the views of ITER Council and of the Parties, will be the focus of further detailed design work and analysis in order to provide to the Parties a complete and fully integrated engineering design within the framework of the ITER EDA extension

  12. Effect of hybrid iterative reconstruction technique on quantitative and qualitative image analysis at 256-slice prospective gating cardiac CT

    International Nuclear Information System (INIS)

    Utsunomiya, Daisuke; Weigold, W. Guy; Weissman, Gaby; Taylor, Allen J.

    2012-01-01

    To evaluate the effect of hybrid iterative reconstruction on qualitative and quantitative parameters at 256-slice cardiac CT. Prospective cardiac CT images from 20 patients were analysed. Paired image sets were created using 3 reconstructions, i.e. filtered back projection (FBP) and moderate- and high-level iterative reconstructions. Quantitative parameters including CT attenuation, noise, and contrast-to-noise ratio (CNR) were determined in both proximal and distal coronary segments. Image quality was graded on a 4-point scale. Coronary CT attenuation values were similar for FBP and moderate- and high-level iterative reconstruction at 293 ± 74, 290 ± 75, and 283 ± 78 Hounsfield units (HU), respectively. CNR was significantly higher with moderate- and high-level iterative reconstructions (10.9 ± 3.5 and 18.4 ± 6.2, respectively) than FBP (8.2 ± 2.5), as was the visual grading of proximal vessels. Visualisation of distal vessels was better with high-level iterative reconstruction than FBP. The mean number of assessable segments among 289 segments was 245, 260, and 267 for FBP, moderate- and high-level iterative reconstruction, respectively; the difference between FBP and high-level iterative reconstruction was significant. Interobserver agreement was significantly higher for moderate- and high-level iterative reconstruction than FBP. Cardiac CT using hybrid iterative reconstruction yields higher CNR and better image quality than FBP. Key points: Cardiac CT helps clinicians to assess patients with coronary artery disease. Hybrid iterative reconstruction provides improved cardiac CT image quality. Hybrid iterative reconstruction improves the number of assessable coronary segments. Hybrid iterative reconstruction improves interobserver agreement on cardiac CT. (orig.)

  13. Analysis techniques of lattice fringe images for quantified evaluation of pyrocarbon by chemical vapor infiltration.

    Science.gov (United States)

    Li, Miaoling; Zhao, Hongxia; Qi, Lehua; Li, Hejun

    2014-10-01

    Some image analysis techniques are developed for simplifying lattice fringe images of deposited pyrocarbon in carbon/carbon composites by chemical vapor infiltration. They are mainly the object counting method for detecting the optimum threshold, the self-adaptive morphological filtering, the node-separation technique for breaking the aggregate fringes, and some post processing algorithms for reconstructing the fringes. The simplified fringes are the foundation for defining and extracting quantitative nanostructure parameters of pyrocarbon. The frequency filter window of a Fourier transform is defined as the circular band that retains only those fringes with interlayer distance between 0.3 and 0.45 nm. Some judge criteria are set to define topological relation between fringes. For example, the aspect ratio and area of fringes are employed to detect aggregate fringes. Fringe coaxality and distance between endpoints are used to judge the disconnected fringes. The optimum values are determined by using the iterative correction techniques. The best cut-off value for the short fringes is chosen only when there is a reasonable match between the mean fringe length and the value measured by X-ray diffraction. The adopted techniques have been verified to be feasible and to have the potential to convert the complex lattice fringe image to a set of distinct fringe structures.
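
    The circular frequency band mentioned above can be illustrated with a short Python sketch: Fourier components whose spatial period lies between 0.3 and 0.45 nm are retained and all others suppressed. The pixel size is an assumed input, and this is only the band-pass step, not the full fringe-simplification pipeline.

        import numpy as np

        def fringe_bandpass(image, pixel_size_nm, d_min=0.3, d_max=0.45):
            """Keep only Fourier components with spatial period between d_min and d_max (nm)."""
            ny, nx = image.shape
            F = np.fft.fftshift(np.fft.fft2(image))
            fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size_nm))  # cycles per nm
            fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_nm))
            FX, FY = np.meshgrid(fx, fy)
            freq = np.hypot(FX, FY)
            mask = (freq >= 1.0 / d_max) & (freq <= 1.0 / d_min)       # period = 1 / frequency
            return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))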

  14. Optimal Re-Routes and Ground Delays Using a Route-Based Aggregate Air Traffic Flow Model

    Science.gov (United States)

    Soler, Lluis

    The National Airspace System (NAS) is very complex and with a high level of uncertainty. For this reason, developing an automated conflict resolution tool at NAS level is presented as a big challenge. One way to address the problem is by using aggregate models, which can significantly reduce its dimension and complexity. Significant effort has been made to develop an air traffic aggregate model capable to effectively state and solve the problem. In this study, a Route-Based Aggregate Model is developed and tested. It consists in a modification of several existing models and overcomes some issues identified in previous aggregate models. It allows the implementation of Traffic Flow Management conventional controls, such as ground delay and rerouting. These control strategies can be used to avoid congestion conflicts based on sectors and airports capacity as well as regions affected by convective weather. The optimization problem is posed as a Linear Programming routine, which guarantees an optimal solution that minimizes the total accumulated delay required to avoid such capacity conflicts. The solutions can be directly translated into specific instructions at aircraft level, via modification of the times of departure and flight plans. The model is integrated with Future Air Traffic Management Concepts Evaluation Tool (FACET), a state of the art air traffic simulation tool, and uses its files as both input and output. This allows simulating in FACET the solution obtained from the aggregate domain. The approach is validated by applying it in three realistic scenarios at different scales. Results show that, for time horizons larger than 2 hours, the accuracy of the aggregate model is similar to other simulation tools. Also, the modified flight plans, the product of the disaggregated solution, reduce the number of capacity conflicts in the FACET simulation. Future research will study the robustness of these solutions and determine the most appropriate scenarios where to
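
    As an illustration of the linear-programming formulation described above, the toy Python sketch below assigns an aggregate flow of flights to route/slot options so that a sector capacity is respected while the total accumulated delay is minimised. All option definitions and numbers are invented for the example.

        import numpy as np
        from scipy.optimize import linprog

        delay = np.array([0.0, 15.0, 30.0, 10.0])         # delay cost of each option (min)
        demand = 20.0                                     # flights to be scheduled
        sector_load = np.array([[1.0, 1.0, 0.0, 0.0]])    # options 0 and 1 cross the congested sector
        capacity = np.array([12.0])                       # sector capacity in the period

        res = linprog(c=delay,
                      A_ub=sector_load, b_ub=capacity,     # respect sector capacity
                      A_eq=np.ones((1, 4)), b_eq=[demand], # every flight gets an option
                      bounds=[(0, None)] * 4)
        print(res.x, res.fun)                             # optimal aggregate assignment, total delay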

  15. Impact of Public Aggregate Wind Forecasts on Electricity Market Outcomes

    DEFF Research Database (Denmark)

    Exizidis, Lazaros; Kazempour, Jalal; Pinson, Pierre

    2017-01-01

    Following a call to foster a transparent and more competitive market, member states of the European transmission system operator are required to publish, among other information, aggregate wind power forecasts. The publication of the latter information is expected to benefit market participants by offering better knowledge of the market operation, leading subsequently to a more competitive energy market. Driven by the above regulation, we consider an equilibrium study to address how public information of aggregate wind power forecasts can potentially affect market results, social welfare as well as the profits of participating power producers. We investigate, therefore, a joint day-ahead energy and reserve auction, where producers offer their conventional power strategically based on a complementarity approach and their wind power at generation cost based on a forecast. In parallel, an iterative game...

  16. Iterated random walks with shape prior

    DEFF Research Database (Denmark)

    Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma

    2016-01-01

    We propose a new framework for image segmentation using random walks where a distance shape prior is combined with a region term. The shape prior is weighted by a confidence map to reduce the influence of the prior in high gradient areas and the region term is computed with k-means to estimate the parametric probability density function. Then, random walks is performed iteratively, aligning the prior with the current segmentation in every iteration. We tested the proposed approach with natural and medical images and compared it with the latest techniques with random walks and shape priors. The experiments suggest that this method gives promising results for medical and natural images.

  17. Iterative concurrent reconstruction algorithms for emission computed tomography

    International Nuclear Information System (INIS)

    Brown, J.K.; Hasegawa, B.H.; Lang, T.F.

    1994-01-01

    Direct reconstruction techniques, such as those based on filtered backprojection, are typically used for emission computed tomography (ECT), even though it has been argued that iterative reconstruction methods may produce better clinical images. The major disadvantage of iterative reconstruction algorithms, and a significant reason for their lack of clinical acceptance, is their computational burden. We outline a new class of 'concurrent' iterative reconstruction techniques for ECT in which the reconstruction process is reorganized such that a significant fraction of the computational processing occurs concurrently with the acquisition of ECT projection data. These new algorithms use the 10-30 min required for acquisition of a typical SPECT scan to iteratively process the available projection data, significantly reducing the requirements for post-acquisition processing. These algorithms are tested on SPECT projection data from a Hoffman brain phantom acquired with 2 x 10^5 counts in 64 views, each having 64 projections. The SPECT images are reconstructed as 64 x 64 tomograms, starting with six angular views. Other angular views are added to the reconstruction process sequentially, in a manner that reflects their availability for a typical acquisition protocol. The results suggest that if T s of concurrent processing are used, the reconstruction processing time required after completion of the data acquisition can be reduced by at least (1/3)T s. (Author)

  18. Iterative method for Amado's model

    International Nuclear Information System (INIS)

    Tomio, L.

    1980-01-01

    A recently proposed iterative method for solving scattering integral equations is applied to the spin doublet and spin quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase-shifts and results are found better than those obtained by using the conventional Pade technique. (Author) [pt

  19. Advanced and automated laser-based technique to evaluate aggregates

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2011-11-01

    Full Text Available: ...; angularity; surface texture) of aggregates used in pavements and railway ballast. To date, no automated method is available for direct measurements of shape properties of these materials in Africa. The objective of this paper is to present a three...

  20. Extending ITER materials design to welded joints

    Energy Technology Data Exchange (ETDEWEB)

    Tavassoli, A.-A.F. [DMN/Dir, CEA/Saclay, Commissariat a l' Energie Atomique, 91191 Gif sur Yvette cedex (France)]. E-mail: tavassoli@cea.fr

    2007-08-01

    This paper extends the ITER materials properties documentation to weld metals and incorporates the needs of Test Blanket Modules for higher temperature materials properties. Since the main structural material selected for ITER is type 316L(N)-IG, the paper is focused on weld metals and joining techniques for this steel. Materials properties data are analysed according to the French design and construction rules for nuclear components (RCC-MR) and design allowables are likewise derived using the same rules. Particular attention is paid to the type of weld metal, to the type and position of welding and their influence on the materials properties data and design allowables. The primary goal of this work, starting with 19-12-2 weld metal, is to produce comprehensive materials properties documentation that, when combined with codification and inspection documents, would satisfy ITER licensing needs. As a result, structural stability and capability of welded joints during manufacturing of ITER components and their subsequent service, including the effects of irradiation and eventual incidental or accidental situations, are also covered.

  1. ITER council proceedings: 2001

    International Nuclear Information System (INIS)

    2001-01-01

    Continuing the ITER EDA, two further ITER Council Meetings were held since the publication of ITER EDA Documentation Series No. 20, namely the ITER Council Meeting on 27-28 February 2001 in Toronto, and the ITER Council Meeting on 18-19 July in Vienna. That Meeting was the last one during the ITER EDA. This volume contains records of these Meetings, including: Records of decisions; List of attendees; ITER EDA status report; ITER EDA technical activities report; MAC report and advice; Final report of ITER EDA; and Press release

  2. ITER safety

    International Nuclear Information System (INIS)

    Raeder, J.; Piet, S.; Buende, R.

    1991-01-01

    As part of the series of publications by the IAEA that summarize the results of the Conceptual Design Activities for the ITER project, this document describes the ITER safety analyses. It contains an assessment of normal operation effluents, accident scenarios, plasma chamber safety, tritium system safety, magnet system safety, external loss of coolant and coolant flow problems, and a waste management assessment, while it describes the implementation of the safety approach for ITER. The document ends with a list of major conclusions, a set of topical remarks on technical safety issues, and recommendations for the Engineering Design Activities, safety considerations for siting ITER, and recommendations with regard to the safety issues for the R and D for ITER. Refs, figs and tabs

  3. Study of indicators aggregation techniques for the selection of a new nuclear reactor for Mexico

    International Nuclear Information System (INIS)

    Barragan M, A.M.; Martin del Campo M, C.

    2007-01-01

    A study of several aggregation techniques that can be used as multi-criteria analysis methods, an important part of the methodology developed for the selection of a nuclear reactor for Mexico, is described. Three reactors were arbitrarily selected for comparison: the AP1000 (Advanced Passive, 1000 MWe), the PBMR (Pebble Bed Modular Reactor) and the GT-MHR (Gas Turbine Modular Helium Reactor). The evaluation criteria were classified into three categories: economic, socio-political, and safety and environment. In each category the most important evaluation indicators were defined, and a matrix of their values for each reactor was built. The four aggregation methods studied are described: normalization, linear weighting, fuzzy logic and AHP (Analytic Hierarchy Process). The best-known aggregation mechanisms are those obtained from linear weighting and from normalization, which have given good results owing to the simplicity of their use. Fuzzy logic has the advantage of handling qualitative and quantitative information simultaneously without the usual aggregation problems, since its semantic model is provided by the theory of fuzzy sets, which in other areas of knowledge has proved to be a better approximation to reality by admitting that nature has shades and that decisions are taken over a wide range of possibilities and of criteria that may be contradictory or in conflict, all equally valid. The Analytic Hierarchy Process (AHP) consists in formalizing the intuitive understanding of a complex multi-criteria problem through the construction of a hierarchical model that allows the decision maker to structure the problem visually, in the form of a hierarchy of attributes (global objective of the problem, criteria and alternatives). Finally, using the matrix of indicators
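
    A minimal Python sketch of two of the mechanisms named above, normalization with a linear weighted sum and the AHP priority vector derived from a pairwise-comparison matrix, is given below. The indicator values, weights and comparison judgements are invented and do not come from the study.

        import numpy as np

        # indicator matrix: one row per reactor, one column per criterion
        scores = np.array([[0.9, 0.6, 0.7],
                           [0.7, 0.8, 0.8],
                           [0.6, 0.7, 0.9]])
        weights = np.array([0.5, 0.2, 0.3])

        norm = scores / scores.sum(axis=0)                # column-wise normalization
        print("weighted-sum ranking:", norm @ weights)

        # AHP: criteria weights = principal eigenvector of a reciprocal
        # pairwise-comparison matrix
        P = np.array([[1.0, 3.0, 2.0],
                      [1/3, 1.0, 0.5],
                      [0.5, 2.0, 1.0]])
        vals, vecs = np.linalg.eig(P)
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        print("AHP criteria weights:", w / w.sum())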

  4. Iterative methods for tomography problems: implementation to a cross-well tomography problem

    Science.gov (United States)

    Karadeniz, M. F.; Weber, G. W.

    2018-01-01

    The velocity distribution between two boreholes is reconstructed by cross-well tomography, which is commonly used in geology. In this paper, three iterative methods, namely Kaczmarz’s algorithm, the algebraic reconstruction technique (ART), and the simultaneous iterative reconstruction technique (SIRT), are applied to a specific cross-well tomography problem. The convergence of these methods and their CPU times for the cross-well tomography problem are compared. Furthermore, these three methods are compared for different tolerance values on this problem.
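
    For reference, a minimal Python sketch of the SIRT update, the simultaneous counterpart of the row-by-row Kaczmarz/ART update, is given below; the tiny test system is invented and is not the cross-well problem of the paper.

        import numpy as np

        def sirt(A, b, n_iter=50):
            """Simultaneous iterative reconstruction: one whole-system update per iteration."""
            row_sum = np.abs(A).sum(axis=1); row_sum[row_sum == 0] = 1.0
            col_sum = np.abs(A).sum(axis=0); col_sum[col_sum == 0] = 1.0
            R = 1.0 / row_sum                  # per-ray normalisation
            C = 1.0 / col_sum                  # per-cell normalisation
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + C * (A.T @ (R * (b - A @ x)))
            return x

        A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
        b = np.array([2.0, 3.0, 3.0])
        print(sirt(A, b))                      # converges towards [1, 1, 2]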

  5. Final report of the ITER EDA. Final report of the ITER Engineering Design Activities. Prepared by the ITER Council

    International Nuclear Information System (INIS)

    2001-01-01

    This is the Final Report by the ITER Council on work carried out by ITER participating countries on cooperation in the Engineering Design Activities (EDA) for the ITER. In this report the main ITER EDA technical objectives, the scope of ITER EDA, its organization and resources, engineering design of ITER tokamak and its main parameters are presented. This Report also includes safety and environmental assessments, site requirements and proposed schedule and estimates of manpower and cost as well as proposals on approaches to joint implementation of the project

  6. ITER technology R and D progress report. Report by the Director. ITER technical advisory committee meeting, 25-27 June 2000, St. Petersburg

    International Nuclear Information System (INIS)

    2001-01-01

    The overall philosophy for the ITER design has been to use established approaches through detailed analysis and to validate their application to ITER through technology R and D, including fabrication of full scale or scalable models of key components. All this R and D work has been done for ITER under collaboration among the Home Teams, with a total resource of about 660 KIUA. R and D issues for ITER-FEAT are almost the same as for the 1998 ITER design. Major developments and fabrication have been completed and tests have significantly progressed. The technical output from the R and D validates the technologies and confirms the manufacturing techniques and quality assurance incorporated in the ITER design, and supports the manufacturing cost estimates for important key cost drivers. The testing of models is continuing to demonstrate their performance margin and/or to optimize their operational use. Their realisation offers insights useful for a possible future collaborative construction activity. Valuable and relevant experience has already been gained in the management of industrial scale, cross-party ventures. The successful progress of these projects increases confidence in the possibility of jointly constructing ITER in an international project framework. The R and D present status is summarized in the following: details are given in Chapters 2 and 3. Significant efforts and resources have been devoted to the Seven Large R and D Projects which cover all the major key components of the basic machine of ITER and their maintenance tools

  7. New Ideas for Confined Alpha Diagnostics on ITER

    Science.gov (United States)

    Fisher, R. K.

    2003-10-01

    Understanding the dynamics of a burning plasma will require development of adequate alpha particle diagnostics. Three new approaches to obtain information on the confined fast alphas in ITER are proposed. The first technique measures the energetic D and T charge exchange (CX) neutrals that result from the alpha collision-induced knock-on fuel ion tails undergoing electron capture on the MeV D neutral beams planned for heating and current drive. CX neutrals with energies >1 MeV would be measured to avoid the background due to the large population of injected beam ions. The second technique measures the energetic knock-on neutron tail due to alphas using the lengths of the proton recoil tracks produced by neutron collisions in the film. The range of the 14 to 18 MeV recoil protons increases by ~400 microns per MeV. The third approach would measure the CX helium neutrals resulting from confined alphas capturing two electrons in the ablation cloud surrounding a dense gas jet that has been proposed for disruption mitigation in ITER. Jet Charge Exchange (JCX) could allow measurements in the plasma core, while the Pellet Charge Exchange (PCX) technique, which provided much of the data on confined alphas in TFTR, will likely be limited by pellet penetration to measurements outside r/a ~ 0.5 in ITER.

  8. The Disaggregation of Value-Added Test Scores to Assess Learning Outcomes in Economics Courses

    Science.gov (United States)

    Walstad, William B.; Wagner, Jamie

    2016-01-01

    This study disaggregates posttest, pretest, and value-added or difference scores in economics into four types of economic learning: positive, retained, negative, and zero. The types are derived from patterns of student responses to individual items on a multiple-choice test. The micro and macro data from the "Test of Understanding in College…

  9. Performance and Complexity Evaluation of Iterative Receiver for Coded MIMO-OFDM Systems

    Directory of Open Access Journals (Sweden)

    Rida El Chall

    2016-01-01

    Full Text Available Multiple-input multiple-output (MIMO) technology in combination with channel coding techniques is a promising solution for reliable high data rate transmission in future wireless communication systems. However, these technologies pose significant challenges for the design of an iterative receiver. In this paper, an efficient receiver combining soft-input soft-output (SISO) detection based on a low-complexity K-Best (LC-K-Best) decoder with various forward error correction codes, namely the LTE turbo decoder and the LDPC decoder, is investigated. We first investigate the convergence behaviors of the iterative MIMO receivers to determine the required inner and outer iterations. Consequently, the performance of the LC-K-Best based receiver is evaluated in various LTE channel environments and compared with other MIMO detection schemes. Moreover, the computational complexity of the iterative receiver with different channel coding techniques is evaluated and compared for different modulation orders and coding rates. Simulation results show that the LC-K-Best based receiver achieves satisfactory performance-complexity trade-offs.

  10. iterClust: a statistical framework for iterative clustering analysis.

    Science.gov (United States)

    Ding, Hongxu; Wang, Wanxin; Califano, Andrea

    2018-03-22

    In a scenario where populations A, B1 and B2 (subpopulations of B) exist, pronounced differences between A and B may mask subtle differences between B1 and B2. Here we present iterClust, an iterative clustering framework, which can separate more pronounced differences (e.g. A and B) in starting iterations, followed by relatively subtle differences (e.g. B1 and B2), providing a comprehensive clustering trajectory. iterClust is implemented as a Bioconductor R package. andrea.califano@columbia.edu, hd2326@columbia.edu. Supplementary information is available at Bioinformatics online.
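
    The iterative clustering idea can be sketched in a few lines of Python. This is a conceptual analogue only, not the iterClust Bioconductor package; it assumes scikit-learn is available, and the depth, minimum-size and silhouette thresholds are invented.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        def iter_cluster(X, depth=2, min_size=10):
            """Split coarsely first, then re-cluster each subset to expose subtler structure."""
            labels = np.zeros(len(X), dtype=int)
            next_label = [1]

            def split(idx, level):
                if level == depth or len(idx) < 2 * min_size:
                    return
                km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[idx])
                if silhouette_score(X[idx], km.labels_) < 0.2:   # stop if the split is weak
                    return
                for k in (0, 1):
                    sub = idx[km.labels_ == k]
                    labels[sub] = next_label[0]
                    next_label[0] += 1
                    split(sub, level + 1)

            split(np.arange(len(X)), 0)
            return labels

        # e.g. final_labels = iter_cluster(np.random.rand(200, 5))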

  11. Measurement of particle size distribution of soil and selected aggregate sizes using the hydrometer method and laser diffractometry

    Science.gov (United States)

    Guzmán, G.; Gómez, J. A.; Giráldez, J. V.

    2010-05-01

    Soil particle size distribution has been traditionally determined by the hydrometer or the sieve-pipette methods, both time-consuming and requiring a relatively large soil sample. This might be a limitation in situations, such as the analysis of suspended sediment, where the sample is small. A possible alternative to these methods is optical techniques such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to traditional methods is still limited, because of the difficulty of replicating the results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry within ranges set between 0.04 and 2000 μm, measured with a Beckman-Coulter® LS-230 with a 750 nm laser beam and software version 3.2, in five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine size classes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module filled with running water and disaggregated by ultrasonication at energy level 4 with 80 ml of sodium hexametaphosphate solution for 580 seconds. Two replicates of each sample were performed. Each measurement was made for a 90 second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating their own optical model, fitting the optical parameters that mainly depend on the color and the shape of the analyzed particles. As a

  12. Disaggregated Energy Consumption and Sectoral Outputs in Thailand: ARDL Bound Testing Approach

    OpenAIRE

    Thurai Murugan Nathan; Venus Khim-Sen Liew; Wing-Keung Wong

    2016-01-01

    From an economic perspective, energy-output relationship studies have become increasingly popular in recent times, partly fuelled by a need to understand the effect of energy on production outputs rather than overall GDP. This study dealt with disaggregated energy consumption and outputs of some major economic sectors in Thailand. ARDL bound testing approach was employed to examine the co-integration relationship. The Granger causality test of the aforementioned ARDL framework was done to inv...

  13. Employment in Disequilibrium: a Disaggregated Approach on a Panel of French Firms

    OpenAIRE

    Brigitte Dormont

    1989-01-01

    The purpose of this paper is to understand disequilibrium phenomena at a disaggregated level. By using data on French firms, we carry out the estimation of labor demand model with two regimes, which correspond to the Keynesian and classical hypotheses. The results enable us to characterize classical firms as being particularly good performers: they have more rapid growth, younger productive plant and higher productivity gains and profitability. Classical firms stand out, with respect to their...

  14. Flocculation kinetics and aggregate structure of kaolinite mixtures in laminar tube flow.

    Science.gov (United States)

    Vaezi G, Farid; Sanders, R Sean; Masliyah, Jacob H

    2011-03-01

    Flocculation is commonly used in various solid-liquid separation processes in chemical and mineral industries to separate desired products or to treat waste streams. This paper presents an experimental technique to study flocculation processes in laminar tube flow. This approach allows for more realistic estimation of the shear rate to which an aggregate is exposed, as compared to more complicated shear fields (e.g. stirred tanks). A direct sampling method is used to minimize the effect of sampling on the aggregate structure. A combination of aggregate settling velocity and image analysis was used to quantify the structure of the aggregate. Aggregate size, density, and fractal dimension were found to be the most important aggregate structural parameters. The two methods used to determine aggregate fractal dimension were in good agreement. The effects of advective flow through an aggregate's porous structure and transition-regime drag coefficient on the evaluation of aggregate density were considered. The technique was applied to investigate the flocculation kinetics and the evolution of the aggregate structure of kaolin particles with an anionic flocculant under conditions similar to those of oil sands fine tailings. Aggregates were formed using a well controlled two-stage aggregation process. Detailed statistical analysis was performed to investigate the establishment of dynamic equilibrium condition in terms of aggregate size and density evolution. An equilibrium steady state condition was obtained within 90 s of the start of flocculation; after which no further change in aggregate structure was observed. Although longer flocculation times inside the shear field could conceivably cause aggregate structure conformation, statistical analysis indicated that this did not occur for the studied conditions. The results show that the technique and experimental conditions employed here produce aggregates having a well-defined, reproducible structure. Copyright © 2011

  15. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    Science.gov (United States)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein
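
    For orientation, the inner-loop idea of an iterative-transform method can be sketched as a bare-bones Gerchberg-Saxton-style loop in Python: transform back and forth between pupil and image planes, imposing the measured modulus in each plane while keeping the phase. The focus-diverse outer loop and adaptive diversity function described above are not reproduced, and the amplitude arrays are assumed inputs.

        import numpy as np

        def iterative_transform(pupil_amp, image_amp, n_iter=200):
            """Recover the pupil phase from pupil- and image-plane amplitude measurements."""
            phase = np.zeros_like(pupil_amp)
            for _ in range(n_iter):
                field = pupil_amp * np.exp(1j * phase)
                img = np.fft.fft2(field)
                img = image_amp * np.exp(1j * np.angle(img))  # impose measured image modulus
                back = np.fft.ifft2(img)
                phase = np.angle(back)                        # keep the recovered pupil phase
            return phase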

  16. Status of the ITER vacuum vessel construction

    Energy Technology Data Exchange (ETDEWEB)

    Choi, C.H.; Sborchia, C.; Ioki, K.; Giraud, B.; Utin, Yu.; Sa, J.W. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Wang, X., E-mail: xiaoyuwww@gmail.com [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Teissier, P.; Martinez, J.M.; Le Barbier, R.; Jun, C.; Dani, S.; Barabash, V.; Vertongen, P.; Alekseev, A. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Jucker, P.; Bayon, A. [F4E, c/ Josep Pla, n. 2, Torres Diagonal Litoral, Edificio B3, E-08019 Barcelona (Spain); Pathak, H.; Raval, J. [ITER-India, IPR, A-29, Electronics Estate, GIDC, Sector-25, Gandhinagar 382025 (India); Ahn, H.J. [ITER Korea, National Fusion Research Institute, Daejeon (Korea, Republic of); and others

    2014-10-15

    Highlights: • Final design of the ITER vacuum vessel (VV). • Procurement of the ITER VV. • Manufacturing results of real scale mock-ups. • Manufacturing status of the VV in domestic agencies. - Abstract: The ITER vacuum vessel (VV) is being manufactured by four domestic agencies after completion of engineering designs that have been approved by the Agreed Notified Body (ANB). Manufacturing designs of the VV are being completed, component by component, accommodating the requirements of the RCC-MR 2007 edition. Manufacturing of the first VV sector started in Korea in February 2012 and of the in-wall shielding in India in May 2013. The EU will start manufacturing its first sector in September 2013 and Russia the upper port by the end of 2013. All DAs have manufactured several mock-ups, including real-size ones, to justify/qualify and establish manufacturing techniques and procedures.

  17. Study of wall conditioning in tokamaks with application to ITER

    International Nuclear Information System (INIS)

    Kogut, Dmitri

    2014-01-01

    This thesis is devoted to studies of the performance and efficiency of wall conditioning techniques in fusion reactors, such as ITER. Conditioning is necessary to control the state of the surface of plasma facing components to ensure plasma initiation and performance. Conditioning and operation of the JET tokamak with an ITER-relevant material mix are extensively studied. A 2D model of glow conditioning discharges is developed and validated; it predicts reasonably uniform discharges in ITER. In the nuclear phase of ITER operation, conditioning will be needed to control the tritium inventory. It is shown here that isotopic exchange is an efficient means of eliminating tritium from the walls by replacing it with deuterium. Extrapolations for tritium removal are comparable with the expected retention per nominal plasma pulse in ITER. A 1D model of hydrogen isotopic exchange in beryllium is developed and validated. It shows that the fluence and the temperature of the surface influence the efficiency of the isotopic exchange. (author) [fr

  18. Influence of organic molecules on the aggregation of TiO{sub 2} nanoparticles in acidic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Danielsson, Karin, E-mail: karin.danielsson@chem.gu.se [University of Gothenburg, Department of Chemistry and Molecular Biology (Sweden); Gallego-Urrea, Julián A.; Hassellov, Martin [University of Gothenburg, Department of Marine Sciences (Sweden); Gustafsson, Stefan [Chalmers University of Technology, Department of Applied Physics (Sweden); Jonsson, Caroline M. [University of Gothenburg, Department of Chemistry and Molecular Biology (Sweden)

    2017-04-15

    Engineered nanoparticles released into the environment may interact with natural organic matter (NOM). Surface complexation affects the surface potential, which in turn may lead to aggregation of the particles. Aggregation of synthetic TiO{sub 2} (anatase) nanoparticles in aqueous suspension was investigated at pH 2.8 as a function of time in the presence of various organic molecules and Suwannee River fulvic acid (SRFA), using dynamic light scattering (DLS) and high-resolution transmission electron microscopy (TEM). Results showed that the average hydrodynamic diameter and ζ-potential were dependent on both concentration and molecular structure of the organic molecule. Results were also compared with those of quantitative batch adsorption experiments. Further, a time study of the aggregation of TiO{sub 2} nanoparticles in the presence of 2,3-dihydroxybenzoic acid (2,3-DHBA) and SRFA, respectively, was performed in order to observe changes in ζ-potential and particle size over a time period of 9 months. In the 2,3-DHBA-TiO{sub 2} system, ζ-potentials decreased with time resulting in charge neutralization and/or inversion depending on ligand concentration. Aggregate sizes increased initially to the micrometer size range, followed by disaggregation after several months. No or very little interaction between SRFA and TiO{sub 2} occurred at the lowest concentrations tested. However, at the higher concentrations of SRFA, there was an increase in both aggregate size and the amount of SRFA adsorbed to the TiO{sub 2} surface. This was in correlation with the ζ-potential that decreased with increased SRFA concentration, leading to destabilization of the system. These results stress the importance of performing studies over both short and long time periods to better understand and predict the long-term effects of nanoparticles in the environment.

  19. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    Science.gov (United States)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
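
    A minimal Python sketch of an AMG-preconditioned conjugate-gradient solve is given below. PyAMG's smoothed-aggregation solver is used here as a readily available stand-in for the pairwise-aggregation AGMG preconditioner of the paper, and a simple Poisson test matrix replaces the 3D finite-difference resistivity system.

        import numpy as np
        import pyamg
        from scipy.sparse.linalg import cg

        A = pyamg.gallery.poisson((40, 40, 40), format='csr')   # 7-point stencil test matrix
        b = np.random.rand(A.shape[0])

        ml = pyamg.smoothed_aggregation_solver(A)               # build the AMG hierarchy
        M = ml.aspreconditioner(cycle='V')                      # V-cycle used as preconditioner

        x, info = cg(A, b, M=M, atol=1e-8)
        print("converged" if info == 0 else "not converged")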

  20. The Long-Run Macroeconomic Effects of Aid and Disaggregated Aid in Ethiopia

    DEFF Research Database (Denmark)

    Gebregziabher, Fiseha Haile

    2014-01-01

    ...positively, whereas it is negatively associated with government consumption. Our results concerning the impacts of disaggregated aid stand in stark contrast to earlier work. Bilateral aid increases investment and GDP and is negatively associated with government consumption, whereas multilateral aid is only positively associated with imports. Grants contribute to GDP, investment and imports, whereas loans affect none of the variables. Finally, there is evidence to suggest that multilateral aid and loans have been disbursed in a procyclical fashion...

  1. ITER council proceedings: 2000

    International Nuclear Information System (INIS)

    2001-01-01

    No ITER Council Meetings were held during 2000. However, two ITER EDA Meetings were held, one in Tokyo, January 19-20, and one in Moscow, June 29-30. The parties participating in these meetings were those that partake in the extended ITER EDA, namely the EU, the Russian Federation, and Japan. This document contains, among others, the records of these meetings, the list of attendees, the agenda, the ITER EDA Status Reports issued during these meetings, the TAC (Technical Advisory Committee) reports and recommendations, the MAC Reports and Advice (also for the July 1999 Meeting), the ITER-FEAT Outline Design Report, the TAC Reports and Recommendations (both meetings), Site Requirements and Site Design Assumptions, the Tentative Sequence of Technical Activities 2000-2001, the Report of the ITER SWG-P2 on Joint Implementation of ITER, and the EU/ITER Canada Proposal for New ITER Identification

  2. Metrology for ITER Assembly

    International Nuclear Information System (INIS)

    Bogusch, E.

    2006-01-01

    The overall dimensions of the ITER Tokamak and the particular assembly sequence preclude the use of conventional optical metrology, mechanical jigs and traditional dimensional control equipment, as used for the assembly of smaller, previous generation, fusion devices. This paper describes the state of the art of the capabilities of available metrology systems, with reference to previous experience in fusion engineering and in other industries. Two complementary procedures for transferring datums from the primary datum network on the bioshield to the secondary datums inside the VV with the desired accuracy of about 0.1 mm are described: one method uses access directly through the ports and the other uses transfer techniques developed during the co-operation with ITER/EFDA. Another important task described is the development of a method for the rapid and easy measurement of the gaps between sectors, required for the production of the customised splice plates between them. The scope of the paper includes the evaluation of the composition and cost of the systems and of the team of technical staff required to meet the requirements of the assembly procedure. The results from a practical, full-scale demonstration of the methodologies, using the proposed equipment, are described. This work has demonstrated the feasibility of achieving the necessary accuracies for the successful building of ITER. (author)

  3. Perl Modules for Constructing Iterators

    Science.gov (United States)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
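
    The nested-iteration idea behind Iterator::Hash can be illustrated with a short sketch, shown in Python rather than Perl purely as a conceptual analogue; the keys and values below are invented.

        from itertools import product

        def hash_iterator(spec):
            """Yield every combination of values from a mapping of key -> iterable."""
            keys = list(spec)
            for combo in product(*(spec[k] for k in keys)):
                yield dict(zip(keys, combo))

        spec = {"date": ["2009-01-01", "2009-01-08"],   # e.g. a recurrence expanded elsewhere
                "granule": ["a", "b", "c"]}
        for item in hash_iterator(spec):
            print(item)                                 # 6 combinations in total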

  4. Synthetic food additive dye "Tartrazine" triggers amorphous aggregation in cationic myoglobin.

    Science.gov (United States)

    Al-Shabib, Nasser Abdulatif; Khan, Javed Masood; Khan, Mohd Shahnawaz; Ali, Mohd Sajid; Al-Senaidy, Abdulrahman M; Alsenaidy, Mohammad A; Husain, Fohad Mabood; Al-Lohedan, Hamad A

    2017-05-01

    Protein aggregation, a characteristic of several neurodegenerative diseases, displays vast conformational diversity from amorphous to amyloid-like aggregates. In this study, we have explored the interaction of tartrazine with myoglobin protein at two different pHs (7.4 and 2.0). We have utilized various spectroscopic techniques (turbidity, Rayleigh light scattering (RLS), intrinsic fluorescence, Congo Red and far-UV CD) along with microscopy techniques i.e. atomic force microscopy (AFM) and transmission electron microscopy (TEM) to characterize the tartrazine-induced aggregation in myoglobin. The results showed that higher concentrations of tartrazine (2.0-10.0mM) induced amorphous aggregation in myoglobin at pH 2.0 via electrostatic interactions. However, tartrazine was not able to induce aggregation in myoglobin at pH 7.4; because of strong electrostatic repulsion between myoglobin and tartrazine at this pH. The tartrazine-induced amorphous aggregation process is kinetically very fast, and aggregation occurred without the formation of a nucleus. These results proposed that the electrostatic interaction is responsible for tartrazine-induced amorphous aggregation. This study may help in the understanding of mechanistic insight of aggregation by tartrazine. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. ITER overview

    International Nuclear Information System (INIS)

    Shimomura, Y.; Aymar, R.; Chuyanov, V.; Huguet, M.; Parker, R.R.

    2001-01-01

    This report summarizes technical works of six years done by the ITER Joint Central Team and Home Teams under terms of Agreement of the ITER Engineering Design Activities. The major products are as follows: complete and detailed engineering design with supporting assessments, industrial-based cost estimates and schedule, non-site specific comprehensive safety and environmental assessment, and technology R and D to validate and qualify design including proof of technologies and industrial manufacture and testing of full size or scalable models of key components. The ITER design is at an advanced stage of maturity and contains sufficient technical information for a construction decision. The operation of ITER will demonstrate the availability of a new energy source, fusion. (author)

  6. ITER Overview

    International Nuclear Information System (INIS)

    Shimomura, Y.; Aymar, R.; Chuyanov, V.; Huguet, M.; Parker, R.

    1999-01-01

    This report summarizes technical works of six years done by the ITER Joint Central Team and Home Teams under terms of Agreement of the ITER Engineering Design Activities. The major products are as follows: complete and detailed engineering design with supporting assessments, industrial-based cost estimates and schedule, non-site specific comprehensive safety and environmental assessment, and technology R and D to validate and qualify design including proof of technologies and industrial manufacture and testing of full size or scalable models of key components. The ITER design is at an advanced stage of maturity and contains sufficient technical information for a construction decision. The operation of ITER will demonstrate the availability of a new energy source, fusion. (author)

  7. Balancing energy flexibilities through aggregation

    DEFF Research Database (Denmark)

    Valsomatzis, Emmanouil; Hose, Katja; Pedersen, Torben Bach

    2014-01-01

    One of the main goals of recent developments in the Smart Grid area is to increase the use of renewable energy sources. These sources are characterized by energy fluctuations that might lead to energy imbalances and congestions in the electricity grid. Exploiting inherent flexibilities, which exist in both energy production and consumption, is the key to solving these problems. Flexibilities can be expressed as flex-offers, which due to their high number need to be aggregated to reduce the complexity of energy scheduling. In this paper, we discuss balance aggregation techniques that already during aggregation aim at balancing flexibilities in production and consumption to reduce the probability of congestions and reduce the complexity of scheduling. We present results of our extensive experiments.

  8. ITER Council proceedings: 1993

    International Nuclear Information System (INIS)

    1994-01-01

    Records of the third ITER Council Meeting (IC-3), held on 21-22 April 1993, in Tokyo, Japan, and the fourth ITER Council Meeting (IC-4) held on 29 September - 1 October 1993 in San Diego, USA, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA), such as the text of the draft of Protocol 2 further elaborated in ''ITER EDA Agreement and Protocol 2'' (ITER EDA Documentation Series No. 5), recommendations on future work programmes: a description of technology R and D tasks; the establishment of a trust fund for the ITER EDA activities; arrangements for Visiting Home Team Personnel; the general framework for the involvement of other countries in the ITER EDA; conditions for the involvement of Canada in the Euratom Contribution to the ITER EDA; and other attachments as parts of the Records of Decision of the aforementioned ITER Council Meetings

  9. ITER council proceedings: 1993

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    Records of the third ITER Council Meeting (IC-3), held on 21-22 April 1993, in Tokyo, Japan, and the fourth ITER Council Meeting (IC-4) held on 29 September - 1 October 1993 in San Diego, USA, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA), such as the text of the draft of Protocol 2 further elaborated in "ITER EDA Agreement and Protocol 2" (ITER EDA Documentation Series No. 5), recommendations on future work programmes: a description of technology R and D tasks; the establishment of a trust fund for the ITER EDA activities; arrangements for Visiting Home Team Personnel; the general framework for the involvement of other countries in the ITER EDA; conditions for the involvement of Canada in the Euratom Contribution to the ITER EDA; and other attachments as parts of the Records of Decision of the aforementioned ITER Council Meetings.

  10. Chapter 7: Diagnostics [Progress in the ITER Physics Basis (PIPB)

    International Nuclear Information System (INIS)

    Donne, A.J.H.; Costley, A.E.; Barnsley, R.

    2007-01-01

    In order to support the operation of ITER and the planned experimental programme an extensive set of plasma and first wall measurements will be required. The number and type of required measurements will be similar to those made on the present-day large tokamaks, while the specification of the measurements (time and spatial resolutions, etc.) will in some cases be more stringent. Many of the measurements will be used in the real-time control of the plasma, driving a requirement for very high reliability in the systems (diagnostics) that provide the measurements. The implementation of diagnostic systems on ITER is a substantial challenge. Because of the harsh environment (high levels of neutron and gamma fluxes, neutron heating, particle bombardment), diagnostic system selection and design has to cope with a range of phenomena not previously encountered in diagnostic design. Extensive design and R and D are needed to prepare the systems. In some cases the environmental difficulties are so severe that new diagnostic techniques are required. The starting point in the development of diagnostics for ITER is to define the measurement requirements and develop their justification. It is necessary to include all the plasma parameters needed to support the basic and advanced operation (including active control) of the device and machine protection, and also those needed to support the physics programme. Once the requirements are defined, the appropriate (combination of) diagnostic techniques can be selected and their implementation on the tokamak can be developed. The selected list of diagnostics is an important guideline for identifying dedicated research and development needs in the area of ITER diagnostics. This paper gives a comprehensive overview of recent progress in the field of ITER diagnostics with emphasis on the implementation issues. After a discussion of the measurement requirements for plasma parameters in ITER and their justifications, recent progress in the field of

  11. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    Science.gov (United States)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and robots are equipped with different types of sensors, while each robot has only one type of landmine detection sensor on it. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses sensory input of the robot and performs calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker agent uses this aggregate belief information about a potential landmine and makes decisions about which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, using our prediction market-based information aggregation technique increases the accuracy of object classification favorably as compared to two other commonly used techniques.
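
    The abstract does not give the market-clearing rules, so the following Python sketch only illustrates the general idea of fusing per-robot beliefs into one aggregate confidence that a decision-maker agent could act on; the reliability weights and decision threshold are hypothetical stand-ins, not the paper's prediction-market mechanism.

        # Illustrative only: reliability-weighted fusion of per-robot beliefs, standing in
        # for the prediction-market aggregation mechanism; weights and threshold are invented.
        def aggregate_beliefs(beliefs, weights):
            """beliefs: per-robot confidences in [0, 1]; weights: sensor reliabilities."""
            return sum(b * w for b, w in zip(beliefs, weights)) / sum(weights)

        belief = aggregate_beliefs([0.9, 0.4, 0.7], [1.0, 0.5, 0.8])
        deploy_more_robots = 0.3 < belief < 0.8      # hypothetical decision threshold
        print(round(belief, 3), deploy_more_robots)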

  12. Right adrenal vein: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Science.gov (United States)

    Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M

    2018-06-01

    To compare right adrenal vein (RAV) visualisation and contrast enhancement degree on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. The expert and beginner independently reviewed computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 in expert and 93% versus 75%, p=0.002 in beginner, respectively). RAV visualisation confidence ratings with MBIR were significantly greater than with ASiR, and background noise with MBIR was significantly lower than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in the RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  13. ITER-FEAT safety

    International Nuclear Information System (INIS)

    Gordon, C.W.; Bartels, H.-W.; Honda, T.; Raeder, J.; Topilski, L.; Iseli, M.; Moshonas, K.; Taylor, N.; Gulden, W.; Kolbasov, B.; Inabe, T.; Tada, E.

    2001-01-01

    Safety has been an integral part of the design process for ITER since the Conceptual Design Activities of the project. The safety approach adopted in the ITER-FEAT design and the complementary assessments underway, to be documented in the Generic Site Safety Report (GSSR), are expected to help demonstrate the attractiveness of fusion and thereby set a good precedent for future fusion power reactors. The assessments address ITER's radiological hazards taking into account fusion's favourable safety characteristics. The expectation that ITER will need regulatory approval has influenced the entire safety design and assessment approach. This paper summarises the ITER-FEAT safety approach and assessments underway. (author)

  14. The fractal character of radiation defects aggregation in crystals

    International Nuclear Information System (INIS)

    Akylbekov, A.; Akimbekov, E.; Baktybekov, K.; Vasil'eva, I.

    2002-01-01

    In self-organization processes, which characterize open systems, the source of ordering is a departure from equilibrium. One example of such an ordering system is the radiation-stimulated aggregation of defects in solids. The present work analyses the ordering criteria for defect structures in a solid that is continuously irradiated at low temperature. A cellular-automaton method was used to simulate the irradiation, which allowed the processes of defect formation and recombination to be imitated. The simulation was carried out on surfaces of up to 1000x1000 units with an initial defect concentration C_n (the dose rate) of 0.1-1%. The number of iterations N (the irradiation duration) amounted to 10^6 cycles. Single centres, which act as sources for the formation of aggregates, survive as a result of the probabilistic nature of the formation and recombination of genetic defect pairs with a strictly fixed recombination radius (the minimum inter-anionic distance). To determine the character of the distribution of same-type defects, their interaction potential was calculated as a function of defect type and mutual distance. For a more detailed study of the processes proceeding in cells containing aggregates of a certain size, the time dependence of the interaction potential was constructed. It is shown that at the primary stage the potential is negative; it then increases and approaches saturation in the positive region. The minimum of the interaction potential corresponds to a state of physical chaos in the system, and its increase accompanies the formation of aggregates of same-type defects. The subsequent transition to saturation and the 'undulating' character of the curves are explained by the formation and destruction of aggregates. The data indicate that these processes occur simultaneously in cells of different sizes, which allows us to conclude that the aggregation of radiation defects has a fractal nature.
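
    The cellular-automaton procedure outlined above can be sketched as follows. The lattice size, dose and recombination radius in this Python sketch are placeholder values, not those of the cited simulation, and the two defect signs simply stand for the two members of a genetic (Frenkel-type) pair.

        # Minimal cellular-automaton sketch of radiation-stimulated defect aggregation:
        # each cycle creates vacancy/interstitial pairs, and a new defect annihilates if a
        # defect of the opposite sign lies within the recombination radius; otherwise it
        # survives and may join an aggregate. All parameter values are placeholders.
        import random

        def simulate(size=200, pairs_per_cycle=20, radius=1, cycles=1000):
            lattice = {}                                  # (x, y) -> +1 or -1 (the two pair members)
            for _ in range(cycles):                       # one cycle = one irradiation step
                for _ in range(pairs_per_cycle):
                    for kind in (+1, -1):
                        x, y = random.randrange(size), random.randrange(size)
                        recombined = False
                        for dx in range(-radius, radius + 1):
                            for dy in range(-radius, radius + 1):
                                other = ((x + dx) % size, (y + dy) % size)
                                if lattice.get(other) == -kind:
                                    del lattice[other]    # genetic-pair recombination
                                    recombined = True
                                    break
                            if recombined:
                                break
                        if not recombined:
                            lattice[(x, y)] = kind        # surviving centre, may aggregate
            return lattice

        print(len(simulate(size=100, cycles=100)), "defects survive")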

  15. Iterative solution of the semiconductor device equations

    Energy Technology Data Exchange (ETDEWEB)

    Bova, S.W.; Carey, G.F. [Univ. of Texas, Austin, TX (United States)

    1996-12-31

    Most semiconductor device models can be described by a nonlinear Poisson equation for the electrostatic potential coupled to a system of convection-reaction-diffusion equations for the transport of charge and energy. These equations are typically solved in a decoupled fashion and e.g. Newton's method is used to obtain the resulting sequences of linear systems. The Poisson problem leads to a symmetric, positive definite system which we solve iteratively using conjugate gradient. The transport equations lead to nonsymmetric, indefinite systems, thereby complicating the selection of an appropriate iterative method. Moreover, their solutions exhibit steep layers and are subject to numerical oscillations and instabilities if standard Galerkin-type discretization strategies are used. In the present study, we use an upwind finite element technique for the transport equations. We also evaluate the performance of different iterative methods for the transport equations and investigate various preconditioners for a few generalized gradient methods. Numerical examples are given for a representative two-dimensional depletion MOSFET.
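
    For the symmetric positive definite system arising from the discretised Poisson step, the conjugate gradient iteration mentioned above takes the following generic form. This is a NumPy sketch with a placeholder test matrix, not the authors' device code.

        import numpy as np

        # Generic conjugate-gradient sketch for a symmetric positive definite system,
        # as used for the discretised Poisson step; A and b are placeholders.
        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b, dtype=float)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Small 1-D Poisson-like test matrix (tridiagonal 2, -1).
        n = 50
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        print(np.allclose(A @ conjugate_gradient(A, np.ones(n)), np.ones(n)))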

  16. Logic-based aggregation methods for ranking student applicants

    Directory of Open Access Journals (Sweden)

    Milošević Pavle

    2017-01-01

    Full Text Available In this paper, we present logic-based aggregation models used for ranking student applicants and we compare them with a number of existing aggregation methods, each more complex than the previous one. The proposed models aim to include dependencies in the data using Logical aggregation (LA). LA is an aggregation method based on interpolative Boolean algebra (IBA), a consistent multi-valued realization of Boolean algebra. This technique is used for a Boolean consistent aggregation of attributes that are logically dependent. The comparison is performed in the case of student applicants for master programs at the University of Belgrade. We have shown that LA has some advantages over other presented aggregation methods. The software realization of all applied aggregation methods is also provided. This paper may be of interest not only for student ranking, but also for similar problems of ranking people e.g. employees, team members, etc.

  17. Determining the disaggregated economic value of irrigation water in the Musi sub-basin in India

    NARCIS (Netherlands)

    Hellegers, P.J.G.J.; Davidson, B.

    2010-01-01

    In this paper the residual method is used to determine the disaggregated economic value of irrigation water used in agriculture across crops, zones and seasons. This method relies on the belief that the value of a good (its price by its quantity) is equal to the summation of the quantity of each

  18. Efficient clustering aggregation based on data fragments.

    Science.gov (United States)

    Wu, Ou; Hu, Weiming; Maybank, Stephen J; Zhu, Mingliang; Li, Bing

    2012-06-01

    Clustering aggregation, known as clustering ensembles, has emerged as a powerful technique for combining different clustering results to obtain a single better clustering. Existing clustering aggregation algorithms are applied directly to data points, in what is referred to as the point-based approach. The algorithms are inefficient if the number of data points is large. We define an efficient approach for clustering aggregation based on data fragments. In this fragment-based approach, a data fragment is any subset of the data that is not split by any of the clustering results. To establish the theoretical bases of the proposed approach, we prove that clustering aggregation can be performed directly on data fragments under two widely used goodness measures for clustering aggregation taken from the literature. Three new clustering aggregation algorithms are described. The experimental results obtained using several public data sets show that the new algorithms have lower computational complexity than three well-known existing point-based clustering aggregation algorithms (Agglomerative, Furthest, and LocalSearch); nevertheless, the new algorithms do not sacrifice the accuracy.
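
    The fragment construction that the approach relies on can be sketched directly: a fragment is a maximal group of points that none of the input clusterings splits, i.e. points carrying identical label tuples across all clusterings. The Python sketch below shows only this step; the aggregation subsequently performed on fragments is not reproduced.

        from collections import defaultdict

        # Sketch of the fragment construction step only.
        def data_fragments(clusterings):
            """clusterings: list of label lists, one label per point per clustering."""
            fragments = defaultdict(list)
            for point, labels in enumerate(zip(*clusterings)):
                fragments[labels].append(point)
            return list(fragments.values())

        # Two clusterings of five points yield three fragments.
        print(data_fragments([[0, 0, 1, 1, 1],
                              [0, 0, 0, 1, 1]]))   # [[0, 1], [2], [3, 4]]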

  19. Travel behaviour and the total activity pattern

    NARCIS (Netherlands)

    van der Hoorn, A.I.J.M.

    1979-01-01

    In the past years the behavioural basis of travel demand models has been considerably extended. In many cases individual behaviour is taken as the starting point of the analysis. Conventional aggregate models have been complemented by disaggregate models. However, even in most disaggregate models there

  20. DAIET: a system for data aggregation inside the network

    KAUST Repository

    Sapio, Amedeo

    2017-09-27

    Many data center applications nowadays rely on distributed computation models like MapReduce and Bulk Synchronous Parallel (BSP) for data-intensive computation at scale [4]. These models scale by leveraging the partition/aggregate pattern where data and computations are distributed across many worker servers, each performing part of the computation. A communication phase is needed each time workers need to synchronize the computation and, at last, to produce the final output. In these applications, the network communication costs can be one of the dominant scalability bottlenecks especially in case of multi-stage or iterative computations [1].

  1. Salt-induced aggregation of stiff polyelectrolytes

    International Nuclear Information System (INIS)

    Fazli, Hossein; Mohammadinejad, Sarah; Golestanian, Ramin

    2009-01-01

    Molecular dynamics simulation techniques are used to study the process of aggregation of highly charged stiff polyelectrolytes due to the presence of multivalent salt. The dominant kinetic mode of aggregation is found to be the case of one end of one polyelectrolyte meeting others at right angles, and the kinetic pathway to bundle formation is found to be similar to that of flocculation dynamics of colloids as described by Smoluchowski. The aggregation process is found to favor the formation of finite bundles of 10-11 filaments at long times. Comparing the distribution of the cluster sizes with the Smoluchowski formula suggests that the energy barrier for the aggregation process is negligible. Also, the formation of long-lived metastable structures with similarities to the raft-like structures of actin filaments is observed within a range of salt concentration.

  2. Suspensions of colloidal particles and aggregates

    CERN Document Server

    Babick, Frank

    2016-01-01

    This book addresses the properties of particles in colloidal suspensions. It has a focus on particle aggregates and the dependency of their physical behaviour on morphological parameters. For this purpose, relevant theories and methodological tools are reviewed and applied to selected examples. The book is divided into four main chapters. The first of them introduces important measurement techniques for the determination of particle size and interfacial properties in colloidal suspensions. A further chapter is devoted to the physico-chemical properties of colloidal particles—highlighting the interfacial phenomena and the corresponding interactions between particles. The book’s central chapter examines the structure-property relations of colloidal aggregates. This comprises concepts to quantify size and structure of aggregates, models and numerical tools for calculating the (light) scattering and hydrodynamic properties of aggregates, and a discussion on van-der-Waals and double layer interactions between ...

  3. Fly ash aggregates. Vliegaskunstgrind

    Energy Technology Data Exchange (ETDEWEB)

    1983-03-01

    A study has been carried out into artificial aggregates made from fly ash, 'fly ash aggregates'. Attention has been drawn to the production of fly ash aggregates in the Netherlands as a way to obviate the need of disposal of fly ash. Typical process steps for the manufacturing of fly ash aggregates are the agglomeration and the bonding of fly ash particles. Agglomeration techniques are subdivided into agitation and compaction, bonding methods into sintering, hydrothermal and 'cold' bonding. In sintering no bonding agent is used. The fly ash particles are more or less welded together. Sintering in general is performed at a temperature higher than 900 deg C. In hydrothermal processes lime reacts with fly ash to a crystalline hydrate at temperatures between 100 and 250 deg C at saturated steam pressure. As a lime source not only lime as such, but also portland cement can be used. Cold bonding processes rely on reaction of fly ash with lime or cement at temperatures between 0 and 100 deg C. The pozzolanic properties of fly ash are used. Where cement is applied, this bonding agent itself contributes also to the strength development of the artificial aggregate. Besides the use of lime and cement, several processes are known which make use of lime containing wastes such as spray dry absorption desulfurization residues or fluid bed coal combustion residues. (In Dutch)

  4. State-of-the-Art Fluorescence Fluctuation-Based Spectroscopic Techniques for the Study of Protein Aggregation.

    Science.gov (United States)

    Kitamura, Akira; Kinjo, Masataka

    2018-03-23

    Neurodegenerative diseases, including amyotrophic lateral sclerosis (ALS), Alzheimer's disease, Parkinson's disease, and Huntington's disease, are devastating proteinopathies with misfolded protein aggregates accumulating in neuronal cells. Inclusion bodies of protein aggregates are frequently observed in the neuronal cells of patients. Investigation of the underlying causes of neurodegeneration requires the establishment and selection of appropriate methodologies for detailed investigation of the state and conformation of protein aggregates. In the current review, we present an overview of the principles and application of several methodologies used for the elucidation of protein aggregation, specifically ones based on determination of fluctuations of fluorescence. The discussed methods include fluorescence correlation spectroscopy (FCS), imaging FCS, image correlation spectroscopy (ICS), photobleaching ICS (pbICS), number and brightness (N&B) analysis, super-resolution optical fluctuation imaging (SOFI), and transient state (TRAST) monitoring spectroscopy. Some of these methodologies are classical protein aggregation analyses, while others are not yet widely used. Collectively, the methods presented here should help the future development of research not only into protein aggregation but also neurodegenerative diseases.

  5. Fabrication progress of the ITER vacuum vessel sector in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, B.C., E-mail: bckim@nfri.re.kr [National Fusion Research Institute, Gwahangno 113, Yuseong-gu, Daejeon (Korea, Republic of); Lee, Y.J.; Hong, K.H.; Sa, J.W.; Kim, H.S.; Park, C.K.; Ahn, H.J.; Bak, J.S.; Jung, K.J. [National Fusion Research Institute, Gwahangno 113, Yuseong-gu, Daejeon (Korea, Republic of); Park, K.H.; Roh, B.R.; Kim, T.S.; Lee, J.S.; Jung, Y.H.; Sung, H.J.; Choi, S.Y.; Kim, H.G.; Kwon, I.K.; Kwon, T.H. [Hyundai Heavy Industries Co. Ltd., Dong-gu, Ulsan (Korea, Republic of)

    2013-10-15

    Highlights: ► Fabrication of an ITER vacuum vessel sector full-scale mock-up to develop fabrication procedures. ► The welding and nondestructive examination techniques conform to RCC-MR. ► Preparation for the actual manufacturing of the ITER vacuum vessel sector. -- Abstract: As a participant in the ITER project, ITER Korea has to supply two ITER vacuum vessel sectors (Sectors no. 6 and no. 1) of the total of nine ITER VV sectors. After the procurement arrangement with the ITER Organization, ITER Korea placed the contract with Hyundai Heavy Industries (HHI) for fabrication of the two sectors, and the manufacturing design started in January 2010. HHI made three real-scale R and D mock-ups to verify the critical fabrication feasibility issues of electron beam welding, 3D forming, welding distortion and achievable tolerances. In parallel, the documentation required by the IO and the French nuclear safety regulations was prepared, and the welding and nondestructive examination procedures were qualified in conformance with RCC-MR 2007. The mass production of raw material began after ANB (agreed notified body) verification of products/parts and shop qualification. The manufacturing drawings and the manufacturing and inspection plan of the VV sector, with the supporting fabrication procedures, were also verified by the ANB; accordingly, the first cutting and forming of plates for VV sector fabrication started in February 2012. This paper reports the latest fabrication progress of ITER vacuum vessel Sector no. 6, which will be assembled as the first sector in the ITER pit. The overall fabrication route, R and D mock-up fabrication results with forming and welding distortion analysis, and the qualification status of welding and nondestructive examination (NDE) are also presented.

  6. The Economic Impact of Higher Education Institutions in Ireland: Evidence from Disaggregated Input-Output Tables

    Science.gov (United States)

    Zhang, Qiantao; Larkin, Charles; Lucey, Brian M.

    2017-01-01

    While there has been a long history of modelling the economic impact of higher education institutions (HEIs), little research has been undertaken in the context of Ireland. This paper provides, for the first time, a disaggregated input-output table for Ireland's higher education sector. The picture painted overall is a higher education sector that…

  7. Advanced examination techniques applied to the qualification of critical welds for the ITER correction coils

    CERN Document Server

    Sgobba, Stefano; Libeyre, Paul; Marcinek, Dawid Jaroslaw; Piguiet, Aline; Cécillon, Alexandre

    2015-01-01

    The ITER correction coils (CCs) consist of three sets of six coils located in between the toroidal (TF) and poloidal field (PF) magnets. The CCs rely on a Cable-in-Conduit Conductor (CICC), whose supercritical cooling at 4.5 K is provided by helium inlets and outlets. The assembly of the nozzles to the stainless steel conductor conduit includes fillet welds requiring full penetration through the thickness of the nozzle. Static and cyclic stresses have to be sustained by the inlet welds during operation. The entire volume of helium inlet and outlet welds, that are submitted to the most stringent quality levels of imperfections according to standards in force, is virtually uninspectable with sufficient resolution by conventional or computed radiography or by Ultrasonic Testing. On the other hand, X-ray computed tomography (CT) was successfully applied to inspect the full weld volume of several dozens of helium inlet qualification samples. The extensive use of CT techniques allowed a significant progress in the ...

  8. Regionalisation of asset values for risk analyses

    Directory of Open Access Journals (Sweden)

    A. H. Thieken

    2006-01-01

    Full Text Available In risk analysis there is a spatial mismatch of hazard data that are commonly modelled on an explicit raster level and exposure data that are often only available for aggregated units, e.g. communities. Dasymetric mapping techniques that use ancillary information to disaggregate data within a spatial unit help to bridge this gap. This paper presents dasymetric maps showing the population density and a unit value of residential assets for the whole of Germany. A dasymetric mapping approach, which uses land cover data (CORINE Land Cover) as ancillary variable, was adapted and applied to regionalize aggregated census data that are provided for all communities in Germany. The results were validated by two approaches. First, it was ascertained whether population data disaggregated at the community level can be used to estimate population in postcodes. Secondly, disaggregated population and asset data were used for a loss evaluation of two flood events that occurred in 1999 and 2002, respectively. It must be concluded that the algorithm tends to underestimate the population in urban areas and to overestimate population in other land cover classes. Nevertheless, flood loss evaluations demonstrate that the approach is capable of providing realistic estimates of the number of exposed people and assets. Thus, the maps are sufficient for applications in large-scale risk assessments such as the estimation of population and assets exposed to natural and man-made hazards.
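
    The core dasymetric step can be illustrated with a small Python sketch: an aggregated community total is spread over its raster cells in proportion to land-cover class weights. The class weights used here are invented for illustration, not the calibrated values behind the German maps.

        # Dasymetric disaggregation sketch with placeholder land-cover weights.
        def disaggregate(total, cell_landcover, class_weights):
            """cell_landcover: land-cover class per raster cell; returns a value per cell."""
            weights = [class_weights.get(lc, 0.0) for lc in cell_landcover]
            weight_sum = sum(weights) or 1.0
            return [total * w / weight_sum for w in weights]

        # 1000 inhabitants spread over four cells; urban cells attract most of the total.
        print(disaggregate(1000,
                           ["urban", "urban", "agriculture", "forest"],
                           {"urban": 10.0, "agriculture": 1.0, "forest": 0.1}))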

  9. An analysis of the impact of the thermonuclear pilot project ITER on industry and research in Austria

    International Nuclear Information System (INIS)

    Hangel, G.

    2007-03-01

    An analysis of the influence of the thermonuclear pilot project ITER on Austrian research and industrial activities is presented in terms of the following subjects: fusion research history, ITER technique, security, nuclear fusion, ITER (reactor, project specifications for quotations), possibilities for Austrian companies and fusion research in Austria. (nevyjel)

  10. Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures

    Science.gov (United States)

    Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan

    2016-10-01

    We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by iterative interaction picture (multiple Schrödinger dynamics). As an application example, we use the deduced iterative based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows the dynamics designed by the iterative picture method is physically feasible and the shortcut scheme performs much better than that using the conventional adiabatic passage techniques. Also, the influences of various decoherence processes are discussed by numerical simulation and the results prove that the scheme is fast and robust against decoherence and operational imperfection.

  11. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    Science.gov (United States)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from real world is shown as a proof of concept.
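
    To make the coarse-scale consistency constraint concrete, the sketch below rescales a generated fine-scale sequence so that it reproduces the given coarse total exactly while leaving dry steps dry. Simple proportional rescaling is shown only for illustration; the paper's own adjusting procedure is specifically designed not to distort the stochastic structure of the generated series.

        # Illustrative consistency step only, not the paper's adjusting procedure.
        def adjust_to_total(fine_values, coarse_total):
            s = sum(fine_values)
            if s == 0.0:                 # an all-dry coarse interval stays dry
                return list(fine_values)
            return [v * coarse_total / s for v in fine_values]

        print(adjust_to_total([0.0, 1.2, 0.3, 0.0], 2.0))   # sums to 2.0, zeros preserved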

  12. Aggregating Local Descriptors for Epigraphs Recognition

    OpenAIRE

    Amato, Giuseppe; Falchi, Fabrizio; Rabitti, Fausto; Vadicamo, Lucia

    2014-01-01

    In this paper, we consider the task of recognizing epigraphs in images such as photos taken using mobile devices. Given a set of 17,155 photos related to 14,560 epigraphs, we used a k-NearestNeighbor approach in order to perform the recognition. The contribution of this work is in evaluating state-of-the-art visual object recognition techniques in this specific context. The experimental results conducted show that Vector of Locally Aggregated Descriptors obtained aggregating SIFT descriptors ...
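
    A minimal sketch of the VLAD encoding referred to above: each local descriptor is assigned to its nearest codebook centre, residuals are accumulated per centre, and the result is flattened and L2-normalised. Codebook size and descriptor dimension are placeholders, and the power-law normalisation sometimes used with VLAD is omitted.

        import numpy as np

        # Minimal VLAD sketch over placeholder descriptors and codebook.
        def vlad(descriptors, codebook):
            k, d = codebook.shape
            v = np.zeros((k, d))
            nearest = np.argmin(
                np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2), axis=1)
            for desc, c in zip(descriptors, nearest):
                v[c] += desc - codebook[c]          # residual accumulation
            v = v.ravel()
            norm = np.linalg.norm(v)
            return v / norm if norm > 0 else v

        # 100 SIFT-like descriptors, 8 visual words -> a 1024-dimensional image signature.
        print(vlad(np.random.rand(100, 128), np.random.rand(8, 128)).shape)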

  13. ITER council proceedings: 1998

    International Nuclear Information System (INIS)

    1999-01-01

    This volume contains documents of the 13th and the 14th ITER council meeting as well as of the 1st extraordinary ITER council meeting. Documents of the ITER meetings held in Vienna and Yokohama during 1998 are also included. The contents include an outline of the ITER objectives, the ITER parameters and design overview as well as operating scenarios and plasma performance. Furthermore, design features, safety and environmental characteristics are given

  14. Reflective metallic coatings for first mirrors on ITER

    International Nuclear Information System (INIS)

    Eren, Baran; Marot, Laurent; Litnovsky, Andrey; Matveeva, Maria; Steiner, Roland; Emberger, Valentin; Wisse, Marco; Mathys, Daniel; Covarel, Gregory; Meyer, Ernst

    2011-01-01

    Metallic mirrors are foreseen to play a crucial role for all optical diagnostics in ITER. Therefore, the development of reliable techniques for the production of mirrors which are able to maintain their optical properties in the harsh ITER environment is highly important. By applying magnetron sputtering and evaporation techniques, rhodium and molybdenum films have been prepared for tokamak tests. The films were characterised in terms of chemical composition, surface roughness, crystallite structure, reflectivity and adhesion. No impurities were detected on the surface after deposition. The effects of deposition parameters and substrate temperature on the resulting crystallite structure, surface roughness and hence on the reflectivity, were investigated. The films are found to exhibit nanometric crystallites with a dense columnar structure. Open boundaries between the crystallite columns, which are sometimes present after evaporation, are found to reduce the reflectivity as compared to rhodium or molybdenum references.

  15. Reflective metallic coatings for first mirrors on ITER

    Energy Technology Data Exchange (ETDEWEB)

    Eren, Baran, E-mail: baran.eren@unibas.ch [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Marot, Laurent [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Litnovsky, Andrey; Matveeva, Maria [Institut fuer Energieforschung (Plasmaphysik), Forschungszentrum Juelich, Association EURATOM-FZJ, D 52425 Juelich (Germany); Steiner, Roland; Emberger, Valentin; Wisse, Marco [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Mathys, Daniel [Centre of Microscopy, University of Basel, Klingelbergstrasse 50/70, CH-4056 Basel (Switzerland); Covarel, Gregory [Laboratoire de Physique et Mecanique Textile EA CNRS 7189, Universite de Haute Alsace, 61 rue Albert Camus, 68093 Mulhouse Cedex (France); Meyer, Ernst [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland)

    2011-10-15

    Metallic mirrors are foreseen to play a crucial role for all optical diagnostics in ITER. Therefore, the development of reliable techniques for the production of mirrors which are able to maintain their optical properties in the harsh ITER environment is highly important. By applying magnetron sputtering and evaporation techniques, rhodium and molybdenum films have been prepared for tokamak tests. The films were characterised in terms of chemical composition, surface roughness, crystallite structure, reflectivity and adhesion. No impurities were detected on the surface after deposition. The effects of deposition parameters and substrate temperature on the resulting crystallite structure, surface roughness and hence on the reflectivity, were investigated. The films are found to exhibit nanometric crystallites with a dense columnar structure. Open boundaries between the crystallite columns, which are sometimes present after evaporation, are found to reduce the reflectivity as compared to rhodium or molybdenum references.

  16. Iter

    Science.gov (United States)

    Iotti, Robert

    2015-04-01

    ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as ``Contributions-in-Cash.'' Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success

  17. Retention of ferrofluid aggregates at the target site during magnetic drug targeting

    Energy Technology Data Exchange (ETDEWEB)

    Asfer, Mohammed, E-mail: asfer786@gmail.com [School of Engineering and Technology, BML Munjal University, Haryana (India); Saroj, Sunil Kumar [Department of Mechanical Engineering, IIT Kanpur, Kanpur (India); Panigrahi, Pradipta Kumar, E-mail: panig@iitk.ac.in [Department of Mechanical Engineering, IIT Kanpur, Kanpur (India)

    2017-08-15

    Highlights: • The present in vitro work reports the retention dynamics of ferrofluid aggregates at the target site against a bulk flow of DI water inside a micro capillary during magnetic drug targeting. • The recirculation zone at the downstream of the aggregate is found to be a function of aggregate height, Reynolds number and the degree of surface roughness of the outer boundary of the aggregate. • The reported results of the present work can be used as a guideline for the better design of MDT technique for in vivo applications. - Abstract: The present study reports the retention dynamics of a ferrofluid aggregate localized at the target site inside a glass capillary (500 × 500 µm² square cross section) against a bulk flow of DI water (Re = 0.16 and 0.016) during the process of magnetic drug targeting (MDT). The dispersion dynamics of iron oxide nanoparticles (IONPs) into bulk flow for different initial size of aggregate at the target site is reported using the brightfield visualization technique. The flow field around the aggregate during the retention is evaluated using the µPIV technique. IONPs at the outer boundary experience a higher shear force as compared to the magnetic force, resulting in dispersion of IONPs into the bulk flow downstream to the aggregate. The blockage effect and the roughness of the outer boundary of the aggregate resulting from chain like clustering of IONPs contribute to the flow recirculation at the downstream region of the aggregate. The entrapment of seeding particles inside the chain like clusters of IONPs at the outer boundary of the aggregate reduces the degree of roughness resulting in a streamlined aggregate at the target site at later time. The effect of blockage, structure of the aggregate, and disturbed flow such as recirculation around the aggregate are the primary factors, which must be investigated for the effectiveness of the MDT process for in vivo applications.

  18. Proficiency Testing for Determination of Water Content in Toluene of Chemical Reagents by iteration robust statistic technique

    Science.gov (United States)

    Wang, Hao; Wang, Qunwei; He, Ming

    2018-05-01

    In order to investigate and improve the level of detection technology for water content in liquid chemical reagents in domestic laboratories, the proficiency testing provider PT0031 (CNAS) organized a proficiency testing programme for water content in toluene; 48 laboratories from 18 provinces, cities and municipalities took part in the PT. This paper introduces the implementation process of the proficiency testing for the determination of water content in toluene, including sample preparation, homogeneity and stability testing, and the statistical results obtained with the iterative robust statistic technique and their analysis. It also summarizes and analyses the different test standards that are widely used in the laboratories and puts forward technical suggestions for improving the quality of water content testing. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the participating laboratories.
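
    The abstract does not state which iterative robust estimator was applied; assuming a procedure in the spirit of ISO 13528 Algorithm A, the assigned value and robust standard deviation could be computed along the following lines (Python sketch for illustration only).

        import statistics

        # Iterative robust mean/standard deviation in the spirit of ISO 13528 Algorithm A:
        # start from the median and scaled MAD, then repeatedly winsorise and update.
        def robust_stats(values, iterations=20):
            x_star = statistics.median(values)
            s_star = 1.483 * statistics.median(abs(v - x_star) for v in values)
            for _ in range(iterations):
                delta = 1.5 * s_star
                winsorised = [min(max(v, x_star - delta), x_star + delta) for v in values]
                x_star = statistics.mean(winsorised)
                s_star = 1.134 * statistics.stdev(winsorised)
            return x_star, s_star

        # Placeholder results (% water content); the outlier barely influences the estimates.
        print(robust_stats([10.1, 10.3, 9.9, 10.2, 12.5, 10.0]))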

  19. Application of the perturbation iteration method to boundary layer type problems.

    Science.gov (United States)

    Pakdemirli, Mehmet

    2016-01-01

    The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.

  20. ECE diagnostics for RTO/RC ITER

    International Nuclear Information System (INIS)

    Vayakis, G.; Bartlett, D.V.; Costley, A.E.

    2001-01-01

    This paper presents the current status of the Electron Cyclotron Emission (ECE) diagnostic on the Reduced Technical Objectives/Reduced Cost International Thermonuclear Experimental Reactor (RTO/RC ITER). It discusses the implications of the new machine design for the measurement requirements, the ability of the diagnostic technique to meet these, and the changes in the implementation imposed by the new layout. Finally, it outlines the physics studies, design and R and D work required prior to the detailed design and construction of the diagnostic. Key results are: (i) the localisation of the measurement is similar to that in ITER-FDR (40-100 mm in X-mode, 60-200 mm in O-mode for the reference scenario), so that the relative spatial resolution degrades in this smaller machine, and (ii) the expected effect of transport barriers on the temperature profile in the high temperature region will be poorly resolved, because the effect of the temperature gradient on the outboard side is to degrade the resolution (to ∼250 mm in X-mode, ∼350 mm in O-mode). Nevertheless, ECE will be able to make a unique and useful contribution to the RTO/RC ITER measurement set.

  1. Separation and collection of coarse aggregate from waste concrete by electric pulsed power

    Science.gov (United States)

    Shigeishi, Mitsuhiro

    2017-09-01

    Waste concrete accounts for a substantial fraction of construction waste, and the recycling of waste concrete as concrete aggregate for construction is an important challenge associated with the rapid increase in the amount of waste concrete and the tight supply of natural aggregate. In this study, we propose a technique based on the use of high-voltage pulsed electric discharge into concrete underwater for separating and collecting aggregate from waste concrete with minimal deterioration of quality. By using this technique, the quality of the coarse aggregate separated and collected from concrete test specimens is comparable to that of coarse aggregate recycled by heating and grinding methods, thus satisfying the criteria in Japan Industrial Standard (JIS) A 5021 for the oven-dry density and the water absorption of coarse aggregate by advanced recycling.

  2. Development of liquid type TBM technology for ITER

    International Nuclear Information System (INIS)

    Lee, Dong Won; Kim, S. K.; Yoon, J. S.

    2012-03-01

    The final objectives of this project are as follows: development of the key techniques for the liquid-type TBM for ITER; developing a plan for leading and participating liquid TBM concepts; estimating cost and schedule according to the development schedule and managing the technologies; developing an integrated design system and completing the engineering design for the liquid TBM; developing the key technologies for the liquid TBM; and constructing performance test systems for the liquid TBM and verifying their performance. We have technically surveyed the ITER system design data, the parts of the ITER design that remain insufficient, and the required R and D items. In Korea, the HCML TBM, a liquid-type breeder with lithium or lead-lithium, has been studied over the past years to develop a tritium breeding technology for the tritium self-sufficiency of a nuclear fusion reactor, and the TBM was proposed to be tested in ITER. Through the present project we obtain key nuclear fusion reactor technologies, especially TBM design, analysis and manufacturing technology, and these technologies will help the construction of a Korean fusion DEMO reactor and the development of a commercial nuclear fusion reactor in Korea.

  3. Evaluation of Medium Spatial Resolution BRDF-Adjustment Techniques Using Multi-Angular SPOT4 (Take5 Acquisitions

    Directory of Open Access Journals (Sweden)

    Martin Claverie

    2015-09-01

    Full Text Available High-resolution sensor Surface Reflectance (SR) data are affected by surface anisotropy but are difficult to adjust because of the low temporal frequency of the acquisitions and the low angular sampling. This paper evaluates five high spatial resolution Bidirectional Reflectance Distribution Function (BRDF) adjustment techniques. The evaluation is based on the noise level of the SR Time Series (TS) corrected to a normalized geometry (nadir view, 45° sun zenith angle) extracted from the multi-angular acquisitions of SPOT4 over three study areas (one in Arizona, two in France) during the five-month SPOT4 (Take5) experiment. Two uniform techniques (Cst, for Constant, and Av, for Average), relying on the Vermote–Justice–Bréon (VJB) BRDF method, assume no variation in space of the BRDF shape. Two methods (VI-dis, for NDVI-based disaggregation, and LC-dis, for Land-Cover based disaggregation) are based on disaggregation of the MODIS-derived BRDF VJB parameters using vegetation index and land cover, respectively. The last technique (LUM, for Look-Up Map) relies on the MCD43 MODIS BRDF products and a crop type data layer. The VI-dis technique produced the lowest level of noise corresponding to the most effective adjustment: reduction from directional to normalized SR TS noises by 40% and 50% on average, for red and near-infrared bands, respectively. The uniform techniques displayed very good results, suggesting that a simple and uniform BRDF-shape assumption is good enough to adjust the BRDF in such a geometric configuration (the view zenith angle varies from nadir to 25°). The most complex techniques relying on land cover (LC-dis and LUM) displayed contrasting results depending on the land cover.

  4. Improved Classification by Non Iterative and Ensemble Classifiers in Motor Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    PANIGRAHY, P. S.

    2018-02-01

    Full Text Available The data-driven approach to multi-class fault diagnosis of an induction motor using MCSA at steady-state condition is a complex pattern classification problem. This investigation has exploited the built-in ensemble process of non-iterative classifiers to resolve the most challenging issues in this area, including bearing and stator fault detection. Non-iterative techniques exhibit, on average, a 15% increase in fault classification accuracy over their iterative counterparts. In particular, RF (random forest) has shown outstanding performance even with a small number of training samples and a noisy feature space because of its distributive feature model. The robustness of the results, backed by experimental verification, shows that non-iterative classifiers such as RF are the optimum choice in the area of automatic fault diagnosis of induction motors.
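
    As an illustration of the non-iterative ensemble idea, a random forest classifier can be trained on MCSA-derived features in a few lines. The feature matrix and labels below are synthetic placeholders, and scikit-learn is used here purely for convenience; the paper's own implementation is not described in the abstract.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        # Synthetic placeholders standing in for MCSA-derived features and fault classes;
        # with random data the accuracy itself is meaningless, the point is the workflow.
        X = np.random.rand(600, 12)                  # e.g. spectral features per sample
        y = np.random.randint(0, 4, size=600)        # e.g. healthy / bearing / stator / rotor
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print(accuracy_score(y_te, clf.predict(X_te)))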

  5. Investigation of the hydrodynamic behavior of diatom aggregates using particle image velocimetry.

    Science.gov (United States)

    Xiao, Feng; Li, Xiaoyan; Lam, Kitming; Wang, Dongsheng

    2012-01-01

    The hydrodynamic behavior of diatom aggregates has a significant influence on the interactions and flocculation kinetics of algae. However, characterization of the hydrodynamics of diatoms and diatom aggregates in water is rather difficult. In this laboratory study, an advanced visualization technique in particle image velocimetry (PIV) was employed to investigate the hydrodynamic properties of settling diatom aggregates. The experiments were conducted in a settling column filled with a suspension of fluorescent polymeric beads as seed tracers. A laser light sheet was generated by the PIV setup to illuminate a thin vertical planar region in the settling column, while the motions of particles were recorded by a high speed charge-coupled device (CCD) camera. This technique was able to capture the trajectories of the tracers when a diatom aggregate settled through the tracer suspension. The PIV results indicated directly the curvilinear feature of the streamlines around diatom aggregates. The rectilinear collision model largely overestimated the collision areas of the settling particles. Algae aggregates appeared to be highly porous and fractal, which allowed streamlines to penetrate into the aggregate interior. The diatom aggregates have a fluid collection efficiency of 10%-40%. The permeable feature of aggregates can significantly enhance the collisions and flocculation between the aggregates and other small particles including algal cells in water.

  6. Monitoring the Aggregation of Dansyl Chloride in Acetone through Fluorescence Measurements

    Institute of Scientific and Technical Information of China (English)

    FANG,Yu; YIN,Yi-Qing; et al.

    2002-01-01

    The aggregation of dansyl chloride (DNS-Cl) in acetone has been studied in detail by steady-state fluorescence techniques. It has been demonstrated that DNS-Cl is stable in acetone during purification and aggregation study processes. The aggregates are not solvolyzed in acetone, and do not take part in any chemical reactions either. It has been found that DNS-Cl tends to aggregate even when its concentration is much lower than its solubility in acetone. The aggregation is reversible, and both the aggregation and the deaggregation are very slow processes. Introduction of SDS has a positive effect upon the formation and stabilization of the aggregates.

  7. Monitoring the Aggregation of Dansyl Chloride in Acetone through Fluorescence Measurements

    Institute of Scientific and Technical Information of China (English)

    FANG,Yu(房喻); YIN,Yi-Qing(尹艺青); HU,Dao-Dao(胡道道); GAO,Gai-Ling(高改玲)

    2002-01-01

    The aggregation of dansyl chloride (DNS-Cl) in acetone has been studied in detail by steady-state fluorescence techniques. It has been demonstrated that DNS-Cl is stable in acetone during purification and aggregation study processes. The aggregates are not solvolyzed in acetone, and do not take part in any chemical reactions either. It has been found that DNS-Cl tends to aggregate even when its concentration is much lower than its solubility in acetone. The aggregation is reversible, and both the aggregation and the deaggregation are very slow processes.Introduction of SDS has a positive effect upon the formation and stabilization of the aggregates.

  8. Structural analysis of the ITER Vacuum Vessel regarding 2012 ITER Project-Level Loads

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, J.-M., E-mail: jean-marc.martinez@live.fr [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul lez Durance (France); Jun, C.H.; Portafaix, C.; Choi, C.-H.; Ioki, K.; Sannazzaro, G.; Sborchia, C. [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul lez Durance (France); Cambazar, M.; Corti, Ph.; Pinori, K.; Sfarni, S.; Tailhardat, O. [Assystem EOS, 117 rue Jacquard, L' Atrium, 84120 Pertuis (France); Borrelly, S. [Sogeti High Tech, RE2, 180 rue René Descartes, Le Millenium – Bat C, 13857 Aix en Provence (France); Albin, V.; Pelletier, N. [SOM Calcul – Groupe ORTEC, 121 ancien Chemin de Cassis – Immeuble Grand Pré, 13009 Marseille (France)

    2014-10-15

    Highlights: • ITER Vacuum Vessel is a part of the first barrier to confine the plasma. • ITER Vacuum Vessel as Nuclear Pressure Equipment (NPE) necessitates a third party organization authorized by the French nuclear regulator to assure design, fabrication, conformance testing and quality assurance, i.e. Agreed Notified Body (ANB). • A revision of the ITER Project-Level Load Specification was implemented in April 2012. • ITER Vacuum Vessel Loads (seismic, pressure, thermal and electromagnetic loads) were summarized. • ITER Vacuum Vessel Structural Margins with regards to RCC-MR code were summarized. - Abstract: A revision of the ITER Project-Level Load Specification (to be used for all systems of the ITER machine) was implemented in April 2012. This revision supports ITER's licensing by accommodating requests from the French regulator to maintain consistency with the plasma physics database and our present understanding of plasma transients and electro-magnetic (EM) loads, to investigate the possibility of removing unnecessary conservatism in the load requirements and to review the list and definition of incidental cases. The purpose of this paper is to present the impact of this 2012 revision of the ITER Project-Level Load Specification (LS) on the ITER Vacuum Vessel (VV) loads and the main structural margins required by the applicable French code, RCC-MR.

  9. ITER test programme

    International Nuclear Information System (INIS)

    Abdou, M.; Baker, C.; Casini, G.

    1991-01-01

    ITER has been designed to operate in two phases. The first phase which lasts for 6 years, is devoted to machine checkout and physics testing. The second phase lasts for 8 years and is devoted primarily to technology testing. This report describes the technology test program development for ITER, the ancillary equipment outside the torus necessary to support the test modules, the international collaboration aspects of conducting the test program on ITER, the requirements on the machine major parameters and the R and D program required to develop the test modules for testing in ITER. 15 refs, figs and tabs

  10. Linear multifrequency-grey acceleration recast for preconditioned Krylov iterations

    International Nuclear Information System (INIS)

    Morel, Jim E.; Brian Yang, T.-Y.; Warsa, James S.

    2007-01-01

    The linear multifrequency-grey acceleration (LMFGA) technique is used to accelerate the iterative convergence of multigroup thermal radiation diffusion calculations in high energy density simulations. Although it is effective and efficient in one-dimensional calculations, the LMFGA method has recently been observed to significantly degrade under certain conditions in multidimensional calculations with large discontinuities in material properties. To address this deficiency, we recast the LMFGA method in terms of a preconditioned system that is solved with a Krylov method (LMFGK). Results are presented demonstrating that the new LMFGK method always requires fewer iterations than the original LMFGA method. The reduction in iteration count increases with both the size of the time step and the inhomogeneity of the problem. However, for reasons later explained, the LMFGK method can cost more per iteration than the LMFGA method, resulting in lower but comparable efficiency in problems with small time steps and weak inhomogeneities. In problems with large time steps and strong inhomogeneities, the LMFGK method is significantly more efficient than the LMFGA method
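
    The recasting described above amounts to handing the (preconditioned) system to a Krylov solver. The generic SciPy sketch below shows the pattern with a simple diagonal (Jacobi) preconditioner inside GMRES; the operators are placeholders, not the multifrequency-grey diffusion system itself.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import gmres

        # Placeholder diagonally dominant operator standing in for the grey diffusion system.
        n = 100
        A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)
        M = diags(1.0 / A.diagonal())                # simple Jacobi preconditioner
        x, info = gmres(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))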

  11. ITER-FEAT outline design report

    International Nuclear Information System (INIS)

    2001-01-01

    In July 1998 the ITER Parties were unable, for financial reasons, to proceed with construction of the ITER design proposed at that time, to meet the detailed technical objectives and target cost set in 1992. It was therefore decided to investigate options for the design of ITER with reduced technical objectives and with possibly decreased technical margins, whose target construction cost was one half that of the 1998 ITER design, while maintaining the overall programmatic objective. To identify designs that might meet the revised objectives, task forces involving the JCT and Home Teams met during 1998 and 1999 to analyse and compare a range of options for the design of such a device. This led at the end of 1999 to a single configuration for the ITER design with parameters considered to be the most credible consistent with technical limitations and the financial target, yet meeting fully the objectives with appropriate margins. This new design of ITER, called ''ITER-FEAT'', was submitted to the ITER Director to the ITER Parties as the ''ITER-FEAT Outline Design Report'' (ODR) in January 2000, at their meeting in Tokyo. The Parties subsequently conducted their domestic assessments of this report and fed the resulting comments back into the progressing design. The progress on the developing design was reported to the ITER Technical Advisory Committee (TAC) in June 2000 in the report ''Progress in Resolving Open Design Issues from the ODR'' alongside a report on Progress in Technology R and D for ITER. In addition, the progress in the ITER-FEAT Design and Validating R and D was reported to the ITER Parties. The ITER-FEAT design was subsequently approved by the governing body of ITER in Moscow in June 2000 as the basis for the preparation of the Final Design Report, recognising it as a single mature design for ITER consistent with its revised objectives. This volume contains the documents pertinent to the process described above. More detailed technical information

  12. European technology activities to prepare for ITER component procurement

    International Nuclear Information System (INIS)

    Gasparotto, M.

    2006-01-01

    phase. In particular, the manufacturing feasibility of the superconducting strand for the toroidal and poloidal field coils and of the toroidal field coils has been demonstrated. Studies of manufacturing techniques for the vacuum vessel, blanket modules and divertor are also in progress, and a number of EU industries have been prepared to successfully participate in the ITER construction. The paper will report on the main R and D activities performed in the EU and the major achievements in preparation for ITER component procurement. (author)

  13. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on the phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method has also been discussed, together with the convergence characteristics of the latter algorithm. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles of about 300 microns in diameter.
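
    A generic maximum-likelihood expectation-maximization (MLEM) iteration for a non-negative linear model y ≈ A x has the following multiplicative form; the system matrix and data below are placeholders, not the analyzer-based imaging model of the paper.

        import numpy as np

        # Generic MLEM multiplicative update for a non-negative linear model y ≈ A x.
        def mlem(A, y, iterations=50):
            x = np.ones(A.shape[1])
            sensitivity = A.T @ np.ones(A.shape[0])
            sensitivity[sensitivity == 0] = 1e-12    # guard against empty columns
            for _ in range(iterations):
                forward = A @ x
                forward[forward == 0] = 1e-12        # guard against division by zero
                x *= (A.T @ (y / forward)) / sensitivity
            return x

        # Tiny synthetic check: approach a random non-negative object from noiseless data.
        A = np.random.rand(40, 20)
        x_true = np.random.rand(20)
        print(np.round(mlem(A, A @ x_true, iterations=200)[:3], 3), np.round(x_true[:3], 3))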

  14. Development of ITER relevant laser techniques for deposited layer characterisation and tritium inventory

    NARCIS (Netherlands)

    Malaquias, A.; Philipps, V.; Huber, A.; Hakola, A.; Likonen, J.; Kolehmainen, J.; Tervakangas, S.; Aints, M.; Paris, P.; Laan, M.; Lissovski, A.; Almaviva, S.; Caneve, L.; Colao, F.; Maddaluno, G.; Kubkowska, M.; Gasior, P.; van der Meiden, H. J.; Lof, A. R.; van Emmichoven, P. A. Zeijlma; Petersson, P.; Rubel, M.; Fortuna, E.; Xiao, Q.

    2013-01-01

    Laser Induced Breakdown Spectroscopy (LIBS) is a potential candidate to monitor the layer composition and fuel retention during and after plasma shots on specific locations of the main chamber and divertor of ITER. This method is being investigated in a cooperative research programme on plasma

  15. The ITER divertor cassette project

    International Nuclear Information System (INIS)

    Ulrickson, M.; Tivey, R.; Akiba, M.

    2001-01-01

    The divertor ''Large Project'' was conceived with the aim of demonstrating the feasibility of meeting the lifetime requirements by employing the candidate armor materials of beryllium, tungsten (W) and carbon-fiber-composite (CFC). At the start, there existed only limited experience with constructing water-cooled high heat flux armored components for tokamaks. To this was added the complication posed by the need to use a silver-free joining technique that avoids the transmutation of n-irradiated silver to cadmium. The research project involving the four Home Teams (HTs) has focused on the design, development, manufacture and testing of full-scale Plasma Facing Components (PFCs) suitable for ITER. The task addressed all the issues facing ITER divertor design, such as providing adequate armor erosion lifetime, meeting the required armor-heat sink joint lifetime and heat sink fatigue life, sustaining thermal-hydraulic and electromechanical loads, and seeking to identify the most cost-effective manufacturing options. This paper will report the results of the divertor large project. (author)

  16. The ITER divertor cassette project

    International Nuclear Information System (INIS)

    Ulrickson, M.; Tivey, R.; Akiba, M.

    1999-01-01

    The divertor 'Large Project' was conceived with the aim of demonstrating the feasibility of meeting the lifetime requirements by employing the candidate armor materials of beryllium, tungsten (W) and carbon-fiber-composite (CFC). At the start, there existed only limited experience with constructing water-cooled high heat flux armored components for tokamaks. To this was added the complication posed by the need to use a silver-free joining technique that avoids the transmutation of n-irradiated silver to cadmium. The research project involving the four Home Teams (HTs) has focused on the design, development, manufacture and testing of full-scale Plasma Facing Components (PFCs) suitable for ITER. The task addressed all the issues facing ITER divertor design, such as providing adequate armor erosion lifetime, meeting the required armor-heat sink joint lifetime and heat sink fatigue life, sustaining thermal-hydraulic and electromechanical loads, and seeking to identify the most cost-effective manufacturing options. This paper will report the results of the divertor large project. (author)

  17. ITER council proceedings: 1995

    International Nuclear Information System (INIS)

    1996-01-01

    Records of the 8. ITER Council Meeting (IC-8), held on 26-27 July 1995, in San Diego, USA, and the 9. ITER Council Meeting (IC-9) held on 12-13 December 1995, in Garching, Germany, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA) and the ITER Interim Design Report Package and Relevant Documents. Figs, tabs

  18. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique.

    Science.gov (United States)

    Kwon, Heejin; Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-10-01

    To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. 27 consecutive patients (mean body mass index: 23.55 kg m(-2)) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index of 21.45 with automatic tube current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability of the 50% ASIR-V scores at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that of all reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality with lower noise and artefacts as well as good sharpness when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. This study represents the first clinical research experiment to use ASIR-V, the newest version of

  19. Integration of remote refurbishment performed on ITER components

    Energy Technology Data Exchange (ETDEWEB)

    Dammann, A., E-mail: alexis.dammann@iter.org [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Antola, L. [AMEC, 31 Parc du Golf, CS 90519, 13596 Aix en Provence (France); Beaudoin, V. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Dremel, C. [Westinghouse, Electrique France/Astare, 122 Avenue de Hambourg, 13008 Marseille (France); Evrard, D. [SOGETI High Tech, 180 Rue René Descartes, 13851 Aix en Provence (France); Friconneau, J.P. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Lemée, A. [SOGETI High Tech, 180 Rue René Descartes, 13851 Aix en Provence (France); Levesy, B.; Pitcher, C.S. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France)

    2015-10-15

    Highlights: • System engineering approach to consolidate requirements to modify the layout of the Hot Cell. • Illustration of the loop between requirement and design. • Verification process. - Abstract: Internal components of the ITER Tokamak are replaced and transferred to the Hot Cell by remote handling equipment. These components include port plugs, cryopumps, divertor cassettes, blanket modules, etc. They are brought to the refurbishment area of the ITER Hot Cell Building for cleaning and maintenance, using remote handling techniques. The ITER refurbishment area will be unique in the world, when considering combination of size, quantity of complex component to refurbish in presence of radiation, activated dust and tritium. The refurbishment process to integrate covers a number of workstations to perform specific remote operations fully covered by a mast on crane system. This paper describes the integration of the Refurbishment Area, explaining the functions, the methodology followed, some illustrations of trade-off and safety improvements.

  20. Economic impacts of climate change: Methods of estimating impacts at an aggregate level using Hordaland as an illustration

    International Nuclear Information System (INIS)

    Aaheim, Asbjoern

    2003-01-01

    This report discusses methods for calculating economic impacts of climate change, and uses Hordaland county in Norway as an illustrative example. The calculations are based on estimated climate changes from the RegClim project. This study draws from knowledge of the relationship between economic activity and climate at a disaggregate level and calculates changes in production of and demand for goods and services within aggregate sectors, which are specified in the county budget for Hordaland. Total impacts for the county thus are expressed through known values from the national budget, such as the county's ''national product'', total consumption, and investments. The estimates of impacts of climate changes at a disaggregate level in Hordaland are quantified only to small degree. The calculations made in this report can thus only be considered appropriate for illustrating methods and interpretations. In terms of relative economic significance for the county, however, it is likely that the hydropower sector will be the most affected. Increased precipitation will result in greater production potential, but profitability will largely depend on projected energy prices and investment costs associated with expansion. Agriculture and forestry will increase their production potential, but they are relatively small sectors in the county. Compared with the uncertainty about how climate change will affect production, however, the uncertainty about changes in demand is far greater. The demand for personal transportation and construction in particular can have significant consequences for the county's economy. (author)

  1. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    Science.gov (United States)

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements was analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95% confidence interval limits being within the range of ±1.15 mm. A nearly 97.5% reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.

  2. Improvement of brain perfusion SPET using iterative reconstruction with scatter and non-uniform attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Kauppinen, T.; Vanninen, E.; Kuikka, J.T. [Kuopio Central Hospital (Finland). Dept. of Clinical Physiology; Koskinen, M.O. [Dept. of Clinical Physiology and Nuclear Medicine, Tampere Univ. Hospital, Tampere (Finland); Alenius, S. [Signal Processing Lab., Tampere Univ. of Technology, Tampere (Finland)

    2000-09-01

    Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied a Bayesian one-step-late correction method utilising the median root prior (MRP) to the OS-EM algorithm. Methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and iterative OS-EM and MRP techniques including scatter and non-uniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast, whereas with the MRP technique the improvement in contrast was not so evident with post-filtering. (orig.)
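    As an illustration of the reconstruction strategy described above (ordered-subsets acceleration of ML-EM combined with a one-step-late median root prior), the Python sketch below implements one common formulation for a generic linear projection model; the subset split, the size-3 median window and the Bayes weight beta are illustrative assumptions, not the parameters used in the study.

        import numpy as np
        from scipy.ndimage import median_filter

        def osem_mrp(A, y, image_shape, n_subsets=4, n_iter=8, beta=0.3, eps=1e-12):
            # Ordered-subsets EM: cycle over subsets of projections, applying the EM
            # update to each subset; then apply the one-step-late median root prior,
            # which pulls each pixel towards its local median.
            m, n = A.shape
            x = np.ones(n)
            subsets = np.array_split(np.arange(m), n_subsets)
            for _ in range(n_iter):
                for idx in subsets:
                    As, ys = A[idx], y[idx]
                    proj = As @ x + eps
                    em = x * (As.T @ (ys / proj)) / (As.T @ np.ones(len(idx)) + eps)
                    med = median_filter(em.reshape(image_shape), size=3).ravel() + eps
                    x = em / (1.0 + beta * (em - med) / med)   # one-step-late MRP step
            return x.reshape(image_shape)

        # toy usage on a 4x4 image with a random system matrix (hypothetical data)
        rng = np.random.default_rng(1)
        A = rng.random((64, 16))
        x_true = rng.random(16)
        print(osem_mrp(A, A @ x_true, (4, 4)).round(2))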

  3. Improvement of brain perfusion SPET using iterative reconstruction with scatter and non-uniform attenuation correction

    International Nuclear Information System (INIS)

    Kauppinen, T.; Vanninen, E.; Kuikka, J.T.; Alenius, S.

    2000-01-01

    Filtered back-projection (FBP) is generally used as the reconstruction method for single-photon emission tomography although it produces noisy images with apparent streak artefacts. It is possible to improve the image quality by using an algorithm with iterative correction steps. The iterative reconstruction technique also has an additional benefit in that computation of attenuation correction can be included in the process. A commonly used iterative method, maximum-likelihood expectation maximisation (ML-EM), can be accelerated using ordered subsets (OS-EM). We have applied a Bayesian one-step-late correction method utilising the median root prior (MRP) to the OS-EM algorithm. Methodological comparison was performed by means of measurements obtained with a brain perfusion phantom and using patient data. The aim of this work was to quantitate the accuracy of iterative reconstruction with scatter and non-uniform attenuation corrections and post-filtering in SPET brain perfusion imaging. SPET imaging was performed using a triple-head gamma camera with fan-beam collimators. Transmission and emission scans were acquired simultaneously. The brain phantom used was a high-resolution three-dimensional anthropomorphic JB003 phantom. Patient studies were performed in ten chronic pain syndrome patients. The images were reconstructed using conventional FBP and iterative OS-EM and MRP techniques including scatter and non-uniform attenuation corrections. Iterative reconstructions were individually post-filtered. The quantitative results obtained with the brain perfusion phantom were compared with the known actual contrast ratios. The calculated difference from the true values was largest with the FBP method; iteratively reconstructed images proved closer to reality. Similar findings were obtained in the patient studies. The plain OS-EM method improved the contrast, whereas with the MRP technique the improvement in contrast was not so evident with post-filtering. (orig.)

  4. Spatial accuracy of a simplified disaggregation method for traffic emissions applied in seven mid-sized Chilean cities

    Science.gov (United States)

    Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans

    The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values >0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, which resulted in lower correlation values. Nevertheless, the simplified method can still be useful in such a situation to get an overview of the spatial distribution of the emissions generated by traffic activities.
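    The simplified top-down method evaluated here amounts to distributing a city-wide emission total over grid cells in proportion to their street density; the short Python sketch below illustrates that allocation with hypothetical numbers (it is not the authors' implementation, and a real inventory would weight street categories differently).

        import numpy as np

        def disaggregate_by_street_density(total_emission, street_length_km):
            # Allocate the city total to each grid cell according to its share of
            # the total street length, used as a proxy for traffic activity.
            w = np.asarray(street_length_km, dtype=float)
            return total_emission * w / w.sum()

        street_km = np.array([[0.5, 2.0],
                              [4.0, 1.5]])           # km of streets per grid cell (toy values)
        cell_emissions = disaggregate_by_street_density(1000.0, street_km)
        print(cell_emissions, cell_emissions.sum())   # the allocation preserves the total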

  5. Iterative normalization technique for reference sequence generation for zero-tail discrete fourier transform spread orthogonal frequency division multiplexing

    DEFF Research Database (Denmark)

    2017-01-01

    , and performing an iterative manipulation of the input sequence. The performing of the iterative manipulation of the input sequence may include, for example: computing frequency domain response of the sequence, normalizing elements of the computed frequency domain sequence to unitary power while maintaining phase...
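    The abstract above is truncated, but the steps it lists (computing the frequency-domain response and normalizing its elements to unitary power while keeping the phase) read like an alternating-projection loop; the Python sketch below shows such a loop under that assumption, with the zero-tail length and iteration count chosen arbitrarily for illustration.

        import numpy as np

        def iterative_reference_sequence(x0, zero_tail=8, n_iter=200):
            # Alternate between (a) forcing a flat, unit-power frequency response
            # while keeping the phase and (b) re-imposing the zero tail in time.
            x = x0.astype(complex).copy()
            for _ in range(n_iter):
                X = np.fft.fft(x)
                X = np.exp(1j * np.angle(X))     # unitary power per frequency bin, phase kept
                x = np.fft.ifft(X)
                x[-zero_tail:] = 0.0             # zero-tail constraint in the time domain
            return x

        seq = iterative_reference_sequence(np.random.default_rng(2).standard_normal(64))
        print(np.abs(np.fft.fft(seq)).round(2)[:6])   # spectrum is close to flat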

  6. ITER CTA newsletter. No. 3

    International Nuclear Information System (INIS)

    2001-11-01

    This ITER CTA newsletter comprises reports by Dr. P. Barnard, ITER Canada Chairman and CEO, about the progress of the first formal ITER negotiations and about the demonstration of details of Canada's bid at ITER workshops, and by Dr. V. Vlasenkov, Project Board Secretary, about the meeting of the ITER CTA project board

  7. ITER council proceedings: 1997

    International Nuclear Information System (INIS)

    1997-01-01

    This volume of the ITER EDA Documentation Series presents records of the 12th ITER Council Meeting, IC-12, which took place on 23-24 July, 1997 in Tampere, Finland. The Council received from the Parties (EU, Japan, Russia, US) positive responses on the Detailed Design Report. The Parties stated their willingness to fulfil their obligations in contributing to the ITER EDA. The summary discussions among the Parties led to the consensus that from July 1998 the ITER activities should proceed for an additional three years, with a general intent to enable an efficient start of possible future ITER construction

  8. Iterative and iterative-noniterative integral solutions in 3-loop massive QCD calculations

    International Nuclear Information System (INIS)

    Ablinger, J.; Radu, C.S.; Schneider, C.; Behring, A.; Imamoglu, E.; Van Hoeij, M.; Von Manteuffel, A.; Raab, C.G.

    2017-11-01

    Various single scale quantities in massless and massive QCD up to 3-loop order can be expressed by iterative integrals over certain classes of alphabets, from the harmonic polylogarithms to root-valued alphabets. Examples are the anomalous dimensions to 3-loop order, the massless Wilson coefficients and also different massive operator matrix elements. Starting at 3-loop order, however, also other letters appear in the case of massive operator matrix elements, the so-called iterative non-iterative integrals, which are related to solutions based on complete elliptic integrals or any other special function with an integral representation that is definite but not a Volterra-type integral. After outlining the formalism leading to iterative non-iterative integrals, we present examples for both of these cases with the 3-loop anomalous dimension γ_qg^(2) and the structure of the principal solution in the iterative non-iterative case of the 3-loop QCD corrections to the ρ-parameter.

  9. Iterative and iterative-noniterative integral solutions in 3-loop massive QCD calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, J.; Radu, C.S.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC); Behring, A. [RWTH Aachen Univ. (Germany). Inst. fuer Theoretische Teilchenphysik und Kosmologie; Bluemlein, J.; Freitas, A. de [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Imamoglu, E.; Van Hoeij, M. [Florida State Univ., Tallahassee, FL (United States). Dept. of Mathematics; Von Manteuffel, A. [Michigan State Univ., East Lansing, MI (United States). Dept. of Physics and Astronomy; Raab, C.G. [Johannes Kepler Univ., Linz (Austria). Inst. for Algebra

    2017-11-15

    Various single scale quantities in massless and massive QCD up to 3-loop order can be expressed by iterative integrals over certain classes of alphabets, from the harmonic polylogarithms to root-valued alphabets. Examples are the anomalous dimensions to 3-loop order, the massless Wilson coefficients and also different massive operator matrix elements. Starting at 3-loop order, however, also other letters appear in the case of massive operator matrix elements, the so-called iterative non-iterative integrals, which are related to solutions based on complete elliptic integrals or any other special function with an integral representation that is definite but not a Volterra-type integral. After outlining the formalism leading to iterative non-iterative integrals, we present examples for both of these cases with the 3-loop anomalous dimension γ_qg^(2) and the structure of the principal solution in the iterative non-iterative case of the 3-loop QCD corrections to the ρ-parameter.

  10. Elliptic Preconditioner for Accelerating the Self-Consistent Field Iteration in Kohn--Sham Density Functional Theory

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Lin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division

    2013-10-28

    We discuss techniques for accelerating the self-consistent field (SCF) iteration for solving the Kohn-Sham equations. These techniques are all based on constructing approximations to the inverse of the Jacobian associated with a fixed point map satisfied by the total potential. They can be viewed as preconditioners for a fixed point iteration. We point out different requirements for constructing preconditioners for insulating and metallic systems respectively, and discuss how to construct preconditioners to keep the convergence rate of the fixed point iteration independent of the size of the atomistic system. We propose a new preconditioner that can treat insulating and metallic systems in a unified way. The new preconditioner, which we call an elliptic preconditioner, is constructed by solving an elliptic partial differential equation. The elliptic preconditioner is shown to be more effective in accelerating the convergence of a fixed point iteration than the existing approaches for large inhomogeneous systems at low temperature.
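    The paper itself constructs the elliptic preconditioner by solving a partial differential equation; as a much simpler stand-in, the Python sketch below shows the general structure of a preconditioned SCF-like fixed-point iteration, using a Kerker-type spectral filter (the solution of a screened-Poisson problem) to damp long-wavelength components of the residual. The toy map F, the mixing parameter and the screening length are assumptions for illustration only.

        import numpy as np

        def preconditioned_fixed_point(F, v0, n_iter=300, alpha=0.6, k0=0.5, L=10.0, tol=1e-10):
            # v <- v + alpha * P (F(v) - v), with the Kerker-type filter
            # P(k) = k^2 / (k^2 + k0^2) applied in Fourier space; this is the spectral
            # solution of a screened-Poisson (elliptic) problem and damps the
            # long-wavelength components of the residual ("charge sloshing").
            n = v0.size
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
            P = k**2 / (k**2 + k0**2)
            P[0] = 1.0          # mix the constant mode without damping (simplification)
            v = v0.astype(float).copy()
            for _ in range(n_iter):
                r = F(v) - v                                    # fixed-point residual
                if np.linalg.norm(r) < tol:
                    break
                v = v + alpha * np.fft.ifft(P * np.fft.fft(r)).real
            return v

        F = lambda v: np.tanh(np.roll(v, 1) + 0.1 * v)          # toy nonlinear map (hypothetical)
        v = preconditioned_fixed_point(F, np.random.default_rng(4).standard_normal(128))
        print(np.linalg.norm(F(v) - v))                          # residual norm after mixing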

  11. Physics research needs for ITER

    International Nuclear Information System (INIS)

    Sauthoff, N.R.

    1995-01-01

    Design of ITER entails the application of physics design tools that have been validated against the world-wide data base of fusion research. In many cases, these tools do not yet exist and must be developed as part of the ITER physics program. ITER's considerable increases in power and size demand significant extrapolations from the current data base; in several cases, new physical effects are projected to dominate the behavior of the ITER plasma. This paper focuses on those design tools and data that have been identified by the ITER team and are not yet available; these needs serve as the basis for the ITER Physics Research Needs, which have been developed jointly by the ITER Physics Expert Groups and the ITER design team. Development of the tools and the supporting data base is an on-going activity that constitutes a significant opportunity for contributions to the ITER program by fusion research programs world-wide

  12. Small heat shock proteins protect against α-synuclein-induced toxicity and aggregation

    International Nuclear Information System (INIS)

    Outeiro, Tiago Fleming; Klucken, Jochen; Strathearn, Katherine E.; Liu Fang; Nguyen, Paul; Rochet, Jean-Christophe; Hyman, Bradley T.; McLean, Pamela J.

    2006-01-01

    Protein misfolding and inclusion formation are common events in neurodegenerative diseases, such as Parkinson's disease (PD), Alzheimer's disease (AD) or Huntington's disease (HD). α-Synuclein (aSyn) is the main protein component of inclusions called Lewy bodies (LB) which are pathognomic of PD, Dementia with Lewy bodies (DLB), and other diseases collectively known as LB diseases. Heat shock proteins (HSPs) are one class of the cellular quality control system that mediate protein folding, remodeling, and even disaggregation. Here, we investigated the role of the small heat shock proteins Hsp27 and αB-crystallin, in LB diseases. We demonstrate, via quantitative PCR, that Hsp27 messenger RNA levels are ∼2-3-fold higher in DLB cases compared to control. We also show a corresponding increase in Hsp27 protein levels. Furthermore, we found that Hsp27 reduces aSyn-induced toxicity by ∼80% in a culture model while αB-crystallin reduces toxicity by ∼20%. In addition, intracellular inclusions were immunopositive for endogenous Hsp27, and overexpression of this protein reduced aSyn aggregation in a cell culture model

  13. Infrared laser diagnostics for ITER

    International Nuclear Information System (INIS)

    Hutchinson, D.P.; Richards, R.K.; Ma, C.H.

    1995-01-01

    Two infrared laser-based diagnostics are under development at ORNL for measurements on burning plasmas such as ITER. The primary effort is the development of a CO2 laser Thomson scattering diagnostic for the measurement of the velocity distribution of confined fusion-product alpha particles. Key components of the system include a high-power, single-mode CO2 pulsed laser, an efficient optics system for beam transport and a multichannel low-noise infrared heterodyne receiver. A successful proof-of-principle experiment has been performed on the Advanced Toroidal Facility (ATF) stellarator at ORNL utilizing scattering from electron plasma frequency satellites. The diagnostic system is currently being installed on Alcator C-Mod at MIT for measurements of the fast ion tail produced by ICRH heating. A second diagnostic under development at ORNL is an infrared polarimeter for Faraday rotation measurements in future fusion experiments. A preliminary feasibility study of a CO2 laser tangential viewing polarimeter for measuring electron density profiles in ITER has been completed. For ITER plasma parameters and a polarimeter wavelength of 10.6 μm, a Faraday rotation of up to 26 degrees is predicted. An electro-optic polarization modulation technique has been developed at ORNL. Laboratory tests of this polarimeter demonstrated a sensitivity of ≤0.01 degree. Because of the similarity in the expected Faraday rotation in ITER and Alcator C-Mod, a collaboration between ORNL and the MIT Plasma Fusion Center has been undertaken to test this polarimeter system on Alcator C-Mod. A 10.6 μm polarimeter for this measurement has been constructed and integrated into the existing C-Mod multichannel two-color interferometer. With present experimental parameters for C-Mod, the predicted Faraday rotation was on the order of 0.1 degree. Significant output signals were observed during preliminary tests. Further experiments and detailed analyses are under way

  14. Destabilization of Human Insulin Fibrils by Peptides of Fruit Bromelain Derived From Ananas comosus (Pineapple).

    Science.gov (United States)

    Das, Sromona; Bhattacharyya, Debasish

    2017-12-01

    Deposition of insulin aggregates in the human body leads to dysfunction of several organs. The effectiveness of fruit bromelain from pineapple in preventing insulin aggregation was investigated. Proteolysis of bromelain was carried out as per the human digestive system, and the pool of small peptides was separated from larger peptides and proteins. Under conditions of growth of insulin aggregates from its monomers, this pool of peptides restricted the reaction up to the formation of oligomers of limited size. These peptides also destabilized preformed insulin aggregates to oligomers. These processes were followed fluorimetrically using Thioflavin T and 1-ANS, size-exclusion HPLC, dynamic light scattering, atomic force microscopy, and transmission electron microscopy. Sequences of insulin (A and B chains) and bromelain were aligned using Clustal W software to predict the most probable sites of interactions. Synthetic tripeptides corresponding to the hydrophobic interactive sites of bromelain showed disaggregation of insulin, suggesting specificity of interactions. The peptides GG and AAA, serving as negative controls, showed no potency in destabilization of aggregates. Disaggregation potency of the peptides was also observed when insulin was deposited on HepG2 liver cells, where no formation of toxic oligomers occurred. Amyloidogenic des-octapeptide (B23-B30 of insulin) incapable of cell signaling showed cytotoxicity similar to insulin. This toxicity could be neutralized by bromelain-derived peptides. FT-IR and far-UV circular dichroism analysis indicated that disaggregated insulin had a structure distinctly different from that of its hexameric (native) or monomeric states. Based on the stoichiometry of interaction and irreversibility of disaggregation, a mechanism of the peptide-insulin interactions has been proposed. J. Cell. Biochem. 118: 4881-4896, 2017. © 2017 Wiley Periodicals, Inc.

  15. Iterative calculation of reflected and transmitted acoustic waves at a rough interface

    NARCIS (Netherlands)

    Berkhoff, Arthur P.; van den Berg, P.M.; Thijssen, J.M.

    A rigorous iterative technique is described for calculating the acoustic wave reflection and transmission at an irregular interface between two different media. The method is based upon a plane-wave expansion technique in which the acoustic field equations and the radiation condition are satisfied

  16. Burst Transmission and Frame Aggregation for VANET Communications

    Directory of Open Access Journals (Sweden)

    Wei Kuang Lai

    2017-09-01

    In vehicular ad hoc networks (VANETs), due to highly mobile and frequently changing topology, available resources and transmission opportunities are restricted. To address this, we propose a burst transmission and frame aggregation (FAB) scheme to enhance transmission opportunity (TXOP) efficiency of IEEE 802.11p. Aggregation and TXOP techniques are useful for improving transmission performance. FAB aggregates frames in the relay node and utilizes the TXOP to transmit these frames to the next hop with a burst transmission. Simulation results show that the proposed FAB scheme can significantly improve the performance of inter-vehicle communications.

  17. Iterated Process Analysis over Lattice-Valued Regular Expressions

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    We present an iterated approach to statically analyze programs of two processes communicating by message passing. Our analysis operates over a domain of lattice-valued regular expressions, and computes increasingly better approximations of each process's communication behavior. Overall the work extends traditional semantics-based program analysis techniques to automatically reason about message passing in a manner that can simultaneously analyze both values of variables as well as message order, message content, and their interdependencies.

  18. A unified aggregation and relaxation approach for stress-constrained topology optimization

    DEFF Research Database (Denmark)

    Verbart, Alexander; Langelaar, Matthijs; Keulen, Fred van

    2017-01-01

    design-independent set of constraints. The next step is to perform constraint aggregation over the reformulated local constraints using a lower bound aggregation function. We demonstrate that this approach concurrently aggregates the constraints and relaxes the feasible domain, thereby making singular optima accessible. The main advantage is that no separate constraint relaxation techniques are necessary, which reduces the parameter dependence of the problem. Furthermore, there is a clear relationship between the original feasible domain and the perturbed feasible domain via this aggregation parameter.
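    A common choice for such a lower-bound aggregation function is the Kreisselmeier-Steinhauser form with the 1/N factor, which never exceeds the true maximum of the local constraint values; the short Python sketch below shows this function (the aggregation exponent P is an illustrative value and the constraint values are hypothetical, not necessarily the function used by the authors).

        import numpy as np

        def ks_lower_bound(g, P=20.0):
            # (1/P) * log( mean( exp(P * g_i) ) ) <= max_i g_i, so replacing all local
            # constraints g_i <= 0 by this single aggregate both aggregates them and
            # slightly relaxes (perturbs) the feasible domain.
            g = np.asarray(g, dtype=float)
            m = g.max()                                   # shift for numerical stability
            return m + np.log(np.mean(np.exp(P * (g - m)))) / P

        g = np.array([-0.40, -0.10, 0.05])                # hypothetical normalised stress constraints
        print(ks_lower_bound(g), g.max())                 # the aggregate stays below the true maximum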

  19. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    Science.gov (United States)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
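    As a rough illustration of the idea, the Python sketch below alternates a k-space data-consistency step with an element-wise p-thresholding (shrinkage) step; the shrinkage form, the sparsifying domain (the image itself rather than a wavelet transform) and all parameter values are simplifying assumptions, not the authors' implementation.

        import numpy as np

        def p_threshold(x, lam, p=0.5):
            # Generalised shrinkage for an l_p (p < 1) penalty: shrink each coefficient
            # magnitude by lam * |x|^(p-1) and keep its phase.
            mag = np.abs(x)
            shrunk = np.maximum(mag - lam * (mag + 1e-12) ** (p - 1.0), 0.0)
            return shrunk * np.exp(1j * np.angle(x))

        def cs_mri_pthresh(y, mask, lam=0.02, p=0.5, n_iter=50):
            # y: under-sampled k-space (zeros outside the mask); mask: sampled locations.
            x = np.fft.ifft2(y)                           # zero-filled starting image
            for _ in range(n_iter):
                k = np.fft.fft2(x)
                k[mask] = y[mask]                         # enforce the measured k-space samples
                x = p_threshold(np.fft.ifft2(k), lam, p)  # promote sparsity in the image domain
            return x

        # toy usage: a sparse 32x32 "image" sampled on a random k-space mask (hypothetical data)
        rng = np.random.default_rng(5)
        img = np.zeros((32, 32)); img[8, 8] = img[20, 12] = 1.0
        mask = rng.random((32, 32)) < 0.4
        y = np.fft.fft2(img) * mask
        print(np.abs(cs_mri_pthresh(y, mask)).max())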

  20. ITER radio frequency systems

    International Nuclear Information System (INIS)

    Bosia, G.

    1998-01-01

    Neutral Beam Injection and RF heating are two of the methods for heating and current drive in ITER. The three ITER RF systems, which have been developed during the EDA, offer several complementary services and are able to fulfil ITER operational requirements

  1. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    Science.gov (United States)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is possible and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated based on an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
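    The Python sketch below illustrates the censoring idea in one dimension with Gaussian clutter (the actual detector fits a G0 distribution to SAR clutter and works on image blocks via an integral image): cells declared as targets in one pass are excluded from the clutter statistics of the next pass, so strong returns no longer inflate the local threshold. All parameters and the toy signal are hypothetical.

        import numpy as np
        from scipy.stats import norm

        def iterative_censoring_cfar(x, guard=2, train=16, pfa=1e-3, n_iter=3):
            # 1-D cell-averaging CFAR with iterative censoring of detected cells.
            n = len(x)
            factor = norm.ppf(1.0 - pfa)                  # threshold multiplier for the desired Pfa
            detected = np.zeros(n, dtype=bool)
            for _ in range(n_iter):
                new_det = np.zeros(n, dtype=bool)
                for i in range(n):
                    lo, hi = max(0, i - guard - train), min(n, i + guard + train + 1)
                    idx = np.arange(lo, hi)
                    idx = idx[(np.abs(idx - i) > guard) & ~detected[idx]]   # censor prior detections
                    if idx.size < 4:
                        continue
                    mu, sigma = x[idx].mean(), x[idx].std() + 1e-12
                    new_det[i] = x[i] > mu + factor * sigma
                detected = new_det
            return detected

        rng = np.random.default_rng(6)
        clutter = rng.normal(1.0, 0.2, 300)
        clutter[[80, 81, 200]] += 4.0                     # three bright "ship" cells (toy targets)
        print(np.flatnonzero(iterative_censoring_cfar(clutter)))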

  2. Stimulation of Cysteine-Coated CdSe/ZnS Quantum Dot Luminescence by meso-Tetrakis (p-sulfonato-phenyl) Porphyrin

    Science.gov (United States)

    Parra, Gustavo G.; Ferreira, Lucimara P.; Gonçalves, Pablo J.; Sizova, Svetlana V.; Oleinikov, Vladimir A.; Morozov, Vladimir N.; Kuzmin, Vladimir A.; Borissevitch, Iouri E.

    2018-02-01

    Interaction between porphyrins and quantum dots (QD) via energy and/or charge transfer is usually accompanied by reduction of the QD luminescence intensity and lifetime. However, for CdSe/ZnS-Cys QD water solutions kept at 276 K for 3 months (aged QD), a significant increase in the luminescence intensity upon the addition of meso-tetrakis (p-sulfonato-phenyl) porphyrin (TPPS4) was observed in this study. Aggregation of QD during storage provokes a reduction in the quantum yield and lifetime of their luminescence. Using steady-state and time-resolved fluorescence techniques, we demonstrated that TPPS4 stimulated disaggregation of aged CdSe/ZnS-Cys QD in aqueous solutions, increasing the quantum yield of their luminescence, which finally reached that of the freshly prepared QD. Disaggregation takes place due to an increase in electrostatic repulsion between QD upon their binding with negatively charged porphyrin molecules. Binding of just four porphyrin molecules per single QD was sufficient for total QD disaggregation. The analysis of QD luminescence decay curves demonstrated that disaggregation more strongly affected the luminescence related to electron-hole annihilation in the QD shell. The obtained results demonstrate a way to repair aged QD by adding certain molecules or ions to the solutions, stimulating QD disaggregation and restoring their luminescence characteristics, which could be important for QD biomedical applications, such as bioimaging and fluorescence diagnostics. On the other hand, the disaggregation is important for QD applications in biology and medicine since it reduces the size of the particles, facilitating their internalization into living cells across the cell membrane.

  3. ITER Construction--Plant System Integration

    International Nuclear Information System (INIS)

    Tada, E.; Matsuda, S.

    2009-01-01

    This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in kind by the member countries, integrated project management should be scoped in advance of the real work. This includes design, procurement, system assembly, testing, licensing and commissioning of ITER.

  4. ITER definition phase

    International Nuclear Information System (INIS)

    1989-01-01

    The International Thermonuclear Experimental Reactor (ITER) is envisioned as a fusion device which would demonstrate the scientific and technological feasibility of fusion power. As a first step towards achieving this goal, the European Community, Japan, the Soviet Union, and the United States of America have entered into joint conceptual design activities under the auspices of the International Atomic Energy Agency. A brief summary of the Definition Phase of ITER activities is contained in this report. Included in this report are the background, objectives, organization, definition phase activities, and research and development plan of this endeavor in international scientific collaboration. A more extended technical summary is contained in the two-volume report, ''ITER Concept Definition,'' IAEA/ITER/DS/3. 2 figs, 2 tabs

  5. United States rejoin ITER

    International Nuclear Information System (INIS)

    Roberts, M.

    2003-01-01

    Upon pressure from the United States Congress, the US Department of Energy had to withdraw from further American participation in the ITER Engineering Design Activities after the end of its commitment to the EDA in July 1998. In the years since that time, changes have taken place in both the ITER activity and the US fusion community's position on burning plasma physics. Reflecting the interest in the United States in pursuing burning plasma physics, the DOE's Office of Science commissioned three studies as part of its examination of the option of entering the Negotiations on the Agreement on the Establishment of the International Fusion Energy Organization for the Joint Implementation of the ITER Project. These were a National Academy Review Panel Report supporting the burning plasma mission; a Fusion Energy Sciences Advisory Committee (FESAC) report confirming the role of ITER in achieving fusion power production, and The Lehman Review of the ITER project costing and project management processes (for the latter one, see ITER CTA Newsletter, no. 15, December 2002). All three studies have endorsed the US return to the ITER activities. This historical decision was announced by DOE Secretary Abraham during his remarks to employees of the Department's Princeton Plasma Physics Laboratory. The United States will be working with the other Participants in the ITER Negotiations on the Agreement and is preparing to participate in the ITA

  6. Preplaced aggregate concrete application on Fort St. Vrain PCRV construction

    International Nuclear Information System (INIS)

    Ople, F.S. Jr.

    1976-01-01

    Two distinct concreting methods were employed in the construction of the prestressed concrete reactor vessel (PCRV) of the Fort St. Vrain (FSV) Nuclear Generating Station, a 330 MW(e) High Temperature Gas-Cooled Reactor installation near Denver, Colorado. Preplaced aggregate concrete (PAC) techniques were employed in the PCRV bottom head and the core support floor; conventional job-mixed concrete was used in the PCRV sidewall and top head regions. This paper describes the successful application of PAC techniques utilized primarily in solving construction difficulties associated with confined and heavily congested regions of the PCRV. The PAC technique consists of placing coarse aggregate inside the forms, followed by injection of grout under pressure through embedded pipes to fill the interstices in the aggregate mass. Details of the PAC construction method including grout mix development, grouting equipment, grout pipe layout, grouting sequence, grout level monitoring, concrete temperature control, and pre-construction mockups are described. (author)

  7. Adaptive control in multi-threaded iterated integration

    International Nuclear Information System (INIS)

    Doncker, Elise de; Yuasa, Fukuko

    2013-01-01

    In recent years we have developed a technique for the direct computation of Feynman loop-integrals, which are notorious for the occurrence of integrand singularities. Especially for handling singularities in the interior of the domain, we approximate the iterated integral using an adaptive algorithm in the coordinate directions. We present a novel multi-core parallelization scheme for adaptive multivariate integration, by assigning threads to the rule evaluations in the outer dimensions of the iterated integral. The method ensures a large parallel granularity as each function evaluation by itself comprises an integral over the lower dimensions, while the application of the threads is governed by the adaptive control in the outer level. We give computational results for a test set of 3- to 6-dimensional integrals, where several problems exhibit a loop integral behavior.
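    A minimal Python sketch of the parallelization structure described above follows: each node of the outer rule spawns an adaptive 1-D integration over the inner variable, and these coarse-grained tasks are handed to a thread pool. The fixed Gauss-Legendre outer rule, the toy integrand and the worker count are illustrative simplifications; the paper applies adaptive control in the outer level as well.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor
        from scipy.integrate import quad

        def iterated_integral_threaded(f, ax, bx, ay, by, n_outer=64, workers=4):
            # Outer dimension: fixed Gauss-Legendre nodes; inner dimension: adaptive
            # quadrature. Each outer node is one parallel task of large granularity.
            nodes, weights = np.polynomial.legendre.leggauss(n_outer)
            xs = 0.5 * (bx - ax) * nodes + 0.5 * (bx + ax)
            inner = lambda x: quad(lambda y: f(x, y), ay, by, limit=200)[0]
            with ThreadPoolExecutor(max_workers=workers) as pool:
                inner_vals = list(pool.map(inner, xs))
            return 0.5 * (bx - ax) * float(np.dot(weights, inner_vals))

        # toy integrand with an integrable singularity inside the domain (hypothetical)
        f = lambda x, y: 1.0 / np.sqrt(abs(x - 0.3) + y + 1e-6)
        print(iterated_integral_threaded(f, 0.0, 1.0, 0.0, 1.0))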

  8. ITER towards the construction

    International Nuclear Information System (INIS)

    Shimomura, Y.

    2005-01-01

    The ITER Project has been significantly developed in the last few years in preparation for its construction. The ITER Participants' Negotiators have developed the Joint Implementation Agreement (JIA), ready for finalisation following selection of the construction site and nomination of the project's Director General. The ITER International Team and Participant Teams have continued technical and organisational preparations. Construction will be able to start immediately after the international ITER organisation is established, following signature of the JIA. The Project is strongly supported by the governments of the Participants as well as by the scientific community. The real negotiations, including siting and the final details of cost sharing, started in December 2003. The EU, with Cadarache, and Japan, with Rokkasho, have both promised large contributions to the project to strongly support their construction site proposals. Their wish to host ITER construction is too strong to allow convergence to a single site when considering the ITER device in isolation. A broader collaboration among the Parties is therefore being contemplated, covering complementary activities to help accelerate fusion development towards a viable power source, and to allow the Participants to reach a conclusion on ITER siting. This report reviews these preparations, and the status of negotiations

  9. Iterated interactions method. Realistic NN potential

    International Nuclear Information System (INIS)

    Gorbatov, A.M.; Skopich, V.L.; Kolganova, E.A.

    1991-01-01

    The method of iterated potential is tested in the case of realistic fermionic systems. As a basis for comparison, calculations of the ¹⁶O system (using various versions of realistic NN potentials) by means of the angular potential-function method, as well as pairing-correlation operators, were used. The convergence of the genealogical series is studied for the central Malfliet-Tjon potential. In addition, the mathematical technique of microscopic calculations is improved: new equations for correlators in odd states are suggested, and the technique of leading terms was applied for the first time to calculations of heavy p-shell nuclei in the basis of angular potential functions

  10. ITER-FEAT operation

    International Nuclear Information System (INIS)

    Shimomura, Y.; Huguet, M.; Mizoguchi, T.; Murakami, Y.; Polevoi, A.R.; Shimada, M.; Aymar, R.; Chuyanov, V.A.; Matsumoto, H.

    2001-01-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties. (author)

  11. POVERTY AND CALORIE DEPRIVATION ACROSS SOCIO-ECONOMIC GROUPS IN RURAL INDIA: A DISAGGREGATED ANALYSIS

    OpenAIRE

    Gupta, Abha; Mishra, Deepak K.

    2013-01-01

    This paper examines the linkages between calorie deprivation and poverty in rural India at a disaggregated level. It aims to explore the trends and pattern in levels of nutrient intake across social and economic groups. A spatial analysis at the state and NSS-region level unravels the spatial distribution of calorie deprivation in rural India. The gap between incidence of poverty and calorie deprivation has also been investigated. The paper also estimates the factors influencing calorie depri...

  12. ITER CTA newsletter. No. 2

    International Nuclear Information System (INIS)

    2001-10-01

    This ITER CTA newsletter contains results of the ITER toroidal field model coil project presented by ITER EU Home Team (Garching) and an article in commemoration of the late Dr. Charles Maisonnier, one of the former leaders of ITER who made significant contributions to its development

  13. Elements in nucleotide sensing and hydrolysis of the AAA+ disaggregation machine ClpB: a structure-based mechanistic dissection of a molecular motor

    Energy Technology Data Exchange (ETDEWEB)

    Zeymer, Cathleen, E-mail: cathleen.zeymer@mpimf-heidelberg.mpg.de; Barends, Thomas R. M.; Werbeck, Nicolas D.; Schlichting, Ilme; Reinstein, Jochen, E-mail: cathleen.zeymer@mpimf-heidelberg.mpg.de [Max Planck Institute for Medical Research, Jahnstrasse 29, 69120 Heidelberg (Germany)

    2014-02-01

    High-resolution crystal structures together with mutational analysis and transient kinetics experiments were utilized to understand nucleotide sensing and the regulation of the ATPase cycle in an AAA+ molecular motor. ATPases of the AAA+ superfamily are large oligomeric molecular machines that remodel their substrates by converting the energy from ATP hydrolysis into mechanical force. This study focuses on the molecular chaperone ClpB, the bacterial homologue of Hsp104, which reactivates aggregated proteins under cellular stress conditions. Based on high-resolution crystal structures in different nucleotide states, mutational analysis and nucleotide-binding kinetics experiments, the ATPase cycle of the C-terminal nucleotide-binding domain (NBD2), one of the motor subunits of this AAA+ disaggregation machine, is dissected mechanistically. The results provide insights into nucleotide sensing, explaining how the conserved sensor 2 motif contributes to the discrimination between ADP and ATP binding. Furthermore, the role of a conserved active-site arginine (Arg621), which controls binding of the essential Mg²⁺ ion, is described. Finally, a hypothesis is presented as to how the ATPase activity is regulated by a conformational switch that involves the essential Walker A lysine. In the proposed model, an unusual side-chain conformation of this highly conserved residue stabilizes a catalytically inactive state, thereby avoiding unnecessary ATP hydrolysis.

  14. Elements in nucleotide sensing and hydrolysis of the AAA+ disaggregation machine ClpB: a structure-based mechanistic dissection of a molecular motor

    International Nuclear Information System (INIS)

    Zeymer, Cathleen; Barends, Thomas R. M.; Werbeck, Nicolas D.; Schlichting, Ilme; Reinstein, Jochen

    2014-01-01

    High-resolution crystal structures together with mutational analysis and transient kinetics experiments were utilized to understand nucleotide sensing and the regulation of the ATPase cycle in an AAA+ molecular motor. ATPases of the AAA+ superfamily are large oligomeric molecular machines that remodel their substrates by converting the energy from ATP hydrolysis into mechanical force. This study focuses on the molecular chaperone ClpB, the bacterial homologue of Hsp104, which reactivates aggregated proteins under cellular stress conditions. Based on high-resolution crystal structures in different nucleotide states, mutational analysis and nucleotide-binding kinetics experiments, the ATPase cycle of the C-terminal nucleotide-binding domain (NBD2), one of the motor subunits of this AAA+ disaggregation machine, is dissected mechanistically. The results provide insights into nucleotide sensing, explaining how the conserved sensor 2 motif contributes to the discrimination between ADP and ATP binding. Furthermore, the role of a conserved active-site arginine (Arg621), which controls binding of the essential Mg 2+ ion, is described. Finally, a hypothesis is presented as to how the ATPase activity is regulated by a conformational switch that involves the essential Walker A lysine. In the proposed model, an unusual side-chain conformation of this highly conserved residue stabilizes a catalytically inactive state, thereby avoiding unnecessary ATP hydrolysis

  15. Simultaneous versus sequential pharmacokinetic-pharmacodynamic population analysis using an iterative two-stage Bayesian technique

    NARCIS (Netherlands)

    Proost, Johannes H.; Schiere, Sjouke; Eleveld, Douglas J.; Wierda, J. Mark K. H.

    A method for simultaneous pharmacokinetic-pharmacodynamic (PK-PD) population analysis using an Iterative Two-Stage Bayesian (ITSB) algorithm was developed. The method was evaluated using clinical data and Monte Carlo simulations. Data from a clinical study with rocuronium in nine anesthetized

  16. Fabrication of prototype mockups of ITER shielding blanket with separable first wall

    International Nuclear Information System (INIS)

    Kosaku, Yasuo; Kuroda, Toshimasa; Enoeda, Mikio; Hatano, Toshihisa; Sato, Satoshi; Akiba, Masato

    2002-07-01

    The design of the shielding blanket for ITER-FEAT applies a first wall which is structurally separable from the shield block, for the purpose of radioactive waste reduction in maintenance work and cost reduction in the fabrication process. Also, various types of slots are required in both the first wall and the shield block to reduce the eddy currents, and thereby the electromagnetic forces, in disruption events. This report summarizes the demonstrative fabrication of the ITER shielding blanket with separable first wall, performed for the development of shielding blanket fabrication technology under the task agreement G 16 TT 108 FJ (T420-2) of the ITER Engineering Design Activity Extension Period. The objectives of the demonstrative fabrication are: to demonstrate the comprehensive fabrication technique on a large scale component (e.g. the joining techniques for the beryllium armor/copper alloy and copper alloy/SS, and the slotting method of the FW and shield block); and to develop an improved fabrication method for the shielding blanket based on the updated ITER-FEAT design. In this work, the fabrication technique for a full scale separable first wall shield blanket was confirmed by fabricating a full width Be-armored first wall panel and a full scale 1/2 shield block with poloidal cooling channels. As R and D for the updated cooling channel configuration, the fabrication technique for a radial channel shield block was also demonstrated. From all the R and D results, it was successfully demonstrated that the fabrication technique and optimized conditions obtained under the task agreement G 16 TT 95 FJ (T420-1) were applicable to the prototype of the separable first wall blanket module. Additionally, basic echo data for the ultrasonic test (UT) method were obtained to show the applicability of UT for in-tube-access detection of defects at the Cu alloy/SS tube interface. (author)

  17. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    Energy Technology Data Exchange (ETDEWEB)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N. [Institution Project center ITER, Moscow (Russian Federation)

    2014-08-21

    In ITER, diagnostics will operate in the very hard radiation environment of a fusion reactor. Extensive technology studies are carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during the ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in the reactor-like ITER plasma. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of results of ITER diagnostic studies relevant to DEMO, such as the design and prototype manufacture of: neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy including first mirror protection and cleaning techniques, reflectometry, refractometry, tritium retention measurements etc., are discussed.

  18. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    Science.gov (United States)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. Results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy including first mirror protection and cleaning techniques, reflectometry, refractometry, tritium retention measurements, etc., are discussed.

  19. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    International Nuclear Information System (INIS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-01-01

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. Results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy including first mirror protection and cleaning techniques, reflectometry, refractometry, tritium retention measurements, etc., are discussed.

  20. Characterization of wet aggregate stability of soils by ¹H-NMR relaxometry.

    Science.gov (United States)

    Buchmann, C; Meyer, M; Schaumann, G E

    2015-09-01

    For the assessment of soil structural stability against hydraulic stress, wet sieving or constant head permeability tests are typically used, but they are rather limited in their intrinsic information value. Only the combined application of several tests makes it possible to assess important processes and mechanisms during soil aggregate breakdown, e.g. the influence of soil fragment release or differential swelling on the porous systems of soils or soil aggregate columns. Consequently, the development of new techniques for a faster and more detailed assessment of wet aggregate stability is required. ¹H nuclear magnetic resonance relaxometry (¹H-NMR relaxometry) might meet these requirements because it has already been successfully applied to soils. We evaluated the potential of ¹H-NMR relaxometry for the assessment of the wet aggregate stability of soils, with more detailed information on the underlying mechanisms at the same time. We therefore conducted single wet sieving and constant head permeability tests on untreated and 1% polyacrylic acid-treated soil aggregates of different textures and organic matter contents, which were subsequently measured by ¹H-NMR relaxometry after percolation. The stability of the soil aggregates mainly depended on their organic matter content and the type of aggregate stabilization, and additional effects of clay swelling on the measured wet aggregate stability were identified from the transverse relaxation time (T2) distributions. Regression analyses showed that only the percentage of water-stable aggregates could be determined accurately from percolated soil aggregate columns by ¹H-NMR relaxometry measurements. ¹H-NMR relaxometry seems a promising technique for wet aggregate stability measurements but should be further developed for nonpercolated aggregate columns and real soil samples. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Iterative reconstruction methods for Thermo-acoustic Tomography

    International Nuclear Information System (INIS)

    Marinesque, Sebastien

    2012-01-01

    We define, study and implement various iterative reconstruction methods for Thermo-acoustic Tomography (TAT): the Back and Forth Nudging (BFN), easy to implement and to use; a variational technique (VT); the more sophisticated Back and Forth SEEK (BF-SEEK); and a coupling method between Kalman filter (KF) and Time Reversal (TR). A unified formulation is given for the aforementioned sequential techniques, defining a new class of inverse problem methods: the Back and Forth Filters (BFF). In addition to existence and uniqueness (particularly for backward solutions), we study many frameworks that ensure and characterize the convergence of the algorithms. We thus give a general theoretical framework in which the BFN is a well-posed problem. Then, in application to TAT, existence and uniqueness of its solutions and geometrical convergence of the algorithm are proved, and an explicit convergence rate and a description of its numerical behaviour are given. Next, theoretical and numerical studies of more general and realistic frameworks are carried out, namely different objects, wave speeds (with or without trapping), various sensor configurations and samplings, attenuated equations and external sources. Then optimal control and best-estimate tools are used to characterize the BFN convergence and converging feedbacks for BFF, under observability assumptions. Finally, we compare the most flexible and efficient current techniques (TR and an iterative variant) with our various BFF and the VT in several experiments. Robust, flexible, and available at different levels of complexity, the methods that we propose are thus very attractive reconstruction techniques, particularly for TAT and when observations are degraded. (author) [fr
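
    As an illustration of the nudging idea behind the BFN, the following toy sketch assimilates noisy observations of a scalar decay equation by alternating forward and backward integrations with a feedback term; it is only a hedged, minimal example with invented parameters, not the TAT implementation studied in this work.

```python
import numpy as np

# Toy Back and Forth Nudging (BFN) sketch for a scalar ODE x' = -a*x.
# We try to recover the unknown initial condition x0 from noisy observations.
# This is only an illustration of the BFN idea, not the TAT implementation
# described in the record; all names and parameters are hypothetical.

a, dt, n_steps = 0.5, 0.01, 200
t = np.arange(n_steps + 1) * dt
rng = np.random.default_rng(0)

x_true0 = 2.0
x_true = x_true0 * np.exp(-a * t)                     # true trajectory
y_obs = x_true + 0.01 * rng.standard_normal(t.size)   # noisy observations

K = 20.0          # nudging gain
x0_est = 0.0      # initial guess of the initial state

for iteration in range(20):
    # forward sweep: dx/dt = -a*x + K*(y - x)
    x = np.empty_like(t)
    x[0] = x0_est
    for k in range(n_steps):
        x[k + 1] = x[k] + dt * (-a * x[k] + K * (y_obs[k] - x[k]))
    # backward sweep: integrate backwards in time with the nudging sign flipped
    xb = np.empty_like(t)
    xb[-1] = x[-1]
    for k in range(n_steps, 0, -1):
        xb[k - 1] = xb[k] - dt * (-a * xb[k] - K * (y_obs[k] - xb[k]))
    x0_est = xb[0]  # updated estimate of the initial condition

print(f"estimated x0 = {x0_est:.3f} (true value {x_true0})")
```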

  2. Spirit and prospects of ITER

    Energy Technology Data Exchange (ETDEWEB)

    Velikhov, E.P. [Kurchatov Institute of Atomic Energy, Moscow (Russian Federation)

    2002-10-01

    ITER is the unique and most straightforward way to study burning plasma science in the near future. ITER has a firm physics ground based on the results from the world's tokamaks in terms of confinement, stability, heating, current drive, divertor performance and energetic particle confinement, to the extent required in ITER. The flexibility of ITER will allow the exploration of a broad operational space of fusion power, beta, pulse length and Q values in various operational scenarios. The success of the engineering R and D programs has demonstrated that all parties have sufficient capability to produce all the necessary equipment in agreement with the specifications of ITER. The knowledge and technologies acquired in the ITER project allow us to demonstrate the scientific and technical feasibility of a fusion reactor. It can be concluded that ITER must be constructed in the near future. (author)

  3. Spirit and prospects of ITER

    International Nuclear Information System (INIS)

    Velikhov, E.P.

    2002-01-01

    ITER is the unique and most straightforward way to study burning plasma science in the near future. ITER has a firm physics ground based on the results from the world's tokamaks in terms of confinement, stability, heating, current drive, divertor performance and energetic particle confinement, to the extent required in ITER. The flexibility of ITER will allow the exploration of a broad operational space of fusion power, beta, pulse length and Q values in various operational scenarios. The success of the engineering R and D programs has demonstrated that all parties have sufficient capability to produce all the necessary equipment in agreement with the specifications of ITER. The knowledge and technologies acquired in the ITER project allow us to demonstrate the scientific and technical feasibility of a fusion reactor. It can be concluded that ITER must be constructed in the near future. (author)

  4. ITER interim design report package documents

    International Nuclear Information System (INIS)

    1996-01-01

    This publication contains the Excerpt from the ITER Council (IC-8), the ITER Interim Design Report, Cost Review and Safety Analysis, ITER Site Requirements and ITER Site Design Assumptions and the Excerpt from the ITER Council (IC-9). 8 figs, 2 tabs

  5. Variational Iteration Method for Fifth-Order Boundary Value Problems Using He's Polynomials

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2008-01-01

    Full Text Available We apply the variational iteration method using He's polynomials (VIMHP) for solving fifth-order boundary value problems. The proposed method is an elegant combination of the variational iteration and the homotopy perturbation methods and is mainly due to Ghorbani (2007). The suggested algorithm is quite efficient and is practically well suited for use in these problems. The proposed iterative scheme finds the solution without any discretization, linearization, or restrictive assumptions. Several examples are given to verify the reliability and efficiency of the method. The fact that the proposed technique solves nonlinear problems without using Adomian's polynomials can be considered a clear advantage of this algorithm over the decomposition method.
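
    For readers unfamiliar with the correction functional at the heart of the variational iteration method, the following sketch applies plain VIM (without He's polynomials) to the toy problem u' + u = 0, u(0) = 1; the problem, Lagrange multiplier and iteration count are chosen for illustration only and are much simpler than the fifth-order boundary value problems treated in the paper.

```python
import sympy as sp

# Minimal sketch of the variational iteration method (VIM) on the toy problem
#   u'(t) + u(t) = 0,  u(0) = 1,  exact solution u(t) = exp(-t).
# The correction functional is
#   u_{n+1}(t) = u_n(t) + int_0^t lambda(s) * (u_n'(s) + u_n(s)) ds
# with the Lagrange multiplier lambda = -1 for this equation.

t, s = sp.symbols("t s")
u = sp.Integer(1)  # initial approximation u_0(t) = u(0) = 1

for n in range(5):
    residual = sp.diff(u, t) + u                          # u_n' + u_n
    correction = sp.integrate(-residual.subs(t, s), (s, 0, t))
    u = sp.expand(u + correction)
    print(f"u_{n+1}(t) =", u)

# The iterates reproduce the Taylor series of exp(-t):
# u_1 = 1 - t, u_2 = 1 - t + t**2/2, and so on.
```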

  6. Blood velocity estimation using ultrasound and spectral iterative adaptive approaches

    DEFF Research Database (Denmark)

    Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2011-01-01

    This paper proposes two novel iterative data-adaptive spectral estimation techniques for blood velocity estimation using medical ultrasound scanners. The techniques make no assumption on the sampling pattern of the emissions or the depth samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the techniques are shown, using both simplified and more realistic Field II simulations as well as in vivo data, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30...
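
    The basic iteration behind such data-adaptive spectral estimators can be illustrated with a minimal iterative adaptive approach (IAA) sketch on a synthetic one-dimensional signal; the ultrasound-specific processing of the paper is far more elaborate, and all signal parameters below are invented.

```python
import numpy as np

# Minimal sketch of an iterative adaptive approach (IAA) spectral estimator on a
# toy one-dimensional signal (in the style of Yardibi et al.). The record applies
# far more elaborate, data-adaptive variants to ultrasound blood-velocity data.

rng = np.random.default_rng(1)
N = 30                                    # few samples, as in sparse Doppler data
n = np.arange(N)
f_true = [0.12, 0.31]                     # normalized frequencies of the test signal
y = sum(np.exp(2j * np.pi * f * n) for f in f_true)
y = y + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

freqs = np.linspace(0, 0.5, 256)
A = np.exp(2j * np.pi * np.outer(n, freqs))     # steering matrix, N x K

# initialize the spectrum with the periodogram
p = np.abs(A.conj().T @ y) ** 2 / N ** 2

for _ in range(15):
    R = (A * p) @ A.conj().T + 1e-6 * np.eye(N)     # covariance model sum_k p_k a_k a_k^H
    Rinv_y = np.linalg.solve(R, y)
    Rinv_A = np.linalg.solve(R, A)
    num = A.conj().T @ Rinv_y                       # a_k^H R^-1 y
    den = np.einsum("nk,nk->k", A.conj(), Rinv_A)   # a_k^H R^-1 a_k
    p = np.abs(num / den) ** 2

peaks = freqs[np.argsort(p)[-2:]]
print("estimated frequencies:", np.sort(peaks))
```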

  7. Effects of Contrast Media on Blood Rheology: Comparison in Humans, Pigs, and Sheep

    International Nuclear Information System (INIS)

    Laurent, Alexandre; Durussel, Jean Jacques; Dufaux, Jacques; Penhouet, Laurence; Bailly, Anne Laure; Bonneau, Michel; Merland, Jean Jacques

    1999-01-01

    Purpose: To compare whole blood viscosity and erythrocyte aggregation in humans, pigs, and sheep, before and after adding water-soluble iodinated contrast medium (CM). Methods: Two CMs were studied: iopromide (nonionic) and ioxaglate (ionic). The blood-CM viscosity was measured with a Couette viscometer. Erythrocyte aggregation was measured with an erythroaggregometer. Results: The blood-CM viscosity was increased up to +20% (relative to pure blood) with a CM concentration of 0%-10%. At CM concentrations from 10% to 50%, the viscosity decreased. The disaggregation shear stress was increased (relative to pure blood) at low CM concentration (0%-10%). When the CM concentration increased from 10% to 20%, the disaggregation shear stress was decreased, except with the pig blood-ioxaglate mixture. Conclusion: At low CM concentration the blood viscosity was increased in pig, sheep, and humans and the disaggregation shear stress was increased in pig and humans. The aggregation of sheep blood was too low to be detected by the erythroaggregometer. This rise can be explained by the formation of poorly deformable echinocytes. At higher CM concentration, the viscosity and the disaggregation shear stress decreased in relation to the blood dilution. We conclude that pig blood and sheep blood can both be used to study the effect of CM injection on blood viscosity. Nevertheless, the rheologic behavior of pig blood in terms of erythrocyte aggregation is closer to that of human blood than is sheep blood when mixed with CM. Pigs could thus be more suitable than sheep for in vivo studies of CM miscibility with blood during selective cannulation procedures

  8. ITER CTA newsletter. No. 6

    International Nuclear Information System (INIS)

    2002-01-01

    This ITER CTA Newsletter issue comprises information about the following ITER meetings: the second negotiation meeting on the joint implementation of ITER, held in Tokyo (Japan) on 22-23 January 2002, and an international ITER symposium on burning plasma science and technology, held at the same venue on the day after the second negotiation meeting.

  9. Optical network scaling: roles of spectral and spatial aggregation.

    Science.gov (United States)

    Arık, Sercan Ö; Ho, Keang-Po; Kahn, Joseph M

    2014-12-01

    As the bit rates of routed data streams exceed the throughput of single wavelength-division multiplexing channels, spectral and spatial traffic aggregation become essential for optical network scaling. These aggregation techniques reduce network routing complexity by increasing spectral efficiency to decrease the number of fibers, and by increasing switching granularity to decrease the number of switching components. Spectral aggregation yields a modest decrease in the number of fibers but a substantial decrease in the number of switching components. Spatial aggregation yields a substantial decrease in both the number of fibers and the number of switching components. To quantify routing complexity reduction, we analyze the number of multi-cast and wavelength-selective switches required in a colorless, directionless and contentionless reconfigurable optical add-drop multiplexer architecture. Traffic aggregation has two potential drawbacks: reduced routing power and increased switching component size.

  10. Growth hormone aggregates in the rat adenohypophysis

    Science.gov (United States)

    Farrington, M.; Hymer, W. C.

    1990-01-01

    Although it has been known for some time that GH aggregates are contained within the rat anterior pituitary gland, the role that they might play in pituitary function is unknown. The present study examines this issue using the technique of Western blotting, which permitted visualization of 11 GH variants with apparent mol wt ranging from 14-88K. Electroelution of the higher mol wt variants from gels followed by their chemical reduction with beta-mercaptoethanol increased GH immunoassayability by about 5-fold. With the blot procedure we found 1) that GH aggregates greater than 44K were associated with a 40,000 x g sedimentable fraction; 2) that GH aggregates were not present in glands from thyroidectomized rats, but were in glands from the thyroidectomized rats injected with T4; 3) that GH aggregates were uniquely associated with a heavily granulated somatotroph subpopulation isolated by density gradient centrifugation; and 4) that high mol wt GH forms were released from the dense somatotrophs in culture, since treatment of the culture medium with beta-mercaptoethanol increased GH immunoassayability by about 5-fold. Taken together, the results show that high mol wt GH aggregates are contained in secretory granules of certain somatotrophs and are also released in aggregate form from these cells in vitro.

  11. Regularization iteration imaging algorithm for electrical capacitance tomography

    Science.gov (United States)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, converting the image reconstruction task into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, and the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced to accelerate convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
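
    The acceleration step mentioned above can be illustrated in isolation: the sketch below applies FISTA to a generic sparse least-squares problem with a random test matrix, which is only a hedged stand-in for the full split Bregman scheme with sparsity and low-rank terms used in the paper.

```python
import numpy as np

# Hedged sketch of the fast iterative shrinkage-thresholding algorithm (FISTA)
# for a generic sparse inverse problem  min_x 0.5*||A x - b||^2 + lam*||x||_1.
# The sensing matrix and signal below are synthetic test data.

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum / acceleration step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))          # under-determined sensing matrix
x_true = np.zeros(120); x_true[[5, 40, 90]] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_rec = fista(A, b, lam=0.1)
print("largest recovered entries at indices:", np.argsort(np.abs(x_rec))[-3:])
```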

  12. Non-destructive qualification tests for ITER cryogenic axial insulating breaks

    International Nuclear Information System (INIS)

    Kosek, Jacek; Lopez, Roberto; Tommasini, Davide; Rodriguez-Mateos, Felix

    2014-01-01

    In the ITER superconducting magnets the dielectric separation between the CICC (Cable-In-Conduit Conductors) and the helium supply pipes is made through the so-called insulating breaks (IB). These devices shall provide the required dielectric insulation at a 30 kV level under different types of stresses and constraints: thermal, mechanical, dielectric and ionizing radiations. As part of the R and D program, the ITER Organization launched contracts with industrial companies aimed at the qualification of the manufacturing techniques. After reviewing the main functional aspects, this paper describes and discusses the protocol established for non-destructive qualification tests of the prototypes

  13. Non-destructive qualification tests for ITER cryogenic axial insulating breaks

    Energy Technology Data Exchange (ETDEWEB)

    Kosek, Jacek [Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw, Poland and CERN, Geneva 23,CH-1211 (Switzerland); Lopez, Roberto; Tommasini, Davide [CERN, Geneva 23,CH-1211 (Switzerland); Rodriguez-Mateos, Felix [CERN, Geneva 23,CH-1211, Switzerland and ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul lez Durance (France)

    2014-01-29

    In the ITER superconducting magnets the dielectric separation between the CICC (Cable-In-Conduit Conductors) and the helium supply pipes is made through the so-called insulating breaks (IB). These devices shall provide the required dielectric insulation at a 30 kV level under different types of stresses and constraints: thermal, mechanical, dielectric and ionizing radiations. As part of the R and D program, the ITER Organization launched contracts with industrial companies aimed at the qualification of the manufacturing techniques. After reviewing the main functional aspects, this paper describes and discusses the protocol established for non-destructive qualification tests of the prototypes.

  14. ITER Status and Plans

    Science.gov (United States)

    Greenfield, Charles M.

    2017-10-01

    The US Burning Plasma Organization is pleased to welcome Dr. Bernard Bigot, who will give an update on progress in the ITER Project. Dr. Bigot took over as Director General of the ITER Organization in early 2015 following a distinguished career that included serving as Chairman and CEO of the French Alternative Energies and Atomic Energy Commission and as High Commissioner for ITER in France. During his tenure at ITER the project has moved into high gear, with rapid progress evident on the construction site and the preparation of a staged schedule and a research plan leading from where we are today all the way to full DT operation. In an unprecedented international effort, seven partners (China, the European Union, India, Japan, Korea, Russia and the United States) have pooled their financial and scientific resources to build the biggest fusion reactor in history. ITER will open the way to the next step: a demonstration fusion power plant. All DPP attendees are welcome to attend this ITER town meeting.

  15. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    Science.gov (United States)

    Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.

    2018-01-01

    We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
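
    As a hedged illustration of a preconditioned block iterative eigensolver, the sketch below uses SciPy's LOBPCG with a simple diagonal preconditioner and a cheap block of starting vectors on a random sparse test matrix; the production solvers and preconditioners described in the record are specialized for the nuclear configuration interaction structure and are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lobpcg

# Hedged sketch of a preconditioned block iterative eigensolver, using SciPy's
# LOBPCG as a stand-in for the specialized solver described in the record.
# The "Hamiltonian" here is just a random sparse symmetric test matrix and the
# preconditioner is a simple diagonal (Jacobi-like) shift-and-invert surrogate.

n, nev = 2000, 4                         # matrix size and number of lowest states
H = sp.random(n, n, density=1e-3, random_state=0, format="csr")
H = 0.5 * (H + H.T) + sp.diags(np.arange(1.0, n + 1.0))   # symmetric, diagonally dominant

diag = H.diagonal()
sigma = diag.min() - 1.0                 # rough shift below the lowest eigenvalue
M = LinearOperator((n, n), matvec=lambda v: v / (diag - sigma))  # preconditioner

# starting block: unit vectors on the smallest diagonal entries, as a cheap guess
X = np.zeros((n, nev))
X[np.argsort(diag)[:nev], np.arange(nev)] = 1.0

eigvals, eigvecs = lobpcg(H, X, M=M, tol=1e-6, maxiter=200, largest=False)
print("lowest eigenvalues:", np.sort(eigvals))
```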

  16. Multi-user MIMO and carrier aggregation in 4G systems

    DEFF Research Database (Denmark)

    Cattoni, Andrea Fabio; Nguyen, Hung Tuan; Duplicy, Jonathan

    2012-01-01

    The market success of broadband multimediaenabled devices such as smart phones, tablets, and laptops is increasing the demand for wireless data capacity in mobile cellular systems. In order to meet such requirements, the introduction of advanced techniques for increasing the efficiency in spectrum...... usage was required. Multi User -Multiple Input Multiple Output (MU-MIMO) and Carrier Aggregation (CA) are two important techniques addressed by 3GPP for LTE and LTE-Advanced. The aim of the EU FP7 project on ”Spectrum Aggregation and Multiuser-MIMO: real-World Impact” (SAMURAI) is to investigate...

  17. Automatic analysis of microscopic images of red blood cell aggregates

    Science.gov (United States)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells, commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Image processing and analysis for the characterization of RBC aggregation have frequently been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adapted as a routine for hemorheological and clinical biochemistry laboratories, because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis (ensuring repeatability of the analysis).
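
    A pipeline of the general kind described here can be sketched with standard image-processing building blocks (segmentation, connected-component labeling, shape features); the thresholds and classification rules below are invented for illustration and do not reproduce the published system.

```python
import numpy as np
from skimage import filters, measure, morphology

# Hedged sketch of an automatic aggregate-characterization pipeline: segment a
# microscopy image, label connected components, and classify each object by
# simple size/shape features. All rules and numbers are hypothetical.

def classify_aggregates(image, single_cell_area=250):
    mask = image > filters.threshold_otsu(image)            # separate cells from background
    mask = morphology.remove_small_objects(mask, min_size=50)
    labels = measure.label(mask)
    results = []
    for region in measure.regionprops(labels):
        n_cells = max(1, round(region.area / single_cell_area))
        if n_cells == 1:
            kind = "single cell"
        elif region.eccentricity > 0.9:                      # elongated object -> rouleau
            kind = f"rouleau (~{n_cells} cells)"
        else:
            kind = f"3D aggregate (~{n_cells} cells)"
        results.append((region.label, kind))
    return results

# usage with a synthetic image standing in for a microscopy frame
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (256, 256))
img[100:110, 50:150] += 0.8      # a long, thin "rouleau"
img[180:220, 180:220] += 0.8     # a compact "3D aggregate"
for label, kind in classify_aggregates(img):
    print(label, kind)
```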

  18. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results

    International Nuclear Information System (INIS)

    Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.; Watson, David J.; Lynch, Timothy P.; Antonio, Cheryl L.; Birchall, Alan; Anderson, Kevin K.; Zharov, Peter

    2012-01-01

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
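
    The general idea, disaggregating the two variance components and then forming a per-person posterior, can be illustrated with a simple normal-normal toy model; the published method uses its own assumptions and guarantees non-negative PDFs, so the sketch below with synthetic numbers is only a hedged illustration, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch: separate population variability from measurement uncertainty,
# then form a per-person posterior. Normal-normal toy model, synthetic numbers.

rng = np.random.default_rng(0)
n = 5000
true_activity = np.abs(rng.normal(0.2, 0.1, n))          # synthetic measurands
meas_sigma = np.full(n, 0.3)                             # reported per-measurement sigma
measured = true_activity + rng.normal(0.0, meas_sigma)   # many net results go negative

# Disaggregate: population variance = observed variance - mean measurement variance
pop_mean = measured.mean()
pop_var = max(measured.var(ddof=1) - np.mean(meas_sigma ** 2), 1e-12)
print(f"population mean {pop_mean:.3f}, disaggregated population sd {np.sqrt(pop_var):.3f}")

# Per-person posterior, using the disaggregated population PDF as the prior
post_var = 1.0 / (1.0 / pop_var + 1.0 / meas_sigma ** 2)
post_mean = post_var * (pop_mean / pop_var + measured / meas_sigma ** 2)
print(f"mean of posterior means {post_mean.mean():.3f} "
      f"(equal to the mean of the measurement results, as in the record)")
```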

  19. ITER council proceedings: 1999

    International Nuclear Information System (INIS)

    1999-01-01

    In 1999 the ITER meeting in Cadarache (10-11 March 1999) and the Programme Directors Meeting in Grenoble (28-29 July 1999) took place. Both meetings were exclusively devoted to ITER engineering design activities and their agendas covered all issues important for the development of ITER. This volume presents the documents of these two important meetings

  20. ITER EDA technical activities

    International Nuclear Information System (INIS)

    Aymar, R.

    1998-01-01

    Six years of technical work under the ITER EDA Agreement have resulted in a design which constitutes a complete description of the ITER device and of its auxiliary systems and facilities. The ITER Council commented that the Final Design Report provides the first comprehensive design of a fusion reactor based on well established physics and technology

  1. Alpha diagnostics using pellet charge exchange: Results on TFTR and prospects for ITER

    International Nuclear Information System (INIS)

    Fisher, R.K.; Duong, H.H.; McChesney, J.M.

    1996-05-01

    Confinement of alpha particles is essential for fusion ignition and alpha physics studies are a major goal of the TFTR, JET, and ITER DT experiments, but alpha measurements remain one of the most challenging plasma diagnostic tasks. The Pellet Charge Exchange (PCX) diagnostic has successfully measured the radial density profile and energy distribution of fast (0.5 to 3.5 MeV) confined alpha particles in TFTR. This paper describes the diagnostic capabilities of PCX demonstrated on TFTR and discusses the prospects for applying this technique to ITER. Major issues on ITER include the pellet's perturbation to the plasma and obtaining satisfactory pellet penetration into the plasma

  2. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    Science.gov (United States)

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a capacity improvement of 0.1 nats over conventional chip-level OCDMA systems at a coding rate of 1/10.
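
    The following toy sketch shows a parallel interference cancellation loop for a heavily simplified on-off-keyed OCDMA link with Poisson photon counting; the codes, photon budgets and thresholds are invented, and the coded, EXIT-chart-analysed system of the paper is considerably more involved.

```python
import numpy as np

# Hedged toy sketch of iterative parallel interference cancellation (PIC) for an
# on-off-keyed OCDMA link with Poisson photon-counting reception. All parameters
# are hypothetical and chosen only to illustrate the cancellation loop.

rng = np.random.default_rng(2)
N, K, weight = 64, 8, 8            # chips per bit, users, code weight
mu_s, mu_d = 3.0, 0.1              # mean photons per marked chip / dark counts

codes = np.zeros((K, N))
for k in range(K):                 # random constant-weight (0,1) spreading codes
    codes[k, rng.choice(N, size=weight, replace=False)] = 1.0

bits = rng.integers(0, 2, size=K)                  # transmitted OOK data bits
intensity = mu_s * bits @ codes + mu_d             # per-chip mean photon count
r = rng.poisson(intensity)                         # photon-counting receiver

threshold = weight * (0.5 * mu_s + mu_d)
b_hat = (codes @ r > threshold).astype(float)      # conventional correlation detection

for _ in range(5):                                 # parallel interference cancellation
    interference = mu_s * (b_hat @ codes)          # estimated multi-user interference
    b_new = np.empty_like(b_hat)
    for k in range(K):
        own = mu_s * b_hat[k] * codes[k]           # keep user k's own contribution
        cleaned = r - (interference - own)
        b_new[k] = float(codes[k] @ cleaned > threshold)
    b_hat = b_new

print("bit errors after PIC:", int(np.sum(b_hat != bits)), "of", K)
```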

  3. Real-time amyloid aggregation monitoring with a photonic crystal-based approach.

    Science.gov (United States)

    Santi, Sara; Musi, Valeria; Descrovi, Emiliano; Paeder, Vincent; Di Francesco, Joab; Hvozdara, Lubos; van der Wal, Peter; Lashuel, Hilal A; Pastore, Annalisa; Neier, Reinhard; Herzig, Hans Peter

    2013-10-21

    We propose the application of a new label-free optical technique based on photonic nanostructures to monitor in real time the amyloid-beta 1-42 (Aβ(1-42)) fibrillization, including the early stages of the aggregation process, which are related to the onset of Alzheimer's Disease (AD). The aggregation of Aβ peptides into amyloid fibrils has commonly been associated with neuronal death, which culminates in the clinical features of the incurable degenerative AD. Recent studies revealed that cell toxicity is determined by the formation of soluble oligomeric forms of Aβ peptides in the early stages of aggregation. At this phase, classical amyloid detection techniques lack sensitivity. Upon chemical passivation of the sensing surface by means of polyethylene glycol, the proposed approach allows an accurate, real-time monitoring of the refractive index variation of the solution in which Aβ(1-42) peptides are aggregating. This measurement is directly related to the aggregation state of the peptide throughout oligomerization and subsequent fibrillization. Our findings open new perspectives in the understanding of the dynamics of amyloid formation, and validate this approach as a new and powerful method to screen aggregation at early stages. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. NIR-Red Spectra-Based Disaggregation of SMAP Soil Moisture to 250 m Resolution Based on SMAPEx-4/5 in Southeastern Australia

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-01-01

    Full Text Available To meet the demand of regional hydrological and agricultural applications, a new method named near infrared-red (NIR-red) spectra-based disaggregation (NRSD) was proposed to perform a disaggregation of Soil Moisture Active Passive (SMAP) products from 36 km to 250 m resolution. The NRSD combined the proposed normalized soil moisture index (NSMI) with SMAP data to obtain 250 m resolution soil moisture mapping. The experiment was conducted in southeastern Australia during the SMAP Experiments (SMAPEx) 4/5 and validated with the in situ SMAPEx network. Results showed that NRSD performed a decent downscaling (root-mean-square error (RMSE) = 0.04 m³/m³ and 0.12 m³/m³ during SMAPEx-4 and SMAPEx-5, respectively). Based on the validation, it was found that the proposed NSMI was a new alternative indicator for denoting the heterogeneity of soil moisture at sub-kilometer scales. Attributed to the excellent performance of the NSMI, NRSD has a higher overall accuracy, finer spatial representation within SMAP pixels and wider applicable scope in usability tests for land cover, vegetation density and drought condition than the disaggregation based on physical and theoretical scale change (DISPATCH) has at 250 m resolution. This revealed that the NRSD method is expected to provide soil moisture mapping at 250 m resolution for large-scale hydrological and agricultural studies.

  5. Variable aperture-based ptychographical iterative engine method

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various scientific research fields.
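
    The modulus-constraint update that PIE-type algorithms (including vaPIE) build on can be sketched for a single synthetic illumination as follows; the sizes, probe and object are invented, and a real vaPIE reconstruction uses a sequence of aperture sizes and patterns rather than the single pattern shown here.

```python
import numpy as np

# Hedged sketch of the core PIE-style update for one illumination and one
# recorded diffraction amplitude. Synthetic object and probe; illustrative only.

rng = np.random.default_rng(0)
n = 128
obj_true = np.exp(1j * rng.uniform(0, 1, (n, n)))         # synthetic phase object
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
probe = (np.hypot(xx, yy) < 20).astype(complex)           # circular aperture illumination

measured = np.abs(np.fft.fft2(obj_true * probe))          # recorded diffraction amplitude

obj = np.ones((n, n), dtype=complex)                      # initial object guess
alpha = 1.0
for _ in range(100):
    exit_wave = obj * probe
    far_field = np.fft.fft2(exit_wave)
    far_field = measured * np.exp(1j * np.angle(far_field))   # apply the modulus constraint
    exit_new = np.fft.ifft2(far_field)
    # PIE object update, weighted by the probe and normalized by its peak intensity
    obj += alpha * np.conj(probe) / (np.abs(probe).max() ** 2 + 1e-12) \
           * (exit_new - exit_wave)

err = np.mean(np.abs(np.abs(np.fft.fft2(obj * probe)) - measured))
print(f"mean modulus-constraint error after 100 iterations: {err:.3e}")
```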

  6. Future plan of ITER

    International Nuclear Information System (INIS)

    Kitsunezaki, Akio

    1998-01-01

    In cooperation among four parties, Japan, the USA, the EU and Russia, the ITER plan has proceeded as the conceptual design activities from 1988 to 1990 and the engineering design activities since 1992. To construct ITER, the legal and organizational aspects of ITER operation have been investigated by the four parties. However, their economic conditions have worsened, so that construction of ITER could not begin at the end of the engineering design activities in 1998. Accordingly, they decided to continue the engineering design activities for three more years in order to study low-cost options and to test the superconducting model coil. (S.Y.)

  7. The proteome of neurofilament-containing protein aggregates in blood

    Directory of Open Access Journals (Sweden)

    Rocco Adiutori

    2018-07-01

    Full Text Available Protein aggregation in biofluids is a poorly understood phenomenon. Under normal physiological conditions, fluid-borne aggregates may contain plasma or cell proteins prone to aggregation. Recent observations suggest that neurofilaments (Nf), the building blocks of neurons and a biomarker of neurodegeneration, are included in high molecular weight complexes in circulation. The composition of these Nf-containing hetero-aggregates (NCH) may change in systemic or organ-specific pathologies, providing the basis to develop novel disease biomarkers. We have tested ultracentrifugation (UC) and a commercially available protein aggregate binder, Seprion PAD-Beads (SEP), for the enrichment of NCH from plasma of healthy individuals, and then characterised the Nf content of the aggregate fractions using gel electrophoresis and their proteome by mass spectrometry (MS). Western blot analysis of fractions obtained by UC showed that, among Nf isoforms, neurofilament heavy chain (NfH) was found within SDS-stable high molecular weight aggregates. Shotgun proteomics of aggregates obtained with both extraction techniques identified mostly cell structural and, to a lesser extent, extra-cellular matrix proteins, while functional analysis revealed pathways involved in the inflammatory response, the phagosome and prion-like protein behaviour. UC aggregates were specifically enriched with proteins involved in endocrine, metabolic and cell-signalling regulation. We describe the proteome of neurofilament-containing aggregates isolated from the biofluids of healthy individuals using different extraction methods.

  8. Not All Large Customers are Made Alike: Disaggregating Response to Default-Service Day-Ahead Market Pricing

    International Nuclear Information System (INIS)

    Hopper, Nicole; Goldman, Charles; Neenan, Bernie

    2006-01-01

    For decades, policymakers and program designers have gone on the assumption that large customers, particularly industrial facilities, are the best candidates for real-time pricing (RTP). This assumption is based partly on practical considerations (large customers can provide potentially large load reductions) but also on the premise that businesses focused on production cost minimization are most likely to participate and respond to opportunities for bill savings. Yet few studies have examined the actual price response of large industrial and commercial customers in a disaggregated fashion, nor have factors such as the impacts of demand response (DR) enabling technologies, simultaneous emergency DR program participation and price response barriers been fully elucidated. This second-phase case study of Niagara Mohawk Power Corporation (NMPC)'s large customer RTP tariff addresses these information needs. The results demonstrate the extreme diversity of large customers' response to hourly varying prices. While two-thirds exhibit some price response, about 20 percent of customers provide 75-80 percent of the aggregate load reductions. Manufacturing customers are the most price-responsive group, followed by government/education customers, while other sectors are largely unresponsive. However, individual customer response varies widely. Currently, enabling technologies do not appear to enhance hourly price response; customers report using them for other purposes. The New York Independent System Operator (NYISO)'s emergency DR programs enhance price response, in part by signaling to customers that day-ahead prices are high. In sum, large customers do currently provide moderate price response, but there is significant room for improvement through targeted programs that help customers develop and implement automated load-response strategies.

  9. ITER council proceedings: 1992

    International Nuclear Information System (INIS)

    1994-01-01

    At the signing of the ITER EDA Agreement in July 1992, each of the Parties presented to the Director General the names of their designated members of the ITER Council. Upon receiving those names, the Director General stated that the ITER Engineering Design Activities were ''ready to begin''. The next step in this process was the convening of the first meeting of the ITER Council. The first meeting of the Council, held in Vienna, was opened by Director General Hans Blix. The second meeting was held in Moscow, the formal seat of the Council. This volume presents the records of these first two Council meetings and, together with the previous volumes on the text of the Agreement and Protocol 1 and on the preparations for their signing, represents essential information on the evolution of the ITER EDA.

  10. Iterative oscillation results for second-order differential equations with advanced argument

    Directory of Open Access Journals (Sweden)

    Irena Jadlovska

    2017-07-01

    Full Text Available This article concerns the oscillation of solutions to a linear second-order differential equation with advanced argument. Sufficient oscillation conditions involving the limit inferior are given which essentially improve known results. We base our technique on the iterative construction of solution estimates and on some of the recent ideas developed for first-order advanced differential equations. We demonstrate the advantage of our results on an Euler-type advanced equation. Using MATLAB software, a comparison of the effectiveness of the newly obtained criteria, as well as the necessary iteration length in particular cases, is discussed.

  11. Temperature dependence of erythrocyte aggregation in vitro by backscattering nephelometry

    Science.gov (United States)

    Sirko, Igor V.; Firsov, Nikolai N.; Ryaboshapka, Olga M.; Priezzhev, Alexander V.

    1997-05-01

    We apply the backscattering nephelometry technique to register the alterations of the scattering signal from a whole blood sample due to the appearance or disappearance of different types of erythrocyte aggregates in stasis and under controlled shear stress. The measured parameters are the characteristic times of linear and 3D aggregate formation and the strength of aggregates of different types. These parameters depend on the sample temperature in the range of 2 to 50 degrees C. The temporal parameters of the aggregation process strongly increase at a temperature of 45 degrees C. For samples of normal blood the aggregate strength parameters do not depend significantly on the sample temperature, whereas for blood samples from patients suffering from Sjogren syndrome we observe a large increase in the strength of 3D and linear aggregates and a decrease in the time of linear aggregate formation at low sample temperature. This combination of parameters is opposite to that observed in samples of pathological blood at room temperature. Possible reasons for this behavior of the aggregation state of blood and an explanation of the observed effects will be discussed.

  12. Role of Outgassing of ITER Vacuum Vessel In-Wall Shielding Materials in Leak Detection of ITER Vacuum Vessel

    Science.gov (United States)

    Maheshwari, A.; Pathak, H. A.; Mehta, B. K.; Phull, G. S.; Laad, R.; Shaikh, M. S.; George, S.; Joshi, K.; Khan, Z.

    2017-04-01

    The ITER Vacuum Vessel is a torus-shaped, double-wall structure. The space between the double walls of the VV is filled with In-Wall Shielding Blocks (IWS) and water. The main purpose of the IWS is to provide neutron shielding during ITER plasma operation and to reduce the ripple of the toroidal magnetic field (TF). Although the In-Wall Shielding Blocks (IWS) will be submerged in water between the walls of the ITER Vacuum Vessel (VV), the outgassing rate (OGR) of the IWS materials plays a significant role in leak detection of the ITER Vacuum Vessel. The thermal outgassing rate of a material depends critically on its surface roughness. During the leak detection process using an RGA-equipped leak detector and helium tracer gas, there will be a spill-over of mass 3 and mass 2 to mass 4, which creates a background reading. The helium background will thus have a contribution from hydrogen too, so it is necessary to ensure a low hydrogen OGR. To achieve an effective leak test it is required to obtain a background below 1 × 10⁻⁸ mbar·l·s⁻¹, and hence the maximum outgassing rate of the IWS materials should comply with the maximum outgassing rate required for hydrogen, i.e. 1 × 10⁻¹⁰ mbar·l·s⁻¹·cm⁻² at room temperature. As the IWS materials are special materials developed for the ITER project, it is necessary to ensure that their outgassing rate complies with this requirement. There is a possibility of gases diffusing into the material at the time of production. So, to validate the production process of the materials as well as the manufacturing of the final product from this material, three coupons of each IWS material have been manufactured with the same technique that is used in manufacturing the IWS blocks. Manufacturing records of these coupons have been approved by the ITER-IO (International Organization). The outgassing rates of these coupons have been measured at room temperature and found to be within the acceptable limit to obtain the required helium background. On the basis of these measurements, test reports have been generated and got

  13. Comparison of pure and hybrid iterative reconstruction techniques with conventional filtered back projection: Image quality assessment in the cervicothoracic region

    International Nuclear Information System (INIS)

    Katsura, Masaki; Sato, Jiro; Akahane, Masaaki; Matsuda, Izuru; Ishida, Masanori; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni

    2013-01-01

    Objectives: To evaluate the impact on image quality of three different image reconstruction techniques in the cervicothoracic region: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Methods: Forty-four patients underwent unenhanced standard-of-care clinical computed tomography (CT) examinations which included the cervicothoracic region with a 64-row multidetector CT scanner. Images were reconstructed with FBP, 50% ASIR-FBP blending (ASIR50), and MBIR. Two radiologists assessed the cervicothoracic region in a blinded manner for streak artifacts, pixilated blotchy appearances, critical reproduction of visually sharp anatomical structures (thyroid gland, common carotid artery, and esophagus), and overall diagnostic acceptability. Objective image noise was measured in the internal jugular vein. Data were analyzed using the sign test and pair-wise Student's t-test. Results: MBIR images had significantly lower quantitative image noise (8.88 ± 1.32) compared to ASIR images (18.63 ± 4.19, P 0.9 for ASIR vs. FBP for both readers). MBIR images were all diagnostically acceptable. Unique features of MBIR images included pixilated blotchy appearances, which did not adversely affect diagnostic acceptability. Conclusions: MBIR significantly improves image noise and streak artifacts of the cervicothoracic region over ASIR and FBP. MBIR is expected to enhance the value of CT examinations for areas where image noise and streak artifacts are problematic

  14. ITER physics design guidelines: 1989

    International Nuclear Information System (INIS)

    Uckan, N.A.

    1990-01-01

    The physics basis for ITER has been developed from an assessment of the results of the last twenty-five years of tokamak research and from detailed analysis of important physics issues specifically for the ITER design. This assessment has been carried out with the direct participation of members of the experimental teams of each of the major tokamaks in the world fusion program, through participation in ITER workshops, contributions to the ITER Physics R and D Program, and direct contacts between the ITER team and the cognizant experimentalists. Extrapolations of the present database, where needed, are made in the most cautious way consistent with the engineering constraints and performance goals of ITER. Cases where a working assumption insufficiently supported by the present database had to be introduced are explicitly stated. While a strong emphasis has been placed on the physics credibility of the design, the guidelines also take into account that ITER should be designed to be able to take advantage of potential improvements in tokamak physics that may occur before and during the operation of ITER. (author). 33 refs

  15. ITER council proceedings: 1996

    International Nuclear Information System (INIS)

    1997-01-01

    Records of the 10. ITER Council Meeting (IC-10), held on 26-27 July 1996, in St. Petersburg, Russia, and the 11. ITER Council Meeting (IC-11) held on 17-18 December 1996, in Tokyo, Japan, are presented, giving essential information on the evolution of the ITER Engineering Design Activities (EDA) and the cost review and safety analysis. Figs, tabs

  16. A comparison of aggregation behavior in aqueous humic acids

    Directory of Open Access Journals (Sweden)

    von Wandruszka Ray

    2001-02-01

    Full Text Available The ability of six humic acids (HAs) to form pseudomicellar structures in aqueous solution was evaluated by five techniques: size exclusion chromatography; pyrene fluorescence enhancement; the pyrene I1/I3 ratio; the cloud point of dilute HA solutions; and the fluorescence anisotropy of HAs. Soil HAs were found to aggregate most easily, both on microscopic and macroscopic scales. The formation of amphiphilic structures was chiefly related to HA-solvent interactions: highly solvated HAs aggregated poorly, while a lignite-derived material underwent intermolecular, rather than intramolecular, rearrangements. A newly discovered algal HA was found to have minimal aggregative properties.

  17. ITER concept definition. V.2

    International Nuclear Information System (INIS)

    1989-01-01

    Volume II of the two volumes describing the concept definition of the International Thermonuclear Experimental Reactor deals with the ITER concept in technical depth, and covers all areas of design of the ITER tokamak. Included are an assessment of the current database for design, scoping studies, rationale for concepts selection, performance flexibility, the ITER concept, the operations and experimental/testing program, ITER parameters and design phase schedule, and research and development specific to ITER. This latter includes a definition of specific research and development tasks, a division of tasks among members, specific milestones, required results, and schedules. Figs and tabs

  18. ITER CTA newsletter. No. 10

    International Nuclear Information System (INIS)

    2002-07-01

    This ITER CTA newsletter issue comprises the ITER backgrounder, which was approved as an official document by the participants in the Negotiations on the ITER Implementation agreement at their fourth meeting, held in Cadarache from 4-6 June 2002, and information about two ITER meetings: one is the third meeting of the ITER parties' designated Safety Representatives, which took place in Cadarache, France from 6-7 June 2002, and the other is the second meeting of the International Tokamak Physics Activity (ITPA) topical group on diagnostics, which was held at General Atomics, San Diego, USA, from 4-8 March 2002

  19. Toward construction of ITER

    International Nuclear Information System (INIS)

    Shimomura, Yasuo

    2005-01-01

    The ITER Project has developed significantly in the past years in preparation for its construction. The ITER Negotiators have developed a draft Joint Implementation Agreement (JIA), ready for completion following the nomination of the Project's Director General (DG). The ITER International Team and Participant Teams have continued technical and organizational preparations. Actual construction will be able to start immediately after the international ITER organization is established, following signature of the JIA. The Project is now strongly supported by all the participants as well as by the scientific community, with the final high-level negotiations, focused on siting and the concluding details of cost sharing, having started in December 2003. The EU, with Cadarache, and Japan, with Rokkasho, have both promised large contributions to the project to strongly support their construction site proposals. The extent to which they both wish to host the ITER facility is such that large contributions to a broader collaboration among the Parties are also proposed by them. This covers complementary activities to help accelerate fusion development towards a viable power source, and may also allow the Participants to reach a conclusion on ITER siting. (author)

  20. ITER tokamak device

    International Nuclear Information System (INIS)

    Doggett, J.; Salpietro, E.; Shatalov, G.

    1991-01-01

    The results of the Conceptual Design Activities for the International Thermonuclear Experimental Reactor (ITER) are summarized. These activities, carried out between April 1988 and December 1990, produced a consistent set of technical characteristics and preliminary plans for co-ordinated research and development support of ITER; and a conceptual design, a description of design requirements and a preliminary construction schedule and cost estimate. After a description of the design basis, an overview is given of the tokamak device, its auxiliary systems, facility and maintenance. The interrelation and integration of the various subsystems that form the ITER tokamak concept are discussed. The 16 ITER equatorial port allocations, used for nuclear testing, diagnostics, fuelling, maintenance, and heating and current drive, are given, as well as a layout of the reactor building. Finally, brief descriptions are given of the major ITER sub-systems, i.e., (i) magnet systems (toroidal and poloidal field coils and cryogenic systems), (ii) containment structures (vacuum and cryostat vessels, machine gravity supports, attaching locks, passive loops and active coils), (iii) first wall, (iv) divertor plate (design and materials, performance and lifetime, a.o.), (v) blanket/shield system, (vi) maintenance equipment, (vii) current drive and heating, (viii) fuel cycle system, and (ix) diagnostics. 11 refs, figs and tabs

  1. Improved Vote Aggregation Techniques for the Geo-Wiki Cropland Capture Crowdsourcing Game

    Science.gov (United States)

    Baklanov, Artem; Fritz, Steffen; Khachay, Michael; Nurmukhametov, Oleg; Salk, Carl; See, Linda; Shchepashchenko, Dmitry

    2016-04-01

    Crowdsourcing is a new approach for solving data processing problems for which conventional methods appear to be inaccurate, expensive, or time-consuming. Nowadays, the development of new crowdsourcing techniques is mostly motivated by so-called Big Data problems, including problems of assessment and clustering for large datasets obtained in aerospace imaging, remote sensing, and even in social network analysis. By involving volunteers from all over the world, the Geo-Wiki project tackles problems of environmental monitoring with applications to flood resilience, biomass data analysis and classification of land cover. For example, the Cropland Capture Game, which is a gamified version of Geo-Wiki, was developed to aid in the mapping of cultivated land, and was used to gather 4.5 million classifications of images of the Earth's surface. More recently, the Picture Pile game, which is a more generalized version of Cropland Capture, aims to identify tree loss over time from pairs of very high resolution satellite images. Despite recent progress in image analysis, the solution to these problems is hard to automate since human experts still outperform the majority of machine learning algorithms and artificial systems in this field on certain image recognition tasks. The replacement of rare and expensive experts by a team of distributed volunteers seems to be promising, but this approach leads to challenging questions such as: how can individual opinions be aggregated optimally, how can confidence bounds be obtained, and how can the unreliability of volunteers be dealt with? In this paper, on the basis of several known machine learning techniques, we propose a technical approach to improve the overall performance of the majority voting decision rule used in the Cropland Capture Game. The proposed approach increases the estimated consistency with expert opinion from 77% to 86%.
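
    One common way to go beyond plain majority voting is to weight votes by an estimated volunteer reliability; the sketch below shows a simple EM-style iteration in the spirit of Dawid and Skene on synthetic votes, which is a hedged illustration of the general idea rather than the specific approach developed for the Cropland Capture data.

```python
import numpy as np

# Hedged sketch of reliability-weighted vote aggregation for binary image labels
# (cropland / not cropland). This is NOT the record's exact method; it only
# illustrates estimating volunteer reliability and feeding it back into the vote.

def aggregate_votes(votes, n_iter=20):
    """votes: array (n_volunteers, n_images) with values 1, 0 or -1 (no vote)."""
    mask = votes >= 0
    # start from plain majority voting
    prob = np.where(mask, votes, 0).sum(0) / np.maximum(mask.sum(0), 1)
    for _ in range(n_iter):
        # per-volunteer accuracy against the current soft labels
        agree = np.where(votes == 1, prob, 1.0 - prob)
        acc = np.where(mask, agree, np.nan)
        acc = np.clip(np.nanmean(acc, axis=1), 0.05, 0.95)
        # log-odds weighted vote per image
        w = np.log(acc / (1.0 - acc))[:, None]
        score = np.where(mask, np.where(votes == 1, w, -w), 0.0).sum(0)
        prob = 1.0 / (1.0 + np.exp(-score))
    return (prob > 0.5).astype(int), prob

# tiny synthetic example: 5 volunteers (one adversarial), 8 images
votes = np.array([[1, 1, 0, 1, 0, 0, 1, 0],
                  [1, 1, 0, 1, 0, 0, 1, 1],
                  [1, 0, 0, 1, 0, 0, 1, 0],
                  [0, 0, 1, 0, 1, 1, 0, 1],    # systematically wrong volunteer
                  [1, 1, 0, -1, 0, 0, 1, 0]])  # -1 marks a missing vote
labels, confidence = aggregate_votes(votes)
print(labels, np.round(confidence, 2))
```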

  2. Cardiovascular CT angiography in neonates and children : Image quality and potential for radiation dose reduction with iterative image reconstruction techniques

    NARCIS (Netherlands)

    Tricarico, Francesco; Hlavacek, Anthony M.; Schoepf, U. Joseph; Ebersberger, Ullrich; Nance, John W.; Vliegenthart, Rozemarijn; Cho, Young Jun; Spears, J. Reid; Secchi, Francesco; Savino, Giancarlo; Marano, Riccardo; Schoenberg, Stefan O.; Bonomo, Lorenzo; Apfaltrer, Paul

    To evaluate image quality (IQ) of low-radiation-dose paediatric cardiovascular CT angiography (CTA), comparing iterative reconstruction in image space (IRIS) and sinogram-affirmed iterative reconstruction (SAFIRE) with filtered back-projection (FBP) and estimate the potential for further dose

  3. Flexible hydrological modeling - Disaggregation from lumped catchment scale to higher spatial resolutions

    Science.gov (United States)

    Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas

    2015-04-01

    Based on an international literature review on model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results, when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, but keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff total and sub-flows (at downstream and internal gauging stations). For the distributed models, additional
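
    The disaggregation step described above, distributing a calibrated lumped parameter over grid cells according to a spatial catchment characteristic while preserving the catchment mean and the relative spatial differences, can be sketched as follows; the variable names, scaling exponent and numbers are invented for illustration.

```python
import numpy as np

# Hedged sketch of lumped-to-distributed parameter disaggregation: cell values
# follow a spatial characteristic, a single exponent is the only extra
# calibration parameter, and the area-weighted catchment mean is preserved.

def disaggregate_parameter(lumped_value, characteristic, area, exponent=1.0):
    """Return per-cell parameter values whose area-weighted mean equals lumped_value."""
    rel = (characteristic / np.average(characteristic, weights=area)) ** exponent
    cell_values = lumped_value * rel
    # renormalize so the area-weighted catchment mean is preserved exactly
    cell_values *= lumped_value / np.average(cell_values, weights=area)
    return cell_values

# example: a soil storage capacity disaggregated with a soil-depth proxy
rng = np.random.default_rng(0)
soil_depth = rng.uniform(0.3, 1.5, size=100)     # spatial characteristic per cell
area = np.ones(100)                              # equal-area cells
lumped_capacity = 120.0                          # mm, from the lumped calibration

cells = disaggregate_parameter(lumped_capacity, soil_depth, area, exponent=0.8)
print("catchment mean preserved:", np.isclose(np.average(cells, weights=area), 120.0))
```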

  4. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...
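
    Two of the numerical aggregation operators covered by texts of this kind, the weighted mean and the ordered weighted averaging (OWA) operator, can be written in a few lines (an illustrative sketch, not code from the book):

    def weighted_mean(x, w):
        """Weights w are assumed to be nonnegative and to sum to one."""
        return sum(wi * xi for wi, xi in zip(w, x))

    def owa(x, w):
        """OWA applies the weights to the inputs sorted in decreasing order."""
        return sum(wi * xi for wi, xi in zip(w, sorted(x, reverse=True)))

    # owa([3, 1, 2], [0.5, 0.3, 0.2]) weights the largest input most heavily.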

  5. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    Science.gov (United States)

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that, with an appropriate combination of two optical simulation techniques (classical ray-tracing and the finite difference time domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  6. Molecular mechanisms used by chaperones to reduce the toxicity of aberrant protein oligomers

    NARCIS (Netherlands)

    Mannini, Benedetta; Cascella, Roberta; Zampagni, Mariagioia; Van Waarde-Verhagen, Maria; Meehan, Sarah; Roodveldt, Cintia; Campioni, Silvia; Boninsegna, Matilde; Penco, Amanda; Relini, Annalisa; Kampinga, Harm H.; Dobson, Christopher M.; Wilson, Mark R.; Cecchi, Cristina; Chiti, Fabrizio

    2012-01-01

    Chaperones are the primary regulators of the proteostasis network and are known to facilitate protein folding, inhibit protein aggregation, and promote disaggregation and clearance of misfolded aggregates inside cells. We have tested the effects of five chaperones on the toxicity of misfolded

  7. Variable aperture-based ptychographical iterative engine method.

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
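
    A hedged sketch of the core PIE-style object update on which methods such as vaPIE build (heavily simplified, with assumed variable names, and not the authors' implementation): for each recorded pattern the measured modulus is imposed in the Fourier domain and the object estimate is refined in real space:

    import numpy as np

    def pie_update(obj, probe, measured_amplitude, alpha=1.0):
        """One object update from a single diffraction pattern (2-D complex arrays)."""
        exit_wave = probe * obj
        fw = np.fft.fft2(exit_wave)
        fw_corrected = measured_amplitude * np.exp(1j * np.angle(fw))  # keep phase, replace modulus
        exit_new = np.fft.ifft2(fw_corrected)
        step = alpha * np.conj(probe) / (np.abs(probe) ** 2).max()
        return obj + step * (exit_new - exit_wave)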

  8. Mechanical and Physical Properties of Hydrophobized Lightweight Aggregate Concrete with Sewage Sludge.

    Science.gov (United States)

    Suchorab, Zbigniew; Barnat-Hunek, Danuta; Franus, Małgorzata; Łagód, Grzegorz

    2016-04-27

    This article is focused on lightweight aggregate-concrete modified by municipal sewage sludge and lightweight aggregate-concrete obtained from light aggregates. The article presents laboratory examinations of material physical parameters. Water absorptivity of the examined material was decreased by the admixture of water emulsion of reactive polysiloxanes. Water transport properties were determined using Time Domain Reflectometry, an indirect technique for moisture detection in porous media. Together with basic physical parameters, the heat conductivity coefficient λ was determined for both types of lightweight aggregate-concrete. Analysis of moisture and heat properties of the examined materials confirmed the usefulness of light aggregates supplemented with sewage sludge for prospective production.

  9. Power converters for ITER

    CERN Document Server

    Benfatto, I

    2006-01-01

    The International Thermonuclear Experimental Reactor (ITER) is a thermonuclear fusion experiment designed to provide long deuterium–tritium burning plasma operation. After a short description of ITER objectives, the main design parameters and the construction schedule, the paper describes the electrical characteristics of the French 400 kV grid at Cadarache: the European site proposed for ITER. Moreover, the paper describes the main requirements and features of the power converters designed for the ITER coil and additional heating power supplies, characterized by a total installed power of about 1.8 GVA, modular design with basic units up to 90 MVA continuous duty, dc currents up to 68 kA, and voltages from 1 kV to 1 MV dc.

  10. Wet scrubber technology for tritium confinement at ITER

    Energy Technology Data Exchange (ETDEWEB)

    Perevezentsev, A.N., E-mail: alexander.perevezentsev@iter.org [ITER Organization, CS 90 046, 13067 St Paul lez Durance Cedex (France); Andreev, B.M.; Rozenkevich, M.B.; Pak, Yu.S.; Ovcharov, A.V.; Marunich, S.A. [Mendeleev University of Chemical Technology, 125047 Miusskaya Sq. 9, Moscow (Russian Federation)

    2010-12-15

    Operation of the ITER machine with tritium plasma requires tritium confinement systems to protect workers and the environment. Tritium confinement at ITER is based on a multistage approach. The final stage provides tritium confinement in building sectors and consists of the buildings' walls as physical barriers and control of sub-atmospheric pressure in those volumes as a dynamic barrier. The dynamic part of the confinement function shall be provided by safety-important components that are available whenever required. Detritiation of air prior to its release to the environment is based on catalytic conversion of tritium-containing gaseous species to water vapour, followed by their isotopic exchange with liquid water in a packed-bed scrubber column. Wet scrubber technology has been selected because of its advantages over the conventional air detritiation technique based on gas drying by water adsorption: the most important design target, system availability, was very difficult to meet with conventional water adsorption driers. This paper presents results of an experimental trial to validate the application of wet scrubber technology in the ITER tritium confinement system, together with a process evaluation using a purpose-developed simulation computer code.

  11. Efficient approach to simulate EM loads on massive structures in ITER machine

    Energy Technology Data Exchange (ETDEWEB)

    Alekseev, A. [ITER Organization, Route de Vinon sur Verdon, 13115 St. Paul-Lez-Durance (France); Andreeva, Z.; Belov, A.; Belyakov, V.; Filatov, O. [D.V. Efremov Scientific Research Institute, 196641 St. Petersburg (Russian Federation); Gribov, Yu.; Ioki, K. [ITER Organization, Route de Vinon sur Verdon, 13115 St. Paul-Lez-Durance (France); Kukhtin, V.; Labusov, A.; Lamzin, E.; Lyublin, B.; Malkov, A.; Mazul, I. [D.V. Efremov Scientific Research Institute, 196641 St. Petersburg (Russian Federation); Rozov, V.; Sugihara, M. [ITER Organization, Route de Vinon sur Verdon, 13115 St. Paul-Lez-Durance (France); Sychevsky, S., E-mail: sytch@sintez.niiefa.spb.su [D.V. Efremov Scientific Research Institute, 196641 St. Petersburg (Russian Federation)

    2013-10-15

    Highlights: ► A modelling technique to predict EM loads in ITER conducting structures is presented. ► The technique provides low computational cost and parallel computations. ► Detailed models were built for the system “vacuum vessel, cryostat, thermal shields”. ► EM loads on massive in-vessel structures were simulated with the use of local models. ► A flexible combination of models enables desired accuracy of load distributions. -- Abstract: Operation of the ITER machine is associated with high electromagnetic (EM) loads. An essential contributor to EM loads is eddy currents induced in passive conductive structures. Reasoning from the ITER construction, a modelling technique has been developed and applied in computations to efficiently predict anticipated loads. The technique allows us to avoid building a global 3D finite-element (FE) model that requires meshing of the conducting structures and their vacuum environment into 3D solid elements that leads to high computational cost. The key features of the proposed technique are: (i) the use of an existing shell model for the system “vacuum vessel (VV), cryostat, and thermal shields (TS)” implementing the magnetic shell approach. A solution is obtained in terms of a single-component, in this case, vector electric potential taken within the conducting shells of the “VV + cryostat + TS” system. (ii) EM loads on in-vessel conducting structures are simulated with the use of local FE models. The local models use either the 3D solid body or shell approximations. Reasoning from the simulation efficiency, the local boundary conditions are put with respect to the total field or an external field. The use of an integral-differential formulation and special procedures ensures smooth and accurate simulated distributions of fields from current sources of any geometry. The local FE models have been developed and applied for EM analyses of a variety of the ITER components including the diagnostic systems

  12. Iteration and accelerator dynamics

    International Nuclear Information System (INIS)

    Peggs, S.

    1987-10-01

    Four examples of iteration in accelerator dynamics are studied in this paper. The first three show how iterations of the simplest maps reproduce most of the significant nonlinear behavior in real accelerators. Each of these examples can be easily reproduced by the reader, at the minimal cost of writing only 20 or 40 lines of code. The fourth example outlines a general way to iteratively solve nonlinear difference equations, analytically or numerically
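
    The abstract notes that such examples need only a few dozen lines of code. As a hedged illustration (not necessarily one of the four maps studied in the paper), the Chirikov standard map, a classic one-turn model of a nonlinear kick, can be iterated as follows:

    import math

    def standard_map(theta, p, K, n_turns):
        """Iterate the kicked-rotor (standard) map and return the orbit."""
        orbit = [(theta, p)]
        for _ in range(n_turns):
            p = (p + K * math.sin(theta)) % (2 * math.pi)
            theta = (theta + p) % (2 * math.pi)
            orbit.append((theta, p))
        return orbit

    # Example: a weakly chaotic orbit for kick strength K = 1.
    trajectory = standard_map(theta=1.0, p=0.5, K=1.0, n_turns=1000)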

  13. ITER ITA newsletter No. 31, June 2006

    International Nuclear Information System (INIS)

    2006-07-01

    This issue of ITER ITA (ITER Transitional Arrangements) newsletter contains concise information about the initialling of the ITER Agreement and its related instruments by the seven ITER Parties, which took place in Brussels on 24 May 2006. The initialling constituted the final act of the ITER negotiations. It confirmed the Parties' common acceptance of the negotiated texts, ad referendum, and signalled their intention to move forward towards the entry into force of the ITER Agreement as soon as possible. 'ITER - Uniting science today, global energy tomorrow' was the theme of a number of media events timed to accompany a remarkable day in the history of the ITER international venture: 24 May 2006, the initialling of the ITER international agreement.

  14. Status of the ITER EDA

    International Nuclear Information System (INIS)

    Aymar, R.

    2000-01-01

    This article summarizes progress made in the ITER Engineering Design Activities in the period between the ITER Meeting in Tokyo (January 2000) and June 2000. Topics: Termination of EDA, Joint Central Team and Support, Task Assignments, ITER Physics, Urgent and High Priority Physics Research Areas

  15. Reducing the effects of acoustic heterogeneity with an iterative reconstruction method from experimental data in microwave induced thermoacoustic tomography

    International Nuclear Information System (INIS)

    Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo

    2015-01-01

    Purpose: An iterative reconstruction method has previously been reported by the authors of this paper. However, that method was demonstrated solely with numerical simulations, and it is essential to apply it under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method to reduce the effects of acoustic heterogeneity using experimental data in microwave induced thermoacoustic tomography. Methods: Most existing reconstruction methods must be combined with ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases the system complexity. In contrast, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue using only the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment are performed to validate the iterative reconstruction method. Results: Using the estimated velocity distribution, the target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of the iterative reconstruction method over existing correction methods is that it improves the quality of the microwave absorption image without increasing the system complexity.
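
    One ingredient of the scheme described above, the simultaneous algebraic reconstruction technique (SART), can be sketched in a few lines (assumed names; the paper additionally couples it with the time reversal mirror and fast marching steps, which are omitted here):

    import numpy as np

    def sart(A, b, n_iter=20, relax=0.5):
        """A: nonnegative system matrix (m x n); b: measured data (m,)."""
        row_sums = A.sum(axis=1) + 1e-12
        col_sums = A.sum(axis=0) + 1e-12
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            residual = (b - A @ x) / row_sums
            x = x + relax * (A.T @ residual) / col_sums
        return x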

  16. ITER EDA newsletter. V. 10, special issue

    International Nuclear Information System (INIS)

    2001-07-01

    This ITER EDA Newsletter includes summaries of the reports of the ITER EDA JCT Physics Unit on ITER physics R and D during the Engineering Design Activities (EDA), of the ITER EDA JCT at the Naka Joint Work Site on ITER technology R and D during the EDA, and of the Safety, Environment and Health Group of the ITER EDA JCT at the Garching Joint Work Site on EDA activities related to safety

  17. A framework for joint management of regional water-energy systems

    DEFF Research Database (Denmark)

    Cardenal, Silvio Javier Pereira

    in unrealistic operation rules, such as emptying the reservoir during the month with the highest price, which can only be avoided through the inclusion of additional constraints. In contrast, including a simple representation of the power market into a hydro-economic model resulted in more realistic reservoir...... operation policies that adapted to changing inflow conditions. The effects of spatial aggregation on the analysis of water-power systems were evaluated by comparing results from an aggregated and a partially disaggregated model. The aggregated model, where all reservoirs were represented as a single...... equivalent energy reservoir, provided valuable insights into the management of water and power systems, but only at the Peninsula scale. The disaggregated model revealed that optimal allocations were achieved by managing water resources differently in each river basin according to local inflow, storage...

  18. Environmental degradation, economic growth and energy consumption: Evidence of the environmental Kuznets curve in Malaysia

    International Nuclear Information System (INIS)

    Saboori, Behnaz; Sulaiman, Jamalludin

    2013-01-01

    This paper tests for the short and long-run relationship between economic growth, carbon dioxide (CO2) emissions and energy consumption, using the Environmental Kuznets Curve (EKC) by employing both the aggregated and disaggregated energy consumption data in Malaysia for the period 1980–2009. The Autoregressive Distributed Lag (ARDL) methodology and Johansen–Juselius maximum likelihood approach were used to test the cointegration relationship; and the Granger causality test, based on the vector error correction model (VECM), to test for causality. The study does not support an inverted U-shaped relationship (EKC) when aggregated energy consumption data was used. When data was disaggregated based on different energy sources such as oil, coal, gas and electricity, the study does show evidence of the EKC hypothesis. The long-run Granger causality test shows that there is bi-directional causality between economic growth and CO2 emissions, with coal, gas, electricity and oil consumption. This suggests that decreasing energy consumption such as coal, gas, electricity and oil appears to be an effective way to control CO2 emissions but simultaneously will hinder economic growth. Thus suitable policies related to the efficient consumption of energy resources and consumption of renewable sources are required. - Highlights: • We investigated the EKC hypothesis by using Malaysian energy aggregated and disaggregated data. • It was found that the EKC is not supported, using the aggregated data (energy consumption). • However using disaggregated energy data (oil, coal and electricity) there is evidence of EKC. • Causality shows no causal relationship between economic growth and energy consumption in the short run. • Economic growth Granger causes energy consumption and energy consumption causes CO2 emissions in the long run

  19. ITER CTA newsletter. No. 13, October 2002

    International Nuclear Information System (INIS)

    2002-11-01

    This ITER CTA newsletter issue comprises concise information about an ITER related meeting concerning the joint implementation of ITER - the fifth ITER Negotiations Meeting - which was held in Toronto, Canada, 19-20 September, 2002, and information about assessment of the possible ITER site in Clarington, Ontario, Canada, which was the subject of the first official stage of the Joint Assessment of Specific Sites (JASS) for the ITER Project. This assessment was completed just before the Fifth ITER Negotiations Meeting

  20. The ITER remote maintenance system

    International Nuclear Information System (INIS)

    Tesini, A.; Palmer, J.

    2008-01-01

    The aim of this paper is to summarize the ITER approach to machine components maintenance. A major objective of the ITER project is to demonstrate that a future power producing fusion device can be maintained effectively and offer practical levels of plant availability. During its operational lifetime, many systems of the ITER machine will require maintenance and modification; this can be achieved using remote handling methods. The need for timely, safe and effective remote operations on a machine as complex as ITER and within one of the world's most hostile remote handling environments represents a major challenge at every level of the ITER Project organization, engineering and technology. The basic principles of fusion reactor maintenance are presented. An updated description of the ITER remote maintenance system is provided. This includes the maintenance equipment used inside the vacuum vessel, inside the hot cell and the hot cell itself. The correlation between the functions of the remote handling equipment, of the hot cell and of the radwaste processing system is also described. The paper concludes that ITER has equipped itself with a good platform to tackle the challenges presented by its own maintenance and upgrade needs

  1. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    Science.gov (United States)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel
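
    The advantage of Gauss-Seidel over Jacobi iteration referred to above comes from using, within each sweep, the unknowns that have already been updated. A generic sketch for a linear system Ax = b (illustrative only, not the non-LTE transfer solver itself):

    import numpy as np

    def gauss_seidel(A, b, n_sweeps=50):
        """Plain Gauss-Seidel sweeps for Ax = b (A assumed to have nonzero diagonal)."""
        n = len(b)
        x = np.zeros(n)
        for _ in range(n_sweeps):
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - sigma) / A[i, i]
        return x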

  2. Aggregation kinetics and structure of cryoimmunoglobulins clusters

    CERN Document Server

    De Spirito, M; Bassi, F A; Di Stasio, E; Giardina, B; Arcovito, G

    2002-01-01

    Cryoimmunoglobulins are pathological antibodies characterized by a temperature-dependent reversible insolubility. Rheumatoid factors (RF) are immunoglobulins possessing anti-immunoglobulin activity and usually consist of an IgM antibody that recognizes IgG as antigen. These proteins are present in sera of patients affected by a large variety of different pathologies, such as HCV infection, neoplastic and autoimmune diseases. Aggregation and precipitation of cryoimmunoglobulins, leading to vasculitis, are physical phenomena behind such pathologies. A deep knowledge of the physico-chemical mechanisms regulating such phenomena plays a fundamental role in biological and clinical applications. In this work, a preliminary investigation of the aggregation kinetics and of the final macromolecular structure of the aggregates is presented. Through static light scattering techniques, the gyration radius Rg and the fractal dimension Dm of the growing clusters have been determined. However, while the initial ...

  3. Nano-aggregates: emerging delivery tools for tumor therapy.

    Science.gov (United States)

    Sharma, Vinod Kumar; Jain, Ankit; Soni, Vandana

    2013-01-01

    A plethora of formulation techniques have been reported in the literature for site-specific targeting of water-soluble and -insoluble anticancer drugs. Along with other vesicular and particulate carrier systems, nano-aggregates have recently emerged as a novel supramolecular colloidal carrier with promise for using poorly water-soluble drugs in molecular targeted therapies. Nano-aggregates possess some inherent properties such as size in the nanometers, high loading efficiency, and in vivo stability. Nano-aggregates can provide site-specific drug delivery via either a passive or active targeting mechanism. Nano-aggregates are formed from a polymer-drug conjugated amphiphilic block copolymer. They are suitable for encapsulation of poorly water-soluble drugs by covalent conjugation as well as physical encapsulation. Because of physical encapsulation, a maximum amount of drug can be loaded in nano-aggregates, which helps to achieve a sufficiently high drug concentration at the target site. Active transport can be achieved by conjugating a drug with vectors or ligands that bind specifically to receptors being overexpressed in the tumor cells. In this review, we explore synthesis and tumor targeting potential of nano-aggregates with active and passive mechanisms, and we discuss various characterization parameters, ex vivo studies, biodistribution studies, clinical trials, and patents.

  4. A Secure-Enhanced Data Aggregation Based on ECC in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qiang Zhou

    2014-04-01

    Full Text Available Data aggregation is an important technique for reducing the energy consumption of sensor nodes in wireless sensor networks (WSNs. However, compromised aggregators may forge false values as the aggregated results of their child nodes in order to conduct stealthy attacks or steal other nodes’ privacy. This paper proposes a Secure-Enhanced Data Aggregation based on Elliptic Curve Cryptography (SEDA-ECC. The design of SEDA-ECC is based on the principles of privacy homomorphic encryption (PH and divide-and-conquer. An aggregation tree disjoint method is first adopted to divide the tree into three subtrees of similar sizes, and a PH-based aggregation is performed in each subtree to generate an aggregated subtree result. Then the forged result can be identified by the base station (BS by comparing the aggregated count value. Finally, the aggregated result can be calculated by the BS according to the remaining results that have not been forged. Extensive analysis and simulations show that SEDA-ECC can achieve the highest security level on the aggregated result with appropriate energy consumption compared with other asymmetric schemes.

  5. Iterative procedures for wave propagation in the frequency domain

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seongjai [Rice Univ., Houston, TX (United States); Symes, W.W.

    1996-12-31

    A parallelizable two-grid iterative algorithm incorporating a domain decomposition (DD) method is considered for solving the Helmholtz problem. Since a numerical method requires choosing at least 6 to 8 grid points per wavelength, the coarse-grid problem itself is not an easy task for high frequency applications. We solve the coarse-grid problem using a nonoverlapping DD method. To accelerate the convergence of the iteration, an artificial damping technique and relaxation parameters are introduced. Automatic strategies for finding efficient parameters are discussed. Numerical results are presented to show the effectiveness of the method. It is numerically verified that the rate of convergence of the algorithm depends on the wave number sub-linearly and does not deteriorate as the mesh size decreases.

  6. ITER EDA Newsletter. Vol. 1, No. 1

    International Nuclear Information System (INIS)

    1992-11-01

    After the ITER Engineering Design Activities (EDA) Agreement and Protocol 1 had been signed by the four ITER parties on July 21, 1992 and had entered into force, the ITER Council suggested at its first meeting (Vienna, September 10-11, 1992) that the publication of the ITER Newsletter be continued during the EDA with assistance of the International Atomic Energy Agency. This suggestion was supported by the Agency and subsequently the ITER office in Vienna assumed its responsibilities for planning and executing activities related to the publication of the Newsletter. The ITER EDA Newsletter is planned to be a monthly publication aimed at disseminating broad information and understanding, including the description of the personal and institutional involvements in the ITER project in addition to technical facts about it. The responsibility for the Newsletter rests with the ITER council. In this first issue the signing of the ITER EDA Activities and Protocol 1 is reported. The EDA organizational structure is described. This issue also reports on the first ITER EDA council meeting, the opening of the ITER EDA NAKA Co-Centre, the first meeting of the ITER Technical Advisory Committee, activities of special working groups, an ITER Technical Meeting, as well as ''News in Brief'' and ''Coming Events''

  7. Report of the international symposium for ITER. 'Burning plasma science and technology on ITER'

    International Nuclear Information System (INIS)

    2002-10-01

    This report contains the presentations of the International Symposium for ITER, held on 24 January 2002 on the occasion of the ITER Governmental Negotiations in Tokyo. The symposium was organized by the Japan Atomic Energy Research Institute with the support of the Ministry of Education, Culture, Sports, Science and Technology (MEXT). Meaningful results were obtained through the symposium, especially on the new frontiers of science and technology brought by ITER, accelerated road maps towards realizing fusion energy, and the portfolio of other fusion configurations beyond ITER. Five of the presented papers are indexed individually (J.P.N.)

  8. ITER Council tour of Clarington site

    International Nuclear Information System (INIS)

    Dautovich, D.

    2001-01-01

    The ITER Council meeting was recently held in Toronto on 27 and 28 February. ITER Canada provided local arrangements for the Council meeting on behalf of Europe as the official host. Following the meeting, on 1 March, ITER Canada conducted a tour of the proposed ITER construction site at Clarington, and the ITER Council members attended a luncheon followed by a speech by Dr. Peter Barnard, Chairman and CEO of ITER Canada, at the Empire Club of Canada. The official invitation to participate in these events came from Dr. Peter Harrison, Deputy Minister of Natural Resources Canada. This report provides a brief summary of the events on 1 March

  9. ITER licensing

    International Nuclear Information System (INIS)

    Gordon, C.W.

    2005-01-01

    ITER was fortunate to have four countries interested in ITER siting to the point where licensing discussions were initiated. This experience uncovered the challenges of licensing a first of a kind, fusion machine under different licensing regimes and helped prepare the way for the site specific licensing process. These initial steps in licensing ITER have allowed for refining the safety case and provide confidence that the design and safety approach will be licensable. With site-specific licensing underway, the necessary regulatory submissions have been defined and are well on the way to being completed. Of course, there is still work to be done and details to be sorted out. However, the informal international discussions to bring both the proponent and regulatory authority up to a common level of understanding have laid the foundation for a licensing process that should proceed smoothly. This paper provides observations from the perspective of the International Team. (author)

  10. ITER-FEAT operation

    International Nuclear Information System (INIS)

    Shimomura, Y.; Huget, M.; Mizoguchi, T.; Murakami, Y.; Polevoi, A.; Shimada, M.; Aymar, R.; Chuyanov, V.; Matsumoto, H.

    2001-01-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first 10 years' operation will be devoted primarily to physics issues at low neutron fluence and the following 10 years' operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes such as inductive high Q modes, long pulse hybrid modes, non-inductive steady-state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours per day but also in involving the world-wide fusion communities and in promoting scientific competition among the Parties. (author)

  11. Guidance to Achieve Accurate Aggregate Quantitation in Biopharmaceuticals by SV-AUC.

    Science.gov (United States)

    Arthur, Kelly K; Kendrick, Brent S; Gabrielson, John P

    2015-01-01

    The levels and types of aggregates present in protein biopharmaceuticals must be assessed during all stages of product development, manufacturing, and storage of the finished product. Routine monitoring of aggregate levels in biopharmaceuticals is typically achieved by size exclusion chromatography (SEC) due to its high precision, speed, robustness, and simplicity to operate. However, SEC is error prone and requires careful method development to ensure accuracy of reported aggregate levels. Sedimentation velocity analytical ultracentrifugation (SV-AUC) is an orthogonal technique that can be used to measure protein aggregation without many of the potential inaccuracies of SEC. In this chapter, we discuss applications of SV-AUC during biopharmaceutical development and how characteristics of the technique make it better suited for some applications than others. We then discuss the elements of a comprehensive analytical control strategy for SV-AUC. Successful implementation of these analytical control elements ensures that SV-AUC provides continued value over the long time frames necessary to bring biopharmaceuticals to market. © 2015 Elsevier Inc. All rights reserved.

  12. Layout compliance for triple patterning lithography: an iterative approach

    Science.gov (United States)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for Metal1 layer and possibly Via0 layer. As one of the most challenging problems in TPL, recently layout decomposition efforts have received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable and therefore manual intervention by designers is required. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full chip level layout decomposition requires long computational time and therefore design closure issues continue to linger around in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides the suggestions to fix them. After the layout modification, instead of solving the full chip problem from scratch, our decomposer can provide a quick solution for a selected portion of layout. We believe this framework is efficient, in terms of performance and designer friendly.
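
    At its core, the decomposition problem mentioned above is a 3-coloring of the feature conflict graph, which is why it is NP-hard in general. A hedged, illustrative sketch of a backtracking colorer that either assigns one of the three masks to every feature or reports failure (not the incremental framework proposed in the paper):

    def three_color(graph):
        """graph: dict mapping each feature to the set of features it conflicts with."""
        colors = {}
        nodes = list(graph)

        def assign(k):
            if k == len(nodes):
                return True
            node = nodes[k]
            used = {colors[nb] for nb in graph[node] if nb in colors}
            for c in range(3):                 # the three masks
                if c not in used:
                    colors[node] = c
                    if assign(k + 1):
                        return True
                    del colors[node]
            return False

        return colors if assign(0) else None   # None: not triple-patterning decomposable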

  13. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    International Nuclear Information System (INIS)

    Jin Zhao; Zhang Han-Ming; Yan Bin; Li Lei; Wang Lin-Yuan; Cai Ai-Long

    2016-01-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The introduction of a selective matrix helps to improve reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between Fourier and image space. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as with the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. (paper)

  14. Monetary Value of Quality-Adjusted Life Years (QALY) among Patients with Cardiovascular Disease: a Willingness to Pay Study (WTP).

    Science.gov (United States)

    Moradi, Najmeh; Rashidian, Arash; Rasekh, Hamid Reza; Olyaeemanesh, Alireza; Foroughi, Mahnoosh; Mohammadi, Teymoor

    2017-01-01

    The aim of this study was to estimate the monetary value of a QALY among patients with heart disease and to identify its determinants. A cross-sectional survey was conducted through face-to-face interviews with 196 patients with cardiovascular disease from two heart hospitals in Tehran, Iran, to estimate the value of a QALY using disaggregated and aggregated approaches. The EuroQol-5 Dimension (EQ-5D) questionnaire, Visual Analogue Scale (VAS), Time Trade-Off (TTO) and contingent valuation WTP techniques were employed, first to elicit patients' preferences and then to estimate WTP for a QALY. The association of patients' characteristics with WTP for a QALY was assessed through a Heckman selection model. The mean willingness to pay per QALY estimated by the disaggregated approach ranged from 2,799 to 3,599 US dollars. This is higher than the values estimated from the aggregated methods (USD 2,256 to 3,137). However, in both approaches the values were less than one Gross Domestic Product (GDP) per capita of Iran. Significant variables were current health state, education, age, marital status, number of comorbidities, and household cost group. Our results challenge two major issues: the first is a policy challenge concerning the WHO recommendation to use less than three times GDP per capita as a cost-effectiveness threshold value; the second is an analytical challenge related to patients with zero QALY gain. More scrutiny is suggested on how patients with a full health state valuation should be dealt with and what value could be used in estimating the value of a QALY when the disaggregated approach is used.
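
    As a hedged note on the two estimators mentioned above (the paper's exact specifications may differ), the disaggregated approach is commonly taken to average the individual WTP-to-QALY-gain ratios, while the aggregated approach divides the mean WTP by the mean QALY gain:

    V_{\mathrm{disaggregated}} = \frac{1}{n}\sum_{i=1}^{n}\frac{\mathrm{WTP}_i}{\Delta \mathrm{QALY}_i},
    \qquad
    V_{\mathrm{aggregated}} = \frac{\frac{1}{n}\sum_{i=1}^{n}\mathrm{WTP}_i}{\frac{1}{n}\sum_{i=1}^{n}\Delta \mathrm{QALY}_i}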

  15. Laser cleaning of ITER's diagnostic mirrors

    Science.gov (United States)

    Skinner, C. H.; Gentile, C. A.; Doerner, R.

    2012-10-01

    Practical methods to clean ITER's diagnostic mirrors and restore reflectivity will be critical to ITER's plasma operations. We report on laser cleaning of single crystal molybdenum mirrors coated with either carbon or beryllium films 150 - 420 nm thick. A 1.06 μm Nd laser system provided 220 ns pulses at 8 kHz with typical power densities of 1-2 J/cm^2. The laser beam was fiber optically coupled to a scanner suitable for tokamak applications. The efficacy of mirror cleaning was assessed with a new technique that combines microscopic imaging and reflectivity measurements [1]. The method is suitable for hazardous materials such as beryllium as the mirrors remain sealed in a vacuum chamber. Excellent restoration of reflectivity for the carbon coated Mo mirrors was observed after laser scanning under vacuum conditions. For the beryllium coated mirrors restoration of reflectivity has so far been incomplete and modeling indicates that a shorter duration laser pulse is needed. No damage of the molybdenum mirror substrates was observed.[4pt][1] C.H. Skinner et al., Rev. Sci. Instrum. at press.

  16. Constructing Frozen Jacobian Iterative Methods for Solving Systems of Nonlinear Equations, Associated with ODEs and PDEs Using the Homotopy Method

    Directory of Open Access Journals (Sweden)

    Uswah Qasim

    2016-03-01

    Full Text Available A homotopy method is presented for the construction of frozen Jacobian iterative methods. The frozen Jacobian iterative methods are attractive because the inversion of the Jacobian is performed in terms of LU factorization only once, for a single instance of the iterative method. We embedded parameters in the iterative methods with the help of the homotopy method: the values of the parameters are determined in such a way that a better convergence rate is achieved. The proposed homotopy technique is general and has the ability to construct different families of iterative methods, for solving weakly nonlinear systems of equations. Further iterative methods are also proposed for solving general systems of nonlinear equations.
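
    A hedged sketch of the frozen-Jacobian idea itself (not one of the particular families constructed in the paper): the Jacobian is LU-factorized once per cycle and the same factorization is reused for several correction steps, trading per-step cost against convergence order:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def frozen_newton(F, J, x0, inner_steps=3, n_cycles=10, tol=1e-10):
        """F: residual function, J: Jacobian function, x0: initial guess."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_cycles):
            lu, piv = lu_factor(J(x))            # single LU factorization per cycle
            for _ in range(inner_steps):         # reuse the factorization
                x = x - lu_solve((lu, piv), F(x))
                if np.linalg.norm(F(x)) < tol:
                    return x
        return x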

  17. Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    National Research Council Canada - National Science Library

    Rodriguez, June F

    2008-01-01

    .... More specifically, investigating how to accurately aggregate hierarchical lower-level (higher resolution) models into the next higher-level in order to reduce the complexity of the overall simulation model...

  18. Dis-aggregation of airborne flux measurements using footprint analysis

    NARCIS (Netherlands)

    Hutjes, R.W.A.; Vellinga, O.S.; Gioli, B.; Miglietta, F.

    2010-01-01

    Aircraft measurements of turbulent fluxes are generally being made with the objective to obtain an estimate of regional exchanges between land surface and atmosphere, to investigate the spatial variability of these fluxes, but also to learn something about the fluxes from some or all of the land

  19. Two-photon absorption of a supramolecular pseudoisocyanine J-aggregate assembly

    International Nuclear Information System (INIS)

    Belfield, Kevin D.; Bondar, Mykhailo V.; Hernandez, Florencio E.; Przhonska, Olga V.; Yao, Sheng

    2006-01-01

    Linear spectral properties, including excitation anisotropy, of pseudoisocyanine or 1,1'-diethyl-2,2'-cyanine iodide (PIC) J-aggregates in aqueous solutions with J-band position at 573 nm were investigated. Two-photon absorption of PIC J-aggregates and monomer molecules was studied using an open aperture Z-scan technique. A strong enhancement of the two-photon absorption cross-section of PIC in the supramolecular J-aggregate assembly was observed in aqueous solution. This enhancement is attributed to a strong coupling of the molecular transition dipoles. No two-photon absorption at the peak of the J-band was detected

  20. Spontaneous high piezoelectricity in poly(vinylidene fluoride) nanoribbons produced by iterative thermal size reduction technique.

    Science.gov (United States)

    Kanik, Mehmet; Aktas, Ozan; Sen, Huseyin Sener; Durgun, Engin; Bayindir, Mehmet

    2014-09-23

    We produced kilometer-long, endlessly parallel, spontaneously piezoelectric and thermally stable poly(vinylidene fluoride) (PVDF) micro- and nanoribbons using iterative size reduction technique based on thermal fiber drawing. Because of high stress and temperature used in thermal drawing process, we obtained spontaneously polar γ phase PVDF micro- and nanoribbons without electrical poling process. On the basis of X-ray diffraction (XRD) analysis, we observed that PVDF micro- and nanoribbons are thermally stable and conserve the polar γ phase even after being exposed to heat treatment above the melting point of PVDF. Phase transition mechanism is investigated and explained using ab initio calculations. We measured an average effective piezoelectric constant as -58.5 pm/V from a single PVDF nanoribbon using a piezo evaluation system along with an atomic force microscope. PVDF nanoribbons are promising structures for constructing devices such as highly efficient energy generators, large area pressure sensors, artificial muscle and skin, due to the unique geometry and extended lengths, high polar phase content, high thermal stability and high piezoelectric coefficient. We demonstrated two proof of principle devices for energy harvesting and sensing applications with a 60 V open circuit peak voltage and 10 μA peak short-circuit current output.

  1. Manufacturing preparations for the European Vacuum Vessel Sector for ITER

    International Nuclear Information System (INIS)

    Jones, Lawrence; Arbogast, Jean François; Bayon, Angel; Bianchi, Aldo; Caixas, Joan; Facca, Aldo; Fachin, Gianbattista; Fernández, José; Giraud, Benoit; Losasso, Marcello; Löwer, Thorsten; Micó, Gonzalo; Pacheco, Jose Miguel; Paoletti, Roberto; Sanguinetti, Gian Paolo; Stamos, Vassilis; Tacconelli, Massimiliano; Trentea, Alexandru; Utin, Yuri

    2012-01-01

    The contract for the seven European Sectors of the ITER Vacuum Vessel, which has very tight tolerances and high density of welding, was placed at the end of 2010 with AMW, a consortium of three companies. The start-up of the engineering, including R and D, design and analysis activities of this large and complex contract, one of the largest placed by F4E, the European Domestic Agency for ITER, is described. The statutory and regulatory requirements of ITER Organization and the French Nuclear Safety regulations have made the design development subject to rigorous controls. AMW was able to make use of the previous extensive R and D and prototype work carried out during the past 9 years, especially in relation to advanced welding and inspection techniques. The paper describes the manufacturing methodology with the focus on controlling distortion with predictions by analysis, avoiding use of welded-on jigs, and making use of low heat input narrow-gap welding with electron beam welding as far as possible and narrow-gap TIG when not. Further R and D and more than ten significant mock-ups are described. All these preparations will help to assure the successful manufacture of this critical path item of ITER.

  2. ITER EDA Newsletter. V. 3, no. 8

    International Nuclear Information System (INIS)

    1994-08-01

    This ITER EDA (Engineering Design Activities) Newsletter issue reports on the sixth ITER Council meeting, introduces the newly appointed ITER Director, and reports on his address to the ITER Council. The vacuum tank for ITER model coil testing, installed at JAERI, Naka, Japan, is also briefly described

  3. F4E studies for the electromagnetic analysis of ITER components

    Energy Technology Data Exchange (ETDEWEB)

    Testoni, P., E-mail: pietro.testoni@f4e.europa.eu [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Cau, F.; Portone, A. [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Albanese, R. [Associazione EURATOM/ENEA/CREATE, DIETI, Università Federico II di Napoli, Napoli (Italy); Juirao, J. [Numerical Analysis TEChnologies S.L. (NATEC), c/ Marqués de San Esteban, 52 Entlo D Gijón (Spain)

    2014-10-15

    Highlights: • Several ITER components have been analyzed from the electromagnetic point of view. • Categorization of DINA load cases is described. • VDEs, MDs and MFD have been studied. • Integral values of forces and moments components versus time have been computed for all the ITER components under study. - Abstract: Fusion for Energy (F4E) is involved in a significant number of activities in the area of electromagnetic analysis in support of ITER general design and EU in-kind procurement. In particular several ITER components (vacuum vessel, blanket shield modules and first wall panels, test blanket modules, ICRH antenna) are being analyzed from the electromagnetic point of view. In this paper we give an updated description of our main activities, highlighting the main assumptions, objectives, results and conclusions. The plasma instabilities we consider, typically disruptions and VDEs, can be both toroidally symmetric and asymmetric. This implies that, depending on the specific component and loading conditions, the FE models we use span from a 10° sector up to the full 360° of the ITER machine. The techniques for simulating the electromagnetic phenomena involved in a disruption and the postprocessing of the results to obtain the loads acting on the structures are described. Finally we summarize the typical loads applied to different components and give a critical view of the results.

  4. Intuitionistic fuzzy aggregation and clustering

    CERN Document Server

    Xu, Zeshui

    2012-01-01

    This book offers a systematic introduction to the clustering algorithms for intuitionistic fuzzy values, the latest research results in intuitionistic fuzzy aggregation techniques, the extended results in interval-valued intuitionistic fuzzy environments, and their applications in multi-attribute decision making, such as supply chain management, military system performance evaluation, project management, venture capital, information system selection, building materials classification, and operational plan assessment, etc.

  5. ITER technical advisory committee meeting

    International Nuclear Information System (INIS)

    Fujiwara, M.

    2001-01-01

    The 17th Meeting of the ITER Technical Advisory Committee (TAC-17) was held on February 19-22 at the ITER Garching Work Site in Germany. The objective of the meeting was to review the Draft Final Design Report of ITER-FEAT and assess the ability of the self-consistent overall design both to satisfy the technical objectives previously defined and to meet the cost limitations. TAC-17 was also organized to confirm that the design and its critical elements, with emphasis on the key recommendations made at previous TAC meetings, are such as to strengthen confidence in starting ITER construction. It was also intended to provide the ITER Council, scheduled to meet on 27 and 28 February in Toronto, with a technical assessment and key recommendations on the above-mentioned report

  6. iHAT: interactive Hierarchical Aggregation Table for Genetic Association Data

    Directory of Open Access Journals (Sweden)

    Heinrich Julian

    2012-05-01

    Full Text Available Abstract In the search for single-nucleotide polymorphisms which influence the observable phenotype, genome wide association studies have become an important technique for the identification of associations between genotype and phenotype of a diverse set of sequence-based data. We present a methodology for the visual assessment of single-nucleotide polymorphisms using interactive hierarchical aggregation techniques combined with methods known from traditional sequence browsers and cluster heatmaps. Our tool, the interactive Hierarchical Aggregation Table (iHAT, facilitates the visualization of multiple sequence alignments, associated metadata, and hierarchical clusterings. Different color maps and aggregation strategies as well as filtering options support the user in finding correlations between sequences and metadata. Similar to other visualizations such as parallel coordinates or heatmaps, iHAT relies on the human pattern-recognition ability for spotting patterns that might indicate correlation or anticorrelation. We demonstrate iHAT using artificial and real-world datasets for DNA and protein association studies as well as expression Quantitative Trait Locus data.

  7. ITER EDA status

    International Nuclear Information System (INIS)

    Aymar, R.

    2001-01-01

    The Project has focused on drafting the Plant Description Document (PDD), which will be published as the Technical Basis for the ITER Final Design Report (FDR), and its related documentation in time for the ITER review process. The preparations have involved continued intensive detailed design work, analyses and assessments by the Home Teams and the Joint Central Team, who have co-operated closely and efficiently. The main technical document has been completed in time for circulation, as planned, to TAC members for their review at TAC-17 (19-22 February 2001). Some of the supporting documents, such as the Plant Design Specification (PDS), Design Requirements and Guidelines (DRG1 and DRG2), and the Plant Safety Requirement (PSR) are also available for reference in draft form. A summary paper of the PDD for the Council's information is available as a separate document. A new documentation structure for the Project has been established. This hierarchical structure for documentation facilitates the entire organization in a way that allows better change control and avoids duplications. The initiative was intended to make this documentation system valid for the construction and operation phases of ITER. As requested, the Director and the JCT have been assisting the Explorations to plan for future joint technical activities during the Negotiations, and to consider technical issues important for ITER construction and operation for their introduction in the draft of a future joint implementation agreement. As charged by the Explorers, the Director has held discussions with the Home Team Leaders in order to prepare for the staffing of the International Team and Participants Teams during the Negotiations (Co-ordinated Technical Activities, CTA) and also in view of informing all ITER staff about their future directions in a timely fashion. One important element of the work was the completion by the Parties' industries of costing studies of about 83 ''procurement packages

  8. ITER ITA newsletter. No. 8, September 2003

    International Nuclear Information System (INIS)

    2003-10-01

    This issue of ITER ITA (ITER Transitional Arrangements) newsletter contains concise information about ITER related activities, including Robert Aymar's departure from ITER for CERN, ITER related issues at the IAEA General Conference, the status and prospects of thermonuclear power, and activity during the ITA on materials for vessel and in-vessel components

  9. CT imaging of congenital lung lesions: effect of iterative reconstruction on diagnostic performance and radiation dose

    International Nuclear Information System (INIS)

    Haggerty, Jay E.; Smith, Ethan A.; Dillman, Jonathan R.; Kunisaki, Shaun M.

    2015-01-01

    Different iterative reconstruction techniques are available for use in pediatric computed tomography (CT), but these techniques have not been systematically evaluated in infants. To determine the effect of iterative reconstruction on diagnostic performance, image quality and radiation dose in infants undergoing CT evaluation for congenital lung lesions. A retrospective review of contrast-enhanced chest CT in infants (<1 year) with congenital lung lesions was performed. CT examinations were reviewed to document the type of lung lesion, vascular anatomy, image noise measurements and image reconstruction method. CTDIvol was used to calculate size-specific dose estimates (SSDE). CT findings were correlated with intraoperative and histopathological findings. Analysis of variance and the Student's t-test were used to compare image noise measurements and radiation dose estimates between groups. Fifteen CT examinations used filtered back projection (FBP; mean age: 84 days), 15 used adaptive statistical iterative reconstruction (ASiR; mean age: 93 days), and 11 used model-based iterative reconstruction (MBIR; mean age: 98 days). Compared to operative findings, 13/15 (87%), 14/15 (93%) and 11/11 (100%) lesions were correctly characterized using FBP, ASiR and MBIR, respectively. Arterial anatomy was correctly identified in 12/15 (80%) using FBP, 13/15 (87%) using ASiR and 11/11 (100%) using MBIR. Image noise was less for MBIR vs. ASiR (P < 0.0001). Mean SSDE was different among groups (P = 0.003; FBP = 7.35 mGy, ASiR = 1.89 mGy, MBIR = 1.49 mGy). Congenital lung lesions can be adequately characterized in infants using iterative CT reconstruction techniques while maintaining image quality and lowering radiation dose. (orig.)

  10. CT imaging of congenital lung lesions: effect of iterative reconstruction on diagnostic performance and radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    Haggerty, Jay E.; Smith, Ethan A.; Dillman, Jonathan R. [University of Michigan Health System, Section of Pediatric Radiology, Department of Radiology, C.S. Mott Children's Hospital, Ann Arbor, MI (United States); Kunisaki, Shaun M. [University of Michigan Health System, Section of Pediatric Surgery, Department of Surgery, C.S. Mott Children's Hospital, Ann Arbor, MI (United States)

    2015-07-15

    Different iterative reconstruction techniques are available for use in pediatric computed tomography (CT), but these techniques have not been systematically evaluated in infants. To determine the effect of iterative reconstruction on diagnostic performance, image quality and radiation dose in infants undergoing CT evaluation for congenital lung lesions. A retrospective review of contrast-enhanced chest CT in infants (<1 year) with congenital lung lesions was performed. CT examinations were reviewed to document the type of lung lesion, vascular anatomy, image noise measurements and image reconstruction method. CTDIvol was used to calculate size-specific dose estimates (SSDE). CT findings were correlated with intraoperative and histopathological findings. Analysis of variance and the Student's t-test were used to compare image noise measurements and radiation dose estimates between groups. Fifteen CT examinations used filtered back projection (FBP; mean age: 84 days), 15 used adaptive statistical iterative reconstruction (ASiR; mean age: 93 days), and 11 used model-based iterative reconstruction (MBIR; mean age: 98 days). Compared to operative findings, 13/15 (87%), 14/15 (93%) and 11/11 (100%) lesions were correctly characterized using FBP, ASiR and MBIR, respectively. Arterial anatomy was correctly identified in 12/15 (80%) using FBP, 13/15 (87%) using ASiR and 11/11 (100%) using MBIR. Image noise was less for MBIR vs. ASiR (P < 0.0001). Mean SSDE was different among groups (P = 0.003; FBP = 7.35 mGy, ASiR = 1.89 mGy, MBIR = 1.49 mGy). Congenital lung lesions can be adequately characterized in infants using iterative CT reconstruction techniques while maintaining image quality and lowering radiation dose. (orig.)

  11. A heuristic statistical stopping rule for iterative reconstruction in emission tomography

    International Nuclear Information System (INIS)

    Ben Bouallegue, F.; Mariano-Goulart, D.; Crouzet, J.F.

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for maximum likelihood expectation maximization (MLEM) reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidian distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the Geant4 application in emission tomography (GATE) platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time. (author)
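
    The record evaluates a statistical stopping rule for MLEM reconstruction. The sketch below only illustrates where such a rule plugs into the iteration: a standard MLEM update with a generic stopping check based on the relative change of the Poisson log-likelihood, standing in for the paper's heuristic statistical criterion, which is not reproduced here.

```python
import numpy as np

# Minimal MLEM sketch with a pluggable stopping rule (not the paper's criterion).
# A relative log-likelihood change threshold stands in for the heuristic
# statistical stopping rule, purely to show where such a check is applied.

def mlem(system_matrix: np.ndarray, counts: np.ndarray,
         max_iters: int = 100, tol: float = 1e-4) -> np.ndarray:
    A = system_matrix                        # shape: (n_bins, n_voxels)
    x = np.ones(A.shape[1])                  # initial image estimate
    sensitivity = A.sum(axis=0)              # column sums, assumed > 0
    prev_ll = -np.inf
    for _ in range(max_iters):
        proj = A @ x                          # forward projection
        ratio = counts / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sensitivity      # multiplicative MLEM update
        ll = np.sum(counts * np.log(np.maximum(proj, 1e-12)) - proj)
        if abs(ll - prev_ll) < tol * abs(prev_ll):   # stand-in stopping rule
            break
        prev_ll = ll
    return x
```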

  12. ITER CTA newsletter. No. 9

    International Nuclear Information System (INIS)

    2002-06-01

    This ITER CTA newsletter contains information about the Fourth Negotiations Meeting on the Joint Implementation of ITER held in Cadarache, France on 4-6 June 2002 and about the meeting of the ITER CTA Project Board which took place on the occasion of the N4 Meeting at Cadarache on 3-4 June 2002

  13. ITER CTA newsletter. No. 1

    International Nuclear Information System (INIS)

    2001-01-01

    This ITER CTA newsletter comprises reports on ITER co-ordinated technical activities, information about the Meeting of the ITER CTA project board which took place in Vienna on 16 July 2001, and the Meeting of the expert group on MHD, disruptions and plasma control which was held on 25-26 June 2001 in Funchal, Madeira

  14. Rydberg aggregates

    Science.gov (United States)

    Wüster, S.; Rost, J.-M.

    2018-02-01

    We review Rydberg aggregates, assemblies of a few Rydberg atoms exhibiting energy transport through collective eigenstates, considering isolated atoms or assemblies embedded within clouds of cold ground-state atoms. We classify Rydberg aggregates, and provide an overview of their possible applications as quantum simulators for phenomena from chemical or biological physics. Our main focus is on flexible Rydberg aggregates, in which atomic motion is an essential feature. In these, simultaneous control over Rydberg-Rydberg interactions, external trapping and electronic energies, allows Born-Oppenheimer surfaces for the motion of the entire aggregate to be tailored as desired. This is illustrated with theory proposals towards the demonstration of joint motion and excitation transport, conical intersections and non-adiabatic effects. Additional flexibility for quantum simulations is enabled by the use of dressed dipole-dipole interactions or the embedding of the aggregate in a cold gas or Bose-Einstein condensate environment. Finally we provide some guidance regarding the parameter regimes that are most suitable for the realization of either static or flexible Rydberg aggregates based on Li or Rb atoms. The current status of experimental progress towards enabling Rydberg aggregates is also reviewed.

  15. The JET ITER-like wall experiment: First results and lessons for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Horton, Lorne, E-mail: Lorne.Horton@jet.efda.org [EFDA-CSU Culham, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); European Commission, B-1049 Brussels (Belgium)

    2013-10-15

    Highlights: ► JET has recently completed the installation of an ITER-like wall. ► Important operational aspects have changed with the new wall. ► Initial experiments have confirmed the expected low fuel retention. ► Disruption dynamics have changed dramatically. ► Development of wall-compatible, ITER-relevant regimes of operation has begun. -- Abstract: The JET programme is strongly focused on preparations for ITER construction and exploitation. To this end, a major programme of machine enhancements has recently been completed, including a new ITER-like wall, in which the plasma-facing armour in the main vacuum chamber is beryllium while that in the divertor is tungsten—the same combination of plasma-facing materials foreseen for ITER. The goal of the initial experimental campaigns is to fully characterise operation with the new wall, concentrating in particular on plasma-material interactions, and to make direct comparisons of plasma performance with the previous, carbon wall. This is being done in a progressive manner, with the input power and plasma performance being increased in combination with the commissioning of a comprehensive new real-time protection system. Progress achieved during the first set of experimental campaigns with the new wall, which took place from September 2011 to July 2012, is reported.

  16. ITER concept definition. V.1

    International Nuclear Information System (INIS)

    1989-01-01

    Under the auspices of the International Atomic Energy Agency (IAEA), an agreement among the four parties representing the world's major fusion programs resulted in a program for conceptual design of the next logical step in the fusion program, the International Thermonuclear Experimental Reactor (ITER). The definition phase, which ended in November, 1989, is summarized in two reports: a brief summary is contained in the ITER Definition Phase Report (IAEA/ITER/DS/2); the extended technical summary and technical details of ITER are contained in this two-volume report. The first volume of this report contains the Introduction and Summary, and the remainder will appear in Volume II. In the Conceptual Design Activities phase, ITER has been defined as being a tokamak device. The basic performance parameters of ITER are given in Volume I of this report. In addition, the rationale for selection of this concept, the performance flexibility, technical issues, operations, safety, reliability, cost, and research and development needed to proceed with the design are discussed. Figs and tabs

  17. ITER primary cryopump test facility

    International Nuclear Information System (INIS)

    Petersohn, N.; Mack, A.; Boissin, J.C.; Murdoc, D.

    1998-01-01

    A cryopump as ITER primary vacuum pump is being developed at FZK under the European fusion technology programme. The ITER vacuum system comprises 16 cryopumps operating in a cyclic mode which fulfills the vacuum requirements in all ITER operation modes. Prior to the construction of a prototype cryopump, the concept is tested on a reduced scale model pump. To test the model pump, the TIMO facility is being built at FZK in which the model pump operation under ITER environmental conditions, except for tritium exposure, neutron irradiation and magnetic fields, can be simulated. The TIMO facility mainly consists of a test vessel for ITER divertor duct simulation, a 600 W refrigerator system supplying helium in the 5 K stage and a 30 kW helium supply system for the 80 K stage. The model pump test programme will be performed with regard to the pumping performance and cryogenic operation of the pump. The results of the model pump testing will lead to the design of the full scale ITER cryopump. (orig.)

  18. ITER ITA newsletter No. 32, July 2006

    International Nuclear Information System (INIS)

    2006-07-01

    This issue of ITER ITA (ITER transitional Arrangements) newsletter contains concise information about ITER related activities. The ITER Parties, at their Ministerial Meeting in May 2006 in Brussels, initialled the draft text of the prospective Agreement on the Establishment of the ITER International Fusion Energy Organization for the Joint Implementation of the ITER Project as well as the draft text of the Agreement on the Privileges and Immunities of the ITER International Fusion Energy Organisation for the Joint Implementation of the ITER Project. The Parties have requested that the IAEA Director General serve as Depositary of the two aforementioned Agreements and that the IAEA establish a Trust Fund to Support Common Expenditures under the ITER Transitional Arrangements, pending entry into force of the prospective Agreement on the Establishment of the ITER International Fusion Energy Organization for the Joint Implementation of the ITER Project. At its June Meeting in Vienna, the IAEA Board of Governors approved these requests. There is also information about the Tenth Meeting of the International Tokamak Physics Activity (ITPA) Topical Group (TG) on Diagnostics, which was held at the Kurchatov Institute, Moscow, from 10-14 April 2006

  19. Disaggregated regulation in network sections: The normative and positive theory; Disaggregierte Regulierung in Netzsektoren: Normative und positive Theorie

    Energy Technology Data Exchange (ETDEWEB)

    Knieps, G. [Inst. fuer Verkehrswissenschaft und Regionalpolitik, Albert-Ludwigs-Univ. Freiburg i.B. (Germany)

    2007-09-15

    The article deals with the interaction of the normative and positive theories of regulation. The parts of a network that require regulation can be localised and regulated with the help of the normative theory of monopolistic bottlenecks. Using the positive theory, the basic elements of a regulatory mandate in the sense of disaggregated regulatory economics are derived.

  20. On the spectral analysis of iterative solutions of the discretized one-group transport equation

    International Nuclear Information System (INIS)

    Sanchez, Richard

    2004-01-01

    We analyze the Fourier-mode technique used for the spectral analysis of iterative solutions of the one-group discretized transport equation. We introduce a direct spectral analysis for the iterative solution of finite difference approximations for finite slabs composed of identical layers, providing thus a complementary analysis that is more appropriate for reactor applications. Numerical calculations for the method of characteristics and with the diamond difference approximation show the appearance of antisymmetric modes generated by the iteration on boundary data. We have also utilized the discrete Fourier transform to compute the spectrum for a periodic slab containing N identical layers and shown that at the limit N → ∞ one obtains the familiar Fourier-mode solution
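
    For orientation, the textbook Fourier-mode result for unaccelerated source iteration on the one-group transport equation with isotropic scattering in an infinite homogeneous medium is recalled below (see, e.g., the standard literature on fast iterative transport methods); it is not taken from the record itself, whose point is precisely that finite, layered slabs call for a direct spectral analysis.

```latex
% Textbook Fourier ansatz for the iteration error of source iteration
% (infinite homogeneous medium, isotropic scattering, scattering ratio c);
% recalled for orientation, not drawn from the record.
\delta\phi^{(n)}(x) \;\propto\; \omega^{\,n}\, e^{\,i\lambda\sigma_t x},
\qquad
\omega(\lambda) \;=\; c\,\frac{\arctan\lambda}{\lambda},
\qquad
\rho \;=\; \sup_{\lambda}\,|\omega(\lambda)| \;=\; c .
```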

  1. Energy efficient structure-free data aggregation and delivery in WSN

    Directory of Open Access Journals (Sweden)

    Prabhudutta Mohanty

    2016-11-01

    Full Text Available In Wireless Sensor Networks (WSNs), the energy consumed in transmitting sensed data is greater than that consumed in processing the data locally within the sensor node. Data aggregation is one technique to conserve energy by eliminating redundant data transmission in dense WSNs. In this paper, we propose an energy efficient structure-free data aggregation and delivery (ESDAD) protocol, which aggregates redundant data in the intermediate nodes. In the proposed protocol, the waiting time for packets at each intermediate node is calculated judiciously so that data can be aggregated efficiently along the routing path. The sensed data packets are transmitted judiciously to the aggregation point for data aggregation. The ESDAD protocol computes a cost function for structure-free, next-hop node selection and performs near-source data aggregation. The buffer of each node is partitioned to maintain different types of flows for fair and efficient data delivery. The transmission rates of the sources and intermediate nodes are adjusted during congestion. The performance of the proposed protocol is evaluated through extensive simulations. The simulation results reveal that it outperforms the existing structure-free protocols in terms of energy efficiency, reliability and on-time delivery ratio.
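
    The abstract mentions a cost function for structure-free next-hop selection but does not spell it out. The sketch below is purely illustrative and is not the ESDAD formula: it combines progress toward the sink, residual energy and buffer occupancy into one weighted cost; all field names and weights are assumptions.

```python
from dataclasses import dataclass

# Illustrative next-hop cost for structure-free forwarding (not the ESDAD
# authors' formula): reward progress toward the sink and residual energy,
# penalize buffer occupancy. Weights and fields are assumed for the sketch.

@dataclass
class Neighbor:
    node_id: int
    residual_energy: float      # joules remaining
    distance_to_sink: float     # metres
    buffer_occupancy: float     # fraction of buffer in use, 0..1

def next_hop(neighbors: list[Neighbor], my_distance_to_sink: float,
             w_energy: float = 0.4, w_progress: float = 0.4,
             w_buffer: float = 0.2) -> Neighbor:
    """Pick the neighbor minimizing the weighted cost; lower is better."""
    def cost(n: Neighbor) -> float:
        progress = my_distance_to_sink - n.distance_to_sink  # >0 means closer to sink
        return (w_energy * (1.0 / max(n.residual_energy, 1e-9))
                + w_progress * (1.0 / max(progress, 1e-9))
                + w_buffer * n.buffer_occupancy)
    candidates = [n for n in neighbors if n.distance_to_sink < my_distance_to_sink]
    return min(candidates or neighbors, key=cost)
```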

  2. ITER EDA Newsletter. V. 10, no. 7

    International Nuclear Information System (INIS)

    2001-07-01

    This ITER EDA Newsletter presents an overview of meetings held at IAEA Headquarters in Vienna during the week 16-20 July 2001 related to the successful completion of the ITER Engineering Design Activities (EDA). Among them were the final meeting of the ITER Council, the closing ceremony to commemorate the EDA completion, the final meeting of the ITER Management Advisory Committee, a briefing of issues related to ITER developments, and discussions on the possible joint implementation of ITER

  3. Disaggregation of MODIS surface temperature over an agricultural area using a time series of Formosat-2 images

    OpenAIRE

    Merlin, O.; Duchemin, Benoit; Hagolle, O.; Jacob, Frédéric; Coudert, B.; Chehbouni, Abdelghani; Dedieu, G.; Garatuza, J.; Kerr, Yann

    2010-01-01

    The temporal frequency of the thermal data provided by current spaceborne high-resolution imagery systems is inadequate for agricultural applications. As an alternative to the lack of high-resolution observations, kilometric thermal data can be disaggregated using a green (photosynthetically active) vegetation index e.g. NDVI (Normalized Difference Vegetation Index) collected at high resolution. Nevertheless, this approach is only valid in the condition...
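
    As a sketch of the general idea behind NDVI-based disaggregation (and not the specific Formosat-2 method of this record), the snippet below fits a linear relation between coarse-resolution temperature and NDVI, applies it to the fine-resolution NDVI, and adds back the coarse-scale residual, in the spirit of TsHARP-like approaches; the linear form, array shapes and integer scale factor are assumptions.

```python
import numpy as np

# Generic NDVI-based disaggregation sketch (TsHARP-like), not the record's method:
# fit T ~ a*NDVI + b at the coarse scale, apply the fit at the fine scale, then
# add back the coarse-scale residual. Assumes fine grid = coarse grid * scale.

def disaggregate_lst(coarse_lst: np.ndarray, coarse_ndvi: np.ndarray,
                     fine_ndvi: np.ndarray, scale: int) -> np.ndarray:
    # 1) Linear fit at coarse resolution.
    slope, intercept = np.polyfit(coarse_ndvi.ravel(), coarse_lst.ravel(), deg=1)
    # 2) Residual of the fit at coarse resolution.
    residual = coarse_lst - (slope * coarse_ndvi + intercept)
    # 3) Apply the fit at fine resolution, adding the block-upsampled residual.
    residual_fine = np.kron(residual, np.ones((scale, scale)))
    return slope * fine_ndvi + intercept + residual_fine
```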

  4. IAEA activities related to ITER

    International Nuclear Information System (INIS)

    Dolan, T.J.; Schneider, U.

    2001-01-01

    As agreed between the IAEA and the ITER Parties, special sessions are dedicated to ITER at the IAEA Fusion Energy Conferences. At the 18th IAEA Fusion Energy Conference, held on 4-10 October 2000 in Sorrento, Italy, in the Artsimovich-Kadomtsev Memorial opening session there were special lectures by Carlo Rubbia (President, ENEA, Italy), A. Arima (Japan), and E.P. Velikhov (Russia); an overview talk on ITER by R. Aymar (ITER Director); and a talk on the FTU experiment by F. Romanelli. In total, 573 participants from 34 countries presented 389 papers (including 11 post-deadline papers and the 4 summaries)

  5. New optical sensing technique of tissue viability and blood flow based on nanophotonic iterative multi-plane reflectance measurements

    Directory of Open Access Journals (Sweden)

    Yariv I

    2016-10-01

    Full Text Available Physiological substances pose a challenge for researchers since their optical properties change constantly according to their physiological state. Examination of those substances noninvasively can be achieved by different optical methods with high sensitivity. Our research suggests the application of a novel noninvasive nanophotonics technique, i.e., iterative multi-plane optical property extraction (IMOPE) based on reflectance measurements, for tissue viability examination and gold nanorod (GNR) and blood flow detection. The IMOPE model combines an experimental setup designed for recording light intensity images with the multi-plane iterative Gerchberg-Saxton algorithm for reconstructing the reemitted light phase and calculating its standard deviation (STD). Changes in tissue composition affect its optical properties, which results in changes in the light phase that can be measured by its STD. We have demonstrated this new concept of correlating the light phase STD and the optical properties of a substance using transmission measurements only. This paper presents, for the first time, reflectance based IMOPE tissue viability examination, producing a decrease in the computed STD for older tissues, as well as investigating their organic material absorption capability. Finally, differentiation of the femoral vein from adjacent tissues using GNRs and the detection of their presence within blood circulation and tissues are also presented, with high sensitivity (better than computed tomography) to low quantities of GNRs (<3 mg). Keywords: Gerchberg-Saxton, optical properties, gold nanorods, blood vessel, tissue viability

  6. ITER blanket designs

    International Nuclear Information System (INIS)

    Gohar, Y.; Parker, R.; Rebut, P.H.

    1995-01-01

    The ITER first wall, blanket, and shield system is being designed to handle 1.5±0.3 GW of fusion power and 3 MWa m⁻² average neutron fluence. In the basic performance phase of ITER operation, the shielding blanket uses austenitic steel structural material and water coolant. The first wall is made of a bimetallic structure, austenitic steel and copper alloy, coated with beryllium, and it is protected by beryllium bumper limiters. The choice of a copper first wall is dictated by the surface heat flux values anticipated during ITER operation. The water coolant is used at low pressure and low temperature. A breeding blanket has been designed to satisfy the technical objectives of the Enhanced Performance Phase of ITER operation for the Test Program. The breeding blanket design is geometrically similar to the shielding blanket design except that it is a self-cooled liquid lithium system with vanadium structural material. Self-healing electrical insulator (aluminum nitride) is used to reduce the MHD pressure drop in the system. Reactor relevancy, low tritium inventory, low activation material, low decay heat, and a tritium self-sufficiency goal are the main features of the breeding blanket design. (orig.)

  7. ITER ITA Newsletter. No. 29, March 2006

    International Nuclear Information System (INIS)

    2006-05-01

    This issue of ITER ITA (ITER transitional Arrangements) newsletter contains concise information about ITER related activities and meetings: the ITER Director-General Nominee, Dr. Kaname Ikeda, took up his position as ITER Project Leader in Cadarache on 13 March; the consolidation of the information technology infrastructure for ITER; and the Thirty-Fifth Meeting of the Fusion Power Co-ordinating Committee (FPCC), which was held on 28 February-1 March 2006 at the headquarters of the International Energy Agency (IEA) in Paris

  8. Investigation of the effects of type of crusher on coarse aggregates shape properties using three-dimensional Laser Scanning Technique

    CSIR Research Space (South Africa)

    Komba, Julius J

    2016-07-01

    Full Text Available The shape properties of quartzite aggregates crushed by using four different types of crushers were investigated. The results have demonstrated the extent to which the aggregate shape indices computed using laser scan results can be used to distinguish aggregate by-product from...

  9. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated by using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for the phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher quality reconstruction than the traditional method. (special topic)
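
    To illustrate the kind of iteration this record builds on (and not its stereoscopic multi-view encoding), the sketch below runs a plain Gerchberg–Saxton loop between the image plane and the hologram plane using FFTs, keeping only the phase in the hologram plane so that the result can drive a phase-only modulator; the single-FFT propagation model is an assumption.

```python
import numpy as np

# Plain Gerchberg-Saxton sketch for a phase-only hologram of a single 2-D
# target image; not the record's stereoscopic multi-view method.

def gs_phase_hologram(target_amplitude: np.ndarray, iterations: int = 50) -> np.ndarray:
    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)  # random start phase
    for _ in range(iterations):
        image_field = target_amplitude * np.exp(1j * phase)    # enforce target amplitude
        holo_field = np.fft.ifft2(image_field)                  # back-propagate to hologram plane
        holo_phase = np.angle(holo_field)                       # keep phase only (phase-only constraint)
        image_field = np.fft.fft2(np.exp(1j * holo_phase))      # propagate unit-amplitude hologram
        phase = np.angle(image_field)                           # keep image-plane phase, loop again
    return holo_phase                                           # phase-only hologram
```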

  10. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    Science.gov (United States)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_Ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_Ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_Ea is uniformly less than that of PSV, indicating that V_Ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression
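
    For readers unfamiliar with the quantity, the absolute input energy equivalent velocity used above is conventionally defined from the absolute input energy per unit mass (after Uang and Bertero); the definition below is recalled for orientation only and the dissertation's exact notation may differ.

```latex
% Conventional definition (after Uang and Bertero); recalled for orientation,
% not taken from the record itself. \ddot{u}_t is the total (absolute)
% acceleration and u_g the ground displacement.
E_a \;=\; \int_0^{t} m\,\ddot{u}_t \,\mathrm{d}u_g ,
\qquad
V_{Ea} \;=\; \sqrt{\frac{2\,E_a}{m}} .
```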

  11. ITER safety challenges and opportunities

    International Nuclear Information System (INIS)

    Piet, S.J.

    1992-01-01

    This paper reports on results of the Conceptual Design Activity (CDA) for the International Thermonuclear Experimental Reactor (ITER), which suggest challenges and opportunities. ITER is capable of meeting anticipated regulatory dose limits, but proof is difficult because of large radioactive inventories needing stringent radioactivity confinement. Much research and development (R&D) and design analysis is needed to establish that ITER meets regulatory requirements. There is a further opportunity to prove more of fusion's potential safety and environmental advantages and to maximize the amount of ITER technology on the path toward fusion power plants. To fulfill these tasks, three programmatic challenges and three technical challenges must be overcome. The first step is to fund a comprehensive safety and environmental ITER R&D plan. Second is to strengthen safety and environment work and personnel in the international team. Third is to establish an external consultant group to advise the ITER Joint Team on designing ITER to meet safety requirements for siting by any of the Parties. The first of three key technical challenges is plasma engineering - burn control, plasma shutdown, disruptions, tritium burn fraction, and steady state operation. The second is the divertor, including tritium inventory, activation hazards, chemical reactions, and coolant disturbances. The third technical challenge is optimization of design requirements considering safety risk, technical risk, and cost

  12. Status of ITER

    International Nuclear Information System (INIS)

    Aymar, R.

    2002-01-01

    At the end of engineering design activities (EDA) in July 2001, all the essential elements became available to make a decision on construction of ITER. A sufficiently detailed and integrated engineering design now exists for a generic site, has been assessed for feasibility, and costed, and essential physics and technology R and D has been carried out to underpin the design choices. Formal negotiations have now begun between the current participants--Canada, Euratom, Japan, and the Russian Federation--on a Joint Implementation Agreement for ITER which also establishes the legal entity to run ITER. These negotiations are supported on technical aspects by Coordinated Technical Activities (CTA), which maintain the integrity of the project, for the good of all participants, and concentrate on preparing for procurement by industry of the longest lead items, and for formal application for a construction license with the host country. This paper highlights the main features of the ITER design. With cryogenically-cooled magnets close to neutron-generating plasma, the design of shielding with adequate access via port plugs for auxiliaries such as heating and diagnostics, and of remote replacement and refurbishing systems for in-vessel components, are particularly interesting nuclear technology challenges. Making a safety case for ITER to satisfy potential regulators and to demonstrate, as far as possible at this stage, the environmental attractiveness of fusion as an energy source, is also important. The paper gives illustrative details on this work, and an update on the progress of technical preparations for construction, as well as the status of the above negotiations

  13. Effect of protein-surfactant interactions on aggregation of β-lactoglobulin.

    Science.gov (United States)

    Hansted, Jon G; Wejse, Peter L; Bertelsen, Hans; Otzen, Daniel E

    2011-05-01

    The milk protein β-lactoglobulin (βLG) dominates the properties of whey aggregates in food products. Here we use spectroscopic and calorimetric techniques to elucidate how anionic, cationic and non-ionic surfactants interact with bovine βLG and modulate its heat-induced aggregation. Alkyl trimethyl ammonium chlorides (xTAC) strongly promote aggregation, while sodium alkyl sulfates (SxS) and alkyl maltopyranosides (xM) reduce aggregation. Sodium dodecyl sulfate (SDS) binds to non-aggregated βLG in several steps, but reduction of aggregation was associated with the first binding step, which occurs far below the critical micelle concentration. In contrast, micellar concentrations of xMs are required to reduce aggregation. The ranking order for reduction of aggregation (normalized to their tendency to self-associate) was C10-C12>C8>C14 for SxS and C8>C10>C12>C14>C16 for xM. xTAC promote aggregation in the same ranking order as xM reduce it. We conclude that SxS reduce aggregation by stabilizing the protein's ligand-bound state (the melting temperature t(m) increases by up to 10°C) and altering its charge potential. xM monomers also stabilize the protein's ligand-bound state (increasing t(m) up to 6°C) but in the absence of charged head groups this is not sufficient by itself to prevent aggregation. Although micelles of both anionic and non-ionic surfactants destabilize βLG, they also solubilize unfolded protein monomers, leaving them unavailable for protein-protein association and thus inhibiting aggregation. Cationic surfactants promote aggregation by a combination of destabilization and charge neutralization. The food compatible surfactant sodium dodecanoate also inhibited aggregation well below the cmc, suggesting that surfactants may be a practical way to modulate whey protein properties. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. ITER CTA newsletter. No. 4

    International Nuclear Information System (INIS)

    2001-12-01

    This ITER CTA Newsletter contains information about the organization of the ITER Co-ordinated Technical Activities (CTA) International Team as the follow-up of the ITER CTA project board meeting in Toronto on 7 November 2001. It also includes a summary on the start of the international tokamak physics activity by Dr. D. Campbell, Chair of the ITPA Co-ordinating Committee

  15. ITER management advisory committee meeting

    International Nuclear Information System (INIS)

    Yoshikawa, M.

    2001-01-01

    The ITER Management Advisory Committee (MAC) Meeting was held on 23 February in Garching, Germany. The main topics were: the consideration of the report by the Director on the ITER EDA Status, the review of the Work Programme, the review of the Joint Fund, the review of a schedule of ITER meetings, and the arrangements for termination and wind-up of the EDA

  16. ITER ITA newsletter. No. 6, July 2003

    International Nuclear Information System (INIS)

    2003-09-01

    This issue of ITER ITA (ITER transitional Arrangements) newsletter contains concise information about ITER related activities. One of them was the farewell party for Annick Lyraud and Robert Aymar, who will take up his position as Director-General of CERN in January 2004; there is also information about Dr. Yasuo Shimomura, ITER interim project leader, and about ITER technical work during the transitional arrangements

  17. JET experience on managing radioactive waste and implications for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, Stephen, E-mail: Stephen.reynolds@ccfe.ac.uk [EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); CCFE/Power and Active Operations Department, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Newman, Mark; Coombs, Dave; Witts, David [EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); CCFE/Power and Active Operations Department, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2016-11-01

    Highlights: • We describe the current waste management structure and processes in place for managing radioactive waste generated as part of JET operations. • We detail the key lessons to be learnt for future fusion experiments and specifically ITER. • Early involvement of specialist waste management advisors and representatives are recommended. • Implementation of a complete integrated electronic waste tracking system will streamline the waste management process. - Abstract: The reduced radiotoxicity and half-life of radioactive waste arisings from nuclear fusion reactors as compared to current fission reactors is one of the key benefits of nuclear fusion. As a result of the research programme at the Joint European Torus (JET), significant experience on the management of radioactive waste has been gained which will be of benefit to ITER and the nuclear fusion community. The successful management of radioactive waste is dependent on accurate and efficient tracking and characterisation of waste streams. To accomplish this all items at JET which are removed from radiological areas are identified and pre-characterised, by recording the radiological history, before being removed from or moved between radiological areas. This system ensures a history of each item is available when it is finally consigned as radioactive waste and also allows detailed forecasting of future arisings. All radioactive waste generated as part of JET operations is transferred to dedicated, on-site, handling facilities for further sorting, sampling and final streaming for off-site disposal. Tritium extraction techniques including leaching, combustion and thermal treatment followed by liquid scintillation counting are used to determine tritium content. Recent changes to government legislation and Culham specific disposal permit conditions have allowed CCFE to adopt additional disposal routes for fusion wastes requiring new treatment and analysis techniques. Facilities currently under

  18. ITER EDA newsletter. V. 7, no. 7

    International Nuclear Information System (INIS)

    1998-07-01

    This newsletter contains the articles: 'Extraordinary ITER council meeting', 'ITER EDA final safety meeting' and 'Summary report of the 3rd combined workshop of the ITER confinement and transport and ITER confinement database and modeling expert groups'

  19. ITER Central Solenoid Module Fabrication

    Energy Technology Data Exchange (ETDEWEB)

    Smith, John [General Atomics, San Diego, CA (United States)

    2016-09-23

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA’s responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7 K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first

  20. ITER safety and operational scenario

    International Nuclear Information System (INIS)

    Shimomura, Y.; Saji, G.

    1998-01-01

    The safety and environmental characteristics of ITER and its operational scenario are described. Fusion has built-in safety characteristics without depending on layers of safety protection systems. Safety considerations are integrated in the design by making use of the intrinsic safety characteristics of fusion adequate to the moderate hazard inventories. In addition to this, a systematic nuclear safety approach has been applied to the design of ITER. The safety assessment of the design shows how ITER will safely accommodate uncertainties, flexibility of plasma operations, and experimental components, which is fundamental in ITER, the first experimental fusion reactor. The operation of ITER will progress step by step from hydrogen plasma operation with low plasma current, low magnetic field, short pulse and low duty factor without fusion power to deuterium-tritium plasma operation with full plasma current, full magnetic field, long pulse and high duty factor with full fusion power. In each step, characteristics of plasma and optimization of plasma operation will be studied which will significantly reduce uncertainties and frequency/severity of plasma transient events in the next step. This approach enhances reliability of ITER operation. (orig.)