WorldWideScience

Sample records for deep pleat hepa

  1. The effect of media area on the dust holding capacity of deep pleat HEPA filters

    Energy Technology Data Exchange (ETDEWEB)

    Dyment, J. [AWE, Aldermaston (United Kingdom)]; Loughborough, D. [AEAT Harwell, Oxford (United Kingdom)]

    1997-08-01

    The high potential cost of storage, treatment and disposal of radioactive wastes places a premium on the longevity of HEPA filters installed in radioactive processing facilities where dust capacity is a life-determining factor. Previous work investigated the dust holding capacity versus pressure drop characteristics of different designs of HEPA filter, and also the effect of using graded density papers. This paper records an investigation of the effect of media area variation on the dust holding capacity of the "deep-pleat" design of HEPA filter. As in the previously reported work, two test dusts (carbon black and sub-micron sodium chloride) in the range 0.15 - 0.4 µm were used. Media area adjustment was effected by varying the number of separators within the range 60 - 90. Results with the coarser dust allowed an optimum media area to be identified: media areas greater or smaller than this optimum retained less dust for the same terminal pressure drop. Conversely, with the finer sodium chloride aerosol the dust holding capacity continued to increase up to the maximum area investigated. 7 refs., 4 figs.

  2. Criteria for calculating the efficiency of deep-pleated HEPA filters with aluminum separators during and after design basis accidents

    International Nuclear Information System (INIS)

    Bergman, W.; First, M.W.; Anderson, W.L.

    1995-01-01

    We have reviewed the literature on the performance of HEPA filters under normal and abnormal conditions to establish criteria for calculating the efficiency of HEPA filters in a DOE nonreactor nuclear facility during and after a Design Basis Accident (DBA). This study is applicable only to the standard deep-pleated HEPA filter with aluminum separators as specified in ASME N509 [1]; other HEPA filter designs, such as mini-pleat and separatorless filters, are not included. The literature review covered the performance of new filters and the parameters that may cause deterioration in filter performance, such as filter age, radiation, corrosive chemicals, seismic and rough handling, high temperature, moisture, particle clogging, high air flow and pressure pulses. The deterioration of filter efficiency depends on the exposure parameters; under severe exposure conditions the filter will be damaged and have a residual efficiency of 0%. There are large gaps and limitations in the data that introduce significant error into estimates of HEPA filter efficiencies under DBA conditions; because of this limitation, conservative values of filter efficiency were chosen. Estimating the efficiency of HEPA filters under DBA conditions involves three steps: (1) determining the filter pressure drop and environmental parameters during and after the DBA; (2) comparing the filter pressure drop to a set of threshold values above which the filter is damaged, with a different threshold value for each combination of environmental parameters; and (3) determining the filter efficiency. If the filter pressure drop is greater than the threshold value, the filter is damaged and is assigned 0% efficiency; if the pressure drop is less, the filter is not damaged and the efficiency is determined from literature values of the efficiency at the environmental conditions.
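
    The three-step procedure described above lends itself to a simple decision routine. The following Python sketch illustrates the threshold logic only; the condition names, threshold values and efficiency values are hypothetical placeholders, not figures from the study.

```python
# Illustrative sketch of the three-step DBA efficiency assignment.
# All numeric values below are hypothetical placeholders.

# Step 2 inputs: damage thresholds (Pa), one per combination of
# environmental parameters (hypothetical values).
DAMAGE_THRESHOLDS = {
    "dry_air": 25_000,
    "moist_air": 10_000,
    "high_temperature": 15_000,
}

# Step 3 inputs: conservative literature-derived efficiencies (%)
# for an undamaged filter under each condition (hypothetical values).
CONSERVATIVE_EFFICIENCY = {
    "dry_air": 99.9,
    "moist_air": 99.0,
    "high_temperature": 98.0,
}

def dba_filter_efficiency(pressure_drop_pa: float, condition: str) -> float:
    """Compare the DBA pressure drop (the step-1 input) against the
    damage threshold for the given condition. A damaged filter is
    assigned 0% efficiency; otherwise the conservative value applies."""
    if pressure_drop_pa >= DAMAGE_THRESHOLDS[condition]:
        return 0.0  # filter damaged
    return CONSERVATIVE_EFFICIENCY[condition]

print(dba_filter_efficiency(12_000, "dry_air"))    # below threshold -> 99.9
print(dba_filter_efficiency(12_000, "moist_air"))  # above threshold -> 0.0
```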

  3. Criteria for calculating the efficiency of deep-pleated HEPA filters with aluminum separators during and after design basis accidents

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; First, M.W.; Anderson, W.L. [Lawrence Livermore National Laboratory, CA (United States)] [and others]

    1995-02-01

    We have reviewed the literature on the performance of HEPA filters under normal and abnormal conditions to establish criteria for calculating the efficiency of HEPA filters in a DOE nonreactor nuclear facility during and after a Design Basis Accident (DBA). This study is applicable only to the standard deep-pleated HEPA filter with aluminum separators as specified in ASME N509 [1]; other HEPA filter designs, such as mini-pleat and separatorless filters, are not included. The literature review covered the performance of new filters and the parameters that may cause deterioration in filter performance, such as filter age, radiation, corrosive chemicals, seismic and rough handling, high temperature, moisture, particle clogging, high air flow and pressure pulses. The deterioration of filter efficiency depends on the exposure parameters; under severe exposure conditions the filter will be damaged and have a residual efficiency of 0%. There are large gaps and limitations in the data that introduce significant error into estimates of HEPA filter efficiencies under DBA conditions; because of this limitation, conservative values of filter efficiency were chosen. Estimating the efficiency of HEPA filters under DBA conditions involves three steps: (1) determining the filter pressure drop and environmental parameters during and after the DBA; (2) comparing the filter pressure drop to a set of threshold values above which the filter is damaged, with a different threshold value for each combination of environmental parameters; and (3) determining the filter efficiency. If the filter pressure drop is greater than the threshold value, the filter is damaged and is assigned 0% efficiency; if the pressure drop is less, the filter is not damaged and the efficiency is determined from literature values of the efficiency at the environmental conditions.

  4. The effect of media area on the dust holding capacity of deep pleat HEPA filters

    International Nuclear Information System (INIS)

    Dyment, J.; Loughborough, D.

    1997-01-01

    The high potential cost of storage, treatment and disposal of radioactive wastes places a premium on the longevity of HEPA filters installed in radioactive processing facilities where dust capacity is a life-determining factor. Previous work investigated the dust holding capacity versus pressure drop characteristics of different designs of HEPA filter, and also the effect of using graded density papers. This paper records an investigation of the effect of media area variation on the dust holding capacity of the "deep-pleat" design of HEPA filter. As in the previously reported work, two test dusts (carbon black and sub-micron sodium chloride) in the range 0.15 - 0.4 μm were used. Media area adjustment was effected by varying the number of separators within the range 60 - 90. Results with the coarser dust allowed an optimum media area to be identified: media areas greater or smaller than this optimum retained less dust for the same terminal pressure drop. Conversely, with the finer sodium chloride aerosol the dust holding capacity continued to increase up to the maximum area investigated. 7 refs., 4 figs.

  5. Development and evaluation of a HEPA filter for increased strength and resistance to elevated temperature

    International Nuclear Information System (INIS)

    Gilbert, H.; Bergman, W.; Fretthold, J.K.

    1993-01-01

    We have completed a preliminary study of an improved HEPA filter with increased strength and resistance to elevated temperature, intended to improve the reliability of the standard deep-pleat HEPA filter under accident conditions. The improvements consist of a silicone rubber sealant and a new HEPA medium reinforced with a glass cloth. Three prototype filters were built and evaluated for temperature resistance, pressure resistance and resistance to rough handling. The temperature resistance test consisted of exposing the HEPA filter to 1,000 scfm (1,700 m³/hr) at 700 degrees F (371 degrees C) for five minutes. The pressure resistance test consisted of exposing the HEPA filter to a differential pressure of 10 in. w.g. (2.5 kPa) using a water-saturated air flow at 95 degrees F (35 degrees C). For the rough handling test, we used a vibrating machine designated the Q110. DOP filter efficiency tests were performed before and after each of the environmental tests. In addition to following the standard practice of using a separate new filter for each environmental test, we also subjected the same filter to the elevated temperature test followed by the pressure resistance test. The efficiency test results show that the improved HEPA filter is significantly better than the standard HEPA filter. Further studies are recommended to evaluate the improved HEPA filter and to assess its performance under more severe accident conditions.

  6. HEPA Help

    Science.gov (United States)

    Rathey, Allen

    2006-01-01

    Poor indoor air quality in school facilities can detract from the health and productivity of students, teachers and other employees. Asthma--often triggered or aggravated by dust--is the No. 1 cause of chronic absenteeism in schools. Using vacuum cleaners equipped with high-efficiency particulate air (HEPA) filters to clean education institutions…

  7. Hygroscopic Metamorphic 4D Pleats

    Science.gov (United States)

    Yang, Shu

    There has been significant interest in morphing 2D sheets into 3D structures via programmed out-of-plane distortion, including bending, tilting, rotating, and folding, as seen in recent origami and kirigami strategies. Hydrogel is a unique soft material that can swell and shrink, thereby enabling real-time 4D motions in response to external stimuli such as pH, temperature, and moisture. Achieving reliable folding behavior often requires a large number of water molecules or ions diffusing in and out of the hydrogel sheet, so the entire sheet is immersed in an aqueous solution. Here, we demonstrate the design and folding of hierarchical pleats patterned from a combination of hydrophobic and hygroscopic materials, allowing us to spatially and locally control water condensation induced by environmental humidity. In turn, we show out-of-plane deformation of the 2D sheets only in the patterned hygroscopic regions, much like the folding behaviors of many plants. By designing the dimension, geometry, and density of hygroscopic microstructures (as pixels) in the hydrophobic materials, we can achieve enhanced water condensation together with spatial guidance of the resulting droplets, forming unified water-harvesting systems. When the water droplets become large enough, they roll off the hierarchical sheet along the inclined plane that is programmed by the hygroscopic motion of the hydrogel, and are eventually wrapped by the folded sheet to keep them from evaporating. We acknowledge support from NSF/EFRI-ODISSEI, EFRI 13-31583.

  8. Evaluation of the effect of media velocity on HEPA filter performance

    International Nuclear Information System (INIS)

    Alderman, Steven; Parsons, Michael; Hogancamp, Kristina; Norton, O. Perry; Waggoner, Charles

    2007-01-01

    Section FC of the ASME AG-1 Code addresses glass fiber HEPA filters and restricts the media velocity to a maximum of 2.54 cm/s (5 ft/min). Advances in filter media technology allow glass fiber HEPA filters to function at significantly higher velocities and still achieve HEPA performance. However, diffusional capture of particles < 100 nm is reduced at higher media velocities due to shorter residence times within the media matrix. Therefore, it is unlikely that higher media velocities for HEPA filters will be allowed without data demonstrating the effect of media velocity on removal of particles in the smaller size classes. To address this issue, static testing has been conducted to generate performance-related data, and a range of dynamic testing has provided data regarding filter lifetimes, loading characteristics, changes in filter efficiency, and the most penetrating particle size over time. Testing was conducted using 31 cm x 31 cm x 29 cm deep-pleat HEPA filters supplied by two manufacturers, at media velocities ranging from 2.0-4.5 cm/s, with a solid aerosol challenge composed of potassium chloride. Two sets of media velocity data were obtained for each filter type: in one set of evaluations the maximum aerosol challenge particle size was limited to 3 μm, while particles above 3 μm were not constrained in the second set. This provided for considerable variability in the challenge mass mean diameter and overall mass loading rate. Results of this testing will be provided to the ASME AG-1 FC Committee for consideration in future versions of the HEPA standard. In general, the initial filter efficiency decreased with increasing media velocity, though initial filter efficiencies were generally good in all cases; filter efficiency values averaged over the first ten minutes of the loading cycle ranged from 99.970 to 99.996%. Additionally, the most penetrating particle size was observed to decrease with increasing media velocity.
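
    The media velocity discussed above is simply the volumetric flow rate divided by the total area of filter media in the pack, which is why adding pleats (more media area) lowers the velocity at a fixed flow. A minimal sketch of that relation; the 18.6 m² media area is an assumed illustrative value, not a figure from the paper.

```python
# Media velocity = volumetric flow / total media area.
# Assumed example values only; not data from the study.

def media_velocity_cm_s(flow_m3_per_hr: float, media_area_m2: float) -> float:
    """Return the velocity of air through the filter media in cm/s."""
    flow_m3_per_s = flow_m3_per_hr / 3600.0
    return (flow_m3_per_s / media_area_m2) * 100.0

# Example: 1,700 m^3/hr through an assumed 18.6 m^2 of pleated media
# comes out near the 2.54 cm/s (5 ft/min) AG-1 Section FC limit.
print(round(media_velocity_cm_s(1700.0, 18.6), 2))  # -> 2.54
```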

  9. The development of a HEPA filter with improved dust holding characteristics

    International Nuclear Information System (INIS)

    Dyment, J.; Hamblin, C.

    1995-01-01

    A limitation of the HEPA filters used in the extract systems of nuclear facilities is their relatively low capacity for captured dust. The costs associated with the disposal of a typical filter mean that there are clear incentives to extend filter life. The work described in this report is the initial stage in the development of a filter incorporating a medium that enhances its dust holding capacity. Experimental equipment was installed to enable the dust loading characteristics of candidate media to be compared with those of the glass fibre based papers currently used in filter construction. These tests involved challenging representative samples of the media with an air stream containing a controlled concentration of thermally generated sodium chloride particles. The dust loading characteristics of the media were then compared in terms of the rate of increase in pressure differential. A number of "graded density" papers were subsequently identified which appeared to offer significant improvements in dust holding. In the second phase of the programme, deep-pleat filters (1,700 m³ h⁻¹) incorporating graded density papers were manufactured and tested. Improvements of up to 50% were observed in their capacity for the sub-micron sodium chloride test dust. Smaller differences (15%) were measured when a coarser, carbon black, challenge was used. This is attributed to the differences in the particle sizes of the two dusts.

  10. The development of a HEPA filter with improved dust holding characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Dyment, J.; Hamblin, C.

    1995-02-01

    A limitation of the HEPA filters used in the extract systems of nuclear facilities is their relatively low capacity for captured dust. The costs associated with the disposal of a typical filter mean that there are clear incentives to extend filter life. The work described in this report is the initial stage in the development of a filter incorporating a medium that enhances its dust holding capacity. Experimental equipment was installed to enable the dust loading characteristics of candidate media to be compared with those of the glass fibre based papers currently used in filter construction. These tests involved challenging representative samples of the media with an air stream containing a controlled concentration of thermally generated sodium chloride particles. The dust loading characteristics of the media were then compared in terms of the rate of increase in pressure differential. A number of "graded density" papers were subsequently identified which appeared to offer significant improvements in dust holding. In the second phase of the programme, deep-pleat filters (1,700 m³ h⁻¹) incorporating graded density papers were manufactured and tested. Improvements of up to 50% were observed in their capacity for the sub-micron sodium chloride test dust. Smaller differences (15%) were measured when a coarser, carbon black, challenge was used. This is attributed to the differences in the particle sizes of the two dusts.

  11. HEPA air filter (image)

    Science.gov (United States)

    ... pet dander and other irritating allergens from the air. Along with other methods to reduce allergens, such ... controlling the amount of allergens circulating in the air. HEPA filters can be found in most air ...

  12. HEPA Filter Vulnerability Assessment

    International Nuclear Information System (INIS)

    GUSTAVSON, R.D.

    2000-01-01

    This assessment of High Efficiency Particulate Air (HEPA) filter vulnerability was requested by the USDOE Office of River Protection (ORP) to satisfy a DOE-HQ directive to evaluate the effect of filter degradation on the facility authorization basis assumptions. Within the scope of this assessment are ventilation system HEPA filters that are classified as Safety-Class (SC) or Safety-Significant (SS) components that perform an accident mitigation function. The objective of the assessment is to verify whether HEPA filters that perform a safety function during an accident are likely to perform as intended to limit release of hazardous or radioactive materials, considering factors that could degrade the filters. Filter degradation factors considered include aging, wetting of filters, exposure to high temperature, exposure to corrosive or reactive chemicals, and exposure to radiation. Screening and evaluation criteria were developed by a site-wide group of HVAC engineers and HEPA filter experts from published empirical data. For River Protection Project (RPP) filters, the only degradation factor that exceeded the screening threshold was filter aging. Subsequent evaluation of the effect of filter aging on the filter strength was conducted, and the results were compared with the performance required to meet the conditions assumed in the RPP Authorization Basis (AB). It was found that the reduction in filter strength due to aging does not affect the filter performance requirements as specified in the AB. A portion of the HEPA filter vulnerability assessment is being conducted by the ORP and is not part of the scope of this study: the ORP is assessing the existing policies and programs relating to maintenance, testing, and change-out of HEPA filters used for SC/SS service. This document presents the results of a HEPA filter vulnerability assessment conducted for the River Protection Project as requested by the DOE Office of River Protection.

  13. Performance of HEPA filters under hot dynamic conditions

    International Nuclear Information System (INIS)

    Frankum, D.P.; Costigan, G.

    1995-01-01

    Accidents in nuclear facilities involving fires may have implications for the ventilation systems, where high efficiency particulate air (HEPA) filters are used to minimise the airborne release of radioactive or toxic particles. The Filter Development Section at Harwell Laboratory has been investigating the effect of temperature on the performance of HEPA filters under hot dynamic conditions [1] for a number of years. The test rig is capable of delivering air flows of 1000 l/s (at ambient conditions) at temperatures up to 500 degrees C, where measurements of the penetration and pressure drop across the filter are obtained. This paper reports experiments on different constructions of HEPA filters: rectangular and circular. The filters were tested at an air temperature of 200 degrees C for up to 48 hours at the rated airflow to assess their performance. The penetration measurements for rectangular filters remained below 0.021% after prolonged operation. In a number of cases, holes appeared along the pleat creases of circular filters, although the penetration remained below 1%. The sealing gasket of these filters was noted to deform with temperature, permitting a leakage path. A prototype high strength circular filter was evaluated at temperatures of up to 400 degrees C with a penetration of less than 0.65%.

  14. Performance of HEPA filters under hot dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Frankum, D.P.; Costigan, G. [AEA Technology, Oxfordshire (United Kingdom)]

    1995-02-01

    Accidents in nuclear facilities involving fires may have implications for the ventilation systems, where high efficiency particulate air (HEPA) filters are used to minimise the airborne release of radioactive or toxic particles. The Filter Development Section at Harwell Laboratory has been investigating the effect of temperature on the performance of HEPA filters under hot dynamic conditions [1] for a number of years. The test rig is capable of delivering air flows of 1000 l/s (at ambient conditions) at temperatures up to 500 degrees C, where measurements of the penetration and pressure drop across the filter are obtained. This paper reports experiments on different constructions of HEPA filters: rectangular and circular. The filters were tested at an air temperature of 200 degrees C for up to 48 hours at the rated airflow to assess their performance. The penetration measurements for rectangular filters remained below 0.021% after prolonged operation. In a number of cases, holes appeared along the pleat creases of circular filters, although the penetration remained below 1%. The sealing gasket of these filters was noted to deform with temperature, permitting a leakage path. A prototype high strength circular filter was evaluated at temperatures of up to 400 degrees C with a penetration of less than 0.65%.

  15. Experimental and numerical study of pleated filters clogging

    International Nuclear Information System (INIS)

    Gervais, Pierre-Colin

    2013-01-01

    Pleated filters are widely used in air treatment because of the advantageous ratio of effective surface area to overall dimensions that they offer. Their major drawback, though, is their reduced lifetime, which still needs to be controlled: as the filter clogs, the pressure drop increases considerably and the filtration flow is no longer maintained, which can lead to deterioration of the media. It is therefore crucial to characterize the evolution of the pressure drop under operating conditions in order to best design this equipment. Part of our work consisted in studying how the operating conditions influence the geometry of the deposit. To do so, we used Single-Photon Emission Computed Tomography (SPECT), a non-destructive imaging technique that keeps the particle structuring intact. Visualization of the aerosol deposit at the beginning of the filtration process shows preferential particle deposition over the whole height of the pleat. A numerical approach was used to study the permeability of bimodal fibrous media, and we experimentally studied the local velocity as well as the biphasic flow inside pleated filter media. Comparison between experiments and simulations allowed us to validate the GeoDict code for a wide range of media properties and velocities. Regarding bimodal fibrous media, the fast data acquisition allowed several existing models to be tested, and they could then be classified in a unique way. While the experimental results on initial deposition in pleated filters are encouraging, those obtained for filters clogged beforehand point to several possible improvements in the technique we used. (author)

  16. Simulation of the air flows in many industrial pleated filters

    International Nuclear Information System (INIS)

    Del Fabbro, L.; Brun, P.; Laborde, J.C.; Lacan, J.; Ricciardi, L.; Renoux, A.

    2000-01-01

    This study presents results on the characterization of the pressure drop and air flow in nuclear-grade and automotive pleated filters. Experimental studies, in correlation with numerical models, showed a homogeneous distribution of the air flow in a nuclear-grade very-high-efficiency (THE) filter, whereas the distribution is heterogeneous in the case of an automotive filter. (A.L.B.)

  17. A method and machine for forming pleated and bellow tubes

    International Nuclear Information System (INIS)

    Banks, J.W.

    1975-01-01

    In this machine, the rollers outside the rough tube are rigidly supported to ensure accurate forming of each turn of the pleated tube, the latter being position-indexed independently of the already formed turns. An inner roller is supported by a device for adjusting and indexing its position on a carriage. The tubes thus obtained are suitable, in particular, for forming expansion sealing joints for power generators or nuclear reactors.

  18. Catalytic pleat filter bags for combined particulate separation and nitrogen oxides removal from flue gas streams

    International Nuclear Information System (INIS)

    Park, Young Ok; Choi, Ho Kyung

    2010-01-01

    The development of a high-temperature catalytically active pleated filter bag, with hybrid filter equipment, for the combined removal of particles and nitrogen oxides from flue gas streams is presented. A special catalyst load in a stainless steel mesh cartridge with a high-temperature pleated filter bag, followed by optimized catalytic activation, was developed to reach the required nitrogen oxide levels and to maintain high collection efficiencies. The catalytic properties of the developed high-temperature filter bags with hybrid filter equipment were studied and demonstrated in a pilot-scale test rig and in a demonstration plant using commercial-scale high-temperature catalytic pleated filter bags. The performance of the catalytic pleated filter bags was tested under different operating conditions, such as filtration velocity and operating temperature. Moreover, the cleaning efficiency and residual pressure drop of the catalyst-loaded cartridges in pleated filter bags were tested. As a result of these studies, the optimum operating conditions for the catalytic pleated filter bags were determined. (author)

  19. Effect of elevated temperature on the mechanical strength of HEPA filters

    International Nuclear Information System (INIS)

    Elfawal, M.M.; Eladham, K.A.; Hammed, F.H.; Abdrabbo, M.F.

    1993-01-01

    The effect of elevated temperature on the mechanical strength of HEPA filters was studied in order to evaluate and improve their performance under high-temperature conditions. As part of this study, the mechanical strength of the HEPA filter medium, which is the limiting factor in terms of filter strength, was experimentally studied at elevated temperatures of up to 400 degrees C and thermal exposure times ranging from 2 min to 4 h. The failure pressures of HEPA filter units after long exposure to 250 degrees C were also investigated. The test results show that the medium strength decreases with increasing temperature challenge and thermal exposure time, due to burnout of the organic binder used to improve the strength and flexibility of the medium. The test results also show that the tensile strength of the conventional filter medium drops to about 40% of its room-temperature value after exposure to 250 degrees C for 6 h; the continuous exposure of the conventional filter medium to this temperature is therefore critical. The average failure differential pressures of all commercial filters tested were found to lie between 9 and 18 kPa at ambient temperature, and between 6 and 11 kPa after thermal challenge at 250 degrees C for 100 h. It was found that swelling and capture of the ends of individual pleats led to filter failure. 3 figs., 2 tabs.

  20. Further development of the cleanable steel HEPA filter, cost/benefit analysis, and comparison with competing technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Lopez, R.; Wilson, K. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1997-08-01

    We have made further progress in developing a cleanable steel fiber HEPA filter. We fabricated a pleated cylindrical cartridge using commercially available steel fiber media made with 1 µm stainless steel fibers sintered into sheet form. Test results at the Department of Energy (DOE) Filter Test Station at Oak Ridge show the prototype filter cartridge has 99.99% efficiency for 0.3 µm dioctyl phthalate (DOP) aerosols and a pressure drop of 1.5 inches. Filter loading and cleaning tests using AC Fine dust showed the filter could be repeatedly cleaned using reverse air pulses. Our analysis of commercially optimized filters suggests that cleanable steel HEPA filters need to be made from steel fibers less than 1 µm in diameter, and preferably 0.5 µm, to meet the standard HEPA filter requirements in production units. We have demonstrated that 0.5 µm steel fibers can be produced using the fiber bundling and drawing process. The 0.5 µm steel fibers were then sintered into small filter samples and tested for efficiency and pressure drop. Test results on the sample showed a penetration of 0.0015% at 0.3 µm and a pressure drop of 1.15 inches at 6.9 ft/min (3.5 cm/s) velocity. Based on these results, steel fiber media can easily meet the requirements of 0.03% penetration and 1.0 inch of pressure drop by using fewer fibers in the media. A cost analysis of the cleanable steel HEPA filter shows that, although it costs much more than the standard glass fiber HEPA filter, it has the potential to be very cost effective because of the high disposal costs of contaminated HEPA filters. We estimate that the steel HEPA filter will save an average of $16,000 over its 30 year life. The additional savings from the clean-up costs resulting from ruptured glass HEPA filters during accidents were not included, but make the steel HEPA filter even more cost effective. 33 refs., 28 figs., 1 tab.
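
    The cost argument above is a life-cycle comparison: a reusable filter with a high purchase price can still come out ahead once the disposal cost of each contaminated throwaway filter is counted. A hedged sketch of that arithmetic; all dollar figures and replacement counts below are hypothetical placeholders, not the paper's cost data.

```python
# Life-cycle cost comparison sketch. Every number here is a
# hypothetical placeholder chosen only to illustrate the structure
# of the trade-off described in the abstract.

def lifecycle_cost(unit_cost: float, disposal_cost: float,
                   replacements_over_life: int) -> float:
    """Total cost of ownership: each replacement incurs a purchase
    plus a (contaminated-waste) disposal."""
    return replacements_over_life * (unit_cost + disposal_cost)

# Hypothetical: a glass fiber filter replaced 15 times over 30 years
# versus one cleanable steel filter kept for the whole period.
glass_total = lifecycle_cost(unit_cost=500.0, disposal_cost=1500.0,
                             replacements_over_life=15)
steel_total = lifecycle_cost(unit_cost=8000.0, disposal_cost=2000.0,
                             replacements_over_life=1)
print(glass_total - steel_total)  # positive -> steel saves money overall
```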

  1. ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

    International Nuclear Information System (INIS)

    Stillo, Andrew; Ricketts, Craig I.

    2013-01-01

    High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements. These include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance, and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity when subjected to severe operating conditions (such as elevated relative humidity, fog conditions or elevated temperature) after loading in use over long service intervals is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1.

  2. ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

    Energy Technology Data Exchange (ETDEWEB)

    Stillo, Andrew [Camfil Farr, 1 North Corporate Drive, Riverdale, NJ 07457 (United States); Ricketts, Craig I. [New Mexico State University, Department of Engineering Technology and Surveying Engineering, P.O. Box 30001 MSC 3566, Las Cruces, NM 88003-8001 (United States)

    2013-07-01

    High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements. These include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity if subjected to severe operating conditions, such as elevated relative humidity, fog conditions or elevated temperature, after loading in use over long service intervals is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1

  3. HEPA filter concerns - an overview

    Energy Technology Data Exchange (ETDEWEB)

    Leonard, J.F. [Department of Energy, Washington, DC (United States)

    1995-02-01

    The U.S. Department of Energy (DOE) recently initiated a complete review of the DOE High Efficiency Particulate Air (HEPA) Filter Program to identify areas for improvement. Although this process is currently ongoing, various issues and problems have already been identified for action that affect not only the DOE HEPA filter program but potentially the national and international air cleaning community as well. This paper briefly reviews a few of those concerns that may be of interest, and discusses actions initiated by the DOE to address the associated issues and problems. Issues discussed include: guidance standards, in-place testing, specifications, Test Facilities, portable units, vacuum cleaners, substitute aerosols, filter efficiencies, aging/shelf life/service life, fire suppression, handbook, Quality Products List (QPL), QA testing, and evaluations.

  4. Factors Influencing HEPA Filter Performance

    International Nuclear Information System (INIS)

    Parsons, M.S.; Waggoner, Ch.A.

    2009-01-01

    Properly functioning HEPA air filtration systems depend on a variety of factors that start with the use of fully characterized challenge conditions for system design and then process control during operation. This paper addresses factors that should be considered during the design phase as well as operating parameters that can be monitored to ensure filter function and lifetime. HEPA filters used in nuclear applications are expected to meet design, fabrication, and performance requirements set forth in the ASME AG-1 standard. The DOE publication Nuclear Air Cleaning Handbook (NACH) is an additional guidance document for the design and operation of HEPA filter systems in DOE facilities. These two guidelines establish basic maximum operating parameters for temperature, maximum aerosol particle size, maximum particulate matter mass concentration, acceptable differential pressure range, and filter media velocity. Each of these parameters is discussed along with data linking the variability of each parameter with filter function and lifetime. Temporal uncertainty associated with gas composition, temperature, and absolute pressure of the air flow can have a direct impact on the volumetric flow rate of the system, with a corresponding impact on filter media velocity. Correlations between standard units of flow rate (standard cubic meters per minute or cubic feet per minute) and actual units of volumetric flow rate are shown for variations in relative humidity over a 70 deg. C to 200 deg. C temperature range as an example of gas composition that, uncorrected, will influence media velocity. The AG-1 standard establishes a 2.5 cm/s (5 feet per minute) ceiling for media velocities of nuclear grade HEPA filters. Data are presented that show the impact of media velocities from 2.0 to 4.0 cm/s (4 to 8 fpm) on differential pressure, filter efficiency, and filter lifetime. Data are also presented correlating media velocity effects with two different particle size
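A minimal sketch of the flow-correction arithmetic the abstract alludes to, assuming an ideal-gas correction at fixed mass flow and an illustrative filter geometry (the function names and the 21 m² media area are assumptions, not values from the paper):

```python
# Sketch (not from the paper): convert a standard volumetric flow rate to
# actual conditions with the ideal-gas law, then estimate media velocity.
# The example filter size and media area are illustrative assumptions.

def actual_flow_m3_per_min(std_flow_m3_per_min, t_actual_k, p_actual_kpa,
                           t_std_k=294.3, p_std_kpa=101.325):
    """Ideal-gas correction: Q_actual = Q_std * (T_actual/T_std) * (P_std/P_actual)."""
    return std_flow_m3_per_min * (t_actual_k / t_std_k) * (p_std_kpa / p_actual_kpa)

def media_velocity_cm_per_s(flow_m3_per_min, media_area_m2):
    """Face-averaged media velocity = volumetric flow / total media area."""
    flow_m3_per_s = flow_m3_per_min / 60.0
    return 100.0 * flow_m3_per_s / media_area_m2

# Example: a nominal 28.3 m3/min (~1000 cfm) filter with ~21 m2 of media,
# operated at 70 C instead of standard temperature.
q = actual_flow_m3_per_min(28.3, t_actual_k=343.15, p_actual_kpa=101.325)
v = media_velocity_cm_per_s(q, media_area_m2=21.0)
print(f"actual flow = {q:.1f} m3/min, media velocity = {v:.2f} cm/s")
```

Note how the same standard flow that satisfies the 2.5 cm/s media-velocity ceiling at room temperature exceeds it, uncorrected, at 70 deg. C.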

  5. Pressure transients across HEPA filters

    International Nuclear Information System (INIS)

    Gregory, W.; Reynolds, G.; Ricketts, C.; Smith, P.R.

    1977-01-01

    Nuclear fuel cycle facilities require ventilation for health and safety reasons. High efficiency particulate air (HEPA) filters are located within ventilation systems to trap radioactive dust released in reprocessing and fabrication operations. Pressure transients within the air cleaning systems may be such that the effectiveness of the filtration system is questioned under certain accident conditions. These pressure transients can result from both natural and man-caused phenomena: atmospheric pressure drop caused by a tornado, or explosions and nuclear excursions, can initiate pressure pulses that create undesirable conditions across HEPA filters. Tornado depressurization is a relatively slow transient compared to the pressure pulses that result from combustible hydrogen-air mixtures. Experimental investigation of these pressure transients across air cleaning equipment has been undertaken by Los Alamos Scientific Laboratory and New Mexico State University. An experimental apparatus has been constructed to impose pressure pulses across HEPA filters. The experimental equipment is described, as are preliminary results using variable pressurization rates. Two modes of filtration of an aerosol injected upstream of the filter are examined. Laser instrumentation for measuring the aerosol release during the transient is described

  6. Modelling of air flows in pleated filters and of their clogging by solid particles

    International Nuclear Information System (INIS)

    Del Fabbro, L.

    2002-01-01

    Devices for cleaning particles from air are widely used across industry: nuclear, automotive, food, electronics, etc.; many of them are built from pleated porous media to increase the filtration surface and thus reduce the pressure drop for a given air flow. The objective of our work is to address an evident lack of knowledge about the evolution of a pleated filter's pressure drop during clogging, and to derive a model for it based on experiments with industrial filters of the nuclear and automotive types. The resulting model is a function of the characteristics of the filter medium and the pleats, of the characteristics of the solid particles deposited on the filter, of the deposited particle mass, and of the aeraulic conditions of the air flow. It also depends on clogging data for flat filters of equivalent medium. To develop this pressure drop model, an initial stage characterized, experimentally and numerically, the pressure drop and the distribution of air flow in clean pleated filters of the nuclear (high efficiency particulate air filter, in glass fibers) and automotive (medium efficiency filter, in cellulose fibers) types. The numerical model revealed the fundamental role played by the aeraulic resistance of the filter medium. From a non-dimensional approach, we established a semi-empirical pressure drop model for a clean pleated filter valid for both types of medium studied; this model serves as the basis for the development of the final clogging model. The study of filter clogging showed the complexity of the phenomenon, which depends mainly on a reduction of the filtration surface. This observation leads us to propose a three-phase description of the clogging of pleated filters. The first two phases are similar to those observed for flat filters, while the last phase corresponds to a reduction of the filtration surface and leads to a sharp increase in the filter pressure drop
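The clogging behaviour described above can be caricatured with a Darcy-type model: a clean-medium term plus a dust-cake term, with an area-reduction factor standing in for the third, surface-loss phase. This is an illustrative sketch, not the authors' semi-empirical model; all parameter values are assumed:

```python
# Illustrative Darcy-type sketch (not the thesis model): pressure drop of a
# pleated pack as clean-medium resistance plus a dust-cake term, where
# area_fraction < 1 mimics the late-phase loss of filtering surface.

MU_AIR = 1.8e-5  # Pa*s, dynamic viscosity of air near 20 C

def clean_dp_pa(flow_m3_s, media_area_m2, thickness_m, permeability_m2):
    """Darcy's law through the clean medium: dP = mu * v * e / k."""
    v = flow_m3_s / media_area_m2
    return MU_AIR * v * thickness_m / permeability_m2

def loaded_dp_pa(flow_m3_s, media_area_m2, thickness_m, permeability_m2,
                 deposit_kg, cake_resistance_m_per_kg, area_fraction=1.0):
    """Clean-medium term plus a cake term (specific resistance alpha, m/kg)."""
    eff_area = media_area_m2 * area_fraction
    v = flow_m3_s / eff_area
    dp_media = MU_AIR * v * thickness_m / permeability_m2
    dp_cake = MU_AIR * v * cake_resistance_m_per_kg * (deposit_kg / eff_area)
    return dp_media + dp_cake

# Assumed geometry roughly typical of a nuclear-grade pleated pack:
dp0 = clean_dp_pa(0.472, 21.0, 4.0e-4, 7.0e-13)
dp1 = loaded_dp_pa(0.472, 21.0, 4.0e-4, 7.0e-13, deposit_kg=0.5,
                   cake_resistance_m_per_kg=5.0e9, area_fraction=0.8)
print(f"clean: {dp0:.0f} Pa, loaded: {dp1:.0f} Pa")
```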

  7. A Study on the Bandwidth Characteristics of Pleated Pneumatic Artificial Muscles

    Directory of Open Access Journals (Sweden)

    Rino Versluys

    2009-01-01

    Pleated pneumatic artificial muscles have interesting properties that can be of considerable significance in robotics and automation. With a view to the potential use of pleated pneumatic artificial muscles as actuators for a fatigue test bench (high forces and small displacements), the bandwidth characteristics of a muscle-valve system were investigated. Bandwidth is commonly used for linear systems, as the Bode plot is independent of the amplitude of the input signal. However, due to the non-linear behaviour of pleated pneumatic artificial muscles, the system's gain becomes dependent on the amplitude of the input sine wave. As a result, a single Bode plot is insufficient to clearly describe or identify a non-linear system. In this study, the bandwidth of a muscle-valve system was assessed from two perspectives: a varying amplitude and a varying offset of the input sine wave. A brief introduction to pneumatic artificial muscles is given. The concept of pleated pneumatic artificial muscles is explained. Furthermore, the different test methods and experimental results are presented.

  8. Analysis of an MCU HEPA filter

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Fondeur, F. F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-01-01

    A series of direct analyses on three portions (inlet, center, and outlet) of the High Efficiency Particulate Air (HEPA) filter material from the Modular Caustic-Side Solvent Extraction Unit (MCU) have been performed; this includes x-ray methods such as X-Ray Diffraction (XRD), Contained Scanning Electron Microscopy (CSEM) and X-Ray Fluorescence (XRF), as well as Fourier Transform InfraRed spectroscopy (FTIR). Additionally, two leaching studies (one with water, one with dichloromethane) have been performed on three portions (inlet, center, and outlet) of the HEPA filter material, with the leachates being analyzed by Inductively-coupled plasma emission spectroscopy (ICPES), Semi-Volatile Organic Analysis (SVOA) and gammascan. From the results of the analyses, SRNL feels that cesium-depleted solvent is being introduced into the HEPA filter. The most likely avenue for this is mechanical aerosolization of solvent, where the aerosol is then carried along an airstream into the HEPA filter. Once introduced into the HEPA filter media, the solvent wicks throughout the material, and migrates towards the outlet end. Once on the outlet end, continual drying could cause particulate flakes to exit the filter and travel farther down the airstream path.

  9. Evaluation of HEPA filter service life

    International Nuclear Information System (INIS)

    Fretthold, J.K.; Stithem, A.R.

    1997-01-01

    Rocky Flats Environmental Technology Site (RFETS) has approximately 10,000 High Efficiency Particulate Air (HEPA) filters installed in a variety of filter plenums. These ventilation/filtration plenum systems are used to control the release of airborne particulate contaminants to the environment during normal operations and potential accidents. This report summarizes the results of destructive and non-destructive tests on HEPA filters obtained from a wide variety of ages and service conditions. These tests were performed to determine an acceptable service life criterion for HEPA filters used at RFETS. A total of 140 filters of various ages (1972 to 1996) and service histories (new, aged unused, used) were tested. For the purpose of this report, filter age from manufacture date/initial test date to the current sample date was used, as opposed to the actual time a filter was installed in an operating system

  10. Experimental investigation of in situ cleanable HEPA filters

    International Nuclear Information System (INIS)

    Adamson, D.J.

    2000-01-01

    Savannah River Technology Center (SRTC), High Level Waste Division, Tanks Focus Area, and the Federal Energy Technology Center (FETC) have been investigating high efficiency particulate air (HEPA) filters which can be regenerated or cleaned in situ as an alternative to conventional disposable HEPA filters. This technical report documents concerns pertaining to conventional HEPA filters

  11. In Situ Cleanable Alternative HEPA Filter Media

    International Nuclear Information System (INIS)

    Adamson, D. J.; Terry, M. T.

    2002-01-01

    The Westinghouse Savannah River Company, located at the Savannah River Site in Aiken, South Carolina, is currently testing two types of filter media for possible deployment as in situ regenerable/cleanable High Efficiency Particulate Air (HEPA) filters. The filters are being investigated to replace conventional, disposable, glass-fiber HEPA filters that require frequent removal, replacement, and disposal. This is not only costly and subjects site personnel to radiation exposure, but adds to the ever-growing waste disposal problem. The types of filter media being tested, as part of a National Energy Technology Laboratory procurement, are sintered nickel metal and ceramic monolith membrane. These media were subjected to a hostile environment to simulate conditions that challenge the high-level waste tank ventilation systems. The environment promoted rapid filter plugging to maximize the number of filter loading/cleaning cycles that would occur in a specified period of time. The filters were challenged using nonradioactive simulated high-level waste materials and atmospheric dust; materials that cause filter pluggage in the field. The filters are cleaned in situ using an aqueous solution. The study found that both filter media were insensitive to high humidity or moisture conditions and were easily cleaned in situ. The filters regenerated to approximately clean filter status even after numerous plugging and in situ cleaning cycles. Air Techniques International is conducting particle retention testing on the filter media at the Oak Ridge Filter Test Facility. The filters are challenged using 0.3-µm di-octyl phthalate particles. Both the ceramic and sintered media have a particle retention efficiency > 99.97%. The sintered metal and ceramic filters not only can be cleaned in situ, but also hold great potential as a long-life alternative to conventional HEPA filters. The Defense Nuclear Facility Safety Board Technical Report, ''HEPA Filters Used in the Department of

  12. HEPA filter fire (and subsequent unfiltered release)

    International Nuclear Information System (INIS)

    Powers, T.B.

    1996-01-01

    This document supports the development and presentation of the following accident scenario in the TWRS Final Safety Analysis Report: HEPA Filter Failure - Exposure to High Temperature or Pressure. The calculations needed to quantify the risk associated with this accident scenario are included within

  13. In-place HEPA filter penetration test

    International Nuclear Information System (INIS)

    Bergman, W.; Wilson, K.; Elliott, J.; Bettencourt, B.; Slawski, J.W.

    1997-01-01

    We have demonstrated the feasibility of conducting penetration tests on high efficiency particulate air (HEPA) filters as installed in nuclear ventilation systems. The in-place penetration test, which is designed to yield equivalent penetration measurements as the standard DOP efficiency test, is based on measuring the aerosol penetration of the filter installation as a function of particle size using a portable laser particle counter. This in-place penetration test is compared to the current in-place leak test using light scattering photometers for single HEPA filter installations and for HEPA filter plenums using the shroud method. Test results show the in-place penetration test is more sensitive than the in-place leak test, has a similar operating procedure, but takes longer to conduct. Additional tests are required to confirm that the in-place penetration test yields identical results as the standard dioctyl phthalate (DOP) penetration test for HEPA filters with controlled leaks in the filter and gasket and duct by-pass leaks. Further development of the procedure is also required to reduce the test time before the in-place penetration test is practical

  14. In-place HEPA filter penetration test

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Wilson, K.; Elliott, J. [Lawrence Livermore National Lab., CA (United States)] [and others

    1997-08-01

    We have demonstrated the feasibility of conducting penetration tests on high efficiency particulate air (HEPA) filters as installed in nuclear ventilation systems. The in-place penetration test, which is designed to yield equivalent penetration measurements as the standard DOP efficiency test, is based on measuring the aerosol penetration of the filter installation as a function of particle size using a portable laser particle counter. This in-place penetration test is compared to the current in-place leak test using light scattering photometers for single HEPA filter installations and for HEPA filter plenums using the shroud method. Test results show the in-place penetration test is more sensitive than the in-place leak test, has a similar operating procedure, but takes longer to conduct. Additional tests are required to confirm that the in-place penetration test yields identical results as the standard dioctyl phthalate (DOP) penetration test for HEPA filters with controlled leaks in the filter and gasket and duct by-pass leaks. Further development of the procedure is also required to reduce the test time before the in-place penetration test is practical. 14 refs., 14 figs., 3 tabs.
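The size-resolved measurement underlying the in-place penetration test reduces to a per-bin ratio of downstream to upstream particle counts. The sketch below illustrates that ratio; the bin edges and counts are invented illustration data, not results from the paper:

```python
# Sketch of a size-resolved penetration calculation: penetration in each
# particle-size bin is downstream counts / upstream counts from the laser
# particle counter. All counts below are made-up illustration values.

def penetration_by_bin(upstream_counts, downstream_counts):
    """Per-bin penetration (downstream/upstream); None where the upstream
    count is zero and no ratio can be formed."""
    return [down / up if up > 0 else None
            for up, down in zip(upstream_counts, downstream_counts)]

up = [120000, 80000, 30000, 9000]   # counts per bin, e.g. 0.1-0.2 um, 0.2-0.3 um, ...
down = [10, 18, 5, 1]
pens = penetration_by_bin(up, down)
for p in pens:
    print(f"penetration = {p:.2e}, efficiency = {100 * (1 - p):.4f}%")
```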

  15. Multi-Canister overpack internal HEPA filters

    International Nuclear Information System (INIS)

    SMITH, K.E.

    1998-01-01

    The rationale for locating a filter assembly inside each Multi-Canister Overpack (MCO) rather than include the filter in the Cold Vacuum Drying (CVD) process piping system was to eliminate the potential for contamination to the operators, processing equipment, and the MCO. The internal HEPA filters provide essential protection to facility workers from alpha contamination, both external skin contamination and potential internal depositions. Filters installed in the CVD process piping cannot mitigate potential contamination when breaking the process piping connections. Experience with K-Basin material has shown that even an extremely small release can result in personnel contamination and costly schedule disruptions to perform equipment and facility decontamination. Incorporating the filter function internal to the MCO rather than external is consistent with ALARA requirements of 10 CFR 835. Based on the above, the SNF Project position is to retain the internal HEPA filters in the MCO design

  16. Degradation of HEPA filters exposed to DMSO

    International Nuclear Information System (INIS)

    Bergman, W.; Wilson, K.; Larsen, G.; Lopez, R.; LeMay, J.

    1994-01-01

    Dimethyl sulfoxide (DMSO) sprays are being used to remove the high explosive (HE) from nuclear weapons in the process of their dismantlement. A boxed 50 cfm HEPA filter with an integral prefilter was exposed to DMSO vapor and aerosols that were generated by a spray nozzle to simulate conditions expected in the HE dissolution operation. After 198 hours of operation, the pressure drop of the filter had increased from 1.15 inches to 2.85 inches, and the efficiency for 0.3 μm dioctyl sebacate (DOS) aerosols decreased from 99.992% to 98.6%. Most of the DMSO aerosols had collected as a liquid pool inside the boxed HEPA. The liquid was blown out of the filter exit with 100 cfm air flow at the end of the test. Since the filter still met the minimum allowed efficiency of 99.97% after 166 hours of exposure, we recommend replacing the filter every 160 hours of operation or sooner if the pressure drop increases by 50%. Examination of the filter showed that visible cracks appeared at the joints of the wooden frame and a portion of the sealant had pulled away from the frame. Since all of the DMSO will be trapped in the first HEPA filter, the second HEPA filter should not suffer from DMSO degradation. Thus the combined efficiency for the first filter (98.6%) and the second filter (99.97%) is 99.99958% for 0.3 μm particles. If the first filter is replaced prior to its degradation, each of the filters will have 99.97% efficiency, and the combined efficiency will be 99.999991%. The collection efficiency for DMSO/HE aerosols will be much higher because the particle size is much greater
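Penetrations of filters in series multiply, so the combined efficiency of a filter train is one minus the product of the stage penetrations. A minimal sketch of that arithmetic:

```python
# Series filtration: each stage passes a fraction (1 - efficiency) of the
# incoming aerosol, so combined efficiency = 1 - product of penetrations.

def combined_efficiency(efficiencies):
    penetration = 1.0
    for eff in efficiencies:
        penetration *= (1.0 - eff)
    return 1.0 - penetration

# A degraded first stage (98.6%) followed by an intact stage (99.97%):
print(f"{100 * combined_efficiency([0.986, 0.9997]):.5f}%")    # → 99.99958%
# Two intact stages:
print(f"{100 * combined_efficiency([0.9997, 0.9997]):.6f}%")   # → 99.999991%
```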

  17. Studies on Hepa filter test methods

    International Nuclear Information System (INIS)

    Lee, S.H.; Jon, K.S.; Park, W.J.; Ryoo, R.

    1981-01-01

    The purpose of this study is to compare the HEPA filter testing methods adopted in different countries, and to design and construct a test duct system to establish testing methods. The American D.O.P. test method, the British NaCl test method and several other independently developed methods are compared. It is considered that the D.O.P. method is most suitable for in-plant and leak tests

  18. Degradation of HEPA filters exposed to DMSO

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Wilson, K.; Larsen, G. [Lawrence Livermore National Laboratory, CA (United States)] [and others

    1995-02-01

    Dimethyl sulfoxide (DMSO) sprays are being used to remove the high explosive (HE) from nuclear weapons in the process of their dismantlement. A boxed 50 cfm HEPA filter with an integral prefilter was exposed to DMSO vapor and aerosols that were generated by a spray nozzle to simulate conditions expected in the HE dissolution operation. After 198 hours of operation, the pressure drop of the filter had increased from 1.15 inches to 2.85 inches, and the efficiency for 0.3 μm dioctyl sebacate (DOS) aerosols decreased from 99.992% to 98.6%. Most of the DMSO aerosols had collected as a liquid pool inside the boxed HEPA. The liquid was blown out of the filter exit with 100 cfm air flow at the end of the test. Since the filter still met the minimum allowed efficiency of 99.97% after 166 hours of exposure, we recommend replacing the filter every 160 hours of operation or sooner if the pressure drop increases by 50%. Examination of the filter showed that visible cracks appeared at the joints of the wooden frame and a portion of the sealant had pulled away from the frame. Since all of the DMSO will be trapped in the first HEPA filter, the second HEPA filter should not suffer from DMSO degradation. Thus the combined efficiency for the first filter (98.6%) and the second filter (99.97%) is 99.99958% for 0.3 μm particles. If the first filter is replaced prior to its degradation, each of the filters will have 99.97% efficiency, and the combined efficiency will be 99.999991%. The collection efficiency for DMSO/HE aerosols will be much higher because the particle size is much greater.

  19. Efficient simulations of fluid flow coupled with poroelastic deformations in pleated filters

    KAUST Repository

    Calo, Victor M.

    2015-04-27

    Pleated filters are broadly used for various applications. In certain cases, especially in solid-liquid separation, the filtering media may deflect, and that may change the overall performance characteristics of the filter. From the modeling point of view, this is a challenging multiphysics problem, namely the interaction of the fluid with a so-called poroelastic structure. This work focuses on the development of an algorithm for the simulation of the Fluid Porous Structure Interaction (FPSI) problem in the case of pleated filtering media. The first part of the work is concerned with the development of a robust and accurate numerical method for solving the Stokes-Brinkman system of equations on quadrilateral grids. The mathematical model describes a free fluid flow coupled with a flow in porous media in a domain that contains the filtering media. To discretize the complex computational domain we use quadrilateral boundary-fitted grids which resolve porous-fluid interfaces. The Stokes-Brinkman system of equations is discretized here using a sophisticated finite volume method, namely the multi-point flux approximation (MPFA) O-method. MPFA is widely used, e.g., in solving scalar elliptic equations with full tensor and highly varying coefficients and/or solving on heterogeneous non-orthogonal grids. To the authors' knowledge, there has been no investigation of MPFA discretization for Stokes-Brinkman problems, and this study aims to fill this gap. Some numerical experiments are presented in order to demonstrate the robustness of the proposed numerical algorithm [1]. The second part of this study focuses on the coupling of the flow model with the deflection of the filtering media. For the consideration of the FPSI problem in 3D, the classical Biot system describes coupled flow and deformations in a porous body due to the difference in the upstream and downstream pressures. Solving the Biot system of equations is complicated and requires a significant amount of
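For reference, a standard form of the Stokes-Brinkman system that such discretizations target (the conventional textbook form, not reproduced from the paper): u is the velocity, p the pressure, μ the viscosity, and K the permeability tensor; the K⁻¹ term vanishes in the free-flow region, recovering Stokes flow, and dominates inside the porous medium, recovering Darcy-like behaviour.

```latex
\begin{aligned}
  -\mu \Delta \mathbf{u} + \mu K^{-1} \mathbf{u} + \nabla p &= \mathbf{f}, \\
  \nabla \cdot \mathbf{u} &= 0 .
\end{aligned}
```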

  20. DOE standard: Quality assurance inspection and testing of HEPA filters

    International Nuclear Information System (INIS)

    1999-02-01

    This standard establishes essential elements for the quality assurance inspection and testing of HEPA filters by US Department of Energy (DOE)-accepted Filter Test Facilities (FTF). The standard specifies HEPA filter quality assurance inspection and testing practices established in DOE-STD-3022-98, DOE HEPA Filter Test Program, and provides a basis for the preparation of written operating procedures for primary FTF functions

  1. Fundamental study on recovery uranium oxide from HEPA filters

    International Nuclear Information System (INIS)

    Izumida, T.; Noguchi, Y.

    1993-01-01

    Large numbers of spent HEPA filters are produced at uranium fuel fabrication facilities. Uranium oxide particles have been collected on these filters. A spent HEPA filter treatment system was therefore developed from the viewpoint of recovering the UO₂ and minimizing waste volume. The system consists of a mechanical separation process and a chemical dissolution process. This paper describes the results of fundamental experiments on recovering UO₂ from HEPA filters

  2. Qualification of box HEPA filters for nuclear applications

    International Nuclear Information System (INIS)

    Bergman, W.; Larsen, G.; Wilson, K.; Rainer, F.

    1995-03-01

    We have successfully completed qualification tests on high efficiency particulate air (HEPA) filters that are encapsulated within a box and manufactured by American Air Filters. The qualification tests are required by the American Society of Mechanical Engineers Standard ASME N509 and the U.S. Military Standard MIL-F-51068 for HEPA filters to be used in nuclear applications. The qualification tests specify minimum filter efficiencies following exposure to heated air, overpressure, and rough handling. Prior to this study, no box HEPA filters from any manufacturer had been qualified despite their widespread use in Department of Energy (DOE) facilities. Box HEPA filters are not addressed in any of the existing HEPA standards and only briefly discussed in the Nuclear Air Cleaning Handbook

  3. Performance of HEPA filters under severe conditions, 3

    International Nuclear Information System (INIS)

    Osaki, Makoto; Zanma, Tokugo; Kanagawa, Akira.

    1986-01-01

    Performance of high efficiency particulate air (HEPA) filters at temperatures from ambient to 240 deg C was measured to prove that HEPA filters maintained their specified decontamination factor (DF) at elevated temperatures. The DF for NaCl aerosol was measured by using a laser particle spectrometer. The pressure drop of HEPA filters at elevated temperatures was also measured. The DF increased at elevated temperatures; the DF at 200 deg C was an order of magnitude higher than that at ambient temperature. The change of DF at elevated temperatures for various HEPA filters was effectively evaluated by using the ratio of single fiber collection efficiencies at ambient temperature to those at elevated temperatures. The pressure drop of HEPA filters also increased at elevated temperatures; the pressure drop at 200 deg C was 1.3 times larger than that at ambient temperature. The change of DF and pressure drop at elevated temperatures was explained by applying Kirsh's theory at elevated temperatures. (author)
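The reported rise in pressure drop with temperature is broadly consistent with viscous flow through the media, where pressure drop at fixed volumetric flow scales roughly with air viscosity. The sketch below uses Sutherland's law for air to show the magnitude of that effect; this scaling argument is ours, not the paper's, and it ignores the accompanying shifts in density and flow rate, so exact agreement with the reported 1.3 factor is not expected:

```python
# Sutherland's law for the dynamic viscosity of air; constants are the
# commonly tabulated values (mu_ref at 273.15 K, Sutherland constant 110.4 K).

def air_viscosity_pa_s(t_kelvin, mu_ref=1.716e-5, t_ref=273.15, s=110.4):
    return mu_ref * (t_kelvin / t_ref) ** 1.5 * (t_ref + s) / (t_kelvin + s)

# Viscosity ratio between 200 C and 20 C, a rough proxy for the pressure
# drop increase in viscous flow through the filter media at fixed flow:
ratio = air_viscosity_pa_s(473.15) / air_viscosity_pa_s(293.15)
print(f"viscosity ratio 200 C vs 20 C: {ratio:.2f}")
```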

  4. Improved remote HEPA filtration development program

    International Nuclear Information System (INIS)

    Wilson, C.E. III.

    1987-03-01

    This paper presents a summary of the prototype development and hot cell mock-up testing program undertaken to adapt a commercial remote HEPA filter housing for use in the Process Facility Modification Project (PFMP). This program was initiated in response to the project design criteria and documentation that required the air from the hot cell environment to be exhausted through three stages of HEPA filtration. Due to the anticipated quantity of radioactive contamination captured by the first stage of filters, it was determined that the first stage would need to be located in a remotely operated and maintained shielded cell adjoining the primary hot cell areas. Commercially available remote filtration equipment was evaluated and a candidate unit was identified that could be developed into a suitable filter housing. The candidate unit was obtained from Flanders Filters, Inc., and a series of hot cell mock-up tests was identified for the 305 facility at the Hanford site. The results of these tests, and further interaction with the vendor, led to a prototype remote filter housing which satisfied most PFMP criteria and proved to be significantly superior to existing commercial units for remote operation/maintenance

  5. Self Cleaning HEPA Filtration without Interrupting Process Flow

    International Nuclear Information System (INIS)

    Wylde, M.

    2009-01-01

    The strategy of protecting the traditional glass fibre HEPA filtration train from its blinding contamination, and of recovering dust by means of self cleaning pre-filtration, is a proven means of reducing ultimate disposal volumes and has been used within the fuel production industry. However, there is an increasing demand in nuclear applications requiring elevated operating temperatures, fire resistance, moisture resistance and chemical compositions that existing glass fibre HEPA filtration cannot accommodate, which can be remedied by the use of a metallic HEPA filter medium. Previous research (Bergman et al 1997, Moore et al 1992) suggests that the cost to the DOE at that time, based on a five year life cycle, was $29.5 million for the installation, testing, removal and disposal of glass fibre HEPA filtration trains. Within these costs, $300 was the value given to the filter and $4,450 was given to the peripheral activity. Development of a low cost, cleanable, metallic, direct replacement for the traditional filter train would be the clear solution. The Bergman et al work suggested that a 1,000 ft³/min cleanable stainless HEPA filter could be commercially available for $5,000 each, whereas the industry has determined that the truer cost of such an item in isolation would be closer to $15,000. This results in a conflict within the requirement between 'low cost' and 'stainless HEPA'. By proposing a system that combines metallic HEPA filtration with the ability to self clean without interrupting the process flow, the need for a traditional HEPA filtration train is eliminated, dramatically reducing the resources required for cleaning or disposal and thus presenting a route to reducing ultimate costs. The paper examines the performance characteristics, filtration efficiency, flow versus differential pressure, and cleanability of a self cleaning HEPA grade sintered metal filter element, together with data to support the contention. (authors)
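A back-of-envelope version of the cost argument above. The $300 filter and $4,450 peripheral-activity figures come from the abstract; the change-out count and the single-purchase cleanable scenario are illustrative assumptions, not data from the paper:

```python
# Illustrative life-cycle cost comparison: every change-out of a disposable
# filter pays the filter price plus the handling/testing/disposal overhead,
# while a cleanable metal filter is (in this assumed scenario) bought once.

def life_cycle_cost(unit_cost, peripheral_cost, changeouts):
    """Total cost over the cycle: change-outs * (filter + peripheral activity)."""
    return changeouts * (unit_cost + peripheral_cost)

# Assumed: five change-outs of a disposable filter over the cycle, versus a
# single purchase of a $15,000 cleanable metal filter with no change-outs.
disposable = life_cycle_cost(unit_cost=300, peripheral_cost=4450, changeouts=5)
cleanable = life_cycle_cost(unit_cost=15000, peripheral_cost=0, changeouts=1)
print(f"disposable train: ${disposable:,}  cleanable metal: ${cleanable:,}")
```

The point the sketch illustrates is the abstract's: the peripheral activity, not the filter itself, dominates the disposable-train cost, so a pricier cleanable element can still come out ahead.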

  6. Remote aerosol testing of large size HEPA filter banks

    International Nuclear Information System (INIS)

    Franklin, B.; Pasha, M.; Bronger, C.A.

    1987-01-01

    Different methods of testing HEPA filter banks are described. Difficulties in remote testing of large banks of HEPA filters in series with minimum distances between banks, and with no available access upstream and downstream of the filter house, are discussed. Modifications incorporated to make the filter system suitable for remote testing without personnel re-entry into the filter house are described for a 51,000 m³/hr filter unit at the WIPP site

  7. HEPA Filter Performance under Adverse Conditions

    International Nuclear Information System (INIS)

    Parsons, Michael; Hogancamp, Kristina; Alderman, Steven; Waggoner, Charles

    2007-01-01

    This study involved challenging nuclear grade high-efficiency particulate air (HEPA) filters under a variety of conditions that can arise in Department of Energy (DOE) applications, such as low or high RH, controlled and uncontrolled challenge, and filters with physically damaged media or seals (i.e., leaks). Reported findings correlate filter function as measured by traditional differential pressure techniques with simultaneous instrumental determination of upstream and downstream PM concentrations. Additionally, emission rates and failure signatures are discussed for filters that have either failed or exceeded their usable lifetime. Significant findings from this effort include the use of thermocouples upstream and downstream of the filter housing to detect the presence of moisture. Also demonstrated in the moisture challenge series of tests is the effect of repeated wetting of the filter, which produces a phenomenon referred to as transient failure before the tensile strength of the media weakens to the point of physical failure. An evaluation of the effect of the particle size distribution of the challenge aerosol on the loading capacity of filters is also included. Results for soot and two size distributions of KCl are reported. Loading capacities for filters ranged from approximately 70 g of soot to nearly 900 g for the larger particle size distribution of KCl. (authors)

  8. HEPA-filter smoke plugging problem

    International Nuclear Information System (INIS)

    Gaskill, J.R.; Magee, M.W.

    1975-01-01

    Actual experiences indicate that during the early stages of a fire, pyrolysis and incomplete combustion of organic materials used in the furnishings or interior finishes of laboratories yield copious quantities of smoke particulates, both liquid and solid. Furthermore, the use of fire retardants in materials used for the above purpose interferes with the combustion process, so that burning of such materials in later stages of a fire will yield dense smoke. These particulates can plug up a HEPA filter or even a more porous prefilter, and thus effectively shut off the exhaust ventilation. In this case, the fire room will pressurize and contamination may spread in an uncontrolled manner. Both small- and large-scale tests have been conducted to evaluate the nature and degree of the problem as a function of materials involved, rate of exposure to the fire, and kinds and temperatures of smoke so generated. Some test work has also been done on scrubbing of smoke. Proposed future work is described. (U.S.)

  9. Performance of multiple HEPA filters against plutonium aerosols

    International Nuclear Information System (INIS)

    Gonzales, M.; Elder, J.; Ettinger, H.

    1975-01-01

    Performance of multiple stages of High Efficiency Particulate Air (HEPA) filters against aerosols similar to those produced by plutonium processing facilities has been verified as part of an experimental program. A system of three HEPA filters in series was tested against ²³⁸PuO₂ aerosol concentrations as high as 3.3 × 10¹⁰ d/s·m³. An air nebulization aerosol generation system, using ball-milled plutonium oxide suspended in water, provided test aerosols with size characteristics similar to those defined by a field sampling program at several different AEC plutonium processing facilities. Aerosols have been produced ranging from 0.22 μm activity median aerodynamic diameter (amad) to 1.6 μm amad. The smaller size distributions yield 10 to 30 percent of the total activity in the less than 0.22 μm size range, allowing efficiency measurement as a function of size for the first two HEPA filters in series. The low level of activity on the sampler downstream of the third HEPA filter (approximately 0.01 c/s) precludes aerosol size characterization downstream of this filter. For the first two HEPA filters, overall efficiency, and efficiency as a function of size, exceeds 99.98 percent, including the <0.12 μm and the 0.12 to 0.22 μm size intervals. Efficiency of the third HEPA filter is somewhat lower, with an overall average efficiency of 99.8 percent and an apparent minimum efficiency of 99.5 percent. This apparently lower efficiency is an artifact of the low level of activity on the sampler downstream of HEPA No. 3 and the resulting counting-statistics variations. Recent runs with higher concentrations, which improve the counting statistics, show efficiencies well within minimum requirements. (U.S.)
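
    The series-stage figures above follow the usual multiplicative penetration model. A minimal sketch, assuming each stage collects independently (the stage efficiencies are the values reported in the abstract; the independence assumption is the textbook idealization, not a claim of the paper):

    ```python
    # Penetration of independent filter stages in series multiplies.
    # Stage efficiencies (%) are taken from the abstract: two stages at
    # 99.98% and the third at 99.8%.

    def penetration(efficiency_pct: float) -> float:
        """Fraction of challenge aerosol passing a single stage."""
        return 1.0 - efficiency_pct / 100.0

    stages = [99.98, 99.98, 99.8]
    overall_pen = 1.0
    for eff in stages:
        overall_pen *= penetration(eff)

    overall_eff = (1.0 - overall_pen) * 100.0
    print(f"{overall_pen:.2e}")   # 8.00e-11
    ```

    An overall penetration near 8 × 10⁻¹¹ illustrates why size characterization downstream of the third stage was limited by sampler activity rather than by the filters.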

  10. Experimental investigation of in situ cleanable HEPA filter

    International Nuclear Information System (INIS)

    Adamson, D.J.

    1999-01-01

    The Westinghouse Savannah River Company, located at the Savannah River Site (SRS) in Aiken, South Carolina, is currently testing the feasibility of developing an in situ cleanable high efficiency particulate air (HEPA) filter system. Sintered metal filters are being tested for regenerability or cleanability under simulated conditions found in a high level waste (HLW) tank ventilation system. The filters are being challenged using materials found in HLW tanks: simulated HLW salt, simulated HLW sludge, and South Carolina road dust. Various cleaning solutions have been used to clean the filters in situ. The tanks are equipped with a ventilation system that maintains the tank contents at negative pressure to prevent the release of radioactive material to the environment. This system is equipped with conventional disposable glass-fiber HEPA filter cartridges. Removal and disposal of these filters is not only costly, but subjects site personnel to radiation exposure and possible contamination. A test apparatus was designed to simulate the ventilation system of a HLW tank with an in situ cleaning system. Test results indicate that the Mott sintered metal HEPA filter is suitable as an in situ cleanable or regenerable HEPA filter. Data indicate that high humidity or water did not affect the filter performance, and the sintered metal HEPA filter was easily cleaned numerous times back to new-filter performance by an in situ spray system. The test apparatus allows the cleaning of the soiled HEPA filters to be accomplished without removing the filters from process. This innovative system would eliminate personnel radiation exposure associated with removal of contaminated filters and the high costs of filter replacement and disposal. The results of these investigations indicate that an in situ cleanable HEPA filter system for radioactive and commercial use could be developed and manufactured

  11. Experience with HEPA filters at United States nuclear installations

    International Nuclear Information System (INIS)

    Bellamy, R.R.

    1977-01-01

    Part 50 of Title 10 of the United States Code of Federal Regulations requires that a number of atmosphere cleanup systems be included in the design of commercial nuclear power plants to be licensed in the United States. These filtering systems are to contain high efficiency particulate air (HEPA) filters for removal of radioactive particulate matter generated during normal and accident conditions. Recommendations for the design, testing and maintenance of the filtering systems and HEPA filter components are contained in a number of United States Nuclear Regulatory Commission documents and industry standards. This paper will discuss this published guidance available to designers of filtering systems and the plant operators of U.S. commercial nuclear power plants. The paper will also present a survey of published reports of experience with HEPA filters, failures and possible causes for the failures, and other abnormal occurrences pertaining to HEPA filters installed in U.S. nuclear power installations. A discussion will be included of U.S. practices for qualification of HEPA filters before installation, and verification of continued performance capability at scheduled intervals during operation

  12. Ceramic High Efficiency Particulate Air (HEPA) Filter Final Report CRADA No. TC02160.0

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bergman, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-25

    The technical objective of this project was to develop a ceramic HEPA filter technology, by initially producing and testing coupon ceramics, small scale prototypes, and full scale prototype HEPA filters, and to address relevant manufacturing and commercialization technical issues.

  13. Determination of HEPA Filter Efficiency With Dioctyl Phthalate Aerosol

    International Nuclear Information System (INIS)

    Bunawas; Ruslanto, P O; Suhariyono, G

    1996-01-01

    Ultrafine aerosol filtration by a HEPA (High Efficiency Particulate Air) filter has been determined experimentally, based on the measurement of monodisperse Dioctyl Phthalate (DOP) aerosol concentration before and after passing through the test filter. Using this technique, filter efficiency can be determined as a function of aerosol diameter over the range from 0.017 to 0.747 μm. The average efficiencies for the Whatman-41, Whatman-42 and Whatman GF/A filters were 56.14%, 95.74% and 99.65% respectively. The Gelman A fiber glass and Whatman membrane filters fulfilled the criterion for a HEPA filter according to the IAEA standard, because of their minimum efficiency of 99.90%
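
    The before-and-after concentration measurement described above reduces to a one-line efficiency formula. A minimal sketch; the function name and the sample concentrations are illustrative, chosen so the result matches the 99.65% reported for the Whatman GF/A filter:

    ```python
    # Collection efficiency from upstream/downstream aerosol concentrations,
    # as in the DOP penetration test described in the abstract.

    def filter_efficiency(c_up: float, c_down: float) -> float:
        """Efficiency in percent from concentration measurements."""
        return (1.0 - c_down / c_up) * 100.0

    # Illustrative numbers: a filter passing 0.35% of the challenge aerosol.
    print(round(filter_efficiency(10_000.0, 35.0), 2))  # 99.65
    ```

    Repeating this calculation at each monodisperse aerosol diameter gives the efficiency-versus-size curve the abstract describes.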

  14. Evaluation of self-contained HEPA filter

    Energy Technology Data Exchange (ETDEWEB)

    Arndt, T.E. [Westinghouse Hanford Company, Richland, WA (United States)

    1995-02-01

    This paper presents the results of an evaluation of a self-contained high-efficiency particulate air (SCHEPA) filter used in nuclear applications. A SCHEPA consists of filter medium encapsulated in a casing that is part of the system boundary. The SCHEPA filter serves as a combination of filter housing and filter. The filter medium is attached directly to the casing using adhesive as a bonding agent. A cylindrical connection in the middle of the end caps connects the filter assembly to adjoining ductwork. The SCHEPA must perform the functions of a filter housing, filter frame, and filter. It was recognized that the codes and standards do not address the SCHEPA specifically. Therefore, the investigation evaluated the SCHEPA against current codes and standards related to the functional requirements of an air-cleaning system. The specific standards used are required by DOE Order 6430.1A{sup 1} and include ASME N509{sup 3}, ASME N510{sup 4}, ERDA 76-21{sup 5}, MIL-F-51068F{sup 6}, NFPA 90A,{sup 7} and NFPA 91{sup 8}. The evaluation does not address whether the SCHEPA as a standard (off-the-shelf) filter could be upgraded to meet the current code requirements for an air-cleaning unit. The evaluation also did not consider how the SCHEPA was used in a system (e.g., whether it was under positive or negative pressure or whether it served as an air inlet filter to prevent contamination releases under system pressurization). The results of the evaluation show that the SCHEPA filter does not meet the design, fabrication, testing, and documentation requirements of ASME N509{sup 3} and ASME N510{sup 4}. The paper identifies these deficiencies. Specific exhaust system requirements and application should be considered when an evaluation of the SCHEPA filter is being performed in existing systems. When new designs are being contemplated, other types of HEPA filter housings can be used in lieu of the SCHEPA filter.

  15. Effect of age on the structural integrity of HEPA filters

    International Nuclear Information System (INIS)

    Johnson, J.S.; Beason, D.G.; Smith, P.R.; Gregory, W.S.

    1989-01-01

    All of the controls on high-efficiency particulate air (HEPA) filters are based on rigid manufacturing standards with regard to filtration efficiency, temperature performance, pressure integrity, and strength. Third-party inspection and testing by the US Department of Energy increases the reliability of new HEPA filters, but only routine in-place testing is used to assure that an aging filter performs adequately. In 1980 the Lawrence Livermore National Laboratory initiated a small evaluation to determine if age has a significant effect on the structural integrity of HEPA filters. A series of used uncontaminated filters dating back to 1965 was obtained for these tests. Tensile strength tests on the old media indicated a decrease in strength. To provide additional measurement of the filters' overall strength, several of these aged filters were subjected to pressure pulses equivalent to the NRC Region I tornado pulses and shock wave overpressures. Data from these tests indicate a decrease in breaking pressure of 25-50%. A large increase in complete filter pack blow-out during the simulated NRC Region I tornado tests was also observed. The preliminary results indicate the need for an administrative lifetime for HEPA filters used in critical nuclear facilities. Due to the unique conditions in each facility, different administrative lifetimes may be necessary

  16. ALTERNATE HIGH EFFICIENCY PARTICULATE AIR (HEPA) FILTRATION SYSTEM

    Energy Technology Data Exchange (ETDEWEB)

    Bruce Bishop; Robert Goldsmith; Karsten Nielsen; Phillip Paquette

    2002-08-16

    In Phase IIA of this project, CeraMem has further developed and scaled up ceramic HEPA filters that are appropriate for use in filtration of vent gas from HLW tanks at DOE sites around the country. This work included procuring recrystallized SiC monoliths, developing membrane and cement materials, and defining a manufacturing process for the production of prototype full-size HEPA filters. CeraMem has demonstrated that prototype full-size filters can be manufactured by producing 9 full-size filters that passed DOP aerosol testing at the Oak Ridge Filter Test Facility. One of these filters was supplied to the Savannah River Technical Center (SRTC) for process tests using simulated HLW tank waste. SRTC has reported that the filter was regenerable (with some increase in pressure drop) and that the filter retained its HEPA retention capability. CeraMem has also developed a Regenerable HEPA Filter System (RHFS) design and acceptance test plan that was reviewed by DOE personnel. The design and acceptance test plan form the basis of the system proposal for follow-on work in Phase IIB of this project.

  17. Evaluation of data from HEPA filter quality assurance testing stations

    International Nuclear Information System (INIS)

    Collins, J.T.; Bellamy, R.R.; Allen, J.R.

    1979-01-01

    In Revision 1 to Regulatory Guide 1.52, issued in July 1976, the NRC recommended that high efficiency particulate air (HEPA) filters for use in engineered safety feature (ESF) atmosphere cleanup systems be visually inspected and dioctyl phthalate (DOP) tested at either of two Department of Energy (DOE) operated QA Filter Testing Stations prior to their installation and use in commercial nuclear power plants. This practice was initiated because filter vendors were unable to consistently provide a HEPA filter that would meet the stringent requirements established by DOE and NRC and its predecessor, the AEC. In 1977, the NRC staff undertook a program to revise Regulatory Guide 1.52 to reflect recently issued industry standards (e.g., ANSI N509 and N510) and current industry practices. Revision 2 to Regulatory Guide 1.52 was formally issued in March 1978. In conducting this review, the recommendation that HEPA filters intended for use in ESF systems in commercial nuclear power plants be routinely tested at the DOE QA Filter Testing Stations was reevaluated. As part of this evaluation, a detailed analysis of the filter test results recorded by the two QA Testing Stations during the period 1971 to 1977 was conducted. This paper summarizes the results of the analysis and explains the rationale for deleting the requirement that all HEPA filters intended for use in ESF systems be tested at the QA Testing Stations

  18. ALTERNATE HIGH EFFICIENCY PARTICULATE AIR (HEPA) FILTRATION SYSTEM

    International Nuclear Information System (INIS)

    Bruce Bishop; Robert Goldsmith; Karsten Nielsen; Phillip Paquette

    2002-01-01

    In Phase IIA of this project, CeraMem has further developed and scaled up ceramic HEPA filters that are appropriate for use in filtration of vent gas from HLW tanks at DOE sites around the country. This work included procuring recrystallized SiC monoliths, developing membrane and cement materials, and defining a manufacturing process for the production of prototype full-size HEPA filters. CeraMem has demonstrated that prototype full-size filters can be manufactured by producing 9 full-size filters that passed DOP aerosol testing at the Oak Ridge Filter Test Facility. One of these filters was supplied to the Savannah River Technical Center (SRTC) for process tests using simulated HLW tank waste. SRTC has reported that the filter was regenerable (with some increase in pressure drop) and that the filter retained its HEPA retention capability. CeraMem has also developed a Regenerable HEPA Filter System (RHFS) design and acceptance test plan that was reviewed by DOE personnel. The design and acceptance test plan form the basis of the system proposal for follow-on work in Phase IIB of this project

  19. A review of DOE HEPA filter component test activities

    Energy Technology Data Exchange (ETDEWEB)

    Slawski, J.W.; Bresson, J.F. [Informatics Corp., Inc., Albuquerque, NM (United States); Scripsick, R.C. [Los Alamos National Lab., NM (United States)

    1997-08-01

    All HEPA filters purchased for installation in DOE nuclear facilities are required to be tested at a Filter Test Facility (FTF) prior to installation. The number of HEPA filters purchased by DOE has been reduced so much that the Hanford FTF was closed. From Fiscal Year (FY) 1992 to 1994, funding was not provided to the FTF Technical Support Group (TSG) at the Los Alamos National Laboratory. As a consequence, Round Robin Tests (RRTs), performed twice each year by the FTFs to assess consistency of test results among the FTFs, were not performed in FY 1992 and FY 1993. The Annual Reports of FTF test activities were not prepared for FY 1992 - 1995. Technical support provided to the FTFs was minimal. There is talk of closing a second FTF, and ongoing discussions as to whether DOE will continue to fund operation of the FTFs. In FY 1994, DOE Defense Programs commenced funding the TSG. RRT data for FY 1994 and 1995 have been entered into the database; the FY 1994 RRT report has been issued; and the FY 1995 RRT report is in progress. Data from semiannual reports have been retrieved and entered into the database. Standards related to HEPA filter test and procurement activities are now scheduled for issuance by FY 1996. Continuation of these activities depends on whether DOE will continue to support the HEPA filter test program. The history and activities of the FTFs and the TSG at Los Alamos have been reported at previous Air Cleaning Conferences. Data from the FY 1991 Annual Report of FTF activities were presented at the 1992 Air Cleaning Conference. Preparation of the Annual Reports was temporarily suspended in 1992. However, all of the FTF semiannual report data have been retrieved and entered into the database. This paper focuses primarily on the results of HEPA filter tests conducted by FTFs during FY 1992 - FY 1995, and the possible effects of the DOE program uncertainties on the quality of HEPA filters for installation at the DOE sites. 15 refs., 13 tabs.

  20. Review of Department of Energy HEPA filter test activities

    International Nuclear Information System (INIS)

    McIntyre, J.A.

    1993-01-01

    Filter Test Facilities (FTFs) and the FTF Technical Support Group (TSG) continue to provide services to the Department of Energy (DOE). Additional tasks relating to the HEPA filter cycle have been added to the TSG. The tasks include the quality assessment review for the in-place testing of HEPA filters at DOE sites and the formation of an in-place testing standards writing group. A summary of ongoing FTF and TSG activities for FY 1990-FY 1992 is presented, including the technical input for implementation of the High Flow Alternative Test System (HFATS), updates to the DOE Standards, and the status of the quality assessment review and the in-place testing standards writing group

  1. Performance of multiple HEPA filters against plutonium aerosols

    International Nuclear Information System (INIS)

    Gonzales, M.; Elder, J.C.; Tillery, M.I.; Ettinger, H.J.

    1976-11-01

    Performance of multiple stages of high-efficiency particulate air (HEPA) filters has been verified against plutonium aerosols similar in size characteristics to those challenging the air-cleaning systems of plutonium-processing facilities. An experimental program was conducted to test each filter in systems of three HEPA filters operated in series against ²³⁸PuO₂ aerosols as high as 3.3 × 10¹⁰ dis/s·m³ in activity concentration and ranging from 0.22 μm to 1.6 μm in activity median aerodynamic diameter (amad). Mean penetration (ratio of downstream to upstream concentration) of each of the three filters in series was below 0.0002, but it apparently increased at each successive filter. Penetration vs size measurements showed that maximum penetration of ²³⁸PuO₂ occurred for sizes between 0.4 and 0.7 μm aerodynamic diameter (D_ae). HEPA filter penetration at half of rated flow differed little from full-flow penetration

  2. The impact of metallic filter media on HEPA filtration

    International Nuclear Information System (INIS)

    Chadwick, Chris; Kaufman, Seth

    2006-01-01

    Traditional HEPA filter systems have limitations that often prevent them from solving many of the filtration problems in the nuclear industry, particularly in applications where long service or storage life, high levels of radioactivity, dangerous decomposition products, chemical aggression, organic solvents, elevated operating temperatures, fire resistance and resistance to moisture are issues. This paper addresses several of these matters of concern by considering the use of metallic filter media to solve HEPA filtration problems ranging from the long term storage of transuranic waste at the WIPP site, spent and damaged fuel assemblies, glove box ventilation and tank venting, to the venting of fumes at elevated temperatures from incinerators, vitrification processes and conversion and sintering furnaces, as well as downstream of iodine absorbers in gas cooled reactors in the UK. The paper reviews the basic technology, development, performance characteristics and filtration efficiency, flow versus differential pressure, cleanability and costs of sintered metal fiber in comparison with traditional resin bonded glass fiber filter media and sintered metal powder filter media. Examples of typical filter element and system configurations and applications will be presented. The paper will also address the economic case for installing self-cleaning pre-filtration, using metallic media, to recover the small volumes of dust that would otherwise blind large volumes of final disposable HEPA filters, thus presenting a route to reduced ultimate disposal volumes and secondary waste streams. (authors)

  3. Structural testing of salt loaded HEPA filters for WIPP

    International Nuclear Information System (INIS)

    Smith, P.R.; Leslie, I.H.; Hensel, E.C.; Shultheis, T.M.; Walls, J.R.

    1993-01-01

    The ventilation studies of the Waste Isolation Pilot Plant described in this paper were performed by personnel from New Mexico State Univ. in collaboration with Sandia National Laboratories, Los Alamos National Laboratory and Westinghouse Corporation. High efficiency particulate air filters (0.61 m by 0.61 m by 0.3 m) of the type in use at the Waste Isolation Pilot Plant were loaded with salt aerosol provided from that site. The structural strength of salt-loaded, high-efficiency filters was investigated at two humidity levels, high (75% RH) and low (13-14% RH), by subjecting the filters to pressure transients of the types expected from tornadoes. Filters loaded under the high humidity condition proved to have a greater structural strength than did the filters loaded under the low humidity conditions, when both types were subjected to tornado-like pressure pulses. This unexpected result was apparently due to the crystallization of salt upon the wire face guard of the HEPA filter loaded under the high humidity condition, which kept salt from penetrating the filter medium while still producing a substantial pressure drop at the standard flow rate. Results are also presented for HEPA filters pre-conditioned at 100% RH before structural testing and for HEPA filters in series with pre-filters

  4. Response of HEPA filters to simulated-accident conditions

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; Smith, P.R.; Fenton, D.E.

    1982-01-01

    High-efficiency particulate air (HEPA) filters have been subjected to simulated accident conditions to determine their response to abnormal operating events. Both domestic and European standard and high-capacity filters have been evaluated to determine their response to simulated fire, explosion, and tornado conditions. The HEPA filter structural limitations for tornado and explosive loadings are discussed. In addition, filtration efficiencies during these accident conditions are reported for the first time. Our data indicate efficiencies between 80% and 90% for shock loadings below the structural limit level. We describe two types of testing for ineffective filtration - clean filters exposed to pulse-entrained aerosol and dirty filters exposed to tornado and shock pulses. Efficiency and material loss data are described. Also, the response of standard HEPA filters to simulated fire conditions is presented. We describe a unique method of measuring accumulated combustion products on the filter. Additionally, data relating pressure drop to accumulated mass during plugging are reported for simulated combustion aerosols. The effects of concentration and moisture levels on filter plugging were evaluated. We are obtaining all of the above data so that mathematical models can be developed for fire, explosion, and tornado accident analysis computer codes. These computer codes can be used to assess the response of nuclear air cleaning systems to accident conditions
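
    Pressure drop versus accumulated mass data of the kind described above are often summarized as a clean-filter resistance plus a loading term. A minimal sketch, assuming a linear loading law with purely illustrative coefficients (real smoke-plugging curves are generally nonlinear, as the concentration and moisture effects in the abstract suggest):

    ```python
    # Illustrative linear loading model: dp = dp_clean + k * deposited mass.
    # dp_clean and k are made-up coefficients, not values from the paper.

    def pressure_drop(mass_g: float, dp_clean: float = 1.0,
                      k: float = 0.05) -> float:
        """Differential pressure (in. w.g.) after depositing mass_g grams."""
        return dp_clean + k * mass_g

    # Mass needed to reach an assumed 10 in. w.g. concern threshold:
    mass_at_limit = (10.0 - 1.0) / 0.05
    print(round(mass_at_limit))  # 180
    ```

    A correlation of this shape, fitted per aerosol type and humidity, is the kind of building block the abstract's accident-analysis computer codes would use for the plugging term.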

  5. A user's evaluation of radial flow HEPA filters

    International Nuclear Information System (INIS)

    Purcell, J.A.

    1992-07-01

    High efficiency particulate air (HEPA) filters of rectangular cross section have been used to remove particulates and the associated radioactivity from air ventilation streams since the advent of nuclear materials processing. Use of round axial flow HEPA filters is also longstanding. The advantages of radial flow filters in a circular configuration have been well demonstrated in UKAEA during the last 5--7 years. An evaluation of radial flow filters for fissile process gloveboxes reveals several substantial benefits in addition to the advantages claimed in UKAEA facilities. The radial flow filter may be provided in a favorable geometry, resulting in improved criticality safety. The filter configuration lends itself to in-place testing at the glovebox-to-exhaust-duct interface. This will achieve compliance with DOE Order 6430.1A, Section 99.0.2. Preliminary testing at SRS of radial flow filters manufactured by Flanders Filters, Inc. revealed compliance with all the usual specifications for filtration efficiency, pressure differential and materials of construction. An evaluation, further detailed in this report, indicates that the radial flow HEPA filter should be considered for inclusion in new ventilation system designs

  6. Closure of 324 Facility potential HEPA filter failure unreviewed safety questions

    International Nuclear Information System (INIS)

    Enghusen, M.B.

    1997-01-01

    This document summarizes the activities which occurred to resolve an Unreviewed Safety Question (USQ) for the 324 Facility [Waste Technology Engineering Laboratory] involving a potential HEPA filter breach. The facility ventilation system was capable of failing the HEPA filters under accident conditions in which the filters become totally plugged. The ventilation system fans were modified to lower fan operating parameters and prevent HEPA filter failures that might occur during accident conditions

  7. Use of evidence in 3 local level HEPA policies in Denmark

    DEFF Research Database (Denmark)

    Jakobsen, Mette Winge; Juel Lau, Cathrine; Skovgaard, Thomas

    2013-01-01

    ...activity (HEPA) policies in 7 countries. This presentation draws on the Danish results of the policy analyses. Focus is on the use and the type of research used in three local level HEPA policies in Denmark. Methods: Three municipal level policies were selected for further investigation. Document analysis ... of relevant evidence for HEPA, resources as well as organizational structure, culture and capacity. Discussion: Our insight into the actual impact of research in HEPA policy making is still sketchy. However, projects such as REPOPA will help to further our understanding of how research and other kind ...

  8. Behavior of HEPA filters under high humidity airflows

    International Nuclear Information System (INIS)

    Ricketts, C.I.

    1992-10-01

    To help determine and improve the safety margins of High Efficiency Particulate Air (HEPA) filter units in nuclear facilities under possible accident conditions, the structural limits and failure mechanisms of filters in high-humidity airflows were established and the fundamental physical phenomena underlying filter failure or malfunction in humid air were identified. Empirical models for increases in filter pressure drop with time, in terms of the relevant airstream parameters, were also developed. The weaknesses of currently employed humidity countermeasures used in filter protection are discussed and fundamental explanations for reported filter failures in normal service are given. (orig./DG)
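
    Empirical pressure-drop-versus-time models of the kind mentioned above can be applied as simple correlations. A purely illustrative sketch, assuming an exponential form with made-up coefficients; the actual models in the report are expressed in terms of the relevant airstream parameters and are not reproduced here:

    ```python
    import math

    # Illustrative correlation: pressure drop growing exponentially with
    # exposure time in humid airflow. dp0 and tau are made-up parameters,
    # not values from the report.

    def dp_humid(t_min: float, dp0: float = 1.0, tau: float = 30.0) -> float:
        """Pressure drop (in. w.g.) after t_min minutes of high-humidity flow."""
        return dp0 * math.exp(t_min / tau)

    print(round(dp_humid(60.0), 2))  # 7.39
    ```

    Comparing such a predicted rise against a filter's structural limit is one way a correlation like this supports the safety-margin assessment the abstract describes.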

  9. Development of acid-resistant HEPA filter components

    International Nuclear Information System (INIS)

    Terada, K.; Woodard, R.W.; Buttedahl, O.I.

    1981-01-01

    Laboratory and in-service tests of various HEPA filter media and separators were conducted to establish their relative resistances to HNO 3 -HF vapors. Filter medium of glass fiber with Nomex additive and aluminum separators with an epoxy-vinyl coating have performed quite well in the acid environment in the laboratory, and in prototype-filters placed in service in a plenum at Rocky Flats. Proprietary filters with new design and/or components were also tested in service with generally good results

  10. Viral Penetration of High Efficiency Particulate Air (HEPA) Filters

    Science.gov (United States)

    2007-02-01

PVC tubing (Excelon® RNT, US Plastics, Lima, Ohio). Each path runs through a test article and thence through an AGI-30 all-glass impinger (Chemglass...a mechanical flow meter (Blue-White 400, Huntington Beach, California, or PMR1-101346, Cole-Parmer, Vernon Hills, Illinois). At the end of the...fibrous Filters." Air Pollution Control Association 30(4): 377-381. Leenders, G. J. M. and J. H. Stadhouders (1980s). "Effectiveness of HEPA

  11. Development and evaluation of a HEPA filter for increased strength and resistance to elevated temperature

    International Nuclear Information System (INIS)

    Gilbert, H.; Bergman, W.; Fretthold, J.K.

    1992-01-01

We have developed an improved HEPA filter for increased strength and resistance to elevated temperature to improve the reliability of HEPA filters under accident conditions. The improvements to the HEPA filter consist of a silicone rubber sealant and a new HEPA medium reinforced with a glass cloth. Several prototype filters were built and evaluated for temperature resistance, pressure resistance, and resistance to rough handling. The temperature resistance test consisted of exposing the HEPA filter to 1,000 scfm of airflow at 700 degrees F for five minutes. The pressure resistance test consisted of exposing the HEPA filter to a differential pressure of 10 in. w.g. using a water-saturated air flow at 95 degrees F. For the rough handling test, we used a vibrating machine designated the Q110. DOP filter efficiency tests were performed before and after each of the environmental tests. In addition to following the standard practice of using a separate new filter for each environmental test, we also subjected the same filter to the elevated temperature test followed by the pressure resistance test. The efficiency test results show that the improved HEPA filter is significantly better than the standard HEPA filter

  12. Requirements for a cleanable steel HEPA filter derived from a systems analysis

    International Nuclear Information System (INIS)

    Bergman, W.

    1996-06-01

A systems analysis was conducted to determine customer requirements for a cleanable high efficiency particulate air (HEPA) filter in DOE Environmental Management (EM) facilities. The three principal drivers for cleanable steel HEPA filters are large cost savings, improved filter reliability, and new regulations; together they give DOE customers a strong incentive to use cleanable steel HEPA filters. Input on customer requirements was obtained from field trips to EM sites and from discussions. Most existing applications require that cleanable steel HEPA filters meet the size/performance requirements of standard glass HEPA filters; applications in new facilities can relax size/weight/pressure drop requirements on a case-by-case basis. We then obtained input from commercial firms on the availability of cleanable steel HEPA filters. The systems analysis showed that currently available technology was only able to meet customer needs in a limited number of cases. Further development is needed to meet the requirements of EM customers. For cleanable steel HEPA filters to be retrofitted into existing systems, pressure drop and weight must be reduced. Pressure drop can be reduced by developing steel fiber media from 0.5 μm diameter steel fibers. Weight can be reduced by packaging the steel fiber media in one of the standard HEPA configurations. Although most applications will be able to use standard 304 or 316L alloys, an acid-resistant alloy such as Hastelloy or Inconel will be needed for incinerators and other thermal processes

  13. Transient Heating and Thermomechanical Stress Modeling of Ceramic HEPA Filters

    Energy Technology Data Exchange (ETDEWEB)

    Bogle, Brandon [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kelly, James [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Haslam, Jeffrey [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-29

The purpose of this report is to showcase an initial finite-element analysis model of a ceramic High-Efficiency Particulate Air (HEPA) filter design. Next-generation HEPA filter assemblies are being developed at LLNL to withstand high-temperature fire scenarios by use of ceramics and advanced materials. The filters are meant for use in radiological and nuclear facilities, and are required to survive 500°C fires of one hour duration. Collecting data under such conditions and varying parameters can be challenging; therefore, a finite element analysis model of the filter was built in COMSOL® Multiphysics to analyze the effects of fire. Finite Element Analysis (FEA) modelling offers several opportunities: researchers can quickly and easily consider the impacts of potential design changes, material selection, and flow characterization on filter performance. Specifically, this model provides stress references for the sealant at high temperatures. Modeling of full filter assemblies was deemed inefficient given the computational requirements, so a section of three tubes from the assembly was modeled. The model examined the transient heating and thermomechanical stress development during a 500°C air flow at 6 CFM. Significant stresses were found at the ceramic-metal interfaces of the filter, and conservative temperature profiles at locations of interest were plotted. The model can be used for the development of sealants that minimize stresses at the ceramic-metal interface. Further work on the model would include the full filter assembly and consider heat losses to make more accurate predictions.

  14. Improved HEPA Filter Technology for Flexible and Rigid Containment Barriers

    International Nuclear Information System (INIS)

    Pinson, Paul Arthur

    1998-01-01

    Safety and reliability in glovebox operations can be significantly improved and waste packaging efficiencies can be increased by inserting flexible, lightweight, high capacity HEPA filters into the walls of plastic sheet barriers. This HEPA filter/barrier technology can be adapted to a wide variety of applications: disposable waste bags, protective environmental barriers for electronic equipment, single or multiple use glovebag assemblies, flexible glovebox wall elements, and room partitions. These reliable and inexpensive filtered barriers have many uses in fields such as radioactive waste processing, HVAC filter changeout, vapor or grit blasting, asbestos cleanup, pharmaceutical, medical, biological, and electronic equipment containment. The applications can result in significant cost savings, improved operational reliability and safety, and total waste volume reduction. This technology was developed at the Argonne National Laboratory-West (ANL-W) in 1993 and has been used at ANL-W since then at the TRU Waste Characterization Chamber Gloveboxes. Another 1998 AGS Conference paper titled ''TRU Waste Characterization Gloveboxes'', presented by Mr. David Duncan of ANL-W, describes these boxes

  15. HEPA Filter Disposal Write-Up 10/19/16

    Energy Technology Data Exchange (ETDEWEB)

    Loll, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-10-20

Process knowledge (PK) collection on HEPA filters is handled via the same process as other waste streams at LLNL. The field technician or characterization point of contact creates an information gathering document (IGD) in the IGD database, with input provided from the generator, and submits it for electronic approval. This document is essentially a waste generation profile, detailing the physical, chemical, and radiological characteristics and hazards of a waste stream. It will typically contain a general, but sometimes detailed, description of the work processes which generated the waste. It will contain PK as well as radiological and industrial hygiene analytical swipe results, and any other analytical or supporting knowledge related to characterization. The IGD goes through an electronic approval process to formalize the characterization and to ensure the waste has an appropriate disposal path. The waste generator is responsible for providing initial process knowledge information, and approves the IGD before it is routed to chemical and radiological waste characterization professionals. This is the standard characterization process for LLNL-generated HEPA filters.

  16. Multiple HEPA filter test methods, January--December 1976

    International Nuclear Information System (INIS)

    Schuster, B.; Kyle, T.; Osetek, D.

    1977-06-01

The testing of tandem high-efficiency particulate air (HEPA) filter systems is of prime importance for the measurement of accurate overall system protection factors. A procedure, based on the use of an intra-cavity laser particle spectrometer, has been developed for measuring protection factors in the 10⁸ range. A laboratory scale model of a filter system was constructed and initially tested to determine individual HEPA filter characteristics with regard to size and state (liquid or solid) of several test aerosols. Based on these laboratory measurements, in-situ testing has been successfully conducted on a number of single and tandem filter installations within the Los Alamos Scientific Laboratory as well as on extraordinarily large single systems at Rocky Flats. For the purpose of recovery and for simplified solid waste disposal, or prefiltering purposes, two versions of an inhomogeneous electric field air cleaner have been devised and are undergoing testing. Initial experience with one of the systems, which relies on an electrostatic spraying phenomenon, indicates performance efficiency of greater than 99.9% for flow velocities commonly used in air cleaning systems. Among the effluents associated with nuclear fuel reprocessing is ¹²⁹I. An intra-cavity laser detection system is under development which shows promise of being able to detect mixing ratios of one part in 10⁷ of I₂ in air
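For filters in series, fractional penetrations multiply, which is why tandem testing must resolve protection factors of order 10⁸. The arithmetic can be sketched as below; the stage penetrations are illustrative values, not measurements from the report.

```python
# Hedged sketch: overall protection factor of a tandem HEPA train.
# The stage penetration values are illustrative, not from the paper.

def protection_factor(penetration: float) -> float:
    """Protection factor is the reciprocal of fractional penetration."""
    return 1.0 / penetration

def tandem_protection_factor(penetrations) -> float:
    """For filters in series, penetrations multiply, so the overall
    protection factor is the product of the individual stage factors."""
    overall_penetration = 1.0
    for p in penetrations:
        overall_penetration *= p
    return 1.0 / overall_penetration

# Two HEPA stages, each passing 0.03% of the challenge aerosol (99.97% efficient):
stages = [3e-4, 3e-4]
print(f"{tandem_protection_factor(stages):.2e}")  # ~1.11e+07
```

Even two nominally rated stages land just above 10⁷, which is why measuring such systems pushes instrumentation into the 10⁸ range mentioned in the abstract.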

  17. Preliminary studies to determine the shelf life of HEPA filters

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, H.; Fretthold, J.K.; Rainer, F. [Lawrence Livermore National Laboratory, CA (United States)] [and others

    1995-02-01

    We have completed a preliminary study using filter media tests and filter qualification tests to investigate the effect of shelf-life on HEPA filter performance. Our media studies showed that the tensile strength decreased with age, but the data were not sufficient to establish a shelf-life. Thermogravimetric analyses demonstrated that one manufacturer had media with low tensile strength due to insufficient binder. The filter qualification tests (heated air and overpressure) conducted on different aged filters showed that filter age is not the primary factor affecting filter performance; materials and the construction design have a greater effect. An unexpected finding of our study was that sub-standard HEPA filters have been installed in DOE facilities despite existing regulations and filter qualification tests. We found that the filter with low tensile strength failed the overpressure test. The same filter had passed the heated air test, but left the filter so structurally weak, it was prone to blow-out. We recommend that DOE initiate a filter qualification program to prevent this occurrence.

  18. Self Cleaning High Efficiency Particulate Air (HEPA) Filtration without Interrupting Process Flow - 59347

    International Nuclear Information System (INIS)

    Chadwick, Chris

    2012-01-01

The strategy of protecting the traditional glass fibre HEPA filtration train from its blinding contamination, and of recovering dust by means of self-cleaning pre-filtration, is a proven means of reducing ultimate disposal volumes and has been used within the fuel production industry. However, there is an increasing demand in nuclear applications requiring elevated operating temperatures, fire resistance, moisture resistance and chemical compatibility that existing glass fibre HEPA filtration cannot accommodate, which can be remedied by the use of a metallic HEPA filter medium. Previous research suggests that the then costs to the Department of Energy (DOE), based on a five-year life cycle, were $29.5 million for the installation, testing, removal and disposal of glass fibre HEPA filtration trains. Within these costs, $300 was the value given to the filter and $4,450 was given to the peripheral activity. Development of a low cost, cleanable, metallic, direct replacement for the traditional filter train would be the clear solution. The Bergman et al work suggested that a 1,000 ft³/min, cleanable, stainless HEPA could be commercially available for $5,000 each, whereas the industry has determined that the truer cost of such an item in isolation would be closer to $15,000. This results in a conflict within the requirement between 'low cost' and 'stainless HEPA'. By proposing a system that combines metallic HEPA filtration with the ability to self-clean without interrupting the process flow, the need for a traditional HEPA filtration train is eliminated, dramatically reducing the resources required for cleaning or disposal and presenting a route to reduced ultimate costs. The paper will examine the performance characteristics, filtration efficiency, flow versus differential pressure, and cleanability of a self-cleaning HEPA-grade sintered metal filter element, together with data to prove the contention. (authors)

  19. Investigation of HEPA filters subjected to tornado pressure pulses

    International Nuclear Information System (INIS)

    Gregory, W.S.; Horak, H.L.; Smith, P.R.; Ricketts, C.

    1977-03-01

    An experimental program is described that will determine the response of 0.6-x 0.6-m (24-x 24-in.) high-efficiency particulate air (HEPA) filters to tornado-induced pressure transients. A blow-down system will be used to impose pressure differentials across the filters. Progress in construction of this system is reported with a description of the component parts and their functions. The test facility is essentially complete with the exception of an air dryer system that has not yet been delivered. Initial structural testing will begin in March 1977. A description is given of the instrumentation needed to measure air pressure, velocity, turbulence, humidity and particulate concentration. This instrumentation includes pressure transducers, humidity equipment, laser Doppler velocimeters (LDV), signal processors and a data acquisition system. Operational theory of the LDV and its proposed use as a particle counting device are described

  20. Investigation and deactivation of B Plant HEPA filters

    International Nuclear Information System (INIS)

    Roege, P.E.

    1997-01-01

This paper describes the integrated approach used to manage environmental, safety, and health considerations related to the B Plant canyon exhaust air filters at the US Department of Energy (DOE) Hanford Site. The narrative illustrates the development and implementation of integrated safety management as applied to a facility and its systems undergoing deactivation. During their lifetime, the high efficiency particulate air (HEPA) filters prevented the release of significant quantities of radioactive materials into the air. As the material in the B Plant AVESF accumulated on the filters, it created an unusual situation. Over long periods of time, the radiation dose from the filter loading, combined with aging and chemical exposure, actually degraded the very filters that were intended to protect against any release to the environment

  1. Method for HEPA filter leak scanning with differentiating aerosol detector

    Energy Technology Data Exchange (ETDEWEB)

    Kovach, B.J.; Banks, E.M.; Wikoff, W.O. [NUCON International, Inc., Columbus, OH (United States)

    1997-08-01

While scanning HEPA filters for leaks with 'Off the Shelf' aerosol detection equipment, the operator's scanning speed is limited by the time constant and threshold sensitivity of the detector. This is based on detection of the aerosol density, where the maximum signal is achieved when the scanning probe resides over the pinhole longer than several detector time-constants. Since the differential value of the changing signal can be determined by observing only the first small fraction of the rising signal, using a differentiating amplifier will speed up the locating process. The other advantage of differentiation is that slow signal drift or zero offset will not interfere with the process of locating the leak, since they are not detected. A scanning hand-probe attachable to any NUCON® Aerosol Detector displaying the combination of both aerosol density and differentiated signal was designed. 3 refs., 1 fig.
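The advantage of differentiation can be shown with a toy discrete-time model: a slow zero drift produces a near-zero derivative, while a pinhole produces a sharp rise that trips a slope threshold long before the density signal itself saturates. This is a hedged sketch, not the NUCON design; the function name and all signal values are invented for illustration.

```python
# Illustrative sketch (not the NUCON circuit): flag a leak from the
# derivative of the aerosol-density signal rather than its absolute level.

def leak_from_derivative(signal, dt, slope_threshold):
    """Return the first sample index where the rate of rise exceeds the
    threshold. Slow drift or zero offset gives a small derivative and is
    ignored; a pinhole produces a sharp rise that triggers immediately."""
    for i in range(1, len(signal)):
        if (signal[i] - signal[i - 1]) / dt > slope_threshold:
            return i
    return None  # no leak found along this scan path

# Slow drift (0.01 units/s) for 5 s, then a sharp rise (2.0 units/s):
dt = 0.1
samples = [0.01 * dt * i for i in range(50)] + [0.005 + 2.0 * dt * i for i in range(20)]
print(leak_from_derivative(samples, dt, slope_threshold=1.0))  # 51
```

The drift portion never exceeds the slope threshold, so only the genuine rising edge is reported, which is the behaviour the abstract attributes to the differentiating amplifier.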

  2. Penetration of HEPA filters by alpha recoil aerosols

    International Nuclear Information System (INIS)

    McDowell, W.J.; Seeley, F.G.; Ryan, M.T.

    1976-01-01

The self-scattering of alpha-active substances has long been recognized and is attributed to the expulsion of aggregates of atoms from the surface of alpha-active materials by alpha-emission recoil energy, and perhaps to further propulsion of these aggregates by subsequent alpha recoils. Workers at the University of Lowell recently predicted that this phenomenon might affect the retention of alpha-active particulate matter by HEPA filters, and found support in experiments with ²¹²Pb. Tests at Oak Ridge National Laboratory have confirmed that alpha-emitting particulate matter does penetrate high-efficiency filter media, such as that used in HEPA filters, much more effectively than do non-radioactive or beta-gamma active aerosols. Filter retention efficiencies drastically lower than the 99.97 percent quoted for ordinary particulate matter were observed with ²¹²Pb, ²⁵³Es, and ²³⁸Pu sources, indicating that the phenomenon is common to all of these and probably to all alpha-emitting materials of appropriate half-life. Results with controlled air flow through filters in series are consistent with the picture of small particles dislodged from the ''massive'' surface of an alpha-active material, and then repeatedly dislodged from positions on the filter fibers by subsequent alpha recoils. The process shows only a small dependence on the physical form of the source material. Oxide dust, nitrate salt, and plated metal all seem to generate the recoil particles effectively. The amount penetrating a series of filters depends on the total amount of activity in the source material, its specific activity, and the length of time of air flow

  3. Survey of life-cycle costs of glass-paper HEPA filters

    International Nuclear Information System (INIS)

    Moore, P.; Bergman, W.; Gilbert, H.

    1992-08-01

We have conducted a survey of the major users of glass-paper HEPA filters in the DOE complex to ascertain the life-cycle costs of these filters. Purchase price of the filters is only a minor portion of the costs; the major expenditures are incurred during the removal and disposal of contaminated filters. Through personal interviews, site visits and completion of questionnaires, we have determined the costs associated with the use of HEPA filters in the DOE complex. The total approximate life-cycle cost for a standard (2 ft x 2 ft x 1 ft) glass-paper HEPA filter is $3,000 for one considered low-level waste (LLW), $11,780 for transuranic (TRU) and $15,000 for high-level waste (HLW). The weighted-average cost for a standard HEPA filter in the complex is $4,753
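The weighted-average figure is straightforward arithmetic once per-category costs and filter counts are known. The sketch below uses the per-category costs from the abstract but purely hypothetical counts, since the survey's actual filter populations are not given here; it illustrates the calculation, not the survey's result.

```python
# Arithmetic sketch of a weighted-average life-cycle cost. Per-category
# costs are from the abstract; the counts are hypothetical placeholders.

costs = {"LLW": 3_000, "TRU": 11_780, "HLW": 15_000}   # $/filter, from the abstract
counts = {"LLW": 850, "TRU": 120, "HLW": 30}           # hypothetical filter counts

total_cost = sum(costs[k] * counts[k] for k in costs)
total_filters = sum(counts.values())
print(f"${total_cost / total_filters:,.0f} per filter")  # $4,414 per filter
```

With different (real) counts the same computation yields the survey's $4,753 weighted average; the dominance of the cheap LLW category keeps the average far below the TRU and HLW costs.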

  4. Criteria for calculating the efficiency of HEPA filters during and after design basis accidents

    International Nuclear Information System (INIS)

    Bergman, W.; First, M.W.; Anderson, W.L.; Gilbert, H.; Jacox, J.W.

    1994-12-01

    We have reviewed the literature on the performance of high efficiency particulate air (HEPA) filters under normal and abnormal conditions to establish criteria for calculating the efficiency of HEPA filters in a DOE nonreactor nuclear facility during and after a Design Basis Accident (DBA). The literature review included the performance of new filters and parameters that may cause deterioration in the filter performance such as filter age, radiation, corrosive chemicals, seismic and rough handling, high temperature, moisture, particle clogging, high air flow and pressure pulses. The deterioration of the filter efficiency depends on the exposure parameters; in severe exposure conditions the filter will be structurally damaged and have a residual efficiency of 0%. Despite the many studies on HEPA filter performance under adverse conditions, there are large gaps and limitations in the data that introduce significant error in the estimates of HEPA filter efficiencies under DBA conditions. Because of this limitation, conservative values of filter efficiency were chosen when there was insufficient data
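The criteria reduce to a threshold rule: compare the DBA pressure drop against a damage threshold for the prevailing environmental conditions, assign 0% efficiency if it is exceeded, and otherwise use a literature efficiency for those conditions. A minimal sketch of that logic follows; the threshold and efficiency numbers are placeholders, since the real values must come from the cited literature for each combination of environmental parameters.

```python
# Minimal sketch of the DBA efficiency criteria's threshold logic.
# All numeric values below are illustrative placeholders, not the
# literature values the criteria actually require.

DAMAGE_THRESHOLD_KPA = {"dry": 10.0, "moist": 2.5}       # placeholder thresholds
LITERATURE_EFFICIENCY = {"dry": 0.9997, "moist": 0.999}  # placeholder efficiencies

def dba_efficiency(pressure_drop_kpa: float, environment: str) -> float:
    """If the DBA pressure drop exceeds the damage threshold for the
    environment, the filter is assumed structurally failed (0% efficiency);
    otherwise a literature efficiency for those conditions applies."""
    if pressure_drop_kpa > DAMAGE_THRESHOLD_KPA[environment]:
        return 0.0
    return LITERATURE_EFFICIENCY[environment]

print(dba_efficiency(3.0, "dry"))    # 0.9997
print(dba_efficiency(3.0, "moist"))  # 0.0
```

The example shows why the environment matters as much as the pressure drop itself: the same 3 kPa transient leaves a dry filter intact but fails one in saturated air, mirroring the conservative treatment the abstract describes.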

  5. HEPA filter testing - Department of Energy Office of Nuclear Energy Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Sherwood, G.L. Jr. [Department of Energy, Washington, DC (United States)

    1995-02-01

    This paper provides the background of, and some results from, a review of HEPA filter testing during 1993 at selected Department of Energy (DOE) facilities. Recommendations for improvements in standards resulting from the review are also presented.

  6. Safety evaluation for packaging (onsite) for the Pacific Northwest National Laboratory HEPA filter box

    International Nuclear Information System (INIS)

    McCoy, J.C.

    1998-01-01

    This safety evaluation for packaging (SEP) evaluates and documents the safe onsite transport of eight high-efficiency particulate air (HEPA) filters in the Pacific Northwest National Laboratory HEPA Filter Box from the 300 Area of the Hanford Site to the Central Waste Complex and on to burial in the 200 West Area. Use of this SEP is authorized for 1 year from the date of release

  7. Extension of the maintenance cycle of HEPA filters by optimization of the technical characteristics of filters and their construction

    International Nuclear Information System (INIS)

    Bella, H.; Stiehl, H.H.; Sinhuber, D.

    1977-01-01

    The knowledge of the parameters of HEPA filters used at present in nuclear plants allows optimization of such filters with respect to flow rate, pressure drop and service life. The application of optimizing new types of HEPA filters of improved performance is reported. The calculated results were checked experimentally. The use of HEPA filters optimized with respect to dust capacity and service life, and the effects of this new type of filter on the reduction of operating and maintenance costs are discussed

  8. Modelling of air flows in pleated filters and of their clogging by solid particles; Modelisation des ecoulements d'air et du colmatage des filtres plisses par des aerosols solides

    Energy Technology Data Exchange (ETDEWEB)

    Del Fabbro, L

    2002-07-01

Air cleaning devices for particle removal are widespread in industry: nuclear, automotive, food, electronics, and others. Many of them are built from pleated porous media to increase the filtration surface and thus reduce the pressure drop at a given air flow. The objective of our work is to remedy an evident lack of knowledge of how the pressure drop of a pleated filter evolves during clogging, and to derive a model for it, based on experiments with industrial filters of the nuclear and automotive types. The resulting model is a function of the characteristics of the filter medium and the pleats, of the solid particles deposited on the filter, of the deposited mass, and of the aeraulic conditions of the air flow. It also depends on clogging data for flat filters of equivalent medium. To develop this pressure drop model, an initial stage characterized, experimentally and numerically, the pressure drop and the distribution of air flow in clean pleated filters of the nuclear type (high efficiency particulate air filters, in glass fibers) and the automotive type (medium efficiency filters, in cellulose fibers). The numerical model revealed the fundamental role played by the aeraulic resistance of the filter medium. From a dimensionless analysis, we established a semi-empirical pressure drop model for a clean pleated filter valid for both types of medium studied; this model serves as the basis for the final clogging model. The study of filter clogging showed the complexity of the phenomenon, which depends mainly on a reduction of the filtration surface. This observation leads us to describe the clogging of pleated filters in three phases. The first two phases are similar to those observed for flat filters, while the last phase corresponds to a reduction of the filtration surface and leads to a sharp increase in the filter pressure drop
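The three-phase behaviour can be caricatured with a Darcy-type toy model in which the deposited cake both adds flow resistance and, beyond a critical loading, shrinks the effective pleat area. This is a hedged illustration with invented constants, not the semi-empirical correlation the thesis develops.

```python
# Toy Darcy-type pressure drop model for a clogging pleated filter.
# All constants are invented for illustration; this is not the
# thesis's actual correlation.

MU = 1.8e-5          # air dynamic viscosity, Pa.s
R_MEDIUM = 5e8       # clean-medium flow resistance, 1/m (placeholder)
ALPHA = 5e10         # specific cake resistance, m/kg (placeholder)
A0 = 20.0            # clean pleated filtration area, m^2
M_CRIT = 0.5         # mass (kg) at which pleat channels begin to close (placeholder)

def pressure_drop(q_m3_s: float, mass_kg: float) -> float:
    # Final clogging phase: effective area shrinks once pleats start filling,
    # which raises face velocity and steepens the pressure-drop curve.
    area = A0 if mass_kg <= M_CRIT else A0 * M_CRIT / mass_kg
    velocity = q_m3_s / area
    cake_resistance = ALPHA * mass_kg / area
    return MU * (R_MEDIUM + cake_resistance) * velocity   # Pa

for m in (0.0, 0.5, 1.0):
    print(f"{m:.1f} kg -> {pressure_drop(0.5, m):.0f} Pa")
```

The printed sequence rises slowly while the area is intact and then accelerates sharply once the area term kicks in, which is the qualitative signature of the third clogging phase described above.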

  9. Mini-pleat filters for improved indoor air quality. Filtri a 'piccole pieghe' per una migliore qualita' dell'aria negli ambienti civili e negli impianti industriali

    Energy Technology Data Exchange (ETDEWEB)

    Zucchelli, D.

    1992-07-01

    Advanced manufacturing techniques applied to the fabrication of air filters have led to the creation of a high quality/efficiency mini-pleat filter which, however, has yet to see wide use in commercial space heating ventilation and air conditioning systems. Now, with greater attention being given to indoor air quality, these high performance filters should see greater market demand. This paper discusses the design and performance characteristics of mini-pleat filters and surveys the range of models currently available on the market.

  10. Characterizing radionuclides in the B Plant HEPA filters

    International Nuclear Information System (INIS)

    Roege, P.E.

    1998-01-01

    B Plant was built during World War II to separate plutonium for nuclear weapons from reactor fuel. Later, the plant was re-equipped and used to separate radioactive fission products from the Hanford Site's nuclear processing waste tanks. The facility is now being deactivated: eliminating, stabilizing, and documenting existing hazards to allow safe surveillance and maintenance pending a final disposition which is yet to be determined. The processing areas of the plant, including process cells and exhaust air system, are heavily contaminated with radioactive cesium and strontium from the tank waste separation process. However, detailed characterization is difficult because many of these areas are inaccessible because of physical barriers and high radiological dose rates. The five existing canyon high efficiency particulate air (HEPA) filters were thought to contain a significant fraction of the inventory, but estimates were highly uncertain. This paper describes the process used to inspect and characterize the radionuclide content in one of these filters. The investigation required a collaborative effort among field and technical personnel. Sophisticated computer modeling and detector technologies were employed in conjunction with sound radiological control and field work practices. The outcome of the effort was a considerable reduction in the filter inventory estimate, accompanied by a greatly improved level of confidence in the data. The information derived from this project will provide a sound basis for future decisions regarding filter disposition

  11. Cost and waste volume reduction in HEPA filter trains by effective pre-filtration

    International Nuclear Information System (INIS)

    Chadwick, Chris; Kaufman, Seth

    2006-01-01

Data published elsewhere (Moore et al., 1992; Bergman et al., 1997) suggests that the then costs of disposable type Glass Fibre HEPA filtration trains to the DOE were USD 55 million per year (based on an average usage of HEPA panels of 11,748 pieces per year between 1987 and 1990), USD 50 million of which was attributable to installation, testing, removal and disposal - although the life cycle costs are themselves based on estimates dating from 1987-1990. The same authors suggest that by 1995 the number of HEPA panels being used had dropped to an estimated 4000 pieces per year due to the ending of the Cold War. The yearly cost to the DOE of 4000 units per year was estimated to be USD 29.5 million using the same parameters that suggested the previously stated USD 55 million for the larger quantity. Within that cost estimate, USD 300 was the value given to the filter and USD 4,450 was given to peripheral activity per filter. Clearly, if the USD 4,450 component could be reduced, tremendous savings could result, in addition to a significant reduction in the legacy burden of waste volumes. This same cost is applied to both the 11,748 and 4000 usage figures. The work up to now has focussed on the development of a low cost, long life (cleanable) direct replacement of the traditional filter train, but this paper will review an alternative strategy, that of preventing the contaminating dust from reaching and blinding the HEPA filters, and thereby removing the need to replace them. What has become clear is that 'low cost' and 'stainless HEPA' are not compatible terms. The original Bergman et al. work suggested that 1,000 ft³/min stainless HEPAs could be commercially available for USD 5,000 each after development (although the USD 70,000 development unit may be somewhat exaggerated - the authors have estimated that development units able to be retrofitted into strengthened standard housings would be available for perhaps USD 30,000).
The likely true cost of such an item produced

  12. Cost and waste volume reduction in HEPA filter trains by effective pre-filtration

    International Nuclear Information System (INIS)

    Chadwick, Chris

    2007-01-01

    Data published elsewhere (Moore, et al., 1992; Bergman et al., 1997) suggests that the then costs of disposable type Glass Fibre HEPA filtration trains to the DOE was $55 million per year (based on an average usage of HEPA panels of 11,748 pieces per year between 1987 and 1990), $50 million of which was attributable to installation, testing, removal and disposal. The same authors suggest that by 1995 the number of HEPA panels being used had dropped to an estimated 4000 pieces per year due to the ending of the Cold War. The yearly cost to the DOE of 4000 units per year was estimated to be $29.5 million using the same parameters that previously suggested the $55 million figure. Within that cost estimate, $300 each was the value given to the filter and $4,450 was given to peripheral activity per filter. Clearly, if the $4,450 component could be reduced, tremendous saving could result, in addition to a significant reduction in the legacy burden of waste volumes. This same cost is applied to both the 11,748 and 4000 usage figures. The work up to now has focussed on the development of a low cost, long life (cleanable), direct replacement of the traditional filter train. This paper will review an alternative strategy, that of preventing the contaminating dust from reaching and blinding the HEPA filters, and thereby removing the need to replace them. What has become clear is that 'low cost' and 'Metallic HEPA' are not compatible terms. The original Bergman et al., 1997 work suggested that 1000 cfm (cubic feet per minute) (1690 m 3 /hr) stainless HEPAs could be commercially available for $5000 each after development (although the $70,000 development unit may be somewhat exaggerated - the authors own company have estimated development units able to be retrofitted into strengthened standard housings would be available for perhaps $30,000). The likely true cost of such an item produced industrially in significant numbers may be closer to $15,000 each. 
That being the case, the

  13. Filter Paper: Solution to High Self-Attenuation Corrections in HEPA Filter Measurements

    International Nuclear Information System (INIS)

    Oberer, R.B.; Harold, N.B.; Gunn, C.A.; Brummett, M.; Chaing, L.G.

    2005-01-01

An 8 by 8 by 6 inch High Efficiency Particulate Air (HEPA) filter was measured as part of a uranium holdup survey in June of 2005, as it has been routinely measured every two months since 1998. Although the survey relies on gross gamma count measurements, this was one of a few measurements that had been converted to a quantitative measurement in 1998. The measurement was analyzed using the traditional Generalized Geometry Holdup (GGH) approach, using HMS3 software, with an area calibration and self-attenuation corrected with an empirical correction factor of 1.06. A result of 172 grams of ²³⁵U was reported. The actual quantity of ²³⁵U in the filter was approximately 1,700 g. Because of this unusually large discrepancy, the measurement of HEPA filters will be discussed. Various techniques for measuring HEPA filters will be described using the measurement of a 24 by 24 by 12 inch HEPA filter as an example. A new method to correct for self-attenuation will be proposed for this measurement. Following the discussion of the 24 by 24 by 12 inch HEPA filter, the measurement of the 8 by 8 by 6 inch filter will be discussed in detail.
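A factor-of-ten underestimate like the one above is what a near-unity empirical correction (1.06) produces when the deposit is actually optically thick. As a rough illustration only (this is the textbook self-absorption factor for a uniform slab source, not the new correction method proposed in the paper), the sketch below shows how quickly the correction grows with optical thickness:

```python
import math

def slab_attenuation_cf(mu_t: float) -> float:
    """Self-attenuation correction factor for a uniform slab source.

    mu_t is the optical thickness (linear attenuation coefficient
    times deposit thickness). CF -> 1 as mu_t -> 0 and grows roughly
    linearly once the deposit is optically thick.
    """
    if mu_t == 0.0:
        return 1.0
    return mu_t / (1.0 - math.exp(-mu_t))

# A thin deposit needs almost no correction; a thick one needs a large one.
print(round(slab_attenuation_cf(0.1), 3))   # about 1.05
print(round(slab_attenuation_cf(5.0), 3))   # about 5.03
```

On this picture, a reported/actual ratio near 10 is consistent with a deposit far outside the thin-sample regime that a fixed 1.06 factor assumes.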

  14. Penetration of HEPA filters by alpha recoil aerosols

    International Nuclear Information System (INIS)

    McDowell, W.J.; Seeley, F.G.; Ryan, M.T.

    1976-01-01

Tests at Oak Ridge National Laboratory confirmed that alpha-emitting particulate matter does penetrate high-efficiency filter medium, identical to that used in HEPA filters, much more effectively than do non-radioactive or beta-gamma active aerosols. Filter retention efficiencies drastically lower than the 99.97 percent quoted for ordinary particulate matter have been observed with ²¹²Pb, ²⁵³Es, and ²³⁸Pu sources, indicating that the phenomenon is common to all of these and probably to all alpha-emitting materials of appropriate half-life. Results with controlled air flow through filters in series are consistent with the picture of small particles dislodged from the "massive" surface of an alpha-active material, and then repeatedly dislodged from positions on the filter fibers, by the alpha recoils. The process shows only a small dependence on the physical form of the source material. Oxide dust, nitrate salt, and plated metal all seem to generate the recoil particles effectively. The amount penetrating a series of filters depends on the total amount of activity in the source material, its specific activity, and the length of time of air flow. Dependence on the air flow velocity is slight. It appears that this phenomenon has not been observed in previous experiments with alpha-active aerosols because the tests did not continue for a sufficiently long time. A theoretical model of the process has been developed, amenable to computer handling, that should allow calculation of the rate constants associated with the transfer through and release of radioactive material from a filter system by this process.
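The transfer picture described above (repeated recoil-driven release and recapture along a series of filters) lends itself to a simple compartment model. The sketch below is a hypothetical discrete-time illustration with arbitrary rate constant `k` and capture fraction `f`, not the rate-constant model developed in the paper:

```python
def recoil_step(m, k=1e-3, f=0.999):
    """One time step of recoil-driven transfer through filters in series.

    m[0] is the source; m[1:] are filter stages. Each compartment
    re-suspends a fraction k of its activity into the air stream, and
    each downstream filter captures a fraction f of the plume reaching
    it. k and f are illustrative values, not fitted constants.
    Returns (new masses, activity escaping past the last stage).
    """
    released = [k * x for x in m]
    new = [x - r for x, r in zip(m, released)]
    plume = 0.0
    for i in range(len(m)):
        plume += released[i]          # stage i's own recoils join the plume
        if i + 1 < len(m):
            captured = f * plume
            new[i + 1] += captured    # deposited on the next stage
            plume -= captured
    return new, plume

# Source plus three filters in series: activity cascades downstream,
# and the cumulative penetration grows with the duration of air flow,
# consistent with the long-exposure observation above.
m, escaped = [1.0, 0.0, 0.0, 0.0], 0.0
for _ in range(1000):
    m, out = recoil_step(m)
    escaped += out
```

After 1000 steps the downstream stages hold progressively less activity (stage 1 > stage 2 > stage 3), yet a small but growing fraction has already escaped past the last filter.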

  15. Pilot-scale tests of HEME and HEPA dissolution process

    Energy Technology Data Exchange (ETDEWEB)

    Qureshi, Z.H.; Strege, D.K.

    1994-06-01

A series of pilot-scale demonstration tests for the dissolution of High Efficiency Mist Eliminators (HEMEs) and High Efficiency Particulate Air filters (HEPA) was performed on a 1/5th linear scale. These fiberglass filters are to be used in the Defense Waste Processing Facility (DWPF) to decontaminate the effluents from the off-gases generated during the feed preparation process and vitrification. When removed, these filters will be dissolved in the Decontamination Waste Treatment Tank (DWTT) using 5 wt% NaOH solution. The contaminated fiberglass is converted to an aqueous stream which will be transferred to the waste tanks. The filter metal structure will be rinsed with process water before its disposal as low-level solid waste. The pilot-scale study reported here successfully demonstrated a simple one-step process using 5 wt% NaOH solution. The proposed process requires the installation of a new water spray ring with 30 nozzles. In addition to the reduced waste generated, the total process time is reduced to only 48 hours (a 66% saving in time). The pilot-scale tests clearly demonstrated that the dissolution process for HEMEs has two stages: chemical digestion of the filter and mechanical erosion of the digested filter. The digestion is achieved by a boiling 5 wt% caustic solution, whereas the mechanical breakdown of the digested filter is successfully achieved by spraying process water on the digested filter. An alternate method of breaking down the digested filter by increased air sparging of the solution was found to be marginally successful at best. The pilot-scale tests also demonstrated that the products of dissolution are easily pumpable by a centrifugal pump.

  16. Pilot-scale tests of HEME and HEPA dissolution process

    International Nuclear Information System (INIS)

    Qureshi, Z.H.; Strege, D.K.

    1994-06-01

A series of pilot-scale demonstration tests for the dissolution of High Efficiency Mist Eliminators (HEMEs) and High Efficiency Particulate Air filters (HEPA) was performed on a 1/5th linear scale. These fiberglass filters are to be used in the Defense Waste Processing Facility (DWPF) to decontaminate the effluents from the off-gases generated during the feed preparation process and vitrification. When removed, these filters will be dissolved in the Decontamination Waste Treatment Tank (DWTT) using 5 wt% NaOH solution. The contaminated fiberglass is converted to an aqueous stream which will be transferred to the waste tanks. The filter metal structure will be rinsed with process water before its disposal as low-level solid waste. The pilot-scale study reported here successfully demonstrated a simple one-step process using 5 wt% NaOH solution. The proposed process requires the installation of a new water spray ring with 30 nozzles. In addition to the reduced waste generated, the total process time is reduced to only 48 hours (a 66% saving in time). The pilot-scale tests clearly demonstrated that the dissolution process for HEMEs has two stages: chemical digestion of the filter and mechanical erosion of the digested filter. The digestion is achieved by a boiling 5 wt% caustic solution, whereas the mechanical breakdown of the digested filter is successfully achieved by spraying process water on the digested filter. An alternate method of breaking down the digested filter by increased air sparging of the solution was found to be marginally successful at best. The pilot-scale tests also demonstrated that the products of dissolution are easily pumpable by a centrifugal pump.

  17. Testing cleanable/reuseable HEPA prefilters for mixed waste incinerator air pollution control systems

    Energy Technology Data Exchange (ETDEWEB)

    Burns, D.B.; Wong, A.; Walker, B.W.; Paul, J.D. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1997-08-01

The Consolidated Incineration Facility (CIF) at the US DOE Savannah River Site is undergoing preoperational testing. The CIF is designed to treat solid and liquid RCRA hazardous and mixed wastes from site operations and clean-up activities. The technologies selected for use in the air pollution control system (APCS) were based on reviews of existing incinerators, air pollution control experience, and recommendations from consultants. This approach resulted in a facility design using experience from other operating hazardous/radioactive incinerators. In order to study the CIF APCS prior to operation, a 1/10 scale pilot facility, the Offgas Components Test Facility (OCTF), was constructed and has been in operation since late 1994. Its mission is to demonstrate the design integrity of the CIF APCS and optimize equipment/instrument performance of the full scale production facility. Operation of the pilot facility has provided long-term performance data of integrated systems and critical facility components. This has reduced facility startup problems and helped ensure compliance with facility performance requirements. Technical support programs assist in assuring all stakeholders the CIF can properly treat combustible hazardous, mixed, and low-level radioactive wastes. High Efficiency Particulate Air (HEPA) filters are used to remove hazardous and radioactive particulates from the exhaust gas stream before it is released into the atmosphere. The HEPA filter change-out frequency has been a potential issue and was the first technical issue to be studied at the OCTF. Tests were conducted to evaluate the performance of HEPA filters under different operating conditions. These tests included evaluating the impact on HEPA life of scrubber operating parameters and the type of HEPA prefilter used. This pilot-scale testing demonstrated satisfactory HEPA filter life when using cleanable metal prefilters and high flows of steam and water in the offgas scrubber. 8 figs., 2 tabs.

  18. Summary of meeting on disposal of LET ampersand D HEPA filters

    International Nuclear Information System (INIS)

    1991-01-01

    This report is a compilation of correspondence between Westinghouse Idaho Nuclear Company and the US EPA over a period of time from 1988 to 1992 (most from 1991-92) regarding waste management compliance with EPA regulations. Typical subjects include: compliance with satellite accumulation requirements; usage of ''Sure Shot'' containers in place of aerosol cans; notice of upcoming recyclable battery shipments; disposition of batteries; HEPA filter leach sampling and permit impacts; functional and operation requirements for the spent filter handling system; summary of meeting on disposal of LET and D HEPA filters; solvent substitution database report; and mercury vapor light analytical testing

  19. Recleaning of HEPA filters by reverse flow - evaluation of the underlying processes and the cleaning technique

    International Nuclear Information System (INIS)

    Leibold, H.; Leiber, T.; Doeffert, I.; Wilhelm, J.G.

    1993-08-01

HEPA filter operation at high concentrations of fine dusts requires the periodic recleaning of the filter units in their service locations. Due to the low mechanical stress induced during the recleaning process, regeneration via low pressure reverse flow is a very suitable technique. Recleanability of HEPA filters had been attained for particle diameters >0.4 μm at air velocities up to 1 m/s, but filter clogging occurred in the case of smaller particles, for which the recleaning forces are too weak.

  20. A single standard for in-place testing of DOE HEPA filters - not

    Energy Technology Data Exchange (ETDEWEB)

    Mokler, B.V. [Los Alamos National Laboratory, NM (United States)

    1995-02-01

This article is a review of arguments against the use of a single standard for in-place testing of DOE HEPA filters. The author feels that the term 'standard' entails mandatory compliance. Additionally, the author feels that the variety of DOE HEPA systems requiring in-place testing is such that the guidance for testing must be written in a permissive fashion, allowing options and alternatives. With this in mind, it is not possible to write a single document entailing mandatory compliance for all DOE facilities.

  1. FULL SCALE REGENERABLE HEPA FILTER DESIGN USING SINTERED METAL FILTER ELEMENTS

    International Nuclear Information System (INIS)

    Gil Ramos; Kenneth Rubow; Ronald Sekellick

    2002-01-01

    A Department of Energy funded contract involved the development of porous metal as a HEPA filter, and the subsequent design of a full-scale regenerable HEPA filtration system (RHFS). This RHFS could replace the glass fiber HEPA filters currently being used on the high level waste (HLW) tank ventilation system with a system that would be moisture tolerant, durable, and cleanable in place. The origins of the contract are a 1996 investigation at the Savannah River Technology Center (SRTC) regarding the use of porous metal as a HEPA filter material. This contract was divided into Phases I, IIA and IIB. Phase I of the contract evaluated simple filter cylinders in a simulated High Level Waste (HLW) environment and the ability to clean and regenerate the filter media after fouling. Upon the successful completion of Phase I, Phase IIA was conducted, which included lab scale prototype testing and design of a full-scale system. The work completed under Phase IIA included development of a full-scale system design, development of a filter media meeting the HEPA filtration efficiency that would also be regenerable using prescribed cleaning procedures, and the testing of a single element system prototype at Savannah River. All contract objectives were met. The filter media selected was a nickel material already under development at Mott, which met the HEPA filtration efficiency standard. The Mott nickel media met and exceeded the HEPA requirement, providing 99.99% removal against a requirement of 99.97%. Double open-ended elements of this media were provided to the Savannah River Test Center for HLW simulation testing in the single element prototype filter. These elements performed well and further demonstrated the practicality of a metallic media regenerable HEPA filter system. An evaluation of the manufacturing method on many elements demonstrated the reproducibility to meet the HEPA filtration requirement. 
The full-scale design of the Mott RHFS incorporated several important

  2. A Method for Cobalt and Cesium Leaching from Glass Fiber in HEPA Filter

    International Nuclear Information System (INIS)

    Kim, Gye Nam; Lee, Suk Chol; Yang, Hee Chul; Yoon, In Ho; Choi, Wang Kyu; Moon, Jei Kwon

    2011-01-01

A great amount of radioactive waste has been generated during the operation of nuclear facilities. Recently, the storage space of the radioactive waste storage facility at the Korea Atomic Energy Research Institute (KAERI) has become almost saturated. At present, therefore, a volume reduction of the wastes in the radioactive waste storage facility is needed. There are about 2,226 sets of spent HEPA filter wastes in the radioactive waste storage facility at KAERI. All these spent filter wastes have been stored in their original form without any treatment. Up to now, a compression treatment of these spent HEPA filters has been carried out to repack the compressed spent HEPA filters into 200 liter drums for volume reduction. The frame and separator are contaminated with a low concentration of nuclides, while the glass fiber is contaminated with a high concentration of nuclides. So, for disposal of the glass fiber to the environment, it should first be leached to lower its radioactive concentration and then stabilized by solidification or similar means. Therefore, it is necessary to develop a leaching process for the glass fiber in a HEPA filter. Leaching is a separation technology, which is often used to remove a metal or a nuclide from a solid mixture with the help of a liquid solvent.

  3. Applied patent RFID systems for building reacting HEPA air ventilation system in hospital operation rooms.

    Science.gov (United States)

    Lin, Jesun; Pai, Jar-Yuan; Chen, Chih-Cheng

    2012-12-01

RFID technology, an automatic identification and data capture technology providing identification, tracing, security and so on, has been widely applied to the healthcare industry in recent years. Employing a HEPA ventilation system in a hospital is a way to ensure healthful indoor air quality to protect patients and healthcare workers against hospital-acquired infections. However, the system consumes a great deal of costly electricity. This study aims to apply RFID technology to offer unique medical staff and patient identification and a reacting HEPA air ventilation system, in order to reduce cost, save energy and prevent the prevalence of hospital-acquired infection. The system, a reacting HEPA air ventilation system, contains RFID tags (for medical staff and patients), sensors, and a reacting system which receives information regarding the number of medical staff and the status of the surgery, and controls the air volume of the HEPA air ventilation system accordingly. A pilot program was carried out in a unit of operation rooms of a medical center with 1,500 beds located in central Taiwan from Jan to Aug 2010. The results found the air ventilation system was able to function much more efficiently with less energy consumed. Furthermore, the indoor air quality remained satisfactory, and hospital-acquired infections or other occupational diseases could be prevented.

  4. Evaluating the Efficiency of Hepatoprotector Hepa Veda in Patients with Liver Pathology

    Directory of Open Access Journals (Sweden)

    Yu.M. Stepanov

    2015-04-01

The article presents the results of an efficiency study of monotherapy with the hepatoprotector Hepa veda in patients with liver pathology. A significant decrease in aminotransferase levels was found in patients with non-alcoholic steatohepatitis, and a tendency to decrease in patients with chronic viral hepatitis C, demonstrating the efficiency of this hepatoprotector.

  5. Particle Removal Efficiency of the Portable HEPA Air Cleaner in a Simulated Hospital Ward

    DEFF Research Database (Denmark)

    Qian, Hua; Li, Yuguo; Sun, Hequan

    2010-01-01

    of beds in an isolation ward is insufficient. An experiment was conducted in a full scale experimental ward with a dimension of 6.7 m × 6 m × 2.7 m and 6 beds to test these hypotheses for a portable HEPA filter. The removal efficiency for different size particles was measured at different locations...

  6. Software Verification and Validation Test Report for the HEPA filter Differential Pressure Fan Interlock System

    International Nuclear Information System (INIS)

    ERMI, A.M.

    2000-01-01

The HEPA Filter Differential Pressure Fan Interlock System PLC ladder logic software was tested using a Software Verification and Validation (V&V) Test Plan as required by the ''Computer Software Quality Assurance Requirements''. The purpose of this document is to report the results of the software qualification.

  7. Alternative strategies to reduce cost and waste volume in HEPA filtration using metallic filter media - 59348

    International Nuclear Information System (INIS)

    Chadwick, Chris

    2012-01-01

Document available in abstract form only. Full text of publication follows: The disposal costs of contaminated HEPA and THE filter elements have proved to be disproportionately high compared with the cost of the elements themselves. Work published elsewhere (Moore et al., 1992; Bergman et al., 1997) suggests that the cost of use of traditional, panel type, glass fibre HEPA filtration trains to the DOE was, during that period, $29.5 million, based on a five year life cycle, and including installation, testing, removal and disposal, life cycle costs being based on estimates dating from 1987-1990. Within that cost estimate, $300 was the value given to the filter and $4,450 was given to the peripheral activity. Clearly, if the $4,450 component could be reduced, tremendous savings could ensue, in addition to the reduction of the legacy burden of waste volume. This issue exists for operators both in the US and in Europe. Costs and monitoring in storage would also be reduced if HEPA filters could be cleaned to a condition where they could either be re-used or decontaminated to the extent that they could be stored as a lower cost wasteform, or if HEPA/THE filter elements were available without any organic content likely to give rise to flammable or explosive decomposition gases during long term storage. (author)
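The figures quoted above (a $300 disposable element carrying $4,450 of peripheral cost per change-out, against a cleanable metallic unit priced in this collection at roughly $15,000) can be put into a back-of-envelope break-even sketch. The per-cleaning cost below is a hypothetical assumption for illustration, not a published figure:

```python
def disposable_cost(changeouts, unit=300.0, peripheral=4450.0):
    """Cumulative cost of the disposable glass-fibre train after
    n change-outs (element price plus peripheral disposal activity)."""
    return changeouts * (unit + peripheral)

def metallic_cost(changeouts, unit=15000.0, clean=500.0):
    """Cleanable metallic HEPA: one purchase plus an assumed in-place
    regeneration cost per avoided change-out (hypothetical value)."""
    return unit + changeouts * clean

# Break-even: the metallic unit pays for itself after a handful of
# avoided change-outs, despite a ~50x higher purchase price.
breakeven = next(n for n in range(1, 100)
                 if metallic_cost(n) < disposable_cost(n))
print(breakeven)  # 4
```

The point of the sketch is that the peripheral cost dominates the comparison: the break-even count is almost insensitive to the disposable element's $300 price.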

  8. Measurement of gamma activity from the PUREX stack, Number 296-A-10, HEPA filters

    International Nuclear Information System (INIS)

    Barnett, J.M.

    1995-11-01

    In response to the Environmental Protection Agency's requirements for evaluating radioactive emissions from stacks, this test plan was developed. The test plan employs the use of low resolution (NaI) portable gamma spectrometry to identify and measure gamma emitting radionuclides from HEPA filters. The test description, expected results, and test set-up and steps are discussed

  9. Potential for HEPA filter damage from water spray systems in filter plenums

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W. [Lawrence Livermore National Lab., CA (United States); Fretthold, J.K. [Rocky Flats Safe Sites of Colorado, Golden, CO (United States); Slawski, J.W. [Department of Energy, Germantown, MD (United States)

    1997-08-01

The water spray systems in high efficiency particulate air (HEPA) filter plenums that are used in nearly all Department of Energy (DOE) facilities for protection against fire were designed under the assumption that the HEPA filters would not be damaged by the water sprays. The most likely scenario for filter damage involves filter plugging by the water spray, followed by the fan blowing out the filter medium. A number of controlled laboratory tests that were previously conducted in the late 1980s are reviewed in this paper to provide a technical basis for the potential HEPA filter damage by the water spray system in HEPA filter plenums. In addition to the laboratory tests, the scenario for HEPA filter damage during fires has also occurred in the field. A fire in a four-stage HEPA filter plenum at Rocky Flats in 1980 caused the first three stages of HEPA filters to blow out of their housing and the fourth stage to severely bow. Details of this recently declassified fire are presented in this paper. Although these previous findings suggest serious potential problems exist with the current water spray system in filter plenums, additional studies are required to confirm unequivocally that DOE's critical facilities are at risk. 22 refs., 15 figs.

  10. Investigation of water accumulation in an offgas test facility HEPA housing

    International Nuclear Information System (INIS)

    Speed, D.L.; Burns, D.B.; Van Pelt, W.B.; Burns, H.H.

    1997-01-01

    The Consolidated Incineration Facility, at the Department of Energy's Savannah River Site, is designed to treat solid and liquid RCRA hazardous and mixed wastes generated by site operations and clean-up activities. During CIF's pretrial burn campaigns in 1995, an appreciable amount of water was recovered from the HEPA housings. Questions were immediately raised as to the source of the water, and the degree of wetness of the filters during operation. There are two primary issues involved: Water could reduce the life expectancy and performance of the HEPA filters, housing, and associated ducting, and wet HEPAs also present radiological concerns for personnel during filter change-out. A similar phenomenon was noted at the Offgas Components Test Facility (OCTF), a 1/10 scale pilot of CIF's air pollution control system. Tests at OCTF indicated the water's most likely origin to be vapor condensing out from the flue gas stream due to excessive air in-leakage at housing door seals, ducting flanges, and actual holes in the ducting. The rate of accumulation bears no statistical correlation to such process parameters as steam flow, reheater outlet temperature and offgas velocity in the duct. Test results also indicated that the HEPA filter media is moistened by the initial process flow while the facility is being brought on line. However, even when the HEPA filters were manually drenched prior to startup, they became completely dry within four hours of the time steam was introduced to the reheater. Finally, no demonstrable relationship was found between the degree of filter media wetness and filter dP

  11. Predicting mass loading as a function of pressure difference across prefilter/HEPA filter systems

    International Nuclear Information System (INIS)

    Novick, V.J.; Klassen, J.F.; Monson, P.R.

    1992-01-01

    The purpose of this work is to develop a methodology for predicting the mass loading and pressure drop effects on a prefilter/ HEPA filter system. The methodology relies on the use of empirical equations for the specific resistance of the aerosol loaded filter as a function of the particle diameter. These correlations relate the pressure difference across a filter to the mass loading on the filter and accounts for aerosol particle density effects. These predictions are necessary for the efficient design of new filtration systems and for risk assessment studies of existing filter systems. This work specifically addresses the prefilter/HEPA filter Airborne Activity Confinement Systems (AACS) at the Savannah River Plant. In order to determine the mass loading on the system, it is necessary to establish the efficiency characteristics for the prefilter, the mass loading characteristics of the prefilter measured as a function of pressure difference across the prefilter, and the mass loading characteristics of the HEPA filter as a function of pressure difference across the filter. Furthermore, the efficiency and mass loading characteristics need to be determined as a function of the aerosol particle diameter. A review of the literature revealed that no previous work had been performed to characterize the prefilter material of interest. In order to complete the foundation of information necessary to predict total mass loadings on prefilter/HEPA filter systems, it was necessary to determine the prefilter efficiency and mass loading characteristics. The measured prefilter characteristics combined with the previously determined HEPA filter characteristics allowed the resulting pressure difference across both filters to be predicted as a function of total particle mass for a given particle distribution. These predictions compare favorably to experimental measurements (±25%)
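The methodology described above can be illustrated with the commonly used linear loading law, ΔP = ΔP₀ + K₂·W·U (clean pressure drop plus specific cake resistance K₂ times areal mass W times face velocity U), with the prefilter efficiency splitting the challenge mass between the two filters. All coefficients below are hypothetical placeholders, not the measured Savannah River correlations:

```python
def system_dp(m_total, eta_pre=0.60, u=1.0,
              dp0_pre=100.0, k2_pre=5.0e3,
              dp0_hepa=250.0, k2_hepa=4.0e4):
    """Pressure drop (Pa) across a prefilter/HEPA pair in series.

    m_total : challenge mass per unit face area (kg/m^2)
    eta_pre : prefilter mass collection efficiency (splits the load)
    u       : face velocity (m/s); K2 values are cake resistances.
    All coefficient values are illustrative assumptions.
    """
    w_pre = eta_pre * m_total            # mass held by the prefilter
    w_hepa = (1.0 - eta_pre) * m_total   # remainder loads the HEPA
    dp_pre = dp0_pre + k2_pre * w_pre * u
    dp_hepa = dp0_hepa + k2_hepa * w_hepa * u
    return dp_pre + dp_hepa

# With no load the system sits at its clean pressure drop; raising the
# prefilter efficiency shifts mass off the higher-resistance HEPA cake.
print(system_dp(0.0))  # 350.0
```

Because K₂ depends strongly on particle diameter, a real prediction repeats this mass balance per size bin of the aerosol distribution and sums the contributions, which is essentially what the abstract's methodology does.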

  12. Test plan for N2 HEPA filters assembly shop stock used on PFP E4 exhaust system

    International Nuclear Information System (INIS)

    DICK, J.D.

    1999-01-01

At the Plutonium Finishing Plant (PFP) and Plutonium Reclamation Facility (PRF), self-contained HEPA filters, encased in wooden frames and boxes, are installed in the E4 Exhaust Ventilation System to provide confinement of radioactive releases to the environment and confinement of radioactive contamination within designated zones inside the facility. Recently, during routine testing, in-leakage was discovered downstream of the self-contained HEPA filter boxes. This Test Plan describes the approach to investigating the root causes of the in-leakage of the HEPA filters.

  13. Volatility and leachability of heavy metals and radionuclides in thermally treated HEPA filter media generated from nuclear facilities.

    Science.gov (United States)

    Yoon, In-Ho; Choi, Wang-Kyu; Lee, Suk-Chol; Min, Byung-Youn; Yang, Hee-Chul; Lee, Kune-Woo

    2012-06-15

The purpose of the present study was to apply thermal treatments to reduce the volume of HEPA filter media and to investigate the volatility and leachability of heavy metals and radionuclides during thermal treatment. HEPA filter media were transformed to glassy bulk material by thermal treatment at 900°C for 2 h. The most abundant heavy metal in the HEPA filter media was Zn, followed by Sr, Pb and Cr, and the main radionuclide was Cs-137. The volatility tests showed that the heavy metals and radionuclides in radioactive HEPA filter media were not volatilized during the thermal treatment. PCT tests indicated that the leachability of heavy metals and radionuclides was relatively low compared to those of other glasses. XRD results showed that Zn and Cs reacted with HEPA filter media and were transformed into crystalline willemite (ZnO·SiO₂) and pollucite (Cs₂O·Al₂O₃·4SiO₂), which are not volatile or leachable. The proposed technique for the volume reduction and transformation of radioactive HEPA filter media into glassy bulk material is a simple and energy efficient procedure without additives that can be performed at relatively low temperature compared with the conventional vitrification process. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. First Study Of HEPA Filter Prototype Performance To Control The Airborne Pollution

    International Nuclear Information System (INIS)

    Soetomo; Suwarno

    2000-01-01

This paper reports the efficiency test results for a prototype High Efficiency Particulate Air (HEPA) filter for low temperature, intended to control airborne pollution by solid and liquid aerosol particles. The HEPA filter prototype design was based on the characteristic data of the filter material (fibre diameter, density, filter thickness), the air flow rate and the initial pressure drop. In laboratory-scale tests using DOP/PSL aerosol of 0.3 μm diameter at a flow rate of 3.78 m³/min, filtration efficiencies between 89.90 and 99.94% were obtained for filter prototypes A, B, C, and D. The theoretical efficiency estimated with the filtration programme differed from the experiment by about 1%. The efficiency of prototype filter D was not far from that of the AAF-USA filter, and its price is about 30% cheaper than that of the AAF-USA filter.

  15. Review of Department of Energy HEPA filter test activities, FY 1990--FY 1992

    International Nuclear Information System (INIS)

    McIntyre, J.A.

    1992-01-01

Filter Test Facilities (FTFs) and the FTF Technical Support Group (TSG) continue to provide services to the Department of Energy (DOE). Additional tasks relating to the HEPA filter cycle have been added to the TSG. The tasks include the quality assessment review for the in-place testing of HEPA filters at DOE sites and the formation of an in-place testing standards writing group. Ongoing FTF and TSG activities for FY 1990-FY 1992 are summarized, including the technical input for implementation of the High Flow Alternative Test System (HFATS), the update of the DOE standards, and the status of the quality assessment review and the in-place testing standards writing group.

  16. Particle size for greatest penetration of HEPA filters - and their true efficiency

    International Nuclear Information System (INIS)

    da Roza, R.A.

    1982-01-01

The particle size that most readily penetrates a filter is a function of filter media construction, aerosol density, and air velocity. In this paper the published results of several experiments are compared with a modern filtration theory that predicts single-fiber efficiency and the particle size of maximum penetration. For high-efficiency particulate air (HEPA) filters used under design conditions this size is calculated to be 0.21 μm diameter. This is in good agreement with the experimental data. The penetration at 0.21 μm is calculated to be seven times greater than at the 0.3 μm used for testing HEPA filters. Several mechanisms by which filters may have a lower efficiency in use than when tested are discussed.
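The minimum in efficiency arises because diffusion capture falls with particle diameter while interception capture rises, so their sum necessarily dips in between. A schematic sketch with arbitrary illustrative coefficients (not the single-fiber theory used in the paper) reproduces a most-penetrating size near 0.2 μm:

```python
def single_fiber_eff(d_um, a=0.05, b=1.2):
    """Schematic single-fiber efficiency: a diffusion-like term scaling
    as d^(-2/3) (from Peclet-number scaling) plus an interception-like
    term scaling as d^2. Coefficients a and b are arbitrary
    illustrative values, not fitted to any filter."""
    return a * d_um ** (-2.0 / 3.0) + b * d_um ** 2

# Scan ~0.01-2 um on a log-spaced grid and locate the efficiency
# minimum, i.e. the most-penetrating particle size (MPPS).
sizes = [0.01 * 1.15 ** i for i in range(40)]
mpps = min(sizes, key=single_fiber_eff)
print(0.15 < mpps < 0.25)  # True: minimum near ~0.2 um
```

For this power-law form the minimum has a closed form, d* = (a/(3b))^(3/8) ≈ 0.20 μm for the values above, which is why a test aerosol at 0.3 μm overstates the true worst-case efficiency.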

  17. Performance of HEPA Filter Medium under Accidental Conditions in Nuclear Installations

    International Nuclear Information System (INIS)

    El-Fawal, M.M.

    2011-01-01

High Efficiency Particulate Air filters (HEPA filters) are the main components in a ventilation or confinement system for the retention of radioactive particles in nuclear installations. During abnormal conditions or accidents (e.g. a fire accident, criticality in a nuclear fuel cycle facility, or a LOCA in power reactors) the resulting heat, smoke and humidity affect the performance of HEPA filters to a large extent. As part of a research programme aimed at the evaluation and improvement of the performance of HEPA filter media during abnormal conditions, the effect of elevated temperatures up to 400 degree C on the resistance of the medium to penetration of water under pressure has been investigated. The test results showed that the resistance of the medium to penetration of water decreases with increase in temperature and thermal exposure time. This could be attributed to burnout of the organic binder used to improve the resistance of the medium to the penetration of water. The results also showed that at 400 degree C the resistance of the medium to the penetration of water disappeared. This was confirmed by inspection of the filter medium samples after exposure to high temperature using a scanning electron microscope. The inspection of the medium samples showed that the organic binder in the medium was deformed and finally collapsed at 400 degree C. Also, a best estimate model for the relation of filter medium resistance to water penetration under elevated temperature has been implemented. The results of this study can help in establishing regulatory operating limit conditions (OLCs) for HEPA filter operation at high temperature conditions in nuclear installations.

  19. Impact of isomalathion on malathion cytotoxicity and genotoxicity in human HepaRG cells.

    OpenAIRE

Josse, Rozenn; Sharanek, Ahmad; Savary, Camille C; Guillouzo, André

    2014-01-01

International audience; Isomalathion is a major impurity of technical grade malathion, one of the most abundantly applied insecticides; however, little is known about its hepatotoxicity. In the present study, the cytotoxicity and genotoxicity of malathion and isomalathion, either individually or in combination, were assessed using the metabolically competent human liver HepaRG cell line. Isomalathion reduced cell viability starting at a 100 μM concentration after a 24 h exposure. It also significant...

  20. Efficiency and mass loading characteristics of a typical HEPA filter media material

    International Nuclear Information System (INIS)

    Novick, V.J.; Higgins, P.J.; Dierkschiede, B.; Abrahamson, C.; Richardson, W.B.; Monson, P.R.; Ellison, P.G.

    1991-01-01

The particle removal efficiency of the high-efficiency particulate air (HEPA) filter material used at the Savannah River Site was measured as a function of monodisperse particle diameter at two gas filtration velocities. The results indicate that the material meets or exceeds the minimum specified efficiency of 99.97% for all particle diameters at both normal and minimum operating flow conditions encountered at the Savannah River Site. The pressure drop across the HEPA filter material used at the Savannah River Site was measured as a function of particle mass loading for various aerosol size distributions. The pressure drop was found to increase linearly with the particle mass loaded onto the filters, as long as the particles were completely dry. The slope of the curve was found to be dependent on the particle diameter and velocity of the aerosol. The linear behavior between the initial pressure drop (clean filter) and the final pressure drop (loaded filter) implies that the filtration mechanism is dominated by the particle cake that rapidly forms on the front surface of the HEPA filter. This behavior is consistent with the high filtration efficiency of the material
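The linear pressure-drop-versus-loading behavior reported in this record reduces to a two-parameter surface (cake) filtration model. The sketch below uses placeholder values for the clean-filter pressure drop and specific cake resistance, not Savannah River measurements; the abstract's observation that the slope depends on particle size and velocity corresponds to `k2` and `velocity` varying between aerosols.

```python
def pressure_drop(areal_mass, dp_clean=250.0, k2=4.0e6, velocity=0.025):
    """Linear cake-loading model: dP = dP_clean + K2 * U * W, where W is
    the deposited areal mass (kg/m^2).  dp_clean (Pa), k2 (specific cake
    resistance, Pa*s*m/kg) and velocity (m/s) are illustrative values."""
    return dp_clean + k2 * velocity * areal_mass

def mass_to_terminal(dp_terminal, dp_clean=250.0, k2=4.0e6, velocity=0.025):
    """Invert the model: areal mass (kg/m^2) that brings the filter from
    its clean pressure drop to a chosen terminal pressure drop."""
    return (dp_terminal - dp_clean) / (k2 * velocity)
```

The inversion is the practically useful form: given a terminal pressure drop allowed by the ventilation system, it estimates the dust holding capacity per unit media area under these assumptions.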

  1. Multiple HEPA filter test methods, July 1, 1974--March 31, 1975

    International Nuclear Information System (INIS)

    Schuster, B.G.; Osetek, D.J.

    1975-08-01

A laboratory apparatus has been constructed for testing two HEPA filters in a series configuration. The apparatus consists of an instrumented wind tunnel in which the HEPA filters are mounted, and an auxiliary wind tunnel for obtaining diluted samples of the challenge aerosol upstream of the first filter. Measurements performed with a single-particle aerosol spectrometer demonstrate the capability for measuring overall protection factors of greater than 2.5 × 10⁸. The decay of penetration as a function of time in individual HEPA filters indicates no preferential size discrimination of penetration in the range of 0.1 μm to 1.0 μm. A theoretical feasibility study has been performed on the use of an inhomogeneous electric field/induced aerosol electric dipole interaction for potential use as an air cleaning mechanism. Numerical evaluation of a coaxial-cylinder geometry indicates that the method is feasible for collection of particles down to 0.1 μm under typical airflow velocity conditions. Small modifications in the geometry may be incorporated to create an instrument capable of measuring particle size. Geometries other than coaxial cylinders are also under investigation
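The arithmetic behind tandem-filter testing is that, for independent stages challenged in series, the overall protection factor is the product of the stage factors (each the reciprocal of that stage's penetration). The per-stage penetration values below are assumed for illustration; the abstract does not report them.

```python
def overall_protection_factor(stage_penetrations):
    """Protection factor of filter stages in series: PF_total = prod(1/P_i),
    assuming independent stages and the same challenge at each stage."""
    pf = 1.0
    for p in stage_penetrations:
        pf *= 1.0 / p
    return pf

# Two stages, each assumed to pass 1 particle in 10,000 (P = 1e-4)
pf = overall_protection_factor([1e-4, 1e-4])
```

Values of this magnitude explain why the apparatus needs a diluted upstream sample: the downstream count is roughly eight orders of magnitude lower than the undiluted challenge.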

  2. Methods for in-place testing of HEPA and iodine filters used in nuclear power plants

    International Nuclear Information System (INIS)

    Holmberg, R.; Laine, J.

    1978-04-01

The purpose of this work was a general investigation of existing in-place test methods and the construction of equipment for in-place testing of HEPA and iodine sorption filters. The discussion is limited to methods used for in-place testing of HEPA and iodine sorption filters in light-water-cooled reactor plants. Delay systems, built for the separation of noble gases, and their testing are not discussed. Contaminants present in the air of a reactor containment can roughly be divided into three groups: aerosols, reactive gases, and noble gases. The aerosols are filtered with HEPA (High Efficiency Particulate Air) filters. The most important reactive gases are molecular iodine and two of its compounds: hydrogen iodide and methyl iodide. Of the gases to be removed by the filters, methyl iodide is the most difficult to remove, especially at high relative humidities. Impregnated activated charcoal is generally used as the sorption material in the iodine filters. Experience gained from the operation of nuclear power plants shows that the function of high-efficiency air filter systems cannot be considered reliable until proven by in-place tests. The in-place tests in use are basically similar: a known test agent is injected upstream of the filter to be tested, and the efficiency is calculated from air samples taken from both sides of the filter. (author)

  3. Behavior of the polygonal HEPA filter exposed to water droplets carried by the offgas flow

    International Nuclear Information System (INIS)

    Jannakos, K.; Potgeter, G.; Legner, W.

    1991-01-01

A polygonal high-efficiency particulate air (HEPA) filter element has been developed and tested with a view to cleaning the dissolver offgas from reprocessing plants. It is likewise suited to filtering process offgases generated in other plants. Due to its high dew point (about 30 °C), the dissolver offgas, before being directed into the HEPA filter, is heated with a gas heater to approximately 100 °C so that condensation in the pipework upstream of the filter and in the filter proper is avoided. In case of failure of the heater, the offgas may undergo condensation upstream of the HEPA filter until it is bypassed to a standby heater or a standby filter system. Consequently, the filter may be loaded with water droplets. Therefore, experiments have been performed with a view to estimating the behavior of the polygonal filter element when exposed to condensate droplets in a real plant. According to the experiments performed so far, it can be anticipated that in case of failure of the heater, the amount of condensate produced until bypassing to a standby system will not damage a new or lightly loaded polygonal filter element. The experiments will be continued with the goal of investigating the behavior of a heavily loaded polygonal filter element exposed to water droplets

  4. Overexpression of HepaCAM inhibits cell viability and motility through suppressing nucleus translocation of androgen receptor and ERK signaling in prostate cancer.

    Science.gov (United States)

    Song, Xuedong; Wang, Yin; Du, Hongfei; Fan, Yanru; Yang, Xue; Wang, Xiaorong; Wu, Xiaohou; Luo, Chunli

    2014-07-01

    HepaCAM is suppressed in a variety of human cancers, and involved in cell adhesion, growth, migration, invasion, and survival. However, the expression and function of HepaCAM in prostate cancer are still unknown. HepaCAM expression has been detected by RT-PCR, Western blotting and immunohistochemistry staining in prostate cell lines RWPE-1, LNCap, DU145, PC3, and in 75 human prostate tissue specimens, respectively. Meanwhile, the cell proliferation ability was detected by WST-8 assay. The role of HepaCAM in prostate cancer cell migration and invasion was examined by wound healing and transwell assay. And flow cytometry was used to observe the apoptosis of prostate cancer cells. Then we detected changes of Androgen Receptor translocation and ERK signaling using immunofluorescence staining and western blot after overexpression of HepaCAM. The HepaCAM expression was significantly down-regulated in prostate cancer tissues and undetected in prostate cancer cells. However, the low HepaCAM expression was not statistically associated with clinicopathological characteristics of prostate cancer. Overexpression of HepaCAM in prostate cancer cells decreased the cell proliferation, migration and invasion, and induced the cell apoptosis. Meanwhile, HepaCAM prevented the androgen receptor translocation from the cytoplasm to the nucleus and down-regulated the MAPK/ERK signaling. Our results suggested that HepaCAM acted as a tumor suppressor in prostate cancer. HepaCAM inhibited cell viability and motility which might be through suppressing the nuclear translocation of Androgen Receptor and down-regulating the ERK signaling. Therefore, it was indicated that HepaCAM may be a potential therapeutic target for prostate cancer. © 2014 Wiley Periodicals, Inc.

  5. Volatility and leachability of heavy metals and radionuclides in thermally treated HEPA filter media generated from nuclear facilities

    International Nuclear Information System (INIS)

    Yoon, In-Ho; Choi, Wang-Kyu; Lee, Suk-Chol; Min, Byung-Youn; Yang, Hee-Chul; Lee, Kune-Woo

    2012-01-01

Highlights: ► Thermally treated HEPA filter media were transformed into glassy bulk materials. ► The main radionuclide and heavy metal were Cs-137 and Zn. ► Cs and Zn were transformed into stable forms without volatilization or leaching. ► The proposed technique is a simple and energy-efficient procedure. - Abstract: The purpose of the present study was to apply thermal treatments to reduce the volume of HEPA filter media and to investigate the volatility and leachability of heavy metals and radionuclides during thermal treatment. HEPA filter media were transformed to glassy bulk material by thermal treatment at 900 °C for 2 h. The most abundant heavy metal in the HEPA filter media was Zn, followed by Sr, Pb and Cr, and the main radionuclide was Cs-137. The volatility tests showed that the heavy metals and radionuclides in radioactive HEPA filter media were not volatilized during the thermal treatment. PCT tests indicated that the leachability of heavy metals and radionuclides was relatively low compared to those of other glasses. XRD results showed that Zn and Cs reacted with the HEPA filter media and were transformed into crystalline willemite (ZnO·SiO₂) and pollucite (Cs₂O·Al₂O₃·4SiO₂), which are not volatile or leachable. The proposed technique for the volume reduction and transformation of radioactive HEPA filter media into glassy bulk material is a simple and energy-efficient procedure without additives that can be performed at relatively low temperature compared with the conventional vitrification process.

  6. Advantageous use of HepaRG cells for the screening and mechanistic study of drug-induced steatosis

    Energy Technology Data Exchange (ETDEWEB)

    Tolosa, Laia [Unidad de Hepatología Experimental, Instituto de Investigación Sanitaria La Fe, Valencia 46026 (Spain); Gómez-Lechón, M. José [Unidad de Hepatología Experimental, Instituto de Investigación Sanitaria La Fe, Valencia 46026 (Spain); CIBERehd, FIS, Barcelona 08036 (Spain); Jiménez, Nuria [Unidad de Hepatología Experimental, Instituto de Investigación Sanitaria La Fe, Valencia 46026 (Spain); Hervás, David [Biostatistics Unit, Instituto de Investigación Sanitaria La Fe, Valencia 46026 (Spain); Jover, Ramiro [Unidad de Hepatología Experimental, Instituto de Investigación Sanitaria La Fe, Valencia 46026 (Spain); CIBERehd, FIS, Barcelona 08036 (Spain); Departamento de Bioquímica y Biología Molecular, Facultad de Medicina, Universidad de Valencia, Valencia 46010 (Spain); Donato, M. Teresa, E-mail: donato_mte@gva.es [Unidad de Hepatología Experimental, Instituto de Investigación Sanitaria La Fe, Valencia 46026 (Spain); CIBERehd, FIS, Barcelona 08036 (Spain); Departamento de Bioquímica y Biología Molecular, Facultad de Medicina, Universidad de Valencia, Valencia 46010 (Spain)

    2016-07-01

    Only a few in vitro assays have been proposed to evaluate the steatotic potential of new drugs. The present study examines the utility of HepaRG cells as a cell-based assay system for screening drug-induced liver steatosis. A high-content screening assay was run to evaluate multiple toxicity-related cell parameters in HepaRG cells exposed to 28 compounds, including drugs reported to cause steatosis through different mechanisms and non-steatotic compounds. Lipid content was the most sensitive parameter for all the steatotic drugs, whereas no effects on lipid levels were produced by non-steatotic compounds. Apart from fat accumulation, increased ROS production and altered mitochondrial membrane potential were also found in the cells exposed to steatotic drugs, which indicates that all these cellular events contributed to drug-induced hepatotoxicity. These findings are of clinical relevance as most effects were observed at drug concentrations under 100-fold of the therapeutic peak plasmatic concentration. HepaRG cells showed increased lipid overaccumulation vs. HepG2 cells, which suggests greater sensitivity to drug-induced steatosis. An altered expression profile of transcription factors and the genes that code key proteins in lipid metabolism was also found in the cells exposed to drugs capable of inducing liver steatosis. Our results generally indicate the value of HepaRG cells for assessing the risk of liver damage associated with steatogenic compounds and for investigating the molecular mechanisms involved in drug-induced steatosis. - Highlights: • HepaRG cells were explored as an in vitro model to detect steatogenic potential. • Multiple toxicity-related endpoints were analysed by HCS. • HepaRG showed a greater sensitivity to drug-induced steatosis than HepG2 cells. • Changes in the expression of genes related to lipid metabolism were revealed. • HepaRG allow mechanistic understanding of liver damage induced by steatogenic drugs.

  8. In-duct countermeasures for reducing fire-generated-smoke-aerosol exposure to HEPA filters

    International Nuclear Information System (INIS)

    Alvares, N.J.; Beason, D.G.; Ford, H.W.

    1978-01-01

An experimental program was conducted to assess the endurance and lifetime of HEPA filters exposed to fire-generated aerosols, and to reduce the aerosol exposure by installing engineering countermeasures in the duct between the fire source and the HEPA filters. Large cribs of wood and other potential fuels of interest were "forcefully burned" in a partially ventilated enclosure. In a "forceful burn" the crib of fuel is continuously exposed to an energetic premixed methane flame during the entire experimental period. This tactic serves two purposes: it optimizes the production of smoke rich in unburned pyrolyzates, which provides severe exposure to the filters, and it facilitates the ignition and enhances the combustion of cribs formed from synthetic polymers. The experiments were conducted in an enclosure specifically designed and instrumented for fire tests. The test cell has a volume of 100 m³ and includes instrumentation to measure the internal temperature distribution, pressure, thermal radiation field, flow fields, gas concentration, particulate size distribution and mass, fuel weight loss, inlet and exit air velocities, and smoke optical density. The countermeasure techniques include the use of passively operated sprinkler systems in the fire test cell, of fine and dense water scrubbing sprays, and of rolling prefiltration systems in the exit duct of the fire test cell. Of the countermeasures surveyed, the rolling prefilter system showed the most promise. This paper concentrates on the effect of the control variables (enclosure air supply, fuel composition, and crib porosity) on the combustion response (crib burning rate, enclosure temperature rise, oxygen consumption, and CO, CO₂, and total hydrocarbon production). A discussion of attempts to rationalize smoke aerosol properties is included, along with results on the effect of countermeasure application on HEPA filter lifetimes

  9. Three-dimensional HepaRG model as an attractive tool for toxicity testing.

    Science.gov (United States)

    Leite, Sofia B; Wilk-Zasadna, Iwona; Zaldivar, Jose M; Airola, Elodie; Reis-Fernandes, Marcos A; Mennecozzi, Milena; Guguen-Guillouzo, Christiane; Chesne, Christopher; Guillou, Claude; Alves, Paula M; Coecke, Sandra

    2012-11-01

    The culture of HepaRG cells as three dimensional (3D) structures in the spinner-bioreactor may represent added value as a hepatic system for toxicological purposes. The use of a cost-effective commercially available bioreactor, which is compatible with high-throughput cell analysis, constitutes an attractive approach for routine use in the drug testing industry. In order to assess specific aspects of the biotransformation capacity of the bioreactor-based HepaRG system, the induction of CYP450 enzymes (i.e., CYP1A2, 2B6, 2C9, and 3A4) and the activity of the phase II enzyme, uridine diphosphate glucuronoltransferase (UGT), were tested. The long-term functionality of the system was demonstrated by 7-week stable profiles of albumin secretion, CYP3A4 induction, and UGT activities. Immunofluorescence-based staining showed formation of tissue-like arrangements including bile canaliculi-like structures and polar distribution of transporters. The use of in silico models to analyze the in vitro data related to hepatotoxic activity of acetaminophen (APAP) demonstrated the advantage of the integration of kinetic and dynamic aspects for a better understanding of the in vitro cell behavior. The bioactivation of APAP and its related cytotoxicity was assessed in a system compatible to high-throughput screening. The approach also proved to be a good strategy to reduce the time necessary to obtain fully differentiated cell cultures. In conclusion, HepaRG cells cultured in 3D spinner-bioreactors are an attractive tool for toxicological studies, showing a liver-like performance and demonstrating a practical applicability for toxicodynamic approaches.

  10. Multiple HEPA filter test methods. Progress report, January--December 1977

    International Nuclear Information System (INIS)

    Schuster, B.; Kyle, T.; Osetek, D.

    1978-09-01

Tandem high-efficiency particulate air (HEPA) filter efficiency measurements have been successfully performed on a large number of 20,000 CFM installations. The testing procedure relies on the use of a laser intracavity particle spectrometer and a very high-volume thermal dioctyl phthalate aerosol generator designed and constructed specifically for this purpose. For systems that cannot be tested in this fashion, work has been initiated on the generation and detection of a fluorescent self-identifying aerosol to eliminate the background problem. Candidate aerosols and methods to disperse them have been identified. Two distinct detection concepts have evolved for the measurement of the size and concentration of these particles

  11. HEPA filter leaching concept validation trials at the Idaho Chemical Processing Plant

    International Nuclear Information System (INIS)

    Chakravartty, A.C.

    1995-04-01

    The enclosed report documents six New Waste Calcining Facility (NWCF) HEPA filter leaching trials conducted at the Idaho Chemical Processing Plant using a filter leaching system to validate the filter leaching treatment concept. The test results show that a modified filter leaching system will be able to successfully remove both hazardous and radiological constituents to RCRA disposal levels. Based on the success of the filter leach trials, the existing leaching system will be modified to provide a safe, simple, effective, and operationally flexible filter leaching system

  12. Invasive aspergillosis in severely neutropenic patients over 18 years: impact of intranasal amphotericin B and HEPA filtration.

    Science.gov (United States)

    Withington, S; Chambers, S T; Beard, M E; Inder, A; Allen, J R; Ikram, R B; Schousboe, M I; Heaton, D C; Spearing, R I; Hart, D N

    1998-01-01

    The impact of intranasal amphotericin B and high-efficiency particulate air (HEPA) filtration on the incidence of invasive aspergillosis was reviewed in patients from 1977 to 1994 undergoing intensive chemotherapy. Overall, the incidence of proven invasive aspergillosis was reduced from 24.4% (1977-1984) to 7.1% (1985-1991) (P < 0.001) following the introduction of intranasal prophylaxis, but when probable cases of aspergillosis were included and lymphoma cases excluded, there was no change in incidence. Following the introduction of HEPA filtration, patient exposure to aspergillus spores as measured by air sampling was markedly reduced and there were no new cases of invasive aspergillosis. HEPA filtration proved effective in reducing invasive aspergillosis and has allowed increasingly aggressive treatment regimens to be introduced.

  13. Health hazards associated with the use of di-(2-ethylhexyl) phthalate (commonly referred to as DOP) in HEPA filter test

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-01-01

    Di-(2-ethylhexyl) phthalate (DEHP), commonly referred to as di-octyl phthalate, is an important production chemical in the US. In addition to its major use as an additive in plastics, DEHP is widely used to evaluate the effectiveness of high efficiency particulate air (HEPA) filters. Historically, DEHP was also used in quantitative fit testing for respirators. Evaluations of this compound a decade ago showed that it can induce hepatocellular carcinomas in laboratory animals. Although most Department of Energy (DOE) facilities have since discontinued using DEHP in respirator fit testing, DEHP continues to be used for evaluating HEPA filters. This report summarizes available information on the toxicity, mutagenicity, carcinogenicity, and other hazards and problems posed by DEHP, specifically with reference to HEPA filter testing. Information on work practice improvements as well as the availability and suitability of DEHP substitutes are also presented. This material should assist the DOE in the safe use of this material.

  14. Use of sulfuric-nitric acid for the recovery of plutonium from HEPA filters. (620.2, WH001/LWE)

    International Nuclear Information System (INIS)

    Clark, D.E.

    1978-09-01

Contaminated high-efficiency particulate air (HEPA) filter media, containing PuO₂ powder which had been calcined at 700 °C, were treated with concentrated H₂SO₄-HNO₃ at 190 to 200 °C for periods ranging from 0.5 to 2.0 hours. When followed by a dilute HNO₃ rinse, this treatment was shown to be very effective as a plutonium recovery operation (greater than approximately 97% of the plutonium was solubilized). A proposed treatment scheme is given which could provide both a plutonium recovery option for HEPA filters and a reduction in overall waste volume

  15. Proposed retrofit of HEPA filter plenums with injection and sampling manifolds for in-place filter testing

    Energy Technology Data Exchange (ETDEWEB)

Fretthold, J.K. [EG&G Rocky Flats, Inc., Golden, CO (United States)]

    1995-02-01

Testing HEPA filter exhaust plenums with consideration for As Low As Reasonably Achievable (ALARA) exposures will require that new technology be applied to existing plenum designs. HEPA filter testing at Rocky Flats has evolved slowly for a number of reasons. The first plenums were built in the 1950s, preceding many standards. The plenums were large, which caused air dispersal problems. The systems had variable air flow. Access to the filters was difficult. The test methods became extremely conservative, and changes in methods were difficult to make. The acceptance of new test methods has come in recent years with the change in plant mission and the emphasis on worker safety.

  16. Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes

    International Nuclear Information System (INIS)

    Bergman, W.; Elliott, J.; Wilson, K.

    1995-01-01

The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data are available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine if the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed 8.0% and 4.1% of all filters, respectively, developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3% ± 1.7%. The increase in filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system
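The significance argument in this record amounts to asking how many standard deviations each post-earthquake leak rate sits above the 1980-1993 baseline of 3.3% ± 1.7% quoted in the abstract. A minimal sketch, using only the numbers the abstract reports:

```python
def sigmas_above_baseline(observed_pct, baseline_mean_pct=3.3, baseline_sd_pct=1.7):
    """Standard deviations an observed annual leak rate sits above the
    long-run baseline (mean and spread taken from the abstract)."""
    return (observed_pct - baseline_mean_pct) / baseline_sd_pct

z_1980 = sigmas_above_baseline(8.0)   # year after the 1980 earthquake
z_1989 = sigmas_above_baseline(4.1)   # year after the 1989 earthquake
```

The 1980 rate sits well over two standard deviations above the baseline while the 1989 rate sits within one, matching the abstract's conclusion that only the 1980 increase is significant.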

  17. Preliminary studies to determine the shelf life of HEPA filters. Revision 1

    International Nuclear Information System (INIS)

    Gilbert, H.; Fretthold, J.K.; Rainer, F.; Bergman, W.; Beason, D.

    1995-02-01

We have completed a preliminary study using filter media tests and filter qualification tests to investigate the effect of shelf life on HEPA filter performance. Our media studies showed that tensile strength decreased with age, but the data were not sufficient to establish a shelf life. Thermogravimetric analyses demonstrated that one manufacturer had media with low tensile strength due to insufficient binder. The filter qualification tests (heated air and overpressure) conducted on filters of different ages showed that filter age is not the primary factor affecting filter performance; materials and construction design have a greater effect. An unexpected finding of our study was that sub-standard HEPA filters have been installed in DOE facilities despite existing regulations and filter qualification tests. We found that the filter with low tensile strength failed the overpressure test. The same filter had passed the heated air test, but that test left it so structurally weak that it was prone to blow-out. We recommend that DOE initiate a filter qualification program to prevent this occurrence

  18. Replacement of HEPA Filters at the LANL CMR Facility: Risks Reduced by Comprehensive Waste Characterization

    International Nuclear Information System (INIS)

    Corpion, J.; Barr, A.; Martinez, P.; Bader, M.

    2002-01-01

In March 2001, the Los Alamos National Laboratory (LANL) completed the replacement of 720 radioactively contaminated HEPA filters for $5.7M. This project was completed five months ahead of schedule and $6.0M under budget with no worker injuries or contaminations. Numerous health and safety, environmental, and waste disposal problems were overcome, including having to perform work in a radioactively contaminated work environment that was also contaminated with perchlorates (a potential explosive). High waste disposal costs were also an issue. A project risk analysis and government cost estimate determined that the cost of performing the work would be $11.8M. To reduce risk, a $1.2M comprehensive condition assessment was performed to determine the degree of toxic and radioactive contamination trapped on the HEPA filters, and to determine whether explosive concentrations of perchlorates were present. Workers from LANL and personnel from Waldheim International of Knoxville, TN, collected hundreds of samples wearing personal protective gear against radioactive, toxic, and explosive hazards. LANL also funded research at the New Mexico Institute of Mining and Technology to determine the explosivity of perchlorates. The data acquired from the condition assessment showed that toxic metals, toxic organic compounds, and explosive concentrations of perchlorates were absent. The data also showed that the extent of actinide metal contamination was less than expected, reducing the potential for transuranic waste generation by 50%. Consequently, $4.2M in cost savings and $1.8M in risk reduction were realized through increased worker productivity and waste segregation

  19. In-situ continuous scanning high efficiency particulate air (HEPA) filter monitoring system

    International Nuclear Information System (INIS)

    Kirchner, K.N.; Johnson, C.M.; Lucerna, J.J.; Barnett, R.L.

    1985-01-01

    The testing and replacement of HEPA filters, which are widely used in the nuclear industry to purify process air before it is ventilated to the atmosphere, is a costly and labor-intensive undertaking. Current methods of testing filter performance, such as differential pressure measurement and scanning air monitoring, allow for determination of overall filter performance but preclude detection of symptoms of incipient filter failure, such as small holes in the filters themselves. Using current technology, a continuous in-situ monitoring system has been designed that provides three major improvements over current methods of filter testing and replacement. This system (1) realizes a cost savings by reducing the number of intact filters that are currently being replaced unnecessarily, (2) provides a more accurate and quantitative measurement of filter performance than is currently achieved with existing testing methods, and (3) reduces personnel exposure to a radioactive environment by automatically performing most testing operations. The operation and performance of the HEPA filter monitoring system are discussed

  20. High-efficiency particulate air (HEPA) filter performance following service and radiation exposure

    International Nuclear Information System (INIS)

    Jones, L.R.

    1975-01-01

    Small HEPA filters were exposed to a ⁶⁰Co source at a dose rate of 3 × 10⁷ rads per hour and then exposed to steam-air mixtures at several times filter design flow, followed by extended exposure to steam and air at reduced flow. Additional filters were exposed to air flow in a reactor confinement system and then similarly tested with steam-air mixture flows. The test data and calculated effects of filter pluggage with moisture on confinement system performance following potential reactor accidents are described. Gamma radiation exposure impaired the performance of new filters only slightly and temporarily improved the performance of service-aged filters. Normal confinement system service significantly impaired filter performance, although not sufficiently to prevent adequate performance of the SRP confinement system following an unlikely reactor accident. Calculations based on measured filter pluggage indicate that during an accident air flow could be reduced by approximately 50 percent with service-degraded HEPA filters present, or approximately 10 percent with new filters damaged by the radiation exposure. (U.S.)
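    The roughly 50 percent flow reduction quoted above can be illustrated with a toy model. A minimal sketch, assuming laminar (Darcy-type) flow through the filter media so that flow is proportional to pressure drop divided by flow resistance, and assuming the fan holds the available pressure drop roughly constant; the resistance values below are hypothetical illustrations, not measurements from the study:

```python
# Sketch: flow reduction from moisture pluggage under a constant-pressure
# assumption, where Q ~ dP / R and the fan's dP stays roughly fixed.
# Resistance values are hypothetical, for illustration only.

def flow_fraction(r_clean: float, r_plugged: float) -> float:
    """Fraction of design flow remaining when resistance rises r_clean -> r_plugged."""
    return r_clean / r_plugged

# A service-degraded filter whose pluggage doubles its resistance loses
# about half its design flow (the ~50 percent figure quoted above):
print(flow_fraction(1.0, 2.0))  # 0.5

# A radiation-damaged new filter with ~10 percent flow loss corresponds to
# a much smaller resistance increase:
print(round(flow_fraction(1.0, 1.0 / 0.9), 2))  # 0.9
```

Under this simple model, the measured pluggage translates directly into a resistance ratio, which is how a flow-reduction estimate like the one above can be obtained.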

  1. Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Elliott, J.; Wilson, K. [Lawrence Livermore National Laboratory, CA (United States)]

    1995-02-01

    The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data are available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine whether the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed that 8.0% and 4.1% of all filters, respectively, developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3% ± 1.7%. The increase in filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system.
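    The significance comparison above can be reproduced with a quick calculation. A minimal sketch, treating the 1980-1993 yearly leak percentages as roughly normal with the quoted mean of 3.3% and standard deviation of 1.7% (an assumption for illustration; the report does not state which statistical test it used):

```python
# Sketch: standard scores for the post-earthquake leak percentages against
# the 1980-1993 record (mean 3.3 %, s.d. 1.7 %, as quoted above).

def z_score(observed_pct: float, mean: float = 3.3, sd: float = 1.7) -> float:
    """How many standard deviations a year's leak rate sits from the long-run mean."""
    return (observed_pct - mean) / sd

print(round(z_score(8.0), 2))  # 2.76 -> well beyond 2 s.d.: significant (1980)
print(round(z_score(4.1), 2))  # 0.47 -> within normal variation (1989)
```

This matches the abstract's conclusion: the 1980 increase stands out against year-to-year variation, while the 1989 figure does not.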

  2. Real world industrial solutions to cost and waste volume reduction using metallic HEPA/THE filtration together with an examination of effective HEPA Pre-Filtration Preventing the Blinding Solids from reaching the HEPA/THE filters and recovering the blinding solids for disposal, reducing both waste volume and cost

    International Nuclear Information System (INIS)

    Chadwick, Ch.

    2008-01-01

    The disposal costs of contaminated HEPA and THE filter elements have proved to be disproportionately high compared with the cost of the elements themselves. If HEPA filters could be cleaned to a condition where they could be re-used, or decontaminated to the extent that they could be stored as a lower-cost wasteform, or if HEPA/THE filter elements were available without any organic content likely to give rise to flammable or explosive decomposition gases during long-term storage, the costs and monitoring necessary in storage would also be reduced. Using current state-of-the-art metallic filter media, it is possible to provide robust, completely inorganic, cleanable HEPA/THE filter elements to meet any duty already met by traditional glass-fibre HEPA/THE elements, within the same space limitations and with equivalent pressure loss. Additionally, traditional HEPA filter systems have limitations that often prevent them from solving many of the filtration problems in the nuclear industry. The paper will address several of these matters of concern by considering the use of metallic filter media to solve HEPA filtration problems ranging from the long-term storage of transuranic waste at the WIPP site, spent and damaged fuel assemblies, glove box ventilation and tank venting, to the venting of fumes at elevated temperatures from incinerators, vitrification processes, and conversion and sintering furnaces, as well as downstream of iodine absorbers in gas-cooled reactors in the UK. The paper reviews the technology, development, performance characteristics, filtration efficiency, flow/differential pressure characteristics, cleanability and cost of sintered metal fiber in comparison with traditional resin-bonded glass fiber filter media and sintered metal powder filter media. Examples of typical filter element and system configurations and applications will be presented. In addition, the paper will also address the economic case for installing self cleaning pre

  3. Investigations into the penetration and pressure drop of HEPA filter media during loading with submicron particle aerosols at high concentrations

    International Nuclear Information System (INIS)

    Leibold, H; Wilhelm, J.G.

    1991-01-01

    High Efficiency Particulate Air (HEPA) filters are typically employed for particle removal and retention within the air cleaning systems of clean rooms in the pharmaceutical, nuclear and semiconductor industries, at dust concentrations of a few μg/m³. Their extremely high removal efficiencies for submicron particles make them attractive candidates for complying with increasingly lower emission limits for industrial processes that involve dust concentrations of up to several g/m³. Cost-effective operation under such conditions requires the filter units to be recleanable. The recleanability of HEPA filter media depends not only on the operating conditions during the cleaning process but also on the filtration conditions during particle loading. The structure and location of the particles captured by the glass fiber matrix greatly affect the degree to which they can subsequently be dislodged and removed from the filter medium. Changes in filtration efficiency with service time for various particle diameters in the critical submicron size range, as well as the effects of filtration velocity on the increase in pressure drop, are important criteria with regard to recleaning HEPA filter units. Of special significance for the recleanability of HEPA filter media is knowledge of how operating conditions affect dust cake formation. (author)
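    The velocity effect on pressure-drop rise noted above can be sketched with the common cake-filtration approximation ΔP_cake = K₂·c·v²·t; the specific cake resistance K₂ and inlet dust concentration c below are hypothetical round numbers, not values from the study:

```python
# Sketch: why filtration velocity matters during loading, under the common
# cake-filtration form dP_cake = K2 * c * v**2 * t. K2 (specific cake
# resistance) and c (dust concentration) are hypothetical placeholders.

def cake_rise(t_s: float, v_m_per_s: float, k2: float = 5e9, c: float = 1e-3) -> float:
    """Pressure-drop rise (Pa) contributed by the accumulating dust cake alone."""
    return k2 * c * v_m_per_s**2 * t_s

# Doubling the filtration velocity quadruples the rate of pressure-drop rise,
# since both the cake deposition rate and the drag per unit cake scale with v:
ratio = cake_rise(3600, 0.025) / cake_rise(3600, 0.0125)
print(round(ratio, 1))  # 4.0
```

This quadratic dependence is one reason the abstract singles out filtration velocity as a key parameter for recleanable operation.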

  4. Evaluation of Alternative Control for Prevention and or Mitigation of HEPA Filter Failure Accidents at Tank Farm Facilities

    International Nuclear Information System (INIS)

    GUSTAVSON, R.D.

    2000-01-01

    This study evaluates the adequacy and benefit of using HEPA filter differential pressure limiting setpoints to initiate exhauster shutdown as an alternative safety control for postulated accidents that might result in filtration failure and a subsequent unfiltered release from Tank Farm primary tank ventilators

  5. Comparison of Emery 3004 and 3006 characteristics with DOP for possible use in HEPA filter leak tests

    Energy Technology Data Exchange (ETDEWEB)

    Kovach, B.J.; Banks, E.M.; Kovacs, G. [Nuclear Consulting Services, Inc., Columbus, OH (United States)]

    1995-02-01

    The particle size distribution, concentration, liquid-to-aerosol conversion rate and ignition properties of DOP, Emery 3004 and Emery 3006 aerosols generated by the NUCON Aerosol Generators Models SN-10 and DG-F were obtained. Results demonstrate that the Emery products are acceptable replacements for DOP in performing leak testing of HEPA filters.

  6. PPAR agonists reduce steatosis in oleic acid-overloaded HepaRG cells

    International Nuclear Information System (INIS)

    Rogue, Alexandra; Anthérieu, Sébastien; Vluggens, Aurore; Umbdenstock, Thierry; Claude, Nancy; Moureyre-Spire, Catherine de la; Weaver, Richard J.; Guillouzo, André

    2014-01-01

    Although non-alcoholic fatty liver disease (NAFLD) is currently the most common form of chronic liver disease, there is no pharmacological agent approved for its treatment. Since peroxisome proliferator-activated receptors (PPARs) are closely associated with hepatic lipid metabolism, they seem to play important roles in NAFLD. However, the effects of PPAR agonists on steatosis, a common pathology associated with NAFLD, remain largely controversial. In this study, the effects of various PPAR agonists, i.e. fenofibrate, bezafibrate, troglitazone, rosiglitazone, muraglitazar and tesaglitazar, on oleic acid-induced steatotic HepaRG cells were investigated after a single 24-hour or 2-week repeat treatment. Lipid vesicles stained by Oil Red O and triglyceride accumulation caused by oleic acid overload were decreased by up to 50%, while fatty acid oxidation was induced after 2-week co-treatment with PPAR agonists. The greatest effects on reduction of steatosis were obtained with the dual PPARα/γ agonist muraglitazar. Such improvement of steatosis was associated with up-regulation of genes related to fatty acid oxidation activity and down-regulation of many genes involved in lipogenesis. Moreover, modulation of the expression of some nuclear receptor genes, such as FXR, LXRα and CAR, which are potent actors in the control of lipogenesis, was observed and might explain the repression of de novo lipogenesis. Conclusion: Altogether, our in vitro data on steatotic HepaRG cells treated with PPAR agonists correlated well with clinical investigations, bringing a proof of concept that drug-induced reversal of steatosis in humans can be evaluated in vitro before conducting long-term and costly in vivo studies in animals and patients. - Highlights: • There is no pharmacological agent approved for the treatment of NAFLD. • This study demonstrates that PPAR agonists can reduce fatty acid-induced steatosis. • Some nuclear receptors appear to be potent actors in the control

  7. PPAR agonists reduce steatosis in oleic acid-overloaded HepaRG cells

    Energy Technology Data Exchange (ETDEWEB)

    Rogue, Alexandra [Inserm UMR 991, 35043 Rennes Cedex (France); Université de Rennes 1, Faculté des Sciences Pharmaceutiques et Biologiques, 35043 Rennes Cedex (France); Biologie Servier, Gidy (France)]; Anthérieu, Sébastien; Vluggens, Aurore [Inserm UMR 991, 35043 Rennes Cedex (France); Université de Rennes 1, Faculté des Sciences Pharmaceutiques et Biologiques, 35043 Rennes Cedex (France)]; Umbdenstock, Thierry [Technologie Servier, Orléans (France)]; Claude, Nancy [Institut de Recherches Servier, Courbevoie (France)]; Moureyre-Spire, Catherine de la; Weaver, Richard J. [Biologie Servier, Gidy (France)]; Guillouzo, André, E-mail: Andre.Guillouzo@univ-rennes1.fr [Inserm UMR 991, 35043 Rennes Cedex (France); Université de Rennes 1, Faculté des Sciences Pharmaceutiques et Biologiques, 35043 Rennes Cedex (France)]

    2014-04-01

    Although non-alcoholic fatty liver disease (NAFLD) is currently the most common form of chronic liver disease, there is no pharmacological agent approved for its treatment. Since peroxisome proliferator-activated receptors (PPARs) are closely associated with hepatic lipid metabolism, they seem to play important roles in NAFLD. However, the effects of PPAR agonists on steatosis, a common pathology associated with NAFLD, remain largely controversial. In this study, the effects of various PPAR agonists, i.e. fenofibrate, bezafibrate, troglitazone, rosiglitazone, muraglitazar and tesaglitazar, on oleic acid-induced steatotic HepaRG cells were investigated after a single 24-hour or 2-week repeat treatment. Lipid vesicles stained by Oil Red O and triglyceride accumulation caused by oleic acid overload were decreased by up to 50%, while fatty acid oxidation was induced after 2-week co-treatment with PPAR agonists. The greatest effects on reduction of steatosis were obtained with the dual PPARα/γ agonist muraglitazar. Such improvement of steatosis was associated with up-regulation of genes related to fatty acid oxidation activity and down-regulation of many genes involved in lipogenesis. Moreover, modulation of the expression of some nuclear receptor genes, such as FXR, LXRα and CAR, which are potent actors in the control of lipogenesis, was observed and might explain the repression of de novo lipogenesis. Conclusion: Altogether, our in vitro data on steatotic HepaRG cells treated with PPAR agonists correlated well with clinical investigations, bringing a proof of concept that drug-induced reversal of steatosis in humans can be evaluated in vitro before conducting long-term and costly in vivo studies in animals and patients. - Highlights: • There is no pharmacological agent approved for the treatment of NAFLD. • This study demonstrates that PPAR agonists can reduce fatty acid-induced steatosis. • Some nuclear receptors appear to be potent actors in the control

  8. U-235 Holdup Measurements in the 321-M Lathe HEPA Banks

    International Nuclear Information System (INIS)

    Salaymeh, S.R.

    2002-01-01

    The Analytical Development Section of the Savannah River Technology Center (SRTC) was requested by the Facilities Decommissioning Division (FDD) to determine the holdup of enriched uranium in the 321-M facility as part of an overall deactivation project of the facility. The results of the holdup assays are essential for determining compliance with the Waste Acceptance Criteria and Material Control and Accountability requirements, and for meeting criticality safety controls. This report covers holdup measurements of uranium residue in six high efficiency particulate air (HEPA) filter banks of the A-lathe and B-lathe exhaust systems of the 321-M facility. It discusses the non-destructive assay measurements, assumptions, calculations, and results of the uranium holdup in these six items

  9. Performance testing of HEPA filters: Progress towards a European standard procedure

    Energy Technology Data Exchange (ETDEWEB)

    Dyment, J.

    1997-08-01

    Proposals for a future European testing procedure for "High Efficiency Particulate Air Filters (HEPA and ULPA)" are being developed by CEN (Comité Européen de Normalisation). The new standard will be given the status of a national standard in participating countries, conflicting national standards being withdrawn. The standard will comprise five parts: Part 1 covers the grouping and classification of HEPA and ULPA filters according to their efficiency, fundamental principles of testing, marking, etc. Part 2 will cover aerosol production, measurement principles, counting equipment and statistics. Parts 3-5 will cover testing of flat sheet media, leak testing of filter elements, and efficiency testing of filter elements, respectively. The efficiency test methods allow the use of either homogeneous monodisperse or polydisperse aerosols for the determination of particulate filtration efficiencies as a function of particle size. The particle size at which maximum penetration occurs is first determined in flat sheet media tests; tests on filter elements (constructed using the same filter medium) may be carried out using either a homogeneous monodisperse aerosol of the size at which maximum penetration occurs (MPPS) or a polydisperse aerosol whose median size is close to the MPPS. Tests with monodisperse aerosols may be conducted using condensation nucleus counting equipment; tests using polydisperse test aerosols require the use of optical sizing particle counters. When determining the efficiency of filter elements, the downstream aerosol concentrations may be determined from air samples obtained using either an overall method (single-point sampling after mixing) or a scan method. The scan method also allows "local" efficiency values to be determined. 1 ref., 1 fig., 1 tab.
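    The flat-sheet step described above amounts to locating the maximum of a penetration-versus-size curve. A minimal sketch of that step; the size/penetration pairs below are invented for illustration and are not data from the standard:

```python
# Sketch: locating the most penetrating particle size (MPPS) from flat-sheet
# media measurements, as the standard's procedure does before element tests.
# The diameter -> penetration pairs are hypothetical illustration values.

data = {  # particle diameter (um) -> fractional penetration
    0.05: 2e-5,
    0.10: 8e-5,
    0.15: 1.5e-4,
    0.20: 1.1e-4,
    0.30: 5e-5,
}

mpps = max(data, key=data.get)  # the size with maximum penetration
print(mpps)  # 0.15
print(f"efficiency at MPPS: {100 * (1 - data[mpps]):.3f} %")
```

The element test then uses either a monodisperse aerosol at this size or a polydisperse aerosol whose median is close to it, as the abstract describes.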

  10. Institute for Clean Energy Technology Mississippi State University NSR&D Aged HEPA Filter Study Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Jacks, Robert [Mississippi State Univ., Mississippi State, MS (United States); Stormo, Julie [Mississippi State Univ., Mississippi State, MS (United States); Rose, Coralie [Mississippi State Univ., Mississippi State, MS (United States); Rickert, Jaime [Mississippi State Univ., Mississippi State, MS (United States); Waggoner, Charles A. [Mississippi State Univ., Mississippi State, MS (United States)]

    2017-03-22

    Data have demonstrated that filter media lose tensile strength and the ability to resist the effects of moisture as a function of age. Testing of new and aged filters needs to be conducted to correlate the reduction of physical strength of HEPA media with the ability of filters to withstand upset conditions. Appendix C of the Nuclear Air Cleaning Handbook provides the basis for DOE's HEPA filter service life guidance. However, this appendix also points out the variability of the data, and it does not correlate the performance of aged filters with degradation of media due to age. Funding awarded by NSR&D to initiate full-scale testing of aged HEPA filters addresses the issue of correlating media degradation due to age with testing of new and aged HEPA filters under a generic set of design basis event conditions. This funding has accelerated the process of describing this study via: (1) establishment of a Technical Working Group of all stakeholders, (2) development and approval of a test plan, (3) development of testing and autopsy procedures, (4) acquiring an initial set of aged filters, (5) testing the initial set of aged filters, and (6) developing the filter test report content for each filter tested. This funding was very timely and has moved the project forward by at least three years. Activities have been coordinated with testing conducted under DOE-EM funding for evaluating performance envelopes for AG-1 Section FC Separator and Separatorless filters. This coordination allows correlation of results from the NSR&D Aged Filter Study with results from testing new filters of the Separator and Separatorless Filter Study. DOE-EM efforts have identified approximately 100 more filters of various ages that have been stored under Level B conditions. NSR&D-funded work allows time for rigorous review among subject matter experts before moving forward with development of the testing matrix that will be used for additional filters. The NSR&D data sets are extremely valuable in as much

  11. Analysis of fire and smoke threat to off-gas HEPA filters in a transuranium processing plant

    International Nuclear Information System (INIS)

    Alvares, N.J.

    1988-01-01

    The author performed an analysis of fire risk to the high-efficiency particulate air (HEPA) filters that provide ventilation containment for a transuranium processing plant at the Oak Ridge National Laboratory. A fire-safety survey by an independent fire-protection consulting company had identified the HEPA filters in the facility's off-gas containment ventilation system as being at risk from fire effects. The ventilation networks and flow dynamics were studied independently, and typical fuel loads were analyzed. It was found that virtually no condition for fire initiation exists and that, even if a fire started, its consequences would be minimal as a result of standard shut-down procedures. Moreover, the installed fire-protection system would limit any fire and thus would further reduce smoke or heat exposure to the ventilation components. 4 references, 4 figures, 5 tables

  12. Submicron and Nanoparticulate Matter Removal by HEPA-Rated Media Filters and Packed Beds of Granular Materials

    Science.gov (United States)

    Perry, J. L.; Agui, J. H.; Vijayakimar, R.

    2016-01-01

    Contaminants generated aboard crewed spacecraft by diverse sources consist of both gaseous chemical contaminants and particulate matter. HEPA media filters and packed beds of granular material such as activated carbon, both commonly employed for cabin atmosphere purification purposes, have efficacy for removing nanoparticulate contaminants from the cabin atmosphere. The phenomena associated with particulate matter removal by HEPA media filters and packed beds of granular material are reviewed relative to their efficacy for removing fine (less than 2.5 micrometers) and ultrafine (less than 0.01 micrometers) particulate matter. Considerations are discussed for using these methods in an appropriate configuration to provide the most effective performance for a broad range of particle sizes, including nanoparticulates.

  13. Performance of 1000- and 1800- cfm HEPA filters on long exposure to low atmospheric dust loadings, II

    International Nuclear Information System (INIS)

    First, M.W.; Rudnick, S.N.

    1981-01-01

    Comparative tests were made to evaluate the performance characteristics of American- and European-design HEPA filters when exposed, for a number of years, to aerosols characteristic of nuclear and biohazard service. Although some of the European-design filters were operated at their rated airflow capacity of 1800 cfm, some were downrated to 1000 cfm to determine whether their service life could be more than tripled compared to conventional 1000-cfm American-design HEPA filters, as filter theory predicts. Initial results indicate, however, that for the ambient aerosol used in this study, a European-design filter has a service life only 1.6 times greater than an American-design filter when both operate at 1000 cfm. Further tests are in progress to verify this result
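    One common way to arrive at the "more than tripled" prediction mentioned above is the cake-filtration approximation in which, at fixed volumetric flow, the pressure-drop rise rate scales as 1/(media area)², since face velocity and areal dust loading each scale as 1/area. A sketch under that assumption; the 1.8x area ratio is inferred from the 1800 vs 1000 cfm ratings and is an assumption, not a figure from the study:

```python
# Sketch: predicted service-life gain from downrating a higher-capacity
# filter, assuming life ~ (media area)**2 at fixed volumetric flow
# (a common cake-filtration approximation, not the study's own model).

def predicted_life_ratio(area_ratio: float) -> float:
    """Service-life ratio predicted by the life ~ area**2 approximation."""
    return area_ratio ** 2

# An assumed 1.8x media-area ratio (1800 cfm vs 1000 cfm rating):
print(round(predicted_life_ratio(1.8), 2))  # 3.24 -> "more than tripled"

# The study measured only ~1.6x, so the simple model overstates the gain
# for the ambient aerosol used.
```

The gap between the predicted 3.2x and the measured 1.6x is precisely what the abstract flags for further testing.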

  14. Study on DOP substitutes for leaking rate testing of HEPA filter used in nuclear air cleaning systems

    International Nuclear Information System (INIS)

    Qiu Dangui; Zhang Jirong; Hou Jianrong; Qiao Taifei; Shen Dapeng; Shi Yingxia

    2012-01-01

    Based on an extensive survey of the available literature on HEPA filter testing, PEG400, Shell Ondina oil 15 and P.a. were chosen as candidate DOP substitutes, and a series of tests was conducted on their aerosol conversion rate, particle size distribution, DOP detector response and leak rate in HEPA filters. Considering technical properties, safety performance and economy, the domestically produced P.a. was finally selected as the best DOP substitute among the three. (authors)

  15. Extraction of semivolatile organic compounds from high-efficiency particulate air (HEPA) filters by supercritical carbon dioxide

    International Nuclear Information System (INIS)

    Schilling, J.B.

    1997-09-01

    Supercritical fluid extraction (SFE) using unmodified carbon dioxide has been explored as an alternative method for the extraction of semivolatile organic compounds from high-efficiency particulate air (HEPA) filters. HEPA filters provide the final stage of containment on many exhaust systems in US Department of Energy (DOE) facilities by preventing the escape of chemical and radioactive materials entrained in the exhausted air. The efficiency of the filters is tested by the manufacturer and DOE using dioctylphthalate (DOP), a substance regulated by the US Environmental Protection Agency under the Resource Conservation and Recovery Act. Therefore, the filters must be analyzed for semivolatile organics before disposal. Ninety-eight acid, base, and neutral semivolatile organics were spiked onto blank HEPA material and extracted using SFE, Soxhlet, automated Soxhlet, and sonication techniques. The SFE conditions were optimized using a Dionex SFE-703 instrument. Average recoveries for the 98 semivolatile compounds are 82.7% for Soxhlet, 74.0% for sonication, 70.2% for SFE, and 62.9% for Soxtec. Supercritical fluid extraction reduces the extraction solvent volume to 10-15 mL, a factor of 20-30 less than Soxhlet and more than 5 times less than Soxtec and sonication. Extraction times of 30-45 min are used, compared to 16-18 h for Soxhlet extraction

  16. A device for uranium series leaching from glass fiber in HEPA filter

    International Nuclear Information System (INIS)

    Gye-Nam Kim; Hye-Min Park; Wang-Kyu Choi; Jei-Kwon Moon

    2012-01-01

    For the disposal of a high efficiency particulate air (HEPA) glass filter into the environment, the glass fiber should be leached to lower its radioactive concentration to the clearance level. To derive an optimum method for the removal of the uranium series from HEPA glass fiber, five methods were applied in this study: chemical leaching by a 4.0 M HNO₃-0.1 M Ce(IV) solution, chemical leaching by a 5 wt% NaOH solution, chemical leaching by a 0.5 M H₂O₂-1.0 M Na₂CO₃ solution, consecutive chemical leaching by a 4.0 M HNO₃ solution, and repeated chemical leaching by a 4.0 M HNO₃ solution. The residual radioactivity concentrations of ²³⁸U, ²³⁵U, ²²⁶Ra, and ²³⁴Th in the glass after leaching for 5 h by the 4.0 M HNO₃-0.1 M Ce(IV) solution were 2.1, 0.3, 1.1, and 1.2 Bq/g; after leaching for 36 h by the 4.0 M HNO₃-0.1 M Ce(IV) solution, 76.9, 3.4, 63.7, and 71.9 Bq/g; after leaching for 8 h by the 0.5 M H₂O₂-1.0 M Na₂CO₃ solution, 8.9, 0.0, 1.91, and 6.4 Bq/g; after consecutive leaching for 8 h by the 4.0 M HNO₃ solution, 2.08, 0.12, 1.55, and 2.0 Bq/g; and after three repetitions of leaching for 3 h by the 4.0 M HNO₃ solution, 0.02, 0.02, 0.29, and 0.26 Bq/g. Meanwhile, the removal efficiencies of ²³⁸U, ²³⁵U, ²²⁶Ra, and ²³⁴Th from the waste solution after its precipitation-filtration treatment with NaOH and alum, for reuse of the 4.0 M HNO₃ waste solution, were 100, 100, 93.3, and 100%. (author)
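    The end use of these residual concentrations is a comparison against a clearance level. A minimal sketch using the best result quoted above (three repetitions of leaching for 3 h by the 4.0 M HNO3 solution); the 0.4 Bq/g threshold is a placeholder for illustration, not the actual regulatory clearance value:

```python
# Sketch: checking leached residuals against a clearance level.
# The 0.4 Bq/g threshold is a HYPOTHETICAL placeholder, not the real
# regulatory clearance value for these nuclides.

residual_bq_per_g = {  # best leach result quoted in the abstract above
    "U-238": 0.02,
    "U-235": 0.02,
    "Ra-226": 0.29,
    "Th-234": 0.26,
}
CLEARANCE_BQ_PER_G = 0.4  # hypothetical threshold

clearable = all(a <= CLEARANCE_BQ_PER_G for a in residual_bq_per_g.values())
print(clearable)  # True
```

Each candidate leaching method would be screened this way, nuclide by nuclide, against the applicable clearance levels.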

  17. Survey of HEPA filter applications and experience at Department of Energy sites

    International Nuclear Information System (INIS)

    Carbaugh, E.H.

    1981-11-01

    Results indicated that approximately 58% of the filters surveyed were changed out in the 1977 to 1979 study period, and some 18% of all filters were changed out more than once. Most changeouts (60%) were due to a high pressure drop across the filter, indicative of filter plugging. The next most frequent reasons for changeout were leak test failure (15%) and preventive maintenance service life limits (12%). The average filter service life was calculated to be 3.0 years with a 2.0-year standard deviation. The labor required for filter changeout was calculated as 1.5 man-hours per filter changed. Filter failures occurred in approximately 12% of all installed filters. Most failures (60%) occurred for unknown reasons, and handling or installation damage accounted for an additional 20% of all failures. Media ruptures, filter frame failures and seal failures occurred with approximately equal frequency, at 5 to 6% each. Subjective responses to the questionnaire indicate the problem areas are: the need for improved acid- and moisture-resistant filters; filters more readily disposable as radioactive waste; improved personnel training in filter handling and installation; and the need for pretreatment of air prior to HEPA filtration
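    The survey figures can be rolled into a simple maintenance-planning estimate. A sketch assuming a hypothetical installed base of 1000 filters; the 3.0-year mean service life and 1.5 man-hours per changeout are the survey figures quoted above:

```python
# Sketch: per-facility planning estimate from the survey figures.
# The 1000-filter installed base is a hypothetical example size.

n_filters = 1000          # hypothetical installed base
mean_life_yr = 3.0        # average service life (from the survey)
labor_per_change = 1.5    # man-hours per filter changed (from the survey)

changeouts_per_yr = n_filters / mean_life_yr
labor_per_yr = changeouts_per_yr * labor_per_change

print(round(changeouts_per_yr))  # 333 changeouts per year
print(round(labor_per_yr))       # 500 man-hours per year
```

Estimates like this are the practical payoff of the survey's service-life and labor statistics.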

  18. The histone deacetylase inhibiting drug Entinostat induces lipid accumulation in differentiated HepaRG cells

    Science.gov (United States)

    Nunn, Abigail D. G.; Scopigno, Tullio; Pediconi, Natalia; Levrero, Massimo; Hagman, Henning; Kiskis, Juris; Enejder, Annika

    2016-06-01

    Dietary overload of toxic, free metabolic intermediates leads to disrupted insulin signalling and fatty liver disease. However, it was recently reported that this pathway might not be universal: depletion of histone deacetylase (HDAC) enhances insulin sensitivity alongside hepatic lipid accumulation in mice, but the mechanistic role of microscopic lipid structure in this effect remains unclear. Here we study the effect of Entinostat, a synthetic HDAC inhibitor undergoing clinical trials, on hepatic lipid metabolism in the paradigmatic HepaRG liver cell line. Specifically, we statistically quantify lipid droplet morphology at the single-cell level utilizing label-free coherent anti-Stokes Raman scattering microscopy, supported by gene expression analysis. We observe Entinostat efficiently rerouting carbohydrates and free fatty acids into lipid droplets, upregulating the lipid coat protein gene Plin4, and relocating droplets nearer to the nucleus. Our results demonstrate the power of Entinostat to promote lipid synthesis and storage, allowing reduced systemic sugar levels and sequestration of toxic metabolites within protected protein-coated droplets, suggesting a potential therapeutic strategy for diseases such as diabetes and metabolic syndrome.

  19. Aging assessment of nuclear air-treatment system HEPA filters and adsorbers

    International Nuclear Information System (INIS)

    Winegardner, W.K.

    1993-08-01

    A Phase I aging assessment of high-efficiency particulate air (HEPA) filters and activated carbon gas adsorption units (adsorbers) was performed by the Pacific Northwest Laboratory (PNL) as part of the US Nuclear Regulatory Commission's (NRC) Nuclear Plant Aging Research (NPAR) Program. Information concerning design features; failure experience; aging mechanisms, effects, and stressors; and surveillance and monitoring methods for these key air-treatment system components was compiled. Over 1100 failures, or 12 percent of the filter installations, were reported as part of a Department of Energy (DOE) survey. Investigators from other national laboratories have suggested that aging effects could have contributed to over 80 percent of these failures. Tensile strength tests on aged filter media specimens indicated a decrease in strength. Filter aging mechanisms range from those associated with particle loading to reactions that alter properties of sealants and gaskets. Low radioiodine decontamination factors associated with the Three Mile Island (TMI) accident were attributed to the premature aging of the carbon in the adsorbers. Mechanisms that can lead to impaired adsorber performance include oxidation as well as the loss of potentially available active sites as a result of the adsorption of pollutants. Stressors include heat, moisture, radiation, and airborne particles and contaminants

  20. Review of in-place HEPA filter testing at several DOE facilities

    Energy Technology Data Exchange (ETDEWEB)

    Mokler, B.V.; Scripsick, R.C. [Los Alamos National Laboratory, NM (United States)]

    1995-02-01

    The Office of Nuclear Energy Self-Assessment recently sponsored reviews of HEPA filter systems at several DOE facilities. One aspect emphasized in these reviews was in-place filter testing practices. Although in-place testing was generally performed as required in facility specifications, we noted several areas in which improvements were possible. Examples of some common problems and approaches to their solution will be presented. Areas of suggested improvement include: (1) ensuring the validity of test results; (2) recognizing and quantifying the uncertainty in penetration measurements; (3) expanding the analysis and reporting of test results to provide more than pass/fail information; (4) addressing the special problems of multiple stage systems; and (5) increasing the technical support and training provided in-place testing personnel. Ensuring the validity of test results, for example, requires more careful attention to the operation of test equipment, checking test measurements and system operating parameters for internal consistency, and more attention to documentation of system geometry and operation. Some issues will require additional study before the results can be incorporated into decision making on filter bank testing requirements and performance specifications.
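    One of the flagged areas, quantifying the uncertainty in penetration measurements, reduces to simple error propagation once penetration is taken as the downstream/upstream concentration ratio. A minimal sketch under that assumption (the concentrations and uncertainties are illustrative numbers, not data from the reviewed facilities):

```python
import math

# Hedged sketch: first-order uncertainty propagation for a penetration
# measurement taken as the downstream/upstream concentration ratio.
# All numbers below are illustrative, not facility data.
def penetration_uncertainty(c_up, u_up, c_down, u_down):
    """Penetration P = c_down/c_up, with the independent relative
    uncertainties combined in quadrature (first-order approximation)."""
    p = c_down / c_up
    rel = math.sqrt((u_up / c_up) ** 2 + (u_down / c_down) ** 2)
    return p, p * rel

# upstream 100 +/- 2 ug/m3, downstream 0.03 +/- 0.003 ug/m3
p, u = penetration_uncertainty(100.0, 2.0, 0.03, 0.003)
print(f"penetration = {p:.2e} +/- {u:.1e}")  # penetration = 3.00e-04 +/- 3.1e-05
```

    Reporting the penetration together with its uncertainty, rather than a bare pass/fail flag, is one way to address items (2) and (3) above.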

  2. Efficient transfection of Xenobiotic Responsive Element-biosensor plasmid using diether lipid and phosphatidylcholine liposomes in differentiated HepaRG cells.

    Science.gov (United States)

    Demazeau, Maxime; Quesnot, Nicolas; Ripoche, Nicolas; Rauch, Claudine; Jeftić, Jelena; Morel, Fabrice; Gauffre, Fabienne; Benvegnu, Thierry; Loyer, Pascal

    2017-05-30

    In this study, we evaluated cationic liposomes prepared from diether-NH2 and egg phosphatidylcholine (EPC) for in vitro gene delivery. The impact of the lipid composition, i.e. the EPC and diether-NH2 molar ratio, on in vitro transfection efficiency and cytotoxicity was investigated using human HEK293T and hepatoma HepaRG cells, known to be permissive and poorly permissive for liposome-mediated gene transfer, respectively. Here, we report that EPC/diether-NH2-based liposomes enabled very efficient transfection with low cytotoxicity compared to commercial transfection reagents in both HEK293T and proliferating progenitor HepaRG cells. Taking advantage of these non-toxic EPC/diether-NH2-based liposomes, we developed a method to efficiently transfect differentiated hepatocyte-like HepaRG cells with a biosensor plasmid containing a Xenobiotic Responsive Element and a minimal promoter driving the transcription of the luciferase reporter gene. We demonstrated that luciferase activity was induced by a canonical inducer of cytochrome P450 genes, benzo[a]pyrene, and by two environmental contaminants, fluoranthene, a polycyclic aromatic hydrocarbon, and endosulfan, an organochlorine insecticide, both known to induce toxicity and genotoxicity in differentiated HepaRG cells. In conclusion, we established a new efficient lipofection-mediated gene transfer in hepatocyte-like HepaRG cells, opening new perspectives in drug evaluation relying on xenobiotic-inducible biosensor plasmids. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Resistance of HEPA filter separator materials to humid air--hydrogen fluoride--fluorine environments

    International Nuclear Information System (INIS)

    Weber, C.W.; Petit, G.S.; Woodfin, S.B.

    1977-01-01

    The U.S. Energy Research and Development Administration (ERDA) is interested in the development of a high-efficiency particulate air (HEPA) filter that is resistant to such corrosive reagents as hydrogen fluoride (HF) and fluorine (F2) in air environments of normal relative humidity (about 50% RH). Several types of separator materials are used in the fabrication of commercial filters. The basic types of separator materials are asbestos, Kraft paper, plastic, and aluminum. At the request of the ERDA Division of Operational Safety, the different types of separator materials have been evaluated for their resistance to corrosive attack by HF and F2. The separator materials were dynamically tested in the 4-stage multiunit tester located in the Oak Ridge Gaseous Diffusion Plant laboratories. This is the system previously used in the evaluation of the Herty Foundation filter paper samples. Concurrent with the testing of filter media for its resistance to HF and F2, another component of the completed filter, the separator, was tested. All samples were exposed to a constant air flow (50% RH) of 32 liters/min, at 100 °F, containing 900 ppm HF and 300 ppm F2. Exposure periods varied from 2 to 1000 h; however, the longer exposures were made only on the stronger candidates. Test results show the plastic and aluminum separator materials to be superior to the other types in resistance to HF and F2. The asbestos separators disintegrated after a relatively short exposure time; the Kraft paper types were the next weakest. The Clear Plastic S was the best performer of the plastics tested.

  4. Differential toxicity of heterocyclic aromatic amines and their mixture in metabolically competent HepaRG cells

    International Nuclear Information System (INIS)

    Dumont, Julie; Josse, Rozenn; Lambert, Carine; Antherieu, Sebastien; Le Hegarat, Ludovic; Aninat, Caroline; Robin, Marie-Anne; Guguen-Guillouzo, Christiane

    2010-01-01

    Human exposure to heterocyclic aromatic amines (HAA) usually occurs through mixtures rather than individual compounds. However, the toxic effects and related mechanisms of co-exposure to HAA in humans remain unknown. We compared the effects of two of the most common HAA, 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) and 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx), individually or in combination, in the metabolically competent human hepatoma HepaRG cells. Various endpoints were measured including cytotoxicity, apoptosis, oxidative stress and DNA damage by the comet assay. Moreover, the effects of PhIP and/or MeIQx on mRNA expression and activities of enzymes involved in their activation and detoxification pathways were evaluated. After a 24 h treatment, PhIP and MeIQx, individually and in combination, exerted differential effects on apoptosis, oxidative stress, DNA damage and cytochrome P450 (CYP) activities. Only PhIP induced DNA damage. It was also a stronger inducer of CYP1A1 and CYP1B1 expression and activity than MeIQx. In contrast, only MeIQx exposure resulted in a significant induction of CYP1A2 activity. The combination of PhIP with MeIQx induced an oxidative stress and showed synergistic effects on apoptosis. However, PhIP-induced genotoxicity was abolished by a co-exposure with MeIQx. Such an inhibitory effect could be explained by a significant decrease in CYP1A2 activity which is responsible for PhIP genotoxicity. Our findings highlight the need to investigate interactions between HAA when assessing risks for human health and provide new insights in the mechanisms of interaction between PhIP and MeIQx.

  5. The case for improved HEPA-filter mechanical performance standards revisited

    Energy Technology Data Exchange (ETDEWEB)

    Ricketts, C.I.; Smith, P.R. [New Mexico State Univ., Las Cruces, NM (United States)]

    1997-08-01

    Under benign operating conditions, High Efficiency Particulate Air (HEPA) filter units serve as reliable and relatively economical components in the air cleaning systems of nuclear facilities worldwide. Despite more than four decades of filter-unit evaluation and improvements, however, the material strength characteristics of the glass fiber filter medium continue to ultimately limit filter functional reliability. In worst-case scenarios involving fire suppression, loss-of-coolant accidents (LOCAs), or exposure to shock waves or tornado-induced flows, rupture of the filter medium of units meeting current qualification standards cannot be entirely ruled out. Even under so-called normal conditions of operation, instances of filter failure reported in the literature leave open questions of filter-unit reliability. Though development of filter units with improved burst strengths has been pursued outside the United States, support for such efforts in this country has been comparatively minimal, despite user requests for filters with greater moisture resistance, for example, and despite the fact that conventional filter designs result in not only the least robust component to be found in a nuclear air cleaning system, but also the one most sensitive to the adverse effects of conditions deviating from those of normal operation. Filter qualification-test specifications of current codes, standards, and regulatory guidelines in the United States are based primarily upon research performed in a 30-year period beginning in the 1950s. They do not seem to reflect the benefits of the more significant developments and understanding of filter failure modes and mechanisms achieved since that time. One overseas design, based on such knowledge, has proven reliability under adverse operating conditions involving combined and serial challenges. Its widespread use, however, has faltered on a lack of consensus in upgrading filter performance standards. 34 refs., 2 figs., 3 tabs.

  6. In Vivo Evaluation of a New Embolic Spherical Particle (HepaSphere) in a Kidney Animal Model

    International Nuclear Information System (INIS)

    Luis, Esther de; Bilbao, Jose I.; Ciercoles, Jose A. Garcia Jalon de; Martinez-Cuesta, Antonio; Martino Rodriguez, Alba de; Lozano, Maria D.

    2008-01-01

    HepaSphere is a new spherical embolic material developed in a dry state that absorbs fluids and adapts to the vessel wall, leaving no space between the particle and the arterial wall. The aim of this study was to elucidate the final in vivo size, deformation, final location, and main properties of the particles when reconstituted with two different contrast media (Iodixanol and Ioxaglate) in an animal model. Two sizes of 'dry-state' particles (50-100 and 150-200 μm) were reconstituted using both ionic and nonionic contrast media. The mixture was used to partly embolize both kidneys in an animal model (14 pigs). The animals were sacrificed 4 weeks after the procedure and the samples processed. The final size of the particles was 230.2 ± 62.5 μm for the 50- to 100-μm dry-state particles and 314.4 ± 71 μm for the 150- to 200-μm dry-state particles. When the contrast medium (ionic versus nonionic) used for the reconstitution was studied to compare (Student's t-test) the final size of the particles, no differences were found (p > 0.05). The mean in vivo deformation for HepaSphere was 17.1% ± 12.3%. No differences (p > 0.05) were found in the deformation of the particle regarding the dry-state size or the contrast medium (Mann-Whitney test). We conclude that HepaSphere is stable, occludes perfectly, and morphologically adapts to the vessel lumen of the arteries embolized. There is no recanalization of the arteries 4 weeks after embolization. Its final in vivo size is predictable and the particle has the same properties in terms of size and deformation with the two different contrast media (Iodixanol and Ioxaglate)
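    The size comparison reported above rests on a two-sample Student's t-test. A minimal sketch of that test with invented particle-size samples (the study's raw measurements are not reproduced here):

```python
import math

# Hedged sketch: pooled-variance two-sample Student's t-test, as used
# above to compare final particle sizes between contrast media.
# The size values are made-up small samples for illustration only.
def student_t(a, b):
    """Two-sample t statistic (equal-variance form) and degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)           # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (ma - mb) / se, na + nb - 2

ionic    = [230.0, 235.0, 225.0]   # final sizes, um (invented)
nonionic = [232.0, 228.0, 236.0]
t, dof = student_t(ionic, nonionic)
T_CRIT_4DF = 2.776                 # two-sided 5% critical value for 4 dof
print(round(t, 3), abs(t) < T_CRIT_4DF)  # -0.541 True
```

    With |t| below the critical value, the difference is not significant at the 5% level, mirroring the p > 0.05 conclusion in the abstract.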

  7. Dose- and time-dependent effects of phenobarbital on gene expression profiling in human hepatoma HepaRG cells

    International Nuclear Information System (INIS)

    Lambert, Carine B.; Spire, Catherine; Claude, Nancy; Guillouzo, Andre

    2009-01-01

    Phenobarbital (PB) induces or represses a wide spectrum of genes in rodent liver. Much less is known about its effects in human liver. We used pangenomic cDNA microarrays to analyze concentration- and time-dependent gene expression profile changes induced by PB in the well-differentiated human HepaRG cell line. Changes in gene expression profiles clustered at specific concentration ranges and treatment times. The number of correctly annotated genes significantly modulated by at least three different PB concentration ranges (spanning 0.5 to 3.2 mM) at 20 h exposure amounted to 77 and 128 genes (p ≤ 0.01) with 2- and 1.8-fold change filters, respectively. At low concentrations (0.5 and 1 mM), PB-responsive genes included the well-recognized CAR- and PXR-dependent responsive cytochromes P450 (CYP2B6, CYP3A4), sulfotransferase 2A1 and plasma transporters (ABCB1, ABCC2), as well as a number of genes critically involved in various metabolic pathways, including lipid (CYP4A11, CYP4F3), vitamin D (CYP24A1) and bile acid (CYP7A1 and CYP8B1) metabolism. At concentrations of 3.2 mM or higher after 20 h, and especially 48 h, increased cytotoxic effects were associated with dysregulation of numerous genes related to oxidative stress, DNA repair and apoptosis. Primary human hepatocyte cultures were also exposed to 1 and 3.2 mM PB for 20 h and the changes were comparable to those found in HepaRG cells treated under the same conditions. Taken altogether, our data provide further evidence that HepaRG cells closely resemble primary human hepatocytes and provide new information on the effects of PB in human liver. These data also emphasize the importance of investigating dose- and time-dependent effects of chemicals when using toxicogenomic approaches
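    The gene-count filtering described above (p ≤ 0.01 combined with a fold-change cutoff in either direction) can be sketched as follows. The gene names and values below are illustrative placeholders, not the study's microarray data:

```python
# Hedged sketch of fold-change / p-value gene filtering, mirroring the
# abstract's 2-fold and 1.8-fold cutoffs at p <= 0.01 (made-up values).
genes = {
    # name: (fold_change_vs_control, p_value)
    "CYP2B6": (12.4, 0.001),
    "CYP3A4": (6.1,  0.004),
    "ABCB1":  (1.9,  0.008),
    "CYP7A1": (0.45, 0.002),   # repressed ~2.2-fold
    "GAPDH":  (1.05, 0.700),
}

def significant(genes, fold_cutoff, alpha=0.01):
    """Keep genes meeting the fold cutoff in either direction
    (induction >= cutoff, repression <= 1/cutoff) at p <= alpha."""
    return sorted(
        name for name, (fc, p) in genes.items()
        if p <= alpha and (fc >= fold_cutoff or fc <= 1.0 / fold_cutoff)
    )

print(significant(genes, 2.0))   # ['CYP2B6', 'CYP3A4', 'CYP7A1']
print(significant(genes, 1.8))   # relaxing the cutoff adds ABCB1
```

    Relaxing the cutoff from 2- to 1.8-fold grows the list, which is why the abstract reports 77 versus 128 genes for the two filters.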

  8. Stable Overexpression of the Constitutive Androstane Receptor Reduces the Requirement for Culture with Dimethyl Sulfoxide for High Drug Metabolism in HepaRG Cells.

    Science.gov (United States)

    van der Mark, Vincent A; Rudi de Waart, D; Shevchenko, Valery; Elferink, Ronald P J Oude; Chamuleau, Robert A F M; Hoekstra, Ruurdtje

    2017-01-01

    Dimethylsulfoxide (DMSO) induces cellular differentiation and expression of drug metabolic enzymes in the human liver cell line HepaRG; however, DMSO also induces cell death and interferes with cellular activities. The aim of this study was to examine whether overexpression of the constitutive androstane receptor (CAR, NR1I3), the nuclear receptor controlling various drug metabolism genes, would sufficiently promote differentiation and drug metabolism in HepaRG cells, optionally without using DMSO. By stable lentiviral overexpression of CAR, HepaRG cultures were less affected by DMSO in total protein content and obtained increased resistance to acetaminophen- and amiodarone-induced cell death. Transcript levels of CAR target genes were significantly increased in HepaRG-CAR cultures without DMSO, resulting in increased activities of cytochrome P450 (P450) enzymes and bilirubin conjugation to levels equal or surpassing those of HepaRG cells cultured with DMSO. Unexpectedly, CAR overexpression also increased the activities of non-CAR target P450s, as well as albumin production. In combination with DMSO treatment, CAR overexpression further increased transcript levels and activities of CAR targets. Induction of CYP1A2 and CYP2B6 remained unchanged, whereas CYP3A4 was reduced. Moreover, the metabolism of low-clearance compounds warfarin and prednisolone was increased. In conclusion, CAR overexpression creates a more physiologically relevant environment for studies on hepatic (drug) metabolism and differentiation in HepaRG cells without the utilization of DMSO. DMSO still may be applied to accomplish higher drug metabolism, required for sensitive assays, such as low-clearance studies and identification of (rare) metabolites, whereas reduced total protein content after DMSO culture is diminished by CAR overexpression. Copyright © 2016 by The American Society for Pharmacology and Experimental Therapeutics.

  9. Observations of the distribution and the nature of alpha-active particulate material in a HEPA filter used for plutonium-containing dust

    International Nuclear Information System (INIS)

    Ryan, M.T.; McDowell, W.J.

    1977-02-01

    Autoradiography has been used to determine the distribution and the nature of plutonium particulate material on a high-efficiency particulate air (HEPA) filter used to filter 239Pu-containing dust. Higher concentrations of alpha-active material on upstream and downstream folds of the filter indicate uneven airflow through the filter. Observations of aggregate recoil particles on the downstream face of the filter suggest that aggregate recoil transfer, a mechanism which may reduce long-term HEPA filter efficiency, may be occurring. Amounts of alpha activity found on downstream filters confirm this supposition

  10. Study of the effect of humidity, particle hygroscopicity and size on the mass loading capacity of HEPA filters

    International Nuclear Information System (INIS)

    Gupta, A.

    1992-01-01

    The effect of humidity, particle hygroscopicity and size on the mass loading capacity of glass fiber HEPA filters has been studied. At humidities above the deliquescent point, the pressure drop across the HEPA filter increased non-linearly with the areal loading density (mass collected/filtration area) of NaCl aerosol, thus significantly reducing the mass loading capacity of the filter compared to dry hygroscopic or non-hygroscopic particle mass loadings. The specific cake resistance, K2, has been computed for different test conditions and used as a measure of the mass loading capacity. K2 was found to decrease with increasing humidity for the non-hygroscopic aluminum oxide particles and the hygroscopic NaCl particles (at humidities below the deliquescent point). It is postulated that an increase in humidity leads to the formation of a more open particulate cake which lowers the pressure drop for a given mass loading. A formula for predicting K2 for lognormally distributed aerosols (parameters obtained from impactor data) is derived. The resistance factor, R, calculated using this formula was compared to the theoretical R calculated using the Rudnick-Happel expression. For the non-hygroscopic aluminum oxide the agreement was good but for the hygroscopic sodium chloride, due to large variation in the cake porosity estimates, the agreement was poor
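    Assuming the common linear dry-loading model dP = dP0 + K2·U·W (pressure drop versus areal loading density W at face velocity U), the specific cake resistance K2 can be estimated from loading data by a straight-line fit. A sketch with invented data points, not the study's measurements:

```python
# Hedged sketch: estimating the specific cake resistance K2 from the
# common linear dry-loading model  dP = dP0 + K2 * U * W,  where W is
# areal loading density and U the face velocity. Data are invented.
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx

U = 0.025                                  # face velocity, m/s (assumed)
W = [0.0, 0.02, 0.04, 0.06, 0.08]          # areal density, kg/m2
dP = [250.0, 300.0, 350.0, 400.0, 450.0]   # pressure drop, Pa

slope, dP0 = linear_fit(W, dP)
K2 = slope / U                             # units: Pa*s*m/kg
print(f"dP0 = {dP0:.0f} Pa, K2 = {K2:.3g} Pa*s*m/kg")
```

    The non-linear rise in pressure drop observed above the deliquescent point is precisely a departure from this constant-K2 model.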

  11. Solutions for Dioctyl Phthalate (DOP) tested high efficiency particulate air (HEPA) filters destined for disposal at Hanford, Washington

    International Nuclear Information System (INIS)

    Gablin, K.A.

    1992-11-01

    In January 1992, Argonne National Laboratory East, Environmental and Waste Management Program, learned that a chemical material used for testing of all HEPA filters at the primary source, Flanders Filter, Inc. in Washington, NC, was considered a hazardous chemical under the Washington State Dangerous Waste Regulations. These regulations fall under the jurisdiction of the Washington Administrative Code, Chapter 173-303, and therefore directly impact the Hanford Site Solid Waste Acceptance Criteria. Dioctyl phthalate, "DOP" as it is referred to in abbreviated form, is added in small test quantities at the factory, at three Department of Energy (DOE) operated HEPA filter test facilities, and in the installed ductwork at various operating laboratories and production facilities. When small amounts of radioactivity are added to the filter media in operation, the result is a mixed waste. This definition would normally arise only in the state of Washington, since its acceptance criteria are ten times more stringent than those of the US Environmental Protection Agency (US EPA). Methods of processing are discussed, including detoxification, physical separation, heat and vacuum separation, and compaction. The economic impact of a mixed-waste definition in the State of Washington, and a low-level waste (LLW) definition in other locations, may make this product a prime candidate for commercial disposal in the future, or a possible de-listing by the State of Washington

  12. Evaluation of the HEPA filter in-place test method in a corrosive off-gas environment

    International Nuclear Information System (INIS)

    Murphy, L.P.; Wong, M.A.; Girton, R.C.

    1978-01-01

    Experiments were performed to determine if the combined effects of temperature, humidity, and oxides of nitrogen (NOx) hinder the in-place testing of high-efficiency particulate air (HEPA) filters used for cleaning the off-gas from a nuclear waste solidification facility. The laboratory system that was designed to simulate the process off-gas contained two HEPA filters in series with sample ports before each filter and after the filter bank. The system also included a reaction bomb for partial conversion of NO to NO2. Instrumentation measured stream flow, humidity, NOx concentration, and temperature. Comparison measurements of the DOP concentrations were made by a forward light-scattering photometer and a single particle intra-cavity laser particle spectrometer. Experimental conditions could be varied, but maximum system capabilities were 95% relative humidity, 90 °C, and 10,000 ppm of NOx. A 2³ factorial experimental design was used for the test program. This design determined the main effects of each factor plus the interactions of the factors in combination. The results indicated that water vapor and NOx interfere with the conventional photometer measurements. Suggested modifications that include a unique sample dryer are described to correct the interferences. The laser particle spectrometer appears to be an acceptable instrument for measurements under adverse off-gas conditions
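    In a 2³ factorial design, each main effect and interaction is a signed contrast over the eight runs. A hedged sketch with toy responses (the study's measurements are not reproduced; factor labels and numbers are invented for illustration):

```python
from itertools import product

# Hedged sketch: main effects and interactions in a 2^3 factorial
# design (factors: temperature, humidity, NOx). The toy response is
# constructed so humidity and NOx matter and interact.
runs = {}  # (T, H, N) coded levels -1/+1  ->  response
for t, h, n in product((-1, 1), repeat=3):
    runs[(t, h, n)] = 1.0 + 0.2 * h + 0.3 * n + 0.15 * h * n

def effect(runs, signs):
    """Factorial contrast: signed sum of responses over 2^(k-1).
    `signs` maps a run's coded levels to +1/-1 (product of columns)."""
    return sum(signs(k) * v for k, v in runs.items()) / (len(runs) / 2)

main_T   = effect(runs, lambda k: k[0])         # temperature main effect
main_H   = effect(runs, lambda k: k[1])         # humidity main effect
main_N   = effect(runs, lambda k: k[2])         # NOx main effect
inter_HN = effect(runs, lambda k: k[1] * k[2])  # humidity x NOx interaction
print(round(main_H, 3), round(main_N, 3), round(inter_HN, 3))  # 0.4 0.6 0.3
```

    The same contrast function recovers any higher-order interaction by multiplying more factor columns into `signs`.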

  13. Reliability and Validity of the SE-HEPA: Examining Physical Activity- and Healthy Eating-Specific Self-Efficacy among a Sample of Preadolescents

    Science.gov (United States)

    Steele, Michael M.; Burns, Leonard G.; Whitaker, Brandi N.

    2013-01-01

    Objective. The purpose of this study was to examine the psychometric properties of the self-efficacy for healthy eating and physical activity measure (SE-HEPA) for preadolescents. Method. The reliability of the measure was examined to determine if the internal consistency of the measure was adequate (i.e., αs greater than 0.70). Next, in an…
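    The internal-consistency criterion mentioned here (α greater than 0.70) refers to Cronbach's alpha, which is computed directly from item-level scores. A sketch with fabricated Likert-style data, not the SE-HEPA sample:

```python
# Hedged sketch: Cronbach's alpha, the statistic behind the
# "alphas greater than 0.70" adequacy criterion. Scores are made up.
def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of totals),
    with items as per-item score lists in the same respondent order."""
    k = len(items)
    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1.0 - sum(pvar(it) for it in items) / pvar(totals))

items = [            # 4 items x 5 respondents (fabricated)
    [3, 4, 4, 5, 2],
    [3, 5, 4, 4, 2],
    [2, 4, 5, 5, 3],
    [3, 4, 4, 5, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2), alpha > 0.70)  # 0.94 True
```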

  14. Liver Progenitor Cell Line HepaRG Differentiated in a Bioartificial Liver Effectively Supplies Liver Support to Rats with Acute Liver Failure

    NARCIS (Netherlands)

    Nibourg, Geert A. A.; Chamuleau, Robert A. F. M.; van der Hoeven, Tessa V.; Maas, Martinus A. W.; Ruiter, An F. C.; Lamers, Wouter H.; Oude Elferink, Ronald P. J.; van Gulik, Thomas M.; Hoekstra, Ruurdtje

    2012-01-01

    A major roadblock to the application of bioartificial livers is the need for a human liver cell line that displays a high and broad level of hepatic functionality. The human bipotent liver progenitor cell line HepaRG is a promising candidate in this respect, for its potential to differentiate into

  15. The human hepatocyte cell lines IHH and HepaRG: models to study glucose, lipid and lipoprotein metabolism.

    Science.gov (United States)

    Samanez, Carolina Huaman; Caron, Sandrine; Briand, Olivier; Dehondt, Hélène; Duplan, Isabelle; Kuipers, Folkert; Hennuyer, Nathalie; Clavey, Véronique; Staels, Bart

    2012-07-01

    Metabolic diseases have reached epidemic proportions, and a better knowledge of the associated alterations in the metabolic pathways of the liver is necessary. Such studies require in vitro human cell models. Several human hepatoma models are used, but the response of many metabolic pathways to physiological stimuli is often lost. Here, we characterize two human hepatocyte cell lines, IHH and HepaRG, by analysing the expression and regulation of genes involved in glucose and lipid metabolism. Our results show that the glycolysis pathway is activated by glucose and insulin in both lines. Gluconeogenesis gene expression is induced by forskolin in IHH cells and inhibited by insulin in both cell lines. The lipogenic pathway is regulated by insulin in IHH cells. Finally, both cell lines secrete apolipoprotein B-containing lipoproteins, an effect promoted by increasing glucose concentrations. These two human cell lines are thus interesting models to study the regulation of glucose and lipid metabolism.

  16. Tailored liquid chromatography-mass spectrometry analysis improves the coverage of the intracellular metabolome of HepaRG cells.

    Science.gov (United States)

    Cuykx, Matthias; Negreira, Noelia; Beirnaert, Charlie; Van den Eede, Nele; Rodrigues, Robim; Vanhaecke, Tamara; Laukens, Kris; Covaci, Adrian

    2017-03-03

    Metabolomics protocols are often combined with Liquid Chromatography-Mass Spectrometry (LC-MS) using mostly reversed phase chromatography coupled to accurate mass spectrometry, e.g. quadrupole time-of-flight (QTOF) mass spectrometers to measure as many metabolites as possible. In this study, we optimised the LC-MS separation of cell extracts after fractionation in polar and non-polar fractions. Both phases were analysed separately in a tailored approach in four different runs (two for the non-polar and two for the polar-fraction), each of them specifically adapted to improve the separation of the metabolites present in the extract. This approach improves the coverage of a broad range of the metabolome of the HepaRG cells and the separation of intra-class metabolites. The non-polar fraction was analysed using a C18-column with end-capping, mobile phase compositions were specifically adapted for each ionisation mode using different co-solvents and buffers. The polar extracts were analysed with a mixed mode Hydrophilic Interaction Liquid Chromatography (HILIC) system. Acidic metabolites from glycolysis and the Krebs cycle, together with phosphorylated compounds, were best detected with a method using ion pairing (IP) with tributylamine and separation on a phenyl-hexyl column. Accurate mass detection was performed with the QTOF in MS-mode only using an extended dynamic range to improve the quality of the dataset. Parameters with the greatest impact on the detection were the balance between mass accuracy and linear range, the fragmentor voltage, the capillary voltage, the nozzle voltage, and the nebuliser pressure. By using a tailored approach for the intracellular HepaRG metabolome, consisting of three different LC techniques, over 2200 metabolites can be measured with a high precision and acceptable linear range. The developed method is suited for qualitative untargeted LC-MS metabolomics studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Geometric mechanics of periodic pleated origami.

    Science.gov (United States)

    Wei, Z Y; Guo, Z V; Dudte, L; Liang, H Y; Mahadevan, L

    2013-05-24

    Origami structures are mechanical metamaterials with properties that arise almost exclusively from the geometry of the constituent folds and the constraint of piecewise isometric deformations. Here we characterize the geometry and planar and nonplanar effective elastic response of a simple periodically folded Miura-ori structure, which is composed of identical unit cells of mountain and valley folds with four-coordinated ridges, defined completely by two angles and two lengths. We show that the in-plane and out-of-plane Poisson's ratios are equal in magnitude, but opposite in sign, independent of material properties. Furthermore, we show that effective bending stiffness of the unit cell is singular, allowing us to characterize the two-dimensional deformation of a plate in terms of a one-dimensional theory. Finally, we solve the inverse design problem of determining the geometric parameters for the optimal geometric and mechanical response of these extreme structures.
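    The equal-and-opposite Poisson's ratios reported here can be checked numerically from the unit-cell geometry alone. The sketch below computes the in-plane ratio by finite differences; the cell-dimension formulas (parallelogram sides a, b, plane angle gamma, fold angle theta) follow a common rigid-facet parameterisation and are an assumption of the sketch, not taken from this paper:

```python
import math

# Hedged sketch: negative in-plane Poisson's ratio of a rigid-facet
# Miura-ori unit cell, computed numerically from an assumed standard
# parameterisation of the cell dimensions.
def cell_dims(a, b, gamma, theta):
    """In-plane unit-cell dimensions L (along ridges) and S (across)."""
    L = a * math.sqrt(1.0 - (math.sin(gamma) * math.sin(theta)) ** 2)
    ct = math.cos(theta) * math.tan(gamma)
    S = b * ct / math.sqrt(1.0 + ct ** 2)
    return L, S

def inplane_poisson(a, b, gamma, theta, h=1e-6):
    """nu_SL = -(dS/S) / (dL/L), by central finite differences in theta."""
    L0, S0 = cell_dims(a, b, gamma, theta)
    Lm, Sm = cell_dims(a, b, gamma, theta - h)
    Lp, Sp = cell_dims(a, b, gamma, theta + h)
    return -((Sp - Sm) / S0) / ((Lp - Lm) / L0)

nu = inplane_poisson(1.0, 1.0, math.radians(60), math.radians(40))
print(round(nu, 3))  # -0.568: negative (auxetic) in-plane response
```

    Within this parameterisation the ratio is negative for every fold state and depends only on geometry, consistent with the abstract's claim that the response is independent of material properties.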

  18. Deep frying

    NARCIS (Netherlands)

    Koerten, van K.N.

    2016-01-01

    Deep frying is one of the most used methods in the food processing industry. Though practically any food can be fried, French fries are probably the most well-known deep fried products. The popularity of French fries stems from their unique taste and texture, a crispy outside with a mealy soft

  19. Deep learning

    CERN Document Server

    Goodfellow, Ian; Courville, Aaron

    2016-01-01

    Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language proces...

  20. Walking in the high-rise city: a Health Enhancement and Pedometer-determined Ambulatory (HEPA) program in Hong Kong

    Directory of Open Access Journals (Sweden)

    Leung AYM

    2014-08-01

    Angela YM Leung,1,2 Mike KT Cheung,3 Michael A Tse,4 Wai Chuen Shum,5 BJ Lancaster,1,6 Cindy LK Lam7 1School of Nursing, 2Research Centre on Heart, Brain, Hormone and Healthy Aging, Li Ka Shing Faculty of Medicine, University of Hong Kong, 3Centre on Research and Advocacy, Hong Kong Society for Rehabilitation, 4Institute of Human Performance, University of Hong Kong, 5Sheng Kung Hui Holy Carpenter Church Social Services, Hong Kong Special Administrative Region, People’s Republic of China; 6School of Nursing, Vanderbilt University, Nashville, TN, USA; 7Department of Family Medicine and Primary Care, University of Hong Kong, Hong Kong Special Administrative Region, People’s Republic of China Abstract: Due to the lack of good infrastructure in the public estates, many older adults in urban areas are sedentary. The Health Enhancement and Pedometer-Determined Ambulatory (HEPA) program was developed to assist older adults with diabetes and/or hypertension to acquire walking exercise habits and to build social support while engaged in regular physical activity. This study aimed to describe the HEPA program and to report changes in participants’ walking capacity and body strength after 10-week walking sessions. A pre- and postintervention design was used. Pedometers were used to measure the number of steps taken per day before and after the 10-week intervention. Upper and lower body strength, lower body flexibility, and quality of life were assessed. A total of 205 older adults completed the program and all health assessments. After the 10-week intervention, the average number of steps per day increased by 36%, from 6,591 to 8,934. Lower body strength, upper body strength, and aerobic fitness increased significantly after 10 weeks, along with improvement in the 12-item Short Form Health Survey (SF-12) physical and mental health component summary scores. A social support network was built in the neighborhood, and the local environment was

  1. 3-Nitrobenzanthrone and 3-aminobenzanthrone induce DNA damage and cell signalling in Hepa1c1c7 cells.

    Science.gov (United States)

    Landvik, N E; Arlt, V M; Nagy, E; Solhaug, A; Tekpli, X; Schmeiser, H H; Refsnes, M; Phillips, D H; Lagadic-Gossmann, D; Holme, J A

    2010-02-03

    3-Nitrobenzanthrone (3-NBA) is a mutagenic and carcinogenic environmental pollutant found in diesel exhaust and urban air pollution. In the present work we have characterised the effects of 3-NBA and its metabolite 3-aminobenzanthrone (3-ABA) on cell death and cytokine release in mouse hepatoma Hepa1c1c7 cells. These effects were related to induced DNA damage and changes in cell signalling pathways. 3-NBA resulted in cell death and caused most DNA damage as judged by the amount of DNA adducts (³²P-postlabelling assay), single strand (ss)DNA breaks and oxidative DNA lesions (comet assay) detected. An increased phosphorylation of H2AX, chk1, chk2 and partly ATM was observed using flow cytometry and/or Western blotting. Both compounds increased phosphorylation of p53 and MAPKs (ERK, p38 and JNK). However, only 3-NBA caused an accumulation of p53 in the nucleus and a translocation of Bax to the mitochondria. The p53 inhibitor pifithrin-alpha inhibited 3-NBA-induced apoptosis, indicating that cell death was a result of the triggering of DNA signalling pathways. The highest phosphorylation of Akt and degradation of IkappaB-alpha (suggesting activation of NF-kappaB) were also seen after treatment with 3-NBA. In contrast 3-ABA increased IL-6 release, but caused little or no toxicity. Cytokine release was inhibited by PD98059 and curcumin, suggesting that ERK and NF-kappaB play a role in this process. In conclusion, 3-NBA seems to have a higher potency to induce DNA damage compatible with its cytotoxic effects, while 3-ABA seems to have a greater effect on the immune system. Copyright 2009 Elsevier B.V. All rights reserved.

  2. 3-Nitrobenzanthrone and 3-aminobenzanthrone induce DNA damage and cell signalling in Hepa1c1c7 cells

    Energy Technology Data Exchange (ETDEWEB)

    Landvik, N.E. [Division of Environmental Medicine, Norwegian Institute of Public Health, P.O. Box 404 Torshov N-4303 Oslo (Norway); Arlt, V.M.; Nagy, E. [Section of Molecular Carcinogenesis, Institute of Cancer Research, Brookes Lawley Building, Sutton, Surrey SM2 5NG (United Kingdom); Solhaug, A. [Section for Toxicology, Department of Feed and Food Safety, National Veterinary Institute Pb 750 Sentrum, N-0106 Oslo (Norway); Tekpli, X. [EA SeRAIC, Equipe labellisee Ligue contre le Cancer, IFR 140, Universite de Rennes 1, Rennes (France); Schmeiser, H.H. [Research Group Genetic Alteration in Carcinogenesis, German Cancer Research Center, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Refsnes, M. [Division of Environmental Medicine, Norwegian Institute of Public Health, P.O. Box 404 Torshov N-4303 Oslo (Norway); Phillips, D.H. [Section of Molecular Carcinogenesis, Institute of Cancer Research, Brookes Lawley Building, Sutton, Surrey SM2 5NG (United Kingdom); Lagadic-Gossmann, D. [EA SeRAIC, Equipe labellisee Ligue contre le Cancer, IFR 140, Universite de Rennes 1, Rennes (France); Holme, J.A., E-mail: jorn.holme@fhi.no [Division of Environmental Medicine, Norwegian Institute of Public Health, P.O. Box 404 Torshov N-4303 Oslo (Norway)

    2010-02-03

    3-Nitrobenzanthrone (3-NBA) is a mutagenic and carcinogenic environmental pollutant found in diesel exhaust and urban air pollution. In the present work we have characterised the effects of 3-NBA and its metabolite 3-aminobenzanthrone (3-ABA) on cell death and cytokine release in mouse hepatoma Hepa1c1c7 cells. These effects were related to induced DNA damage and changes in cell signalling pathways. 3-NBA resulted in cell death and caused most DNA damage as judged by the amount of DNA adducts (³²P-postlabelling assay), single strand (ss)DNA breaks and oxidative DNA lesions (comet assay) detected. An increased phosphorylation of H2AX, chk1, chk2 and partly ATM was observed using flow cytometry and/or Western blotting. Both compounds increased phosphorylation of p53 and MAPKs (ERK, p38 and JNK). However, only 3-NBA caused an accumulation of p53 in the nucleus and a translocation of Bax to the mitochondria. The p53 inhibitor pifithrin-alpha inhibited 3-NBA-induced apoptosis, indicating that cell death was a result of the triggering of DNA signalling pathways. The highest phosphorylation of Akt and degradation of IκB-α (suggesting activation of NF-κB) were also seen after treatment with 3-NBA. In contrast 3-ABA increased IL-6 release, but caused little or no toxicity. Cytokine release was inhibited by PD98059 and curcumin, suggesting that ERK and NF-κB play a role in this process. In conclusion, 3-NBA seems to have a higher potency to induce DNA damage compatible with its cytotoxic effects, while 3-ABA seems to have a greater effect on the immune system.

  3. Preferential induction of the AhR gene battery in HepaRG cells after a single or repeated exposure to heterocyclic aromatic amines

    International Nuclear Information System (INIS)

    Dumont, Julie; Josse, Rozenn; Lambert, Carine; Antherieu, Sebastien; Laurent, Veronique; Loyer, Pascal; Robin, Marie-Anne; Guillouzo, Andre

    2010-01-01

    2-Amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) and 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx) are two of the most common heterocyclic aromatic amines (HAA) produced during cooking of meat, fish and poultry. Both HAA produce different tumor profiles in rodents and are suspected to be carcinogenic in humans. In order to better understand the molecular basis of HAA toxicity, we have analyzed gene expression profiles in the metabolically competent human HepaRG cells using pangenomic oligonucleotide microarrays, after either a single (24-h) or a repeated (28-day) exposure to 10 μM PhIP or MeIQx. The most responsive genes to both HAA were downstream targets of the arylhydrocarbon receptor (AhR): CYP1A1 and CYP1A2 after both time points and CYP1B1 and ALDH3A1 after 28 days. Accordingly, CYP1A1/1A2 induction in HAA-treated HepaRG cells was prevented by chemical inhibition or small interference RNA-mediated down-regulation of the AhR. Consistently, HAA induced activity of the CYP1A1 promoter, which contains a consensus AhR-related xenobiotic-responsive element (XRE). In addition, several other genes exhibited both time-dependent and compound-specific expression changes with, however, a smaller magnitude than previously reported for the prototypical AhR target genes. These changes concerned genes mainly related to cell growth and proliferation, apoptosis, and cancer. In conclusion, these results identify the AhR gene battery as the preferential target of PhIP and MeIQx in HepaRG cells and further support the hypothesis that intake of HAA in diet might increase human cancer risk.

  4. Human HepaRG Cells can be Cultured in Hanging-drop Plates for Cytochrome P450 Induction and Function Assays.

    Science.gov (United States)

    Murayama, Norie; Usui, Takashi; Slawny, Nicky; Chesné, Christophe; Yamazaki, Hiroshi

    2015-01-01

    Recent guidance/guidelines for industry recommend that cytochrome P450 induction can be assessed using human hepatocyte enzyme activity and/or mRNA levels to evaluate potential drug-drug interactions. To evaluate time-dependent cytochrome P450 induction precisely, induction of CYP1A2, CYP2B6, and CYP3A4 mRNA was confirmed (>2-fold) by treatment with omeprazole, phenobarbital, and rifampicin, respectively, for 24 or 48 h on day 3 from the start of culture. After 24 h, the fold induction of CYP1A2 with 3.6 and 1.8 × 10⁴ HepaRG cells per well was lower than that for 7.2 × 10⁴ cells. CYP1A2 induction levels at 24 h were higher than those after 48 h. In contrast, higher CYP2B6 inductions were confirmed after 48 h exposure than after 24 h, independent of the number of cells per well. To help reduce the use of human cryopreserved hepatocytes, typical P450-dependent enzyme activities were investigated in human HepaRG cells cultured in commercial hanging-drop plates. Newly designed 96-well hanging-drop plates were capable of maintaining human CYP3A-dependent midazolam hydroxylation activities for up to 4 days using only 10% of the recommended initial 7.2 × 10⁴ cells per well. Favorable HepaRG function using hanging-drop plates was confirmed by detecting 1'-hydroxymidazolam O-glucuronide on day 3, suggesting an improvement over traditional 24-well control plates in which this metabolite can be detected. These results suggest that the catalytic function and/or induction of CYP1A2, CYP2B6, and CYP3A4 can be readily assessed with reduced numbers of starting HepaRG cells cultured in three-dimensional cultures in drops prepared with hanging-drop plates.

  5. Deep Learning

    DEFF Research Database (Denmark)

    Jensen, Morten Bornø; Bahnsen, Chris Holmberg; Nasrollahi, Kamal

    2018-01-01

    Over the past 10 years, artificial neural networks have gone from being a dusty, cast-off technology to playing a leading role in the development of artificial intelligence. This phenomenon is called deep learning and is inspired by the structure of the brain.

  6. Deep geothermics

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The hot-dry-rocks located at 3-4 km of depth correspond to low permeable rocks carrying a large amount of heat. The extraction of this heat usually requires artificial hydraulic fracturing of the rock to increase its permeability before water injection. Hot-dry-rocks geothermics or deep geothermics is not today a commercial channel but only a scientific and technological research field. The Soultz-sous-Forets site (Northern Alsace, France) is characterized by a 6 degrees per meter geothermal gradient and is used as a natural laboratory for deep geothermal and geological studies in the framework of a European research program. Two boreholes have been drilled up to 3600 m of depth in the highly-fractured granite massif beneath the site. The aim is to create a deep heat exchanger using only the natural fracturing for water transfer. A consortium of german, french and italian industrial companies (Pfalzwerke, Badenwerk, EdF and Enel) has been created for a more active participation to the pilot phase. (J.S.). 1 fig., 2 photos

  7. Experimental relationship between the specific resistance of a HEPA [High Efficiency Particulate Air] filter and particle diameters of different aerosol materials

    International Nuclear Information System (INIS)

    Novick, V.J.; Monson, P.R.; Ellison, P.E.

    1990-01-01

    The increase in pressure drop across a HEPA filter has been measured as a function of the particle mass loading using two materials with different particle morphologies. The HEPA filter media chosen is identical to the filter media used in the Airborne Activity Confinement System (AACS) on the Savannah River Reactors. The velocity through the test filter media was the same as the velocity through the AACS media, under normal operating flow conditions. Sodium chloride challenge particles were generated using an atomizer, resulting in regularly shaped crystalline forms. Ammonium chloride aerosols were formed from the gas phase reaction of HCl and NH4OH vapors, resulting in irregular agglomerates. In both cases, the generation conditions were adjusted to provide several different particle size distributions. For each particle size distribution, the mass of material loaded per unit area of filter per unit pressure drop for a given filtration velocity (1/specific resistance) was measured. Theoretical considerations in the most widely accepted filter cake model predict that the mass per unit area and per unit pressure drop should increase with the particle density times the particle diameter squared. However, these test results indicate that the increase in the mass loaded per unit area per unit pressure drop, for both materials, can be better described by plotting the specific resistance divided by the particle density as an inverse function of the particle density times the particle diameter squared. 9 refs., 7 figs
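The filter-cake relation referred to above is commonly written as dP_cake = K2·U·W, with K2 the cake's specific resistance, U the filtration face velocity, and W the areal mass loading; the "mass loaded per unit area per unit pressure drop" is then 1/(K2·U). The sketch below only illustrates this bookkeeping; the numerical values are illustrative assumptions, not data from the study.

```python
# Sketch of the classical filter-cake relation underlying the
# "specific resistance" discussion above:
#     dP_cake = K2 * U * W
# All numbers below are illustrative assumptions, not study data.

def cake_pressure_drop(K2, U, W):
    """Pressure drop (Pa) across a dust cake of areal mass W (kg/m^2)
    at face velocity U (m/s), given specific resistance K2 (Pa*s*m/kg)."""
    return K2 * U * W

def dust_holding_capacity(dP_term, K2, U):
    """Areal mass loading (kg/m^2) reached at a terminal pressure drop,
    i.e. the inverse relation dP_term / (K2 * U)."""
    return dP_term / (K2 * U)

# Assumed values: K2 = 4e7 Pa*s*m/kg, U = 0.025 m/s, and a terminal
# pressure drop 1,000 Pa above the clean-filter value.
W_max = dust_holding_capacity(1000.0, 4e7, 0.025)  # kg of dust per m^2 of media
```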

  8. Differences in TCDD-elicited gene expression profiles in human HepG2, mouse Hepa1c1c7 and rat H4IIE hepatoma cells

    Directory of Open Access Journals (Sweden)

    Burgoon Lyle D

    2011-04-01

    Full Text Available Abstract Background 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD is an environmental contaminant that elicits a broad spectrum of toxic effects in a species-specific manner. Current risk assessment practices routinely extrapolate results from in vivo and in vitro rodent models to assess human risk. In order to further investigate the species-specific responses elicited by TCDD, temporal gene expression responses in human HepG2, mouse Hepa1c1c7 and rat H4IIE cells were compared. Results Microarray analysis identified a core set of conserved gene expression responses across species consistent with the role of AhR in mediating adaptive metabolic responses. However, significant species-specific as well as species-divergent responses were identified. Computational analysis of the regulatory regions of species-specific and -divergent responses suggests that dioxin response elements (DREs are involved. These results are consistent with in vivo rat vs. mouse species-specific differential gene expression, and more comprehensive comparative DRE searches. Conclusions Comparative analysis of human HepG2, mouse Hepa1c1c7 and rat H4IIE TCDD-elicited gene expression responses is consistent with in vivo rat-mouse comparative gene expression studies, and more comprehensive comparative DRE searches, suggesting that AhR-mediated gene expression is species-specific.

  9. Deep smarts.

    Science.gov (United States)

    Leonard, Dorothy; Swap, Walter

    2004-09-01

    When a person sizes up a complex situation and rapidly comes to a decision that proves to be not just good but brilliant, you think, "That was smart." After you watch him do this a few times, you realize you're in the presence of something special. It's not raw brainpower, though that helps. It's not emotional intelligence, either, though that, too, is often involved. It's deep smarts. Deep smarts are not philosophical--they're not "wisdom" in that sense, but they're as close to wisdom as business gets. You see them in the manager who understands when and how to move into a new international market, in the executive who knows just what kind of talk to give when her organization is in crisis, in the technician who can track a product failure back to an interaction between independently produced elements. These are people whose knowledge would be hard to purchase on the open market. Their insight is based on know-how more than on know-what; it comprises a system view as well as expertise in individual areas. Because deep smarts are experience based and often context specific, they can't be produced overnight or readily imported into an organization. It takes years for an individual to develop them--and no time at all for an organization to lose them when a valued veteran walks out the door. They can be taught, however, with the right techniques. Drawing on their forthcoming book Deep Smarts, Dorothy Leonard and Walter Swap say the best way to transfer such expertise to novices--and, on a larger scale, to make individual knowledge institutional--isn't through PowerPoint slides, a Web site of best practices, online training, project reports, or lectures. Rather, the sage needs to teach the neophyte individually how to draw wisdom from experience. Companies have to be willing to dedicate time and effort to such extensive training, but the investment more than pays for itself.

  10. Selecting Cells for Bioartificial Liver Devices and the Importance of a 3D Culture Environment: A Functional Comparison between the HepaRG and C3A Cell Lines.

    Science.gov (United States)

    van Wenum, Martien; Adam, Aziza A A; Hakvoort, Theodorus B M; Hendriks, Erik J; Shevchenko, Valery; van Gulik, Thomas M; Chamuleau, Robert A F M; Hoekstra, Ruurdtje

    2016-01-01

    Recently, the first clinical trials on Bioartificial Livers (BALs) loaded with a proliferative human hepatocyte cell source have started. There are two cell lines that are currently in an advanced state of BAL development; HepaRG and HepG2/C3A. In this study we aimed to compare both cell lines on applicability in BALs and to identify possible strategies for further improvement. We tested both cell lines in monolayer- and BAL cultures on growth characteristics, hepatic differentiation, nitrogen-, carbohydrate-, amino acid- and xenobiotic metabolism. Interestingly, both cell lines adapted the hepatocyte phenotype more closely when cultured in BALs; e.g. monolayer cultures produced lactate, while BAL cultures showed diminished lactate production (C3A) or conversion to elimination (HepaRG), and urea cycle activity increased upon BAL culturing in both cell lines. HepaRG-BALs outperformed C3A-BALs on xenobiotic metabolism, ammonia elimination and lactate elimination, while protein synthesis was comparable. In BAL cultures of both cell lines ammonia elimination correlated positively with glutamine production and glutamate consumption, suggesting ammonia elimination was mainly driven by the balance between glutaminase and glutamine synthetase activity. Both cell lines lacked significant urea cycle activity and both required multiple culture weeks before reaching optimal differentiation in BALs. In conclusion, culturing in BALs enhanced hepatic functionality of both cell lines and from these, the HepaRG cells are the most promising proliferative cell source for BAL application.

  11. Selecting Cells for Bioartificial Liver Devices and the Importance of a 3D Culture Environment: A Functional Comparison between the HepaRG and C3A Cell Lines

    NARCIS (Netherlands)

    van Wenum, Martien; Adam, Aziza A. A.; Hakvoort, Theodorus B. M.; Hendriks, Erik J.; Shevchenko, Valery; van Gulik, Thomas M.; Chamuleau, Robert A. F. M.; Hoekstra, Ruurdtje

    2016-01-01

    Recently, the first clinical trials on Bioartificial Livers (BALs) loaded with a proliferative human hepatocyte cell source have started. There are two cell lines that are currently in an advanced state of BAL development; HepaRG and HepG2/C3A. In this study we aimed to compare both cell lines on

  12. Activation of the sonic hedgehog signaling pathway occurs in the CD133 positive cells of mouse liver cancer Hepa 1–6 cells

    Directory of Open Access Journals (Sweden)

    Jeng KS

    2013-08-01

    Full Text Available Kuo-Shyang Jeng,1 I-Shyan Sheen,2 Wen-Juei Jeng,2 Ming-Che Yu,3 Hsin-I Hsiau,3 Fang-Yu Chang,3 Hsin-Hua Tsai3 1Department of Surgery, Far Eastern Memorial Hospital, Taipei, 2Department of Hepato-Gastroenterology, Chang Gung Memorial Hospital, Linkou Medical Center, Chang Gung University, 3Department of Medical Research, Far Eastern Memorial Hospital, Taipei, Taiwan, Republic of China Background: The important role of cancer stem cells in carcinogenesis has been emphasized in research. CD133+ cells have been mentioned as liver cancer stem cells in hepatocellular carcinoma (HCC). Some researchers have proposed that the sonic hedgehog (Shh) pathway contributes to hepatocarcinogenesis and that the pathway activation occurs mainly in cancer stem cells. We investigated whether the activation of the Shh pathway occurs in CD133+ cells from liver cancer. Materials and methods: We used magnetic sorting to isolate CD133+ cells from mouse liver cancer Hepa 1–6 cells. To examine the clonogenicity, cell culture and soft agar colony formation assays were performed on both CD133+ and CD133- cells. To study the activation of the Shh pathway, we examined the mRNA expression of Shh, patched homolog 1 (Ptch-1), glioma-associated oncogene homolog 1 (Gli-1), and smoothened homolog (Smoh) by real-time polymerase chain reaction in both CD133+ and CD133- cells. Results: The number (mean ± standard deviation) of colonies of CD133+ cells and CD133- cells was 1,031.0 ± 104.7 and 119.7 ± 17.6, respectively. This difference was statistically significant (P < 0.001). Their clonogenicity was 13.7% ± 1.4% and 1.6% ± 0.2%, respectively, with a statistically significant difference found (P < 0.001). CD133+ cells and CD133- cells were found to have statistically significant differences in Shh mRNA and Smoh mRNA (P = 0.005 and P = 0.043, respectively). Conclusion: CD133+ Hepa 1–6 cells have a significantly higher colony proliferation and clonogenicity. The Shh pathway is activated in these

  13. A Transcriptional Regulatory Network Containing Nuclear Receptors and Long Noncoding RNAs Controls Basal and Drug-Induced Expression of Cytochrome P450s in HepaRG Cells.

    Science.gov (United States)

    Chen, Liming; Bao, Yifan; Piekos, Stephanie C; Zhu, Kexin; Zhang, Lirong; Zhong, Xiao-Bo

    2018-07-01

    Cytochrome P450 (P450) enzymes are responsible for metabolizing drugs. Expression of P450s can directly affect drug metabolism, resulting in various outcomes in therapeutic efficacy and adverse effects. Several nuclear receptors are transcription factors that can regulate expression of P450s at both basal and drug-induced levels. Some long noncoding RNAs (lncRNAs) near a transcription factor are found to participate in the regulatory functions of the transcription factor. The aim of this study is to determine whether there is a transcriptional regulatory network containing nuclear receptors and lncRNAs that controls both basal and drug-induced expression of P450s in HepaRG cells. Small interfering RNAs or small hairpin RNAs were applied to knock down four nuclear receptors [hepatocyte nuclear factor 1α (HNF1α), hepatocyte nuclear factor 4α (HNF4α), pregnane X receptor (PXR), and constitutive androstane receptor (CAR)] as well as two lncRNAs [HNF1α antisense RNA 1 (HNF1α-AS1) and HNF4α antisense RNA 1 (HNF4α-AS1)] in HepaRG cells with or without treatment with phenobarbital or rifampicin. Expression of eight P450 enzymes was examined at both basal and drug-induced levels. CAR and PXR mainly regulated expression of specific P450s. HNF1α and HNF4α affected expression of a wide range of P450s as well as other transcription factors. HNF1α and HNF4α controlled the expression of their neighboring lncRNAs, HNF1α-AS1 and HNF4α-AS1, respectively. HNF1α-AS1 and HNF4α-AS1 were also involved in the regulation of P450s and transcription factors in diverse manners. Altogether, our study concludes that a transcriptional regulatory network containing nuclear receptors and lncRNAs controls both basal and drug-induced expression of P450s in HepaRG cells. Copyright © 2018 by The American Society for Pharmacology and Experimental Therapeutics.

  14. Prospective evaluation of FibroTest®, FibroMeter®, and HepaScore® for staging liver fibrosis in chronic hepatitis B: comparison with hepatitis C.

    Science.gov (United States)

    Leroy, Vincent; Sturm, Nathalie; Faure, Patrice; Trocme, Candice; Marlu, Alice; Hilleret, Marie-Noëlle; Morel, Françoise; Zarski, Jean-Pierre

    2014-07-01

    Fibrosis blood tests have been validated in chronic hepatitis C. Their diagnostic accuracy is less documented in hepatitis B. The aim of this study was to describe the diagnostic performance of FibroTest®, FibroMeter®, and HepaScore® for liver fibrosis in hepatitis B compared to hepatitis C. 510 patients mono-infected with hepatitis B or C and matched on fibrosis stage were included. Blood tests were performed the day of the liver biopsy. Histological lesions were staged according to METAVIR. Fibrosis stages were distributed as follows: F0 n=76, F1 n=192, F2 n=132, F3 n=54, F4 n=56. Overall diagnostic performance of blood tests was similar between hepatitis B and C, with AUROCs ranging from 0.75 to 0.84 for significant fibrosis, 0.82 to 0.85 for extensive fibrosis, and 0.84 to 0.87 for cirrhosis. Optimal cut-offs were consistently lower in hepatitis B compared to hepatitis C, especially for the diagnosis of extensive fibrosis and cirrhosis, with decreased sensitivity and negative predictive values. More hepatitis B than C patients with F ≥3 were underestimated: FibroTest®: 47% vs. 26%, FibroMeter®: 24% vs. 6%, HepaScore®: 41% vs. 24%, leading to fibrosis underestimation. Overall the diagnostic performance of blood tests is similar in hepatitis B and C. The risk of underestimating significant fibrosis and cirrhosis is however greater in hepatitis B and cannot be entirely corrected by the use of more stringent cut-offs. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
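AUROC, the performance measure quoted throughout the abstract, is equivalent to the Mann-Whitney probability that a randomly chosen higher-stage case scores above a randomly chosen lower-stage case. A minimal sketch (the example scores below are made up, not study data):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """AUROC computed as the Mann-Whitney statistic: the probability
    that a randomly chosen positive case (e.g. significant fibrosis)
    scores above a negative one, counting ties as one half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean(pos > neg) + 0.5 * np.mean(pos == neg))

# Hypothetical blood-test scores, not values from the study:
print(auroc([0.7, 0.8, 0.9], [0.5, 0.7]))
```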

  15. DeepPy: Pythonic deep learning

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This technical report introduces DeepPy – a deep learning framework built on top of NumPy with GPU acceleration. DeepPy bridges the gap between high-performance neural networks and the ease of development from Python/NumPy. Users with a background in scientific computing in Python will quickly be able to understand and change the DeepPy codebase as it is mainly implemented using high-level NumPy primitives. Moreover, DeepPy supports complex network architectures by letting the user compose mathematical expressions as directed graphs. The latest version is available at http...
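The "mathematical expressions as directed graphs" idea can be illustrated generically. This is a toy sketch of the concept only, not DeepPy's actual API; all class and function names here are invented for illustration:

```python
import numpy as np

class Node:
    """Toy expression-graph node: evaluating a node evaluates its
    parent nodes first, so the expression forms a directed graph."""
    def __init__(self, op, *parents):
        self.op, self.parents = op, parents
    def evaluate(self, env):
        return self.op(*(p.evaluate(env) for p in self.parents))

class Input(Node):
    """Leaf node whose value is looked up in the environment."""
    def __init__(self, name):
        self.name = name
    def evaluate(self, env):
        return env[self.name]

# Compose relu(W @ x) as a small graph and evaluate it.
x, W = Input("x"), Input("W")
out = Node(lambda a: np.maximum(a, 0.0), Node(np.matmul, W, x))
result = out.evaluate({"W": np.array([[1.0, 0.0], [0.0, -1.0]]),
                       "x": np.array([1.0, 2.0])})
print(result)  # relu([1, -2]) -> [1. 0.]
```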

  16. Greedy Deep Dictionary Learning

    OpenAIRE

    Tariyal, Snigdha; Majumdar, Angshul; Singh, Richa; Vatsa, Mayank

    2016-01-01

    In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time. This requires solving a simple (shallow) dictionary learning problem, whose solution is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state of the art supervised dictionary learning t...
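The greedy layering described above can be sketched in NumPy: each layer solves one shallow factorisation X ≈ D·Z, and the codes Z become the data for the next layer. This is a simplified illustration (plain alternating least squares, no sparsity penalty), not the authors' algorithm:

```python
import numpy as np

def learn_layer(X, n_atoms, n_iter=30, seed=0):
    """One shallow dictionary-learning problem X ~ D @ Z, solved by
    plain alternating least squares (a simplification: the paper's
    method uses regularised dictionary-learning updates)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    for _ in range(n_iter):
        Z = np.linalg.lstsq(D, X, rcond=None)[0]        # codes, D fixed
        D = np.linalg.lstsq(Z.T, X.T, rcond=None)[0].T  # dictionary, Z fixed
    return D, Z

def greedy_deep_dictionary(X, layer_sizes):
    """Greedy layer-by-layer training: the codes of one layer become
    the data for the next, as described in the abstract."""
    dictionaries, data = [], X
    for n_atoms in layer_sizes:
        D, Z = learn_layer(data, n_atoms)
        dictionaries.append(D)
        data = Z
    return dictionaries, data  # final-layer codes = deep representation

# Illustrative run: 10-dim signals, two layers of 8 and 4 atoms.
rng = np.random.default_rng(1)
X = rng.standard_normal((10, 30))
dicts, codes = greedy_deep_dictionary(X, [8, 4])
```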

  17. Tumor necrosis factor-alpha potentiates the cytotoxicity of amiodarone in Hepa1c1c7 cells: roles of caspase activation and oxidative stress.

    Science.gov (United States)

    Lu, Jingtao; Miyakawa, Kazuhisa; Roth, Robert A; Ganey, Patricia E

    2013-01-01

    Amiodarone (AMD), a class III antiarrhythmic drug, causes idiosyncratic hepatotoxicity in human patients. We demonstrated previously that tumor necrosis factor-alpha (TNF-α) plays an important role in a rat model of AMD-induced hepatotoxicity under inflammatory stress. In this study, we developed a model in vitro to study the roles of caspase activation and oxidative stress in TNF potentiation of AMD cytotoxicity. AMD caused cell death in Hepa1c1c7 cells, and TNF cotreatment potentiated its toxicity. Activation of caspases 9 and 3/7 was observed in AMD/TNF-cotreated cells, and caspase inhibitors provided minor protection from cytotoxicity. Intracellular reactive oxygen species (ROS) generation and lipid peroxidation were observed after treatment with AMD and were further elevated by TNF cotreatment. Adding water-soluble antioxidants (trolox, N-acetylcysteine, glutathione, or ascorbate) produced only minor attenuation of AMD/TNF-induced cytotoxicity and did not influence the effect of AMD alone. On the other hand, α-tocopherol (TOCO), which reduced lipid peroxidation and ROS generation, prevented AMD toxicity and caused pronounced reduction in cytotoxicity from AMD/TNF cotreatment. α-TOCO plus a pancaspase inhibitor completely abolished AMD/TNF-induced cytotoxicity. In summary, activation of caspases and oxidative stress were observed after AMD/TNF cotreatment, and caspase inhibitors and a lipid-soluble free-radical scavenger attenuated AMD/TNF-induced cytotoxicity.

  18. EXPERIENCE OF ORNITHINE ASPARTATE (HEPA-MERZ AND PROBIOTICS BIOFLORUM FORTE IN THE TREATMENT OF NON-SEVERE FORMS OF ALCOHOLIC AND NON-ALCOHOLIC FATTY LIVER DISEASE

    Directory of Open Access Journals (Sweden)

    L. Yu. Ilchenko

    2016-01-01

    Full Text Available Aim: to evaluate the efficacy and tolerability of ornithine aspartate, the probiotic Bioflorum Forte, and their combination in patients with steatosis and steatohepatitis due to alcoholic and non-alcoholic fatty liver disease. Materials and methods. An open, randomized, comparative clinical study, which included 30 outpatients and inpatients with a diagnosis of steatosis or steatohepatitis. We analyzed the clinical symptoms and the functional state of the liver. With the help of questionnaires (the LeGo grid and the post-intoxication alcohol syndrome questionnaire) we established the presence of chronic alcohol intoxication. The number connection test was used to characterize cognitive function and to detect minimal hepatic encephalopathy. Quality of life was assessed by a questionnaire for patients with chronic liver disease, the CLDQ (Chronic Liver Disease Questionnaire). The duration of treatment was 4 weeks. Results: all three treatment regimens demonstrated therapeutic efficacy: clinical improvement, recovery of liver function, and improved results in cognitive function tests. Combined therapy also produced a significant improvement in patients' quality of life. The agents employed were safe and well tolerated; no adverse events were reported. Conclusion: the results obtained allow us to recommend the use of ornithine aspartate (Hepa-Merz), both as monotherapy and as part of complex therapy of steatosis and steatohepatitis together with the probiotic Bioflorum Forte, in patients with alcoholic and non-alcoholic fatty liver disease.

  19. Taoism and Deep Ecology.

    Science.gov (United States)

    Sylvan, Richard; Bennett, David

    1988-01-01

    Contrasted are the philosophies of Deep Ecology and ancient Chinese. Discusses the cosmology, morality, lifestyle, views of power, politics, and environmental philosophies of each. Concludes that Deep Ecology could gain much from Taoism. (CW)

  20. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...
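The warm-start idea described above (each new ensemble member starts from the previous member's weights, AdaBoost-style reweighting between rounds) can be sketched schematically. This toy uses tiny linear "members" in place of deep networks and a simplified weighted gradient step; it illustrates the scheme only and is not the paper's implementation:

```python
import numpy as np

def train_member(X, y, w, theta0, epochs=100, lr=0.5):
    """Tiny tanh 'network' trained on sample weights w, warm-started
    from theta0 -- the transfer-of-learning trick that Deep Incremental
    Boosting applies to deep nets. Uses a crude weighted gradient step."""
    theta = theta0.copy()
    for _ in range(epochs):
        p = np.tanh(X @ theta)          # member's soft prediction
        theta -= lr * (X.T @ (w * (p - y)))
    return theta

def deep_incremental_boosting(X, y, rounds=3):
    """AdaBoost-style loop where each member is initialised from the
    previous member's weights instead of from scratch."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)             # sample weights
    theta = np.zeros(d)                 # carried across rounds (warm start)
    members, alphas = [], []
    for _ in range(rounds):
        theta = train_member(X, y, w, theta)
        pred = np.sign(X @ theta)
        err = np.clip(w @ (pred != y), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)  # upweight misclassified samples
        w /= w.sum()
        members.append(theta.copy())
        alphas.append(alpha)
    return members, alphas

def predict(members, alphas, X):
    score = sum(a * np.sign(X @ m) for a, m in zip(alphas, members))
    return np.sign(score)

# Illustrative run on separable toy data (labels = sign of feature 0).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = np.sign(X[:, 0])
members, alphas = deep_incremental_boosting(X, y)
acc = np.mean(predict(members, alphas, X) == y)
```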

  1. Deep Space Telecommunications

    Science.gov (United States)

    Kuiper, T. B. H.; Resch, G. M.

    2000-01-01

    The increasing load on NASA's deep Space Network, the new capabilities for deep space missions inherent in a next-generation radio telescope, and the potential of new telescope technology for reducing construction and operation costs suggest a natural marriage between radio astronomy and deep space telecommunications in developing advanced radio telescope concepts.

  2. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...

  3. Deep learning evaluation using deep linguistic processing

    OpenAIRE

    Kuhnle, Alexander; Copestake, Ann

    2017-01-01

    We discuss problems with the standard approaches to evaluation for tasks like visual question answering, and argue that artificial data can be used to address these as a complement to current practice. We demonstrate that with the help of existing 'deep' linguistic processing technology we are able to create challenging abstract datasets, which enable us to investigate the language understanding abilities of multimodal deep learning models in detail, as compared to a single performance value ...

  4. Interactions of endosulfan and methoxychlor involving CYP3A4 and CYP2B6 in human HepaRG cells.

    Science.gov (United States)

    Savary, Camille C; Jossé, Rozenn; Bruyère, Arnaud; Guillet, Fabrice; Robin, Marie-Anne; Guillouzo, André

    2014-08-01

    Humans are usually exposed to several pesticides simultaneously; consequently, combined actions between pesticides themselves or between pesticides and other chemicals need to be addressed in the risk assessment. Many pesticides are efficient activators of pregnane X receptor (PXR) and/or constitutive androstane receptor (CAR), two major nuclear receptors that are also activated by other substrates. In the present work, we searched for interactions between endosulfan and methoxychlor, two organochlorine pesticides whose major routes of metabolism involve CAR- and PXR-regulated CYP3A4 and CYP2B6, and whose mechanisms of action in humans remain poorly understood. For this purpose, HepaRG cells were treated with both pesticides separately or in mixture for 24 hours or 2 weeks at concentrations relevant to human exposure levels. In combination they exerted synergistic cytotoxic effects. Whatever the duration of treatment, both compounds increased CYP3A4 and CYP2B6 mRNA levels while differently affecting their corresponding activities. Endosulfan exerted a direct reversible inhibition of CYP3A4 activity that was confirmed in human liver microsomes. By contrast, methoxychlor induced this activity. The effects of the mixture on CYP3A4 activity were equal to the sum of those of each individual compound, suggesting an additive effect of each pesticide. Despite CYP2B6 activity being unchanged and increased with endosulfan and methoxychlor, respectively, no change was observed with their mixture, supporting an antagonistic effect. Altogether, our data suggest that CAR and PXR activators endosulfan and methoxychlor can interact together and with other exogenous substrates in human hepatocytes. Their effects on CYP3A4 and CYP2B6 activities could have important consequences if extrapolated to the in vivo situation. Copyright © 2014 by The American Society for Pharmacology and Experimental Therapeutics.

  5. 1-Nitropyrene (1-NP) induces apoptosis and apparently a non-apoptotic programmed cell death (paraptosis) in Hepa1c1c7 cells

    International Nuclear Information System (INIS)

    Asare, Nana; Landvik, Nina E.; Lagadic-Gossmann, Dominique; Rissel, Mary; Tekpli, Xavier; Ask, Kjetil; Lag, Marit; Holme, Jorn A.

    2008-01-01

    Mechanistic studies of nitro-PAHs (polycyclic aromatic hydrocarbons) of interest might help elucidate which chemical characteristics are most important in eliciting toxic effects. 1-Nitropyrene (1-NP) is the predominant nitrated PAH emitted in diesel exhaust. 1-NP-exposed Hepa1c1c7 cells exhibited marked changes in cellular morphology, decreased proliferation and different forms of cell death. A dramatic increase in cytoplasmic vacuolization was observed after only 6 h of exposure, and the cells started to round up at 12 h. The rate of cell proliferation was markedly reduced at 24 h, and apoptotic as well as propidium iodide (PI)-positive cells appeared. Electron microscopic examination revealed that the vacuolization was partly due to mitochondrial swelling. The caspase inhibitor Z-VAD-FMK inhibited only the apoptotic cell death, and Nec-1 (an inhibitor of necroptosis) exhibited no inhibitory effects on either cell death or vacuolization. In contrast, cycloheximide markedly reduced both the number of apoptotic and PI-positive cells as well as the cytoplasmic vacuolization, suggesting that 1-NP induced paraptotic cell death. All the MAPKs (ERK1/2, p38 and JNK) appear to be involved in the death process, since marked activation was observed upon 1-NP exposure and their inhibitors partly reduced the induced cell death. The ERK1/2 inhibitor PD 98057 completely blocked the induced vacuolization, whereas the other MAPK inhibitors had only minor effects on this process. These findings suggest that 1-NP may cause apoptosis and paraptosis. In contrast, the corresponding amine (1-aminopyrene) elicited only minor apoptotic and necrotic cell death, and cells with characteristics typical of paraptosis were absent

  6. Dual-color fluorescence imaging to monitor CYP3A4 and CYP3A7 expression in human hepatic carcinoma HepG2 and HepaRG cells.

    Directory of Open Access Journals (Sweden)

    Saori Tsuji

    Full Text Available Human adult hepatocytes expressing CYP3A4, a major cytochrome P450 enzyme, are required for cell-based assays to evaluate the potential risk of drug-drug interactions caused by transcriptional induction of P450 enzymes in early-phase drug discovery and development. However, CYP3A7 is preferentially expressed in premature hepatoblasts and major hepatic carcinoma cell lines. The human hepatocellular carcinoma cell line HepaRG possesses a high self-renewal capacity and can differentiate into hepatic cells similar to human adult hepatocytes in vitro. Transgenic HepaRG cells, in which the expression of fluorescent reporters is regulated by 35 kb regulatory elements of CYP3A4, have a distinct advantage over human hepatocytes isolated by collagenase perfusion, which are unstable in culture. Thus, we created transgenic HepaRG and HepG2 cells by replacing the protein-coding regions of human CYP3A4 and CYP3A7 with enhanced green fluorescent protein (EGFP and DsRed reporters, respectively, in a bacterial artificial chromosome vector that included whole regulatory elements. The intensity of DsRed fluorescence was initially high during the proliferation of transgenic HepaRG cells. However, most EGFP-positive cells were derived from those in which DsRed fluorescence was extinguished. Comparative analyses in these transgenic clones showed that changes in the total fluorescence intensity of EGFP reflected fold changes in the mRNA level of endogenous CYP3A4. Moreover, CYP3A4 induction was monitored by the increase in EGFP fluorescence. Thus, this assay provides a real-time evaluation system for quality assurance of hepatic differentiation into CYP3A4-expressing cells, unfavourable CYP3A4 induction, and fluorescence-activated cell sorting-mediated enrichment of CYP3A4-expressing hepatocytes based on the total fluorescence intensities of fluorescent reporters, without the need for many time-consuming steps.

  7. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.

  8. The Pleating of History: Weaving the Threads of Nationhood

    Directory of Open Access Journals (Sweden)

    Martin Ball

    2013-08-01

    Full Text Available As any etymologist knows, the word ‘text’ is derived from the past participle of the Latin verb texere, to weave. Text is therefore something that is ‘woven’. It’s a persuasive metaphor, to imagine writing in terms of the warp and weft of ideas and words, of narrative threads woven together to become a piece of fabric. The idea of history as fabric brings together a whole different set of tropes, not just of weaving, but of the very materiality of fabric. Does the fabric have a nap, or a pattern? Is it cut with the grain, or on the bias? What of its folds, its seams? All these qualities of fabric have application in the interpretation of history, and some of these images are already familiar in historical discourse.

  9. Deep Vein Thrombosis

    African Journals Online (AJOL)

    OWNER

    Deep Vein Thrombosis: Risk Factors and Prevention in Surgical Patients. Deep Vein ... preventable morbidity and mortality in hospitalized surgical patients. ... the elderly.3,4 It is very rare before the age ... depends on the risk level; therefore an .... but also in the post-operative period. ... is continuing uncertainty regarding.

  10. Deep Echo State Network (DeepESN): A Brief Survey

    OpenAIRE

    Gallicchio, Claudio; Micheli, Alessio

    2017-01-01

    The study of deep recurrent neural networks (RNNs) and, in particular, of deep Reservoir Computing (RC) is gaining increasing research attention in the neural networks community. The recently introduced deep Echo State Network (deepESN) model opened the way to an extremely efficient approach for designing deep neural networks for temporal data. At the same time, the study of deepESNs has allowed researchers to shed light on the intrinsic properties of state dynamics developed by hierarchical compositions ...

  11. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Deep subsurface microbial processes

    Science.gov (United States)

    Lovley, D.R.; Chapelle, F.H.

    1995-01-01

    Information on the microbiology of the deep subsurface is necessary in order to understand the factors controlling the rate and extent of the microbially catalyzed redox reactions that influence the geophysical properties of these environments. Furthermore, there is an increasing threat that deep aquifers, an important drinking water resource, may be contaminated by man's activities, and there is a need to predict the extent to which microbial activity may remediate such contamination. Metabolically active microorganisms can be recovered from a diversity of deep subsurface environments. The available evidence suggests that these microorganisms are responsible for catalyzing the oxidation of organic matter coupled to a variety of electron acceptors just as microorganisms do in surface sediments, but at much slower rates. The technical difficulties in aseptically sampling deep subsurface sediments and the fact that microbial processes in laboratory incubations of deep subsurface material often do not mimic in situ processes frequently necessitate that microbial activity in the deep subsurface be inferred through nonmicrobiological analyses of ground water. These approaches include measurements of dissolved H2, which can predict the predominant microbially catalyzed redox reactions in aquifers, as well as geochemical and groundwater flow modeling, which can be used to estimate the rates of microbial processes. Microorganisms recovered from the deep subsurface have the potential to affect the fate of toxic organics and inorganic contaminants in groundwater. Microbial activity also greatly influences the chemistry of many pristine groundwaters and contributes to such phenomena as porosity development in carbonate aquifers, accumulation of undesirably high concentrations of dissolved iron, and production of methane and hydrogen sulfide. Although the last decade has seen a dramatic increase in interest in deep subsurface microbiology, in comparison with the study of

  13. Deep Water Survey Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The deep water biodiversity surveys explore and describe the biodiversity of the bathy- and bentho-pelagic nekton using Midwater and bottom trawls centered in the...

  14. Deep Space Habitat Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Deep Space Habitat was closed out at the end of Fiscal Year 2013 (September 30, 2013). Results and select content have been incorporated into the new Exploration...

  15. Deep Learning in Neuroradiology.

    Science.gov (United States)

    Zaharchuk, G; Gong, E; Wintermark, M; Rubin, D; Langlotz, C P

    2018-02-01

    Deep learning is a form of machine learning using a convolutional neural network architecture that shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstration in computer vision applications to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method. © 2018 by American Journal of Neuroradiology.

  16. Deep inelastic lepton scattering

    International Nuclear Information System (INIS)

    Nachtmann, O.

    1977-01-01

    Deep inelastic electron (muon) nucleon and neutrino nucleon scattering as well as electron positron annihilation into hadrons are reviewed from a theoretical point of view. The emphasis is placed on comparisons of quantum chromodynamics with the data. (orig.) [de

  17. Neuromorphic Deep Learning Machines

    OpenAIRE

    Neftci, E; Augustine, C; Paul, S; Detorakis, G

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide...

  18. Pathogenesis of deep endometriosis.

    Science.gov (United States)

    Gordts, Stephan; Koninckx, Philippe; Brosens, Ivo

    2017-12-01

    The pathophysiology of (deep) endometriosis is still unclear. As originally suggested by Cullen, change the definition "deeper than 5 mm" to "adenomyosis externa." With the discovery of the old European literature on uterine bleeding in 5%-10% of the neonates and histologic evidence that the bleeding represents decidual shedding, it is postulated/hypothesized that endometrial stem/progenitor cells, implanted in the pelvic cavity after birth, may be at the origin of adolescent and even the occasionally premenarcheal pelvic endometriosis. Endometriosis in the adolescent is characterized by angiogenic and hemorrhagic peritoneal and ovarian lesions. The development of deep endometriosis at a later age suggests that deep infiltrating endometriosis is a delayed stage of endometriosis. Another hypothesis is that the endometriotic cell has undergone genetic or epigenetic changes and those specific changes determine the development into deep endometriosis. This is compatible with the hereditary aspects, and with the clonality of deep and cystic ovarian endometriosis. It explains the predisposition and an eventual causal effect by dioxin or radiation. Specific genetic/epigenetic changes could explain the various expressions and thus typical, cystic, and deep endometriosis become three different diseases. Subtle lesions are not a disease until epi(genetic) changes occur. A classification should reflect that deep endometriosis is a specific disease. In conclusion the pathophysiology of deep endometriosis remains debated and the mechanisms of disease progression, as well as the role of genetics and epigenetics in the process, still needs to be unraveled. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  19. Why & When Deep Learning Works: Looking Inside Deep Learnings

    OpenAIRE

    Ronen, Ronny

    2017-01-01

    The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research from its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The outp...

  20. A novel genotoxin-specific qPCR array based on the metabolically competent human HepaRG™ cell line as a rapid and reliable tool for improved in vitro hazard assessment.

    Science.gov (United States)

    Ates, Gamze; Mertens, Birgit; Heymans, Anja; Verschaeve, Luc; Milushev, Dimiter; Vanparys, Philippe; Roosens, Nancy H C; De Keersmaecker, Sigrid C J; Rogiers, Vera; Doktorova, Tatyana Y

    2018-04-01

    Although the value of the regulatory accepted batteries for in vitro genotoxicity testing is recognized, they result in a high number of false positives. This has a major impact on society and industries developing novel compounds for pharmaceutical, chemical, and consumer products, as afflicted compounds have to be (prematurely) abandoned or further tested on animals. Using the metabolically competent human HepaRG ™ cell line and toxicogenomics approaches, we have developed an upgraded, innovative, and proprietary gene classifier. This gene classifier is based on transcriptomic changes induced by 12 genotoxic and 12 non-genotoxic reference compounds tested at sub-cytotoxic concentrations, i.e., IC10 concentrations as determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The resulting gene classifier was translated into an easy-to-handle qPCR array that, as shown by pathway analysis, covers several different cellular processes related to genotoxicity. To further assess the predictivity of the tool, a set of 5 known positive and 5 known negative test compounds for genotoxicity was evaluated. In addition, 2 compounds with debatable genotoxicity data were tested to explore how the qPCR array would classify these. With an accuracy of 100%, when equivocal results were considered positive, the results showed that combining HepaRG ™ cells with a genotoxin-specific qPCR array can improve (geno)toxicological hazard assessment. In addition, the developed qPCR array was able to provide additional information on compounds for which so far debatable genotoxicity data are available. The results indicate that the new in vitro tool can improve human safety assessment of chemicals in general by basing predictions on mechanistic toxicogenomics information.

  1. Auxiliary Deep Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    2016-01-01

    Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improve the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge faster with better results. We show state-of-the-art performance within semi-supervised learning on MNIST (0.96%), SVHN (16.61%) and NORB (9.40%) datasets.
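For readers wanting the auxiliary-variable construction spelled out, the bound can be written as follows; this is a common textbook-style presentation of the idea, not notation quoted from the paper.

```latex
% Inference model augmented with an auxiliary variable a:
%   q(a, z \mid x) = q(z \mid a, x)\, q(a \mid x)
% Generative model extended so that p(x, z) is unchanged after marginalizing a:
%   p(x, z, a) = p(a \mid x, z)\, p(x \mid z)\, p(z)
% The evidence lower bound then becomes
\log p(x) \;\ge\;
\mathbb{E}_{q(a, z \mid x)}\!\left[
  \log \frac{p(a \mid x, z)\, p(x \mid z)\, p(z)}
            {q(a \mid x)\, q(z \mid a, x)}
\right]
```

The point made in the abstract follows from the marginal: q(z | x) = ∫ q(z | a, x) q(a | x) da is a continuous mixture, hence more expressive than a single factorized posterior, while integrating out a leaves p(x, z) untouched.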

  2. Deep Learning from Crowds

    DEFF Research Database (Denmark)

    Rodrigues, Filipe; Pereira, Francisco Camara

    Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels...
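The crowd layer mentioned above can be pictured as a per-annotator transformation sitting on top of the shared network output; one common variant multiplies the shared class probabilities by an annotator-specific confusion matrix. The sketch below (the names and the 2-class setup are illustrative, not the paper's code) shows only this forward mapping; in end-to-end training, the cross-entropy against each annotator's noisy label is backpropagated through that annotator's matrix into the shared network.

```python
def crowd_forward(shared_probs, confusion):
    """Map the shared network's class probabilities to the distribution over
    one annotator's noisy labels: q[j] = sum_k confusion[j][k] * p[k],
    where confusion[j][k] ~ P(annotator says j | true class is k)."""
    K = len(shared_probs)
    return [sum(confusion[j][k] * shared_probs[k] for k in range(K))
            for j in range(K)]

# A reliable annotator (identity matrix) passes the prediction through;
# an adversarial annotator that always flips the two classes inverts it.
reliable = [[1.0, 0.0], [0.0, 1.0]]
flipper  = [[0.0, 1.0], [1.0, 0.0]]
```

Here `crowd_forward([0.8, 0.2], reliable)` returns `[0.8, 0.2]` while `flipper` returns `[0.2, 0.8]`; the EM view in the abstract alternates between estimating such per-annotator matrices and the shared network parameters.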

  3. Deep boreholes; Tiefe Bohrloecher

    Energy Technology Data Exchange (ETDEWEB)

    Bracke, Guido [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH Koeln (Germany); Charlier, Frank [NSE international nuclear safety engineering gmbh, Aachen (Germany); Geckeis, Horst [Karlsruher Institut fuer Technologie (Germany). Inst. fuer Nukleare Entsorgung; and others

    2016-02-15

    The report on deep boreholes covers the following subject areas: methods for safe enclosure of radioactive wastes, requirements concerning the geological conditions of possible boreholes, reversibility of decisions and retrievability, and the status of drilling technology. The introduction covers national and international activities. Further chapters deal with the following issues: basic concept of storage in deep boreholes, status of drilling technology, safe enclosure, geomechanics and stability, reversibility of decisions, risk scenarios, compliance with safety requirements and site selection criteria, and research and development needs.

  4. Deep Water Acoustics

    Science.gov (United States)

    2016-06-28

    Participants in the Deep Water project and the NPAL Workshops include Art Baggeroer (MIT), J. Beron-Vera (UMiami), M. Brown (UMiami), T..., and Kathleen E. Wage. See: The North Pacific Acoustic Laboratory deep-water acoustic propagation experiments in the Philippine Sea. J. Acoust. Soc. Am., 134(4). An estimate of the angle α during PhilSea09 was made from ADCP measurements at the site of the DVLA.

  5. Deep diode atomic battery

    International Nuclear Information System (INIS)

    Anthony, T.R.; Cline, H.E.

    1977-01-01

    A deep diode atomic battery is made from a bulk semiconductor crystal containing three-dimensional arrays of columnar and lamellar P-N junctions. The battery is powered by gamma rays and x-ray emission from a radioactive source embedded in the interior of the semiconductor crystal

  6. Deep Learning Policy Quantization

    NARCIS (Netherlands)

    van de Wolfshaar, Jos; Wiering, Marco; Schomaker, Lambertus

    2018-01-01

    We introduce a novel type of actor-critic approach for deep reinforcement learning which is based on learning vector quantization. We replace the softmax operator of the policy with a more general and more flexible operator that is similar to the robust soft learning vector quantization algorithm.
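One concrete way to read the "more general and more flexible operator" replacing the softmax: action preferences come from negative squared distances between the state representation and learned per-action prototype vectors, in the style of soft learning vector quantization. The sketch below is a hedged illustration of that idea under those assumptions; every name (`lvq_policy`, `prototypes`, `tau`) is invented here, not taken from the paper.

```python
import math

def lvq_policy(state, prototypes, tau=1.0):
    """Action probabilities from a soft-LVQ-style operator: a softmax over the
    negative squared distances between `state` and each action's prototype."""
    d2 = [sum((s - w) ** 2 for s, w in zip(state, proto))
          for proto in prototypes]
    logits = [-d / tau for d in d2]
    m = max(logits)                       # stabilise the softmax numerically
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

A state lying on action 0's prototype gets the highest probability for action 0, and shrinking `tau` sharpens the policy toward a hard nearest-prototype rule; in an actor-critic setup the prototypes would be trained by the policy gradient like any other actor parameters.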

  7. Deep-sea fungi

    Digital Repository Service at National Institute of Oceanography (India)

    Raghukumar, C; Damare, S.R.

    significant in terms of carbon sequestration (5, 8). In light of this, the diversity, abundance, and role of fungi in deep-sea sediments may form an important link in the global C biogeochemistry. This review focuses on issues related to collection...

  8. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    Deep inelastic lepton-nucleon interaction experiments are reviewed. Singlet and non-singlet structure functions are measured and the consistency of the different results is checked. A detailed analysis of the scaling violation is performed in terms of the quantum chromodynamics predictions [fr

  9. Deep Vein Thrombosis

    Centers for Disease Control (CDC) Podcasts

    2012-04-05

    This podcast discusses the risk for deep vein thrombosis in long-distance travelers and ways to minimize that risk.  Created: 4/5/2012 by National Center for Emerging and Zoonotic Infectious Diseases (NCEZID).   Date Released: 4/5/2012.

  10. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair; Gorocs, Zoltan; Gunaydin, Harun; Zhang, Yibo; Wang, Hongda; Ozcan, Aydogan

    2017-01-01

    regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably

  11. The deep universe

    CERN Document Server

    Sandage, AR; Longair, MS

    1995-01-01

    Discusses the concept of the deep universe from two conflicting theoretical viewpoints: firstly as a theory embracing the evolution of the universe from the Big Bang to the present; and secondly through observations gleaned over the years on stars, galaxies and clusters.

  12. Teaching for Deep Learning

    Science.gov (United States)

    Smith, Tracy Wilson; Colby, Susan A.

    2007-01-01

    The authors have been engaged in research focused on students' depth of learning as well as teachers' efforts to foster deep learning. Findings from a study examining the teaching practices and student learning outcomes of sixty-four teachers in seventeen different states (Smith et al. 2005) indicated that most of the learning in these classrooms…

  13. Deep Trawl Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Otter trawl (36' Yankee and 4-seam net deepwater gear) catches from mid-Atlantic slope and canyons at 200 - 800 m depth. Deep-sea (200-800 m depth) flat otter trawls...

  14. [Deep vein thrombosis prophylaxis].

    Science.gov (United States)

    Sandoval-Chagoya, Gloria Alejandra; Laniado-Laborín, Rafael

    2013-01-01

    Background: despite the proven effectiveness of preventive therapy for deep vein thrombosis, a significant proportion of patients at risk for thromboembolism do not receive prophylaxis during hospitalization. Our objective was to determine the adherence to thrombosis prophylaxis guidelines in a general hospital as a quality control strategy. Methods: a random audit of clinical charts was conducted at the Tijuana General Hospital, Baja California, Mexico, to determine the degree of adherence to deep vein thrombosis prophylaxis guidelines. The instrument used was the Caprini's checklist for thrombosis risk assessment in adult patients. Results: the sample included 300 patient charts; 182 (60.7 %) were surgical patients and 118 were medical patients. Forty six patients (15.3 %) received deep vein thrombosis pharmacologic prophylaxis; 27.1 % of medical patients received deep vein thrombosis prophylaxis versus 8.3 % of surgical patients (p < 0.0001). Conclusions: our results show that adherence to DVT prophylaxis at our hospital is extremely low. Only 15.3 % of our patients at risk received treatment, and even patients with very high risk received treatment in less than 25 % of the cases. We have implemented strategies to increase compliance with clinical guidelines.

  15. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Zakharov, V.I.

    1977-01-01

    The present status of the quark-parton-gluon picture of deep inelastic scattering is reviewed. The general framework is mostly theoretical and covers investigations since 1970. Predictions of the parton model and of the asymptotically free field theories are compared with experimental data available. The valence quark approximation is concluded to be valid in most cases, but fails to account for the data on the total momentum transfer. On the basis of gluon corrections introduced to the parton model certain predictions concerning both the deep inelastic structure functions and form factors are made. The contributions of gluon exchanges and gluon bremsstrahlung are highlighted. Asymptotic freedom is concluded to be very attractive and provide qualitative explanation to some experimental observations (scaling violations, breaking of the Drell-Yan-West type relations). Lepton-nuclear scattering is pointed out to be helpful in probing the nature of nuclear forces and studying the space-time picture of the parton model

  16. Deep Energy Retrofit

    DEFF Research Database (Denmark)

    Zhivov, Alexander; Lohse, Rüdiger; Rose, Jørgen

    Deep Energy Retrofit – A Guide to Achieving Significant Energy Use Reduction with Major Renovation Projects contains recommendations for characteristics of some of the core technologies and measures that are based on studies conducted by national teams associated with the International Energy Agency Energy Conservation in Buildings and Communities Program (IEA-EBC) Annex 61 (Lohse et al. 2016, Case et al. 2016, Rose et al. 2016, Yao et al. 2016, Dake 2014, Stankevica et al. 2016, Kiatreungwattana 2014). Results of these studies provided a base for setting minimum requirements for the building envelope-related technologies to make Deep Energy Retrofit feasible and, in many situations, cost effective. Use of energy efficiency measures (EEMs) in addition to the core technologies bundle and high-efficiency appliances will foster further energy use reduction. This Guide also provides best practice

  17. Deep groundwater chemistry

    International Nuclear Information System (INIS)

    Wikberg, P.; Axelsen, K.; Fredlund, F.

    1987-06-01

    Starting in 1977 and up to now, a number of places in Sweden have been investigated in order to collect the geological, hydrogeological and chemical data needed for safety analyses of repositories in deep bedrock systems. Only crystalline rock is considered; in many cases this has been gneisses of sedimentary origin, but granites and gabbros are also represented. Core-drilled holes have been made at nine sites. Up to 15 holes may be core drilled at one site, the deepest down to 1000 m. In addition, a number of boreholes are percussion drilled at each site to depths of about 100 m. When possible, drilling water is taken from percussion-drilled holes. The first objective is to survey the hydraulic conditions. Core-drilled boreholes and sections selected for sampling of deep groundwater are summarized. (orig./HP)

  18. Deep Reinforcement Fuzzing

    OpenAIRE

    Böttinger, Konstantin; Godefroid, Patrice; Singh, Rishabh

    2018-01-01

    Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions...
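
    The formalization above can be sketched with a toy tabular Q-learner (an illustrative assumption standing in for the paper's deep Q-network; the state, action, and reward definitions here are invented for the example): the state is the byte at the mutation site, actions are single-bit flips, and the reward is the amount of new "coverage" a mutated input triggers in the program under test.

```python
import random

# Hypothetical sketch: tabular Q-learning over byte mutations, with reward
# defined as newly observed coverage (here, new byte values seen by a
# stand-in target program).

random.seed(0)
ACTIONS = range(4)  # four mutation operators: flip bit 0, 1, 2, or 3

def program_under_test(data):
    """Stand-in target: 'coverage' is simply the set of byte values seen."""
    return set(data)

def mutate(data, action, pos):
    out = bytearray(data)
    out[pos] ^= 1 << action
    return bytes(out)

def q_learning_fuzz(seed_input, episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    q = {}                                   # Q-table: (state, action) -> value
    coverage = set()
    data = seed_input
    for _ in range(episodes):
        pos = random.randrange(len(data))
        state = data[pos]
        if random.random() < eps:            # epsilon-greedy action selection
            action = random.choice(list(ACTIONS))
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        data = mutate(data, action, pos)
        new_cov = program_under_test(data) - coverage
        reward = float(len(new_cov))         # reward = newly observed coverage
        coverage |= new_cov
        next_state = data[pos]
        best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return coverage, q

coverage, q_table = q_learning_fuzz(b"fuzz")
```

    In the paper the Q-function is a deep network over input prefixes rather than a lookup table, and the reward comes from real runtime properties; the update rule, however, has exactly this shape.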

  19. Deep Visual Attention Prediction

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixation in free-viewing scenes with an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvements to human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is designed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. The model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. The model is learned in a deep supervision manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. The model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of earlier approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
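
    The fusion of global and local predictions can be illustrated with a toy numeric sketch (the shapes, noise levels, and the simple learned weighted average are assumptions for illustration; the paper fuses CNN side outputs, not fixed maps): coarse levels give a noisy global saliency map, fine levels a cleaner local one, and the fusion weights are fit by gradient descent against the ground-truth fixation map.

```python
import numpy as np

# Hypothetical sketch: fuse three multi-level saliency "side outputs" with
# learned scalar weights, minimizing MSE to the ground-truth fixation map.

rng = np.random.default_rng(0)
H = W = 8
truth = np.zeros((H, W))
truth[2:6, 2:6] = 1.0                      # ground-truth fixation map

# fake side outputs from three levels: coarse (noisiest) to fine (cleanest)
side = [truth + rng.normal(0.0, s, (H, W)) for s in (0.4, 0.2, 0.1)]

def mse(a, b):
    return float(((a - b) ** 2).mean())

w = np.ones(3) / 3                         # start from a uniform average
loss_before = mse(sum(wi * s for wi, s in zip(w, side)), truth)
for _ in range(100):
    fused = sum(wi * s for wi, s in zip(w, side))
    # gradient of the fused MSE with respect to each fusion weight
    grad = np.array([2.0 * ((fused - truth) * s).mean() for s in side])
    w -= 0.1 * grad
loss_after = mse(sum(wi * s for wi, s in zip(w, side)), truth)
```

    Deep supervision in the paper additionally applies the loss to every side output, not only to the fused map; this sketch only shows the fusion step.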

  20. Deep Red (Profondo Rosso)

    CERN Multimedia

    Cine Club

    2015-01-01

    Wednesday 29 April 2015 at 20:00 CERN Council Chamber    Deep Red (Profondo Rosso) Directed by Dario Argento (Italy, 1975) 126 minutes A psychic who can read minds picks up the thoughts of a murderer in the audience and soon becomes a victim. An English pianist gets involved in solving the murders, but finds many of his avenues of inquiry cut off by new murders, and he begins to wonder how the murderer can track his movements so closely. Original version Italian; English subtitles

  1. Reversible deep disposal

    International Nuclear Information System (INIS)

    2009-10-01

    This presentation, given by the French national agency for radioactive waste management (ANDRA) at the October 8, 2009 meeting of the high committee for nuclear safety transparency and information (HCTISN), describes the concept of reversible deep disposal for high-level/long-lived radioactive wastes, as considered by ANDRA in the framework of the program law of June 28, 2006 on the sustainable management of radioactive materials and wastes. The document presents the social and political reasons for reversibility, the technical means considered (containers, disposal cavities, monitoring system, test facilities and industrial prototypes), and the decisional process (progressive development and closure of the facility, public information and debate). (J.S.)

  2. Deep inelastic neutron scattering

    International Nuclear Information System (INIS)

    Mayers, J.

    1989-03-01

    The report is based on an invited talk given at a conference on ''Neutron Scattering at ISIS: Recent Highlights in Condensed Matter Research'', held in Rome in 1988, and is intended as an introduction to the techniques of Deep Inelastic Neutron Scattering. The subject is discussed under the following topic headings: the impulse approximation (I.A.), scaling behaviour, kinematical consequences of energy and momentum conservation, examples of measurements, derivation of the I.A., the I.A. in a harmonic system, and the validity of the I.A. in neutron scattering. (U.K.)

  3. [Deep mycoses rarely described].

    Science.gov (United States)

    Charles, D

    1986-01-01

    Besides the well-known deep mycoses (histoplasmosis, candidosis, cryptococcosis), there are other mycoses that are less frequently described. Some of them are endemic in certain countries: South American blastomycosis in Brazil, coccidioidomycosis in California; others are cosmopolitan and may affect everyone (sporotrichosis) or only immunodeficient persons (mucormycosis). They do not spare Africa, where one may encounter basidiobolomycosis, rhinophycomycosis, dermatophytosis, sporotrichosis and, more recently reported, rhinosporidiosis. Important therapeutic progress has been made with amphotericin B and with antifungal imidazole compounds (miconazole and ketoconazole). Surgical intervention is sometimes recommended in chromomycosis and rhinosporidiosis.

  4. Deep penetration calculations

    International Nuclear Information System (INIS)

    Thompson, W.L.; Deutsch, O.L.; Booth, T.E.

    1980-04-01

    Several Monte Carlo techniques are compared for the transport of neutrons of different source energies through two different deep-penetration problems, each with two parts. The first problem involves transmission through a 200-cm concrete slab. The second problem is a 90° bent pipe jacketed by concrete. In one case the pipe is void, and in the other it is filled with liquid sodium. Calculations are made with two different Los Alamos Monte Carlo codes: the continuous-energy code MCNP and the multigroup code MCMG.
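
    An analog Monte Carlo sketch of the first problem, transmission through a 200-cm slab, shows why deep-penetration problems motivate the variance-reduction techniques such comparisons study: almost no analog histories reach the back face. The cross sections and the simple 1-D isotropic-scatter model below are invented for illustration; MCNP and MCMG are far more sophisticated.

```python
import math
import random

# Hypothetical analog Monte Carlo: particles stream through a slab with an
# exponentially distributed free path, and are absorbed, scattered, or leak.

random.seed(1)

SIGMA_T = 0.05     # total macroscopic cross section, 1/cm (assumed)
P_ABSORB = 0.3     # absorption probability per collision (assumed)
THICKNESS = 200.0  # slab thickness, cm (as in the problem above)

def transmitted_fraction(n_particles=20000):
    count = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                             # born on the front face
        while True:
            # sample an exponential free flight along the current direction
            x += mu * (-math.log(random.random()) / SIGMA_T)
            if x >= THICKNESS:                       # escaped out the back
                count += 1
                break
            if x < 0.0:                              # leaked out the front
                break
            if random.random() < P_ABSORB:           # absorbed
                break
            mu = random.uniform(-1.0, 1.0)           # isotropic scatter (1-D)
    return count / n_particles

print(transmitted_fraction())
```

    With a mean free path of 20 cm, the slab is 10 mean free paths thick, so the transmitted fraction is tiny; production codes use splitting, importance sampling, or weight windows to get usable statistics here.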

  5. Deep Super Learner: A Deep Ensemble for Classification Problems

    OpenAIRE

    Young, Steven; Abdou, Tamer; Bener, Ayse

    2018-01-01

    Deep learning has become very popular for tasks such as predictive modeling and pattern recognition in handling big data. Deep learning is a powerful machine learning method that extracts lower level features and feeds them forward for the next layer to identify higher level features that improve performance. However, deep neural networks have drawbacks, which include many hyper-parameters and infinite architectures, opaqueness into results, and relatively slower convergence on smaller datase...

  6. Deep sea biophysics

    International Nuclear Information System (INIS)

    Yayanos, A.A.

    1982-01-01

    A collection of deep-sea bacterial cultures was completed. Procedures were instituted to shelter the culture collection from accidental warming. A substantial database on the rates of reproduction of more than 100 strains of bacteria from that collection was obtained from experiments, and analysis of those data was begun. The data on the rates of reproduction were obtained under conditions of temperature and pressure found in the deep sea. The experiments were facilitated by inexpensively fabricated pressure vessels, by streamlining of the methods for the study of kinetics at high pressures, and by computer-assisted methods. A polybarothermostat was used to study the growth of bacteria along temperature gradients at eight distinct pressures. This device should allow for the study of microbial processes in a temperature field simulating the environment around buried HLW. It is small enough to allow placement in a radiation field in future studies. A flow fluorocytometer was fabricated. This device will be used to determine the DNA content per cell in bacteria grown in laboratory culture and in microorganisms in samples from the ocean. The technique will be tested for its rapidity in determining the concentration of cells (standing stock of microorganisms) in samples from the ocean.

  7. Deep Learning in Radiology.

    Science.gov (United States)

    McBee, Morgan P; Awan, Omer A; Colucci, Andrew T; Ghobadi, Comeron W; Kadom, Nadja; Kansagra, Akash P; Tridandapani, Srini; Auffermann, William F

    2018-03-29

    As radiology is inherently a data-driven specialty, it is especially conducive to utilizing data processing techniques. One such technique, deep learning (DL), has become a remarkably powerful tool for image processing in recent years. In this work, the Association of University Radiologists Radiology Research Alliance Task Force on Deep Learning provides an overview of DL for the radiologist. This article aims to present an overview of DL in a manner that is understandable to radiologists; to examine past, present, and future applications; as well as to evaluate how radiologists may benefit from this remarkable new tool. We describe several areas within radiology in which DL techniques are having the most significant impact: lesion or disease detection, classification, quantification, and segmentation. The legal and ethical hurdles to implementation are also discussed. By taking advantage of this powerful tool, radiologists can become increasingly more accurate in their interpretations with fewer errors and spend more time to focus on patient care. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  8. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair

    2017-05-12

    We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field-of-view and depth-of-field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably better resolution, matching the performance of higher numerical aperture lenses while also significantly surpassing their limited field-of-view and depth-of-field. These results are transformative for various fields that use microscopy tools, including, e.g., the life sciences, where optical microscopy is considered one of the most widely used and deployed techniques. Beyond such applications, our presented approach is broadly applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better and better as they continue to image specimens and establish new transformations among different modes of imaging.

  9. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
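
    The DTML objective described above can be written out numerically on fixed toy embeddings (a sketch; the paper optimizes the network that produces the embeddings, and the first-order mean-distance stand-in for the distribution divergence, the weights, and the data below are assumptions): minimize intra-class scatter, maximize inter-class scatter, and pull the source and target domain means together.

```python
import numpy as np

# Hypothetical sketch of a DTML-style loss: intra-class scatter minus
# weighted inter-class scatter, plus a mean-embedding domain-divergence term.

rng = np.random.default_rng(0)

def scatter(x, labels):
    """Return (intra-class, inter-class) scatter of embeddings x."""
    grand = x.mean(axis=0)
    intra = inter = 0.0
    for c in np.unique(labels):
        xc = x[labels == c]
        intra += ((xc - xc.mean(axis=0)) ** 2).sum()
        inter += len(xc) * ((xc.mean(axis=0) - grand) ** 2).sum()
    return intra, inter

def dtml_loss(src, src_labels, tgt, alpha=1.0, beta=1.0):
    intra, inter = scatter(src, src_labels)
    # first-order (mean) divergence between source and target domains
    mmd = ((src.mean(axis=0) - tgt.mean(axis=0)) ** 2).sum()
    return intra - alpha * inter + beta * mmd

# two well-separated labeled source classes, plus an unlabeled target domain
src = rng.normal(size=(20, 4)) + np.repeat([[0.0] * 4, [3.0] * 4], 10, axis=0)
labels = np.repeat([0, 1], 10)
tgt = rng.normal(size=(15, 4)) + 1.5
loss = dtml_loss(src, labels, tgt)
```

    A network trained to minimize this quantity is pushed toward embeddings with tight, well-separated classes whose overall distribution matches the target domain.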

  10. Equivalent molecular mass of cytosolic and nuclear forms of Ah receptor from Hepa-1 cells determined by photoaffinity labeling with 2,3,7,8-[3H]tetrachlorodibenzo-p-dioxin

    International Nuclear Information System (INIS)

    Prokipcak, R.D.; Okey, A.B.

    1990-01-01

    The structure of the Ah receptor has previously been extensively characterized by reversible binding of the high-affinity ligand 2,3,7,8-tetrachlorodibenzo-p-dioxin. We report the use of [³H]2,3,7,8-tetrachlorodibenzo-p-dioxin as a photoaffinity ligand for the Ah receptor from the mouse hepatoma cell line Hepa-1c1c9. Both cytosolic and nuclear forms of the Ah receptor could be specifically photoaffinity-labeled, which allowed determination of the molecular mass of the two forms under denaturing conditions. After analysis by fluorography of polyacrylamide gels run in the presence of sodium dodecyl sulfate, the molecular mass of the cytosolic form of the Ah receptor was estimated at 92,000 ± 4,300 and that of the nuclear form at 93,500 ± 3,400. Receptor in a mixture of cytosol and nuclear extract (each labeled separately with [³H]2,3,7,8-tetrachlorodibenzo-p-dioxin) migrated as a single band. These results are consistent with the presence of a common ligand-binding subunit of identical molecular mass in both cytosolic and nuclear complexes.

  11. Deep Reinforcement Learning: An Overview

    OpenAIRE

    Li, Yuxi

    2017-01-01

    We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsuperv...

  12. Deep Feature Consistent Variational Autoencoder

    OpenAIRE

    Hou, Xianxu; Shen, Linlin; Sun, Ke; Qiu, Guoping

    2016-01-01

    We present a novel method for constructing a Variational Autoencoder (VAE). Instead of using a pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of the VAE, which ensures that the VAE's output preserves the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning work such as style transfer, we employ a pre-trained deep convolutional neural net...
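
    The feature-consistency idea can be sketched with a stand-in frozen filter bank (a random 3x3 bank is an assumption replacing the pretrained deep CNN the paper uses): the input and the reconstruction are compared in feature space rather than pixel by pixel, so a washed-out reconstruction is penalized even where its pixel error is modest.

```python
import numpy as np

# Hypothetical sketch of a feature-consistency loss with frozen random
# "pretrained" filters instead of a real pretrained CNN.

rng = np.random.default_rng(0)
filters = rng.normal(size=(8, 3, 3))       # frozen stand-in filter bank

def features(img):
    """Valid-mode 3x3 convolutions of img with the frozen filter bank."""
    h, w = img.shape
    out = np.empty((len(filters), h - 2, w - 2))
    for k, f in enumerate(filters):
        for i in range(h - 2):
            for j in range(w - 2):
                out[k, i, j] = (img[i:i + 3, j:j + 3] * f).sum()
    return out

def feature_loss(x, x_rec):
    # compare the two images in feature space, not pixel space
    return float(((features(x) - features(x_rec)) ** 2).mean())

x = rng.normal(size=(16, 16))              # "input" image
good = x + rng.normal(0.0, 0.01, x.shape)  # faithful reconstruction
poor = 0.5 * x                             # washed-out reconstruction
```

    In the VAE itself this loss replaces (or augments) the pixel reconstruction term while the usual KL term on the latent code is kept unchanged.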

  13. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several of its subfields, including a specific tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results and our basic implementation of a convolutional restricted Boltzmann machine on the Mixed National Institute of Standards and Technology database (MNIST), and we explain how deep learning networks can assist in our development of a robust gender classification system.

  14. Deep learning? What deep learning? | Fourie | South African ...

    African Journals Online (AJOL)

    In teaching generally over the past twenty years, there has been a move towards teaching methods that encourage deep, rather than surface, approaches to learning. The reason for this is that students who adopt a deep approach to learning are considered to have learning outcomes of a better quality and desirability ...

  15. Deep sea radionuclides

    International Nuclear Information System (INIS)

    Kanisch, G.; Vobach, M.

    1993-01-01

    Every year since 1979, either in spring or in summer, the fishing research vessel 'Walther Herwig' goes to the North Atlantic disposal areas for solid radioactive wastes and, for comparative purposes, to other areas, in order to collect water samples, plankton and nekton, and, from the deep-sea bed, sediment samples and benthos organisms. In addition to data on the radionuclide contents of various media, information about the plankton, nekton and benthos organisms living in those areas and about their biomasses could be gathered. The investigations are aimed at acquiring scientifically founded knowledge of the uptake of radioactive substances by microorganisms, and of their migration from the sea bottom to the areas used by man. (orig.)

  16. Deep inelastic phenomena

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    The experimental situation of deep inelastic scattering of electrons (muons) is reviewed. A brief history of experimentation highlights Mohr and Nicoll's 1932 experiment on electron-atom scattering and Hofstadter's 1950 experiment on electron-nucleus scattering. The phenomenology of electron-nucleon scattering carried out between 1960 and 1970 is described, with emphasis on the parton model and scaling. Experiments at SLAC and FNAL since 1974 exhibit scaling violations. Three muon-nucleon scattering experiments at BFP, BCDMA, and EMA, currently producing new results in the high Q² domain, suggest a rather flat behaviour of the structure function at fixed x as a function of Q². It is seen that the structure measured in DIS can then be projected into a pure hadronic process to predict a cross section. Proton-neutron differences, moment analysis, and Drell-Yan pairs are also considered.

  17. Context and Deep Learning Design

    Science.gov (United States)

    Boyle, Tom; Ravenscroft, Andrew

    2012-01-01

    Conceptual clarification is essential if we are to establish a stable and deep discipline of technology enhanced learning. The technology is alluring; this can distract from deep design in a surface rush to exploit the affordances of the new technology. We need a basis for design, and a conceptual unit of organization, that are applicable across…

  18. Deep Learning Fluid Mechanics

    Science.gov (United States)

    Barati Farimani, Amir; Gomes, Joseph; Pande, Vijay

    2017-11-01

    We have developed a new data-driven modeling paradigm for the rapid inference and solution of the constitutive equations of fluid mechanics by deep learning models. Using generative adversarial networks (GANs), we train models for the direct generation of solutions to steady-state heat conduction and incompressible fluid flow without knowledge of the underlying governing equations. Rather than using artificial neural networks to approximate the solution of the constitutive equations, GANs can directly generate the solutions to these equations conditional upon an arbitrary set of boundary conditions. Both models predict temperature, velocity and pressure fields with high test accuracy (>99.5%). Our framework for inferring and generating the solutions of partial differential equations can be applied to any physical phenomenon and can be used to learn directly from experiments where the underlying physical model is complex or unknown. We have also shown that our framework can couple multiple physics simultaneously, making it amenable to multi-physics problems.
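
    For context, the steady-state heat-conduction targets such a generative model is trained against come from a conventional solver like the Jacobi iteration below (the grid size, iteration count, and boundary temperatures are arbitrary assumptions for illustration): the generator learns to map boundary conditions directly to fields like this one.

```python
import numpy as np

# Hypothetical sketch: solve the Laplace equation (steady-state heat
# conduction) on a square grid with fixed edge temperatures by Jacobi
# iteration, producing the kind of field a conditional GAN would emit.

def steady_heat(top, bottom, left, right, n=32, iters=5000):
    """Solve the Laplace equation on an n x n grid with fixed edges."""
    t = np.zeros((n, n))
    t[0, :], t[-1, :] = top, bottom
    t[:, 0], t[:, -1] = left, right
    for _ in range(iters):
        # Jacobi update: each interior cell becomes the mean of its neighbors
        t[1:-1, 1:-1] = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1] +
                                t[1:-1, :-2] + t[1:-1, 2:])
    return t

field = steady_heat(top=100.0, bottom=0.0, left=0.0, right=0.0)
```

    The appeal of the GAN approach in the abstract is that, once trained on many such (boundary condition, field) pairs, it emits a solution in a single forward pass instead of thousands of relaxation sweeps.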

  19. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  20. Deep space telescopes

    CERN Multimedia

    CERN. Geneva

    2006-01-01

    The short series of seminars will address results and aims of current and future space astrophysics as the cultural framework for the development of deep space telescopes. It will then present such new tools, as they are currently available to, or imagined by, the scientific community, in the context of the science plans of ESA and of all major world space agencies. Ground-based astronomy, in the 400 years since Galileo’s telescope, has given us a profound phenomenological comprehension of our Universe, but has traditionally been limited to the narrow band(s) to which our terrestrial atmosphere is transparent. Celestial objects, however, do not care about our limitations, and distribute most of the information about their physics throughout the complete electromagnetic spectrum. Such information is there for the taking, from millimeter wavelengths to gamma rays. Forty years of astronomy from space, covering now most of the e.m. spectrum, have thus given us a better understanding of our physical Universe than t...

  1. Deep inelastic final states

    International Nuclear Information System (INIS)

    Girardi, G.

    1980-11-01

    In these lectures we attempt to describe the final states of deep inelastic scattering as given by QCD. In the first section we briefly comment on the parton model and give the main properties of decay functions which are of interest for the study of semi-inclusive leptoproduction. The second section is devoted to the QCD approach to single-hadron leptoproduction. First we recall basic facts on QCD logs and then derive the evolution equations for the fragmentation functions. For this purpose we make a short detour into e⁺e⁻ annihilation. The rest of the section is a study of the factorization of long-distance effects associated with the initial and final states. We then show how, when one includes next-to-leading QCD corrections, one induces factorization breaking, and describe the double moments useful for testing such effects. The next section contains a review of QCD jets in the hadronic final state. We begin by introducing the notion of an infrared-safe variable and defining a few useful examples. Distributions in these variables are studied to first order in QCD, with some comments on the resummation of logs encountered in higher orders. Finally, the last section is a 'gallimaufry' of jet studies.

  2. Deep Mapping and Spatial Anthropology

    Directory of Open Access Journals (Sweden)

    Les Roberts

    2016-01-01

    This paper provides an introduction to the Humanities Special Issue on “Deep Mapping”. It sets out the rationale for the collection and explores the broad-ranging nature of perspectives and practices that fall within the “undisciplined” interdisciplinary domain of spatial humanities. Sketching a cross-current of ideas that have begun to coalesce around the concept of “deep mapping”, the paper argues that rather than attempting to outline a set of defining characteristics and “deep” cartographic features, a more instructive approach is to pay closer attention to the multivalent ways deep mapping is performatively put to work. Casting a critical and reflexive gaze over the developing discourse of deep mapping, it is argued that what deep mapping “is” cannot be reduced to the otherwise a-spatial and a-temporal fixity of the “deep map”. In this respect, as an undisciplined survey of this increasingly expansive field of study and practice, the paper explores the ways in which deep mapping can engage broader discussion around questions of spatial anthropology.

  3. Deep learning for computational chemistry.

    Science.gov (United States)

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationships, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance over non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  4. Deep learning for computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Garrett B. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Hodas, Nathan O. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Vishnu, Abhinav [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354

    2017-03-08

    The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance over non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  5. Camel Milk Modulates the Expression of Aryl Hydrocarbon Receptor-Regulated Genes, Cyp1a1, Nqo1, and Gsta1, in Murine hepatoma Hepa 1c1c7 Cells

    Directory of Open Access Journals (Sweden)

    Hesham M. Korashy

    2012-01-01

    There is a traditional belief in the Middle East that camel milk may aid in the prevention and treatment of numerous cases of cancer, yet the exact mechanism has not been investigated. Therefore, we examined the ability of camel milk to modulate the expression of a well-known cancer-activating gene, cytochrome P450 1a1 (Cyp1a1), and of the cancer-protective genes NAD(P)H:quinone oxidoreductase 1 (Nqo1) and glutathione S-transferase a1 (Gsta1), in the murine hepatoma Hepa 1c1c7 cell line. Our results showed that camel milk significantly inhibited the induction of Cyp1a1 gene expression by 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), the most potent Cyp1a1 inducer and a known carcinogenic chemical, at the mRNA, protein, and activity levels in a concentration-dependent manner. In addition, camel milk significantly decreased the xenobiotic responsive element (XRE)-dependent luciferase activity, suggesting that a transcriptional mechanism is involved. Furthermore, this inhibitory effect of camel milk was associated with a proportional increase in heme oxygenase 1. On the other hand, camel milk significantly induced Nqo1 and Gsta1 mRNA expression levels in a concentration-dependent fashion. The RNA synthesis inhibitor actinomycin D completely blocked the induction of Nqo1 mRNA by camel milk, suggesting a requirement for de novo RNA synthesis through a transcriptional mechanism. In conclusion, camel milk modulates the expression of Cyp1a1, Nqo1, and Gsta1 at the transcriptional and posttranscriptional levels.

  6. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu; Han, Renmin; Bi, Chongwei; Li, Mo; Wang, Sheng; Gao, Xin

    2017-01-01

    or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments

  7. Deep UV LEDs

    Science.gov (United States)

    Han, Jung; Amano, Hiroshi; Schowalter, Leo

    2014-06-01

    Deep ultraviolet (DUV) photons interact strongly with a broad range of chemical and biological molecules; compact DUV light sources could enable a wide range of applications in chemi/bio-sensing, sterilization, agriculture, and industrial curing. The much shorter wavelength also results in useful characteristics related to optical diffraction (for lithography) and scattering (non-line-of-sight communication). The family of III-N (AlGaInN) compound semiconductors offers a tunable energy gap from infrared to DUV. While InGaN-based blue light emitters have been the primary focus for the obvious application of solid-state lighting, there is a growing interest in the development of efficient UV and DUV light-emitting devices. In the past few years we have witnessed increasing investment from both government and industry sectors to further the state of DUV light-emitting devices. The contributions in Semiconductor Science and Technology's special issue on DUV devices provide an up-to-date snapshot covering many relevant topics in this field. Given the expected importance of bulk AlN substrates in DUV technology, we are pleased to include a review article by Hartmann et al on the growth of AlN bulk crystal by physical vapour transport. The issue of polarization fields within deep ultraviolet LEDs is examined in the article by Braut et al. Several commercial companies provide useful updates on their development of DUV emitters, including Nichia (Fujioka et al ), Nitride Semiconductors (Muramoto et al ) and Sensor Electronic Technology (Shatalov et al ). We believe these articles will provide an excellent overview of the state of the technology. The growth of AlGaN heterostructures by molecular beam epitaxy, in contrast to the common organo-metallic vapour phase epitaxy, is discussed by Ivanov et al. Since hexagonal boron nitride (BN) has received much attention as both a UV and a two-dimensional electronic material, we believe it serves readers well to include the

  8. DEEP INFILTRATING ENDOMETRIOSIS

    Directory of Open Access Journals (Sweden)

    Martina Ribič-Pucelj

    2018-02-01

    Full Text Available Background: Endometriosis is not considered a unified disease, but a disease encompassing three different forms differentiated by aetiology and pathogenesis: peritoneal endometriosis, ovarian endometriosis and deep infiltrating endometriosis (DIE). The disease is classified as DIE when the lesions penetrate 5 mm or more into the retroperitoneal space. The estimated incidence of endometriosis in women of reproductive age ranges from 10–15 % and that of DIE from 3–10 %, the highest being in infertile women and in those with chronic pelvic pain. The leading symptoms of DIE are chronic pelvic pain, which increases with age and correlates with the depth of infiltration, and infertility. The most important diagnostic procedures are the patient's history and a proper gynaecological examination. The diagnosis is confirmed with laparoscopy. DIE can affect, besides the reproductive organs, also the bowel, bladder and ureters; therefore additional diagnostic procedures must be performed preoperatively to confirm or exclude the involvement of these organs. Endometriosis is a hormone-dependent disease, therefore several hormonal treatment regimens are used to suppress estrogen production, but the symptoms recur soon after cessation of the treatment. At the moment, surgical treatment with excision of all lesions, including those of the bowel, bladder and ureters, is the method of choice, but it frequently requires an interdisciplinary approach. Surgical treatment significantly reduces pain and improves fertility in infertile patients. Conclusions: DIE is not a rare form of endometriosis; it is characterized by chronic pelvic pain and infertility. Medical treatment is not effective. The method of choice is surgical treatment with excision of all lesions. It significantly reduces pelvic pain and enables high spontaneous and IVF pregnancy rates. Therefore such patients should be treated at centres with experience in the treatment of DIE and with the possibility of an interdisciplinary approach.

  9. Telepresence for Deep Space Missions

    Data.gov (United States)

    National Aeronautics and Space Administration — Incorporating telepresence technologies into deep space mission operations can give the crew and ground personnel the impression that they are in a location at time...

  10. Hybrid mask for deep etching

    KAUST Repository

    Ghoneim, Mohamed T.

    2017-01-01

    Deep reactive ion etching is essential for creating high aspect ratio micro-structures for microelectromechanical systems, sensors and actuators, and emerging flexible electronics. A novel hybrid dual soft/hard mask bilayer may be deposited during

  11. Deep Learning and Bayesian Methods

    OpenAIRE

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such meth...

  12. Density functionals from deep learning

    OpenAIRE

    McMahon, Jeffrey M.

    2016-01-01

    Density-functional theory is a formally exact description of a many-body quantum system in terms of its density; in practice, however, approximations to the universal density functional are required. In this work, a model based on deep learning is developed to approximate this functional. Deep learning allows computational models that are capable of naturally discovering intricate structure in large and/or high-dimensional data sets, with multiple levels of abstraction. As no assumptions are ...

  13. Deep Unfolding for Topic Models.

    Science.gov (United States)

    Chien, Jen-Tzung; Lee, Chao-Hsi

    2018-02-01

    Deep unfolding provides an approach to integrate the probabilistic generative models and the deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops the unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, the unsupervised and supervised topic models are inferred via the variational inference algorithm where the model parameters are estimated by maximizing the lower bound of the logarithm of the marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and the tied model parameters across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to the exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.
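    The core idea, treating each step of an iterative inference procedure as one layer of a network and applying exponentiated multiplicative updates to the topic mixture, can be sketched as below. This is a toy illustration under assumed uniform initialization and a made-up two-topic corpus, not the authors' DUI algorithm; the learning rate and layer count are arbitrary.

```python
import math

def unfolded_inference(word_counts, topics, num_layers=3, lr=0.5):
    """Toy deep-unfolding sketch: each 'layer' performs one exponentiated
    multiplicative update of a document's topic proportions theta."""
    K = len(topics)
    V = len(word_counts)
    theta = [1.0 / K] * K  # uniform initial topic mixture (assumption)
    for _ in range(num_layers):
        # expected word distribution under the current mixture
        mix = [sum(theta[k] * topics[k][v] for k in range(K)) for v in range(V)]
        # gradient of the log-likelihood w.r.t. theta
        grad = [sum(word_counts[v] * topics[k][v] / mix[v] for v in range(V))
                for k in range(K)]
        # exponentiated update, then renormalize to the simplex
        theta = [t * math.exp(lr * g) for t, g in zip(theta, grad)]
        z = sum(theta)
        theta = [t / z for t in theta]
    return theta

# Two hypothetical topics over a 2-word vocabulary; counts favour topic 0
topics = [[0.9, 0.1], [0.1, 0.9]]
theta = unfolded_inference([8, 2], topics)
```

In the paper's DUI the per-layer parameters are additionally untied and trained by back-propagation against the end criterion; the sketch above only shows the unrolled inference recursion itself.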

  14. Hot, deep origin of petroleum: deep basin evidence and application

    Science.gov (United States)

    Price, Leigh C.

    1978-01-01

    Use of the model of a hot deep origin of oil places rigid constraints on the migration and entrapment of crude oil. Specifically, oil originating from depth migrates vertically up faults and is emplaced in traps at shallower depths. Review of petroleum-producing basins worldwide shows oil occurrence in these basins conforms to the restraints of and therefore supports the hypothesis. Most of the world's oil is found in the very deepest sedimentary basins, and production over or adjacent to the deep basin is cut by or directly updip from faults dipping into the basin deep. Generally the greater the fault throw the greater the reserves. Fault-block highs next to deep sedimentary troughs are the best target areas by the present concept. Traps along major basin-forming faults are quite prospective. The structural style of a basin governs the distribution, types, and amounts of hydrocarbons expected and hence the exploration strategy. Production in delta depocenters (Niger) is in structures cut by or updip from major growth faults, and structures not associated with such faults are barren. Production in block fault basins is on horsts next to deep sedimentary troughs (Sirte, North Sea). In basins whose sediment thickness, structure and geologic history are known to a moderate degree, the main oil occurrences can be specifically predicted by analysis of fault systems and possible hydrocarbon migration routes. Use of the concept permits the identification of significant targets which have either been downgraded or ignored in the past, such as production in or just updip from thrust belts, stratigraphic traps over the deep basin associated with major faulting, production over the basin deep, and regional stratigraphic trapping updip from established production along major fault zones.

  15. How Stressful Is "Deep Bubbling"?

    Science.gov (United States)

    Tyrmi, Jaana; Laukkanen, Anne-Maria

    2017-03-01

    Water resistance therapy by phonating through a tube into the water is used to treat dysphonia. Deep submersion (≥10 cm in water, "deep bubbling") is used for hypofunctional voice disorders. Using it with caution is recommended to avoid vocal overloading. This experimental study aimed to investigate how strenuous "deep bubbling" is. Fourteen subjects, half of them with voice training, repeated the syllable [pa:] in comfortable speaking pitch and loudness, loudly, and in strained voice. Thereafter, they phonated a vowel-like sound both in comfortable loudness and loudly into a glass resonance tube immersed 10 cm into the water. Oral pressure, contact quotient (CQ, calculated from electroglottographic signal), and sound pressure level were studied. The peak oral pressure P(oral) during [p] and shuttering of the outer end of the tube was measured to estimate the subglottic pressure P(sub) and the mean P(oral) during vowel portions to enable calculation of transglottic pressure P(trans). Sensations during phonation were reported with an open-ended interview. P(sub) and P(oral) were higher in "deep bubbling" and P(trans) lower than in loud syllable phonation, but the CQ did not differ significantly. Similar results were obtained for the comparison between loud "deep bubbling" and strained phonation, although P(sub) did not differ significantly. Most of the subjects reported "deep bubbling" to be stressful only for respiratory and lip muscles. No big differences were found between trained and untrained subjects. The CQ values suggest that "deep bubbling" may increase vocal fold loading. Further studies should address impact stress during water resistance exercises. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  16. Evaluation of the effect of media velocity on filter efficiency and most penetrating particle size of nuclear grade high-efficiency particulate air filters.

    Science.gov (United States)

    Alderman, Steven L; Parsons, Michael S; Hogancamp, Kristina U; Waggoner, Charles A

    2008-11-01

    High-efficiency particulate air (HEPA) filters are widely used to control particulate matter emissions from processes that involve management or treatment of radioactive materials. Section FC of the American Society of Mechanical Engineers AG-1 Code on Nuclear Air and Gas Treatment currently restricts media velocity to a maximum of 2.5 cm/sec in any application where this standard is invoked. There is some desire to eliminate or increase this media velocity limit. A concern is that increasing media velocity will result in higher emissions of ultrafine particles; thus, it is unlikely that higher media velocities will be allowed without data to demonstrate the effect of media velocity on removal of ultrafine particles. In this study, the performance of nuclear grade HEPA filters, with respect to filter efficiency and most penetrating particle size, was evaluated as a function of media velocity. Deep-pleat nuclear grade HEPA filters (31 cm x 31 cm x 29 cm) were evaluated at media velocities ranging from 2.0 to 4.5 cm/sec using a potassium chloride aerosol challenge having a particle size distribution centered near the HEPA filter most penetrating particle size. Filters were challenged under two distinct mass loading rate regimes through the use of or exclusion of a 3 μm aerodynamic diameter cut point cyclone. Filter efficiency and most penetrating particle size measurements were made throughout the duration of filter testing. Filter efficiency measured at the onset of aerosol challenge was noted to decrease with increasing media velocity, with values ranging from 99.999 to 99.977%. The filter most penetrating particle size recorded at the onset of testing was noted to decrease slightly as media velocity was increased and was typically in the range of 110-130 nm. Although additional testing is needed, these findings indicate that filters operating at media velocities up to 4.5 cm/sec will meet or exceed current filter efficiency requirements. Additionally
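    The efficiency figures quoted above map directly onto penetration fractions, which is the quantity that grows as media velocity rises. A minimal sketch of that conversion follows; the particle counts are illustrative stand-ins, not data from the study.

```python
def filter_efficiency(upstream_count: float, downstream_count: float) -> float:
    """Fractional collection efficiency from particle counts measured
    upstream and downstream of the filter."""
    return 1.0 - downstream_count / upstream_count

def penetration_percent(efficiency_percent: float) -> float:
    """Penetration (%) implied by a quoted efficiency (%)."""
    return 100.0 - efficiency_percent

# Illustrative counts: 1e6 challenge particles, 230 measured downstream
eff = filter_efficiency(1_000_000, 230)   # 0.99977, i.e. 99.977 %

# The onset efficiencies quoted above correspond to penetrations of
# 0.001 % (at 99.999 %) up to 0.023 % (at 99.977 %).
best = penetration_percent(99.999)
worst = penetration_percent(99.977)
```

Framed this way, the drop from 99.999% to 99.977% efficiency is a roughly twenty-fold increase in penetration, which is why the ultrafine-particle concern drives the media velocity limit.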

  17. Accelerating Deep Learning with Shrinkage and Recall

    OpenAIRE

    Zheng, Shuai; Vishnu, Abhinav; Ding, Chris

    2016-01-01

    Deep Learning is a very powerful machine learning model. Deep Learning trains a large number of parameters for multiple layers and is very slow when data is in large scale and the architecture size is large. Inspired from the shrinking technique used in accelerating computation of Support Vector Machines (SVM) algorithm and screening technique used in LASSO, we propose a shrinking Deep Learning with recall (sDLr) approach to speed up deep learning computation. We experiment shrinking Deep Lea...

  18. What Really is Deep Learning Doing?

    OpenAIRE

    Xiong, Chuyu

    2017-01-01

    Deep learning has achieved a great success in many areas, from computer vision to natural language processing, to game playing, and much more. Yet, what deep learning is really doing is still an open question. There are a lot of works in this direction. For example, [5] tried to explain deep learning by group renormalization, and [6] tried to explain deep learning from the view of functional approximation. In order to address this very crucial question, here we see deep learning from perspect...

  19. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  20. Deep Learning in Drug Discovery.

    Science.gov (United States)

    Gawehn, Erik; Hiss, Jan A; Schneider, Gisbert

    2016-01-01

    Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Eric Davidson and deep time.

    Science.gov (United States)

    Erwin, Douglas H

    2017-10-13

    Eric Davidson had a deep and abiding interest in the role developmental mechanisms played in generating evolutionary patterns documented in deep time, from the origin of the euechinoids to the processes responsible for the morphological architectures of major animal clades. Although not an evolutionary biologist, Davidson's interests long preceded the current excitement over comparative evolutionary developmental biology. Here I discuss three aspects at the intersection between his research and evolutionary patterns in deep time: First, understanding the mechanisms of body plan formation, particularly those associated with the early diversification of major metazoan clades. Second, a critique of early claims about ancestral metazoans based on the discoveries of highly conserved genes across bilaterian animals. Third, Davidson's own involvement in paleontology through a collaborative study of the fossil embryos from the Ediacaran Doushantuo Formation in south China.

  2. Deep Learning in Gastrointestinal Endoscopy.

    Science.gov (United States)

    Patel, Vivek; Armstrong, David; Ganguli, Malika; Roopra, Sandeep; Kantipudi, Neha; Albashir, Siwar; Kamath, Markad V

    2016-01-01

    Gastrointestinal (GI) endoscopy is used to inspect the lumen or interior of the GI tract for several purposes, including, (1) making a clinical diagnosis, in real time, based on the visual appearances; (2) taking targeted tissue samples for subsequent histopathological examination; and (3) in some cases, performing therapeutic interventions targeted at specific lesions. GI endoscopy is therefore predicated on the assumption that the operator-the endoscopist-is able to identify and characterize abnormalities or lesions accurately and reproducibly. However, as in other areas of clinical medicine, such as histopathology and radiology, many studies have documented marked interobserver and intraobserver variability in lesion recognition. Thus, there is a clear need and opportunity for techniques or methodologies that will enhance the quality of lesion recognition and diagnosis and improve the outcomes of GI endoscopy. Deep learning models provide a basis to make better clinical decisions in medical image analysis. Biomedical image segmentation, classification, and registration can be improved with deep learning. Recent evidence suggests that the application of deep learning methods to medical image analysis can contribute significantly to computer-aided diagnosis. Deep learning models are usually considered to be more flexible and provide reliable solutions for image analysis problems compared to conventional computer vision models. The use of fast computers offers the possibility of real-time support that is important for endoscopic diagnosis, which has to be made in real time. Advanced graphics processing units and cloud computing have also favored the use of machine learning, and more particularly, deep learning for patient care. This paper reviews the rapidly evolving literature on the feasibility of applying deep learning algorithms to endoscopic imaging.

  3. Deep mycoses in Amazon region.

    Science.gov (United States)

    Talhari, S; Cunha, M G; Schettini, A P; Talhari, A C

    1988-09-01

    Patients with deep mycoses diagnosed in dermatologic clinics of Manaus (state of Amazonas, Brazil) were studied from November 1973 to December 1983. They came from the Brazilian states of Amazonas, Pará, Acre, and Rondônia and the Federal Territory of Roraima. All of these regions, with the exception of Pará, are situated in the western part of the Amazon Basin. The climatic conditions in this region are almost the same: tropical forest, high rainfall, and mean annual temperature of 26 °C. The deep mycoses diagnosed, in order of frequency, were Jorge Lobo's disease, paracoccidioidomycosis, chromomycosis, sporotrichosis, mycetoma, cryptococcosis, zygomycosis, and histoplasmosis.

  4. Producing deep-water hydrocarbons

    International Nuclear Information System (INIS)

    Pilenko, Thierry

    2011-01-01

    Several studies relate the history and progress made in offshore production from oil and gas fields in relation to reserves and the techniques for producing oil offshore. The intention herein is not to review these studies but rather to argue that the activities of prospecting and producing deep-water oil and gas call for a combination of technology and project management and, above all, of devotion and innovation. Without this sense of commitment motivating men and women in this industry, the human adventure of deep-water production would never have taken place

  5. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu

    2017-12-23

    Motivation: Oxford Nanopore sequencing is a sequencing technology that has developed rapidly in recent years. To keep pace with the explosion of the downstream data analytical tools, a versatile Nanopore sequencing simulator is needed to complement the experimental data as well as to benchmark those newly developed tools. However, all the currently available simulators are based on simple statistics of the produced reads, which have difficulty in capturing the complex nature of the Nanopore sequencing procedure, the main task of which is the generation of raw electrical current signals. Results: Here we propose a deep learning based simulator, DeepSimulator, to mimic the entire pipeline of Nanopore sequencing. Starting from a given reference genome or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments performed across four species show that the signals generated by our context-dependent model are more similar to the experimentally obtained signals than the ones generated by the official context-independent pore model. In terms of the simulated reads, we provide a parameter interface to users so that they can obtain the reads with different accuracies ranging from 83% to 97%. The reads generated by the default parameters have almost the same properties as the real data. Two case studies demonstrate the application of DeepSimulator to benefit the development of tools in de novo assembly and in low coverage SNP detection. Availability: The software can be accessed freely at: https://github.com/lykaust15/DeepSimulator.

  6. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-09-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a study to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. An assessment of historical deep gas well drilling activity and forecast of future trends was completed during the first six months of the project; this segment of the project was covered in Technical Project Report No. 1. The second progress report covers the next six months of the project during which efforts were primarily split between summarizing rock mechanics and fracture growth in deep reservoirs and contacting operators about case studies of deep gas well stimulation.

  7. STIMULATION TECHNOLOGIES FOR DEEP WELL COMPLETIONS

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2003-06-01

    The Department of Energy (DOE) is sponsoring a Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a project to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. Phase 1 was recently completed and consisted of assessing deep gas well drilling activity (1995-2007) and an industry survey on deep gas well stimulation practices by region. Of the 29,000 oil, gas and dry holes drilled in 2002, about 300 were deep wells; 25% were dry, 50% were high temperature/high pressure completions and 25% were simply deep completions. South Texas has about 30% of these wells, Oklahoma 20%, Gulf of Mexico Shelf 15% and the Gulf Coast about 15%. The Rockies represent only 2% of deep drilling. Of the 60 operators who drill deep and HTHP wells, the top 20 drill almost 80% of the wells. Six operators drill half the U.S. deep wells. Deep drilling peaked at 425 wells in 1998 and fell to 250 in 1999. Drilling is expected to rise through 2004 after which drilling should cycle down as overall drilling declines.

  8. Deep Space Climate Observatory (DSCOVR)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Deep Space Climate ObserVatoRy (DSCOVR) satellite is a NOAA operated asset at the first Lagrange (L1) point. The primary space weather instrument is the PlasMag...

  9. Ploughing the deep sea floor.

    Science.gov (United States)

    Puig, Pere; Canals, Miquel; Company, Joan B; Martín, Jacobo; Amblas, David; Lastras, Galderic; Palanques, Albert

    2012-09-13

    Bottom trawling is a non-selective commercial fishing technique whereby heavy nets and gear are pulled along the sea floor. The direct impact of this technique on fish populations and benthic communities has received much attention, but trawling can also modify the physical properties of seafloor sediments, water–sediment chemical exchanges and sediment fluxes. Most of the studies addressing the physical disturbances of trawl gear on the seabed have been undertaken in coastal and shelf environments, however, where the capacity of trawling to modify the seafloor morphology coexists with high-energy natural processes driving sediment erosion, transport and deposition. Here we show that on upper continental slopes, the reworking of the deep sea floor by trawling gradually modifies the shape of the submarine landscape over large spatial scales. We found that trawling-induced sediment displacement and removal from fishing grounds causes the morphology of the deep sea floor to become smoother over time, reducing its original complexity as shown by high-resolution seafloor relief maps. Our results suggest that in recent decades, following the industrialization of fishing fleets, bottom trawling has become an important driver of deep seascape evolution. Given the global dimension of this type of fishery, we anticipate that the morphology of the upper continental slope in many parts of the world’s oceans could be altered by intensive bottom trawling, producing comparable effects on the deep sea floor to those generated by agricultural ploughing on land.

  10. FOSTERING DEEP LEARNING AMONGST ENTREPRENEURSHIP ...

    African Journals Online (AJOL)

    An important prerequisite for this objective to be achieved is that lecturers ensure that students adopt a deep learning approach towards the entrepreneurship courses being taught, as this will enable them to truly understand key entrepreneurial concepts and strategies and how they can be implemented in the real ...

  11. Deep Space Gateway "Recycler" Mission

    Science.gov (United States)

    Graham, L.; Fries, M.; Hamilton, J.; Landis, R.; John, K.; O'Hara, W.

    2018-02-01

    Use of the Deep Space Gateway provides a hub for a reusable planetary sample return vehicle for missions to gather star dust as well as samples from various parts of the solar system including main belt asteroids, near-Earth asteroids, and Mars moon.

  12. Deep freezers with heat recovery

    Energy Technology Data Exchange (ETDEWEB)

    Kistler, J.

    1981-09-02

    Together with space and water heating systems, deep freezers are the biggest energy consumers in households. The article investigates the possibility of using the waste heat for water heating. The design principle of such a system is presented in a wiring diagram.

  13. A Deep-Sea Simulation.

    Science.gov (United States)

    Montes, Georgia E.

    1997-01-01

    Describes an activity that simulates exploration techniques used in deep-sea explorations and teaches students how this technology can be used to take a closer look inside volcanoes, inspect hazardous waste sites such as nuclear reactors, and explore other environments dangerous to humans. (DDR)

  14. Barbados Deep-Water Sponges

    NARCIS (Netherlands)

    Soest, van R.W.M.; Stentoft, N.

    1988-01-01

    Deep-water sponges dredged up in two locations off the west coast of Barbados are systematically described. A total of 69 species is recorded, among which 16 are new to science, viz. Pachymatisma geodiformis, Asteropus syringiferus, Cinachyra arenosa, Theonella atlantica. Corallistes paratypus,

  15. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

    With the dramatic growth of the image data on the web, there is an increasing demand of the algorithms capable of understanding the visual information automatically. Deep learning, served as one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,

  16. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  17. DM Considerations for Deep Drilling

    OpenAIRE

    Dubois-Felsmann, Gregory

    2016-01-01

    An outline of the current situation regarding the DM plans for the Deep Drilling surveys and an invitation to the community to provide feedback on what they would like to see included in the data processing and visualization of these surveys.

  18. Lessons from Earth's Deep Time

    Science.gov (United States)

    Soreghan, G. S.

    2005-01-01

    Earth is a repository of data on climatic changes from its deep-time history. Article discusses the collection and study of these data to predict future climatic changes, the need to create national study centers for the purpose, and the necessary cooperation between different branches of science in climatic research.

  19. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  20. Deep Learning and Music Adversaries

    DEFF Research Database (Denmark)

    Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan

    2015-01-01

    the minimal perturbation of the input image such that the system misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of deep learning systems applied to music content analysis. In our case, however, the system inputs are magnitude spectral frames, which require...

  1. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2005-06-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies conducted a study to evaluate the stimulation of deep wells. The objective of the project was to review U.S. deep well drilling and stimulation activity, review rock mechanics and fracture growth in deep, high-pressure/temperature wells and evaluate stimulation technology in several key deep plays. This report documents results from this project.

  2. Particulate silica test agents for HEPA filters

    International Nuclear Information System (INIS)

    Bauman, A.J.

    1987-01-01

    The authors developed a solid test aerosol (Dri-Test) and a versatile portable delivery system for it. The aerosol is based on thermal silica, modified chemically to make it surface-hydrophobic and fluorescent under UV illumination. The fluorescent tag enables one to identify tested filters. Primary particles are 7 nm in diameter, spherical, and of density 2.20 g·cm⁻³; the bulk aerosol powder has a density of 0.048 g·cm⁻³. Tests by means of laser particle counters, TSI nucleation counters and a California Measurements quartz microbalance mass analyzer show that the delivered aerosol has a bimodal size distribution with peaks near 80 and 100 nm. An estimated 40-50% of the aerosol has a size below the limit of detectability by laser (Las-X) counters, i.e. 50 nm. The surface-hydrophobic aerosol is unaffected by ambient humidity and, unlike hydrophilic silicas, is innocuous to health.

  3. Study on the Metal Fiber Filter Modeling for Capturing Radioactive Aerosol

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunguk; Lee, Chanhyun; Park, Minchan; Lee, Jaekeun [EcoEnergy Research Institute, Busan (Korea, Republic of)

    2015-05-15

    The components of an air cleaning system are demisters to remove entrained moisture, pre-filters to remove the bulk of the particulate matter, high efficiency particulate air (HEPA) filters, iodine adsorbers (generally activated carbon), and HEPA filters after the adsorbers for redundancy and collection of carbon fines. The HEPA filters are the most important components preventing radioactive aerosols from being released to the control room and the adjacent environment. The conventional HEPA filter has pleated media for low pressure drop; the filters must provide high collection efficiency as well as low pressure drop. Unfortunately, conventional HEPA filters are made of glass fiber and polyester and pose disposal issues since they cannot be recycled. In fact, 31,055 HEPA filters used in nuclear facilities in the U.S. are disposed of annually. Analyses at face velocities of 1 cm/s and 10 cm/s were also carried out, and they also show an R² value of 0.995. However, since official HEPA filter standards are established at a face velocity of 5 cm/s, this value will be used in further analysis. From the comparative studies carried out at different filter thicknesses and face velocities, a good correlation is found between the model and the experiment.
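
    The abstract does not give the authors' model equations. As background, one classical description of the pressure drop across a clean fibrous medium is the Davies correlation, sketched below; the fiber diameter, thickness, and solidity values are illustrative assumptions, not data from this study.

```python
# Sketch of the classical Davies correlation for the pressure drop across
# a clean fibrous filter medium (Stokes-flow regime). Parameter values
# below are illustrative assumptions, not fitted values from the study.

def davies_pressure_drop(face_velocity, thickness, fiber_diameter, solidity,
                         viscosity=1.81e-5):
    """Pressure drop (Pa) across a fibrous medium.

    face_velocity  : m/s
    thickness      : medium thickness, m
    fiber_diameter : m
    solidity       : fiber packing fraction (dimensionless)
    viscosity      : air dynamic viscosity, Pa*s
    """
    f = 64.0 * solidity**1.5 * (1.0 + 56.0 * solidity**3)  # Davies drag factor
    return f * viscosity * thickness * face_velocity / fiber_diameter**2

# Pressure drop at the standard HEPA test face velocity of 5 cm/s
dp_5 = davies_pressure_drop(0.05, 0.5e-3, 8e-6, 0.05)
# The model is linear in velocity, so 10 cm/s gives exactly twice 5 cm/s,
# consistent with comparing results across face velocities.
dp_10 = davies_pressure_drop(0.10, 0.5e-3, 8e-6, 0.05)
assert abs(dp_10 - 2.0 * dp_5) < 1e-9
```

    The linearity in face velocity is why results taken at 1, 5 and 10 cm/s can be compared directly in this regime.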

  4. Deep Web and Dark Web: Deep World of the Internet

    OpenAIRE

    Çelik, Emine

    2018-01-01

    The Internet is undoubtedly still a revolutionary breakthrough in the history of humanity. Many people use the internet for communication, social media, shopping, political and social agendas, and more. Deep Web and Dark Web concepts are handled not only by computer and software engineers but also by social scientists, because of the role of the internet for states in international arenas, public institutions, and human life. Starting from the very important role of the internet for social s...

  5. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.

    Science.gov (United States)

    Wachinger, Christian; Reuter, Martin; Klein, Tassilo

    2018-04-15

    We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address the class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have high potential for adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
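
    The patch-plus-coordinates input ("since patches lack spatial context, we augment them with coordinates") can be illustrated with a toy NumPy sketch. Note this uses plain normalized Cartesian coordinates, whereas DeepNAT uses a Laplace-Beltrami parameterization; patch size and volume are arbitrary.

```python
import numpy as np

def extract_patch_with_coords(volume, center, size=3):
    """Return (patch, coords) for one voxel: the size^3 intensity patch
    around `center`, plus the center's coordinates normalized to [0, 1]
    per axis. A toy stand-in for DeepNAT's input construction; the paper
    uses Laplace-Beltrami eigenfunctions rather than Cartesian coordinates."""
    r = size // 2
    z, y, x = center
    patch = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
    coords = np.array([z, y, x], dtype=float) / (np.array(volume.shape) - 1)
    return patch.ravel(), coords

vol = np.arange(5 * 5 * 5, dtype=float).reshape(5, 5, 5)
patch, coords = extract_patch_with_coords(vol, (2, 2, 2))
# the network input concatenates intensity features with spatial context
features = np.concatenate([patch, coords])
assert features.shape == (27 + 3,)
assert np.allclose(coords, 0.5)   # the volume center maps to (0.5, 0.5, 0.5)
```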

  6. Deep Phenotyping: Deep Learning For Temporal Phenotype/Genotype Classification

    OpenAIRE

    Najafi, Mohammad; Namin, Sarah; Esmaeilzadeh, Mohammad; Brown, Tim; Borevitz, Justin

    2017-01-01

    High resolution and high throughput genotype-to-phenotype studies in plants are underway to accelerate breeding of climate-ready crops. Complex developmental phenotypes are observed by imaging a variety of accessions in different environment conditions; however, extracting the genetically heritable traits is challenging. In recent years, deep learning techniques, and in particular Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Long-Short Term Memories (LSTMs), h...

  7. Deep Neuromuscular Blockade Improves Laparoscopic Surgical Conditions

    DEFF Research Database (Denmark)

    Rosenberg, Jacob; Herring, W Joseph; Blobner, Manfred

    2017-01-01

    INTRODUCTION: Sustained deep neuromuscular blockade (NMB) during laparoscopic surgery may facilitate optimal surgical conditions. This exploratory study assessed whether deep NMB improves surgical conditions and, in doing so, allows use of lower insufflation pressures during laparoscopic cholecys...

  8. Joint Training of Deep Boltzmann Machines

    OpenAIRE

    Goodfellow, Ian; Courville, Aaron; Bengio, Yoshua

    2012-01-01

    We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks.

  9. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  10. Collection of aerosols in high efficiency particulate air filters

    International Nuclear Information System (INIS)

    Pratt, R.P.; Green, B.L.

    1987-01-01

    The investigation of the performance of HEPA filters of both minipleat and conventional deep-pleat designs has continued at Harwell. Samples of filters from several manufacturers have been tested against the UKAEA/BNF plc filter purchasing specification. No unexpected problems have come to light in these tests, apart from some evidence to suggest that, although meeting the specification, minipleat filters are inherently weaker in burst strength than conventional filters. In addition, tests have been carried out to investigate the dust loading versus pressure drop characteristics of both designs of filter using a range of test dusts: ASHRAE dust, carbon black, BS 2831 No. 2 test dust and sodium chloride. In parallel with the laboratory test work, a more fundamental study of the effects of the geometric arrangement of filter media within the filter frame has been carried out on behalf of the UKAEA by Loughborough University. The result of this study is a mathematical model that predicts the dust load versus pressure drop characteristic as a function of filter media geometry. This has produced good agreement with laboratory test results using a challenge aerosol in the 1-5 μm size range. Further observations have been made to enhance understanding of the deposition of aerosols within the filter structure. The observations suggest that the major influence on dust loading is the depth of material collected in the flow channel as a surface deposit, and this explains the relatively poor performance of the minipleat design of filter.
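
    The abstract does not reproduce the Loughborough model's equations. A minimal surface-cake sketch of the dust load versus pressure drop relationship, with purely illustrative constants, is:

```python
# Minimal cake-filtration sketch: once dust collects as a surface deposit,
# pressure drop is commonly modeled as rising linearly with areal dust load.
# dp0 (clean-filter pressure drop) and K2 (specific cake resistance term)
# are assumed illustrative constants, not values fitted in the Harwell or
# Loughborough work.

def pressure_drop(dust_load, media_area, face_velocity=0.05,
                  dp0=250.0, K2=2.0e6):
    """Total pressure drop (Pa) once `dust_load` kg of dust has collected
    as a surface deposit on `media_area` m^2 of filter media."""
    areal_load = dust_load / media_area             # kg/m^2
    return dp0 + K2 * face_velocity * areal_load

def dust_capacity(dp_max, media_area, face_velocity=0.05,
                  dp0=250.0, K2=2.0e6):
    """Dust load (kg) at which the terminal pressure drop dp_max is reached."""
    return (dp_max - dp0) * media_area / (K2 * face_velocity)

# In this linear model, doubling the media area doubles the dust holding
# capacity at a fixed terminal pressure drop.
assert abs(dust_capacity(1000.0, 20.0) - 2.0 * dust_capacity(1000.0, 10.0)) < 1e-9
```

    The sketch predicts capacity strictly proportional to media area; the optimum media area reported for deep-pleat filters arises because adding pleats narrows the flow channels that the deposit fills, an effect this linear model deliberately ignores.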

  11. Is Multitask Deep Learning Practical for Pharma?

    Science.gov (United States)

    Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay

    2017-08-28

    Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
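
    As a sketch of what a multitask deep network computes (a generic illustration, not DeepChem's actual API): a shared hidden layer feeds one output head per assay, so related tasks share a learned representation. All names and weights here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def multitask_forward(x, W_shared, heads):
    """Forward pass of a toy multitask network: one shared hidden layer
    ("trunk") followed by a separate sigmoid output head per task.
    Weights are random placeholders, not a trained DeepChem model."""
    h = np.tanh(W_shared @ x)                      # shared representation
    # each task head maps the shared features to an activity probability
    return [1.0 / (1.0 + np.exp(-(w @ h + b))) for w, b in heads]

n_features, n_hidden, n_tasks = 1024, 64, 3        # e.g. fingerprint -> 3 assays
x = rng.random(n_features)                         # one molecule's features
W_shared = rng.standard_normal((n_hidden, n_features)) * 0.01
heads = [(rng.standard_normal(n_hidden) * 0.1, 0.0) for _ in range(n_tasks)]

preds = multitask_forward(x, W_shared, heads)
assert len(preds) == n_tasks
assert all(0.0 < float(p) < 1.0 for p in preds)    # per-task probabilities
```

    Because the trunk is shared, gradients from every assay update the same representation during training, which is the mechanism behind the transfer effects studied in the paper.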

  12. Evaluation of the DeepWind concept

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Borg, Michael; Gonzales Seabra, Luis Alberto

    The report describes the DeepWind 5 MW conceptual design as a baseline for results obtained in the scientific and technical work packages of the DeepWind project. A comparison of DeepWind with existing VAWTs and paper projects is carried out, together with an evaluation of the concept in terms of cost...

  13. Consolidated Deep Actor Critic Networks (DRAFT)

    NARCIS (Netherlands)

    Van der Laan, T.A.

    2015-01-01

    The works [Volodymyr et al. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.] and [Volodymyr et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.] have demonstrated the power of combining deep neural networks with

  14. Simulator Studies of the Deep Stall

    Science.gov (United States)

    White, Maurice D.; Cooper, George E.

    1965-01-01

    Simulator studies of the deep-stall problem encountered with modern airplanes are discussed. The results indicate that the basic deep-stall tendencies produced by aerodynamic characteristics are augmented by operational considerations. Because of control difficulties to be anticipated in the deep stall, it is desirable that adequate safeguards be provided against inadvertent penetrations.

  15. TOPIC MODELING: CLUSTERING OF DEEP WEBPAGES

    OpenAIRE

    Muhunthaadithya C; Rohit J.V; Sadhana Kesavan; E. Sivasankar

    2015-01-01

    The internet comprises a massive amount of information in the form of zillions of web pages. This information can be categorized into the surface web and the deep web. Existing search engines can effectively make use of surface web information, but the deep web remains unexploited. Machine learning techniques have been commonly employed to access deep web content.

  16. DeepFlavour in CMS

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Flavour-tagging of jets is an important task in collider-based high energy physics and a field where machine learning tools are applied by all major experiments. A new tagger (DeepFlavour) was developed and commissioned in CMS, based on an advanced machine learning procedure. A deep neural network is used for multi-classification of jets originating from a b-quark, two b-quarks, a c-quark, two c-quarks, or light colored particles (u-, d-, s-quarks or gluons). The performance was measured in both data and simulation. The talk will also include the measured performance of all taggers in CMS. The different taggers and results will be discussed and compared, with some focus on details of the newest tagger.

  17. Deep Learning for ECG Classification

    Science.gov (United States)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications where this problem can be stated. Currently, there are many machine learning (ML) solutions which can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML solutions is the use of heuristic hand-crafted or engineered features with shallow feature learning architectures. The risk is failing to find the features that would give high classification accuracy on this ECG problem. One proposed solution is to use deep learning architectures, where the first layers of convolutional neurons behave as feature extractors and, at the end, some fully-connected (FCN) layers are used for making the final decision about ECG classes. In this work, a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented and some classification results are shown.
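
    The conv-then-FCN pipeline described above can be sketched as a single NumPy forward pass. The filter count, kernel width, sampling rate, and four-class output are illustrative assumptions, and the random weights stand in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_bank(x, kernels):
    """Valid-mode 1D cross-correlation filter bank followed by ReLU.
    x: (length,), kernels: (n_filters, width)."""
    n_f, w = kernels.shape
    out = np.empty((n_f, x.size - w + 1))
    for i, k in enumerate(kernels):
        # np.convolve flips its kernel; flip back to get cross-correlation
        out[i] = np.convolve(x, k[::-1], mode="valid")
    return np.maximum(out, 0.0)                    # ReLU

def classify(signal, kernels, W, b):
    """Toy ECG classifier: conv features -> global average pool -> FC softmax.
    All weights are random placeholders, not a trained network."""
    feats = conv1d_bank(signal, kernels).mean(axis=1)  # global average pooling
    logits = W @ feats + b                             # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                                 # class probabilities

ecg = rng.standard_normal(360)           # one second at an assumed 360 Hz
kernels = rng.standard_normal((8, 16))   # 8 learned filters of width 16
W, b = rng.standard_normal((4, 8)), np.zeros(4)   # 4 illustrative ECG classes
p = classify(ecg, kernels, W, b)
assert p.shape == (4,) and abs(p.sum() - 1.0) < 1e-9
```

    In the real architecture the convolutional weights are learned, so the first layers act as the automatic feature extractors the abstract contrasts with hand-crafted features.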

  18. Deep Space Habitat Concept Demonstrator

    Science.gov (United States)

    Bookout, Paul S.; Smitherman, David

    2015-01-01

    This project will develop, integrate, test, and evaluate Habitation Systems that will be utilized as technology testbeds and will advance NASA's understanding of alternative deep space mission architectures, requirements, and operations concepts. Rapid prototyping and existing hardware will be utilized to develop full-scale habitat demonstrators. FY 2014 focused on the development of a large volume Space Launch System (SLS) class habitat (Skylab Gen 2) based on the SLS hydrogen tank components. Similar to the original Skylab, a tank section of the SLS rocket can be outfitted with a deep space habitat configuration and launched as a payload on an SLS rocket. This concept can be used to support extended stay at the Lunar Distant Retrograde Orbit to support the Asteroid Retrieval Mission and provide a habitat suitable for human missions to Mars.

  19. Hybrid mask for deep etching

    KAUST Repository

    Ghoneim, Mohamed T.

    2017-08-10

    Deep reactive ion etching is essential for creating high aspect ratio micro-structures for microelectromechanical systems, sensors and actuators, and emerging flexible electronics. A novel hybrid dual soft/hard mask bilayer may be deposited during semiconductor manufacturing for deep reactive etches. Such a manufacturing process may include depositing a first mask material on a substrate; depositing a second mask material on the first mask material; depositing a third mask material on the second mask material; patterning the third mask material with a pattern corresponding to one or more trenches for transfer to the substrate; transferring the pattern from the third mask material to the second mask material; transferring the pattern from the second mask material to the first mask material; and/or transferring the pattern from the first mask material to the substrate.

  20. Soft-Deep Boltzmann Machines

    OpenAIRE

    Kiwaki, Taichi

    2015-01-01

    We present a layered Boltzmann machine (BM) that can better exploit the advantages of a distributed representation. It is widely believed that deep BMs (DBMs) have far greater representational power than their shallow counterpart, restricted Boltzmann machines (RBMs). However, this expectation of the supremacy of DBMs over RBMs has never been validated in a theoretical fashion. In this paper, we provide both theoretical and empirical evidence that the representational power of DBMs can be a...

  1. Evolving Deep Networks Using HPC

    Energy Technology Data Exchange (ETDEWEB)

    Young, Steven R. [ORNL, Oak Ridge; Rose, Derek C. [ORNL, Oak Ridge; Johnston, Travis [ORNL, Oak Ridge; Heller, William T. [ORNL, Oak Ridge; Karnowski, Thomas P. [ORNL, Oak Ridge; Potok, Thomas E. [ORNL, Oak Ridge; Patton, Robert M. [ORNL, Oak Ridge; Perdue, Gabriel [Fermilab; Miller, Jonathan [Santa Maria U., Valparaiso]

    2017-01-01

    While a large number of deep learning networks have been studied and published that produce outstanding results on natural image datasets, these datasets make up only a fraction of those to which deep learning can be applied. Such datasets include text data, audio data, and arrays of sensors that have very different characteristics than natural images. As these "best" networks for natural images have been discovered largely through experimentation and cannot be proven optimal on any theoretical basis, there is no reason to believe that they are the optimal networks for these drastically different datasets. Hyperparameter search is thus often a very important process when applying deep learning to a new problem. In this work we present an evolutionary approach to searching the possible space of network hyperparameters and construction that can scale to 18,000 nodes. This approach is applied to datasets of varying types and characteristics, where we demonstrate the ability to rapidly find the best hyperparameters, enabling practitioners to quickly iterate between idea and result.
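
    The evolutionary search idea can be illustrated with a toy sketch (not the authors' ORNL implementation): a population of (learning rate, depth) candidates is scored by a stand-in fitness function, truncation-selected, and mutated. In the real system, evaluating `fitness` means training and scoring a network on one of thousands of nodes.

```python
import random

random.seed(0)

def fitness(lr, n_layers):
    """Stand-in for 'train a network with these hyperparameters and score
    it'; here the optimum is placed at lr ~= 0.01 and 4 layers."""
    return -((lr - 0.01) ** 2) * 1e4 - (n_layers - 4) ** 2

def mutate(ind):
    lr, n = ind
    lr = min(1.0, max(1e-5, lr * random.choice([0.5, 1.0, 2.0])))
    n = min(10, max(1, n + random.choice([-1, 0, 1])))
    return (lr, n)

def evolve(pop_size=20, generations=30):
    # random initial population: log-uniform learning rate, 1-10 layers
    pop = [(10 ** random.uniform(-5, 0), random.randint(1, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[:pop_size // 2]               # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=lambda ind: fitness(*ind))

best_lr, best_layers = evolve()
# keeping the top half each generation makes best fitness monotone,
# so the result beats a deliberately poor setting
assert fitness(best_lr, best_layers) > fitness(0.5, 10)
```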

  2. Deep Space Gateway Science Opportunities

    Science.gov (United States)

    Quincy, C. D.; Charles, J. B.; Hamill, Doris; Sidney, S. C.

    2018-01-01

    The NASA Life Sciences Research Capabilities Team (LSRCT) has been discussing deep space research needs for the last two years. NASA's programs conducting life sciences studies - the Human Research Program, Space Biology, Astrobiology, and Planetary Protection - see the Deep Space Gateway (DSG) as affording enormous opportunities to investigate biological organisms in a unique environment that cannot be replicated in Earth-based laboratories or on Low Earth Orbit science platforms. These investigations may provide in many cases the definitive answers to risks associated with exploration and living outside Earth's protective magnetic field. Unlike Low Earth Orbit or terrestrial locations, the Gateway location will be subjected to the true deep space spectrum and influence of both galactic cosmic and solar particle radiation and thus presents an opportunity to investigate their long-term exposure effects. The question of how a community of biological organisms change over time within the harsh environment of space flight outside of the magnetic field protection can be investigated. The biological response to the absence of Earth's geomagnetic field can be studied for the first time. Will organisms change in new and unique ways under these new conditions? This may be specifically true on investigations of microbial communities. The Gateway provides a platform for microbiology experiments both inside, to improve understanding of interactions between microbes and human habitats, and outside, to improve understanding of microbe-hardware interactions exposed to the space environment.

  3. Deep water recycling through time.

    Science.gov (United States)

    Magni, Valentina; Bouilhol, Pierre; van Hunen, Jeroen

    2014-11-01

    We investigate the dehydration processes in subduction zones and their implications for the water cycle throughout Earth's history. We use a numerical tool that combines thermo-mechanical models with a thermodynamic database to examine slab dehydration for present-day and early Earth settings and its consequences for deep water recycling. We investigate the reactions responsible for releasing water from the crust and the hydrated lithospheric mantle, and how they change with subduction velocity (vₛ), slab age (a) and mantle temperature (Tₘ). Our results show that faster slabs dehydrate over a wider area: they start dehydrating shallower and they carry water deeper into the mantle. We parameterize the amount of water that can be carried deep into the mantle, W (×10⁵ kg/m²), as a function of vₛ (cm/yr), a (Myr), and Tₘ (°C): [Formula: see text]. We generally observe that (1) a 100°C increase in mantle temperature, (2) a ∼15 Myr decrease in plate age, or (3) a decrease in subduction velocity of ∼2 cm/yr all have the same effect on the amount of water retained in the slab at depth, corresponding to a decrease of ∼2.2×10⁵ kg/m² of H₂O. We estimate that for present-day conditions ∼26% of the global influx of water, or 7×10⁸ Tg/Myr of H₂O, is recycled into the mantle. Using a realistic distribution of subduction parameters, we illustrate that deep water recycling might still be possible in early Earth conditions, although its efficiency would generally decrease. Indeed, 0.5-3.7×10⁸ Tg/Myr of H₂O could still be recycled into the mantle at 2.8 Ga. Key points: deep water recycling might be possible even in early Earth conditions; we provide a scaling law to estimate the H₂O flux deep into the mantle; subduction velocity has a major control on the crustal dehydration pattern.
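
    The quoted present-day figures can be cross-checked with one line of arithmetic: if 7×10⁸ Tg/Myr is 26% of the global influx, the implied total influx into subduction zones is about 2.7×10⁹ Tg/Myr.

```python
# Cross-check of the quoted present-day figures: the recycled flux is
# stated to be ~26% of the global water influx into subduction zones.
recycled = 7e8              # Tg/Myr of H2O carried past the arc into the mantle
fraction = 0.26
total_influx = recycled / fraction
assert 2.6e9 < total_influx < 2.8e9   # implies ~2.7e9 Tg/Myr global influx
```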

  4. Vision in the deep sea.

    Science.gov (United States)

    Warrant, Eric J; Locket, N Adam

    2004-08-01

    The deep sea is the largest habitat on Earth. Its three great faunal environments--the twilight mesopelagic zone, the dark bathypelagic zone and the vast flat expanses of the benthic habitat--are home to a rich fauna of vertebrates and invertebrates. In the mesopelagic zone (150-1000 m), the down-welling daylight creates an extended scene that becomes increasingly dimmer and bluer with depth. The available daylight also originates increasingly from vertically above, and bioluminescent point-source flashes, well contrasted against the dim background daylight, become increasingly visible. In the bathypelagic zone below 1000 m no daylight remains, and the scene becomes entirely dominated by point-like bioluminescence. This changing nature of visual scenes with depth--from extended source to point source--has had a profound effect on the designs of deep-sea eyes, both optically and neurally, a fact that until recently was not fully appreciated. Recent measurements of the sensitivity and spatial resolution of deep-sea eyes--particularly from the camera eyes of fishes and cephalopods and the compound eyes of crustaceans--reveal that ocular designs are well matched to the nature of the visual scene at any given depth. This match between eye design and visual scene is the subject of this review. The greatest variation in eye design is found in the mesopelagic zone, where dim down-welling daylight and bioluminescent point sources may be visible simultaneously. Some mesopelagic eyes rely on spatial and temporal summation to increase sensitivity to a dim extended scene, while others sacrifice this sensitivity to localise pinpoints of bright bioluminescence. Yet other eyes have retinal regions separately specialised for each type of light. In the bathypelagic zone, eyes generally get smaller and therefore less sensitive to point sources with increasing depth. In fishes, this insensitivity, combined with surprisingly high spatial resolution, is very well adapted to the

  5. The deep Canary poleward undercurrent

    Science.gov (United States)

    Velez-Belchi, P. J.; Hernandez-Guerra, A.; González-Pola, C.; Fraile, E.; Collins, C. A.; Machín, F.

    2012-12-01

    Poleward undercurrents are well known features of Eastern Boundary systems. In the California upwelling system (CalCEBS), the deep poleward flow has been observed along the entire outer continental shelf and upper slope, using indirect methods based on geostrophic estimates and also using direct current measurements. Among other roles, the poleward undercurrent in the CalCEBS maintains the system's high productivity by transporting equatorial Pacific waters northward all the way to Vancouver Island and the subpolar gyre, although there is also concern about the low oxygen concentration of these waters. However, in the case of the Canary Current Eastern Boundary upwelling system (CanCEBS), there are very few observations of the poleward undercurrent. Most of these observations are short-term mooring records or drifter trajectories of the upper-slope flow. Hence, the importance of the subsurface poleward flow in the CanCEBS has only been hypothesized. Moreover, due to the large differences in coastline shape and topography between the California and Canary Current systems, the results obtained for the CalCEBS are not completely applicable to the CanCEBS. In this study we report the first direct observations of the continuity of the deep poleward flow of the Canary deep poleward undercurrent (CdPU) in the North-Africa sector of the CanCEBS, and one of the few direct observations in the North-Africa sector of the Canary Current eastern boundary. The results indicate that the Canary Island archipelago disrupts the deep poleward undercurrent even at depths where the flow is not blocked by the bathymetry. The deep poleward undercurrent flows west around the easternmost islands and northeast of the Conception Bank to rejoin the intermittent branch that follows the African slope in the Lanzarote Passage. This hypothesis is consistent with the AAIW found west of Lanzarote, as far as 17°W. But also, this hypothesis would be coherent

  6. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  7. Deep Corals, Deep Learning: Moving the Deep Net Towards Real-Time Image Annotation

    OpenAIRE

    Lea-Anne Henry; Sankha S. Mukherjee; Neil M. Roberston; Laurence De Clippele; J. Murray Roberts

    2016-01-01

    The mismatch between human capacity and the acquisition of Big Data such as Earth imagery undermines commitments to Convention on Biological Diversity (CBD) and Aichi targets. Artificial intelligence (AI) solutions to Big Data issues are urgently needed as these could prove to be faster, more accurate, and cheaper. Reducing costs of managing protected areas in remote deep waters and in the High Seas is of great importance, and this is a realm where autonomous technology will be transformative.

  8. Invited talk: Deep Learning Meets Physics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence, with overwhelming success in industrial speech, text and vision benchmarks. Consequently it evolved into the central field of research for IT giants like Google, Facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. At its core, Deep Learning discovers multiple levels of abstract representations of the input. The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence, and therefore limits model selection. Major advances in Deep Learning can be related to avoiding the vanishing gradient, like stacking, ReLUs, residual networks, highway networks, and LSTM. For Deep Learning, we suggested self-normalizing neural networks (SNNs), which automatica...

  9. Deep remission: a new concept?

    Science.gov (United States)

    Colombel, Jean-Frédéric; Louis, Edouard; Peyrin-Biroulet, Laurent; Sandborn, William J; Panaccione, Remo

    2012-01-01

    Crohn's disease (CD) is a chronic inflammatory disorder characterized by periods of clinical remission alternating with periods of relapse defined by recurrent clinical symptoms. Persistent inflammation is believed to lead to progressive bowel damage over time, which manifests with the development of strictures, fistulae and abscesses. These disease complications frequently lead to a need for surgical resection, which in turn leads to disability. So CD can be characterized as a chronic, progressive, destructive and disabling disease. In rheumatoid arthritis, treatment paradigms have evolved beyond partial symptom control alone toward the induction and maintenance of sustained biological remission, also known as a 'treat to target' strategy, with the goal of improving long-term disease outcomes. In CD, there is currently no accepted, well-defined, comprehensive treatment goal that entails the treatment of both clinical symptoms and biologic inflammation. It is important that such a treatment concept begins to evolve for CD. A treatment strategy that delays or halts the progression of CD to increasing damage and disability is a priority. As a starting point, a working definition of sustained deep remission (that includes long-term biological remission and symptom control) with defined patient outcomes (including no disease progression) has been proposed. The concept of sustained deep remission represents a goal for CD management that may still evolve. It is not clear if the concept also applies to ulcerative colitis. Clinical trials are needed to evaluate whether treatment algorithms that tailor therapy to achieve deep remission in patients with CD can prevent disease progression and disability. Copyright © 2012 S. Karger AG, Basel.

  10. Topics in deep inelastic scattering

    International Nuclear Information System (INIS)

    Wandzura, S.M.

    1977-01-01

    Several topics in deep inelastic lepton-nucleon scattering are discussed, with emphasis on the structure functions appearing in polarized experiments. The major results are: an infinite set of new sum rules reducing the number of independent spin-dependent structure functions (for electroproduction) from two to one; the application of the techniques of Nachtmann to extract the coefficients appearing in the Wilson operator product expansion; and radiative corrections to the Wilson coefficients of free field theory. Also discussed is the use of dimensional regularization to simplify the calculation of these radiative corrections.

  11. Deep groundwater flow at Palmottu

    International Nuclear Information System (INIS)

    Niini, H.; Vesterinen, M.; Tuokko, T.

    1993-01-01

    Further observations, measurements, and calculations aimed at determining the groundwater flow regimes and periodical variations in flow at deeper levels were carried out in the Lake Palmottu (a natural analogue study site for radioactive waste disposal in southwestern Finland) drainage basin. These water movements affect the migration of radionuclides from the Palmottu U-Th deposit. The deep water flow is essentially restricted to the bedrock fractures which developed under, and are still affected by, the stress state of the bedrock. Determination of the detailed variations was based on fracture-tectonic modelling of the 12 most significant underground water-flow channels that cross the surficial water of the Palmottu area. According to the direction of the hydraulic gradient the deep water flow is mostly outwards from the Palmottu catchment but in the westernmost section it is partly towards the centre. Estimation of the water flow through the U-Th deposit by the water-balance method is still only approximate and needs continued observation series and improved field measurements

  12. Deep ocean model penetrator experiments

    International Nuclear Information System (INIS)

    Freeman, T.J.; Burdett, J.R.F.

    1986-01-01

Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting with a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site

  13. Academic Training: Deep Space Telescopes

    CERN Multimedia

    Françoise Benz

    2006-01-01

    2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 20, 21, 22, 23, 24 February from 11:00 to 12:00 - Council Chamber on 20, 21, 23, 24 February, TH Auditorium, bldg 4 - 3-006, on 22 February Deep Space Telescopes G. BIGNAMI / CNRS, Toulouse, F & Univ. di Pavia, I The short series of seminars will address results and aims of current and future space astrophysics as the cultural framework for the development of deep space telescopes. It will then present such new tools, as they are currently available to, or imagined by, the scientific community, in the context of the science plans of ESA and of all major world space agencies. Ground-based astronomy, in the 400 years since Galileo's telescope, has given us a profound phenomenological comprehension of our Universe, but has traditionally been limited to the narrow band(s) to which our terrestrial atmosphere is transparent. Celestial objects, however, do not care about our limitations, and distribute most of the information about their physics thro...

  14. Efficient simulations of fluid flow coupled with poroelastic deformations in pleated filters

    KAUST Repository

    Calo, Victor M.; Iliev, Dimitar; Iliev, Oleg; Kirsch, Ralf; Lakdawala, Zahra; Printsypar, Galina

    2015-01-01

The model describes a free fluid flow coupled with a flow in porous media in a domain that contains the filtering media. To discretize the complex computational domain we use quadrilateral boundary-fitted grids which resolve porous-fluid interfaces

  15. Image Captioning with Deep Bidirectional LSTMs

    OpenAIRE

    Wang, Cheng; Yang, Haojin; Bartz, Christian; Meinel, Christoph

    2016-01-01

    This work presents an end-to-end trainable deep bidirectional LSTM (Long-Short Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long term visual-language interactions by making use of history and future context information at high level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of nonlinearity transition in different way, are propose...
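The core mechanism the abstract describes, a bidirectional LSTM reading a sequence of CNN-derived features forward and backward and concatenating the two hidden states, can be sketched in plain numpy. This is a minimal illustration under assumed toy dimensions, not the authors' model; the random features stand in for real CNN image features, and no training is shown.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_run(xs, W, U, b, H):
    """Run one LSTM over a sequence of vectors, returning all hidden states."""
    h, c, hs = np.zeros(H), np.zeros(H), []
    for x in xs:
        z = W @ x + U @ h + b                           # all four gates in one product
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c + i * g                               # cell-state update
        h = o * np.tanh(c)                              # hidden state
        hs.append(h)
    return np.stack(hs)

def bidirectional_lstm(xs, params_fwd, params_bwd, H):
    """Concatenate forward-in-time and backward-in-time hidden states."""
    h_fwd = lstm_run(xs, *params_fwd, H)
    h_bwd = lstm_run(xs[::-1], *params_bwd, H)[::-1]    # re-align to forward order
    return np.concatenate([h_fwd, h_bwd], axis=1)       # shape (T, 2H)

rng = np.random.default_rng(0)
D, H, T = 8, 16, 5                                      # toy feature dim, state dim, length
make = lambda: (0.1 * rng.standard_normal((4*H, D)),
                0.1 * rng.standard_normal((4*H, H)),
                np.zeros(4*H))
feats = rng.standard_normal((T, D))                     # stand-in for CNN image features
out = bidirectional_lstm(feats, make(), make(), H)
print(out.shape)                                        # (5, 32)
```

In the full captioning model each concatenated state would feed a vocabulary softmax; here the point is only how the two directions give each position both history and future context.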

  16. Deep inelastic processes and the parton model

    International Nuclear Information System (INIS)

    Altarelli, G.

    The lecture was intended as an elementary introduction to the physics of deep inelastic phenomena from the point of view of theory. General formulae and facts concerning inclusive deep inelastic processes in the form: l+N→l'+hadrons (electroproduction, neutrino scattering) are first recalled. The deep inelastic annihilation e + e - →hadrons is then envisaged. The light cone approach, the parton model and their relation are mainly emphasized

  17. Deep inelastic electron and muon scattering

    International Nuclear Information System (INIS)

    Taylor, R.E.

    1975-07-01

    From the review of deep inelastic electron and muon scattering it is concluded that the puzzle of deep inelastic scattering versus annihilation was replaced with the challenge of the new particles, that the evidence for the simplest quark-algebra models of deep inelastic processes is weaker than a year ago. Definite evidence of scale breaking was found but the specific form of that scale breaking is difficult to extract from the data. 59 references

  18. Fast, Distributed Algorithms in Deep Networks

    Science.gov (United States)

    2016-05-11

A Trident Scholar project report (no. 446), "Fast, Distributed Algorithms in Deep Networks", by Midshipman 1/C Ryan J. Burmeister, USN. The report applies the ADMM method to network training: while ADMM allows for quick training of shallow networks, additional work will need to be done in order to allow its application to deep nets. References include Quoc V. Le et al., "Large scale distributed deep networks", Advances in Neural Information Processing Systems, pages 1223-1231, 2012.

  19. Learning Transferable Features with Deep Adaptation Networks

    OpenAIRE

    Long, Mingsheng; Cao, Yue; Wang, Jianmin; Jordan, Michael I.

    2015-01-01

    Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation...

  20. An overview of latest deep water technologies

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

The 8th Deep Offshore Technology Conference (DOT VIII, Rio de Janeiro, October 30 - November 3, 1995) has brought together renowned specialists in deep water development projects, as well as managers from oil companies and engineering/service companies, to discuss state-of-the-art technologies and ongoing projects in the deep offshore. This paper is a compilation of the session summaries about subsea technologies, mooring and dynamic positioning, floaters (Tension Leg Platforms (TLP) and Floating Production, Storage and Offloading (FPSO) units), pipelines and risers, exploration and drilling, and other deep water techniques. (J.S.)

  1. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  2. The deep ocean under climate change

    Science.gov (United States)

    Levin, Lisa A.; Le Bris, Nadine

    2015-11-01

    The deep ocean absorbs vast amounts of heat and carbon dioxide, providing a critical buffer to climate change but exposing vulnerable ecosystems to combined stresses of warming, ocean acidification, deoxygenation, and altered food inputs. Resulting changes may threaten biodiversity and compromise key ocean services that maintain a healthy planet and human livelihoods. There exist large gaps in understanding of the physical and ecological feedbacks that will occur. Explicit recognition of deep-ocean climate mitigation and inclusion in adaptation planning by the United Nations Framework Convention on Climate Change (UNFCCC) could help to expand deep-ocean research and observation and to protect the integrity and functions of deep-ocean ecosystems.

  3. Docker Containers for Deep Learning Experiments

    OpenAIRE

    Gerke, Paul K.

    2017-01-01

Deep learning is a powerful tool to solve problems in the area of image analysis. The dominant compute platform for deep learning is Nvidia's proprietary CUDA, which can only be used together with Nvidia graphics cards. The nvidia-docker project allows exposing Nvidia graphics cards to docker containers and thus makes it possible to run deep learning experiments in docker containers. In our department, we use deep learning to solve problems in the area of medical image analysis and use docker ...

  4. Deep Brain Stimulation for Parkinson's Disease

    Science.gov (United States)

... about the BRAIN initiative, see www.nih.gov/science/brain . Definition: Deep ...

  5. Cultivating the Deep Subsurface Microbiome

    Science.gov (United States)

    Casar, C. P.; Osburn, M. R.; Flynn, T. M.; Masterson, A.; Kruger, B.

    2017-12-01

    Subterranean ecosystems are poorly understood because many microbes detected in metagenomic surveys are only distantly related to characterized isolates. Cultivating microorganisms from the deep subsurface is challenging due to its inaccessibility and potential for contamination. The Deep Mine Microbial Observatory (DeMMO) in Lead, SD however, offers access to deep microbial life via pristine fracture fluids in bedrock to a depth of 1478 m. The metabolic landscape of DeMMO was previously characterized via thermodynamic modeling coupled with genomic data, illustrating the potential for microbial inhabitants of DeMMO to utilize mineral substrates as energy sources. Here, we employ field and lab based cultivation approaches with pure minerals to link phylogeny to metabolism at DeMMO. Fracture fluids were directed through reactors filled with Fe3O4, Fe2O3, FeS2, MnO2, and FeCO3 at two sites (610 m and 1478 m) for 2 months prior to harvesting for subsequent analyses. We examined mineralogical, geochemical, and microbiological composition of the reactors via DNA sequencing, microscopy, lipid biomarker characterization, and bulk C and N isotope ratios to determine the influence of mineralogy on biofilm community development. Pre-characterized mineral chips were imaged via SEM to assay microbial growth; preliminary results suggest MnO2, Fe3O4, and Fe2O3 were most conducive to colonization. Solid materials from reactors were used as inoculum for batch cultivation experiments. Media designed to mimic fracture fluid chemistry was supplemented with mineral substrates targeting metal reducers. DNA sequences and microscopy of iron oxide-rich biofilms and fracture fluids suggest iron oxidation is a major energy source at redox transition zones where anaerobic fluids meet more oxidizing conditions. We utilized these biofilms and fluids as inoculum in gradient cultivation experiments targeting microaerophilic iron oxidizers. Cultivation of microbes endemic to DeMMO, a system

6. Deep inelastic scattering and diquarks

    International Nuclear Information System (INIS)

    Anselmino, M.

    1993-01-01

The most comprehensive and detailed analyses of the existing data on the structure function F_2(x, Q^2) of free nucleons, from the deep inelastic scattering (DIS) of charged leptons on hydrogen and deuterium targets, have proved beyond any doubt that higher-twist, 1/Q^2 corrections are needed in order to obtain a perfect agreement between perturbative QCD predictions and the data. These higher-twist corrections take into account two-quark correlations inside the nucleon; it is then natural to try to model them in the quark-diquark model of the proton. In so doing all interactions between the two quarks inside the diquark, both perturbative and non-perturbative, are supposed to be taken into account. (orig./HSI)

  7. Detector for deep well logging

    International Nuclear Information System (INIS)

    1976-01-01

    A substantial improvement in the useful life and efficiency of a deep-well scintillation detector is achieved by a unique construction wherein the steel cylinder enclosing the sodium iodide scintillation crystal is provided with a tapered recess to receive a glass window which has a high transmittance at the critical wavelength and, for glass, a high coefficient of thermal expansion. A special high-temperature epoxy adhesive composition is employed to form a relatively thick sealing annulus which keeps the glass window in the tapered recess and compensates for the differences in coefficients of expansion between the container and glass so as to maintain a hermetic seal as the unit is subjected to a wide range of temperature

  8. Deep borehole disposal of plutonium

    International Nuclear Information System (INIS)

    Gibb, F. G. F.; Taylor, K. J.; Burakov, B. E.

    2008-01-01

Excess plutonium not destined for burning as MOX or in Generation IV reactors is both a long-term waste management problem and a security threat. Immobilisation in mineral and ceramic-based waste forms for interim safe storage and eventual disposal is a widely proposed first step. The safest and most secure form of geological disposal for Pu yet suggested is in very deep boreholes and we propose here that the key to successful combination of these immobilisation and disposal concepts is the encapsulation of the waste form in small cylinders of recrystallized granite. The underlying science is discussed and the results of high pressure and temperature experiments on zircon, depleted UO2 and Ce-doped cubic zirconia enclosed in granitic melts are presented. The outcomes of these experiments demonstrate the viability of the proposed solution and that Pu could be successfully isolated from its environment for many millions of years. (authors)

  9. Automatic Differentiation and Deep Learning

    CERN Multimedia

    CERN. Geneva

    2018-01-01

Statistical learning has been getting more and more interest from the particle-physics community in recent times, with neural networks and gradient-based optimization being a focus. In this talk we shall discuss three things: automatic differentiation tools: tools to quickly build DAGs of computation that are fully differentiable. We shall focus on one such tool, "PyTorch". Easy deployment of trained neural networks into large systems with many constraints: for example, deploying a model at the reconstruction phase where the neural network has to be integrated into CERN's bulk data-processing C++-only environment. Some recent models in deep learning for segmentation and generation that might be useful for particle physics problems.

  10. Jets in deep inelastic scattering

    International Nuclear Information System (INIS)

    Joensson, L.

    1995-01-01

Jet production in deep inelastic scattering provides a basis for the investigation of various phenomena related to QCD. Two-jet production at large Q^2 has been studied and the distributions with respect to the partonic scaling variables have been compared to models and to next-to-leading-order calculations. The first observations of azimuthal asymmetries of jets produced in first-order α_s processes have been obtained. The gluon-initiated boson-gluon fusion process permits a direct determination of the gluon density of the proton from an analysis of the jets produced in the hard scattering process. A comparison of these results with those from indirect extractions of the gluon density provides an important test of QCD. (author)

  11. NESTOR Deep Sea Neutrino Telescope

    International Nuclear Information System (INIS)

    Aggouras, G.; Anassontzis, E.G.; Ball, A.E.; Bourlis, G.; Chinowsky, W.; Fahrun, E.; Grammatikakis, G.; Green, C.; Grieder, P.; Katrivanos, P.; Koske, P.; Leisos, A.; Markopoulos, E.; Minkowsky, P.; Nygren, D.; Papageorgiou, K.; Przybylski, G.; Resvanis, L.K.; Siotis, I.; Sopher, J.; Staveris-Polikalas, A.; Tsagli, V.; Tsirigotis, A.; Tzamarias, S.; Zhukov, V.A.

    2006-01-01

One module of NESTOR, the Mediterranean deep-sea neutrino telescope, was deployed at a depth of 4000 m, 14 km off the Sapienza Island, off the south-west coast of Greece. The deployment site provides excellent environmental characteristics. The deployed NESTOR module is constructed as a hexagonal, star-like latticed titanium frame with 12 Optical Modules and a one-meter-diameter titanium sphere which houses the electronics. Power and data were transferred through a 30 km electro-optical cable to the shore laboratory. In this report we describe briefly the detector and the detector electronics, discuss the first physics data acquired, and give the zenith angular distribution of the reconstructed muons

  12. Deep Borehole Disposal Safety Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Freeze, Geoffrey A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stein, Emily [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Price, Laura L. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); MacKinnon, Robert J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Tillman, Jack Bruce [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    This report presents a preliminary safety analysis for the deep borehole disposal (DBD) concept, using a safety case framework. A safety case is an integrated collection of qualitative and quantitative arguments, evidence, and analyses that substantiate the safety, and the level of confidence in the safety, of a geologic repository. This safety case framework for DBD follows the outline of the elements of a safety case, and identifies the types of information that will be required to satisfy these elements. At this very preliminary phase of development, the DBD safety case focuses on the generic feasibility of the DBD concept. It is based on potential system designs, waste forms, engineering, and geologic conditions; however, no specific site or regulatory framework exists. It will progress to a site-specific safety case as the DBD concept advances into a site-specific phase, progressing through consent-based site selection and site investigation and characterization.

  13. DeepRT: deep learning for peptide retention time prediction in proteomics

    OpenAIRE

    Ma, Chunwei; Zhu, Zhiyong; Ye, Jun; Yang, Jiarui; Pei, Jianguo; Xu, Shaohang; Zhou, Ruo; Yu, Chang; Mo, Fan; Wen, Bo; Liu, Siqi

    2017-01-01

Accurate predictions of peptide retention times (RT) in liquid chromatography have many applications in mass spectrometry-based proteomics. Herein, we present DeepRT, a deep learning based software for peptide retention time prediction. DeepRT automatically learns features directly from the peptide sequences using deep convolutional neural network (CNN) and recurrent neural network (RNN) models, which eliminates the need to use hand-crafted features or rules. After the feature learning, pr...
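The learn-features-from-the-sequence idea in this abstract can be illustrated with a tiny numpy sketch: one-hot encode the peptide, slide convolutional filters over it, pool over positions, and map the pooled features to a retention time with a linear head. All sizes, filters, and weights below are arbitrary assumptions for illustration, not DeepRT's architecture, and the untrained prediction is meaningless until the weights are fitted.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"                       # the 20 standard amino acids
IDX = {a: i for i, a in enumerate(AA)}

def one_hot(peptide):
    """Encode a peptide string as a (length, 20) one-hot matrix."""
    m = np.zeros((len(peptide), len(AA)))
    m[np.arange(len(peptide)), [IDX[a] for a in peptide]] = 1.0
    return m

def conv1d(x, kernels):
    """Valid 1-D convolution with ReLU: (L, 20) input, (k, 20, F) kernels -> (L-k+1, F)."""
    k = kernels.shape[0]
    windows = np.stack([x[i:i + k] for i in range(x.shape[0] - k + 1)])
    return np.maximum(0.0, np.einsum("lkc,kcf->lf", windows, kernels))

def predict_rt(peptide, kernels, w, b):
    """Toy RT prediction: conv features, max-pool over positions, linear head."""
    feats = conv1d(one_hot(peptide), kernels).max(axis=0)
    return float(feats @ w + b)

rng = np.random.default_rng(1)
F, k = 4, 3                                       # 4 filters of width 3 (toy choices)
kernels = 0.1 * rng.standard_normal((k, len(AA), F))
w, b = rng.standard_normal(F), 0.0
rt = predict_rt("ACDEFGHIK", kernels, w, b)
print(rt)
```

The max-pool makes the prediction length-independent, which is the same reason convolutional feature learners are convenient for variable-length peptide sequences.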

  14. Equivalent drawbead performance in deep drawing simulations

    NARCIS (Netherlands)

    Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han

    1999-01-01

    Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the

  15. Is deep dreaming the new collage?

    Science.gov (United States)

    Boden, Margaret A.

    2017-10-01

    Deep dreaming (DD) can combine and transform images in surprising ways. But, being based in deep learning (DL), it is not analytically understood. Collage is an art form that is constrained along various dimensions. DD will not be able to generate collages until DL can be guided in a disciplined fashion.

  16. Deep web search: an overview and roadmap

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2011-01-01

    We review the state-of-the-art in deep web search and propose a novel classification scheme to better compare deep web search systems. The current binary classification (surfacing versus virtual integration) hides a number of implicit decisions that must be made by a developer. We make these

  17. Research Proposal for Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2010-01-01

    This proposal identifies two main problems related to deep web search, and proposes a step by step solution for each of them. The first problem is about searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input

  18. Development of Hydro-Mechanical Deep Drawing

    DEFF Research Database (Denmark)

    Zhang, Shi-Hong; Danckert, Joachim

    1998-01-01

The hydro-mechanical deep-drawing process is reviewed in this article. The process principles and features are introduced and the developments of the hydro-mechanical deep-drawing process in process performance, in theory and in numerical simulation are described. The applications are summarized. Some other related hydraulic forming processes are also dealt with as a comparison.

  19. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
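The abstract's central idea, forward propagation as the discretization of an ODE, can be made concrete with a short sketch: a residual network h_{j+1} = h_j + δ·tanh(K_j h_j + b_j) is exactly a forward-Euler step of dh/dt = tanh(K(t)h + b(t)). The antisymmetric choice of K below is one stabilizing construction discussed in this line of work; treating it as representative of the paper's specific architectures is an assumption.

```python
import numpy as np

def resnet_forward(h0, Ks, bs, delta=0.1):
    """Forward propagation as forward-Euler discretization of
    dh/dt = tanh(K(t) h + b(t)):  h_{j+1} = h_j + delta * tanh(K_j h_j + b_j)."""
    h = h0
    for K, b in zip(Ks, bs):
        h = h + delta * np.tanh(K @ h + b)
    return h

rng = np.random.default_rng(2)
n, depth = 4, 50
# K = A - A^T is antisymmetric, so its eigenvalues are purely imaginary and the
# underlying continuous dynamics neither blow up nor die out (a stability device).
Ks = []
for _ in range(depth):
    A = rng.standard_normal((n, n))
    Ks.append(A - A.T)
bs = [np.zeros(n)] * depth
h0 = rng.standard_normal(n)
h = resnet_forward(h0, Ks, bs)
print(np.isfinite(h).all())   # activations stay bounded through 50 layers: True
```

In this view, exploding or vanishing gradients correspond to instability or over-damping of the discrete ODE, which is what motivates analyzing the eigenvalues of K and the step size δ.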

  20. Temperature impacts on deep-sea biodiversity.

    Science.gov (United States)

    Yasuhara, Moriaki; Danovaro, Roberto

    2016-05-01

    Temperature is considered to be a fundamental factor controlling biodiversity in marine ecosystems, but precisely what role temperature plays in modulating diversity is still not clear. The deep ocean, lacking light and in situ photosynthetic primary production, is an ideal model system to test the effects of temperature changes on biodiversity. Here we synthesize current knowledge on temperature-diversity relationships in the deep sea. Our results from both present and past deep-sea assemblages suggest that, when a wide range of deep-sea bottom-water temperatures is considered, a unimodal relationship exists between temperature and diversity (that may be right skewed). It is possible that temperature is important only when at relatively high and low levels but does not play a major role in the intermediate temperature range. Possible mechanisms explaining the temperature-biodiversity relationship include the physiological-tolerance hypothesis, the metabolic hypothesis, island biogeography theory, or some combination of these. The possible unimodal relationship discussed here may allow us to identify tipping points at which on-going global change and deep-water warming may increase or decrease deep-sea biodiversity. Predicted changes in deep-sea temperatures due to human-induced climate change may have more adverse consequences than expected considering the sensitivity of deep-sea ecosystems to temperature changes. © 2014 Cambridge Philosophical Society.

  1. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

  2. Analyses of the deep borehole drilling status for a deep borehole disposal system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Youl; Choi, Heui Joo; Lee, Min Soo; Kim, Geon Young; Kim, Kyung Su [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

The purpose of disposal for radioactive wastes is not only to isolate them from humans, but also to inhibit leakage of any radioactive materials into the accessible environment. Because of the extremely high level and long time scale of the radioactivity of HLW (high-level radioactive waste), a mined deep geological disposal concept, with a disposal depth of about 500 m below ground, is considered the safest method to isolate spent fuel or high-level radioactive waste from the human environment with the best available technology at the present time. Therefore, deep borehole disposal technology is under consideration in a number of countries as an alternative disposal concept, in terms of its outstanding safety and cost effectiveness. In this paper, as one of the key technologies of a deep borehole disposal system, the general status of deep drilling technologies in the oil industry, the geothermal industry and the geoscientific field is reviewed for deep borehole disposal of high-level radioactive wastes. Based on the results of this review, the preliminary applicability of deep drilling technology for deep borehole disposal, such as the relation between depth and diameter, drilling time and feasibility classification, is analyzed.

  3. Overview of deep learning in medical imaging.

    Science.gov (United States)

    Suzuki, Kenji

    2017-09-01

The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a ...

  4. WFIRST: Science from Deep Field Surveys

    Science.gov (United States)

    Koekemoer, Anton; Foley, Ryan; WFIRST Deep Field Working Group

    2018-01-01

    WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would for example yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed on locations in the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and present here a summary of the various properties of different locations in the sky that may be considered for future deep fields with WFIRST.

  5. Deep Learning and Its Applications in Biomedicine.

    Science.gov (United States)

    Cao, Chensi; Liu, Feng; Tan, Hai; Song, Deshou; Shu, Wenjie; Li, Weizhong; Zhou, Yiming; Bo, Xiaochen; Xie, Zhi

    2018-02-01

Advances in biological and medical technologies have been providing us with explosive volumes of biological and physiological data, such as medical images, electroencephalography, and genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives for the future directions in the field of deep learning. Copyright © 2018. Production and hosting by Elsevier B.V.

  6. The deep lymphatic anatomy of the hand.

    Science.gov (United States)

    Ma, Chuan-Xiang; Pan, Wei-Ren; Liu, Zhi-An; Zeng, Fan-Qiang; Qiu, Zhi-Qiang

    2018-04-03

    The deep lymphatic anatomy of the hand still remains the least described in medical literature. Eight hands were harvested from four nonembalmed human cadavers amputated above the wrist. A small amount of 6% hydrogen peroxide was employed to detect the lymphatic vessels around the superficial and deep palmar vascular arches, in webs from the index to little fingers, the thenar and hypothenar areas. A 30-gauge needle was inserted into the vessels and injected with a barium sulphate compound. Each specimen was dissected, photographed and radiographed to demonstrate deep lymphatic distribution of the hand. Five groups of deep collecting lymph vessels were found in the hand: superficial palmar arch lymph vessel (SPALV); deep palmar arch lymph vessel (DPALV); thenar lymph vessel (TLV); hypothenar lymph vessel (HTLV); deep finger web lymph vessel (DFWLV). Each group of vessels drained in different directions first, then all turned and ran towards the wrist in different layers. The deep lymphatic drainage of the hand has been presented. The results will provide an anatomical basis for clinical management, educational reference and scientific research. Copyright © 2018 Elsevier GmbH. All rights reserved.

  7. Hello World Deep Learning in Medical Imaging.

    Science.gov (United States)

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
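
    The tutorial's own code is not reproduced in this record. As a hedged illustration of the kind of network such a tutorial builds, here is a from-scratch NumPy sketch of a tiny CNN forward pass (one convolution, ReLU, max-pooling, and a sigmoid output); the 8x8 "image", kernel size, and layer shapes are illustrative assumptions, not the authors' actual code, which uses a deep learning framework.

    ```python
    # Minimal NumPy sketch of a small CNN forward pass for binary
    # image classification. All shapes are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def conv2d(x, k):
        """Valid 2-D convolution of a single-channel image x with kernel k."""
        h, w = x.shape
        kh, kw = k.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        return out

    def relu(x):
        return np.maximum(x, 0.0)

    def max_pool(x, s=2):
        """Non-overlapping s-by-s max pooling."""
        h, w = x.shape
        return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One conv -> ReLU -> pool -> dense -> sigmoid, on a fake 8x8 "image".
    image = rng.random((8, 8))
    kernel = rng.standard_normal((3, 3))
    features = max_pool(relu(conv2d(image, kernel)))  # 3x3 feature map
    w = rng.standard_normal(features.size)
    p = sigmoid(features.ravel() @ w)                 # "probability" of class 1
    print(0.0 < p < 1.0)  # True
    ```

    In practice a framework such as the ones the tutorial surveys would supply these layers, plus training via backpropagation; the sketch only shows the data flow a reader new to the field can expect.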

  8. Deep Generative Models for Molecular Science

    DEFF Research Database (Denmark)

    Jørgensen, Peter Bjørn; Schmidt, Mikkel Nørgaard; Winther, Ole

    2018-01-01

    Generative deep machine learning models now rival traditional quantum-mechanical computations in predicting properties of new structures, and they come with a significantly lower computational cost, opening new avenues in computational molecular science. In the last few years, a variety of deep...... generative models have been proposed for modeling molecules, which differ in both their model structure and choice of input features. We review these recent advances within deep generative models for predicting molecular properties, with particular focus on models based on the probabilistic autoencoder (or...

  9. Harnessing the Deep Web: Present and Future

    OpenAIRE

    Madhavan, Jayant; Afanasiev, Loredana; Antova, Lyublena; Halevy, Alon

    2009-01-01

    Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per-second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...

  10. Desalination Economic Evaluation Program (DEEP). User's manual

    International Nuclear Information System (INIS)

    2000-01-01

    DEEP (formerly named the "Co-generation and Desalination Economic Evaluation" spreadsheet, CDEE) was originally developed by General Atomics under contract, and has been used in the IAEA's feasibility studies. For further confidence in the software, it was validated in March 1998. After that, a user-friendly version was issued under the name DEEP at the end of 1998. DEEP output includes the levelised cost of water and power, a breakdown of cost components, energy consumption and net saleable power for each selected option. Specific power plants can be modelled by adjustment of input data including design power, power cycle parameters and costs
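
    DEEP's internal models are not reproduced in this record, but the "levelised cost" its output reports is, in standard form, the ratio of discounted lifetime costs to discounted lifetime output. A hedged sketch of that textbook formula (not DEEP's own code; the yearly figures and discount rate are invented for illustration):

    ```python
    # Levelised cost = sum of discounted annual costs divided by
    # sum of discounted annual output (water in m^3 or energy in kWh).
    def levelised_cost(costs, outputs, rate):
        """costs[t] and outputs[t] for each year t; rate is the discount rate."""
        disc = [(1 + rate) ** -t for t in range(len(costs))]
        total_cost = sum(c * d for c, d in zip(costs, disc))
        total_output = sum(o * d for o, d in zip(outputs, disc))
        return total_cost / total_output

    # Example: 3 years at $2M/yr producing 1.5M m^3/yr, 5% discount rate.
    lc = levelised_cost([2e6] * 3, [1.5e6] * 3, 0.05)
    print(round(lc, 3))  # 1.333 ($/m^3)
    ```

    With constant annual costs and output, the discount factors cancel, so the result is simply the cost-to-output ratio; with time-varying profiles the discounting weights early years more heavily.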

  11. Zooplankton at deep Red Sea brine pools

    KAUST Repository

    Kaartvedt, Stein

    2016-03-02

    The deep-sea anoxic brines of the Red Sea comprise unique, complex and extreme habitats. These environments are too harsh for metazoans, while the brine–seawater interface harbors dense microbial populations. We investigated the adjacent pelagic fauna at two brine pools using net tows, video records from a remotely operated vehicle and submerged echosounders. Waters just above the brine pool of Atlantis II Deep (2000 m depth) appeared depleted of macrofauna. In contrast, the fauna appeared to be enriched at the Kebrit Deep brine–seawater interface (1466 m).

  12. NATURAL GAS RESOURCES IN DEEP SEDIMENTARY BASINS

    Energy Technology Data Exchange (ETDEWEB)

    Thaddeus S. Dyman; Troy Cook; Robert A. Crovelli; Allison A. Henry; Timothy C. Hester; Ronald C. Johnson; Michael D. Lewan; Vito F. Nuccio; James W. Schmoker; Dennis B. Riggin; Christopher J. Schenk

    2002-02-05

    From a geological perspective, deep natural gas resources are generally defined as resources occurring in reservoirs at or below 15,000 feet, whereas ultra-deep gas occurs below 25,000 feet. From an operational point of view, "deep" is often thought of in a relative sense based on the geologic and engineering knowledge of gas (and oil) resources in a particular area. Deep gas can be found in either conventionally-trapped or unconventional basin-center accumulations that are essentially large single fields having spatial dimensions often exceeding those of conventional fields. Exploration for deep conventional and unconventional basin-center natural gas resources deserves special attention because these resources are widespread and occur in diverse geologic environments. In 1995, the U.S. Geological Survey estimated that 939 Tcf of technically recoverable natural gas remained to be discovered or was part of reserve appreciation from known fields in the onshore areas and State waters of the United States. Of this USGS resource, nearly 114 trillion cubic feet (Tcf) of technically-recoverable gas remains to be discovered from deep sedimentary basins. Worldwide estimates of deep gas are also high. The U.S. Geological Survey World Petroleum Assessment 2000 Project recently estimated a world mean undiscovered conventional gas resource outside the U.S. of 844 Tcf below 4.5 km (about 15,000 feet). Less is known about the origins of deep gas than about the origins of gas at shallower depths because fewer wells have been drilled into the deeper portions of many basins. Some of the many factors contributing to the origin of deep gas include the thermal stability of methane, the role of water and non-hydrocarbon gases in natural gas generation, porosity loss with increasing thermal maturity, the kinetics of deep gas generation, thermal cracking of oil to gas, and source rock potential based on thermal maturity and kerogen type. Recent experimental simulations
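
    The depth thresholds quoted in the abstract (deep: at or below 15,000 feet; ultra-deep: below 25,000 feet) can be captured as a small helper; the function name and the "conventional-depth" label for shallower reservoirs are assumptions for illustration only.

    ```python
    # Classify a reservoir depth using the USGS-style thresholds
    # quoted above: deep at >= 15,000 ft, ultra-deep below 25,000 ft.
    def gas_depth_class(depth_ft):
        if depth_ft > 25_000:
            return "ultra-deep"
        if depth_ft >= 15_000:
            return "deep"
        return "conventional-depth"

    print(gas_depth_class(16_500))  # deep
    ```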

  13. Comet Dust After Deep Impact

    Science.gov (United States)

    Wooden, Diane H.; Harker, David E.; Woodward, Charles E.

    2006-01-01

    When the Deep Impact Mission hit Jupiter Family comet 9P/Tempel 1, an ejecta crater was formed and a pocket of volatile gases and ices from 10-30 m below the surface was exposed (A'Hearn et al. 2005). This resulted in a gas geyser that persisted for a few hours (Sugita et al. 2005). The gas geyser pushed dust grains into the coma (Sugita et al. 2005), as well as ice grains (Schulz et al. 2006). The smaller of the dust grains were submicron in radii (0.2-0.3 micron), and were primarily composed of highly refractory minerals including amorphous (non-graphitic) carbon, and silicate minerals including amorphous (disordered) olivine (Fe,Mg)2SiO4 and pyroxene (Fe,Mg)SiO3 and crystalline Mg-rich olivine. The smaller grains moved faster, as expected from the size-dependent velocity law produced by gas drag on grains. The mineralogy evolved with time: progressively larger grains persisted in the near-nucleus region, having been imparted with slower velocities, and the mineralogies of these larger grains appeared simpler and without crystals. The smaller 0.2-0.3 micron grains reached the coma in about 1.5 hours (1 arc sec = 740 km), were more diverse in mineralogy than the larger grains and contained crystals, and appeared to travel through the coma together. No smaller grains appeared at larger coma distances later (with slower velocities), implying that if grain fragmentation occurred, it happened within the gas acceleration zone. These results of the high spatial resolution spectroscopy (GEMINI+Michelle: Harker et al. 2005, 2006; Subaru+COMICS: Sugita et al. 2005) revealed that the grains released from the interior were different from the nominally active areas of this comet by their: (a) crystalline content, (b) smaller size, (c) more diverse mineralogy. The temporal changes in the spectra, recorded by GEMINI+Michelle every 7 minutes, indicated that the dust mineralogy is inhomogeneous and, unexpectedly, the portion of the size distribution dominated by smaller grains has

  14. Anisotropy in the deep Earth

    Science.gov (United States)

    Romanowicz, Barbara; Wenk, Hans-Rudolf

    2017-08-01

    Seismic anisotropy has been found in many regions of the Earth's interior. Its presence in the Earth's crust has been known since the 19th century, and is due in part to the alignment of anisotropic crystals in rocks, and in part to patterns in the distribution of fractures and pores. In the upper mantle, seismic anisotropy was discovered 50 years ago, and can be attributed for the most part, to the alignment of intrinsically anisotropic olivine crystals during large scale deformation associated with convection. There is some indication for anisotropy in the transition zone, particularly in the vicinity of subducted slabs. Here we focus on the deep Earth - the lower mantle and core, where anisotropy is not yet mapped in detail, nor is there consensus on its origin. Most of the lower mantle appears largely isotropic, except in the last 200-300 km, in the D″ region, where evidence for seismic anisotropy has been accumulating since the late 1980s, mostly from shear wave splitting measurements. Recently, a picture has been emerging, where strong anisotropy is associated with high shear velocities at the edges of the large low shear velocity provinces (LLSVPs) in the central Pacific and under Africa. These observations are consistent with being due to the presence of highly anisotropic MgSiO3 post-perovskite crystals, aligned during the deformation of slabs impinging on the core-mantle boundary, and upwelling flow within the LLSVPs. We also discuss mineral physics aspects such as ultrahigh pressure deformation experiments, first principles calculations to obtain information about elastic properties, and derivation of dislocation activity based on bonding characteristics. Polycrystal plasticity simulations can predict anisotropy but models are still highly idealized and neglect the complex microstructure of polyphase aggregates with strong and weak components. 
A promising direction for future progress in understanding the origin of seismic anisotropy in the deep mantle

  15. DeepDive: Declarative Knowledge Base Construction.

    Science.gov (United States)

    De Sa, Christopher; Ratner, Alex; Ré, Christopher; Shin, Jaeho; Wang, Feiran; Wu, Sen; Zhang, Ce

    2016-03-01

    The dark data extraction or knowledge base construction (KBC) problem is to populate a SQL database with information from unstructured data sources including emails, webpages, and pdf reports. KBC is a long-standing problem in industry and research that encompasses problems of data extraction, cleaning, and integration. We describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems. The key idea in DeepDive is that statistical inference and machine learning are key tools to attack classical data problems in extraction, cleaning, and integration in a unified and more effective manner. DeepDive programs are declarative in that one cannot write probabilistic inference algorithms; instead, one interacts by defining features or rules about the domain. A key reason for this design choice is to enable domain experts to build their own KBC systems. We present the applications, abstractions, and techniques of DeepDive employed to accelerate construction of KBC systems.

  16. Variational inference & deep learning : A new synthesis

    NARCIS (Netherlands)

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  17. Pathways to deep decarbonization - 2015 report

    International Nuclear Information System (INIS)

    Ribera, Teresa; Colombier, Michel; Waisman, Henri; Bataille, Chris; Pierfederici, Roberta; Sachs, Jeffrey; Schmidt-Traub, Guido; Williams, Jim; Segafredo, Laura; Hamburg Coplan, Jill; Pharabod, Ivan; Oury, Christian

    2015-12-01

    In September 2015, the Deep Decarbonization Pathways Project published the Executive Summary of the Pathways to Deep Decarbonization: 2015 Synthesis Report. The full 2015 Synthesis Report was launched in Paris on December 3, 2015, at a technical workshop with the Mitigation Action Plans and Scenarios (MAPS) program. The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (deg. C). Achieving the 2 deg. C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. In turn, this will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization'

  18. Variational inference & deep learning: A new synthesis

    OpenAIRE

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  19. DNA Replication Profiling Using Deep Sequencing.

    Science.gov (United States)

    Saayman, Xanita; Ramos-Pérez, Cristina; Brown, Grant W

    2018-01-01

    Profiling of DNA replication during progression through S phase allows a quantitative snap-shot of replication origin usage and DNA replication fork progression. We present a method for using deep sequencing data to profile DNA replication in S. cerevisiae.
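
    The method itself is described in the paper, not in this record. As a hedged sketch of the core computation in sequencing-based replication profiling, one compares binned read depth from replicating (S-phase) cells against a non-replicating (G1) control, each normalized to its total; bin counts below are synthetic and the function name is an assumption for illustration.

    ```python
    # Per-bin S/G1 read-depth ratio: bins that replicate early in
    # S phase are over-represented among S-phase reads, so their
    # normalized ratio exceeds 1.
    import numpy as np

    def replication_ratio(s_counts, g1_counts):
        """Each sample is normalized to its total reads before the ratio."""
        s = np.asarray(s_counts, dtype=float)
        g1 = np.asarray(g1_counts, dtype=float)
        return (s / s.sum()) / (g1 / g1.sum())

    # Synthetic reads per genomic bin; the G1 control is flat.
    s_phase = [120, 100, 60, 40]
    g1 = [80, 80, 80, 80]
    ratios = replication_ratio(s_phase, g1)
    print(ratios[0] > ratios[-1])  # True: the first bin replicates earlier
    ```

    Real pipelines add mappability and GC corrections and smooth the profile along each chromosome, but the normalized depth ratio is the quantitative snapshot the abstract refers to.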

  20. DAPs: Deep Action Proposals for Action Understanding

    KAUST Repository

    Escorcia, Victor; Caba Heilbron, Fabian; Niebles, Juan Carlos; Ghanem, Bernard

    2016-01-01

    action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve from untrimmed videos temporal segments, which are likely to contain actions. A comprehensive evaluation indicates

  1. Evaluation of static resistance of deep foundations.

    Science.gov (United States)

    2017-05-01

    The focus of this research was to evaluate and improve Florida Department of Transportation (FDOT) FB-Deep software prediction of nominal resistance of H-piles, prestressed concrete piles in limestone, large diameter (> 36) open steel and concrete...

  2. The deep ocean under climate change.

    Science.gov (United States)

    Levin, Lisa A; Le Bris, Nadine

    2015-11-13

    The deep ocean absorbs vast amounts of heat and carbon dioxide, providing a critical buffer to climate change but exposing vulnerable ecosystems to combined stresses of warming, ocean acidification, deoxygenation, and altered food inputs. Resulting changes may threaten biodiversity and compromise key ocean services that maintain a healthy planet and human livelihoods. There exist large gaps in understanding of the physical and ecological feedbacks that will occur. Explicit recognition of deep-ocean climate mitigation and inclusion in adaptation planning by the United Nations Framework Convention on Climate Change (UNFCCC) could help to expand deep-ocean research and observation and to protect the integrity and functions of deep-ocean ecosystems. Copyright © 2015, American Association for the Advancement of Science.

  3. Deep gold mine fracture zone behaviour

    CSIR Research Space (South Africa)

    Napier, JAL

    1998-12-01

    Full Text Available The investigation of the behaviour of the fracture zone surrounding deep level gold mine stopes is detailed in three main sections of this report. Section 2 outlines the ongoing study of fundamental fracture processes and their numerical...

  4. Deep Ultraviolet Macroporous Silicon Filters, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR Phase I proposal describes a novel method to make deep and far UV optical filters from macroporous silicon. This type of filter consists of an array of...

  5. Toolkits and Libraries for Deep Learning.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth

    2017-08-01

    Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.

  6. Deep-Sea Soft Coral Habitat Suitability

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Deep-sea corals, also known as cold water corals, create complex communities that provide habitat for a variety of invertebrate and fish species, such as grouper,...

  7. Photon diffractive dissociation in deep inelastic scattering

    International Nuclear Information System (INIS)

    Ryskin, M.G.

    1990-01-01

    The new ep-collider HERA gives us the possibility to study the diffractive dissociation of the virtual photon in deep inelastic ep-collisions. The process of photon dissociation in deep inelastic scattering is the most direct way to measure the value of the triple-pomeron vertex G_3P. It was shown that the correct bare vertex G_3P may exceed by more than 4 times its effective value measured in the triple-reggeon region, reaching a value of about 40-50% of the elastic pp-pomeron vertex. In deep inelastic processes, by contrast, the transverse momenta q_t of the secondary particles are large enough. Thus in deep inelastic reactions one can measure the absolute value of the G_3P vertex in the most direct way and compare its value and q_t dependence with the leading-log QCD predictions

  8. Applications of Deep Learning in Biomedicine.

    Science.gov (United States)

    Mamoshina, Polina; Vieira, Armando; Putin, Evgeny; Zhavoronkov, Alex

    2016-05-02

    Increases in throughput and installed base of biomedical research equipment led to a massive accumulation of -omics data known to be highly variable, high-dimensional, and sourced from multiple often incompatible data platforms. While this data may be useful for biomarker identification and drug discovery, the bulk of it remains underutilized. Deep neural networks (DNNs) are efficient algorithms based on the use of compositional layers of neurons, with advantages well matched to the challenges -omics data presents. While achieving state-of-the-art results and even surpassing human accuracy in many challenging tasks, the adoption of deep learning in biomedicine has been comparatively slow. Here, we discuss key features of deep learning that may give this approach an edge over other machine learning methods. We then consider limitations and review a number of applications of deep learning in biomedical studies demonstrating proof of concept and practical utility.

  9. Mean associative multiplicities in deep inelastic processes

    International Nuclear Information System (INIS)

    Dzhaparidze, G.Sh.; Kiselev, A.V.; Petrov, V.A.

    1981-01-01

    The associative hadron multiplicities in deep inelastic and Drell-Yan processes are studied. In particular, the mean multiplicities in different hard processes in QCD are found to be determined by the mean multiplicity in a parton jet

  10. Deep-Sea Stony Coral Habitat Suitability

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Deep-sea corals, also known as cold water corals, create complex communities that provide habitat for a variety of invertebrate and fish species, such as grouper,...

  11. Deep Learning and Applications in Computational Biology

    KAUST Repository

    Zeng, Jianyang

    2016-01-01

    In this work, we develop a general and flexible deep learning framework for modeling structural binding preferences and predicting binding sites of RBPs, which takes (predicted) RNA tertiary structural information

  12. Leading particle in deep inelastic scattering

    International Nuclear Information System (INIS)

    Petrov, V.A.

    1984-01-01

    The leading particle effect in deep inelastic scattering is considered. The change of the characteristic shape of the leading particle inclusive spectrum with Q^2 is estimated to be rather significant at very high Q^2

  13. Progress in deep-UV photoresists

    Indian Academy of Sciences (India)

    Unknown

    This paper reviews the recent development and challenges of deep-UV photoresists and their ... small amount of acid, when exposed to light by photochemical ... anomalous insoluble skin and linewidth shift when the PEB was delayed.

  14. Methods in mooring deep sea sediment traps

    Digital Repository Service at National Institute of Oceanography (India)

    Venkatesan, R.; Fernando, V.; Rajaraman, V.S.; Janakiraman, G.

    The experience gained during the process of deployment and retrieval of nearly 39 sets of deep sea sediment trap moorings on various ships like FS Sonne, ORV Sagarkanya and DSV Nand Rachit are outlined. The various problems encountered...

  15. Deep Water Horizon (HB1006, EK60)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Monitor and measure the biological, chemical, and physical environment in the area of the oil spill from the deep water horizon oil rig in the Gulf of Mexico. A wide...

  16. Biodiversity loss from deep-sea mining

    OpenAIRE

    C. L. Van Dover; J. A. Ardron; E. Escobar; M. Gianni; K. M. Gjerde; A. Jaeckel; D. O. B. Jones; L. A. Levin; H. Niner; L. Pendleton; C. R. Smith; T. Thiele; P. J. Turner; L. Watling; P. P. E. Weaver

    2017-01-01

    The emerging deep-sea mining industry is seen by some to be an engine for economic development in the maritime sector. The International Seabed Authority (ISA) – the body that regulates mining activities on the seabed beyond national jurisdiction – must also protect the marine environment from harmful effects that arise from mining. The ISA is currently drafting a regulatory framework for deep-sea mining that includes measures for environmental protection. Responsible mining increasingly stri...

  17. DEEP VADOSE ZONE TREATABILITY TEST PLAN

    International Nuclear Information System (INIS)

    Chronister, G.B.; Truex, M.J.

    2009-01-01

    • Treatability test plan published in 2008 • Outlines technology treatability activities for evaluating application of in situ technologies and surface barriers to deep vadose zone contamination (technetium and uranium) • Key elements - Desiccation testing - Testing of gas-delivered reactants for in situ treatment of uranium - Evaluating surface barrier application to deep vadose zone - Evaluating in situ grouting and soil flushing

  18. Deep inelastic inclusive weak and electromagnetic interactions

    International Nuclear Information System (INIS)

    Adler, S.L.

    1976-01-01

    The theory of deep inelastic inclusive interactions is reviewed, emphasizing applications to electromagnetic and weak charged-current processes. The following reactions are considered: e + N → e + X, ν + N → μ⁻ + X, ν̄ + N → μ⁺ + X, where X denotes a summation over all final-state hadrons and the ν's are muon neutrinos. After a discussion of scaling, the quark-parton model is invoked to explain the principal experimental features of deep inelastic inclusive reactions

  19. Short-term Memory of Deep RNN

    OpenAIRE

    Gallicchio, Claudio

    2018-01-01

    The extension of deep learning towards temporal data processing is gaining increasing research interest. In this paper we investigate the properties of state dynamics developed in successive levels of deep recurrent neural networks (RNNs) in terms of short-term memory abilities. Our results reveal interesting insights that shed light on the nature of layering as a factor of RNN design. Noticeably, higher layers in a hierarchically organized RNN architecture prove to be inherently biased ...

  20. Deep Learning for Video Game Playing

    OpenAIRE

    Justesen, Niels; Bontrager, Philip; Togelius, Julian; Risi, Sebastian

    2017-01-01

    In this article, we review recent Deep Learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces...

  1. Life Support for Deep Space and Mars

    Science.gov (United States)

    Jones, Harry W.; Hodgson, Edward W.; Kliss, Mark H.

    2014-01-01

    How should life support for deep space be developed? The International Space Station (ISS) life support system is the operational result of many decades of research and development. Long-duration deep space missions such as Mars have been expected to use matured and upgraded versions of ISS life support. Deep space life support must use the knowledge base incorporated in ISS, but it must also meet much more difficult requirements. The primary new requirement is that life support in deep space must be considerably more reliable than on ISS or anywhere in the Earth-Moon system, where emergency resupply and a quick return are possible. Due to the great distance from Earth and the long duration of deep space missions, if life support systems fail, the traditional approaches for emergency supply of oxygen and water, emergency supply of parts, and crew return to Earth or escape to a safe haven are likely infeasible. The Orbital Replacement Unit (ORU) maintenance approach used by ISS is unsuitable for deep space with ORUs as large and complex as those originally provided in ISS designs, because it minimizes opportunities for commonality of spares, requires replacement of many functional parts with each failure, and results in substantial launch mass and volume penalties. It has become impractical even for ISS after the shuttle era, resulting in the need for ad hoc repair activity at lower assembly levels with consequent crew time penalties and extended repair timelines. Less complex, more robust technical approaches may be needed to meet the difficult deep space requirements for reliability, maintainability, and reparability. Developing an entirely new life support system would neglect what has been achieved. The suggested approach is to use the ISS life support technologies as a platform to build on and to continue to improve ISS subsystems while also developing new subsystems where needed to meet deep space requirements.

  2. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  3. Predicting Process Behaviour using Deep Learning

    OpenAIRE

    Evermann, Joerg; Rehse, Jana-Rebecca; Fettke, Peter

    2016-01-01

    Predicting business process behaviour is an important aspect of business process management. Motivated by research in natural language processing, this paper describes an application of deep learning with recurrent neural networks to the problem of predicting the next event in a business process. This is both a novel method in process prediction, which has largely relied on explicit process models, and also a novel application of deep learning methods. The approach is evaluated on two real da...

  4. A Deep Learning Approach to Drone Monitoring

    OpenAIRE

    Chen, Yueru; Aggarwal, Pranav; Choi, Jongmoo; Kuo, C. -C. Jay

    2017-01-01

    A drone monitoring system that integrates deep-learning-based detection and tracking modules is proposed in this work. The biggest challenge in adopting deep learning methods for drone detection is the limited amount of training drone images. To address this issue, we develop a model-based drone augmentation technique that automatically generates drone images with a bounding box label on drone's location. To track a small flying drone, we utilize the residual information between consecutive i...

  5. Bank of Weight Filters for Deep CNNs

    Science.gov (United States)

    2016-11-22

    very large even on the best available hardware. In some studies in transfer learning it has been observed that the network learnt on one task can be... CNNs. Keywords: CNN, deep learning, neural networks, transfer learning, bank of weight filters, BWF. 1. Introduction: Object recognition is an important... of CNNs (or, in general, of deep neural networks) is that the feature generation part is fused with the classifier part and both parts are learned together

  6. Leveraging multiple datasets for deep leaf counting

    OpenAIRE

    Dobrescu, Andrei; Giuffrida, Mario Valerio; Tsaftaris, Sotirios A

    2017-01-01

    The number of leaves a plant has is one of the key traits (phenotypes) describing its development and growth. Here, we propose an automated, deep learning based approach for counting leaves in model rosette plants. While state-of-the-art results on leaf counting with deep learning methods have recently been reported, they obtain the count as a result of leaf segmentation and thus require per-leaf (instance) segmentation to train the models (a rather strong annotation). Instead, our method tre...

  7. DeepSpark: A Spark-Based Distributed Deep Learning Framework for Commodity Clusters

    OpenAIRE

    Kim, Hanjoo; Park, Jaehong; Jang, Jaehee; Yoon, Sungroh

    2016-01-01

    The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that exploits Apache Spark on commodity clusters. To support parallel operation...

  8. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can get hefty and computationally intensive with increasing complexity of the task. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. CSRN has been shown to be more effective at solving complex tasks such as maze traversal and image processing when compared to generic feed-forward networks. While deep neural networks (DNNs) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent in neural networks with recurrency. Further, our work introduces deep hierarchy in the SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper shows experiments on face, facial expression and character recognition tasks using the novel deep recurrent model and compares recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN based recognition framework in a humanoid robotic platform called NAO.

  9. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  10. Some Challenges of Deep Mining†

    Directory of Open Access Journals (Sweden)

    Charles Fairhurst

    2017-08-01

    Full Text Available An increased global supply of minerals is essential to meet the needs and expectations of a rapidly rising world population. This implies extraction from greater depths. Autonomous mining systems, developed through sustained R&D by equipment suppliers, reduce miner exposure to hostile work environments and increase safety. This places increased focus on “ground control” and on rock mechanics to define the depth to which minerals may be extracted economically. Although significant efforts have been made since the end of World War II to apply mechanics to mine design, there have been both technological and organizational obstacles. Rock in situ is a more complex engineering material than is typically encountered in most other engineering disciplines. Mining engineering has relied heavily on empirical procedures in design for thousands of years. These are no longer adequate to address the challenges of the 21st century, as mines venture to increasingly greater depths. The development of the synthetic rock mass (SRM) in 2008 provides researchers with the ability to analyze the deformational behavior of rock masses that are anisotropic and discontinuous—attributes that were described as the defining characteristics of in situ rock by Leopold Müller, the president and founder of the International Society for Rock Mechanics (ISRM), in 1966. Recent developments in the numerical modeling of large-scale mining operations (e.g., caving) using the SRM reveal unanticipated deformational behavior of the rock. The application of massive parallelization and cloud computational techniques offers major opportunities: for example, to assess uncertainties in numerical predictions; to establish the mechanics basis for the empirical rules now used in rock engineering and their validity for the prediction of rock mass behavior beyond current experience; and to use the discrete element method (DEM) in the optimization of deep mine design. For the first time, mining

  11. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    Science.gov (United States)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  12. Ultra Deep Wave Equation Imaging and Illumination

    Energy Technology Data Exchange (ETDEWEB)

    Alexander M. Popovici; Sergey Fomel; Paul Sava; Sean Crawley; Yining Li; Cristian Lupascu

    2006-09-30

    In this project we developed and tested a novel technology, designed to enhance seismic resolution and imaging of ultra-deep complex geologic structures by using state-of-the-art wave-equation depth migration and wave-equation velocity model building technology for deeper data penetration and recovery, steeper dip and ultra-deep structure imaging, accurate velocity estimation for imaging and pore pressure prediction and accurate illumination and amplitude processing for extending the AVO prediction window. Ultra-deep wave-equation imaging provides greater resolution and accuracy under complex geologic structures where energy multipathing occurs, than what can be accomplished today with standard imaging technology. The objective of the research effort was to examine the feasibility of imaging ultra-deep structures onshore and offshore, by using (1) wave-equation migration, (2) angle-gathers velocity model building, and (3) wave-equation illumination and amplitude compensation. The effort consisted of answering critical technical questions that determine the feasibility of the proposed methodology, testing the theory on synthetic data, and finally applying the technology for imaging ultra-deep real data. Some of the questions answered by this research addressed: (1) the handling of true amplitudes in the downward continuation and imaging algorithm and the preservation of the amplitude with offset or amplitude with angle information required for AVO studies, (2) the effect of several imaging conditions on amplitudes, (3) non-elastic attenuation and approaches for recovering the amplitude and frequency, (4) the effect of aperture and illumination on imaging steep dips and on discriminating the velocities in the ultra-deep structures. All these effects were incorporated in the final imaging step of a real data set acquired specifically to address ultra-deep imaging issues, with large offsets (12,500 m) and long recording time (20 s).

  13. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene; Kulmanov, Maxat; Schofield, Paul N; Gkoutos, Georgios V; Hoehndorf, Robert

    2018-01-01

    phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well

  14. Assessment of deep geological environment condition

    International Nuclear Information System (INIS)

    Bae, Dae Seok; Han, Kyung Won; Joen, Kwan Sik

    2003-05-01

    The main tasks of the geoscientific study in the 2nd stage were characterized focusing mainly on the near-field condition of the deep geologic environment, and aimed to generate the geologic input data for a Korean reference disposal system for high level radioactive wastes and to establish site characterization methodology, including neotectonic features, fracture systems and mechanical properties of plutonic rocks, and hydrogeochemical characteristics. The preliminary assessment of neotectonics in the Korean peninsula was performed on the basis of recorded seismicity, investigated Quaternary faults, uplift characteristics studied on limited areas, and the distribution of the major regional faults and their characteristics. The local fracture system was studied in detail from the data obtained from deep boreholes in granitic terrain. Through this deep drilling project, the geometrical and hydraulic properties of different fracture sets are statistically analysed on a block scale. The mechanical properties of intact rocks were evaluated from the core samples by laboratory testing, and the in-situ stress conditions were estimated by a hydrofracturing test in the boreholes. The hydrogeochemical conditions in the deep boreholes were characterized based on hydrochemical composition and isotopic signatures, and an attempt was made to assess their interrelation with a major fracture system. The residence time of deep groundwater was estimated by C-14 dating. For the travel time of groundwater between the boreholes, the methodology and equipment for tracer tests were established

  15. Molecular analysis of deep subsurface bacteria

    International Nuclear Information System (INIS)

    Jimenez Baez, L.E.

    1989-09-01

    Deep sediment samples from site C10a, in Appleton, and sites P24, P28, and P29 at the Savannah River Site (SRS), near Aiken, South Carolina, were studied to determine their microbial community composition, DNA homology and mol %G+C. Different geological formations with great variability in hydrogeological parameters were found across the depth profile. Phenotypic identification of deep subsurface bacteria underestimated the bacterial diversity at the three SRS sites, since bacteria with the same phenotype have different DNA composition and less than 70% DNA homology. Total DNA hybridization and mol %G+C analysis of deep sediment bacterial isolates suggested that each formation is comprised of different microbial communities. Depositional environment was more important than site and geological formation for the DNA relatedness between deep subsurface bacteria, since more than 70% of bacteria with 20% or more DNA homology came from the same depositional environments. Based on phenotypic and genotypic tests, Pseudomonas spp. and Acinetobacter spp.-like bacteria were identified in 85 million year old sediments. This suggests that these microbial communities might have been adapted during a long period of time to the environmental conditions of the deep subsurface

  16. Preface: Deep Slab and Mantle Dynamics

    Science.gov (United States)

    Suetsugu, Daisuke; Bina, Craig R.; Inoue, Toru; Wiens, Douglas A.

    2010-11-01

    We are pleased to publish this special issue of the journal Physics of the Earth and Planetary Interiors entitled "Deep Slab and Mantle Dynamics". This issue is an outgrowth of the international symposium "Deep Slab and Mantle Dynamics", which was held on February 25-27, 2009, in Kyoto, Japan. This symposium was organized by the "Stagnant Slab Project" (SSP) research group to present the results of the 5-year project and to facilitate intensive discussion with well-known international researchers in related fields. The SSP and the symposium were supported by a Grant-in-Aid for Scientific Research (16075101) from the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government. In the symposium, key issues discussed by participants included: transportation of water into the deep mantle and its role in slab-related dynamics; observational and experimental constraints on deep slab properties and the slab environment; modeling of slab stagnation to constrain its mechanisms in comparison with observational and experimental data; observational, experimental and modeling constraints on the fate of stagnant slabs; eventual accumulation of stagnant slabs on the core-mantle boundary and its geodynamic implications. This special issue is a collection of papers presented in the symposium and other papers related to the subject of the symposium. The collected papers provide an overview of the wide range of multidisciplinary studies of mantle dynamics, particularly in the context of subduction, stagnation, and the fate of deep slabs.

  17. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  18. Deep Ocean Contribution to Sea Level Rise

    Science.gov (United States)

    Chang, L.; Sun, W.; Tang, H.; Wang, Q.

    2017-12-01

    The ocean temperature and salinity changes in the upper 2000 m can be detected by Argo floats, so the steric height change of that layer is known. But the ocean layers above 2000 m represent only 50% of the total ocean volume. Although the temperature and salinity changes there are small compared to the upper ocean, the deep ocean contribution to sea level might be significant because of its large volume. Previous research on the deep ocean has relied on very sparse in situ observations and is limited to decadal and longer-term rates of change. The available observational data in the deep ocean are too sparse to determine the temporal variability, and the long-term changes may have a bias. We will use the Argo data and combine the in situ and topographic data to estimate the temperature and salinity of the sea water below 2000 m, so we can obtain monthly data. We will analyze the seasonal and annual change of the steric height due to the deep ocean between 2005 and 2016, and we will evaluate the result by combining the present-day satellite and in situ observing systems. The deep ocean contribution can be inferred indirectly as the difference between the altimetry minus GRACE and Argo-based steric sea level.

  19. Deep Learning: A Primer for Radiologists.

    Science.gov (United States)

    Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An

    2017-01-01

    Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
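
    The training mechanism the primer describes (a forward pass, then iterative adjustment of weighted connections by back-propagating a corrective error signal) can be sketched with a toy two-layer network. The data, layer sizes, and learning rate below are illustrative assumptions, not taken from the article:

```python
import numpy as np

# Toy back-propagation loop: weighted connections are iteratively
# adjusted using an error signal propagated backward through the network.
# Data, layer sizes, and learning rate are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy labels

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(500):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # backward pass: propagate the corrective error signal
    dp = (p - y) * p * (1 - p)       # output-layer delta
    dh = (dp @ W2.T) * (1 - h ** 2)  # hidden-layer delta
    # gradient-descent updates of the weighted connections
    W2 -= 0.5 * (h.T @ dp) / len(X); b2 -= 0.5 * dp.mean(axis=0)
    W1 -= 0.5 * (X.T @ dh) / len(X); b1 -= 0.5 * dh.mean(axis=0)
```

    A CNN, as discussed in the article, replaces the dense matrix products with convolutions but is trained by the same error back-propagation principle.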

  20. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene

    2018-05-02

    Background: Prioritization of variants in personal genomic data is a major challenge. Recently, computational methods that rely on comparing phenotype similarity have been shown to be useful to identify causative variants. In these methods, pathogenicity prediction is combined with a semantic similarity measure to prioritize not only variants that are likely to be dysfunctional but those that are likely involved in the pathogenesis of a patient's phenotype. Results: We have developed DeepPVP, a variant prioritization method that combines automated inference with deep neural networks to identify the likely causative variants in whole exome or whole genome sequence data. We demonstrate that DeepPVP performs significantly better than existing methods, including phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well as accuracy.

  1. Deep learning in TMVA Benchmarking Benchmarking TMVA DNN Integration of a Deep Autoencoder

    CERN Document Server

    Huwiler, Marc

    2017-01-01

    The TMVA library in ROOT is dedicated to multivariate analysis, and in particular offers numerous machine learning algorithms in a standardized framework. It is widely used in High Energy Physics for data analysis, mainly to perform regression and classification. To keep up to date with the state of the art in deep learning, a new deep learning module was being developed this summer, offering a deep neural network, a convolutional neural network, and an autoencoder. TMVA did not yet have any autoencoder method, and the present project consists in implementing the TMVA autoencoder class based on the deep learning module. It also includes some benchmarking performed on the actual deep neural network implementation, in comparison to the Keras framework with Tensorflow and Theano backends.

  2. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network.

    Science.gov (United States)

    Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval

    2018-02-26

    Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
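
    The training objective of a Cox proportional hazards network such as DeepSurv is the negative log partial likelihood of the Cox model, evaluated on the network's scalar risk output. A minimal NumPy sketch (Breslow approximation, no handling of tied event times; all inputs are hypothetical) might look like:

```python
import numpy as np

# Negative log partial likelihood of the Cox proportional hazards model.
# Sketch only: Breslow approximation, no tie handling; `risk` would be
# the network's scalar output per patient in a DeepSurv-style model.
def cox_ph_loss(risk, time, event):
    order = np.argsort(-time)                  # sort by descending survival time
    risk, event = risk[order], event[order]
    # prefix log-sum-exp = log of summed hazards over each subject's risk set
    log_risk_set = np.logaddexp.accumulate(risk)
    return -np.sum((risk - log_risk_set)[event.astype(bool)])

# three subjects with equal predicted risk, all events observed
loss = cox_ph_loss(np.zeros(3), np.array([3.0, 2.0, 1.0]), np.ones(3))
```

    In the network setting, the gradient of this loss with respect to `risk` is back-propagated through the layers that produced it.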

  3. Deep Learning in Open Source Learning Streams

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas

    2016-01-01

    This chapter presents research on deep learning in a digital learning environment and raises the question if digital instructional designs can catalyze deeper learning than traditional classroom teaching. As a theoretical point of departure the notion of ‘situated learning’ is utilized… and contrasted to the notion of functionalistic learning in a digital context. The mechanism that enables deep learning in this context is ‘The Open Source Learning Stream’. ‘The Open Source Learning Stream’ is the notion of sharing ‘learning instances’ in a digital space (discussion board, Facebook group…, unistructural, multistructural or relational learning. The research concludes that ‘The Open Source Learning Stream’ can catalyze deep learning and that there are four types of ‘Open Source Learning streams’; individual/asynchronous, individual/synchronous, shared/asynchronous and shared…

  4. Deep learning in medical imaging: General overview

    Energy Technology Data Exchange (ETDEWEB)

    Lee, June Goo; Jun, Sang Hoon; Cho, Young Won; Lee, Hyun Na; KIm, Guk Bae; Seo, Joon Beom; Kim, Nam Kug [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2017-08-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and health care, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  5. Deep-seated sarcomas of the penis

    Directory of Open Access Journals (Sweden)

    Alberto A. Antunes

    2005-06-01

    Full Text Available Mesenchymal neoplasias represent 5% of tumors affecting the penis. Due to the rarity of such tumors, there is no agreement concerning the best method for staging and managing these patients. Sarcomas of the penis can be classified as deep-seated if they derive from the structures forming the spongy body and the cavernous bodies. Superficial lesions are usually low-grade and show a small tendency towards distant metastasis. In contrast, deep-seated lesions usually show behavior that is more aggressive and have poorer prognosis. The authors report 3 cases of deep-seated primary sarcomas of the penis and review the literature on this rare and aggressive neoplasia.

  6. Strategic Technologies for Deep Space Transport

    Science.gov (United States)

    Litchford, Ronald J.

    2016-01-01

    Deep space transportation capability for science and exploration is fundamentally limited by available propulsion technologies. Traditional chemical systems are performance plateaued and require enormous Initial Mass in Low Earth Orbit (IMLEO) whereas solar electric propulsion systems are power limited and unable to execute rapid transits. Nuclear based propulsion and alternative energetic methods, on the other hand, represent potential avenues, perhaps the only viable avenues, to high specific power space transport evincing reduced trip time, reduced IMLEO, and expanded deep space reach. Here, key deep space transport mission capability objectives are reviewed in relation to STMD technology portfolio needs, and the advanced propulsion technology solution landscape is examined including open questions, technical challenges, and developmental prospects. Options for potential future investment across the full complement of STMD programs are presented based on an informed awareness of complementary activities in industry, academia, OGAs, and NASA mission directorates.

  7. Deep learning in medical imaging: General overview

    International Nuclear Information System (INIS)

    Lee, June Goo; Jun, Sang Hoon; Cho, Young Won; Lee, Hyun Na; KIm, Guk Bae; Seo, Joon Beom; Kim, Nam Kug

    2017-01-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and health care, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging

  8. Deep learning for SAR image formation

    Science.gov (United States)

    Mason, Eric; Yonel, Bariscan; Yazici, Birsen

    2017-04-01

    The recent success of deep learning has led to growing interest in applying these methods to signal processing problems. This paper explores the applications of deep learning to synthetic aperture radar (SAR) image formation. We review deep learning from a perspective relevant to SAR image formation. Our objective is to address SAR image formation in the presence of uncertainties in the SAR forward model. We present a recurrent auto-encoder network architecture based on the iterative shrinkage thresholding algorithm (ISTA) that incorporates SAR modeling. We then present an off-line training method using stochastic gradient descent and discuss the challenges and key steps of learning. Lastly, we show experimentally that our method can be used to form focused images in the presence of phase uncertainties. We demonstrate that the resulting algorithm has faster convergence and decreased reconstruction error than that of ISTA.
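
    The ISTA iteration that such a recurrent auto-encoder unrolls alternates a gradient step on the data-fit term with soft-thresholding. A plain (non-learned) sketch, with the matrix `A`, data `b`, and regularization weight `lam` as illustrative stand-ins for the SAR forward model and measurements:

```python
import numpy as np

# Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1. The learned
# variant described above replaces the fixed operators with trained
# network layers; A, b, and lam here are hypothetical stand-ins.
def ista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L          # gradient step on the data-fit term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-thresholding
    return x

x_hat = ista(np.eye(3), np.array([3.0, 0.05, -2.0]), lam=0.1)
```

    With `A` the identity, the iteration reduces to soft-thresholding of `b`, which makes the shrinkage behavior easy to verify by hand.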

  9. Oceanography related to deep sea waste disposal

    International Nuclear Information System (INIS)

    1978-09-01

    In connection with studies on the feasibility of the safe disposal of radioactive waste, from a large scale nuclear power programme, either on the bed of the deep ocean or within the deep ocean bed, preparation of the present document was commissioned by the (United Kingdom) Department of the Environment. It attempts (a) to summarize the present state of knowledge of the deep ocean environment relevant to the disposal options and assess the processes which could aid or hinder dispersal of material released from its container; (b) to identify areas of research in which more work is needed before the safety of disposal on, or beneath, the ocean bed can be assessed; and (c) to indicate which areas of research can or should be undertaken by British scientists. The programmes of international cooperation in this field are discussed. The report is divided into four chapters dealing respectively with geology and geophysics, geochemistry, physical oceanography and marine biology. (U.K.)

  10. In Brief: Deep-sea observatory

    Science.gov (United States)

    Showstack, Randy

    2008-11-01

    The first deep-sea ocean observatory offshore of the continental United States has begun operating in the waters off central California. The remotely operated Monterey Accelerated Research System (MARS) will allow scientists to monitor the deep sea continuously. Among the first devices to be hooked up to the observatory are instruments to monitor earthquakes, videotape deep-sea animals, and study the effects of acidification on seafloor animals. ``Some day we may look back at the first packets of data streaming in from the MARS observatory as the equivalent of those first words spoken by Alexander Graham Bell: `Watson, come here, I need you!','' commented Marcia McNutt, president and CEO of the Monterey Bay Aquarium Research Institute, which coordinated construction of the observatory. For more information, see http://www.mbari.org/news/news_releases/2008/mars-live/mars-live.html.

  11. Deep learning in jet reconstruction at CMS

    CERN Document Server

    Stoye, Markus

    2017-01-01

    Deep learning has led to several breakthroughs outside the field of high energy physics, yet in jet reconstruction for the CMS experiment at the CERN LHC it has not been used so far. This report shows results of applying deep learning strategies to jet reconstruction at the stage of identifying the original parton association of the jet (jet tagging), which is crucial for physics analyses at the LHC experiments. We introduce a custom deep neural network architecture for jet tagging. We compare the performance of this novel method with the other established approaches at CMS and show that the proposed strategy provides a significant improvement. The strategy provides the first multi-class classifier, instead of the few binary classifiers that were previously used, and thus yields more information in a more convenient way. The performance results obtained with simulation imply a significant improvement for a large number of important physics analyses at the CMS experiment.
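
    The multi-class output mentioned above is typically produced by a softmax layer, which turns one network score per class into a probability over all classes at once. A minimal sketch (the class names and logit values here are illustrative, not CMS's actual labels or outputs):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical jet-flavour classes, one network output (logit) per class.
classes = ["b", "c", "light", "gluon"]
logits = np.array([2.1, 0.3, -0.5, 0.8])
probs = softmax(logits)
best = classes[int(np.argmax(probs))]
```

    A set of binary taggers would instead give one independent score per class with no guarantee the scores are comparable; the softmax probabilities are jointly normalized, which is what makes the single multi-class classifier more informative.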

  12. Deep Learning in Medical Imaging: General Overview

    Science.gov (United States)

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae

    2017-01-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging. PMID:28670152

  13. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  14. Pathways to deep decarbonization - Interim 2014 Report

    International Nuclear Information System (INIS)

    2014-01-01

    The interim 2014 report by the Deep Decarbonization Pathways Project (DDPP), coordinated and published by IDDRI and the Sustainable Development Solutions Network (SDSN), presents preliminary findings of the pathways developed by the DDPP Country Research Teams with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C. The DDPP is a knowledge network comprising 15 Country Research Teams and several Partner Organizations who develop and share methods, assumptions, and findings related to deep decarbonization. Each DDPP Country Research Team has developed an illustrative road-map for the transition to a low-carbon economy, with the intent of taking into account national socio-economic conditions, development aspirations, infrastructure stocks, resource endowments, and other relevant factors. The interim 2014 report focuses on technically feasible pathways to deep decarbonization

  15. Excess plutonium disposition: The deep borehole option

    International Nuclear Information System (INIS)

    Ferguson, K.L.

    1994-01-01

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It is in response to a recent National Academy of Sciences (NAS) report which addressed the management of excess weapons plutonium and recommended three approaches to the ultimate disposition of excess plutonium: (1) fabrication and use as a fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, (3) burial in deep boreholes. As indicated in the NAS report, substantial effort would be required to address the broad range of issues related to deep borehole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can impact the viability of the concept, and applicable international programs. Key technical areas that would require attention should decisions be made to further develop the borehole emplacement option are identified

  16. Deep Learning in Medical Imaging: General Overview.

    Science.gov (United States)

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae; Seo, Joon Beom; Kim, Namkug

    2017-01-01

    The artificial neural network (ANN)-a machine learning technique inspired by the human neuronal synapse system-was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  17. Stable isotope geochemistry of deep sea cherts

    Energy Technology Data Exchange (ETDEWEB)

    Kolodny, Y; Epstein, S [California Inst. of Tech., Pasadena (USA). Div. of Geological Sciences

    1976-10-01

    Seventy-four samples of DSDP (Deep Sea Drilling Project) recovered cherts of Jurassic to Miocene age from varying locations, and 27 samples of on-land exposed cherts, were analyzed for the isotopic composition of their oxygen and hydrogen. These studies were accompanied by mineralogical analyses and some isotopic analyses of the coexisting carbonates. δ¹⁸O of chert ranges between 27 and 39 parts per thousand relative to SMOW; δ¹⁸O of porcellanite between 30 and 42 parts per thousand. The consistent enrichment in ¹⁸O of opal-CT in porcellanites with respect to coexisting microcrystalline quartz in chert is probably a reflection of a different temperature (depth) of diagenesis of the two phases. δ¹⁸O of deep sea cherts generally decreases with increasing age, indicating an overall cooling of the ocean bottom during the last 150 m.y. A comparison of this trend with that recorded by benthonic foraminifera (Douglas et al., Initial Reports of the Deep Sea Drilling Project; 32:509(1975)) indicates the possibility that δ¹⁸O in deep sea cherts is not frozen in until several tens of millions of years after deposition. Cherts of any age show a spread of δ¹⁸O values, increasing diagenesis being reflected in a lowering of δ¹⁸O. Drusy quartz has the lowest δ¹⁸O values. On-land exposed cherts are consistently depleted in ¹⁸O in comparison to their deep sea time-equivalent cherts. Water extracted from deep sea cherts ranges between 0.5 and 1.4 wt%. δD of this water ranges between -78 and -95 parts per thousand and is not a function of the δ¹⁸O of the cherts (or the temperature of their formation).

  18. Deep Space Detection of Oriented Ice Crystals

    Science.gov (United States)

    Marshak, A.; Varnai, T.; Kostinski, A. B.

    2017-12-01

    The deep space climate observatory (DSCOVR) spacecraft resides at the first Lagrangian point about one million miles from Earth. A polychromatic imaging camera onboard delivers nearly hourly observations of the entire sun-lit face of the Earth. Many images contain unexpected bright flashes of light over both ocean and land. We constructed a yearlong time series of flash latitudes, scattering angles and oxygen absorption to demonstrate conclusively that the flashes over land are specular reflections off tiny ice crystals floating in the air nearly horizontally. Such deep space detection of tropospheric ice can be used to constrain the likelihood of oriented crystals and their contribution to Earth albedo.

  19. A clinical study on deep neck abscess

    International Nuclear Information System (INIS)

    Ota, Yumi; Ogawa, Yoshiko; Takemura, Teiji; Sawada, Toru

    2007-01-01

    Although various effective antibiotics have been synthesized, deep neck abscess is still a serious and life-threatening infection. It is important to diagnose promptly and treat adequately, and contrast-enhanced CT is useful and indispensable for diagnosis. We reviewed our patients with deep neck abscess, and analyzed the location by reviewing CT images, and discussed the treatment. Surgical drainage is a fundamental treatment for abscess but if it exists in only one area such as the parotid gland space, it can be cured with needle aspiration and suitable antibiotics. (author)

  20. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.

  1. Deep Belief Nets for Topic Modeling

    DEFF Research Database (Denmark)

    Maaløe, Lars; Arngren, Morten; Winther, Ole

    2015-01-01

    …formative. In this paper we describe large-scale content-based collaborative filtering for digital publishing. To solve the digital publishing recommender problem we compare two approaches: latent Dirichlet allocation (LDA) and deep belief nets (DBN), which both find low-dimensional latent representations for documents. … Efficient retrieval can be carried out in the latent representation. We work both on public benchmarks and digital media content provided by Issuu, an on-line publishing platform. This article also comes with a newly developed deep belief nets toolbox for topic modeling tailored towards performance…

  2. Un paseo por la Deep Web

    OpenAIRE

    Ortega Castillo, Carlos

    2018-01-01

    This document presents a technical and inclusive look at some of the interconnection technologies developed on the Deep Web, first from a theoretical point of view and then through a brief practical introduction. Demystifying the processes that run on the Deep Web gives users tools to clarify and build new paradigms of society, knowledge and technology that support the responsible development of this type of network and contribute to the grow…

  3. Deep fracturation of granitic rock mass

    International Nuclear Information System (INIS)

    Bles, J.L.; Blanchin, R.; Bonijoly, D.; Dutartre, P.; Feybesse, J.L.; Gros, Y.; Landry, J.; Martin, P.

    1986-01-01

    This documentary study, carried out with the financial support of the European Communities and the CEA, aims to use available data to understand the evolution of natural fractures in granitic rocks from the surface to deep underground, for various feasibility studies dealing with radioactive waste disposal. The Mont Blanc road tunnel, the EDF Arc-Isère gallery, the Auriat deep borehole and the Pyrenean rock mass of Bassies are studied. The study analyzes in particular the relationship between small fractures and large faults, the evolution with depth of fracture density and direction, the consequences of rock decompression, and the relationship between fracturing and groundwater [fr]

  4. Gamma-rays from deep inelastic collisions

    International Nuclear Information System (INIS)

    Stephens, F.S.

    1979-01-01

    The γ-rays associated with deep inelastic collisions can give information about the magnitude and orientation of the angular momentum transferred in these events. In this review, special emphasis is placed on understanding the origin and nature of these γ-rays in order to avoid some of the ambiguities that can arise. The experimental information coming from these γ-ray studies is reviewed, and compared briefly with that obtained by other methods and also with the expectations from current models for deep inelastic collisions. 15 figures

  5. Fractal measures in a deep penetration problem

    International Nuclear Information System (INIS)

    Murthy, K.P.N.; Indira, R.; John, T.M.

    1993-01-01

    In the Monte Carlo simulation of a deep penetration problem, the parameter b in the importance function must be assigned a value b' such that the variance is minimum. If b > b', the sample mean is still not reliable, and the sample fluctuations would be small and misleading even though the actual fluctuations are quite large. This is because the distribution of transmission has a tail which becomes prominent when b > b'. Considering a model deep penetration problem, and employing exact enumeration techniques, it is shown that in the limit of large biasing the long-tailed distribution of the transmission is multifractal. (author). 5 refs., 3 figs
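
    The role of the biasing parameter can be illustrated with a toy importance-sampled estimate of a small transmission probability. This is a generic exponential-tilting sketch, not the paper's model: path lengths are taken as Exp(1), the "slab thickness" a and biasing rate b are made-up values, and b here plays the role of the importance-function parameter.

```python
import math
import random

def transmission_estimate(a, b, n, seed=0):
    """Importance-sampled estimate of p = P(X > a) for X ~ Exp(1).
    Samples are drawn from the biased density b*exp(-b*x); each scoring
    sample carries the likelihood-ratio weight exp((b - 1)*x) / b."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(b)                      # sample from biased distribution
        if x > a:                                   # "transmitted" past the slab
            total += math.exp((b - 1.0) * x) / b    # un-bias with the weight
    return total / n

a = 10.0                       # exact transmission: p = exp(-10) ~ 4.5e-5
p_exact = math.exp(-a)
p_hat = transmission_estimate(a, b=0.1, n=20000)
```

    With b near 1/a the variance is small; pushing b much further flattens the proposal so heavily that rare, huge weights dominate, which is exactly the misleading-fluctuations regime the abstract describes.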

  6. La deep web : el mercado negro global

    OpenAIRE

    Gay Fernández, José

    2015-01-01

    The deep web is a hidden space of the internet where the first guarantee is anonymity. Broadly speaking, the deep web contains everything that conventional search engines cannot locate. This guarantee serves to shelter a vast network of illegal services, such as drug trafficking, human trafficking, contract killing, the buying and selling of passports and bank accounts, and child pornography, among many others. But anonymity also makes it possible for activ…

  7. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors still existed, which indicates that our approach still needs to be improved.

  8. Nuclear structure in deep-inelastic reactions

    International Nuclear Information System (INIS)

    Rehm, K.E.

    1986-01-01

    The paper concentrates on recent deep inelastic experiments conducted at Argonne National Laboratory and the nuclear structure effects evident in reactions between superheavy nuclei. Experiments indicate that these reactions evolve gradually from simple transfer processes, which have been studied extensively for lighter nuclei such as ¹⁶O, suggesting a theoretical approach connecting the one-step DWBA theory to the multistep statistical models of nuclear reactions. This transition between quasi-elastic and deep inelastic reactions is achieved by a simple random walk model. Some typical examples of nuclear structure effects are shown. 24 refs., 9 figs

  9. Deep Learning For Sequential Pattern Recognition

    OpenAIRE

    Safari, Pooyan

    2013-01-01

    Project carried out as part of a mobility programme with the Technische Universität München (TUM). In recent years, deep learning has opened a new research line in pattern recognition tasks. It has been hypothesized that this kind of learning would capture more abstract patterns concealed in data. It is motivated by new findings both in the biological aspects of the brain and in hardware developments, which have made parallel processing possible. Deep learning methods come along with ...

  10. Environmental challenges of deep water activities

    International Nuclear Information System (INIS)

    Sande, Arvid

    1998-01-01

    In this presentation there are discussed the experiences of petroleum industry, and the projects that have been conducted in connection with the planning and drilling of the first deep water wells in Norway. There are also presented views on where to put more effort in the years to come, so as to increase the knowledge of deep water areas. Attention is laid on exploration drilling as this is the only activity with environmental potential that will take place during the next five years or so. The challenges for future field developments in these water depths are briefly discussed. 7 refs

  11. DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks.

    Science.gov (United States)

    Li, Chao; Wang, Xinggang; Liu, Wenyu; Latecki, Longin Jan

    2018-04-01

    Mitotic count is a critical predictor of tumor aggressiveness in the breast cancer diagnosis. Nowadays mitosis counting is mainly performed by pathologists manually, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting the mitotic cells from histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating mitosis region when only a weak label is given (i.e., only the centroid pixel of mitosis is annotated), an elaborately designed deep detection network for localizing mitosis by using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from ICPR 2012 grand challenge merely using the deep detection network. For the ICPR 2014 MITOSIS dataset that only provides the centroid location of mitosis, we employ the segmentation model to estimate the bounding box annotation for training the deep detection network. We also apply the verification model to eliminate some false positives produced from the detection model. By fusing scores of the detection and verification models, we achieve the state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Deep Seawater Intrusion Enhanced by Geothermal Through Deep Faults in Xinzhou Geothermal Field in Guangdong, China

    Science.gov (United States)

    Lu, G.; Ou, H.; Hu, B. X.; Wang, X.

    2017-12-01

    This study investigates abnormal seawater intrusion from deep depths, riding an inland-ward deep groundwater flow that is enhanced by deep faults and geothermal processes. The study site, the Xinzhou geothermal field, is 20 km from the coastline on southern China's Guangdong coast, part of China's long coastal geothermal belt. The geothermal water is salty, which has fueled speculation that it is retained ancient seawater. However, the perpetual "pumping" of the self-flowing outflow of geothermal waters might alter the deep underground flow to favor large-scale or long-distance seawater intrusion. We studied the geochemical characteristics of the geothermal water and found it to be a mixture of seawater with rain water or pore water, with no indication of dilution. We also conducted numerical studies of the buoyancy-driven geothermal flow and found that, thousands of meters down, there is a hydraulic gradient favoring inland-ward groundwater flow, allowing seawater to intrude inland for an unusually long distance of tens of kilometers in a granitic groundwater flow system. This work is a first step in understanding the geo-environment for deep groundwater flow.

  13. DeepBipolar: Identifying genomic mutations for bipolar disorder via deep learning.

    Science.gov (United States)

    Laksshman, Sundaram; Bhat, Rajendra Rana; Viswanath, Vivek; Li, Xiaolin

    2017-09-01

    Bipolar disorder, also known as manic depression, is a brain disorder that affects the brain structure of a patient. It results in extreme mood swings, severe states of depression, and overexcitement simultaneously. It is estimated that roughly 3% of the population of the United States (about 5.3 million adults) suffers from bipolar disorder. Recent research efforts like the Twin studies have demonstrated a high heritability factor for the disorder, making genomics a viable alternative for detecting and treating bipolar disorder, in addition to the conventional lengthy and costly postsymptom clinical diagnosis. Motivated by this study, leveraging several emerging deep learning algorithms, we design an end-to-end deep learning architecture (called DeepBipolar) to predict bipolar disorder based on limited genomic data. DeepBipolar adopts the Deep Convolutional Neural Network (DCNN) architecture that automatically extracts features from genotype information to predict the bipolar phenotype. We participated in the Critical Assessment of Genome Interpretation (CAGI) bipolar disorder challenge and DeepBipolar was considered the most successful by the independent assessor. In this work, we thoroughly evaluate the performance of DeepBipolar and analyze the type of signals we believe could have affected the classifier in distinguishing the case samples from the control set. © 2017 Wiley Periodicals, Inc.

  14. DeepPicker: A deep learning approach for fully automated particle picking in cryo-EM.

    Science.gov (United States)

    Wang, Feng; Gong, Huichao; Liu, Gaochao; Li, Meijing; Yan, Chuangye; Xia, Tian; Li, Xueming; Zeng, Jianyang

    2016-09-01

    Particle picking is a time-consuming step in single-particle analysis and often requires significant interventions from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously-analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on the recently-published cryo-EM data of three complexes have demonstrated that our deep learning based scheme can successfully accomplish the human-level particle picking process and identify a sufficient number of particles that are comparable to those picked manually by human experts. These results indicate that DeepPicker can provide a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination. DeepPicker is released as an open-source program, which can be downloaded from https://github.com/nejyeah/DeepPicker-python. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. DeepQA: improving the estimation of single protein model quality with deep belief networks.

    Science.gov (United States)

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-12-05

    Protein quality assessment (QA) useful for ranking and selecting protein models has long been viewed as one of the major challenges for protein tertiary structure prediction. Especially, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method, DeepQA, based on a deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance than Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves the state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single model quality assessment and protein structure prediction. The source code, executable, document and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .

  16. Deep sedation during pneumatic reduction of intussusception.

    Science.gov (United States)

    Ilivitzki, Anat; Shtark, Luda Glozman; Arish, Karin; Engel, Ahuva

    2012-05-01

    Pneumatic reduction of intussusception under fluoroscopic guidance is a routine procedure. The unsedated child may resist the procedure, which may lengthen its duration and increase the radiation dose. We use deep sedation during the procedure to overcome these difficulties. The purpose of this study was to summarize our experience with deep sedation during fluoroscopic reduction of intussusception and to assess the added value and complication rate of deep sedation. All children with intussusception who underwent pneumatic reduction in our hospital between January 2004 and June 2011 were included in this retrospective study. Anesthetists sedated the children using propofol. The fluoroscopic studies, ultrasound (US) studies and the children's charts were reviewed. One hundred thirty-one attempted reductions were performed in 119 children, of which 121 (92%) were successful and 10 (8%) failed. Two perforations (1.5%) occurred during attempted reduction. Average fluoroscopic time was 1.5 minutes. No complications of sedation were recorded. Deep sedation with propofol did not add any complications to the pneumatic reduction. The fluoroscopic time was short. The success rate of reduction was high, raising the possibility that sedation is beneficial, possibly through smooth muscle relaxation.

  17. Evaluation of Deep Discount Fare Strategies

    Science.gov (United States)

    1995-08-01

    This report evaluates the success of a fare pricing strategy known as deep discounting, which entails the bulk sale of transit tickets or tokens to customers at a significant discount compared to the full-fare single ticket price. This market-driven s...

  18. Parity violation in deep inelastic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Souder, P. [Syracuse Univ., NY (United States)

    1994-04-01

    A beam of polarized electrons at CEBAF with an energy of 8 GeV or more will be useful for performing precision measurements of parity violation in deep inelastic scattering. Possible applications include precision tests of the Standard Model, model-independent measurements of parton distribution functions, and studies of quark correlations.

  19. Into the depths of deep eutectic solvents

    NARCIS (Netherlands)

    Rodriguez, N.; Alves da Rocha, M.A.; Kroon, M.C.

    2015-01-01

    Ionic liquids (ILs) have been successfully tested in a wide range of applications; however, their high price and complicated synthesis make them infeasible for large-scale implementation. A decade ago, a new generation of solvents, so-called deep eutectic solvents (DESs), was reported for the first time…

  20. Modern problems of deep processing of coal

    International Nuclear Information System (INIS)

    Ismagilov, Z.R.

    2013-01-01

    This article is devoted to modern problems of the deep processing of coal. The history and development of the new Institute of Coal Chemistry and Material Sciences of the Siberian Branch of the Russian Academy of Science are described, and the aims and purposes of the new institute are discussed.

  1. Case Studies and Monitoring of Deep Excavations

    NARCIS (Netherlands)

    Korff, M.

    2017-01-01

    Several case histories from Dutch underground deep excavation projects are presented in this paper, including the lessons learned and the learning processes involved. The focus of the paper is on how the learning takes places and how it is documented. It is necessary to learn in a systematic and

  2. Performance of deep geothermal energy systems

    Science.gov (United States)

    Manikonda, Nikhil

    Geothermal energy is an important source of clean and renewable energy. This project deals with the study of deep geothermal power plants for the generation of electricity. The design involves extracting heat from the Earth and converting it into electricity. This is done by sending fluid deep into the Earth, where it is heated by the surrounding rock. The fluid vaporizes and returns to the surface in a heat pipe. Finally, the energy of the fluid is converted into electricity using a turbine or an organic Rankine cycle (ORC). The main feature of the system is the use of side channels to increase the amount of thermal energy extracted. A finite-difference computer model was developed to solve the heat transport equation, and this numerical model was employed to evaluate the performance of the design. The major goal was to optimize the output power as a function of parameters such as the thermal diffusivity of the rock, the depth of the main well, and the number and length of lateral channels. The sustainable lifetime of the system for a target output power of 2 MW has been calculated for deep geothermal systems with drilling depths of 8000 and 10000 meters, and a financial analysis has been performed to evaluate the economic feasibility of the system for a practical range of geothermal parameters. Results show a promising outlook for deep geothermal systems in practical applications.
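
    A finite-difference treatment of heat transport of the kind mentioned above can be sketched in one dimension. This is a minimal explicit scheme for conduction only, not the project's actual model; the rock diffusivity, grid spacing, and boundary temperatures are illustrative values.

```python
import numpy as np

def diffuse(T, alpha, dx, dt, steps):
    """Explicit finite-difference update for 1-D heat conduction
    dT/dt = alpha * d2T/dx2, with fixed-temperature boundaries."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    T = T.copy()
    for _ in range(steps):
        # central second difference on interior nodes only
        T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# Illustrative rock column: 20 C at the surface, 200 C at depth (made-up values).
n = 51
T0 = np.full(n, 20.0)
T0[-1] = 200.0
alpha = 1e-6     # thermal diffusivity of rock, m^2/s (typical order of magnitude)
dx = 2.0         # metres per grid cell
dt = 1e6         # seconds per step, giving r = 0.25
T = diffuse(T0, alpha, dx, dt, steps=5000)
```

    The stability bound r <= 1/2 is what forces the small time step; implicit schemes relax it, which matters when simulating system lifetimes of decades.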

  3. Fingerprint Minutiae Extraction using Deep Learning

    CSIR Research Space (South Africa)

    Darlow, Luke Nicholas

    2017-10-01

    components, such as image enhancement. We pose minutiae extraction as a machine learning problem and propose a deep neural network – MENet, for Minutiae Extraction Network – to learn a data-driven representation of minutiae points. By using the existing...

  4. Evolutionary Scheduler for the Deep Space Network

    Science.gov (United States)

    Guillaume, Alexandre; Lee, Seungwon; Wang, Yeou-Fang; Zheng, Hua; Chau, Savio; Tung, Yu-Wen; Terrile, Richard J.; Hovden, Robert

    2010-01-01

    A computer program assists human schedulers in satisfying, to the maximum extent possible, competing demands from multiple spacecraft missions for utilization of the transmitting/receiving Earth stations of NASA's Deep Space Network. The program embodies a concept of optimal scheduling to attain multiple objectives in the presence of multiple constraints.
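The abstract does not spell out the evolutionary algorithm it uses, but the general shape of such a search can be sketched. The following is a toy genetic algorithm (tournament selection, one-point crossover, bit-flip mutation) maximizing a caller-supplied fitness function over binary genomes; all names and hyperparameters are illustrative, not taken from the scheduler itself:

```python
import random

def evolve(fitness, genome_len, pop_size=30, generations=60, seed=0):
    """Minimal genetic algorithm over binary genomes. In a scheduler, a genome
    would encode request-to-antenna assignments and fitness would score
    constraint satisfaction; here both are left fully generic."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # tournament selection: the better of two random individuals
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, genome_len)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(genome_len)           # bit-flip mutation
            child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

For example, `evolve(sum, 12)` drives the population toward the all-ones genome, the "onemax" toy objective.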

  5. Deep Water Coral (HB1402, EK60)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The cruise will survey and collect samples of deep-sea corals and related marine life in the canyons in the northern Gulf of Maine in U.S. and Canadian waters. The...

  6. Particle Production in Deep Inelastic Muon Scattering

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, John James [MIT

    1991-01-01

    The E665 spectrometer at Fermilab measured Deep-Inelastic Scattering of 490 GeV/c muons off several targets: Hydrogen, Deuterium, and Xenon. Events were selected from the Xenon and Deuterium targets, with a range of energy exchange, $\

  7. Influence functionals in deep inelastic reactions

    International Nuclear Information System (INIS)

    Avishai, Y.

    1978-01-01

    It is suggested that the concept of influence functionals introduced by Feynman and Vernon could be applied to the study of deep inelastic reactions among heavy ions if the coupling between the relative motion and the internal degrees of freedom has a separable form as suggested by Hofmann and Siemens. (Auth.)

  8. Optimizing interplanetary trajectories with deep space maneuvers

    Science.gov (United States)

    Navagh, John

    1993-09-01

    Analysis of interplanetary trajectories is a crucial area for both manned and unmanned missions of the Space Exploration Initiative. A deep space maneuver (DSM) can improve a trajectory in much the same way as a planetary swingby. However, instead of using a gravitational field to alter the trajectory, the on-board propulsion system of the spacecraft is used when the vehicle is not near a planet. The purpose is to develop an algorithm to determine where and when to use deep space maneuvers to reduce the cost of a trajectory. The approach taken to solve this problem uses primer vector theory in combination with a non-linear optimizing program to minimize Delta(V). A set of necessary conditions on the primer vector is shown to indicate whether a deep space maneuver will be beneficial. Deep space maneuvers are applied to a round-trip mission to Mars to determine their effect on the launch opportunities. Other studies performed include cycler trajectories and Mars mission abort scenarios. It was found that the software developed was able to quickly locate DSMs which lower the total Delta(V) on these trajectories.

  9. Deep learning for studies of galaxy morphology

    Science.gov (United States)

    Tuccillo, D.; Huertas-Company, M.; Decencière, E.; Velasco-Forero, S.

    2017-06-01

    Establishing accurate morphological measurements of galaxies in a reasonable amount of time for future big-data surveys such as EUCLID, the Large Synoptic Survey Telescope or the Wide Field Infrared Survey Telescope is a challenge. Because of its high level of abstraction with little human intervention, deep learning appears to be a promising approach. Deep learning is a rapidly growing discipline that models high-level patterns in data as complex multilayered networks. In this work we test the ability of deep convolutional networks to recover parametric properties of Hubble Space Telescope-like galaxies (half-light radii, Sérsic indices, total flux, etc.). We simulate a set of galaxies, including the point spread function and realistic noise from the CANDELS survey, and try to recover the main galaxy parameters using deep learning. We compare the results with those obtained with the commonly used profile-fitting software GALFIT, showing that our method obtains results at least as good as those of GALFIT and, once trained, is about five hundred times faster.

  10. Pre-cementation of deep shaft

    Science.gov (United States)

    Heinz, W. F.

    1988-12-01

    Pre-cementation or pre-grouting of deep shafts in South Africa is an established technique to improve safety and reduce water ingress during shaft sinking. The recent completion of several pre-cementation projects for shafts deeper than 1000 m has once again highlighted the effectiveness of pre-grouting of shafts utilizing deep slimline boreholes, incorporating wireline techniques for drilling and conventional deep-borehole grouting techniques for pre-cementation. Pre-cementation of a deep shaft will: (i) increase the safety of the shaft-sinking operation; (ii) minimize water and gas inflow during shaft sinking; (iii) minimize the time lost due to additional grouting operations during sinking of the shaft, and hence minimize costly delays and standing time of shaft-sinking crews and equipment; (iv) provide detailed information on the geology of the proposed shaft site, since information on anomalies, dykes, faults, as well as reef (gold-bearing conglomerate) intersections can be obtained from the evaluation of cores from the pre-cementation boreholes; (v) provide improved rock strength for excavations in the immediate vicinity of the shaft area. The paper describes pre-cementation techniques recently applied successfully from surface and draws some conclusions for further consideration.

  11. Should deep seabed mining be allowed?

    NARCIS (Netherlands)

    Kim, Rak

    2017-01-01

    Commercial interest in deep sea minerals in the area beyond the limits of national jurisdiction has rapidly increased in recent years. The International Seabed Authority has already given out 26 exploration contracts and it is currently in the process of developing the Mining Code for

  12. Dust Measurements Onboard the Deep Space Gateway

    Science.gov (United States)

    Horanyi, M.; Kempf, S.; Malaspina, D.; Poppe, A.; Srama, R.; Sternovsky, Z.; Szalay, J.

    2018-02-01

    A dust instrument onboard the Deep Space Gateway will revolutionize our understanding of the dust environment at 1 AU, help our understanding of the evolution of the solar system, and improve dust hazard models for the safety of crewed and robotic missions.

  13. A quantitative lubricant test for deep drawing

    DEFF Research Database (Denmark)

    Olsson, David Dam; Bay, Niels; Andreasen, Jan L.

    2010-01-01

    A tribological test for deep drawing has been developed by which the performance of lubricants may be evaluated quantitatively measuring the maximum backstroke force on the punch owing to friction between tool and workpiece surface. The forming force is found not to give useful information...

  14. Deep underground disposal facility and the public

    International Nuclear Information System (INIS)

    Sumberova, V.

    1997-01-01

    Factors arousing public anxiety in relation to the deep burial of radioactive wastes are highlighted based on Czech and foreign analyses, and guidelines are presented to minimize public opposition when planning a geologic disposal site in the Czech Republic. (P.A.)

  15. Emotional arousal and memory after deep encoding.

    Science.gov (United States)

    Leventon, Jacqueline S; Camacho, Gabriela L; Ramos Rojas, Maria D; Ruedas, Angelica

    2018-05-22

    Emotion often enhances long-term memory. One mechanism for this enhancement is heightened arousal during encoding. However, reducing arousal, via emotion regulation (ER) instructions, has not been associated with reduced memory. In fact, the opposite pattern has been observed: stronger memory for emotional stimuli encoded with an ER instruction to reduce arousal. This pattern may be due to the deeper encoding required by ER instructions. In the current research, we examine the effects of emotional arousal and deep encoding on memory across three studies. In Study 1, adult participants completed a writing task (deep encoding) for encoding negative, neutral, and positive picture stimuli, whereby half the emotion stimuli had the ER instruction to reduce the emotion. Memory was strong across conditions, and no memory enhancement was observed for any condition. In Study 2, adult participants completed the same writing task as Study 1, as well as a shallow-encoding task for one-third of negative, neutral, and positive trials. Memory was strongest for deep vs. shallow encoding trials, with no effects of emotion or ER instruction. In Study 3, adult participants completed a shallow-encoding task for negative, neutral, and positive stimuli, with findings indicating enhanced memory for negative emotional stimuli. Findings suggest that deep encoding must be acknowledged as a source of memory enhancement when examining manipulations of emotion-related arousal. Copyright © 2018. Published by Elsevier B.V.

  16. Coherence effects in deep inelastic scattering

    International Nuclear Information System (INIS)

    Andersson, B.; Gustafson, G.; Loennblad, L.; Pettersson, U.

    1988-09-01

    We present a framework for deep inelastic scattering, with bound state properties in accordance with a QCD force field acting like a vortex line in a colour superconducting vacuum, which implies some simple coherence effects. Within this scheme one may describe the results at present energies very well, but one obtains an appreciable depletion of gluon radiation in the HERA energy regime. (authors)

  17. Regulatory issues for deep borehole plutonium disposition

    International Nuclear Information System (INIS)

    Halsey, W.G.

    1995-03-01

    As a result of recent changes throughout the world, a substantial inventory of excess separated plutonium is expected to result from dismantlement of US nuclear weapons. The safe and secure management and eventual disposition of this plutonium, and of a similar inventory in Russia, is a high priority. A variety of options (both interim and permanent) are under consideration to manage this material. The permanent solutions can be categorized into two broad groups: direct disposal and utilization. The deep borehole disposition concept involves placing excess plutonium deep into old stable rock formations with little free water present. Issues of concern include the regulatory, statutory and policy status of such a facility, the availability of sites with desirable characteristics and the technologies required for drilling deep holes, characterizing them, emplacing excess plutonium and sealing the holes. This white paper discusses the regulatory issues. Regulatory issues concerning construction, operation and decommissioning of the surface facility do not appear to be controversial, with existing regulations providing adequate coverage. It is in the areas of siting, licensing and long term environmental protection that current regulations may be inappropriate. This is because many current regulations are by intent or by default specific to waste forms, facilities or missions significantly different from deep borehole disposition of excess weapons usable fissile material. It is expected that custom regulations can be evolved in the context of this mission

  18. Deep Reflection on My Pedagogical Transformations

    Science.gov (United States)

    Suzawa, Gilbert S.

    2014-01-01

    This retrospective essay contains my reflection on the deep concept of ambiguity (uncertainty) and a concomitant epistemological theory that all of our human knowledge is ultimately self-referential in nature. This new epistemological perspective is subsequently utilized as a platform for gaining insights into my experiences in conjunction with…

  19. North Jamaican Deep Fore-Reef Sponges

    NARCIS (Netherlands)

    Lehnert, Helmut; Soest, van R.W.M.

    1996-01-01

    An unexpectedly high number of new species, revealed within only one hour of summarized bottom time, leads to the conclusion that the sponge fauna of the steep slopes of the deep fore-reef is still largely unknown. Four mixed-gas dives at depths between 70 and 90 m, performed in May and June, 1993,

  20. Top Tagging by Deep Learning Algorithm

    CERN Document Server

    Akil, Ali

    2015-01-01

    In this report I will show the application of a deep learning algorithm on a Monte Carlo simulation sample to test its performance in tagging hadronic decays of boosted top quarks and compare what we get with the results of the application of some other algorithms.

  1. Semantic Tagging with Deep Residual Networks

    NARCIS (Netherlands)

    Bjerva, Johannes; Plank, Barbara; Bos, Johan

    2016-01-01

    We propose a novel semantic tagging task, semtagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations and includes a novel residual bypass architecture. We evaluate

  2. Priapulus from the deep sea (Vermes, Priapulida)

    NARCIS (Netherlands)

    Land, van der J.

    1972-01-01

    INTRODUCTION The species of the genus Priapulus occur in rather cold water. Hence, their shallow-water distribution is restricted to northern and southern waters (fig. 1); there are only a few isolated records from sub-tropical localities. However, in deep water the genus apparently has a world-wide

  3. Gamma-rays from deep inelastic collisions

    International Nuclear Information System (INIS)

    Stephens, F.S.

    1981-01-01

    My objective in this talk is to consider the question: 'What can be learned about deep inelastic collisions (DIC) from studying the associated gamma-rays?' First, I discuss the origin and nature of the gamma-rays from DIC, then the kinds of information gamma-ray spectra contain, and finally come to the combination of these two subjects. (orig./HSI)

  4. Deep Support Vector Machines for Regression Problems

    NARCIS (Netherlands)

    Wiering, Marco; Schutten, Marten; Millea, Adrian; Meijster, Arnold; Schomaker, Lambertus

    2013-01-01

    In this paper we describe a novel extension of the support vector machine, called the deep support vector machine (DSVM). The original SVM has a single layer with kernel functions and is therefore a shallow model. The DSVM can use an arbitrary number of layers, in which lower-level layers contain

  5. Using Cooperative Structures to Promote Deep Learning

    Science.gov (United States)

    Millis, Barbara J.

    2014-01-01

    The author explores concrete ways to help students learn more and have fun doing it while they support each other's learning. The article specifically shows the relationships between cooperative learning and deep learning. Readers will become familiar with the tenets of cooperative learning and its power to enhance learning--even more so when…

  6. Stimulating Deep Learning Using Active Learning Techniques

    Science.gov (United States)

    Yew, Tee Meng; Dawood, Fauziah K. P.; a/p S. Narayansany, Kannaki; a/p Palaniappa Manickam, M. Kamala; Jen, Leong Siok; Hoay, Kuan Chin

    2016-01-01

    When students and teachers behave in ways that reinforce learning as a spectator sport, the result can often be a classroom and overall learning environment that is mostly limited to transmission of information and rote learning rather than deep approaches towards meaningful construction and application of knowledge. A group of college instructors…

  7. Survey on deep learning for radiotherapy.

    Science.gov (United States)

    Meyer, Philippe; Noblet, Vincent; Mazzara, Christophe; Lallement, Alex

    2018-05-17

    More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published works on deep learning methods that can be applied to radiotherapy, which are classified into seven categories related to the patient workflow, and can provide some insights of potential future applications. We have attempted to make this paper accessible to both radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Pathways to deep decarbonization in India

    DEFF Research Database (Denmark)

    Shukla, P.; Dhar, Subash; Pathak, Minal

    This report is a part of the global Deep Decarbonisation Pathways (DDP) Project. The analysis considers two development scenarios for India and assesses alternate roadmaps for transiting to a low-carbon economy consistent with the globally agreed 2°C stabilization target. The report does not conside...

  9. Pressure induced deep tissue injury explained

    NARCIS (Netherlands)

    Oomens, C.W.J.; Bader, D.L.; Loerakker, S.; Baaijens, F.P.T.

    The paper describes the current views on the cause of a sub-class of pressure ulcers known as pressure induced deep tissue injury (DTI). A multi-scale approach was adopted using model systems ranging from single cells in culture, tissue engineered muscle to animal studies with small animals. This

  10. Deep inelastic scattering near the Coulomb barrier

    International Nuclear Information System (INIS)

    Gehring, J.; Back, B.; Chan, K.

    1995-01-01

    Deep inelastic scattering was recently observed in heavy ion reactions at incident energies near and below the Coulomb barrier. Traditional models of this process are based on frictional forces and are designed to predict the features of deep inelastic processes at energies above the barrier. They cannot be applied at energies below the barrier where the nuclear overlap is small and friction is negligible. The presence of deep inelastic scattering at these energies requires a different explanation. The first observation of deep inelastic scattering near the barrier was in the systems 124,112 Sn + 58,64 Ni by Wolfs et al. We previously extended these measurements to the system 136 Xe + 64 Ni and currently measured the system 124 Xe + 58 Ni. We obtained better statistics, better mass and energy resolution, and more complete angular coverage in the Xe + Ni measurements. The cross sections and angular distributions are similar in all of the Sn + Ni and Xe + Ni systems. The data are currently being analyzed and compared with new theoretical calculations. They will be part of the thesis of J. Gehring

  11. Mean associated multiplicities in deep inelastic processes

    International Nuclear Information System (INIS)

    Dzhaparidze, G.Sh.; Kiselev, A.V.; Petrov, V.A.

    1982-01-01

    A formula is derived for the mean hadron multiplicity in the target fragmentation range of deep inelastic scattering processes. It is shown that in the high-x region the ratio of the mean multiplicities in the current fragmentation region and in the target fragmentation region tends to unity at high energies. The mean multiplicity for the Drell-Yan process is considered

  12. Deep inelastic scattering near the Coulomb barrier

    Energy Technology Data Exchange (ETDEWEB)

    Gehring, J.; Back, B.; Chan, K. [and others

    1995-08-01

    Deep inelastic scattering was recently observed in heavy ion reactions at incident energies near and below the Coulomb barrier. Traditional models of this process are based on frictional forces and are designed to predict the features of deep inelastic processes at energies above the barrier. They cannot be applied at energies below the barrier where the nuclear overlap is small and friction is negligible. The presence of deep inelastic scattering at these energies requires a different explanation. The first observation of deep inelastic scattering near the barrier was in the systems {sup 124,112}Sn + {sup 58,64}Ni by Wolfs et al. We previously extended these measurements to the system {sup 136}Xe + {sup 64}Ni and currently measured the system {sup 124}Xe + {sup 58}Ni. We obtained better statistics, better mass and energy resolution, and more complete angular coverage in the Xe + Ni measurements. The cross sections and angular distributions are similar in all of the Sn + Ni and Xe + Ni systems. The data are currently being analyzed and compared with new theoretical calculations. They will be part of the thesis of J. Gehring.

  13. Predicting galling behaviour in deep drawing processes

    NARCIS (Netherlands)

    van der Linde, G.

    2011-01-01

    Deep drawing is a sheet metal forming process which is widely used in, for example, the automotive industry. With this process it is possible to form complex shaped parts of sheet metal and it is suitable for products that have to be produced in large numbers. The tools for this process are required

  14. Deep Belief Networks for dimensionality reduction

    NARCIS (Netherlands)

    Noulas, A.K.; Kröse, B.J.A.

    2008-01-01

    Deep Belief Networks are probabilistic generative models which are composed by multiple layers of latent stochastic variables. The top two layers have symmetric undirected connections, while the lower layers receive directed top-down connections from the layer above. The current state-of-the-art
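The layer-wise construction this record describes can be sketched with a single building block. The following trains one restricted Boltzmann machine with CD-1 contrastive divergence; stacking several such machines greedily is the standard deep belief network recipe for dimensionality reduction. This is a generic textbook sketch, not the paper's implementation, and all hyperparameters are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_train(data, n_hidden, epochs=200, lr=0.1, seed=0):
    """Train one restricted Boltzmann machine with contrastive divergence (CD-1).
    A deep belief network stacks several of these, trained greedily layer by
    layer; the top hidden layer then serves as the reduced representation."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W + b_h)                       # hidden probabilities
        h_s = (rng.random(h0.shape) < h0).astype(float)  # stochastic hidden states
        v1 = sigmoid(h_s @ W.T + b_v)                    # reconstruction
        h1 = sigmoid(v1 @ W + b_h)
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)    # CD-1 weight update
        b_v += lr * (v0 - v1).mean(axis=0)
        b_h += lr * (h0 - h1).mean(axis=0)
    return W, b_v, b_h

# Compress 8-dimensional one-hot vectors down to a 3-unit hidden code.
data = np.eye(8)
W, b_v, b_h = rbm_train(data, n_hidden=3)
codes = sigmoid(data @ W + b_h)
```

Feeding `codes` into a second RBM (and so on) would build up the multilayer generative model the abstract refers to.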

  15. DRREP: deep ridge regressed epitope predictor.

    Science.gov (United States)

    Sher, Gene; Zhi, Degui; Zhang, Shaojie

    2017-10-03

    The ability to predict epitopes plays an enormous role in vaccine development in terms of our ability to zero in on where to do a more thorough in-vivo analysis of the protein in question. Though for the past decade there have been numerous advancements and improvements in epitope prediction, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domains of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained deep neural network using string kernels, tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state-of-the-art epitope predictors, including the most recently published predictors, LBtope and DMNLBE. Using area under the ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state-of-the-art predictors.
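The "learning in a single step through regression" that this record attributes to DRREP refers to the closed-form ridge regression solve. The sketch below shows only that analytic step on toy data, not DRREP's network or string kernels; the feature matrix and weights are made up for illustration:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y.
    Training is a single linear solve, i.e. learning in one step
    rather than by iterative gradient descent."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy features and targets; with a tiny penalty the true weights are recovered.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
w_true = np.array([2.0, -3.0])
w = ridge_fit(X, X @ w_true, lam=1e-8)
```

In a DRREP-style pipeline, `X` would hold the (fixed, randomly projected) hidden-layer activations and the solve would fit only the output layer.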

  16. DeepARG: a deep learning approach for predicting antibiotic resistance genes from metagenomic data.

    Science.gov (United States)

    Arango-Argoty, Gustavo; Garner, Emily; Pruden, Amy; Heath, Lenwood S; Vikesland, Peter; Zhang, Liqing

    2018-02-01

    Growing concerns about increasing rates of antibiotic resistance call for expanded and comprehensive global monitoring. Advancing methods for monitoring of environmental media (e.g., wastewater, agricultural waste, food, and water) is especially needed for identifying potential resources of novel antibiotic resistance genes (ARGs), hot spots for gene exchange, and pathways for the spread of ARGs and human exposure. Next-generation sequencing now enables direct access and profiling of the total metagenomic DNA pool, where ARGs are typically identified or predicted based on the "best hits" of sequence searches against existing databases. Unfortunately, this approach produces a high rate of false negatives. To address such limitations, we propose here a deep learning approach, taking into account a dissimilarity matrix created using all known categories of ARGs. Two deep learning models, DeepARG-SS and DeepARG-LS, were constructed for short read sequences and full gene length sequences, respectively. Evaluation of the deep learning models over 30 antibiotic resistance categories demonstrates that the DeepARG models can predict ARGs with both high precision (> 0.97) and recall (> 0.90). The models displayed an advantage over the typical best-hit approach, yielding consistently lower false negative rates and thus higher overall recall (> 0.9). As more data become available for under-represented ARG categories, the DeepARG models' performance can be expected to be further enhanced due to the nature of the underlying neural networks. Our newly developed ARG database, DeepARG-DB, encompasses ARGs predicted with a high degree of confidence and extensive manual inspection, greatly expanding current ARG repositories. The deep learning models developed here offer more accurate antimicrobial resistance annotation relative to current bioinformatics practice. DeepARG does not require strict cutoffs, which enables identification of a much broader diversity of ARGs. The
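The dissimilarity-matrix input encoding mentioned above can be sketched in a few lines. DeepARG builds its features from sequence-alignment bit scores against known ARG databases; the stand-in below uses Python's `difflib` similarity ratio instead, and the reference genes and category names are hypothetical:

```python
from difflib import SequenceMatcher

def dissimilarity_features(query, references):
    """One row of a dissimilarity matrix: 1 - similarity between a query
    sequence and each known reference gene. A DeepARG-style model consumes
    vectors like this (built from alignment scores, not difflib) as input."""
    return [1.0 - SequenceMatcher(None, query, ref).ratio()
            for ref in references]

# Hypothetical reference genes from two resistance categories.
refs = {"beta_lactam": "ATGGCTAAAGGTCTG", "tetracycline": "TTGACCGGTACCAAT"}
feats = dissimilarity_features("ATGGCTAAAGGTCTT", list(refs.values()))
# Nearest-reference category; the real system replaces this lookup with a
# deep network trained over all categories at once.
best = min(zip(refs, feats), key=lambda kv: kv[1])[0]
```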

  17. DeepQA: Improving the estimation of single protein model quality with deep belief networks

    OpenAIRE

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-01-01

    Background Protein quality assessment (QA) useful for ranking and selecting protein models has long been viewed as one of the major challenges for protein tertiary structure prediction. Especially, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. Results We introduce a novel single-model quality assessment method DeepQA based on deep belie...

  18. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed deep galaxy architecture consists of 8 layers: one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...

  19. Combining shallow and deep processing for a robust, fast, deep-linguistic dependency parser

    OpenAIRE

    Schneider, G

    2004-01-01

    This paper describes Pro3Gres, a fast, robust, broad-coverage parser that delivers deep-linguistic grammatical relation structures as output, which are closer to predicate-argument structures and more informative than pure constituency structures. The parser stays as shallow as is possible for each task, combining shallow and deep-linguistic methods by integrating chunking and by expressing the majority of long-distance dependencies in a context-free way. It combines statistical and rule-base...

  20. New optimized drill pipe size for deep-water, extended reach and ultra-deep drilling

    Energy Technology Data Exchange (ETDEWEB)

    Jellison, Michael J.; Delgado, Ivanni [Grant Prideco, Inc., Houston, TX (United States); Falcao, Jose Luiz; Sato, Ademar Takashi [PETROBRAS, Rio de Janeiro, RJ (Brazil); Moura, Carlos Amsler [Comercial Perfuradora Delba Baiana Ltda., Rio de Janeiro, RJ (Brazil)

    2004-07-01

    A new drill pipe size, 5-7/8 in. OD, represents enabling technology for Extended Reach Drilling (ERD), deep water and other deep well applications. Most world-class ERD and deep water wells have traditionally been drilled with 5-1/2 in. drill pipe or a combination of 6-5/8 in. and 5-1/2 in. drill pipe. The hydraulic performance of 5-1/2 in. drill pipe can be a major limitation in substantial ERD and deep water wells resulting in poor cuttings removal, slower penetration rates, diminished control over well trajectory and more tendency for drill pipe sticking. The 5-7/8 in. drill pipe provides a significant improvement in hydraulic efficiency compared to 5-1/2 in. drill pipe and does not suffer from the disadvantages associated with use of 6-5/8 in. drill pipe. It represents a drill pipe assembly that is optimized dimensionally and on a performance basis for casing and bit programs that are commonly used for ERD, deep water and ultra-deep wells. The paper discusses the engineering philosophy behind 5-7/8 in. drill pipe, the design challenges associated with development of the product and reviews the features and capabilities of the second-generation double-shoulder connection. The paper provides drilling case history information on significant projects where the pipe has been used and details results achieved with the pipe. (author)