WorldWideScience

Sample records for achieving deep reductions

  1. Achieving deep reductions in US transport greenhouse gas emissions: Scenario analysis and policy implications

    International Nuclear Information System (INIS)

    McCollum, David; Yang, Christopher

    2009-01-01

This paper investigates the potential for making deep cuts in US transportation greenhouse gas (GHG) emissions in the long term (50-80% below 1990 levels by 2050). Scenarios are used to envision how such a significant decarbonization might be achieved through advanced vehicle technologies and fuels and various options for behavioral change. A Kaya framework that decomposes GHG emissions into the product of four major drivers is used to analyze emissions and mitigation options. In contrast to most previous studies, a relatively simple, easily adaptable modeling methodology is used, one that can incorporate insights from other modeling studies and organize them in a way that is easy for policymakers to understand. A wider range of transportation subsectors is also considered here: light- and heavy-duty vehicles, aviation, rail, marine, agriculture, off-road, and construction. This analysis investigates scenarios with multiple options (increased efficiency, lower-carbon fuels, and travel demand management) across the various subsectors and confirms the notion that there are no 'silver bullet' strategies for making deep cuts in transport GHGs. If substantial emission reductions are to be made, considerable action is needed on all fronts, and no subsector can be ignored. Light-duty vehicles offer the greatest potential for emission reductions; deep reductions in other subsectors are also possible, but there are more limitations in the types of fuels and propulsion systems that can be used. In all cases travel demand management strategies are critical; deep emission cuts will likely not be possible without slowing growth in travel demand across all modes. 
Even though these scenarios represent only a small subset of the potential futures in which deep reductions might be achieved, they provide a sense of the magnitude of changes required in our transportation system and the need for early and aggressive action if long-term targets are to be met.
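The four-driver Kaya framework used in this record multiplies a handful of ratios, so the scenario arithmetic is easy to sketch. Below is a minimal Python illustration with entirely made-up numbers (not the paper's data); it shows why combined action across drivers compounds multiplicatively rather than additively:

```python
# Hypothetical Kaya-style decomposition of transport GHG emissions:
# GHG = population * (travel per capita) * (energy per unit travel) * (GHG per unit energy)
def transport_ghg(population, travel_per_capita, energy_intensity, carbon_intensity):
    """Return annual GHG emissions (tCO2e) as the product of four Kaya drivers."""
    return population * travel_per_capita * energy_intensity * carbon_intensity

# Illustrative baseline (figures invented for this sketch, not from the paper)
base = transport_ghg(3.0e8, 15_000, 3.5e-6, 70.0)  # pop, km/person, TJ/km, tCO2e/TJ

# A "no silver bullet" scenario: simultaneous action on demand, efficiency, fuels
mitigated = transport_ghg(3.0e8, 15_000 * 0.8, 3.5e-6 * 0.5, 70.0 * 0.4)

print(f"reduction vs. baseline: {1 - mitigated / base:.0%}")  # 0.8*0.5*0.4 -> 84% cut
```

A 20%, 50%, and 60% improvement in three separate drivers yields an 84% cut in the product, which is the arithmetic behind the paper's claim that action is needed on all fronts at once.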

  2. Fundamental research on sintering technology with super deep bed achieving energy saving and reduction of emissions

    International Nuclear Information System (INIS)

    Hongliang Han; Shengli Wu; Gensheng Feng; Luowen Ma; Weizhong Jiang

    2012-01-01

In the general framework of energy saving, environmental protection and the concept of a circular economy, fundamental research on sintering technology with a super deep bed, achieving energy saving and emission reduction, was carried out. First, the characteristics of the process and of exhaust emissions in super-deep-bed sintering were established by studying the influence of different bed depths on the sintering process. Then, considering the bed permeability and the fuel combustion, their influence on sinter yield and quality and their potential for energy saving and emission reduction were studied. The results show that improving the bed permeability and the fuel combustibility, separately and simultaneously, improves the sintering technical indices and leads to energy saving and emission reduction under super-deep-bed conditions. At 1000 mm bed depth, with the appropriate countermeasures, it is possible to decrease the solid fuel consumption and the emissions of CO2, SO2 and NOx by 10.08%, 11.20%, 22.62% and 25.86% respectively; at 700 mm bed depth, it is possible to reduce them by 20.71%, 22.01%, 58.86% and 13.13% respectively. This research provides the theoretical and technical basis for the new super-deep-bed sintering technology, achieving energy saving and emission reduction. (authors)

  3. The DEEP-South: Scheduling and Data Reduction Software System

    Science.gov (United States)

    Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team

    2015-08-01

The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope located at CTIO in Chile. While the primary objective of DEEP-South is physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem, the DEEP-South Scheduling and Data reduction System (DEEP-South SDS), is currently being designed and implemented for observation planning, data reduction and analysis of large amounts of data with minimal human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using a Database Management System (DBMS). The LDR is designed to detect moving objects in CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on analyses made at the LDR and the MDR, the DSS schedules follow-up observations to be conducted at other KMTNet stations. By the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also plan to improve the SDS with a more finely tuned observation strategy and more efficient data reduction in 2016.

  4. Deep sedation during pneumatic reduction of intussusception.

    Science.gov (United States)

    Ilivitzki, Anat; Shtark, Luda Glozman; Arish, Karin; Engel, Ahuva

    2012-05-01

Pneumatic reduction of intussusception under fluoroscopic guidance is a routine procedure. An unsedated child may resist the procedure, which may lengthen its duration and increase the radiation dose. We use deep sedation during the procedure to overcome these difficulties. The purpose of this study was to summarize our experience with deep sedation during fluoroscopic reduction of intussusception and to assess the added value and complication rate of deep sedation. All children with intussusception who underwent pneumatic reduction in our hospital between January 2004 and June 2011 were included in this retrospective study. Anesthetists sedated the children using propofol. The fluoroscopic studies, ultrasound (US) studies and the children's charts were reviewed. One hundred thirty-one attempted reductions were performed in 119 children, of which 121 (92%) were successful and 10 (8%) failed. Two perforations (1.5%) occurred during attempted reduction. Average fluoroscopic time was 1.5 minutes. No complications from sedation were recorded. Deep sedation with propofol did not add any complications to the pneumatic reduction. The fluoroscopic time was short. The success rate of reduction was high, raising the possibility that sedation is beneficial, possibly through smooth muscle relaxation.

  5. Cost reduction in deep water production systems

    International Nuclear Information System (INIS)

    Beltrao, R.L.C.

    1995-01-01

This paper describes a cost reduction program that Petrobras has conceived for its deep water fields. Beginning with the Floating Production Unit, a new FPSO concept was established in which a simple system, designed for long-term testing, can be upgraded on location into the definitive production unit. Regarding the subsea system, the following projects are considered. (1) Subsea manifolds: two 8-well diverless manifolds designed for 1,000 meters are presently under construction and, after a value analysis, a new design was achieved for the next generation; both projects are discussed and a cost evaluation is provided. (2) Subsea pipelines: Petrobras has just started a large program aiming to reduce cost on this important item, covering several projects such as hybrid (flexible and rigid) pipes of large diameter for deep water, alternative laying methods, rigid risers on FPSs, and new materials. The authors provide an overview of each project.

  6. A core framework and scenario for deep GHG reductions at the city scale

    International Nuclear Information System (INIS)

    Lazarus, Michael; Chandler, Chelsea; Erickson, Peter

    2013-01-01

Trends of increasing urbanization, paired with a lack of ambitious action at larger scales, uniquely position cities to assume leadership roles in climate mitigation. While many cities have adopted ambitious long-term emission reduction goals, few have articulated how to reach them. This paper presents one of the first long-term scenarios of deep greenhouse gas abatement for a major U.S. city. Using a detailed, bottom-up scenario analysis, we investigate how Seattle might achieve its recently stated goal of carbon neutrality by the year 2050. The analysis demonstrates that a series of ambitious strategies could achieve per capita GHG reductions of 34% in 2020 and 91% in 2050 in Seattle's “core” emissions from the buildings, transportation, and waste sectors. We examine the pros and cons of options to get to, or beyond, net zero emissions in these sectors. We also discuss methodological innovations for community-scale emissions accounting frameworks, including a “core” emissions focus that excludes industrial activity and a consumption perspective that expands the emissions footprint and scope of policy solutions. As in Seattle, other communities may find the mitigation strategies and analytical approaches presented here useful for crafting policies to achieve deep GHG-reduction goals. - Highlights: ► Cities can play a pivotal role in mitigating climate change. ► Strategies modeled achieve per-capita GHG reductions of 91% by 2050 in Seattle. ► We discuss methodological innovations in community-scale accounting frameworks. ► We weigh options for getting to, or beyond, zero GHG emissions. ► Other cities may adapt these measures and analytical approaches to curb emissions.

  7. Achieving 80% greenhouse gas reduction target in Saudi Arabia under low and medium oil prices

    International Nuclear Information System (INIS)

    Alshammari, Yousef M.; Sarathy, S. Mani

    2017-01-01

COP 21 led to a global agreement to limit the rise in the earth's temperature to less than 2 °C. This will require countries to act on climate change and achieve significant reductions in their greenhouse gas emissions, which will play a pivotal role in shaping future energy systems. Saudi Arabia is the world's largest exporter of crude oil and the 11th largest CO2 emitter. Understanding the Kingdom's role in global greenhouse gas reduction is critical in shaping the future of fossil fuels. Hence, this work presents an optimisation study of how Saudi Arabia can meet CO2 reduction targets to achieve an 80% reduction in the power generation sector. It is found that the implementation of energy efficiency measures is necessary to meet the 80% target; it would also lower the costs of the transition to a low carbon energy system while maintaining cleaner use of hydrocarbons with CCS. Setting very deep GHG reduction targets may be economically uncompetitive given the energy supply requirements. In addition, we determine the breakeven price of crude oil needed to make CCS economically viable. The results highlight the importance of pricing CO2 and the role of CCS compared with alternative sources of energy. - Highlights: • Energy efficiency measures are needed to achieve the 80% reduction. • Nuclear appears as an important option to achieve deep cuts in CO2 by 2050. • Technology improvement can enable using heavy fuel oil with CCS until 2050. • IGCC requires a lower net CO2 footprint in order to be competitive. • Nuclear power causes a sharp increase in CO2 avoidance costs.

  8. Final LDRD report : science-based solutions to achieve high-performance deep-UV laser diodes.

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Andrew M.; Miller, Mary A.; Crawford, Mary Hagerott; Alessi, Leonard J.; Smith, Michael L.; Henry, Tanya A.; Westlake, Karl R.; Cross, Karen Charlene; Allerman, Andrew Alan; Lee, Stephen Roger

    2011-12-01

We present the results of a three-year LDRD project focused on overcoming major materials roadblocks to achieving AlGaN-based deep-UV laser diodes. We describe our growth approach to achieving AlGaN templates with a greater than tenfold reduction in threading dislocations, which resulted in a greater than sevenfold enhancement of AlGaN quantum well photoluminescence and a 15-fold increase in electroluminescence from LED test structures. We describe the application of deep-level optical spectroscopy to AlGaN epilayers to quantify deep level energies and densities and to correlate defect properties with AlGaN luminescence efficiency. We further review our development of p-type short period superlattice structures as an approach to mitigate the high acceptor activation energies in AlGaN alloys. Finally, we describe our laser diode fabrication process, highlighting the development of highly vertical and smooth etched laser facets, as well as characterization of the resulting laser heterostructures.

  9. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
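A toy numerical example may help fix the idea before the detailed methods. The sketch below (mine, not from the course material) estimates a deep-penetration transmission probability two ways: analog Monte Carlo, which essentially never scores at 20 mean free paths, and a crude importance-sampling version of the exponential transform, which stretches path lengths and carries statistical weights so that scoring histories become routine:

```python
import math
import random

random.seed(1)

# Toy deep-penetration problem: probability that a particle's exponential free
# path (mean 1) exceeds a shield d = 20 mean free paths thick. Exact: e^-20.
d = 20.0
exact = math.exp(-d)
n = 100_000

# Analog Monte Carlo: the event is so rare that the estimate is useless.
analog = sum(random.expovariate(1.0) > d for _ in range(n)) / n

# Importance sampling (a crude exponential transform): sample from a stretched
# exponential with rate b, and weight each score by the likelihood ratio
# p(x)/q(x) = (1/b) * exp(-(1-b) x).
b = 1.0 / d  # biased rate chosen so sampled paths routinely reach depth d
est = 0.0
for _ in range(n):
    x = random.expovariate(b)
    if x > d:
        est += (1.0 / b) * math.exp(-(1.0 - b) * x)
est /= n

print(analog, est, exact)  # weighted estimate lands near e^-20; analog is ~0
```

The same principle, applied per collision rather than per history and combined with weight windows, underlies the production-code variance reduction methods the course develops.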

  10. Modeling transitions in the California light-duty vehicles sector to achieve deep reductions in transportation greenhouse gas emissions

    International Nuclear Information System (INIS)

    Leighty, Wayne; Ogden, Joan M.; Yang, Christopher

    2012-01-01

    California’s target for reducing economy-wide greenhouse gas (GHG) emissions is 80% below 1990 levels by 2050. We develop transition scenarios for meeting this goal in California’s transportation sector, with focus on light-duty vehicles (LDVs). We explore four questions: (1) what options are available to reduce transportation sector GHG emissions 80% below 1990 levels by 2050; (2) how rapidly would transitions in LDV markets, fuels, and travel behaviors need to occur over the next 40 years; (3) how do intermediate policy goals relate to different transition pathways; (4) how would rates of technological change and market adoption between 2010 and 2050 impact cumulative GHG emissions? We develop four LDV transition scenarios to meet the 80in50 target through a combination of travel demand reduction, fuel economy improvements, and low-carbon fuel supply, subject to restrictions on trajectories of technological change, potential market adoption of new vehicles and fuels, and resource availability. These scenarios exhibit several common themes: electrification of LDVs, rapid improvements in vehicle efficiency, and future fuels with less than half the carbon intensity of current gasoline and diesel. Availability of low-carbon biofuels and the level of travel demand reduction are “swing factors” that influence the degree of LDV electrification required. - Highlights: ► We model change in California LDVs for deep reduction in transportation GHG emissions. ► Reduced travel demand, improved fuel economy, and low-carbon fuels are all needed. ► Transitions must begin soon and occur quickly in order to achieve the 80in50 goal. ► Low-C biofuel supply and travel demand influence the need for rapid LDV electrification. ► Cumulative GHG emissions from LDVs can differ between strategies by up to 40%.

  11. Modelling the potential to achieve deep carbon emission cuts in existing UK social housing: The case of Peabody

    International Nuclear Information System (INIS)

    Reeves, Andrew; Taylor, Simon; Fleming, Paul

    2010-01-01

    As part of the UK's effort to combat climate change, deep cuts in carbon emissions will be required from existing housing over the coming decades. The viability of achieving such emission cuts for the UK social housing sector has been explored through a case study of Peabody, a housing association operating in London. Various approaches to stock refurbishment were modelled for Peabody's existing stock up to the year 2030, incorporating insulation, communal heating and micro-generation technologies. Outputs were evaluated under four future socio-economic scenarios. The results indicate that the Greater London Authority's target of a 60% carbon emission cut by 2025 can be achieved if extensive stock refurbishment is coupled with a background of wider societal efforts to reduce carbon emissions. The two key external requirements identified are a significant reduction in the carbon intensity of grid electricity and a stabilisation or reduction in householder demand for energy. A target of achieving zero net carbon emissions across Peabody stock by 2030 can only be achieved if grid electricity becomes available from entirely zero-carbon sources. These results imply that stronger action is needed from both social landlords and Government to enable deep emission cuts to be achieved in UK social housing.

  12. Iron oxide reduction in methane-rich deep Baltic Sea sediments

    DEFF Research Database (Denmark)

    Egger, Matthias; Hagens, Mathilde; Sapart, Celia J.

    2017-01-01

[…] Our results reveal a complex interplay between production, oxidation and transport of methane, showing that besides organoclastic Fe reduction, oxidation of downward-migrating methane with Fe oxides may also explain the elevated concentrations of dissolved ferrous Fe in deep Baltic Sea sediments. Based on […] profiles and numerical modeling, we propose that a potential coupling between Fe oxide reduction and methane oxidation likely affects deep Fe cycling and related biogeochemical processes, such as burial of phosphorus, in systems subject to changes in organic matter loading or bottom water salinity.

  13. Achieving 80% greenhouse gas reduction target in Saudi Arabia under low and medium oil prices

    KAUST Repository

    Alshammari, Yousef Mohammad

    2016-11-10

COP 21 led to a global agreement to limit the rise in the earth's temperature to less than 2 °C. This will require countries to act on climate change and achieve significant reductions in their greenhouse gas emissions, which will play a pivotal role in shaping future energy systems. Saudi Arabia is the world's largest exporter of crude oil and the 11th largest CO2 emitter. Understanding the Kingdom's role in global greenhouse gas reduction is critical in shaping the future of fossil fuels. Hence, this work presents an optimisation study of how Saudi Arabia can meet CO2 reduction targets to achieve an 80% reduction in the power generation sector. It is found that the implementation of energy efficiency measures is necessary to meet the 80% target; it would also lower the costs of the transition to a low carbon energy system while maintaining cleaner use of hydrocarbons with CCS. Setting very deep GHG reduction targets may be economically uncompetitive given the energy supply requirements. In addition, we determine the breakeven price of crude oil needed to make CCS economically viable. The results highlight the importance of pricing CO2 and the role of CCS compared with alternative sources of energy.

  14. Are Reductions in Population Sodium Intake Achievable?

    Directory of Open Access Journals (Sweden)

    Jessica L. Levings

    2014-10-01

The vast majority of Americans consume too much sodium, primarily from packaged and restaurant foods. The evidence linking sodium intake with direct health outcomes indicates a positive relationship between higher levels of sodium intake and cardiovascular disease risk, consistent with the relationship between sodium intake and blood pressure. Despite communication and educational efforts focused on lowering sodium intake over the last three decades, data suggest average US sodium intake has remained remarkably elevated, leading some to argue that current sodium guidelines are unattainable. The IOM in 2010 recommended gradual reductions in the sodium content of packaged and restaurant foods as a primary strategy to reduce US sodium intake, and research since that time suggests gradual, downward shifts in mean population sodium intake are achievable and can move the population toward current sodium intake guidelines. The current paper reviews recent evidence indicating that: (1) significant reductions in mean population sodium intake can be achieved with gradual sodium reduction in the food supply, (2) gradual sodium reduction in certain cases can be achieved without a noticeable change in taste or consumption of specific products, and (3) lowering mean population sodium intake can move us toward meeting the current individual guidelines for sodium intake.

  15. Deep learning methods for CT image-domain metal artifact reduction

    Science.gov (United States)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge

    2017-09-01

    Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.
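To make the image-domain idea concrete, here is a deliberately tiny sketch in plain NumPy (mine, not the authors' network): a single hand-set convolution layer stands in for the trained CNN, mapping a streak-corrupted toy image toward its clean counterpart. A real MAR network stacks many such layers, with weights learned from paired corrupt/clean training images rather than fixed by hand:

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'same' 2-D convolution with edge padding: one layer of the
    kind a CNN stacks (here with fixed rather than learned weights)."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "metal streak": a flat phantom plus a bright one-pixel diagonal line.
clean = np.ones((32, 32))
corrupt = clean.copy()
idx = np.arange(32)
corrupt[idx, idx] += 5.0

# A 3x3 averaging kernel stands in for learned CNN weights in this sketch.
kernel = np.full((3, 3), 1.0 / 9.0)
restored = conv2d(corrupt, kernel)

mse_before = ((corrupt - clean) ** 2).mean()
mse_after = ((restored - clean) ** 2).mean()
print(mse_before, mse_after)  # the layer reduces the streak's mean squared error
```

Note that a fixed averaging kernel blurs genuine anatomy along with the streak; that is precisely why MAR networks learn their kernels from data instead of using a hand-tuned smoother.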

  16. U.S. electric power sector transitions required to achieve 80% reductions in economy-wide greenhouse gas emissions: Results based on a state-level model of the U.S. energy system

    Energy Technology Data Exchange (ETDEWEB)

    Iyer, Gokul C.; Clarke, Leon E.; Edmonds, James A.; Kyle, Gordon P.; Ledna, Catherine M.; McJeon, Haewon C.; Wise, M. A.

    2017-05-01

The United States has articulated a deep decarbonization strategy for achieving a reduction in economy-wide greenhouse gas (GHG) emissions of 80% below 2005 levels by 2050. Achieving such deep emissions reductions will entail a major transformation of the energy system, and of the electric power sector in particular. This study uses a detailed state-level model of the U.S. energy system embedded within a global integrated assessment model (GCAM-USA) to demonstrate pathways for the evolution of the U.S. electric power sector that achieve 80% economy-wide reductions in GHG emissions by 2050. The pathways presented in this report are based on feedback received during a workshop of experts organized by the U.S. Department of Energy's Office of Energy Policy and Systems Analysis. Our analysis demonstrates that achieving deep decarbonization by 2050 will require substantial decarbonization of the electric power sector, resulting in increased deployment of zero-carbon and low-carbon technologies such as renewables and carbon capture, utilization and storage. The results also show that the degree to which the electric power sector will need to decarbonize, and low-carbon technologies will need to deploy, depends on the nature of technological advances in the energy sector, the ability of end-use sectors to electrify, and the level of electricity demand.

  17. Chronic sleep reduction, functioning at school and school achievement in preadolescents.

    Science.gov (United States)

    Meijer, Anne Marie

    2008-12-01

This study investigates the relationship between chronic sleep reduction, functioning at school and school achievement of boys and girls. To establish individual consequences of chronic sleep reduction (tiredness, sleepiness, loss of energy and emotional instability), the Chronic Sleep Reduction Questionnaire was developed. A total of 436 children (219 boys, 216 girls, 1 missing; mean age = 11 years and 5 months) from the seventh and eighth grades of 12 elementary schools participated in this study. The inter-item reliability (Cronbach's alpha = 0.84) and test-retest reliability (r = 0.78) of the Chronic Sleep Reduction Questionnaire were satisfactory. The construct validity of the questionnaire, as measured by a confirmatory factor analysis, was acceptable as well (CMIN/DF = 1.49; CFI = 0.94; RMSEA = 0.034). Cronbach's alphas of the scales measuring functioning at school (teacher's influence, self-image as pupil, and achievement motivation) were 0.69, 0.86 and 0.79. School achievement was based on self-reported marks in six school subjects. To test the models concerning the relations between chronic sleep reduction, functioning at school, and school achievement, the covariance matrix of these variables was analysed by means of structural equation modelling. To test for differences between boys and girls, a multi-group model was used. The models representing the direct relation (chronic sleep reduction to school achievement) and the indirect relation (chronic sleep reduction to functioning at school to school achievement) fitted the data quite well. The impact of chronic sleep reduction on school achievement and functioning at school appeared to differ between boys and girls. Based on the results of this study, it may be concluded that chronic sleep reduction may affect school achievement directly and indirectly via functioning at school, with worse school marks as a consequence.
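Cronbach's alpha, the inter-item reliability statistic reported above, is straightforward to compute from an item-score matrix. A minimal sketch with invented scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-respondent, 4-item questionnaire data (not from the study)
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [1, 2, 1, 1],
          [3, 3, 4, 3]]
print(round(cronbach_alpha(scores), 2))
```

High alpha means the items co-vary strongly relative to their individual noise, which is what licenses summing them into a single chronic-sleep-reduction score.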

  18. Criteria for achieving actinide reduction goals

    International Nuclear Information System (INIS)

    Liljenzin, J.O.

    1996-01-01

In order to discuss various criteria for achieving actinide reduction goals, the goals themselves must first be defined. In this context the term actinides is taken to mean plutonium and the so-called 'minor actinides' neptunium, americium and curium, but also protactinium. Some possible goals, and the reasons behind them, are presented. On the basis of the suggested goals it is possible to analyze various types of devices for producing nuclear energy from uranium or thorium, such as thermal or fast reactors and accelerator driven systems, together with their associated fuel cycles, with regard to their ability to reach the actinide reduction goals. The relation between the necessary single-cycle burn-up values, fuel cycle processing losses and losses to waste is defined and discussed. Finally, an attempt is made to rank the possible systems in order of performance with regard to their potential to reduce the actinide inventory and the actinide losses to wastes. (author). 3 refs, 3 figs, 2 tabs

  19. Deep Energy Retrofit

    DEFF Research Database (Denmark)

    Zhivov, Alexander; Lohse, Rüdiger; Rose, Jørgen

Deep Energy Retrofit – A Guide to Achieving Significant Energy Use Reduction with Major Renovation Projects contains recommendations for characteristics of some core technologies and measures that are based on studies conducted by national teams associated with the International Energy Agency Energy Conservation in Buildings and Communities Program (IEA-EBC) Annex 61 (Lohse et al. 2016, Case et al. 2016, Rose et al. 2016, Yao et al. 2016, Dake 2014, Stankevica et al. 2016, Kiatreungwattana 2014). The results of these studies provided a basis for setting minimum requirements for the building envelope-related technologies needed to make Deep Energy Retrofit feasible and, in many situations, cost effective. Use of energy efficiency measures (EEMs) in addition to the core technology bundle and high-efficiency appliances will foster further energy use reduction. This Guide also provides best practice […]

  20. THE DEEP2 GALAXY REDSHIFT SURVEY: DESIGN, OBSERVATIONS, DATA REDUCTION, AND REDSHIFTS

    International Nuclear Information System (INIS)

    Newman, Jeffrey A.; Cooper, Michael C.; Davis, Marc; Faber, S. M.; Guhathakurta, Puragra; Koo, David C.; Phillips, Andrew C.; Conroy, Charlie; Harker, Justin J.; Lai, Kamson; Coil, Alison L.; Dutton, Aaron A.; Finkbeiner, Douglas P.; Gerke, Brian F.; Rosario, David J.; Weiner, Benjamin J.; Willmer, C. N. A.; Yan Renbin; Kassin, Susan A.; Konidaris, N. P.

    2013-01-01

We describe the design and data analysis of the DEEP2 Galaxy Redshift Survey, the densest and largest high-precision redshift survey of galaxies at z ∼ 1 completed to date. The survey was designed to conduct a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude M_B = −20 at z ∼ 1 via ∼90 nights of observation on the Keck telescope. The survey covers an area of 2.8 deg² divided into four separate fields observed to a limiting apparent magnitude of R_AB = 24.1. Photometric pre-selection rejects most objects below z ∼ 0.7, allowing galaxies at higher redshift to be targeted ∼2.5 times more efficiently than in a purely magnitude-limited sample. Approximately 60% of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets that fail to yield secure redshifts are blue objects that lie beyond z ∼ 1.45, where the [O II] 3727 Å doublet lies in the infrared. The DEIMOS 1200 line mm⁻¹ grating used for the survey delivers high spectral resolution (R ∼ 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the DEEP2 DEIMOS data reduction pipelines. Extensive details are provided on object selection, mask design, biases in target selection and redshift measurements, the spec2d two-dimensional data-reduction pipeline, the spec1d automated redshift pipeline, and the zspec visual redshift verification process, along with examples of instrumental signatures or other artifacts that in some cases remain after data reduction. Redshift errors and catastrophic failure rates are assessed through more than 2000 objects with duplicate observations.

  1. Variation of strike incentives in deep reductions (final report)

    International Nuclear Information System (INIS)

    Canavan, G.H.

    2001-01-01

    This note studies the sensitivity of strike incentives to deep offensive force reductions using exchange, cost, and game-theoretic decision models derived and discussed in companion reports. As forces fall, weapon allocations shift from military to high value targets, with the shift being half complete at about 1,000 weapons. By 500 weapons, the first and second strikes are almost totally on high value. The dominant cost for striking first is that of damage to one's high value, which is near total absent other constraints, and hence proportional to preferences for survival of high value. Changes in military costs are largely offsetting, so total first strike costs change little. The resulting costs at decision nodes are well above the costs of inaction, so the preferred course is inaction for all offensive reductions studied. As the dominant cost for striking first is proportional to the preference for survival of high value, there is a wide gap between the first strike cost and that of inaction for the parameters studied here. These conclusions should be insensitive to significant reductions in the preference for survival of high value, which is the most sensitive parameter

  2. Assessing Multiple Pathways for Achieving China’s National Emissions Reduction Target

    Directory of Open Access Journals (Sweden)

    Mingyue Wang

    2018-06-01

    In order to achieve China's 2030 carbon intensity reduction target, a scientific pathway and feasible strategies need to be identified. In this study, we used a stochastic frontier analysis of energy efficiency, incorporating energy structure, economic structure, human capital, capital stock, and potential energy efficiency, to identify an efficient pathway for achieving the emissions reduction target. We set up 96 scenarios, including single-factor and multi-factor combination scenarios, for the simulation. The effects of each scenario on achieving the carbon intensity reduction target are then evaluated. It is found that: (1) potential energy efficiency makes the greatest contribution to the carbon intensity reduction target; (2) the 2030 carbon intensity reduction target of 60% is unlikely to be reached by optimizing a single factor alone; (3) in order to achieve the 2030 target, several aspects have to be adjusted: the fossil fuel ratio must be lower than 80%, and its average growth rate must be decreased by 2.2%; the service sector ratio in GDP must be higher than 58.3%, while the growth rate of non-service sectors must be lowered by 2.4%; and both human capital and capital stock must achieve and maintain a stable growth rate, together with a 1% annual increase in energy efficiency. Finally, specific recommendations are discussed: energy efficiency must be constantly improved; the upgrading of China's industrial structure must be accelerated; emissions must be reduced at the root, the energy sources; multi-level input mechanisms in education and training must be established to cultivate human capital; and investment in emerging equipment must be increased, with the closure of backward production capacity accelerated, to accumulate capital stock.

  3. Deep vein thrombus formation induced by flow reduction in mice is determined by venous side branches.

    Science.gov (United States)

    Brandt, Moritz; Schönfelder, Tanja; Schwenk, Melanie; Becker, Christian; Jäckel, Sven; Reinhardt, Christoph; Stark, Konstantin; Massberg, Steffen; Münzel, Thomas; von Brühl, Marie-Luise; Wenzel, Philip

    2014-01-01

    Interaction between vascular wall abnormalities, inflammatory leukocytes, platelets, coagulation factors and hemorheology in the pathogenesis of deep vein thrombosis (DVT) is incompletely understood, requiring well-defined animal models of human disease. We subjected male C57BL/6 mice to ligation of the inferior vena cava (IVC) as a flow reduction model to induce DVT. Thrombus size and weight were analyzed macroscopically and sonographically by B-mode, pulse wave (pw) Doppler and power Doppler imaging (PDI) using high frequency ultrasound. Thrombus size varied substantially between individual procedures and mice, irrespective of the flow reduction achieved by the ligature. Interestingly, PDI accurately predicted thrombus size in a very robust fashion (r2 = 0.9734), more closely than thrombus weight (r2 = 0.5597), and venous side branches of the IVC were found to determine thrombus formation. Occlusion of side branches prior to ligation of the IVC did not increase thrombus size, probably due to patent side branches inaccessible to surgery. Venous side branches influence thrombus size in experimental DVT and might therefore prevent thrombus formation. This renders vessel anatomy and hemorheology important determinants in mouse models of DVT, which should be controlled for.

  4. Bacterial Sulfate Reduction Above 100°C in Deep-Sea Hydrothermal Vent Sediments

    DEFF Research Database (Denmark)

    Jørgensen, B.B.; Isaksen, M.F.; Jannasch, H.W.

    1992-01-01

    A study of sulfate-reducing bacteria was done in hot deep-sea sediments at the hydrothermal vents of the Guaymas Basin tectonic spreading center in the Gulf of California. Radiotracer studies revealed that sulfate reduction can occur at temperatures up to 110°C, with an optimum rate at 103 to 106°C. This observation expands the upper temperature limit of this process in deep-ocean sediments by 20°C and indicates the existence of an unknown group of hyperthermophilic bacteria with a potential importance for the biogeochemistry of sulfur above 100°C.

  5. A deep 3D residual CNN for false-positive reduction in pulmonary nodule detection.

    Science.gov (United States)

    Jin, Hongsheng; Li, Zongyao; Tong, Ruofeng; Lin, Lanfen

    2018-05-01

    The automatic detection of pulmonary nodules using CT scans improves the efficiency of lung cancer diagnosis, and false-positive reduction plays a significant role in the detection. In this paper, we focus on the false-positive reduction task and propose an effective method for this task. We construct a deep 3D residual CNN (convolution neural network) to reduce false-positive nodules from candidate nodules. The proposed network is much deeper than the traditional 3D CNNs used in medical image processing. Specifically, in the network, we design a spatial pooling and cropping (SPC) layer to extract multilevel contextual information of CT data. Moreover, we employ an online hard sample selection strategy in the training process to make the network better fit hard samples (e.g., nodules with irregular shapes). Our method is evaluated on 888 CT scans from the dataset of the LUNA16 Challenge. The free-response receiver operating characteristic (FROC) curve shows that the proposed method achieves a high detection performance. Our experiments confirm that our method is robust and that the SPC layer helps increase the prediction accuracy. Additionally, the proposed method can easily be extended to other 3D object detection tasks in medical image processing. © 2018 American Association of Physicists in Medicine.

  6. Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features

    Science.gov (United States)

    Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang

    2017-12-01

    Color is one of the most stable attributes of vehicles and is often used as a valuable cue in some important applications. Various complex environmental factors, such as illumination, weather and noise, result in considerable diversity in the visual characteristics of vehicle color, so vehicle color recognition in complex environments has been a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of the subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. Experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over the state-of-the-art methods.

  7. Achieving 80% greenhouse gas reduction target in Saudi Arabia under low and medium oil prices

    KAUST Repository

    Alshammari, Yousef Mohammad; Sarathy, Mani

    2016-01-01

    meeting the 80% target, and it would also lower the costs of the transition to a low-carbon energy system while maintaining cleaner use of hydrocarbons with CCS. Setting very deep GHG reduction targets may be economically uncompetitive in consideration of the energy

  8. Achieving CO2 Emissions Reduction Goals with Energy Infrastructure Projects

    International Nuclear Information System (INIS)

    Eberlinc, M.; Medved, K.; Simic, J.

    2013-01-01

    The EU has set its short-term goals in the Europe 2020 Strategy (a 20% reduction in CO2 emissions, a 20% increase in energy efficiency, and a 20% share of renewables in final energy). The analyses show that the EU Member States in general are on the right track towards achieving these goals; some, including Slovenia, are even ahead. But setting long-term goals by 2050 is a tougher challenge. Achieving CO2 emissions reduction goes hand in hand with increasing the share of renewables and strategically planning projects that exploit the potential of renewable sources of energy (e.g. hydropower). In Slovenia, large hydro power plants (HPPs) are expected to provide one third of the renewable share of electricity production by 2030. The paper presents a hydro power plant project on the middle Sava river in Slovenia and its specifics (influenced by the expansion of the Natura 2000 protected sites and, on the other hand, by the changes in the Environment Protection Law, which implements the EU Industrial Emissions Directive and the ETS Directive). Studies show the importance of HPPs in terms of CO2 emissions reduction. The main conclusion of the paper shows the importance of energy infrastructure projects, which contribute on the one hand to CO2 emissions reduction and on the other to increasing the share of renewables. (author)

  9. Exploring the effects of dimensionality reduction in deep networks for force estimation in robotic-assisted surgery

    Science.gov (United States)

    Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia

    2016-03-01

    The robotic-assisted surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose different constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in robotic-assisted surgeries. According to the results, we demonstrate positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches to force estimation.
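    The pre-processing step described above can be sketched generically with PCA, a common dimensionality-reduction choice (this is an illustrative sketch, not the authors' pipeline; all array names and sizes are made up):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components.

    X: (n_samples, n_features) feature matrix; k: target dimensionality.
    Returns the reduced features and the (k, n_features) component matrix.
    """
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                            # top-k principal directions
    return Xc @ components.T, components

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))     # 200 synthetic "deep feature" vectors
X_red, comps = pca_reduce(X, k=8)
print(X_red.shape)                 # (200, 8)
```

    Feeding `X_red` instead of `X` to a downstream model shrinks its input layer eightfold, which is the kind of training speed-up and overfitting control the abstract reports.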

  10. Biogenic Properties of Deep Waters from the Black Sea Reduction (Hydrogen Sulphide) Zone for Marine Algae

    OpenAIRE

    Polikarpov, Gennady G.; Lazorenko, Galina E.; Tereschenko, Natalya N.

    2015-01-01

    Generalized data from investigations of the biogenic properties of Black Sea deep waters from the reduction (hydrogen sulphide) zone for marine algae are presented. Shipboard and laboratory experiments show that, after pre-oxidation of hydrogen sulphide by intensive aeration, deep waters lifted to the surface of the sea are ready to be used for cultivation of Black Sea unicellular planktonic and multicellular benthic algae instead of artificial medium. Naturally balanced micro- and macroeleme...

  11. The Path to Deep Nuclear Reductions. Dealing with American Conventional Superiority

    Energy Technology Data Exchange (ETDEWEB)

    Gormley, D.M.

    2009-07-01

    The transformation of U.S. conventional capabilities has begun to have a substantial and important impact on counter-force strike missions, particularly as they affect counter-proliferation requirements. So too have improvements in ballistic missile defense programs, which are also critically central to U.S. counter-proliferation objectives. These improved conventional capabilities come at a time when thinking about the prospects of eventually achieving a nuclear disarmed world has never been so promising. Yet the path toward achieving that goal, or making substantial progress towards it, is fraught with pitfalls, including domestic political, foreign, and military ones. Two of the most important impediments to deep reductions in U.S. and Russian nuclear arsenals - let alone a nuclear disarmed world - are perceived U.S. advantages in conventional counter-force strike capabilities working in combination with even imperfect but growing missile defense systems. The Barack Obama administration has already toned down the George W. Bush administration's rhetoric surrounding many of these new capabilities. Nevertheless, it is likely to affirm that it is a worthy goal to pursue a more conventionally oriented denial strategy as America further weans itself from its reliance on nuclear weapons. The challenge is to do so in the context of a more multilateral or collective security environment in which transparency plays the role it once did during the Cold War as a necessary adjunct to arms control agreements. Considerable thought has already been devoted to assessing many of the challenges along the way to a nuclear-free world, including verifying arsenals when they reach very low levels, more effective management of the civilian nuclear programs that remain, enforcement procedures, and what, if anything, might be needed to deal with latent capacities to produce nuclear weapons. But far less thought has been expended on why Russia - whose cooperation is absolutely

  12. The Path to Deep Nuclear Reductions. Dealing with American Conventional Superiority

    International Nuclear Information System (INIS)

    Gormley, D.M.

    2009-01-01

    The transformation of U.S. conventional capabilities has begun to have a substantial and important impact on counter-force strike missions, particularly as they affect counter-proliferation requirements. So too have improvements in ballistic missile defense programs, which are also critically central to U.S. counter-proliferation objectives. These improved conventional capabilities come at a time when thinking about the prospects of eventually achieving a nuclear disarmed world has never been so promising. Yet the path toward achieving that goal, or making substantial progress towards it, is fraught with pitfalls, including domestic political, foreign, and military ones. Two of the most important impediments to deep reductions in U.S. and Russian nuclear arsenals - let alone a nuclear disarmed world - are perceived U.S. advantages in conventional counter-force strike capabilities working in combination with even imperfect but growing missile defense systems. The Barack Obama administration has already toned down the George W. Bush administration's rhetoric surrounding many of these new capabilities. Nevertheless, it is likely to affirm that it is a worthy goal to pursue a more conventionally oriented denial strategy as America further weans itself from its reliance on nuclear weapons. The challenge is to do so in the context of a more multilateral or collective security environment in which transparency plays the role it once did during the Cold War as a necessary adjunct to arms control agreements. Considerable thought has already been devoted to assessing many of the challenges along the way to a nuclear-free world, including verifying arsenals when they reach very low levels, more effective management of the civilian nuclear programs that remain, enforcement procedures, and what, if anything, might be needed to deal with latent capacities to produce nuclear weapons. But far less thought has been expended on why Russia - whose cooperation is absolutely

  13. On the Reduction of Computational Complexity of Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Partha Maji

    2018-04-01

    Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy.
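    The flavor of Toom–Cook-style fast convolution can be seen in the classic F(2,3) minimal filtering construction, shown here as a generic illustration (the paper's actual scheme may differ): two outputs of a 3-tap filter are computed with 4 elementwise multiplications instead of the 6 a direct evaluation needs.

```python
import numpy as np

# Constant transforms for the F(2,3) minimal filtering algorithm.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def f23(d, g):
    """Two outputs of a 3-tap filter over a length-4 input tile,
    using 4 elementwise multiplications instead of 6."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile
g = np.array([1.0, 2.0, 3.0])        # filter taps
print(f23(d, g))                                          # [14. 20.]
print([float(np.dot(d[i:i + 3], g)) for i in range(2)])   # direct: [14.0, 20.0]
```

    Larger tiles trade extra additions in the fixed transforms for fewer multiplications, which is the general source of the speedups reported for this family of algorithms.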

  14. Construction of System for Seismic Observation in Deep Borehole (SODB) - Overview and Achievement Status of the Project

    International Nuclear Information System (INIS)

    Kobayashi, Genyu

    2014-01-01

    The seismic responses of each unit at the Kashiwazaki-Kariwa NPP differed greatly during the 2007 Niigata-ken Chuetsu-oki Earthquake; the deep sedimentary structure around the site greatly affected these differences. To clarify the underground structure and to evaluate ground motion amplification and attenuation effects more accurately in accordance with deep sedimentary structure, JNES initiated the SODB project. Deployment of a vertical seismometer array in a 3000-meter deep borehole was completed in June 2012 on the premises of NIIT. Horizontal arrays were also placed on the ground surface. Experiences and achievements of the JNES project are introduced, including development of seismic observation technology in deep boreholes, site amplification measurements from logging data, and application of borehole observation data to maintenance of nuclear power plant safety. Afterwards, the relationships to other presentations in this workshop are explained. (authors)

  15. Application of variance reduction techniques of Monte-Carlo method to deep penetration shielding problems

    International Nuclear Information System (INIS)

    Rawat, K.K.; Subbaiah, K.V.

    1996-01-01

    The general-purpose Monte Carlo code MCNP is widely employed for solving deep penetration problems by applying variance reduction techniques. These techniques depend on the nature and type of the problem being solved. The application of geometry splitting and the implicit capture method is examined to study deep penetration problems of neutron, gamma and coupled neutron-gamma transport in thick shielding materials. The typical problems chosen are: (i) a point isotropic monoenergetic gamma ray source of 1 MeV energy in a nearly infinite water medium, (ii) a 252Cf spontaneous fission source at the centre of 140 cm thick water and concrete, and (iii) 14 MeV fast neutrons incident on the axis of a 100 cm thick concrete disk. (author). 7 refs., 5 figs
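    Implicit capture (survival biasing) can be illustrated on a toy 1D slab problem rather than an MCNP run: with forward-only scattering, the transmission through a slab of thickness T is exp(-Σ_a·T), so both the analog and the weighted estimators can be checked against the analytic value. All parameter values below are illustrative.

```python
import math
import random

SIG_T, C, T = 1.0, 0.5, 3.0     # total cross-section, scattering ratio, slab thickness
SIG_A = (1.0 - C) * SIG_T       # absorption cross-section
N = 100_000
exact = math.exp(-SIG_A * T)    # transmission with forward-only scattering

def analog(rng):
    """Analog game: the particle dies outright when a collision is an absorption."""
    hits = 0
    for _ in range(N):
        x = 0.0
        while True:
            x += rng.expovariate(SIG_T)   # distance to the next collision
            if x >= T:                    # escaped through the far face
                hits += 1
                break
            if rng.random() >= C:         # the collision was an absorption
                break
    return hits / N

def implicit_capture(rng):
    """Survival biasing: every collision is survived, with weight *= C."""
    total = 0.0
    for _ in range(N):
        x, w = 0.0, 1.0
        while True:
            x += rng.expovariate(SIG_T)
            if x >= T:
                total += w
                break
            w *= C
            if w < 1e-3:                  # Russian roulette on low weights
                if rng.random() < 0.5:
                    break
                w *= 2.0
    return total / N

rng = random.Random(42)
est_analog = analog(rng)
est_ic = implicit_capture(rng)
print(exact, est_analog, est_ic)   # all three agree to ~1%
```

    The weighted estimator keeps every history contributing a score, which is why implicit capture reduces variance per history in deep-penetration problems.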

  16. Pathways to deep decarbonization - Interim 2014 Report

    International Nuclear Information System (INIS)

    2014-01-01

    The interim 2014 report by the Deep Decarbonization Pathways Project (DDPP), coordinated and published by IDDRI and the Sustainable Development Solutions Network (SDSN), presents preliminary findings of the pathways developed by the DDPP Country Research Teams with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C. The DDPP is a knowledge network comprising 15 Country Research Teams and several Partner Organizations who develop and share methods, assumptions, and findings related to deep decarbonization. Each DDPP Country Research Team has developed an illustrative road-map for the transition to a low-carbon economy, with the intent of taking into account national socio-economic conditions, development aspirations, infrastructure stocks, resource endowments, and other relevant factors. The interim 2014 report focuses on technically feasible pathways to deep decarbonization

  17. Achievable peak electrode voltage reduction by neurostimulators using descending staircase currents to deliver charge.

    Science.gov (United States)

    Halpern, Mark

    2011-01-01

    This paper considers the achievable reduction in peak voltage across two driving terminals of an RC circuit when delivering charge using a stepped current waveform, comprising a chosen number of steps of equal duration, compared with using a constant current over the total duration. This work has application to the design of neurostimulators giving reduced peak electrode voltage when delivering a given electric charge over a given time duration. Exact solutions for the greatest possible peak voltage reduction using two and three steps are given. Furthermore, it is shown that the achievable peak voltage reduction, for any given number of steps, is identical for simple series RC circuits and parallel RC circuits, for appropriate different values of RC. It is conjectured that the maximum peak voltage reduction cannot be improved using a more complicated RC circuit.
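    The mechanism is easy to verify numerically for a series RC load (component values below are illustrative, not from the paper): splitting the charge Q into a descending two-step current lowers the peak of v(t) = R·i(t) + q(t)/C compared with a single constant current.

```python
import numpy as np

R, Cap = 1e3, 1e-6          # series resistance (ohm) and capacitance (F)
Q, T = 1e-6, 1e-3           # total charge (C) delivered over duration T (s)

# Constant current: v(t) = I*R + q(t)/Cap rises linearly and peaks at t = T.
v_const = (Q / T) * R + Q / Cap

def peak_two_step(q1):
    """Peak terminal voltage when charge q1 flows in the first half-period
    and Q - q1 in the second (piecewise-constant currents)."""
    i1, i2 = q1 / (T / 2), (Q - q1) / (T / 2)
    v_end1 = i1 * R + q1 / Cap     # voltage at the end of step 1
    v_end2 = i2 * R + Q / Cap      # voltage at the end of step 2
    return max(v_end1, v_end2)     # v(t) rises within a step, so peaks at step ends

# Scan the charge split; a descending staircase (q1 > Q/2) is optimal.
splits = np.linspace(0.5 * Q, Q, 1001)
best = min(peak_two_step(q) for q in splits)
print(v_const, best)   # 2.0 V vs 1.8 V for these values: a 10% peak reduction
```

    The optimum here balances the two step-end voltages (q1 = 0.6·Q); more steps flatten the waveform further, approaching the limit the paper analyzes.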

  18. Homoacetogenesis in Deep-Sea Chloroflexi, as Inferred by Single-Cell Genomics, Provides a Link to Reductive Dehalogenation in Terrestrial Dehalococcoidetes.

    Science.gov (United States)

    Sewell, Holly L; Kaster, Anne-Kristin; Spormann, Alfred M

    2017-12-19

    The deep marine subsurface is one of the largest unexplored biospheres on Earth and is widely inhabited by members of the phylum Chloroflexi. In this report, we investigated genomes of single cells obtained from deep-sea sediments of the Peruvian Margin, which are enriched in such Chloroflexi. 16S rRNA gene sequence analysis placed two of these single-cell-derived genomes (DscP3 and Dsc4) in a clade of subphylum I Chloroflexi which were previously recovered from deep-sea sediment in the Okinawa Trough, and a third (DscP2-2) as a member of the previously reported DscP2 population from Peruvian Margin site 1230. The presence of genes encoding enzymes of a complete Wood-Ljungdahl pathway, glycolysis/gluconeogenesis, a Rhodobacter nitrogen fixation (Rnf) complex, glycosyltransferases, and formate dehydrogenases in the single-cell genomes of DscP3 and Dsc4, and the presence of an NADH-dependent reduced ferredoxin:NADP oxidoreductase (Nfn) and Rnf in the genome of DscP2-2, imply a homoacetogenic lifestyle of these abundant marine Chloroflexi. We also report here the first complete pathway for anaerobic benzoate oxidation to acetyl coenzyme A (CoA) in the phylum Chloroflexi (DscP3 and Dsc4), including a class I benzoyl-CoA reductase. Of remarkable evolutionary significance, we discovered a gene encoding a formate dehydrogenase (FdnI) with reciprocal closest identity to the formate dehydrogenase-like protein (complex iron-sulfur molybdoenzyme [CISM], DET0187) of terrestrial Dehalococcoides/Dehalogenimonas spp. This formate dehydrogenase-like protein has been shown to lack formate dehydrogenase activity in Dehalococcoides/Dehalogenimonas spp. and is instead hypothesized to couple HupL hydrogenase to a reductive dehalogenase in the catabolic reductive dehalogenation pathway. This finding of a close functional homologue provides an important missing link for understanding the origin and the metabolic core of terrestrial Dehalococcoides/Dehalogenimonas spp. and of reductive

  19. Effect of the antimicrobial photodynamic therapy on microorganism reduction in deep caries lesions: a systematic review and meta-analysis

    Science.gov (United States)

    Ornellas, Pâmela Oliveira; Antunes, Leonardo Santos; Fontes, Karla Bianca Fernandes da Costa; Póvoa, Helvécio Cardoso Corrêa; Küchler, Erika Calvano; Iorio, Natalia Lopes Pontes; Antunes, Lívia Azeredo Alves

    2016-09-01

    This study aimed to perform a systematic review to assess the effectiveness of antimicrobial photodynamic therapy (aPDT) in the reduction of microorganisms in deep carious lesions. An electronic search was conducted in Pubmed, Web of Science, Scopus, Lilacs, and the Cochrane Library, followed by a manual search. MeSH terms, MeSH synonyms, related terms, and free terms were used in the search. As an eligibility criterion, only clinical studies were included. Initially, 227 articles were identified in the electronic search; 152 studies remained after analysis and exclusion of duplicates; 6 remained after application of the eligibility criteria; and 3 additional studies were found in the manual search. After access to the full articles, three were excluded, leaving six for evaluation by the criteria of the Cochrane Collaboration's tool for assessing risk of bias. Of these, five showed some risk of bias. All results from the selected studies showed a significant reduction of microorganisms in deep carious lesions for both primary and permanent teeth. The meta-analysis demonstrated a significant reduction in microorganism counts in all analyses (p<0.00001). Based on these findings, there is scientific evidence supporting the effectiveness of aPDT in reducing microorganisms in deep carious lesions.

  20. Deep greenhouse gas emission reductions in Europe: Exploring different options

    International Nuclear Information System (INIS)

    Deetman, Sebastiaan; Hof, Andries F.; Pfluger, Benjamin; Vuuren, Detlef P. van; Girod, Bastien; Ruijven, Bas J. van

    2013-01-01

    Most modelling studies that explore emission mitigation scenarios only look into least-cost emission pathways, induced by a carbon tax. This means that European policies targeting specific – sometimes relatively costly – technologies, such as electric cars and advanced insulation measures, are usually not evaluated as part of cost-optimal scenarios. This study explores an emission mitigation scenario for Europe up to 2050, taking as a starting point specific emission reduction options instead of a carbon tax. The purpose is to identify the potential of each of these policies and identify trade-offs between sectoral policies in achieving emission reduction targets. The reduction options evaluated in this paper together lead to a reduction of 65% of 1990 CO2-equivalent emissions by 2050. More bottom-up modelling exercises, like the one presented here, provide a promising starting point to evaluate policy options that are currently considered by policy makers. - Highlights: ► We model the effects of 15 climate change mitigation measures in Europe. ► We assess the greenhouse gas emission reduction potential in different sectors. ► The measures could reduce greenhouse gas emissions by 60% below 1990 levels in 2050. ► The approach allows exploring arguably more relevant climate policy scenarios

  1. Positioning Reduction of Deep Space Probes Based on VLBI Tracking

    Science.gov (United States)

    Qiao, S. B.

    2011-11-01

    In the context of the Chinese Lunar Exploration Project and the Yinghuo Project, through theoretical analysis, algorithm study, software development, data simulation, and real data processing, the positioning reductions of the European lunar satellite SMART-1 and the Mars Express (MEX) satellite, as well as the Chinese Chang'e-1 (CE-1) and Chang'e-2 (CE-2) satellites, are accomplished using VLBI and USB tracking data in this dissertation. Progress is made in various aspects, including the development of the theoretical model, the construction of the observation equation, the analysis of the condition of the normal equation, the selection and determination of the constraint, the analysis of data simulation, the detection of outliers in observations, the maintenance of the stability of the parameter solution, the development of the practical software system, and the processing of the real tracking data. The details are as follows: (1) The algorithm for the positioning reduction of deep spacecraft based on VLBI tracking data is analyzed. Through data simulation, the effects of bias in the predicted orbit, and of white noise and systematic errors in VLBI delays and USB ranges, on the positioning reduction of the spacecraft are analyzed. Results show that it is preferable to suppress the dispersion of positioning data points by applying a constraint on the geocentric distance of the spacecraft when only VLBI tracking data are available. The positioning solution is a biased estimate with observations from three VLBI stations. For the case of four tracking stations, the uncertainty of the constraint should be in accordance with the bias in the predicted orbit. White noise in delays and ranges mainly results in dispersion of the sequence of positioning data points.
Systematic errors in the observations cause a systematic offset of the positioning results, and there are trend jumps in the shape of

  2. Pathways to Deep Decarbonization in the United States

    Science.gov (United States)

    Torn, M. S.; Williams, J.

    2015-12-01

    Limiting anthropogenic warming to less than 2°C will require a reduction in global net greenhouse gas (GHG) emissions on the order of 80% below 1990 levels by 2050. Thus, there is a growing need to understand what would be required to achieve deep decarbonization (DD) in different economies. We examined the technical and economic feasibility of such a transition in the United States, evaluating the infrastructure and technology changes required to reduce U.S. GHG emissions in 2050 by 80% below 1990 levels. Using the PATHWAYS and GCAM models, we found that this level of decarbonization in the U.S. can be accomplished with existing commercial or near-commercial technologies, while providing the same level of energy services and economic growth as a reference case based on the U.S. DOE Annual Energy Outlook. Reductions are achieved through high levels of energy efficiency, decarbonization of electric generation, electrification of most end uses, and switching the remaining end uses to lower carbon fuels. Incremental energy system cost would be equivalent to roughly 1% of gross domestic product, not including potential non-energy benefits such as avoided human and infrastructure costs of climate change. Starting now on the deep decarbonization path would allow infrastructure stock turnover to follow natural replacement rates, which reduces costs, eases demand on manufacturing, and allows gradual consumer adoption. Energy system changes must be accompanied by reductions in non-energy and non-CO2 GHG emissions.

  3. Deep Belief Networks for dimensionality reduction

    NARCIS (Netherlands)

    Noulas, A.K.; Kröse, B.J.A.

    2008-01-01

    Deep Belief Networks are probabilistic generative models composed of multiple layers of latent stochastic variables. The top two layers have symmetric undirected connections, while the lower layers receive directed top-down connections from the layer above. The current state-of-the-art
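As a rough illustration of the building block of such a network: a Deep Belief Network is trained greedily by stacking restricted Boltzmann machines (RBMs). Below is a minimal single RBM trained with one step of contrastive divergence (CD-1) on toy binary patterns; all sizes, data, and the learning rate are invented for illustration.

```python
import numpy as np

# Minimal RBM with CD-1 on toy binary data (illustrative sketch only).
rng = np.random.default_rng(0)
n_vis, n_hid, n = 12, 8, 200

# Toy data: two prototype bit patterns with 5% random bit flips.
protos = rng.integers(0, 2, (2, n_vis))
data = protos[rng.integers(0, 2, n)]
flip = rng.random((n, n_vis)) < 0.05
data = np.where(flip, 1 - data, data).astype(float)

W = rng.normal(0, 0.01, (n_vis, n_hid))
a = np.zeros(n_vis)                      # visible biases
b = np.zeros(n_hid)                      # hidden biases
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

def recon_err(v):
    h = sig(v @ W + b)
    return np.mean((sig(h @ W.T + a) - v) ** 2)

err0 = recon_err(data)
lr = 0.1
for _ in range(200):
    ph = sig(data @ W + b)                           # positive phase
    h = (rng.random(ph.shape) < ph).astype(float)    # sample hidden states
    v1 = sig(h @ W.T + a)                            # one Gibbs step down
    ph1 = sig(v1 @ W + b)                            # and back up
    W += lr * (data.T @ ph - v1.T @ ph1) / n         # CD-1 update
    a += lr * (data - v1).mean(0)
    b += lr * (ph - ph1).mean(0)

err1 = recon_err(data)                   # reconstruction error after training
```

Stacking such layers, then treating the top pair as an undirected associative memory, yields the DBN architecture the abstract describes.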

  4. Analytical results of variance reduction characteristics of biased Monte Carlo for deep-penetration problems

    International Nuclear Information System (INIS)

    Murthy, K.P.N.; Indira, R.

    1986-01-01

    An analytical formulation is presented for calculating the mean and variance of transmission for a model deep-penetration problem. With this formulation, the variance reduction characteristics of two biased Monte Carlo schemes are studied. The first is the usual exponential biasing, for which it is shown that the optimal biasing parameter depends sensitively on the scattering properties of the shielding medium. The second is a scheme that couples exponential biasing to the recently proposed scattering-angle biasing. It is demonstrated that the coupled scheme performs better than exponential biasing alone.
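The mechanism of exponential biasing can be seen in a deliberately simplified, absorption-only slab model (no scattering, so it does not reproduce the paper's point that the optimum depends on scattering properties): free paths are sampled from a stretched exponential, and each transmitted history carries an importance weight so the estimate stays unbiased.

```python
import math
import random

# Absorption-only slab: exact transmission T = exp(-Sigma * L).
random.seed(42)
Sigma = 1.0            # total cross-section [1/mfp]
L = 5.0                # slab thickness: T = exp(-5) ~ 6.7e-3, deep penetration
N = 200_000
T_exact = math.exp(-Sigma * L)

def transmission(bias):
    """Score transmission with free paths drawn from Exp(Sigma*(1 - bias))."""
    Sig_b = Sigma * (1.0 - bias)            # stretched (biased) cross-section
    w = math.exp(-(Sigma - Sig_b) * L)      # weight of a transmitted history:
                                            # P_true(s > L) / P_biased(s > L)
    total = total_sq = 0.0
    for _ in range(N):
        s = random.expovariate(Sig_b)       # biased free-path sample
        if s > L:                           # history crosses the slab
            total += w
            total_sq += w * w
    mean = total / N
    var = total_sq / N - mean * mean        # per-history variance
    return mean, var

m_analog, v_analog = transmission(0.0)      # analog sampling
m_biased, v_biased = transmission(0.8)      # paths stretched toward crossing
```

Both estimators converge to T_exact, but the biased one scores far more (down-weighted) transmissions, cutting the per-history variance by orders of magnitude in this toy case.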

  5. Observations of the Hubble Deep Field with the Infrared Space Observatory .1. Data reduction, maps and sky coverage

    DEFF Research Database (Denmark)

    Serjeant, S.B.G.; Eaton, N.; Oliver, S.J.

    1997-01-01

    We present deep imaging at 6.7 and 15 μm from the CAM instrument on the Infrared Space Observatory (ISO), centred on the Hubble Deep Field (HDF). These are the deepest integrations published to date at these wavelengths in any region of sky. We discuss the observational strategy and the data reduction. The observed source density appears to approach the CAM confusion limit at 15 μm, and fluctuations in the 6.7-μm sky background may be identifiable with similar spatial fluctuations in the HDF galaxy counts. ISO appears to be detecting comparable field galaxy populations to the HDF, and our…

  6. Deep Learning-Based Noise Reduction Approach to Improve Speech Intelligibility for Cochlear Implant Recipients.

    Science.gov (United States)

    Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui

    2018-01-20

    We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal to noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules: noise classifier (NC) and deep denoising autoencoder (DDAE), thus termed (NC + DDAE). In a series of comprehensive experiments, we conduct qualitative and quantitative analyses on the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise, and a construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches by the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) normalized covariance measure to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted by nine Mandarin-speaking CI recipients. These nine CI recipients use their own clinical speech processors during testing. The experimental results of objective evaluation and listening test indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two compared classical NR techniques, under both matched and mismatched training-testing conditions. 
When compared to the two well-known conventional NR techniques under challenging listening conditions, the proposed NC + DDAE NR approach has superior noise suppression capabilities and introduces less distortion.
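A heavily simplified sketch of the denoising-autoencoder idea behind the DDAE module is below: a single tanh hidden layer trained to map noisy feature vectors back to clean ones. The data are synthetic 1-D vectors rather than real speech features, and all sizes and parameters are invented; the actual NC + DDAE system is much deeper.

```python
import numpy as np

# Toy denoising autoencoder: learn noisy -> clean on synthetic vectors.
rng = np.random.default_rng(0)
d, h, n = 16, 32, 512

t = np.linspace(0.0, 1.0, d)
clean = np.stack([np.sin(2*np.pi*rng.uniform(1, 3)*t + rng.uniform(0, 6))
                  for _ in range(n)])            # smooth "clean" vectors
noisy = clean + rng.normal(0.0, 0.5, clean.shape)

W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, d)); b2 = np.zeros(d)
lr = 0.05

def forward(x):
    a = np.tanh(x @ W1 + b1)
    return a, a @ W2 + b2

for _ in range(1500):                            # plain full-batch gradient descent
    a, out = forward(noisy)
    err = out - clean                            # target is the CLEAN signal
    gW2 = a.T @ err / n; gb2 = err.mean(0)
    da = (err @ W2.T) * (1.0 - a * a)            # backprop through tanh
    gW1 = noisy.T @ da / n; gb1 = da.mean(0)
    W1 -= lr*gW1; b1 -= lr*gb1; W2 -= lr*gW2; b2 -= lr*gb2

_, denoised = forward(noisy)
mse_in = float(np.mean((noisy - clean)**2))      # error of the raw noisy input
mse_out = float(np.mean((denoised - clean)**2))  # should be lower after training
```

The noise-classifier (NC) stage of the paper would, on top of this, pick a DDAE trained for the detected noise type before denoising.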

  7. The anaerobic degradation of organic matter in Danish coastal sediments: iron reduction, manganese reduction, and sulfate reduction

    DEFF Research Database (Denmark)

    Canfield, Donald Eugene; Thamdrup, B; Hansen, Jens Würgler

    1993-01-01

    ). In the deep portion of the basin, surface Mn enrichments reached 3.5 wt%, and Mn reduction was the only important anaerobic carbon oxidation process in the upper 10 cm of the sediment. In the less Mn-rich sediments from intermediate depths in the basin, Fe reduction ranged from somewhat less, to far more...... speculate that in shallow sediments of the Skagerrak, surface Mn oxides are present in a somewhat reduced oxidation level (deep basin....

  8. DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS

    International Nuclear Information System (INIS)

    Nonino, M.; Cristiani, S.; Vanzella, E.; Dickinson, M.; Reddy, N.; Rosati, P.; Grazian, A.; Giavalisco, M.; Kuntschner, H.; Fosbury, R. A. E.; Daddi, E.; Cesarsky, C.

    2009-01-01

    We present deep imaging in the U band covering an area of 630 arcmin² centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U_lim ∼ 29.8 (AB, 1σ, in a 1″ radius aperture), and have good image quality, with full width at half-maximum ∼0.″8. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 < z < 4. We also present deep R-band imaging, reaching R_lim ∼ 29 (AB, 1σ, 1″ radius aperture), with image quality ∼0.″75. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.

  9. The removal of the deep lateral wall in orbital decompression: Its contribution to exophthalmos reduction and influence on consecutive diplopia

    NARCIS (Netherlands)

    Baldeschi, Lelio; Macandie, Kerr; Hintschich, Christoph; Wakelkamp, Iris M. M. J.; Prummel, Mark F.; Wiersinga, Wilmar M.

    2005-01-01

    PURPOSE: To evaluate the contribution of maximal removal of the deep lateral wall of the orbit to exophthalmos reduction in Graves' orbitopathy and its influence on the onset of consecutive diplopia. DESIGN: Case-control study. METHODS: The medical records of two cohorts of patients affected by

  10. Homoacetogenesis in Deep-Sea Chloroflexi, as Inferred by Single-Cell Genomics, Provides a Link to Reductive Dehalogenation in Terrestrial Dehalococcoidetes

    Directory of Open Access Journals (Sweden)

    Holly L. Sewell

    2017-12-01

    The deep marine subsurface is one of the largest unexplored biospheres on Earth and is widely inhabited by members of the phylum Chloroflexi. In this report, we investigated genomes of single cells obtained from deep-sea sediments of the Peruvian Margin, which are enriched in such Chloroflexi. 16S rRNA gene sequence analysis placed two of these single-cell-derived genomes (DscP3 and Dsc4) in a clade of subphylum I Chloroflexi which were previously recovered from deep-sea sediment in the Okinawa Trough, and a third (DscP2-2) as a member of the previously reported DscP2 population from Peruvian Margin site 1230. The presence of genes encoding enzymes of a complete Wood-Ljungdahl pathway, glycolysis/gluconeogenesis, a Rhodobacter nitrogen fixation (Rnf) complex, glycosyltransferases, and formate dehydrogenases in the single-cell genomes of DscP3 and Dsc4, and the presence of an NADH-dependent reduced ferredoxin:NADP oxidoreductase (Nfn) and Rnf in the genome of DscP2-2, imply a homoacetogenic lifestyle of these abundant marine Chloroflexi. We also report here the first complete pathway for anaerobic benzoate oxidation to acetyl coenzyme A (CoA) in the phylum Chloroflexi (DscP3 and Dsc4), including a class I benzoyl-CoA reductase. Of remarkable evolutionary significance, we discovered a gene encoding a formate dehydrogenase (FdnI) with reciprocal closest identity to the formate dehydrogenase-like protein (complex iron-sulfur molybdoenzyme [CISM], DET0187) of terrestrial Dehalococcoides/Dehalogenimonas spp. This formate dehydrogenase-like protein has been shown to lack formate dehydrogenase activity in Dehalococcoides/Dehalogenimonas spp. and is instead hypothesized to couple HupL hydrogenase to a reductive dehalogenase in the catabolic reductive dehalogenation pathway. This finding of a close functional homologue provides an important missing link for understanding the origin and the metabolic core of terrestrial Dehalococcoides/Dehalogenimonas spp. and of

  11. Achieving CO2 reductions in Colombia: Effects of carbon taxes and abatement targets

    International Nuclear Information System (INIS)

    Calderón, Silvia; Alvarez, Andrés Camilo; Loboguerrero, Ana María; Arango, Santiago; Calvin, Katherine; Kober, Tom; Daenzer, Kathryn; Fisher-Vanden, Karen

    2016-01-01

    In this paper we investigate CO2 emission scenarios for Colombia and the effects of implementing carbon taxes and abatement targets on the energy system. By comparing baseline and policy scenario results from two integrated assessment partial-equilibrium models (TIAM-ECN and GCAM) and two general-equilibrium models (Phoenix and MEG4C), we provide an indication of future developments and dynamics in the Colombian energy system. Currently, the carbon intensity of the energy system in Colombia is low compared to other countries in Latin America. However, this trend may change given the projected rapid growth of the economy and the potential increase in the use of carbon-based technologies. Climate policy in Colombia is under development and has yet to consider economic instruments such as taxes and abatement targets. This paper shows how taxes or abatement targets can achieve significant CO2 reductions in Colombia. Though abatement may be achieved through different pathways, taxes and targets promote the entry of cleaner energy sources into the market and reduce final energy demand through energy efficiency improvements and other demand-side responses. The electric power sector plays an important role in achieving CO2 emission reductions in Colombia, through the increase of hydropower, the introduction of wind technologies, and the deployment of biomass, coal and natural gas with CO2 capture and storage (CCS). Uncertainty over the prevailing mitigation pathway reinforces the importance of climate policy to guide sectors toward low-carbon technologies. This paper also assesses the economy-wide implications of mitigation policies such as potential losses in GDP and consumption. An assessment of the legal, institutional, social and environmental barriers to economy-wide mitigation policies is critical yet beyond the scope of this paper. - Highlights: • Four energy and economy-wide models under carbon mitigation scenarios are compared. • Baseline results show that CO

  12. Response to deep TMS in depressive patients with previous electroconvulsive treatment.

    Science.gov (United States)

    Rosenberg, Oded; Zangen, Abraham; Stryjer, Rafael; Kotler, Moshe; Dannon, Pinhas N

    2010-10-01

    The efficacy of transcranial magnetic stimulation (TMS) in the treatment of major depression has already been shown. Novel TMS coils allowing stimulation of deeper brain regions have recently been developed and studied. Our study aimed to explore the possible efficacy of deep TMS in patients with resistant depression who had previously undergone electroconvulsive therapy (ECT). Using Brainsway's deep TMS H1 coil, six patients who had previously undergone ECT were treated at 120% of the motor threshold at a frequency of 20 Hz. Patients underwent five sessions per week, for up to 4 weeks. Before the study, patients were evaluated using the Hamilton depression rating scale (HDRS, 24 items), the Hamilton anxiety scale, and the Beck depression inventory, and were evaluated again after 5, 10, 15, and 20 daily treatments. Response to treatment was defined as a reduction in the HDRS of at least 50%, and remission as a reduction of the HDRS-24 below 10 points. Two of six patients responded to the treatment with deep TMS, including one who achieved full remission. Our results suggest the possibility of a subpopulation of depressed patients who may benefit from deep TMS treatment, including patients who did not respond to ECT previously. However, the power of the study is small, and larger samples are needed. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Global Emissions of Nitrous Oxide: Key Source Sectors, their Future Activities and Technical Opportunities for Emission Reduction

    Science.gov (United States)

    Winiwarter, W.; Höglund-Isaksson, L.; Klimont, Z.; Schöpp, W.; Amann, M.

    2017-12-01

    Nitrous oxide originates primarily from natural biogeochemical processes, but its atmospheric concentrations have been strongly affected by human activities. According to IPCC, it is the third largest contributor to the anthropogenic greenhouse gas emissions (after carbon dioxide and methane). Deep decarbonization scenarios, which are able to constrain global temperature increase within 1.5°C, require strategies to cut methane and nitrous oxide emissions on top of phasing out carbon dioxide emissions. Employing the Greenhouse gas and Air pollution INteractions and Synergies (GAINS) model, we have estimated global emissions of nitrous oxide until 2050. Using explicitly defined emission reduction technologies we demonstrate that, by 2030, about 26% ± 9% of the emissions can be avoided assuming full implementation of currently existing reduction technologies. Nearly a quarter of this mitigation can be achieved at marginal costs lower than 10 Euro/t CO2-eq with the chemical industry sector offering important reductions. Overall, the largest emitter of nitrous oxide, agriculture, also provides the largest emission abatement potentials. Emission reduction may be achieved by precision farming methods (variable rate technology) as well as by agrochemistry (nitrification inhibitors). Regionally, the largest emission reductions are achievable where intensive agriculture and industry are prevalent (production and application of mineral fertilizers): Centrally Planned Asia including China, North and Latin America, and South Asia including India. Further deep cuts in nitrous oxide emissions will require extending reduction efforts beyond strictly technological solutions, i.e., considering behavioral changes, including widespread adoption of "healthy diets" minimizing excess protein consumption.
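The "nearly a quarter of this mitigation below 10 Euro/t" statement is a point read off a marginal abatement cost curve: sort measures by marginal cost and accumulate their potentials up to the cost budget. A toy version with invented figures (not GAINS output):

```python
# All figures below are invented for illustration -- they are not GAINS results.
techs = [
    # (measure, abatement potential [GtCO2-eq/yr], marginal cost [eur/tCO2-eq])
    ("N2O destruction in nitric/adipic acid plants", 0.20, 3.0),
    ("wastewater treatment",                         0.05, 9.0),
    ("nitrification inhibitors",                     0.30, 18.0),
    ("variable-rate fertilization",                  0.25, 30.0),
]
budget = 10.0                                    # eur per tCO2-eq

curve = sorted(techs, key=lambda tech: tech[2])  # cost curve, cheapest first
cheap = sum(pot for _, pot, cost in curve if cost <= budget)
total = sum(pot for _, pot, _ in curve)
share = cheap / total                            # fraction achievable under budget
```

With these made-up numbers, about 0.25 GtCO2-eq/yr (roughly 30% of the total potential) falls under the 10 Euro/t threshold, illustrating the shape of the argument rather than the GAINS estimate itself.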

  14. Association of Kinesthetic and Read-Write Learner with Deep Approach Learning and Academic Achievement

    Directory of Open Access Journals (Sweden)

    Latha Rajendra Kumar

    2011-06-01

    Full Text Available Background: The main purpose of the present study was to further investigate study processes, learning styles, and academic achievement in medical students. Methods: A total of 214 (mean age 22.5 years first and second year students - preclinical years - at the Asian Institute of Medical Science and Technology (AIMST University School of Medicine, in Malaysia participated.  There were 119 women (55.6% and 95 men (44.4%.   Biggs questionnaire for determining learning approaches and the VARK questionnaire for determining learning styles were used.  These were compared to the student’s performance in the assessment examinations. Results: The major findings were 1 the majority of students prefer to study alone, 2 most students employ a superficial study approach, and 3 students with high kinesthetic and read-write scores performed better on examinations and approached the subject by deep approach method compared to students with low scores.  Furthermore, there was a correlation between superficial approach scores and visual learner’s scores. Discussion: Read-write and kinesthetic learners who adopt a deep approach learning strategy perform better academically than do the auditory, visual learners that employ superficial study strategies.   Perhaps visual and auditory learners can be encouraged to adopt kinesthetic and read-write styles to enhance their performance in the exams.

  15. Deep learning aided decision support for pulmonary nodules diagnosing: a review.

    Science.gov (United States)

    Yang, Yixin; Feng, Xiaoyi; Chi, Wenhao; Li, Zhengyang; Duan, Wenzhe; Liu, Haiping; Liang, Wenhua; Wang, Wei; Chen, Ping; He, Jianxing; Liu, Bo

    2018-04-01

    Deep learning techniques have recently emerged as promising decision-support approaches to automatically analyze medical images for different clinical diagnostic purposes. Computer-assisted diagnosis of pulmonary nodules has received considerable theoretical, computational, and empirical research attention, and many methods have been developed over the past five decades for the detection and classification of pulmonary nodules on different image formats, including chest radiographs, computed tomography (CT), and positron emission tomography. The recent remarkable progress in deep learning for pulmonary nodules, achieved in both academia and industry, has demonstrated that deep learning techniques are promising alternative decision-support schemes for effectively tackling the central issues in pulmonary nodule diagnosis, including feature extraction, nodule detection, false-positive reduction, and benign-malignant classification, for the huge volume of chest scan data. The main goal of this investigation is to provide a comprehensive state-of-the-art review of deep learning aided decision support for pulmonary nodule diagnosis. As far as the authors know, this is the first review devoted exclusively to deep learning techniques for pulmonary nodule diagnosis.

  16. DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks.

    Science.gov (United States)

    Li, Chao; Wang, Xinggang; Liu, Wenyu; Latecki, Longin Jan

    2018-04-01

    Mitotic count is a critical predictor of tumor aggressiveness in the breast cancer diagnosis. Nowadays mitosis counting is mainly performed by pathologists manually, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting the mitotic cells from histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating mitosis region when only a weak label is given (i.e., only the centroid pixel of mitosis is annotated), an elaborately designed deep detection network for localizing mitosis by using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from ICPR 2012 grand challenge merely using the deep detection network. For the ICPR 2014 MITOSIS dataset that only provides the centroid location of mitosis, we employ the segmentation model to estimate the bounding box annotation for training the deep detection network. We also apply the verification model to eliminate some false positives produced from the detection model. By fusing scores of the detection and verification models, we achieve the state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Reductions in energy use and environmental emissions achievable with utility-based cogeneration: Simplified illustrations for Ontario

    International Nuclear Information System (INIS)

    Rosen, M.A.

    1998-01-01

    Significant reductions in energy use and environmental emissions are demonstrated to be achievable when electrical utilities use cogeneration. Simplified illustrations of these reductions are presented for the province of Ontario, based on applying cogeneration to the facilities of the main provincial electrical utility. Three cogeneration illustrations are considered: (i) fuel cogeneration is substituted for fuel electrical generation and fuel heating, (ii) nuclear cogeneration is substituted for nuclear electrical generation and fuel heating, and (iii) fuel cogeneration is substituted for fuel electrical generation and electrical heating. The substitution of cogeneration for separate electrical and heat generation processes for all illustrations considered leads to significant reductions in fuel energy consumption (24-61%), which lead to approximately proportional reductions in emissions. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
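The arithmetic behind such savings can be sketched with illustrative efficiencies (not Rosen's actual Ontario data): compare the fuel needed to serve the same electricity and heat demand separately versus with a single cogeneration plant.

```python
# Illustrative efficiencies only -- not the paper's Ontario data.
E, H = 1.0, 1.5              # electricity and heat demand (arbitrary units)

# Separate production: condensing power plant plus a fuel-fired boiler.
eta_e, eta_b = 0.35, 0.85
fuel_separate = E / eta_e + H / eta_b

# Cogeneration: one fuel stream yields electricity and recovered heat.
eta_cog_e, eta_cog_h = 0.30, 0.50
fuel_cogen = E / eta_cog_e                   # plant sized on electrical demand
heat_recovered = fuel_cogen * eta_cog_h      # must cover the heat demand H

saving = 1.0 - fuel_cogen / fuel_separate    # fractional fuel saving
```

With these assumed efficiencies the saving works out to roughly 28%, at the low end of the 24-61% range the paper reports; the exact figure depends on the demand mix and the efficiencies substituted.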

  18. How to Achieve CO2 Emission Reduction Goals: What 'Jazz' and 'Symphony' Can Offer

    International Nuclear Information System (INIS)

    Rose, K.

    2013-01-01

    Achieving CO2 emission reduction goals remains one of today's most challenging tasks. Global energy demand will grow for many decades to come, and in many regions of the world cheap fossil fuels seem to be the way forward to meet ever-growing energy demand. However, there are negative consequences to this, most notably increasing emission levels. Politicians and industry must therefore accept that hard choices need to be made in this generation to bring about real change for future generations and the planet, and to limit CO2 emissions and climate change. In his presentation, Prof. Rose will provide an insight into how CO2 emission reduction goals can be set and achieved, and how a balance between future energy needs and supply can be realised in the long run up to 2050, both globally and regionally. This will be done on the basis of WEC's own leading analysis in this area, namely its recently launched report World Energy Scenarios: Composing Energy Futures to 2050 and WEC's scenarios, Jazz and Symphony. WEC's full analysis, the complete report and supporting material are available online at: http://www.worldenergy.org/publications/2013/world-energy-scenarios-composing-energy-futures-to-2050. (author)

  19. Biotic and a-biotic Mn and Fe cycling in deep sediments across a gradient of sulfate reduction rates along the California margin

    Science.gov (United States)

    Schneider-Mor, A.; Steefel, C.; Maher, K.

    2011-12-01

    The coupling between the biological and abiotic processes controlling trace metals in deep marine sediments is not well understood, although the fluxes of elements and trace metals across the sediment-water interface can be a major contribution to ocean water. Four marine sediment profiles (ODP Leg 167 sites 1011, 1017, 1018 and 1020) were examined to evaluate and quantify the biotic and abiotic reaction networks and fluxes that occur in deep marine sediments. We compared biogeochemical processes across a gradient of sulfate reduction (SR) rates with the objective of studying the processes that control these rates and how they affect the redistribution of major elements as well as trace metals. The rates of sulfate reduction, methanogenesis and anaerobic methane oxidation (AMO) were constrained using a multicomponent reactive transport model (CrunchFlow). Constraints for the model include sediment and pore water concentrations, as well as %CaCO3, %biogenic silica, wt% carbon, and δ13C of total organic carbon (TOC), particulate organic matter (POC) and mineral-associated carbon (MAC). The sites are distinguished by the depth of AMO: a shallow zone is observed at sites 1018 (9 to 19 meters composite depth (mcd)) and 1017 (19 to 30 mcd), while deeper zones occur at sites 1011 (56 to 76 mcd) and 1020 (101 to 116 mcd). Sulfate reduction rates at the shallow AMO sites are on the order of 1×10^-16 mol/L/yr, much faster than rates in the deeper sulfate reduction zones (1-3×10^-17 mol/L/yr), as expected. The dissolved metal ion concentrations varied between the sites, with Fe (0.01-7 μM) and Mn (0.01-57 μM) concentrations highest at site 1020 and lowest at site 1017. The highest Fe and Mn concentrations occurred at various depths and were not directly correlated with the rates of sulfate reduction or the maximum alkalinity values. The main processes that control the cycling of Fe are the production of sulfide from sulfate reduction and the distribution of Fe-oxides. The Mn distribution

  20. Policy packages to achieve demand reduction

    International Nuclear Information System (INIS)

    Boardman, Brenda

    2005-01-01

    In many sectors and many countries, energy demand is still increasing, despite decades of policies to reduce demand. Controlling climate change is becoming more urgent, so there is a need to devise policies that will virtually guarantee demand reduction. This has to come from policy, at least in the UK, as the conditions do not yet exist under which consumers will 'pull' the market for energy efficiency or manufacturers will use technological development to 'push' it. That virtuous circle has to be created by a mixture of consumer education and restrictions on manufacturers (for instance, permission to manufacture). The wider policy options include higher prices for energy and stronger product policies. An assessment of the effectiveness of different policy packages indicates some guiding principles, for instance that improved product policy must precede higher prices, otherwise consumers are unable to react effectively to price rises. The evidence will be assessed about the ways in which national and EU policies can either reinforce, duplicate or undermine each other. Another area of examination will be timescales: what is the time lag between the implementation of a policy (whether price- or product-based) and the level of maximum reductions? In addition, the emphasis given to factors such as equity, raising investment funds and speed of delivery also influences policy design and the extent to which absolute carbon reductions can be expected.

  1. What Really is Deep Learning Doing?

    OpenAIRE

    Xiong, Chuyu

    2017-01-01

    Deep learning has achieved a great success in many areas, from computer vision to natural language processing, to game playing, and much more. Yet, what deep learning is really doing is still an open question. There are a lot of works in this direction. For example, [5] tried to explain deep learning by group renormalization, and [6] tried to explain deep learning from the view of functional approximation. In order to address this very crucial question, here we see deep learning from perspect...

  2. pathways to deep decarbonization - 2014 report

    International Nuclear Information System (INIS)

    Sachs, Jeffrey; Guerin, Emmanuel; Mas, Carl; Schmidt-Traub, Guido; Tubiana, Laurence; Waisman, Henri; Colombier, Michel; Bulger, Claire; Sulakshana, Elana; Zhang, Kathy; Barthelemy, Pierre; Spinazze, Lena; Pharabod, Ivan

    2014-09-01

    The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (deg. C). Achieving the 2 deg. C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. This will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization.' Successfully transitioning to a low-carbon economy will require unprecedented global cooperation, including a global cooperative effort to accelerate the development and diffusion of some key low carbon technologies. As underscored throughout this report, the results of the DDPP analyses remain preliminary and incomplete. The DDPP proceeds in two phases. This 2014 report describes the DDPP's approach to deep decarbonization at the country level and presents preliminary findings on technically feasible pathways to deep decarbonization, utilizing technology assumptions and timelines provided by the DDPP Secretariat. At this stage we have not yet considered the economic and social costs and benefits of deep decarbonization, which will be the topic for the next report. The DDPP is issuing this 2014 report to the UN Secretary-General Ban Ki-moon in support of the Climate Leaders' Summit at the United Nations on September 23, 2014. This 2014 report by the Deep Decarbonization Pathways Project (DDPP) summarizes preliminary findings of the technical pathways developed by the DDPP Country Research Partners with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C, without, at this stage, consideration of economic and social costs and benefits.
The DDPP is a knowledge

  3. Deep Reinforcement Learning: An Overview

    OpenAIRE

    Li, Yuxi

    2017-01-01

    We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with the background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsuperv...

  4. Sulfate reduction controlled by organic matter availability in deep sediment cores from the saline, alkaline Lake Van (Eastern Anatolia, Turkey

    Directory of Open Access Journals (Sweden)

    Clemens eGlombitza

    2013-07-01

    As part of the International Continental Drilling Program (ICDP) deep lake drilling project PaleoVan, we investigated sulfate reduction (SR) in deep sediment cores of the saline, alkaline (salinity 21.4 ‰, alkalinity 155 meq L-1, pH 9.81) Lake Van, Turkey. The cores were retrieved in the Northern Basin (NB) and at Ahlat Ridge (AR) and reached a maximum depth of 220 m. Additionally, 65-75 cm long gravity cores were taken at both sites. Sulfate reduction rates (SRR) were low (≤ 22 nmol cm-3 d-1) compared to lakes with higher salinity and alkalinity, indicating that salinity and alkalinity are not limiting SR in Lake Van. Both sites differ significantly in rates and depth distribution of SR. In NB, SRR are up to 10 times higher than at AR. Sulfate reduction could be detected down to 19 meters below lake floor (mblf) at NB and down to 13 mblf at AR. Although SRR were lower at AR than at NB, organic matter (OM) concentrations were higher. In contrast, dissolved OM in the pore water at AR contained more macromolecular OM and less low-molecular-weight OM. We thus suggest that OM content alone cannot be used to infer microbial activity at Lake Van, and that the quality of OM has an important impact as well. These differences suggest that biogeochemical processes in lacustrine sediments react very sensitively to small variations in geological, physical or chemical parameters over relatively short distances.

  5. Visualizing histopathologic deep learning classification and anomaly detection using nonlinear feature space dimensionality reduction.

    Science.gov (United States)

    Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias

    2018-05-16

    There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can superimpose randomly sampled regions of test images and use their distribution to render statistically driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
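
    The dimensionality-reduction step at the heart of this workflow can be sketched generically with scikit-learn's t-SNE. The 2048-D "CNN features" below are synthetic stand-ins, not the authors' histology data or pipeline:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for CNN activations taken before the softmax layer: two synthetic
# clusters of 2048-D feature vectors (e.g., two tissue classes).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(0.0, 1.0, size=(40, 2048)),
    rng.normal(3.0, 1.0, size=(40, 2048)),
])

# t-SNE maps the high-dimensional features to 2-D so a human can inspect how
# the network organizes classes in its feature space.
embedding = TSNE(n_components=2, perplexity=20, random_state=0,
                 init="pca").fit_transform(features)
print(embedding.shape)
```

    Classes that the network separates in feature space appear as separated point clouds in the 2-D embedding, which is what makes the superimposed-region classification described above possible.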

  6. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed deep galaxy architecture consists of 8 layers, one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...
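
    The pipeline described (a convolutional layer extracting features, feeding fully connected layers that score three classes) can be sketched in plain NumPy. The filter count, sizes, and random weights below are toy values for illustration only; the paper's 96-filter, 8-layer architecture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kernels):
    """Valid-mode 2-D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy "galaxy image" and an untrained 4-filter conv layer (the paper uses 96 filters).
img = rng.random((16, 16))
kernels = rng.normal(size=(4, 3, 3))
W1 = rng.normal(size=(32, 4 * 14 * 14)) * 0.01
W2 = rng.normal(size=(3, 32)) * 0.1    # 3 classes: Elliptical, Spiral, Irregular

h = np.maximum(conv2d_valid(img, kernels), 0).ravel()   # conv + ReLU feature extraction
probs = softmax(W2 @ np.maximum(W1 @ h, 0))             # two fully connected layers
print(probs.round(3), probs.sum())
```

    Training would adjust the kernels and weight matrices; the forward pass above only shows how the convolutional features flow into a three-way softmax classification.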

  7. Developments in greenhouse gas emissions and net energy use in Danish agriculture - How to achieve substantial CO2 reductions?

    International Nuclear Information System (INIS)

    Dalgaard, T.; Olesen, J.E.; Petersen, S.O.; Petersen, B.M.; Jorgensen, U.; Kristensen, T.; Hutchings, N.J.; Gyldenkaerne, S.; Hermansen, J.E.

    2011-01-01

    Greenhouse gas (GHG) emissions from agriculture are a significant contributor to total Danish emissions. Consequently, much effort is currently devoted to the exploration of potential strategies to reduce agricultural emissions. This paper presents results from a study estimating agricultural GHG emissions in the form of methane, nitrous oxide and carbon dioxide (including carbon sources and sinks, and the impact of energy consumption/bioenergy production) from Danish agriculture in the years 1990-2010. An analysis of possible measures to reduce the GHG emissions indicated that a 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable, including mitigation measures in relation to the handling of manure and fertilisers, optimization of animal feeding, cropping practices, and land use changes with more organic farming, afforestation and energy crops. In addition, bioenergy production may be increased significantly without reducing food production, whereby Danish agriculture could achieve a positive energy balance. - Highlights: → GHG emissions from Danish agriculture 1990-2010 are calculated, including carbon sequestration. → Effects of measures to further reduce GHG emissions are listed. → Land use scenarios for a substantially reduced GHG emission by 2050 are presented. → A 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable. → Via bioenergy production Danish agriculture could achieve a positive energy balance. - Scenario studies of greenhouse gas mitigation measures illustrate the possible realization of CO2 reductions for Danish agriculture by 2050, sustaining current food production.

  8. Outcomes of the DeepWind conceptual design

    NARCIS (Netherlands)

    Paulsen, US; Borg, M.; Madsen, HA; Pedersen, TF; Hattel, J; Ritchie, E.; Simao Ferreira, C.; Svendsen, H.; Berthelsen, P.A.; Smadja, C.

    2015-01-01

    DeepWind has been presented as a novel floating offshore wind turbine concept with cost reduction potentials. Twelve international partners developed a Darrieus type floating turbine with new materials and technologies for deep-sea offshore environment. This paper summarizes results of the 5 MW

  9. An Assessment of Envelope Measures in Mild Climate Deep Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-06-01

    Energy end-uses and interior comfort conditions have been monitored in 11 Deep Energy Retrofits (DERs) in a mild marine climate. Two broad categories of DER envelope were identified: first, bringing homes up to current code levels of insulation and airtightness, and second, enhanced retrofits that go beyond these code requirements. The efficacy of envelope measures in DERs was difficult to determine, due to the intermingled effects of enclosure improvements, HVAC system upgrades and changes in interior comfort conditions. While energy reductions in these project homes could not be assigned to specific improvements, the combined effects of changes in enclosure, HVAC system and comfort led to average heating energy reductions of 76% (12,937 kWh) in the five DERs with pre-retrofit data, or 80% (5.9 kWh/ft2) when normalized by floor area. Overall, net-site energy reductions averaged 58% (15,966 kWh; n=5), and DERs with code-style envelopes achieved average net-site energy reductions of 65% (18,923 kWh; n=4). In some homes, the heating energy reductions were actually larger than the whole house reductions that were achieved, which suggests that substantial additional energy uses were added to the home during the retrofit that offset some heating savings. Heating system operation and energy use was shown to vary inconsistently with outdoor conditions, suggesting that most DERs were not thermostatically controlled and that occupants were engaged in managing the indoor environmental conditions. Indoor temperatures maintained in these DERs were highly variable, and no project home consistently provided conditions within the ASHRAE Standard 55-2010 heating season comfort zone. Thermal comfort and heating system operation had a large impact on performance and were found to depend upon occupant activities, so DERs should be designed with the occupants' needs and patterns of consumption in mind. Beyond-code building envelopes were not found to be

  10. The influence of biopreparations on the reduction of energy consumption and CO2 emissions in shallow and deep soil tillage.

    Science.gov (United States)

    Naujokienė, Vilma; Šarauskis, Egidijus; Lekavičienė, Kristina; Adamavičienė, Aida; Buragienė, Sidona; Kriaučiūnienė, Zita

    2018-06-01

    The application of innovative agricultural technologies is very important for increasing the efficiency of agricultural production, ensuring high plant productivity, production quality, farm profitability, a positive balance of energy use, and compliance with environmental protection requirements. A key problem is that soil surfaces covered with plant residue negatively affect the operation, traction resistance, energy consumption, and environmental pollution of tillage machines. The objective of this work was to determine how different biopreparations reduce energy consumption and CO2 emissions. Experimental research was carried out with a control (SC1) and seven scenarios (SC2-SC8) using bacterial and non-bacterial biopreparations in different formulations (with essential and mineral oils, extracts of various grasses and sea algae, phosphorus, potassium, humic and gibberellic acids, copper, zinc, manganese, iron, and calcium), with discing and plowing taken as the energy consumption parameters of shallow and deep soil tillage machines, respectively. CO2 emissions were determined by evaluating soil characteristics (such as hardness, total porosity and density). Meteorological conditions such as average daily temperatures (2015: 20.3 °C; 2016: 16.9 °C) and precipitation (2015: 6.9 mm; 2016: 114.9 mm) during the month led to different results in 2015 and 2016. Substantial differences in average energy consumption were identified in approximately 62% of the biopreparation usage scenarios. The experiments established that crop field treatments with biological preparations at the beginning of vegetation could reduce the energy consumption of shallow tillage machines by up to approximately 23%, whereas the energy consumption of deep tillage could be reduced by up to approximately 19.2% compared with the control

  11. Deep learning methods for protein torsion angle prediction.

    Science.gov (United States)

    Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin

    2017-09-18

    Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30°, respectively, on an independent dataset. The MAE of the phi angle is comparable to existing methods, but the MAE of the psi angle is 29°, 2° lower than existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
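
    The MAE figures quoted for phi and psi only make sense with periodic wraparound, since torsion angles live on a circle. A minimal sketch of that metric follows (my formulation for illustration, not necessarily the authors' exact evaluation code):

```python
import numpy as np

def angular_mae(pred_deg, true_deg):
    """Mean absolute error between angles, taking the shorter way around the circle."""
    diff = np.abs(np.asarray(pred_deg) - np.asarray(true_deg)) % 360.0
    return np.mean(np.minimum(diff, 360.0 - diff))

# A prediction of 179 deg vs a true angle of -179 deg is only 2 deg off, not
# 358 deg -- naive subtraction would wildly overstate the error.
print(angular_mae([179.0, 10.0], [-179.0, 40.0]))  # (2 + 30) / 2 = 16.0
```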

  12. Cough event classification by pretrained deep neural network.

    Science.gov (United States)

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases. For measuring cough severity, an accurate and objective cough monitor is desired by the respiratory disease community. This paper introduces a better-performing algorithm, a pretrained deep neural network (DNN), for the cough classification problem, which is a key step in a cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information in the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for the deep neural network is learned. The fine-tuning step then uses back propagation to tune the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are initially obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs, respectively. The final decision is made with the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient-dependent (PD) and patient-independent (PI) experimental settings were used to evaluate the models. Five criteria (sensitivity, specificity, F1, macro average and micro average) are reported to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM method on F1 and micro average, with maximal error reductions of 14% and 11% in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity, with a maximal 14% error reduction on both PD and PI. 
In this paper, we tried pretrained deep neural network in
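
    The decision rule described above (Viterbi decoding over HMM states, labeling a sample as cough if a cough state appears in the best path) can be sketched generically. The two-state model and all probabilities below are invented for illustration; the per-frame observation log-probabilities stand in for the DNN outputs.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B_obs):
    """Most likely state sequence given initial, transition, and per-frame
    observation log-probabilities (the role played by the DNN in the paper)."""
    T, S = log_B_obs.shape
    delta = log_pi + log_B_obs[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A            # scores[i, j]: best path i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B_obs[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack from the best end state
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two states: 0 = noncough, 1 = cough. The middle frames strongly favor cough.
log_pi = np.log([0.7, 0.3])
log_A = np.log([[0.8, 0.2], [0.3, 0.7]])
log_B = np.log([[0.9, 0.1], [0.2, 0.8], [0.1, 0.9], [0.8, 0.2]])
path = viterbi(log_pi, log_A, log_B)
print(path, "-> cough" if 1 in path else "-> noncough")
```

    With these numbers the best path visits the cough state in the middle frames, so the sample would be labeled as cough.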

  13. Deep Learning in Distance Education: Are We Achieving the Goal?

    Science.gov (United States)

    Shearer, Rick L.; Gregg, Andrea; Joo, K. P.

    2015-01-01

    As educators, one of our goals is to help students arrive at deeper levels of learning. However, how is this accomplished, especially in online courses? This design-based research study explored the concept of deep learning through a series of design changes in a graduate education course. A key question that emerged was through what learning…

  14. Deep Energy Retrofits - Eleven California Case Studies

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fisher, Jeremy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-10-01

    This research documents and demonstrates viable approaches using existing materials, tools and technologies in owner-conducted deep energy retrofits (DERs). These retrofits are meant to reduce energy use by 70% or more, and include extensive upgrades to the building enclosure, heating, cooling and hot water equipment, and often incorporate appliance and lighting upgrades as well as the addition of renewable energy. In this report, 11 Northern California (IECC climate zone 3) DER case studies are described and analyzed in detail, including building diagnostic tests and end-use energy monitoring results. All projects recognized the need to improve the home and its systems approximately to current building code-levels, and then pursued deeper energy reductions through either enhanced technology/ building enclosure measures, or through occupant conservation efforts, both of which achieved impressive energy performance and reductions. The beyond-code incremental DER costs averaged $25,910 for the six homes where cost data were available. DERs were affordable when these incremental costs were financed as part of a remodel, averaging a $30 per month increase in the net-cost of home ownership.

  15. Developments in greenhouse gas emissions and net energy use in Danish agriculture - How to achieve substantial CO2 reductions?

    Energy Technology Data Exchange (ETDEWEB)

    Dalgaard, T., E-mail: tommy.dalgaard@agrsci.dk [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark); Olesen, J.E.; Petersen, S.O.; Petersen, B.M.; Jorgensen, U.; Kristensen, T.; Hutchings, N.J. [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark); Gyldenkaerne, S. [Aarhus University, National Environmental Research Institute, Frederiksborgvej 399, DK-4000 Roskilde (Denmark); Hermansen, J.E. [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark)

    2011-11-15

    Greenhouse gas (GHG) emissions from agriculture are a significant contributor to total Danish emissions. Consequently, much effort is currently devoted to the exploration of potential strategies to reduce agricultural emissions. This paper presents results from a study estimating agricultural GHG emissions in the form of methane, nitrous oxide and carbon dioxide (including carbon sources and sinks, and the impact of energy consumption/bioenergy production) from Danish agriculture in the years 1990-2010. An analysis of possible measures to reduce the GHG emissions indicated that a 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable, including mitigation measures in relation to the handling of manure and fertilisers, optimization of animal feeding, cropping practices, and land use changes with more organic farming, afforestation and energy crops. In addition, bioenergy production may be increased significantly without reducing food production, whereby Danish agriculture could achieve a positive energy balance. - Highlights: → GHG emissions from Danish agriculture 1990-2010 are calculated, including carbon sequestration. → Effects of measures to further reduce GHG emissions are listed. → Land use scenarios for a substantially reduced GHG emission by 2050 are presented. → A 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable. → Via bioenergy production Danish agriculture could achieve a positive energy balance. - Scenario studies of greenhouse gas mitigation measures illustrate the possible realization of CO2 reductions for Danish agriculture by 2050, sustaining current food production.

  16. Construction of Neural Networks for Realization of Localized Deep Learning

    Directory of Open Access Journals (Sweden)

    Charles K. Chui

    2018-05-01

    Full Text Available The subject of deep learning has recently attracted users of machine learning from various disciplines, including medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, the theoretical development of deep learning is still in its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustration, we only focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to deal with outliers. The main theoretical result in this paper is the order O(m^(-2s/(2s+d))) of approximation of the regression function with regularity s, in terms of the number m of sample points, where the (unknown) manifold dimension d replaces the dimension D of the sampling (Euclidean) space for shallow nets.

  17. Reduced impact of induced gate noise on inductively degenerated LNAs in deep submicron CMOS technologies

    DEFF Research Database (Denmark)

    Rossi, P.; Svelto, F.; Mazzanti, A.

    2005-01-01

    Designers of radio-frequency inductively-degenerated CMOS low-noise-amplifiers have usually not followed the guidelines for achieving minimum noise figure. Nonetheless, state-of-the-art implementations display noise figure values very close to the theoretical minimum. In this paper, we point out...... that this is due to the effect of the parasitic overlap capacitances in the MOS device. In particular, we show that overlap capacitances lead to a significant induced-gate-noise reduction, especially when deep sub-micron CMOS processes are used....

  18. Auxiliary Deep Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    2016-01-01

    Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improves the variational approximation. The auxiliary variables leave...... the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge...... faster with better results. We show state-of-the-art performance within semi-supervised learning on MNIST (0.96%), SVHN (16.61%) and NORB (9.40%) datasets....
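
    The claim that auxiliary variables leave the generative model unchanged while enriching the variational distribution can be written out as a variational bound. This is the standard auxiliary-variable construction as suggested by the abstract; the paper's exact factorization may differ.

```latex
\log p(x) \;\ge\; \mathbb{E}_{q(a,z\mid x)}\!\left[
  \log \frac{p(a\mid z,x)\, p(x\mid z)\, p(z)}{q(a\mid x)\, q(z\mid a,x)}
\right]
```

    Marginalizing the auxiliary variable a out of p(a|z,x) p(x|z) p(z) recovers the original model p(x,z), while q(z|a,x) becomes a mixture over a and is therefore more expressive than a single-factor posterior.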

  19. A Meta-Analysis of Single-Family Deep Energy Retrofit Performance in the U.S.

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-01

    The current state of Deep Energy Retrofit (DER) performance has been assessed in 116 homes in the United States, using actual and simulated data gathered from the available domestic literature. Substantial airtightness reductions averaging 63% (n=48) were reported (two- to three-times more than in conventional retrofits), with average post-retrofit airtightness of 4.7 air changes per hour at 50 Pascals (ACH50) (n=94). Yet, mechanical ventilation was not installed consistently. In order to avoid indoor air quality (IAQ) issues, all future DERs should comply with ASHRAE 62.2-2013 requirements or equivalent. Projects generally achieved good energy results, with average annual net-site and net-source energy savings of 47%±20% and 45%±24% (n=57 and n=35), respectively, and carbon emission reductions of 47%±22% (n=23). Net-energy reductions did not vary reliably with house age, airtightness, or reported project costs, but pre-retrofit energy usage was correlated with total reductions (MMBtu).

  20. Possible pathways for dealing with Japan's post-Fukushima challenge and achieving CO2 emission reduction targets in 2030

    International Nuclear Information System (INIS)

    Su, Xuanming; Zhou, Weisheng; Sun, Faming; Nakagami, Ken'Ichi

    2014-01-01

    Considering Japan's unclear nuclear future after the Fukushima Dai-ichi nuclear power plant accident of Mar. 11, 2011, this study assesses a series of energy consumption scenarios, including a reference scenario, nuclear limited scenarios and a current nuclear use level scenario, for Japan in 2030 using the G-CEEP (Glocal Century Energy Environment Planning) model. The simulation result for each scenario is first presented in terms of primary energy consumption, electricity generation, CO2 emission, marginal abatement cost and GDP (gross domestic product) loss. According to the results, energy saving contributes the biggest share of total CO2 emission reduction, regardless of the nuclear use level and the CO2 emission reduction level. A certain amount of coal generation can be retained in the nuclear limited scenarios due to the application of CCS (carbon capture and storage). The discussion indicates that Japan needs to improve energy use efficiency, increase renewable energy and introduce CCS in order to reduce its dependence on nuclear power and achieve the CO2 emission reduction target in 2030. In addition, it is ambitious for Japan to achieve the zero nuclear scenario with 30% CO2 emission reduction, which would entail a marginal abatement cost of 383 USD/tC and up to −2.54% GDP loss relative to the reference scenario. In dealing with the nuclear power issue, Japan faces a challenge as well as an opportunity. - Highlights: • Nuclear use limited and carbon emission reduction scenarios for Japan in 2030. • Contributions of different abatement options to carbon emissions. • CCS for reducing dependence on nuclear power

  1. Preparation of porous lead from shape-controlled PbO bulk by in situ electrochemical reduction in ChCl-EG deep eutectic solvent

    Science.gov (United States)

    Ru, Juanjian; Hua, Yixin; Xu, Cunying; Li, Jian; Li, Yan; Wang, Ding; Zhou, Zhongren; Gong, Kai

    2015-12-01

    Porous lead with different shapes was first prepared from controlled geometries of solid PbO bulk by in situ electrochemical reduction in choline chloride-ethylene glycol deep eutectic solvents at a cell voltage of 2.5 V and 353 K. The electrochemical behavior of PbO powders on a cavity microelectrode was investigated by cyclic voltammetry. The results indicate that solid PbO can be directly reduced to metal in the solvent, and a nucleation loop is apparent. Constant-voltage electrolysis demonstrates that a PbO pellet can be completely converted to metal within 13 h, with a current efficiency of about 87.79% and a specific energy consumption of about 736.82 kWh t-1. As the electro-deoxidation progresses from the pellet surface, the reduction rate is fastest at the surface and decreases with distance toward the inner center. The metallic products are porous and consist mainly of uniform particles connected to each other by finer strip-shaped grains, so that the geometry and macroscopic size are preserved. In addition, an empirical model of the electro-deoxidation process from spherical PbO bulk to porous lead is proposed. These findings provide a novel and simple route for the preparation of porous metals from oxide precursors in deep eutectic solvents at relatively low temperature.
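
    Current-efficiency figures like the 87.79% reported here follow from Faraday's law: the actual metal mass divided by the theoretical mass for the charge passed. The sketch below uses the real constants for the Pb(II) → Pb(0) reduction, but the mass and charge are invented, purely to illustrate the calculation.

```python
# Current efficiency of PbO -> Pb electro-deoxidation via Faraday's law.
F = 96485.0      # C/mol, Faraday constant
M_PB = 207.2     # g/mol, molar mass of lead
N_E = 2          # electrons transferred per Pb(II) -> Pb(0)

def current_efficiency(mass_deposited_g, charge_C):
    """Actual metal mass over the Faraday-law theoretical mass for the charge passed."""
    theoretical_g = charge_C * M_PB / (N_E * F)
    return mass_deposited_g / theoretical_g

# Hypothetical electrolysis: 10.0 g of Pb recovered after passing 10,500 C.
eff = current_efficiency(10.0, 10500.0)
print(f"{eff:.1%}")
```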

  2. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    Science.gov (United States)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.
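
    The "spectral corruption" problem that eMSOT addresses is easiest to see against the conventional baseline: linear spectral unmixing of oxy- and deoxyhemoglobin. In this sketch the extinction coefficients are illustrative placeholders (not tabulated hemoglobin spectra), and eMSOT itself is not implemented.

```python
import numpy as np

# Illustrative (not physiological) extinction coefficients at four wavelengths;
# rows = wavelengths, columns = [HbO2, Hb].
E = np.array([
    [2.0, 1.0],
    [1.2, 1.8],
    [0.8, 2.5],
    [1.5, 1.5],
])

def unmix_so2(spectrum):
    """Least-squares unmixing: spectrum ~= E @ [c_HbO2, c_Hb]; sO2 = c_HbO2 / total."""
    c, *_ = np.linalg.lstsq(E, spectrum, rcond=None)
    return c[0] / (c[0] + c[1])

# Synthetic pixel at 70% oxygenation with no fluence distortion: unmixing is exact.
true = E @ np.array([0.7, 0.3])
print(round(unmix_so2(true), 3))

# Wavelength-dependent fluence attenuation (the "spectral corruption" in the
# abstract) biases the naive estimate -- the effect eMSOT is designed to correct.
corrupted = true * np.array([1.0, 0.9, 0.8, 0.7])
print(round(unmix_so2(corrupted), 3))
```

    The second estimate deviates from the true 0.7 even though the underlying oxygenation is unchanged, which is exactly why a fluence model in the spectral domain is needed for quantitative sO2 deep in tissue.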

  3. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues.

    Science.gov (United States)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-30

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  4. Combination of deep eutectic solvent and ionic liquid to improve biocatalytic reduction of 2-octanone with Acetobacter pasteurianus GIM1.158 cell

    OpenAIRE

    Pei Xu; Peng-Xuan Du; Min-Hua Zong; Ning Li; Wen-Yong Lou

    2016-01-01

    The efficient anti-Prelog asymmetric reduction of 2-octanone with Acetobacter pasteurianus GIM1.158 cells was successfully performed in a biphasic system consisting of a deep eutectic solvent (DES) and a water-immiscible ionic liquid (IL). Various DESs exerted different effects on the synthesis of (R)-2-octanol. Choline chloride/ethylene glycol (ChCl/EG) exhibited good biocompatibility and could moderately increase cell membrane permeability, thus leading to better results. Adding ChCl/EG ...

  5. Deep Energy Retrofit Guidance for the Building America Solutions Center

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-01-01

    The U.S. DOE Building America program has established a research agenda targeting market-relevant strategies to achieve 40% reductions in existing home energy use by 2030. Deep Energy Retrofits (DERs) are part of the strategy to meet and exceed this goal. DERs are projects that create new, valuable assets from existing residences by bringing homes into alignment with the expectations of the 21st century. Ideally, dated, high-energy-use homes that fail to provide adequate modern services to their owners and occupants (e.g., comfortable temperatures, acceptable humidity, and clean, healthy air) are transformed through comprehensive upgrades to the building envelope, services and miscellaneous loads into next-generation high-performance homes. These guidance documents provide information to aid the broader market adoption of DERs. They are intended for inclusion in the Building America Solutions Center (BASC), an online resource. This document is an assemblage of multiple entries in the BASC, each of which addresses a specific aspect of Deep Energy Retrofit best practices for projects targeting at least 50% energy reductions. The contents are based upon a review of actual DERs in the U.S., as well as a mixture of engineering judgment, published guidance from DOE research in technologies and DERs, simulations of cost-optimal DERs, Energy Star and Consortium for Energy Efficiency (CEE) product criteria, and energy codes.

  6. Deep TMS in a resistant major depressive disorder: a brief report.

    Science.gov (United States)

    Rosenberg, O; Shoenfeld, N; Zangen, A; Kotler, M; Dannon, P N

    2010-05-01

    Repetitive transcranial magnetic stimulation (rTMS) has proven effective in treating major depression. Recently, a coil with greater intracranial penetration has been developed. We tested the efficacy of this coil in the treatment of resistant major depression. Our sample included seven patients suffering from major depression who were treated using Brainsway's H1-coil connected to a Magstim Rapid 2 stimulator. Deep TMS treatment was given to each patient in five sessions per week over a period of 4 weeks. Patients were treated at 120% of the motor threshold and a frequency of 20 Hz, with a total of 1,680 pulses per session. Five patients completed 20 sessions: one attained remission (Hamilton Depression Rating Scale (HDRS)=9); three patients reached a reduction of more than 50% in their pre-treatment HDRS; and one patient achieved a partial response (i.e., the HDRS score dropped from 21 to 12). The average HDRS score dropped to 12.6 and the average Hamilton Anxiety Rating Scale score dropped to 9. Two patients dropped out: one due to insomnia and the second due to a lack of response. Compared to the pooled response and remission rates when treating major depression with rTMS, deep TMS as used in this study is at least similarly effective. Still, a severe limitation of this study is its small sample size, which makes a comparison of the two methods in terms of effectiveness or side effects impossible. Greater numbers of subjects should be studied to achieve this aim. An H1 deep TMS coil could be used as an alternative treatment for major depressive disorder.

  7. DEWS (DEep White matter hyperintensity Segmentation framework): A fully automated pipeline for detecting small deep white matter hyperintensities in migraineurs.

    Science.gov (United States)

    Park, Bo-Yong; Lee, Mi Ji; Lee, Seung-Hak; Cha, Jihoon; Chung, Chin-Sang; Kim, Sung Tae; Park, Hyunjin

    2018-01-01

    Migraineurs show an increased load of white matter hyperintensities (WMHs) and more rapid deep WMH progression. Previous methods for WMH segmentation have limited efficacy in detecting small deep WMHs. We developed a new fully automated detection pipeline, DEWS (DEep White matter hyperintensity Segmentation framework), for small and superficially located deep WMHs. A total of 148 non-elderly subjects with migraine were included in this study. The pipeline consists of three components: 1) white matter (WM) extraction, 2) WMH detection, and 3) false positive reduction. In WM extraction, we adjusted the WM mask to re-assign misclassified WMHs back to WM using many sequential low-level image processing steps. In WMH detection, potential WMH clusters were detected using an intensity-based threshold and region growing approach. For false positive reduction, the detected WMH clusters were classified into final WMHs and non-WMHs using a random forest (RF) classifier. Size, texture, and multi-scale deep features were used to train the RF classifier. DEWS successfully detected small deep WMHs with a high positive predictive value (PPV) of 0.98 and true positive rate (TPR) of 0.70 in the training and test sets. Similar performance (PPV 0.96, TPR 0.68) was attained in the validation set. DEWS showed superior performance in comparison with other methods. Our proposed pipeline is freely available online to help the research community quantify deep WMHs in non-elderly adults.
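The intensity-threshold and region-growing detection stage described in this record can be sketched in a few lines. The following is a minimal illustrative example, not the DEWS implementation: the function name, thresholds, and toy image are hypothetical, and DEWS operates on 3-D FLAIR volumes rather than a 2-D array.

```python
import numpy as np
from collections import deque

def detect_clusters(img, seed_thresh, grow_thresh):
    """Seed at pixels above seed_thresh, then grow each seed into 4-connected
    neighbours above grow_thresh; returns one pixel list per cluster."""
    visited = np.zeros(img.shape, dtype=bool)
    clusters = []
    for seed in zip(*np.where(img >= seed_thresh)):
        if visited[seed]:
            continue
        visited[seed] = True
        cluster, queue = [], deque([seed])
        while queue:
            r, c = queue.popleft()
            cluster.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and not visited[nr, nc] and img[nr, nc] >= grow_thresh):
                    visited[nr, nc] = True
                    queue.append((nr, nc))
        clusters.append(cluster)
    return clusters

# Toy 2-D "FLAIR slice": a bright 2x2 lesion core with one dimmer rim pixel.
img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0     # core, above the seed threshold
img[1, 2] = 0.6         # rim, above the growing threshold only
clusters = detect_clusters(img, seed_thresh=0.9, grow_thresh=0.5)
print(len(clusters), len(clusters[0]))  # -> 1 5
```

The dual-threshold design is what lets small lesions be caught: only confidently bright pixels start a cluster, but the cluster then absorbs its dimmer halo. In DEWS, candidate clusters found this way are passed to the RF classifier for false-positive reduction.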

  8. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors still existed, indicating that our approach still needs to be improved.

  9. Achieving Realistic Energy and Greenhouse Gas Emission Reductions in U.S. Cities

    Science.gov (United States)

    Blackhurst, Michael F.

    2011-12-01

    Recognizing that energy markets and greenhouse gas emissions are significantly influenced by local factors, this research examines opportunities for achieving realistic energy and greenhouse gas emission reductions in U.S. cities through the provision of more sustainable infrastructure. Greenhouse gas reduction opportunities are examined through the lens of a public program administrator charged with reducing emissions given realistic financial constraints and authority over emissions reductions and energy use. Opportunities are evaluated with respect to traditional public policy metrics, such as benefit-cost analysis, net benefit analysis, and cost-effectiveness. Section 2 summarizes current practices used to estimate greenhouse gas emissions from communities. I identify improved and alternative emissions inventory techniques, such as disaggregating the sectors reported, reporting inventory uncertainty, and aligning inventories with local organizations that could facilitate emissions mitigation. The potential advantages and challenges of supplementing inventories with comparative benchmarks are also discussed. Finally, I highlight the need to integrate growth (population and economic) and business-as-usual implications (such as changes to electricity supply grids) into climate action planning. I demonstrate how these techniques could improve decision making when planning reductions, help communities set meaningful emission reduction targets, and facilitate CAP implementation and progress monitoring. Section 3 estimates the costs and benefits of building energy efficiency as a means of reducing greenhouse gas emissions in Pittsburgh, PA and Austin, TX. Two policy objectives were evaluated: maximize GHG reductions given initial budget constraints, or maximize social savings given target GHG reductions. This approach explicitly evaluates the trade-offs between three primary and often conflicting program design parameters: initial capital constraints, social savings

  10. An intensive nurse-led, multi-interventional clinic is more successful in achieving vascular risk reduction targets than standard diabetes care.

    LENUS (Irish Health Repository)

    MacMahon Tone, J

    2009-06-01

    The aim of this research was to determine whether an intensive, nurse-led clinic could achieve recommended vascular risk reduction targets in patients with type 2 diabetes as compared to standard diabetes management.

  11. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve a 1-D representation of image blocks. Under these circumstances, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods can simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods in low-bitrate transmission.
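The zero-reconstruction-error property of a linear autoencoder can be shown with a closed-form SVD stand-in. This is a hedged sketch, not the paper's DLA (which is trained as a network): when the latent dimension k equals the patch dimension, the encoder/decoder pair is an orthogonal transform and reconstruction is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.normal(size=(100, 16))   # 100 flattened 4x4 image patches

# Closed-form linear autoencoder via SVD (PCA directions).  With latent
# dimension k equal to the patch dimension, Vt[:k] is orthogonal and the
# round trip X @ Vt[:k].T @ Vt[:k] reproduces X exactly.
_, _, Vt = np.linalg.svd(patches, full_matrices=False)
k = 16

def encode(X):
    return X @ Vt[:k].T        # 1-D latent code per patch

def decode(Z):
    return Z @ Vt[:k]

recon = decode(encode(patches))
err = float(np.max(np.abs(recon - patches)))
print(err < 1e-10)  # -> True: zero reconstruction error up to float precision
```

A nonlinear autoencoder with the same bottleneck generally cannot guarantee this, which is the contrast the abstract draws.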

  12. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that, thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
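The core idea of treating membrane potentials as differentiable signals, with spike discontinuities regarded as noise, can be illustrated with a surrogate-gradient toy example. This is a hedged sketch, not the authors' algorithm: the sigmoid surrogate, the leak factor, and the decision to ignore reset terms in the gradient are illustrative simplifications.

```python
import numpy as np

def lif_forward(x, w, thresh=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron driven by a binary spike train x."""
    v, vs = 0.0, []
    for xt in x:
        v = leak * v + w * xt            # membrane integration (differentiable)
        vs.append(v)
        if v >= thresh:                  # non-differentiable spike event
            v = 0.0                      # reset
    return vs

def surrogate_grad(x, w, thresh=1.0, leak=0.9, beta=5.0):
    """d(expected spike count)/dw, replacing the hard threshold by a sigmoid
    and, per the 'discontinuities are noise' view, ignoring reset terms."""
    vs = lif_forward(x, w, thresh, leak)
    grad = 0.0
    for t, v in enumerate(vs):
        s = 1.0 / (1.0 + np.exp(-beta * (v - thresh)))
        dv_dw = sum(leak ** (t - i) * x[i] for i in range(t + 1))
        grad += beta * s * (1.0 - s) * dv_dw
    return grad

g = surrogate_grad([1, 0, 1, 1, 0, 1], w=0.5)
print(g > 0)  # -> True: increasing w pushes the membrane toward threshold
```

The hard threshold has zero gradient almost everywhere; differentiating the smooth membrane potential instead is what makes backpropagation through spike times possible.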

  13. Deep Energy Retrofit Performance Metric Comparison: Eight California Case Studies

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fisher, Jeremy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-06-01

    In this paper we present the results of monitored annual energy use data from eight residential Deep Energy Retrofit (DER) case studies using a variety of performance metrics. For each home, the details of the retrofits were analyzed, diagnostic tests to characterize the home were performed, and the homes were monitored for total and individual end-use energy consumption for approximately one year. Annual performance in site and source energy, as well as carbon dioxide equivalent (CO2e) emissions, was determined on a per-house, per-person, and per-square-foot basis to examine the sensitivity to these different metrics. All eight DERs showed consistent success in achieving substantial site energy and CO2e reductions, but some projects achieved very little, if any, source energy reduction. This problem emerged in homes that switched from natural gas to electricity for heating and hot water, resulting in energy consumption dominated by electricity use. This demonstrates the crucial importance of selecting an appropriate metric to guide retrofit decisions. Also, due to the dynamic nature of DERs, with changes in occupancy, size, layout, and comfort, several performance metrics might be necessary to understand a project's success.

  14. Plant Species Identification by Bi-channel Deep Convolutional Networks

    Science.gov (United States)

    He, Guiqing; Xia, Zhaoqiang; Zhang, Qiqi; Zhang, Haixi; Fan, Jianping

    2018-04-01

    Plant species identification has attracted much attention recently, as it has potential applications in environmental protection and human life. Although deep learning techniques can be directly applied to plant species identification, they still need to be tailored to this specific task to obtain state-of-the-art performance. In this paper, a bi-channel deep learning framework is developed for identifying plant species. In the framework, two different sub-networks are fine-tuned over their respective pretrained models, and a stacking layer is then used to fuse the outputs of the two sub-networks. We construct a plant dataset of the Orchidaceae family for algorithm evaluation. Our experimental results demonstrate that our bi-channel deep network can achieve very competitive accuracy compared to existing deep learning algorithms.

  15. The challenge of meeting Canada's greenhouse gas reduction targets

    International Nuclear Information System (INIS)

    Hughes, Larry; Chaudhry, Nikhil

    2011-01-01

    In 2007, the Government of Canada announced its medium- and long-term greenhouse gas (GHG) emissions reduction plan, entitled Turning the Corner, which proposed emission cuts of 20% below 2006 levels by 2020 and 60-70% below 2006 levels by 2050. A report from a Canadian government advisory organization, the National Round Table on the Environment and the Economy (NRTEE), Achieving 2050: A carbon pricing policy for Canada, recommended 'fast and deep' energy pathways to emissions reduction through large-scale electrification of Canada's economy, relying on a major expansion of hydroelectricity, adoption of carbon capture and storage (CCS) for coal and natural gas, and increased use of nuclear power. This paper examines the likelihood of the pathways being met by considering the report's proposed energy systems, their associated energy sources, and the magnitude of the changes. It shows that the pathways assume some combination of technological advances, access to secure energy supplies, or rapid installation in order to meet both the 2020 and 2050 targets. The analysis suggests that NRTEE's projections are optimistic and unlikely to be achieved. The analysis described in this paper can be applied to other countries to better understand and develop strategies that can help reduce global greenhouse gas emissions. - Research highlights: → An analysis of a Canadian government advisory organization's GHG reduction plans. → Hydroelectricity and wind development is overly optimistic. → Declining coal and natural gas supplies and a lack of CO2 storage may hamper CCS. → Changing precipitation patterns may limit nuclear and hydroelectricity. → Bioenergy and energy reduction policies are largely ignored despite their promise.

  16. Deep Learning and Bayesian Methods

    OpenAIRE

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such meth...

  17. Deep Neural Network-Based Chinese Semantic Role Labeling

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiaoqing; CHEN Jun; SHANG Guoqiang

    2017-01-01

    A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with its application to message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smart phone applications. Experiment results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance with minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.

  18. Acetogenesis in the energy-starved deep biosphere - a paradox?

    DEFF Research Database (Denmark)

    Lever, Mark

    2012-01-01

    Under anoxic conditions in sediments, acetogens are often thought to be outcompeted by microorganisms performing energetically more favorable metabolic pathways, such as sulfate reduction or methanogenesis. Recent evidence from deep subseafloor sediments suggesting acetogenesis in the presence of...... to be taken into account to understand microbial survival in the energy-depleted deep biosphere....

  19. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics

    Science.gov (United States)

    Wehmeyer, Christoph; Noé, Frank

    2018-06-01

    Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder type deep neural network to the task of dimension reduction of molecular dynamics data. We can show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes—beyond the capabilities of linear dimension reduction techniques.
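A linear stand-in illustrates the time-lagged idea: fit a map from x_t to x_{t+tau} and keep only its top singular direction as the slow collective variable. The actual model is a nonlinear deep autoencoder; this closed-form least-squares sketch (synthetic data, hypothetical names) only shows why the time-lag objective isolates slow modes that instantaneous linear dimension reduction would miss.

```python
import numpy as np

rng = np.random.default_rng(1)
T, tau = 2000, 10
slow = np.cumsum(rng.normal(scale=0.05, size=T))   # slowly varying coordinate
fast = rng.normal(size=(T, 4))                     # fast, memoryless noise dims
X = np.column_stack([slow, fast])
X = X - X.mean(axis=0)

# Least-squares map predicting x_{t+tau} from x_t; a rank-1 SVD truncation of
# that map plays the role of the autoencoder's bottleneck.  Only the slow
# coordinate is predictable across the lag, so it dominates the top direction.
A, *_ = np.linalg.lstsq(X[:-tau], X[tau:], rcond=None)
U, S, Vt = np.linalg.svd(A)
encoder = U[:, :1]                                 # 1-D slow collective variable

z = (X @ encoder)[:, 0]
corr = abs(np.corrcoef(z, slow)[0, 1])
print(corr > 0.8)  # the latent coordinate tracks the true slow mode
```

Plain PCA on X would not reliably find the slow mode when the fast dimensions carry comparable variance; the lag is what selects for slowness.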

  1. Staged Inference using Conditional Deep Learning for energy efficient real-time smart diagnosis.

    Science.gov (United States)

    Parsa, Maryam; Panda, Priyadarshini; Sen, Shreyas; Roy, Kaushik

    2017-07-01

    Recent progress in biosensor technology and wearable devices has created a formidable opportunity for remote healthcare monitoring systems as well as real-time diagnosis and disease prevention. The use of data mining techniques is indispensable for analyzing the large pool of data generated by wearable devices. Deep learning is among the promising methods for analyzing such data for healthcare applications and disease diagnosis. However, conventional deep neural networks are computationally intensive, and it is impractical to use them for real-time diagnosis on low-powered on-body devices. We propose Staged Inference using Conditional Deep Learning (SICDL) as an energy-efficient approach for creating healthcare monitoring systems. For smart diagnostics, we observe that not all diagnoses are equally challenging. The proposed approach thus decomposes diagnosis into preliminary analysis (such as healthy vs. unhealthy) and detailed analysis (such as identifying the specific type of cardio disease). The preliminary diagnosis is conducted in real time with a low-complexity neural network realized on the resource-constrained on-body device. The detailed diagnosis requires a larger network that is implemented remotely in the cloud and is conditionally activated only for detailed diagnosis (unhealthy individuals). We evaluated the proposed approach using available physiological sensor data from the Physionet databases, and achieved a 38% energy reduction in comparison to the conventional deep learning approach.
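The staged decomposition reduces to a simple control flow: a cheap on-device screen runs on every sample, and a larger model is invoked only when the screen flags a sample. Everything below is a hypothetical illustration of that control flow with random stand-in models, not SICDL's trained networks.

```python
import numpy as np

rng = np.random.default_rng(2)

def small_screen(x):
    """Stage 1 (on-body device): cheap preliminary healthy/unhealthy screen."""
    return float(np.mean(x)) > 0.5        # stand-in for a low-complexity network

def large_diagnose(x):
    """Stage 2 (cloud): expensive detailed diagnosis, conditionally activated."""
    return "type_A" if x[0] > x[1] else "type_B"

calls = {"large": 0}

def staged_inference(x):
    if not small_screen(x):               # most samples terminate here cheaply
        return "healthy"
    calls["large"] += 1                   # only flagged samples reach the cloud
    return large_diagnose(x)

data = rng.uniform(size=(100, 8))         # 100 synthetic sensor windows
results = [staged_inference(x) for x in data]
print(len(results), calls["large"] < len(results))
```

The energy saving comes from the fraction of windows that never trigger stage 2; the reported 38% reduction depends on that fraction and on the relative cost of the two networks.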

  2. Hello World Deep Learning in Medical Imaging.

    Science.gov (United States)

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    Machine learning, notably deep learning, has recently become popular in medical imaging, having achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
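In the spirit of such a "hello world" tutorial, the forward pass of a minimal image classifier (convolution, ReLU, global average pooling, sigmoid) fits in a few lines of numpy. This is an illustrative sketch of the structure of such a network, not the tutorial's code; the weights are random and untrained, and the input is a stand-in for an image patch.

```python
import numpy as np

rng = np.random.default_rng(3)

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify(img, kernel, w, b):
    """conv -> ReLU -> global average pool -> sigmoid probability."""
    feat = np.maximum(conv2d(img, kernel), 0.0)      # ReLU feature map
    pooled = feat.mean()                             # global average pooling
    return 1.0 / (1.0 + np.exp(-(w * pooled + b)))   # "abnormal" probability

img = rng.uniform(size=(16, 16))    # stand-in for a grayscale image patch
kernel = rng.normal(size=(3, 3))
p = classify(img, kernel, w=1.5, b=-0.2)
print(0.0 < p < 1.0)  # -> True
```

A real medical-imaging model stacks many such layers and learns the kernels by backpropagation; the point here is only the shape of the computation.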

  3. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

  4. Streaming Reduction Circuit

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus; Kuper, Jan; Kokkeler, Andre B.J.; Molenkamp, Egbert

    2009-01-01

    Reduction circuits are used to reduce rows of floating point values to single values. Binary floating point operators often have deep pipelines, which may cause hazards when many consecutive rows have to be reduced. We present an algorithm by which any number of consecutive rows of arbitrary lengths

  5. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2014-11-05

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for bioinformatics research. Although existing methodologies have increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell lines, class imbalance within the learning sets, and ad hoc rules for selecting enhancer candidates for supervised learning are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancers' properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data, where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from the VISTA database. DEEP-VISTA, when tested on an independent test set, achieved a GM of 80.1% and accuracy of 89.64%. The DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.
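The ensemble step, combining many individually trained classifiers into one enhancer/non-enhancer decision, can be sketched by probability averaging. The members below are random stand-ins rather than DEEP's trained models, and the fusion rule is a generic average rather than DEEP's actual combiner; the sketch only shows the shape of ensemble fusion.

```python
import numpy as np

rng = np.random.default_rng(5)

def make_member(seed):
    """A random linear-sigmoid scorer standing in for one trained model."""
    w = np.random.default_rng(seed).normal(size=8)
    return lambda x: 1.0 / (1.0 + np.exp(-float(x @ w)))

members = [make_member(s) for s in range(10)]   # the ensemble

def ensemble_predict(x, threshold=0.5):
    """Fuse member probabilities by averaging, then threshold the mean."""
    p = float(np.mean([m(x) for m in members]))
    return p, p >= threshold

features = rng.normal(size=8)      # e.g. histone-mark or sequence features
p, is_enhancer = ensemble_predict(features)
print(0.0 <= p <= 1.0)  # -> True
```

Averaging decorrelated members tends to reduce variance relative to any single model, which is one motivation for training many classifiers across cell lines and feature types.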

  7. Drag Reduction of an Airfoil Using Deep Learning

    Science.gov (United States)

    Jiang, Chiyu; Sun, Anzhu; Marcus, Philip

    2017-11-01

    We reduced the drag of a 2D airfoil, starting with a NACA-0012 airfoil, using deep learning methods. We created a database consisting of simulations of 2D external flow over randomly generated shapes. We then developed a machine learning framework for inferring the external flow field given an input shape. Past work utilizing machine learning in Computational Fluid Dynamics focused on estimating specific flow parameters; this work is novel in inferring entire flow fields. We further showed that learned flow patterns are transferable to cases that share certain similarities. This study illustrates the prospects of deeper integration of data-based modeling into current CFD simulation frameworks for faster flow inference and more accurate flow modeling.

  8. DeepQA: improving the estimation of single protein model quality with deep belief networks.

    Science.gov (United States)

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-12-05

    Protein quality assessment (QA), useful for ranking and selecting protein models, has long been viewed as one of the major challenges for protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method, DeepQA, based on a deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physico-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single-model quality assessment and protein structure prediction. The source code, executable, documentation, and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.

  9. Radiation dose reduction in digital breast tomosynthesis (DBT) by means of deep-learning-based supervised image processing

    Science.gov (United States)

    Liu, Junchi; Zarshenas, Amin; Qadir, Ammar; Wei, Zheng; Yang, Limin; Fajardo, Laurie; Suzuki, Kenji

    2018-03-01

    To reduce cumulative radiation exposure and lifetime risks for radiation-induced cancer from breast cancer screening, we developed a deep-learning-based supervised image-processing technique called neural network convolution (NNC) for radiation dose reduction in DBT. NNC employs patch-based neural network regression in a convolutional manner to convert lower-dose (LD) to higher-dose (HD) tomosynthesis images. We trained our NNC with quarter-dose (25% of the standard dose: 12 mAs at 32 kVp) raw projection images and corresponding "teaching" HD images (200% of the standard dose: 99 mAs at 32 kVp) of a breast cadaver phantom acquired with a DBT system (Selenia Dimensions, Hologic, CA). Once trained, NNC no longer requires HD images. It converts new LD images to images that look like HD images; thus the term "virtual" HD (VHD) images. We reconstructed tomosynthesis slices on a research DBT system. To determine a dose reduction rate, we acquired 4 studies of another test phantom at 4 different radiation doses (1.35, 2.7, 4.04, and 5.39 mGy entrance dose). The Structural SIMilarity (SSIM) index was used to evaluate image quality. For testing, we collected half-dose (50% of the standard dose: 32+/-14 mAs at 33+/-5 kVp) and full-dose (standard dose: 68+/-23 mAs at 33+/-5 kVp) images of 10 clinical cases with the DBT system at University of Iowa Hospitals and Clinics. NNC converted half-dose DBT images of the 10 clinical cases to VHD DBT images that were equivalent to full-dose DBT images. Our cadaver phantom experiment demonstrated a 79% dose reduction.
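Patch-based regression from lower-dose to higher-dose images can be sketched with a linear patch-to-pixel model: learn a mapping from noisy patches to clean centre pixels on paired training images, then apply it convolutionally to a new noisy image. A linear least-squares fit stands in for NNC's neural network regression; the images, the Gaussian noise model, and all names below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def patches(img, k=3):
    """Flattened k x k patches centred on every interior pixel."""
    H, W = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1) for j in range(W - k + 1)])

# Paired training images: "higher-dose" (clean) and "lower-dose" (noisy).
hd = rng.uniform(size=(20, 20))
ld = hd + rng.normal(scale=0.3, size=hd.shape)      # noise stands in for dose

X = patches(ld)                                     # noisy input patches
y = hd[1:-1, 1:-1].ravel()                          # clean centre pixels
coef, *_ = np.linalg.lstsq(X, y, rcond=None)        # patch-to-pixel regression

# Apply the learned mapping convolutionally to a new lower-dose image.
hd2 = rng.uniform(size=(20, 20))
ld2 = hd2 + rng.normal(scale=0.3, size=hd2.shape)
vhd = (patches(ld2) @ coef).reshape(18, 18)         # "virtual" higher-dose image

mse_raw = float(np.mean((ld2[1:-1, 1:-1] - hd2[1:-1, 1:-1]) ** 2))
mse_vhd = float(np.mean((vhd - hd2[1:-1, 1:-1]) ** 2))
print(mse_vhd < mse_raw)  # the regressed image is closer to the clean target
```

Once the mapping is fitted, no higher-dose images are needed at inference time, which mirrors the abstract's point that trained NNC operates on lower-dose input alone.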

  10. Impact of Deepwater Horizon Spill on food supply to deep-sea benthos communities

    Science.gov (United States)

    Prouty, Nancy G.; Swarzenski, Pamela; Mienis, Furu; Duineveld, Gerald; Demopoulos, Amanda W.J.; Ross, Steve W.; Brooke, Sandra

    2016-01-01

    Deep-sea ecosystems encompass unique and often fragile communities that are sensitive to a variety of anthropogenic and natural impacts. After the 2010 Deepwater Horizon (DWH) oil spill, sampling efforts documented the acute impact of the spill on some deep-sea coral colonies. To investigate the impact of the DWH spill on the quality and quantity of biomass delivered to the deep sea, a suite of geochemical tracers (e.g., stable and radio-isotopes, lipid biomarkers, and compound-specific isotopes) was measured from monthly sediment trap samples deployed near a high-density deep-coral site in the Viosca Knoll area of the north-central Gulf of Mexico prior to (Oct-2008 to Sept-2009) and after the spill (Oct-2010 to Sept-2011). Marine (e.g., autochthonous) sources of organic matter dominated the sediment traps in both years; after the spill, however, there was a pronounced reduction in marine-sourced OM, including a reduction in marine-sourced sterols and n-alkanes and a concomitant decrease in sediment trap organic carbon and pigment flux. Results from this study indicate a reduction in primary production and carbon export to the deep sea in 2010-2011, at least 6-18 months after the spill started. Whereas satellite observations indicate an initial increase in phytoplankton biomass, results from this sediment trap study define a reduction in primary production and carbon export to the deep-sea community. In addition, a dilution from a low-14C carbon source (e.g., petrocarbon) was detected in the sediment trap samples after the spill, in conjunction with a change in the petrogenic composition. The data presented here fill a critical gap in our knowledge of biogeochemical processes and sub-acute impacts to the deep sea that ensued after the 2010 DWH spill.

  11. Evaluating the Factors that Facilitate a Deep Understanding of Data Analysis

    Directory of Open Access Journals (Sweden)

    Oliver Burmeister

    1995-11-01

Full Text Available Ideally the product of tertiary informatics study is more than a qualification; it is a rewarding experience of learning in a discipline area. It should build a desire for deeper understanding and lead to fruitful research, both personally and for the benefit of the wider community. This paper asks: 'What are the factors that lead to this type of quality (deep) learning in data analysis?' In the study reported in this paper, students whose general approach to learning was achieving- or surface-oriented adopted a deep approach when the context encouraged it. An overseas study found a decline in deep learning at this stage of a tertiary program; the contention of this paper is that the opposite of this expected outcome was achieved due to the enhanced learning environment. Though only 15.1% of the students involved in this study were deep learners, the data analysis instructional context resulted in 38.8% of students achieving deep learning outcomes. Other factors found to contribute to deep learning outcomes were an increase in students' intrinsic motivation to study the domain area; their prior knowledge of informatics; assessment that sought an integrated, developed yet comprehensive understanding of analytical concepts and processes; and their learning preferences. The preferences of deep-learning students are analyzed in comparison with another such study of professionals in informatics, examining commonalities and differences between this and the wider professional study.

  12. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

    Science.gov (United States)

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch normalization, and Inception modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10, and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking-neuron chips, for use in embedded applications.
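The conversion described above rests on rate coding: a non-leaky integrate-and-fire (IF) neuron with reset-by-subtraction fires at a rate proportional to its non-negative input, i.e. it approximates a ReLU activation. The following minimal sketch of that correspondence is illustrative and not taken from the paper:

```python
def if_neuron_rate(weighted_input, n_steps=1000, v_thresh=1.0):
    """Simulate a non-leaky integrate-and-fire neuron driven by a constant
    input and return its empirical firing rate (spikes per time step)."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += weighted_input          # integrate the constant input each step
        if v >= v_thresh:
            v -= v_thresh            # reset by subtraction: keeps the residue
            spikes += 1
    return spikes / n_steps

# The IF rate approximates ReLU(x), clipped at the maximum rate of 1 spike/step:
for x in (-0.3, 0.0, 0.25, 0.8):
    print(x, round(if_neuron_rate(x), 3))
```

Negative inputs never cross threshold (rate 0), while positive inputs yield a rate matching the activation value, which is why a trained ReLU CNN can be mapped onto spiking neurons with little accuracy loss.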

  13. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    Directory of Open Access Journals (Sweden)

    Bodo Rueckauer

    2017-12-01

Full Text Available Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch normalization, and Inception modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10, and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking-neuron chips, for use in embedded applications.

  14. Boosting compound-protein interaction prediction by deep learning.

    Science.gov (United States)

    Tian, Kai; Shao, Mingyu; Wang, Yang; Guan, Jihong; Zhou, Shuigeng

    2016-11-01

The identification of interactions between compounds and proteins plays an important role in network pharmacology and drug discovery. However, experimentally identifying compound-protein interactions (CPIs) is generally expensive and time-consuming, so computational approaches have been introduced. Among these, machine learning-based methods have achieved considerable success. However, due to the nonlinear and imbalanced nature of biological data, many machine learning approaches have their own limitations. Recently, deep learning techniques have shown advantages over many state-of-the-art machine learning methods in some applications. In this study, we aim to improve the performance of CPI prediction based on deep learning, and propose a method called DL-CPI (the abbreviation of Deep Learning for Compound-Protein Interactions prediction), which employs a deep neural network (DNN) to effectively learn the representations of compound-protein pairs. Extensive experiments show that DL-CPI can learn useful features of compound-protein pairs by layerwise abstraction, and thus achieves better prediction performance than existing methods on both balanced and imbalanced datasets. Copyright © 2016 Elsevier Inc. All rights reserved.
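As an illustration of the pair-representation idea (not the authors' actual architecture or feature sets), a DNN for CPI prediction can concatenate a compound fingerprint with a protein descriptor and map the pair through stacked nonlinear layers to an interaction probability. All layer sizes and inputs below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def cpi_forward(compound_fp, protein_desc, weights):
    """Forward pass of a toy DNN over a concatenated compound-protein pair."""
    h = np.concatenate([compound_fp, protein_desc])
    for W, b in weights[:-1]:
        h = relu(W @ h + b)                   # layerwise abstraction of the pair
    W, b = weights[-1]
    logit = W @ h + b
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid -> interaction probability

# hypothetical sizes: 64-bit compound fingerprint + 32-dim protein descriptor
sizes = [96, 48, 16, 1]
weights = [(rng.normal(0, 0.1, (m, n)), np.zeros(m))
           for n, m in zip(sizes[:-1], sizes[1:])]
p = cpi_forward(rng.integers(0, 2, 64).astype(float),
                rng.normal(size=32), weights)
print(float(p))  # a probability in (0, 1)
```

In practice the weights would of course be learned from labeled CPI pairs; this sketch only shows how a pair is encoded and scored.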

  15. Automated detection of masses on whole breast volume ultrasound scanner: false positive reduction using deep convolutional neural network

    Science.gov (United States)

    Hiramatsu, Yuya; Muramatsu, Chisako; Kobayashi, Hironobu; Hara, Takeshi; Fujita, Hiroshi

    2017-03-01

Breast cancer screening with mammography and ultrasonography is expected to improve sensitivity compared with mammography alone, especially for women with dense breasts. An automated breast volume scanner (ABVS) provides operator-independent whole-breast data, which facilitate double reading and comparison with past exams, the contralateral breast, and multimodality images. However, the large volumetric datasets in screening practice increase radiologists' workload. Therefore, our goal is to develop a computer-aided detection scheme for breast masses in ABVS data to assist radiologists' diagnosis and comparison with mammographic findings. In this study, a false positive (FP) reduction scheme using a deep convolutional neural network (DCNN) was investigated. For training the DCNN, true positive and FP samples were obtained from the result of our initial mass detection scheme using the vector convergence filter. Regions of interest including the detected regions were extracted from the multiplanar reconstruction slices. We investigated methods to select effective FP samples for training the DCNN. Based on free-response receiver operating characteristic analysis, simple random sampling from the entire set of candidates was most effective in this study. Using the DCNN, the number of FPs could be reduced by 60% while retaining 90% of true masses. The result indicates the potential usefulness of DCNN for FP reduction in automated mass detection on ABVS images.
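The reported operating point (60% fewer FPs while keeping 90% of true masses) comes from thresholding the classifier's candidate scores at a cutoff chosen to preserve a target sensitivity. A hedged sketch of that post-hoc step, using invented score distributions rather than any data from the study:

```python
import numpy as np

def threshold_for_sensitivity(scores_true, target_sens=0.9):
    """Pick the score cutoff that retains `target_sens` of true detections."""
    s = np.sort(scores_true)[::-1]                # scores, highest first
    k = int(np.ceil(target_sens * len(s)))        # how many trues to keep
    return s[k - 1]

rng = np.random.default_rng(1)
scores_true = rng.beta(5, 2, 200)    # synthetic scores for true masses (high)
scores_fp = rng.beta(2, 5, 1000)     # synthetic scores for FP candidates (low)
t = threshold_for_sensitivity(scores_true, 0.9)
kept_fp = np.mean(scores_fp >= t)    # fraction of FPs surviving the cutoff
print(f"threshold={t:.2f}, FPs kept={kept_fp:.0%}")
```

The better the classifier separates the two score distributions, the smaller `kept_fp` becomes at a fixed sensitivity.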

  16. Thermodynamic and achievable efficiencies for solar-driven electrochemical reduction of carbon dioxide to transportation fuels

    Science.gov (United States)

    Singh, Meenesh R.; Clark, Ezra L.; Bell, Alexis T.

    2015-11-01

    Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32-42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0-0.9 V, 0.9-1.95 V, and 1.95-3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. We show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices.
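For orientation, solar-to-fuel efficiency is commonly defined as the operating current density times the Faradaic-efficiency-weighted thermodynamic potentials of the products, divided by the incident solar power (about 100 mW/cm² at 1 sun). This is the standard textbook definition, not a formula quoted from the paper, and the operating point and product mix below are hypothetical:

```python
P_SOLAR = 100.0  # mW/cm^2, approximate 1-sun illumination

def stf_efficiency(j_op, products):
    """j_op: operating current density in mA/cm^2.
    products: list of (faradaic_efficiency, E_fuel_volts) pairs, where
    E_fuel is the thermodynamic potential stored in that product."""
    return j_op * sum(fe * e for fe, e in products) / P_SOLAR

# hypothetical operating point: 10 mA/cm^2 producing syngas,
# CO (FE 0.8, ~1.33 V) plus H2 (FE 0.2, 1.23 V)
print(f"{stf_efficiency(10.0, [(0.8, 1.33), (0.2, 1.23)]):.1%}")  # prints 13.1%
```

The calculation makes clear why the light absorber and catalysts must be matched: raising the current density only helps if the Faradaic efficiency toward the desired fuel holds up.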

  17. Thermodynamic and achievable efficiencies for solar-driven electrochemical reduction of carbon dioxide to transportation fuels.

    Science.gov (United States)

    Singh, Meenesh R; Clark, Ezra L; Bell, Alexis T

    2015-11-10

    Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32-42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0-0.9 V, 0.9-1.95 V, and 1.95-3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. We show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices.

  18. A mediation analysis of achievement motives, goals, learning strategies, and academic achievement.

    Science.gov (United States)

    Diseth, Age; Kobbeltvedt, Therese

    2010-12-01

    Previous research is inconclusive regarding antecedents and consequences of achievement goals, and there is a need for more research in order to examine the joint effects of different types of motives and learning strategies as predictors of academic achievement. To investigate the relationship between achievement motives, achievement goals, learning strategies (deep, surface, and strategic), and academic achievement in a hierarchical model. Participants were 229 undergraduate students (mean age: 21.2 years) of psychology and economics at the University of Bergen, Norway. Variables were measured by means of items from the Achievement Motives Scale (AMS), the Approaches and Study Skills Inventory for Students, and an achievement goal scale. Correlation analysis showed that academic achievement (examination grade) was positively correlated with performance-approach goal, mastery goal, and strategic learning strategies, and negatively correlated with performance-avoidance goal and surface learning strategy. A path analysis (structural equation model) showed that achievement goals were mediators between achievement motives and learning strategies, and that strategic learning strategies mediated the relationship between achievement goals and academic achievement. This study integrated previous findings from several studies and provided new evidence on the direct and indirect effects of different types of motives and learning strategies as predictors of academic achievement.

  19. Formability of paperboard during deep-drawing with local steam application

    Science.gov (United States)

    Franke, Wilken; Stein, Philipp; Dörsam, Sven; Groche, Peter

    2018-05-01

The use of paperboard can significantly improve the environmental compatibility of everyday products such as packages. Nevertheless, most packages are currently made of plastics, since the three-dimensional shaping of paperboard is possible only to a limited extent. In order to increase the forming possibilities, deep drawing of paperboard has been intensively investigated for more than a decade. An improvement with regard to increased forming limits has been achieved by heating the tool parts, which leads to a softening of paperboard constituents such as lignin. A further approach is moistening the samples, whereby the hydrogen bonds between the fibers are weakened, resulting in increased formability. It is expected that a combination of both approaches will result in a significant increase in forming capacity and in shape accuracy. For this reason, a new tool concept is introduced within the scope of this work that makes it possible to moisten samples during the deep drawing process by means of a steam supply. The investigations conducted show that spring-back in the preferred fiber direction can be reduced by 38%. Orthogonal to the preferred fiber direction, a reduction of spring-back of up to 79% is determined, which corresponds to a perfect shape. Moreover, it was determined that the steam duration and the initial moisture content have an influence on the final shape. In addition to the increased dimensional accuracy, improved wrinkle compression compared to conventional deep drawing is found. According to the results, it can be summarized that steam application in the deep drawing of paperboard significantly improves part quality.

  20. DRREP: deep ridge regressed epitope predictor.

    Science.gov (United States)

    Sher, Gene; Zhi, Degui; Zhang, Shaojie

    2017-10-03

The ability to predict epitopes plays an enormous role in vaccine development in terms of our ability to zero in on where to do a more thorough in-vivo analysis of the protein in question. Though there have been numerous advancements and improvements in epitope prediction over the past decade, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domains of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained deep neural network using string kernels, tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state-of-the-art epitope predictors, including the most recently published predictors, LBtope and DMNLBE. Using area under the ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, and is thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state-of-the-art predictors.
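"Analytically trained" refers to ridge regression's closed-form solution: the output weights come from a single linear solve rather than iterative gradient descent. A minimal sketch of that step follows; the feature matrix here is synthetic, standing in for DRREP's string-kernel and convolutional features:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y.
    Training is a single linear solve, not an iterative optimization."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # synthetic per-residue features
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=200)
w = ridge_fit(X, y, lam=0.1)
print(np.max(np.abs(w - w_true)))         # small: the solve recovers the weights
```

The regularizer `lam` keeps the solve well-conditioned even when features are correlated, which matters for the high-dimensional representations deep feature extractors produce.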

  1. Weight gain after subthalamic nucleus deep brain stimulation in Parkinson's disease is influenced by dyskinesias' reduction and electrodes' position.

    Science.gov (United States)

    Balestrino, Roberta; Baroncini, Damiano; Fichera, Mario; Donofrio, Carmine Antonio; Franzin, Alberto; Mortini, Pietro; Comi, Giancarlo; Volontè, Maria Antonietta

    2017-12-01

Parkinson's disease is a common neurodegenerative disease that can be treated with pharmacological or surgical therapy. Subthalamic nucleus (STN) deep brain stimulation (DBS) is a commonly used surgical option. A reported side effect of STN-DBS is weight gain: the aim of our study was to identify the factors that determine weight gain, through a year-long observation of 32 patients who underwent surgery in our centre. During the follow-up, we considered: anthropometric features, hormonal levels, motor outcome, neuropsychological and quality-of-life outcomes, therapeutic parameters, and electrode position. The majority (84%) of our patients gained weight (6.7 kg in 12 months); more than half of the cohort became overweight. At the 12th month, weight gain showed a correlation with dyskinesias reduction, electrode voltage, and distance on the lateral axis. In the multivariate regression analysis, the determinants of weight gain were dyskinesias reduction and electrode position. In this study, we identified dyskinesias reduction and the distance between the active electrodes and the third ventricle as determining factors of weight gain after STN-DBS implantation in PD patients. The first finding could be linked to a decrease in energy consumption, while the second could be due to a lower stimulation of the lateral hypothalamic area, known for its important role in metabolism and body weight control. Weight gain is a common finding after STN-DBS implantation, and it should be carefully monitored given the potentially harmful consequences of overweight.

  2. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  3. New optimized drill pipe size for deep-water, extended reach and ultra-deep drilling

    Energy Technology Data Exchange (ETDEWEB)

Jellison, Michael J.; Delgado, Ivanni [Grant Prideco, Inc., Houston, TX (United States); Falcao, Jose Luiz; Sato, Ademar Takashi [PETROBRAS, Rio de Janeiro, RJ (Brazil); Moura, Carlos Amsler [Comercial Perfuradora Delba Baiana Ltda., Rio de Janeiro, RJ (Brazil)

    2004-07-01

    A new drill pipe size, 5-7/8 in. OD, represents enabling technology for Extended Reach Drilling (ERD), deep water and other deep well applications. Most world-class ERD and deep water wells have traditionally been drilled with 5-1/2 in. drill pipe or a combination of 6-5/8 in. and 5-1/2 in. drill pipe. The hydraulic performance of 5-1/2 in. drill pipe can be a major limitation in substantial ERD and deep water wells resulting in poor cuttings removal, slower penetration rates, diminished control over well trajectory and more tendency for drill pipe sticking. The 5-7/8 in. drill pipe provides a significant improvement in hydraulic efficiency compared to 5-1/2 in. drill pipe and does not suffer from the disadvantages associated with use of 6-5/8 in. drill pipe. It represents a drill pipe assembly that is optimized dimensionally and on a performance basis for casing and bit programs that are commonly used for ERD, deep water and ultra-deep wells. The paper discusses the engineering philosophy behind 5-7/8 in. drill pipe, the design challenges associated with development of the product and reviews the features and capabilities of the second-generation double-shoulder connection. The paper provides drilling case history information on significant projects where the pipe has been used and details results achieved with the pipe. (author)

  4. Assisted Diagnosis Research Based on Improved Deep Autoencoder

    Directory of Open Access Journals (Sweden)

    Ke Zhang-Han

    2017-01-01

Full Text Available A deep autoencoder has the powerful ability to learn features from a large number of unlabeled samples and a small number of labeled samples. In this work, we have improved the network structure of the general deep autoencoder and applied it to disease auxiliary diagnosis. The network takes specific physical-examination indicators as input and predicts whether a patient suffers from liver disease; it is trained and validated on real physical examination data. Compared with traditional semi-supervised machine learning algorithms, the deep autoencoder achieves higher accuracy.
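As a toy stand-in for the pretrain-then-classify idea (not the authors' network), a linear autoencoder trained by gradient descent on reconstruction error compresses unlabeled records into a bottleneck code that can then feed a supervised classifier. The data and layer sizes below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 unlabeled records with 20 examination indicators each (synthetic)
X = rng.normal(size=(500, 20))
W_enc = rng.normal(0, 0.1, (20, 8))      # encode 20 indicators -> 8-dim code
W_dec = rng.normal(0, 0.1, (8, 20))      # decode the code back to 20 values
lr = 0.1
mse0 = np.mean((X @ W_enc @ W_dec - X) ** 2)
for _ in range(500):                     # gradient descent on reconstruction MSE
    H = X @ W_enc                        # bottleneck code
    G = (H @ W_dec - X) / len(X)         # reconstruction-error gradient
    W_enc -= lr * X.T @ (G @ W_dec.T)
    W_dec -= lr * H.T @ G
mse1 = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(mse0, mse1)                        # reconstruction error drops with training
```

After pretraining, `X @ W_enc` provides compact features; in a semi-supervised setup, only the classifier head on top of this code needs the scarce labeled samples.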

  5. Thermodynamic and achievable efficiencies for solar-driven electrochemical reduction of carbon dioxide to transportation fuels

    Science.gov (United States)

    Singh, Meenesh R.; Clark, Ezra L.; Bell, Alexis T.

    2015-01-01

    Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32–42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0–0.9 V, 0.9–1.95 V, and 1.95–3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. We show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices. PMID:26504215

  6. Applications of Deep Learning in Biomedicine.

    Science.gov (United States)

    Mamoshina, Polina; Vieira, Armando; Putin, Evgeny; Zhavoronkov, Alex

    2016-05-02

    Increases in throughput and installed base of biomedical research equipment led to a massive accumulation of -omics data known to be highly variable, high-dimensional, and sourced from multiple often incompatible data platforms. While this data may be useful for biomarker identification and drug discovery, the bulk of it remains underutilized. Deep neural networks (DNNs) are efficient algorithms based on the use of compositional layers of neurons, with advantages well matched to the challenges -omics data presents. While achieving state-of-the-art results and even surpassing human accuracy in many challenging tasks, the adoption of deep learning in biomedicine has been comparatively slow. Here, we discuss key features of deep learning that may give this approach an edge over other machine learning methods. We then consider limitations and review a number of applications of deep learning in biomedical studies demonstrating proof of concept and practical utility.

  7. Feasibility and Costs of Natural Gas as a Bridge to Deep Decarbonization in the United States

    Science.gov (United States)

    Jones, A. D.; McJeon, H. C.; Muratori, M.; Shi, W.

    2015-12-01

Achieving emissions reductions consistent with a 2 degree Celsius global warming target requires nearly complete replacement of traditional fossil fuel combustion with near-zero carbon energy technologies in the United States by 2050. There are multiple technological change pathways consistent with this deep decarbonization, including strategies that rely on renewable energy, nuclear, and carbon capture and storage (CCS) technologies. The replacement of coal-fired power plants with natural gas-fired power plants has also been suggested as a bridge strategy to achieve near-term emissions reduction targets. These gas plants, however, would need to be replaced by near-zero energy technologies or retrofitted with CCS by 2050 in order to achieve longer-term targets. Here we examine the costs and feasibility of a natural gas bridge strategy. Using the Global Change Assessment Model (GCAM), we develop multiple scenarios that each meet the recent US Intended Nationally Determined Contribution (INDC) to reduce GHG emissions by 26-28% below 2005 levels in 2025, as well as a deep decarbonization target of 80% emissions reductions below 1990 levels by 2050. We find that the gas bridge strategy requires that gas plants be retired on average 20 years earlier than their designed lifetime of 45 years, a potentially challenging outcome to achieve from a policy perspective. Using a more idealized model, we examine the net energy system costs of this gas bridge strategy compared to one in which near-zero energy technologies are deployed in the near term. We explore the sensitivity of these cost results to four factors: the discount rate applied to future costs, the length (or start year) of the gas bridge, the relative capital cost of natural gas vs. near-zero energy technology, and the fuel price of natural gas. The discount rate and cost factors are found to be more important than the length of the bridge. However, we find an important interaction as well.
At low discount rates

  8. Opportunities and Challenges in Deep Mining: A Brief Review

    Directory of Open Access Journals (Sweden)

    Pathegama G. Ranjith

    2017-08-01

    Full Text Available Mineral consumption is increasing rapidly as more consumers enter the market for minerals and as the global standard of living increases. As a result, underground mining continues to progress to deeper levels in order to tackle the mineral supply crisis in the 21st century. However, deep mining occurs in a very technical and challenging environment, in which significant innovative solutions and best practice are required and additional safety standards must be implemented in order to overcome the challenges and reap huge economic gains. These challenges include the catastrophic events that are often met in deep mining engineering: rockbursts, gas outbursts, high in situ and redistributed stresses, large deformation, squeezing and creeping rocks, and high temperature. This review paper presents the current global status of deep mining and highlights some of the newest technological achievements and opportunities associated with rock mechanics and geotechnical engineering in deep mining. Of the various technical achievements, unmanned working-faces and unmanned mines based on fully automated mining and mineral extraction processes have become important fields in the 21st century.

  9. To master or perform? Exploring relations between achievement goals and conceptual change learning.

    Science.gov (United States)

    Ranellucci, John; Muis, Krista R; Duffy, Melissa; Wang, Xihui; Sampasivam, Lavanya; Franco, Gina M

    2013-09-01

    Research is needed to explore conceptual change in relation to achievement goal orientations and depth of processing. To address this need, we examined relations between achievement goals, use of deep versus shallow processing strategies, and conceptual change learning using a think-aloud protocol. Seventy-three undergraduate students were assessed on their prior knowledge and misconceptions about Newtonian mechanics, and then reported their achievement goals and participated in think-aloud protocols while reading Newtonian physics texts. A mastery-approach goal orientation positively predicted deep processing strategies, shallow processing strategies, and conceptual change. In contrast, a performance-approach goal orientation did not predict either of the processing strategies, but negatively predicted conceptual change. A performance-avoidance goal orientation negatively predicted deep processing strategies and conceptual change. Moreover, deep and shallow processing strategies positively predicted conceptual change as well as recall. Finally, both deep and shallow processing strategies mediated relations between mastery-approach goals and conceptual change. Results provide some support for Dole and Sinatra's (1998) Cognitive Reconstruction of Knowledge Model of conceptual change but also challenge specific facets with regard to the role of depth of processing in conceptual change. © 2012 The British Psychological Society.

  10. Remarkable reduction of thermal conductivity in phosphorene phononic crystal

    International Nuclear Information System (INIS)

    Xu, Wen; Zhang, Gang

    2016-01-01

    Phosphorene has received much attention due to its interesting physical and chemical properties, and its potential applications such as thermoelectricity. In thermoelectric applications, low thermal conductivity is essential for achieving a high figure of merit. In this work, we propose to reduce the thermal conductivity of phosphorene by adopting the phononic crystal structure, phosphorene nanomesh. With equilibrium molecular dynamics simulations, we find that the thermal conductivity is remarkably reduced in the phononic crystal. Our analysis shows that the reduction is due to the depressed phonon group velocities induced by Brillouin zone folding, and the reduced phonon lifetimes in the phononic crystal. Interestingly, it is found that the anisotropy ratio of thermal conductivity could be tuned by the ‘non-square’ pores in the phononic crystal, as the phonon group velocities in the direction with larger projection of pores is more severely suppressed, leading to greater reduction of thermal conductivity in this direction. Our work provides deep insight into thermal transport in phononic crystals and proposes a new strategy to reduce the thermal conductivity of monolayer phosphorene. (paper)

  11. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    Science.gov (United States)

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
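
    The Pareto-optimality criterion the method searches for can be sketched independently of the auto-encoder: among candidate solutions scored on the two minimization objectives (MRE, MCE), keep those not dominated by any other. A toy non-dominated-set extraction with invented objective values (the paper itself uses a non-dominated sorting genetic algorithm over real networks):

```python
# Minimal Pareto-front extraction for two minimization objectives
# (reconstruction error MRE, classification error MCE). Candidate scores
# are made up for illustration.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.15), (0.25, 0.25), (0.12, 0.40)]
front = pareto_front(candidates)
print(sorted(front))
```

    Note that (0.25, 0.25) is excluded because (0.15, 0.15) beats it on both objectives, while the surviving points trade reconstruction error against classification error.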

  12. Deep-learning-based classification of FDG-PET data for Alzheimer's disease categories

    Science.gov (United States)

    Singh, Shibani; Srivastava, Anant; Mi, Liang; Caselli, Richard J.; Chen, Kewei; Goradia, Dhruman; Reiman, Eric M.; Wang, Yalin

    2017-11-01

    Fluorodeoxyglucose (FDG) positron emission tomography (PET) measures the decline in the regional cerebral metabolic rate for glucose, offering a reliable metabolic biomarker even in presymptomatic Alzheimer's disease (AD) patients. PET scans provide functional information that is unique and unavailable from other types of imaging. However, the computational efficacy of FDG-PET data alone for the classification of various AD diagnostic categories has not been well studied. This motivates us to discriminate among AD diagnostic categories using FDG-PET data. Deep learning has improved state-of-the-art classification accuracies in the areas of speech, signal, image, video, and text mining and recognition. We propose novel methods that apply probabilistic principal component analysis to max-pooled and mean-pooled data for dimensionality reduction, followed by a multilayer feed-forward neural network that performs binary classification. Our experimental dataset consists of baseline data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including 186 cognitively unimpaired (CU) subjects, 336 mild cognitive impairment (MCI) subjects (158 late MCI and 178 early MCI), and 146 AD patients. We measured F1-measure, precision, recall, and negative and positive predictive values with a 10-fold cross-validation scheme. Our results indicate that the designed classifiers achieve competitive results, with max pooling yielding better classification performance than mean-pooled features. Our deep-model-based research may advance FDG-PET analysis by demonstrating its potential as an effective imaging biomarker of AD.
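
    The front end of such a pipeline (pooling followed by linear dimensionality reduction) can be sketched as below. This uses plain PCA via SVD as a stand-in for the paper's probabilistic PCA, and random arrays in place of FDG-PET volumes; shapes and sizes are illustrative assumptions.

```python
# Sketch of a pooling + PCA dimensionality-reduction front end.
# Toy data: 20 "subjects" x 64 "voxels"; not the ADNI pipeline.
import numpy as np

rng = np.random.default_rng(0)
scans = rng.normal(size=(20, 64))

def pool(x, k, how="max"):
    """Pool non-overlapping windows of length k along the feature axis."""
    blocks = x.reshape(x.shape[0], -1, k)
    return blocks.max(axis=2) if how == "max" else blocks.mean(axis=2)

pooled = pool(scans, 4, "max")          # 64 features -> 16
centered = pooled - pooled.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ Vt[:5].T        # project onto top 5 components
print(pooled.shape, components.shape)
```

    The reduced `components` matrix would then feed the downstream classifier; swapping `how="max"` for `"mean"` reproduces the mean-pooling variant the abstract compares against.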

  13. Cardiac dose reduction with deep inspiration breath hold for left-sided breast cancer radiotherapy patients with and without regional nodal irradiation.

    Science.gov (United States)

    Yeung, Rosanna; Conroy, Leigh; Long, Karen; Walrath, Daphne; Li, Haocheng; Smith, Wendy; Hudson, Alana; Phan, Tien

    2015-09-22

    Deep inspiration breath hold (DIBH) reduces heart and left anterior descending artery (LAD) dose during left-sided breast radiation therapy (RT); however, there is limited information about which patients derive the most benefit from DIBH. The primary objective of this study was to determine which patients benefit the most from DIBH by comparing the percent reduction in mean cardiac dose conferred by DIBH for patients treated with whole breast RT ± boost (WBRT) versus those receiving breast/chest wall plus regional nodal irradiation, including internal mammary chain (IMC) nodes (B/CWRT + RNI), using a modified wide tangent technique. A secondary objective was to determine if DIBH was required to meet a proposed heart dose constraint of Dmean irradiation.

  14. FOSTERING DEEP LEARNING AMONGST ENTREPRENEURSHIP ...

    African Journals Online (AJOL)

    An important prerequisite for achieving this objective is that lecturers ensure that students adopt a deep learning approach towards the entrepreneurship courses being taught, as this will enable them to truly understand key entrepreneurial concepts and strategies and how they can be implemented in the real ...

  15. Pathways to deep decarbonization - 2015 report

    International Nuclear Information System (INIS)

    Ribera, Teresa; Colombier, Michel; Waisman, Henri; Bataille, Chris; Pierfederici, Roberta; Sachs, Jeffrey; Schmidt-Traub, Guido; Williams, Jim; Segafredo, Laura; Hamburg Coplan, Jill; Pharabod, Ivan; Oury, Christian

    2015-12-01

    In September 2015, the Deep Decarbonization Pathways Project published the Executive Summary of the Pathways to Deep Decarbonization: 2015 Synthesis Report. The full 2015 Synthesis Report was launched in Paris on December 3, 2015, at a technical workshop with the Mitigation Action Plans and Scenarios (MAPS) program. The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (deg. C). Achieving the 2 deg. C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. In turn, this will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization'.

  16. WFIRST: Science from Deep Field Surveys

    Science.gov (United States)

    Koekemoer, Anton; Foley, Ryan; WFIRST Deep Field Working Group

    2018-01-01

    WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would for example yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed on locations in the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and present here a summary of the various properties of different locations in the sky that may be considered for future deep fields with WFIRST.

  17. Electrochemical CO2 Reduction by Ni-containing Iron Sulfides: How Is CO2 Electrochemically Reduced at Bisulfide-Bearing Deep-sea Hydrothermal Precipitates?

    International Nuclear Information System (INIS)

    Yamaguchi, Akira; Yamamoto, Masahiro; Takai, Ken; Ishii, Takumi; Hashimoto, Kazuhito; Nakamura, Ryuhei

    2014-01-01

    The discovery of deep-sea hydrothermal vents in the late 1970s has led to many hypotheses concerning chemical evolution in the prebiotic ocean and the early evolution of energy metabolism on the ancient Earth. Such studies probe how bioenergetic pathways evolved to utilize reducing chemicals such as H2 for CO2 reduction and carbon assimilation. In addition to the direct reaction of H2 and CO2, the electrical current passing across a bisulfide-bearing chimney structure has pointed to possible electrocatalytic CO2 reduction at the cold ocean-vent interface (R. Nakamura, et al. Angew. Chem. Int. Ed. 2010, 49, 7692-7694). To test this hypothesis, we examined here the energetics of electrocatalytic CO2 reduction by iron sulfide (FeS) deposits at slightly acidic pH. Although FeS deposits reduced CO2 only inefficiently, the efficiency of the reaction was substantially improved by substituting Fe with Ni to form FeNi2S4 (violarite), whose surface was further modified with amine compounds. The potential-dependent activity of CO2 reduction demonstrated that CO2 reduction by H2 in hydrothermal fluids involves a strongly endergonic electron transfer reaction, suggesting that a naturally occurring proton-motive force (PMF) as high as 200 mV would be established across the hydrothermal vent chimney wall. However, in the chimney structures, H2 generation competes with CO2 reduction for electrical current, resulting in rapid consumption of the PMF. Therefore, to maintain the PMF and the electrosynthesis of organic compounds in hydrothermal vent mineral deposits, we propose a homeostatic pH regulation mechanism of FeS deposits, in which elemental hydrogen stored in the hydrothermal mineral deposits is used to balance the consumption of the electrochemical gradient by H2 generation.

  18. Achieving reductions in greenhouse gases in the US road transportation sector

    International Nuclear Information System (INIS)

    Kay, Andrew I.; Noland, Robert B.; Rodier, Caroline J.

    2014-01-01

    It is well established that GHG emissions must be reduced 50 to 80% by 2050 in order to limit the global temperature increase to 2 °C. Achieving reductions of this magnitude in the transportation sector is a challenge and requires a multitude of policies and technology options. The research presented here analyzes three scenarios: changes in the perceived price of travel, land use intensification, and increases in transit. Elasticity estimates are derived using an activity-based travel model for the state of California, which is broadly representative of the US. The VISION model is used to forecast changes in technology and fuel options currently forecast to occur in the US for the period 2000–2040, providing a life-cycle GHG forecast for the road transportation sector. Results suggest that aggressive policy action is required, especially pricing policies, but also more on the technology side, especially increases in the carbon efficiency of medium- and heavy-duty vehicles. - Highlights: • Travel elasticities are calculated for policy scenarios using an activity-based travel model. • These elasticities are used to estimate changes in total life-cycle greenhouse gas emissions. • Current technology and fuel policy and the strongest behavioral policy will not meet targets. • Heavy- and medium-duty trucks need more aggressive technology and fuel options.
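
    The scenario logic here rests on translating a perceived-price change into a travel-demand change via an elasticity, then combining that with a change in fuel-cycle carbon intensity. A constant-elasticity sketch with assumed values (the elasticity and intensity figures below are illustrative, not the paper's California estimates):

```python
# Constant-elasticity sketch: fractional VMT change for a price ratio
# P1/P0 is (P1/P0)^elasticity - 1; GHG change combines demand and
# carbon-intensity effects multiplicatively. All parameter values assumed.
import math  # not strictly needed; power operator suffices

def demand_change(price_ratio, elasticity):
    """Fractional change in travel demand under constant elasticity."""
    return price_ratio ** elasticity - 1

eps = -0.3                       # assumed price elasticity of VMT
d_vmt = demand_change(1.5, eps)  # 50% increase in perceived travel price
intensity_change = -0.25         # assumed 25% cut in gCO2e per mile
d_ghg = (1 + d_vmt) * (1 + intensity_change) - 1
print(f"VMT change {d_vmt:+.1%}, GHG change {d_ghg:+.1%}")
```

    This multiplicative structure is why the abstract stresses that pricing and technology must act together: neither term alone reaches a 50-80% cut.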

  19. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; for new applications, deep learning instead acquires effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on designers' prior knowledge and can hardly exploit the advantage of big data. Deep learning can automatically learn feature representations from big data, with models comprising millions of parameters. This paper first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.

  20. Outcomes of the DeepWind Conceptual Design

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Borg, Michael; Aagaard Madsen, Helge

    2015-01-01

    DeepWind has been presented as a novel floating offshore wind turbine concept with cost reduction potentials. Twelve international partners developed a Darrieus-type floating turbine with new materials and technologies for the deep-sea offshore environment. This paper summarizes results of the 5 MW DeepWind conceptual design. The concept was evaluated at the Hywind test site, described on its few components, in particular on the modified Troposkien blade shape and airfoil design. The feasibility of upscaling from 5 MW to 20 MW is discussed, taking into account the results from testing the DeepWind floating 1 kW demonstrator. The 5 MW simulation results, loading and performance are compared to the OC3-NREL 5 MW wind turbine. Finally the paper elaborates on the conceptual design's cost modelling.

  1. Thermodynamic and achievable efficiencies for solar-driven electrochemical reduction of carbon dioxide to transportation fuels

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Meenesh R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Center for Artificial Photosynthesis, Material Science Division; Clark, Ezra L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Center for Artificial Photosynthesis, Material Science Division; Univ. of California, Berkeley, CA (United States). Dept. of Chemical & Biomolecular Engineering; Bell, Alexis T. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Center for Artificial Photosynthesis, Material Science Division; Univ. of California, Berkeley, CA (United States). Dept. of Chemical & Biomolecular Engineering

    2015-10-26

    Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32–42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0–0.9 V, 0.9–1.95 V, and 1.95–3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. Finally, we show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices.
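
    The solar-to-fuel (STF) efficiency bookkeeping behind these figures can be sketched as operating current density × thermodynamic fuel potential × Faradaic efficiency ÷ solar input power. The operating point below is an assumed value for illustration, not one of the paper's modeled cells:

```python
# Back-of-envelope STF efficiency: fraction of 1-sun power (100 mW/cm2)
# stored as chemical energy in the fuel. Inputs are assumed values.

def stf_efficiency(j_op_mA_cm2, e_fuel_V, faradaic=1.0, p_sun_mW_cm2=100.0):
    """j (mA/cm2) x E_fuel (V) x Faradaic efficiency / solar input (mW/cm2)."""
    return j_op_mA_cm2 * e_fuel_V * faradaic / p_sun_mW_cm2

# e.g. an assumed 12 mA/cm2 operating point producing CO
# (thermodynamic potential ~1.34 V) at 90% Faradaic efficiency:
print(f"STF = {stf_efficiency(12.0, 1.34, 0.90):.1%}")
```

    The formula makes clear why the achievable limits in the abstract sit near 20%: even an ideal tandem absorber bounds the operating current density available at the required electrochemical load.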

  2. Economic Evaluation of SMART Deployment in the MENA Region using DEEP 5.0

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Han-Ok; Lee, Man-Ki; Zee, Sung-Kyun; Kim, Young-In; Kim, Keung Koo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Some countries have officially announced that the development of atomic energy is essential to meet the nation's growing requirements for energy to generate electricity, produce desalinated water, and reduce reliance on depleting hydrocarbon resources. SMART (system-integrated modular advanced reactor) is a small-sized advanced integral reactor with a rated thermal power of 330 MW. It can produce 100 MW of electricity, or 90 MW of electricity and 40,000 tons of desalinated water concurrently, which is sufficient for 100,000 residents. It is an integral-type reactor with a sensible mixture of proven technologies and advanced design features. SMART aims at achieving enhanced safety and improved economics; the enhancement of safety and reliability is realized by incorporating inherent safety-improving features and reliable passive safety systems. The improvement in economics is achieved through system simplification, component modularization, reduction of construction time, and high plant availability. The standard design approval assures the safety of the SMART system. In this study, the economics of SMART are evaluated for deployment in the MENA region. The DEEP 5.0 software was selected for the economic evaluation of the SMART plant, and the collected technical and economic data are used as input to the DEEP program to calculate the power and water costs. Technical input data are prepared on the basis of the local environmental conditions of the MENA region. The results show that the SMART plant can supply 94 MWe to an external grid system along with 40,000 m{sup 3}/d of fresh water. The power and water costs are calculated for various specific construction costs.

  3. Economic Evaluation of SMART Deployment in the MENA Region using DEEP 5.0

    International Nuclear Information System (INIS)

    Kang, Han-Ok; Lee, Man-Ki; Zee, Sung-Kyun; Kim, Young-In; Kim, Keung Koo

    2014-01-01

    Some countries have officially announced that the development of atomic energy is essential to meet the nation's growing requirements for energy to generate electricity, produce desalinated water, and reduce reliance on depleting hydrocarbon resources. SMART (system-integrated modular advanced reactor) is a small-sized advanced integral reactor with a rated thermal power of 330 MW. It can produce 100 MW of electricity, or 90 MW of electricity and 40,000 tons of desalinated water concurrently, which is sufficient for 100,000 residents. It is an integral-type reactor with a sensible mixture of proven technologies and advanced design features. SMART aims at achieving enhanced safety and improved economics; the enhancement of safety and reliability is realized by incorporating inherent safety-improving features and reliable passive safety systems. The improvement in economics is achieved through system simplification, component modularization, reduction of construction time, and high plant availability. The standard design approval assures the safety of the SMART system. In this study, the economics of SMART are evaluated for deployment in the MENA region. The DEEP 5.0 software was selected for the economic evaluation of the SMART plant, and the collected technical and economic data are used as input to the DEEP program to calculate the power and water costs. Technical input data are prepared on the basis of the local environmental conditions of the MENA region. The results show that the SMART plant can supply 94 MWe to an external grid system along with 40,000 m3/d of fresh water. The power and water costs are calculated for various specific construction costs.

  4. Could US mayors achieve the entire US Paris climate target?

    Science.gov (United States)

    Gurney, K. R.; Huang, J.; Hutchins, M.; Liang, J.

    2017-12-01

    After the recent US Federal Administration announcement not to adhere to the Paris Accords, 359 mayors (and counting) in the US pledged to maintain their commitments, reducing emissions within their jurisdictions by 26-28% from their 2005 levels by the year 2025. While important, this leaves a large portion of the US landscape, and a large amount of US emissions, outside of the Paris commitment. With Federal US policy looking unlikely to change, could additional effort by US cities overcome the gap in national policy and achieve the equivalent US national Paris commitment? How many cities would be required and how deep would reductions need to be? Up until now, this question could not be reliably resolved due to lack of data at the urban scale. Here, we answer this question with new data - the Vulcan V3.0 FFCO2 emissions data product - through examination of the total US energy related CO2 emissions from cities. We find that the top 500 urban areas in the US could meet the national US commitment to the Paris Accords with a reduction of roughly 30% below their 2015 levels by the year 2025. This is driven by the share of US emissions emanating from cities, particularly the largest cohort. Indeed, as the number of urban areas taking on CO2 reduction targets grows, the less the reduction burden on any individual city. In this presentation, we provide an analysis of US urban CO2 emissions and US climate policy, accounting for varying definitions of urban areas, emitting sectors and the tradeoff between the number of policy-active cities and the CO2 reduction burden.
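
    The core arithmetic here is a share calculation: if participating cities hold a fraction s of national emissions and the national pledge is a fraction r below baseline, then cities acting alone must cut r / s. A sketch with an illustrative urban share (the 90% figure below is an assumption for the example, not a Vulcan v3.0 value):

```python
# How deep must participating cities cut to carry the national pledge
# alone? required city-level reduction = national target / urban share.

def required_city_cut(national_target, urban_share):
    """Fractional cut cities need if they alone must deliver the target."""
    if national_target > urban_share:
        raise ValueError("cities cannot cover the target alone")
    return national_target / urban_share

# e.g. a 27% national target with cities holding 90% of emissions:
print(f"{required_city_cut(0.27, 0.90):.0%}")
```

    This also shows the tradeoff the abstract closes on: as the urban share covered by policy-active cities grows, the per-city reduction burden shrinks toward the national target itself.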

  5. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing applications urgently require target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep learning and kernel learning. The model, which has a multilayer multiple-kernel structure, is optimized layer by layer with the parameters of a Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.
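
    One way to picture combining deep and kernel learning is kernel composition: distances for a second RBF layer are computed in the feature space induced by the first kernel, via the kernel trick ||φ(x) − φ(y)||² = k(x,x) + k(y,y) − 2k(x,y). The sketch below illustrates that composition only; it is not the paper's SVM/gradient-descent training scheme:

```python
# Two-layer "deep" RBF kernel: layer-2 distances are measured in the
# feature space of layer 1 using the kernel trick. Data are toy points.
import numpy as np

def rbf_gram(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def deep_rbf_gram(X, gamma1, gamma2):
    K1 = rbf_gram(X, gamma1)
    d1 = np.diag(K1)
    sq_feat = d1[:, None] + d1[None, :] - 2 * K1  # distances in phi-space
    return np.exp(-gamma2 * sq_feat)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K2 = deep_rbf_gram(X, gamma1=0.5, gamma2=1.0)
print(np.round(K2, 3))
```

    The resulting Gram matrix stays symmetric with a unit diagonal, so it can be handed to any kernel machine (e.g. an SVM with a precomputed kernel) as the "deep" similarity.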

  6. Green Infrastructure Simulation and Optimization to Achieve Combined Sewer Overflow Reductions in Philadelphia's Mill Creek Sewershed

    Science.gov (United States)

    Cohen, J. S.; McGarity, A. E.

    2017-12-01

    Mass deployment of green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. To incorporate greater physical and social realism into previous work that used simulation-optimization techniques to produce GSI deployment strategies (McGarity et al., 2016), this study's models incorporated land use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to obtain formulas describing the annual CSO reduction in the sewershed based on the deployed GSI practices. These non-linear hydrologic response formulas were then implemented in the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices on the watershed scale. By saturating the avenue with GSI, not only will CSOs from the sewershed into the Schuylkill River be reduced, but ancillary social and economic benefits of GSI will also be achieved. The effectiveness of these ancillary benefits changes based on the type of GSI practice and the type of land use in which the GSI is implemented. Thus, the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation). 
The results give a GSI deployment strategy that achieves desired annual CSO reductions at a minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land
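
    The least-cost deployment problem can be illustrated with a greedy heuristic: repeatedly add the GSI unit with the best marginal CSO reduction per dollar, under a diminishing-returns response curve. All practice parameters below are invented for illustration; they are not SWMM or StormWISE outputs:

```python
# Greedy sketch of least-cost GSI selection against a CSO-reduction
# target, with a concave response reduction(n) = cap * (1 - exp(-rate*n)).
import math

practices = {                    # (cap, rate per unit, cost per unit) - assumed
    "tree trench": (8.0, 0.30, 25.0),
    "rain garden": (6.0, 0.40, 15.0),
    "rain barrel": (2.0, 0.80, 2.0),
}

def reduction(cap, rate, n):
    return cap * (1 - math.exp(-rate * n))

def greedy_plan(target):
    counts = {p: 0 for p in practices}
    total, cost = 0.0, 0.0
    while total < target:
        # pick the practice with the best marginal reduction per dollar
        best = max(practices, key=lambda p: (
            (reduction(*practices[p][:2], counts[p] + 1)
             - reduction(*practices[p][:2], counts[p])) / practices[p][2]))
        counts[best] += 1
        cost += practices[best][2]
        total = sum(reduction(*practices[p][:2], counts[p]) for p in practices)
    return counts, total, cost

counts, total, cost = greedy_plan(target=6.0)
print(counts, round(total, 2), cost)
```

    A greedy rule like this is only a heuristic stand-in for the constrained optimization the study performs, but it captures why diminishing returns push the optimal plan toward a mix of practice types rather than saturating any single one.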

  7. L-shaped fiber-chip grating couplers with high directionality and low reflectivity fabricated with deep-UV lithography.

    Science.gov (United States)

    Benedikovic, Daniel; Alonso-Ramos, Carlos; Pérez-Galacho, Diego; Guerber, Sylvain; Vakarin, Vladyslav; Marcaud, Guillaume; Le Roux, Xavier; Cassan, Eric; Marris-Morini, Delphine; Cheben, Pavel; Boeuf, Frédéric; Baudot, Charles; Vivien, Laurent

    2017-09-01

    Grating couplers enable position-friendly interfacing of silicon chips by optical fibers. The conventional coupler designs call upon comparatively complex architectures to afford efficient light coupling to sub-micron silicon-on-insulator (SOI) waveguides. Conversely, the blazing effect in double-etched gratings provides high coupling efficiency with reduced fabrication intricacy. In this Letter, we demonstrate for the first time, to the best of our knowledge, the realization of an ultra-directional L-shaped grating coupler, seamlessly fabricated by using 193 nm deep-ultraviolet (deep-UV) lithography. We also include a subwavelength index engineered waveguide-to-grating transition that provides an eight-fold reduction of the grating reflectivity, down to 1% (-20 dB). A measured coupling efficiency of -2.7 dB (54%) is achieved, with a bandwidth of 62 nm. These results open promising prospects for the implementation of efficient, robust, and cost-effective coupling interfaces for sub-micrometric SOI waveguides, as desired for large-volume applications in silicon photonics.

  8. Dijet production in diffractive deep-inelastic scattering in next-to-next-to-leading order QCD arXiv

    CERN Document Server

    Britzger, D.; Gehrmann, T.; Huss, A.; Niehues, J.; Žlebčík, R.

    Hard processes in diffractive deep-inelastic scattering can be described by a factorisation into parton-level subprocesses and diffractive parton distributions. In this framework, cross sections for inclusive dijet production in diffractive deep-inelastic electron-proton scattering (DIS) are computed to next-to-next-to-leading order (NNLO) QCD accuracy and compared to a comprehensive selection of data. Predictions for the total cross sections, 39 single-differential and four double-differential distributions for six measurements at HERA by the H1 and ZEUS collaborations are calculated. In the studied kinematical range, the NNLO corrections are found to be sizeable and positive. The NNLO predictions typically exceed the data, while the kinematical shape of the data is described better at NNLO than at next-to-leading order (NLO). A significant reduction of the scale uncertainty is achieved in comparison to NLO predictions. Our results use the currently available NLO diffractive parton distributions, and the dis...

  9. AN EFFICIENT METHOD FOR DEEP WEB CRAWLER BASED ON ACCURACY -A REVIEW

    OpenAIRE

    Zade, Pranali; Mohod, S. W.

    2018-01-01

    As the deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a challenging issue. We propose a three-stage framework for efficiently harvesting deep-web interfaces. Experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framew...

  10. Nuclear structure in deep-inelastic reactions

    International Nuclear Information System (INIS)

    Rehm, K.E.

    1986-01-01

    The paper concentrates on recent deep-inelastic experiments conducted at Argonne National Laboratory and the nuclear structure effects evident in reactions between superheavy nuclei. Experiments indicate that these reactions evolve gradually from the simple transfer processes that have been studied extensively for lighter nuclei such as 16O, suggesting a theoretical approach connecting the one-step DWBA theory to the multistep statistical models of nuclear reactions. This transition between quasi-elastic and deep inelastic reactions is achieved by a simple random walk model. Some typical examples of nuclear structure effects are shown. 24 refs., 9 figs.

  11. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.

  12. Major impacts of climate change on deep-sea benthic ecosystems

    Directory of Open Access Journals (Sweden)

    Andrew K. Sweetman

    2017-02-01

    The deep sea encompasses the largest ecosystems on Earth. Although poorly known, deep seafloor ecosystems provide services that are vitally important to the entire ocean and biosphere. Rising atmospheric greenhouse gases are bringing about significant changes in the environmental properties of the ocean realm in terms of water column oxygenation, temperature, pH and food supply, with concomitant impacts on deep-sea ecosystems. Projections suggest that abyssal (3000–6000 m) ocean temperatures could increase by 1°C over the next 84 years, while abyssal seafloor habitats under areas of deep-water formation may experience reductions in water column oxygen concentrations by as much as 0.03 mL L–1 by 2100. Bathyal depths (200–3000 m) worldwide will undergo the most significant reductions in pH in all oceans by the year 2100 (0.29 to 0.37 pH units). O2 concentrations will also decline in the bathyal NE Pacific and Southern Oceans, with losses up to 3.7% or more, especially at intermediate depths. Another important environmental parameter, the flux of particulate organic matter to the seafloor, is likely to decline significantly in most oceans, most notably in the abyssal and bathyal Indian Ocean where it is predicted to decrease by 40–55% by the end of the century. Unfortunately, how these major changes will affect deep-seafloor ecosystems is, in some cases, very poorly understood. In this paper, we provide a detailed overview of the impacts of these changing environmental parameters on deep-seafloor ecosystems that will most likely be seen by 2100 in continental margin, abyssal and polar settings. We also consider how these changes may combine with other anthropogenic stressors (e.g., fishing, mineral mining, oil and gas extraction) to further impact deep-seafloor ecosystems and discuss the possible societal implications.

  13. Thermochemical sulfate reduction in deep petroleum reservoirs: a molecular approach; Thermoreduction des sulfates dans les reservoirs petroliers: approche moleculaire

    Energy Technology Data Exchange (ETDEWEB)

    Hanin, S.

    2002-11-01

Thermochemical sulfate reduction (TSR) is a set of chemical reactions leading to hydrocarbon oxidation and the production of carbon dioxide and sour gas (H₂S), observed in deep petroleum reservoirs enriched in anhydrite (calcium sulfate). Molecular and isotopic studies have been conducted on several crude oil samples to determine which types of compounds could have been produced during TSR. We have shown that the main molecules formed by TSR are organo-sulfur compounds. Indeed, sulfur isotopic measurements of alkyl-dibenzothiophenes, diaryl-disulfides and thia-diamondoids (identified by NMR or by synthesis of standards) show that they are formed during TSR, as their values approach that of the sulfur of the anhydrite. Moreover, thia-diamondoids are apparently formed exclusively during this phenomenon and can thus be considered true molecular markers of TSR. In a second part, we investigated the formation mechanism of the molecules produced during TSR through laboratory experiments. A first model showed that sulfur incorporation into the organic matter occurs via mineral sulfur species of low oxidation degree. The use of ³⁴S allowed us to show that sulfate reduction occurred during these simulations. Finally, experiments on polycyclic hydrocarbons, sulfurized or not, established that thia-diamondoids could be formed by acid-catalysed rearrangements at high temperatures, in a similar way to the diamondoids. (author)

  14. DeepPy: Pythonic deep learning

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

This technical report introduces DeepPy – a deep learning framework built on top of NumPy with GPU acceleration. DeepPy bridges the gap between high-performance neural networks and the ease of development from Python/NumPy. Users with a background in scientific computing in Python will quickly be able to understand and change the DeepPy codebase, as it is mainly implemented using high-level NumPy primitives. Moreover, DeepPy supports complex network architectures by letting the user compose mathematical expressions as directed graphs. The latest version is available at http...
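DeepPy's actual API is not reproduced here, but the general idea of expressing a neural network directly in high-level NumPy primitives can be sketched as follows (a minimal two-layer classifier with hand-written backpropagation; all sizes, data, and hyperparameters are illustrative, not DeepPy's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 8 samples, 4 features, label = sign of the feature sum.
X = rng.standard_normal((8, 4))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# Two-layer network held as plain NumPy arrays.
W1, b1 = 0.1 * rng.standard_normal((4, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((16, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, p

lr = 0.5
for _ in range(500):                          # plain full-batch gradient descent
    h, p = forward(X)
    g_out = (p - y) / len(X)                  # dLoss/dlogit for cross-entropy
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

train_acc = float(((forward(X)[1] > 0.5) == y).mean())
```

A framework like DeepPy wraps exactly these matrix operations in reusable layer objects and a directed expression graph, so the user composes the graph instead of writing the gradients by hand.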

  15. Efficacy of two types of palliative sedation therapy defined using intervention protocols: proportional vs. deep sedation.

    Science.gov (United States)

    Imai, Kengo; Morita, Tatsuya; Yokomichi, Naosuke; Mori, Masanori; Naito, Akemi Shirado; Tsukuura, Hiroaki; Yamauchi, Toshihiro; Kawaguchi, Takashi; Fukuta, Kaori; Inoue, Satoshi

    2018-06-01

This study investigated the effect of two types of palliative sedation defined using intervention protocols: proportional and deep sedation. We retrospectively analyzed prospectively recorded data of consecutive cancer patients who received a continuous infusion of midazolam in a palliative care unit. Attending physicians chose the sedation protocol based on each patient's wish, symptom severity, prognosis, and refractoriness of suffering. The primary endpoint was treatment goal achievement at 4 h: in proportional sedation, the achievement of symptom relief (Support Team Assessment Schedule (STAS) ≤ 1) and absence of agitation (modified Richmond Agitation-Sedation Scale (RASS) ≤ 0); in deep sedation, the achievement of deep sedation (RASS ≤ −4). Secondary endpoints included mean scores of STAS and RASS, deep sedation as a result, and adverse events. Among 398 patients who died during the period, 32 received proportional and 18 received deep sedation. The treatment goal achievement rate was 68.8% (22/32, 95% confidence interval 52.7-84.9) in the proportional sedation group vs. 83.3% (15/18, 66.1-100) in the deep sedation group. STAS decreased from 3.8 to 0.8 with proportional sedation at 4 h vs. 3.7 to 0.3 with deep sedation; RASS decreased from +1.2 to −1.7 vs. +1.4 to −3.7, respectively. Deep sedation was needed as a result in 31.3% (10/32) of the proportional sedation group. No fatal events considered probably or definitely related to the intervention occurred. The two types of intervention protocol well reflected the treatment intention and expected outcomes. Further large-scale cohort studies are warranted.

  16. Sequence-based prediction of protein protein interaction using a deep-learning algorithm.

    Science.gov (United States)

    Sun, Tanlin; Zhou, Bo; Lai, Luhua; Pei, Jianfeng

    2017-05-25

    Protein-protein interactions (PPIs) are critical for many biological processes. It is therefore important to develop accurate high-throughput methods for identifying PPI to better understand protein function, disease occurrence, and therapy design. Though various computational methods for predicting PPI have been developed, their robustness for prediction with external datasets is unknown. Deep-learning algorithms have achieved successful results in diverse areas, but their effectiveness for PPI prediction has not been tested. We used a stacked autoencoder, a type of deep-learning algorithm, to study the sequence-based PPI prediction. The best model achieved an average accuracy of 97.19% with 10-fold cross-validation. The prediction accuracies for various external datasets ranged from 87.99% to 99.21%, which are superior to those achieved with previous methods. To our knowledge, this research is the first to apply a deep-learning algorithm to sequence-based PPI prediction, and the results demonstrate its potential in this field.
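The record names a stacked autoencoder as its core model. As an illustration only (not the authors' code; the data, layer sizes, and learning rate are invented), the unsupervised pretraining of a single tied-weight autoencoder layer can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
s = lambda z: 1.0 / (1.0 + np.exp(-z))  # logistic sigmoid

# Toy stand-in for sequence-derived feature vectors: 20 samples, 10 features.
X = rng.random((20, 10))

# One autoencoder layer with tied weights:
#   encode h = s(X W + b), decode X' = s(h W^T + c).
W = 0.1 * rng.standard_normal((10, 4))
b, c = np.zeros(4), np.zeros(10)

def reconstruct(X):
    h = s(X @ W + b)
    return h, s(h @ W.T + c)

err_before = float(((X - reconstruct(X)[1]) ** 2).mean())
for _ in range(3000):                     # minimise squared reconstruction error
    h, Xr = reconstruct(X)
    d_out = (Xr - X) * Xr * (1.0 - Xr)    # backprop through output sigmoid
    d_h = (d_out @ W) * h * (1.0 - h)
    W -= 0.05 * (X.T @ d_h + d_out.T @ h) # tied weight: encoder + decoder terms
    b -= 0.05 * d_h.sum(axis=0)
    c -= 0.05 * d_out.sum(axis=0)
err_after = float(((X - reconstruct(X)[1]) ** 2).mean())
```

In the stacked setting, the learned code h becomes the input to the next autoencoder layer, and a supervised classifier is fine-tuned on top of the final code to predict interacting vs. non-interacting protein pairs.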

  17. Sulphate reduction in the Aespoe HRL tunnel

    International Nuclear Information System (INIS)

    Gustafson, G.; Pedersen, K.; Tullborg, E.L.; Wallin, B.; Wikberg, P.

    1995-12-01

Evidence and indications of sulphate reduction based on geological, hydrogeological, groundwater, isotope and microbial data gathered in and around the Aespoe Hard Rock Laboratory tunnel have been evaluated. This integrated investigation showed that sulphate reduction had taken place in the past and is most likely still an ongoing process. Anaerobic sulphate-reducing bacteria can live in marine sediments, in the tunnel sections under the sea and in deep groundwaters, since there is no access to oxygen. The sulphate-reducing bacteria seem to thrive when the Cl⁻ concentration of the groundwater is 4000-6000 mg/l. Sulphate reduction is an in situ process, but the resulting hydrogen-sulphide-rich water can be transported to other locations. A more vigorous sulphate reduction takes place when the organic content in the groundwater is high (>10 mg/l DOC), which is the case in the sediments and in the groundwaters under the sea. Some bacteria use hydrogen as an electron donor instead of organic carbon and can therefore live in deep environments where access to organic material is limited. The sulphate-reducing bacteria seem to adapt relatively fast to changing flow situations caused by the tunnel construction. Sulphate reduction seems to have occurred and will probably occur where conditions are favourable for the sulphate-reducing bacteria, such as in anaerobic brackish groundwater with dissolved sulphate and organic carbon or hydrogen. 59 refs, 37 figs, 6 tabs

  18. Deep shaft high rate aerobic digestion: laboratory and pilot plant performance

    Energy Technology Data Exchange (ETDEWEB)

    Tran, F; Gannon, D

    1981-01-01

The Deep Shaft is essentially an air-lift reactor sunk deep in the ground (100-160 m); the resulting high hydrostatic pressure, together with very efficient mixing in the shaft, provides extremely high oxygen transfer efficiencies (O.T.E.) of up to 90%, vs. 4-20% in other aerators. This high O.T.E. suggests real potential for Deep-Shaft technology in the aerobic digestion of sludges and animal wastes: with conventional aerobic digesters an O.T.E. over 8% is extremely difficult to achieve. Laboratory and pilot plant Deep-Shaft aerobic digester studies carried out at Eco-Research's Pointe Claire, Quebec laboratories, and at the Paris, Ontario pilot Deep-Shaft digester are described.

  19. Visual Vehicle Tracking Based on Deep Representation and Semisupervised Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2017-01-01

Full Text Available Discriminative tracking methods use binary classification to discriminate between the foreground and background and have achieved some useful results. However, the use of labeled training samples is insufficient for them to achieve accurate tracking. Hence, discriminative classifiers must use their own classification results to update themselves, which may lead to feedback-induced tracking drift. To overcome these problems, we propose a semisupervised tracking algorithm that uses deep representation and transfer learning. Firstly, a 2D multilayer deep belief network is trained with a large amount of unlabeled samples. The nonlinear mapping at the top of this network is extracted as the feature dictionary. Then, this feature dictionary is used to transfer-train and update a deep tracker. The positive samples for training are the tracked vehicles, and the negative samples are the background images. Finally, a particle filter is used to estimate vehicle position. We demonstrate experimentally that our proposed vehicle tracking algorithm can effectively restrain drift while also maintaining adaptation to vehicle appearance. Compared with similar algorithms, our method achieves a better tracking success rate and fewer average central-pixel errors.
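The final stage of the tracker, particle-filter state estimation, is independent of the deep feature model and can be sketched in one dimension (the motion model, noise levels, and particle count here are all illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D toy particle filter: track a position from noisy measurements.
N, T = 500, 30
true_x = 0.0
particles = rng.standard_normal(N)               # initial position guesses
for t in range(T):
    true_x += 1.0                                # constant-velocity target
    z = true_x + 0.5 * rng.standard_normal()     # noisy measurement
    particles += 1.0 + 0.3 * rng.standard_normal(N)   # propagate with motion noise
    weights = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)  # Gaussian likelihood
    weights /= weights.sum()
    particles = particles[rng.choice(N, N, p=weights)]     # resample
estimate = float(particles.mean())
```

In the paper's setting the likelihood would come from the deep tracker's confidence on the image patch at each particle's candidate position, rather than from a scalar measurement as here.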

  20. CANDELS: THE COSMIC ASSEMBLY NEAR-INFRARED DEEP EXTRAGALACTIC LEGACY SURVEY—THE HUBBLE SPACE TELESCOPE OBSERVATIONS, IMAGING DATA PRODUCTS, AND MOSAICS

    International Nuclear Information System (INIS)

    Koekemoer, Anton M.; Ferguson, Henry C.; Grogin, Norman A.; Lotz, Jennifer M.; Lucas, Ray A.; Ogaz, Sara; Rajan, Abhijith; Casertano, Stefano; Dahlen, Tomas; Faber, S. M.; Kocevski, Dale D.; Koo, David C.; Lai, Kamson; McGrath, Elizabeth J.; Riess, Adam G.; Rodney, Steve A.; Dolch, Timothy; Strolger, Louis; Castellano, Marco; Dickinson, Mark

    2011-01-01

This paper describes the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). This survey is designed to document the evolution of galaxies and black holes at z ≈ 1.5-8, and to study Type Ia supernovae at z > 1.5. Five premier multi-wavelength sky regions are selected, each with extensive multi-wavelength observations. The primary CANDELS data consist of imaging obtained in the Wide Field Camera 3 infrared channel (WFC3/IR) and the WFC3 ultraviolet/optical channel, along with the Advanced Camera for Surveys (ACS). The CANDELS/Deep survey covers ∼125 arcmin² within GOODS-N and GOODS-S, while the remainder consists of the CANDELS/Wide survey, achieving a total of ∼800 arcmin² across GOODS and three additional fields (Extended Groth Strip, COSMOS, and Ultra-Deep Survey). We summarize the observational aspects of the survey as motivated by the scientific goals and present a detailed description of the data reduction procedures and products from the survey. Our data reduction methods utilize the most up-to-date calibration files and image combination procedures. We have paid special attention to correcting a range of instrumental effects, including charge transfer efficiency degradation for ACS, removal of electronic bias-striping present in ACS data after Servicing Mission 4, and persistence effects and other artifacts in WFC3/IR. For each field, we release mosaics for individual epochs and eventual mosaics containing data from all epochs combined, to facilitate photometric variability studies and the deepest possible photometry. A more detailed overview of the science goals and observational design of the survey is presented in a companion paper.

  1. Geological evidence for deep exploration in Xiazhuang uranium orefield and its periphery

    International Nuclear Information System (INIS)

    Feng Zhijun; Huang Hongkun; Zeng Wenwei; Wu Jiguang

    2011-01-01

This paper first discusses the ore-controlling role of deep structure, the origin of the metallogenic matter and fluid, and the relation of diabase to the silicification zone; it then summarizes the achievements of geophysical surveys and drilling, and finally analyses the potential for deep exploration in the Xiazhuang uranium orefield. (authors)

  2. Accurate identification of RNA editing sites from primitive sequence with deep neural networks.

    Science.gov (United States)

    Ouyang, Zhangyi; Liu, Feng; Zhao, Chenghui; Ren, Chao; An, Gaole; Mei, Chuan; Bo, Xiaochen; Shu, Wenjie

    2018-04-16

    RNA editing is a post-transcriptional RNA sequence alteration. Current methods have identified editing sites and facilitated research but require sufficient genomic annotations and prior-knowledge-based filtering steps, resulting in a cumbersome, time-consuming identification process. Moreover, these methods have limited generalizability and applicability in species with insufficient genomic annotations or in conditions of limited prior knowledge. We developed DeepRed, a deep learning-based method that identifies RNA editing from primitive RNA sequences without prior-knowledge-based filtering steps or genomic annotations. DeepRed achieved 98.1% and 97.9% area under the curve (AUC) in training and test sets, respectively. We further validated DeepRed using experimentally verified U87 cell RNA-seq data, achieving 97.9% positive predictive value (PPV). We demonstrated that DeepRed offers better prediction accuracy and computational efficiency than current methods with large-scale, mass RNA-seq data. We used DeepRed to assess the impact of multiple factors on editing identification with RNA-seq data from the Association of Biomolecular Resource Facilities and Sequencing Quality Control projects. We explored developmental RNA editing pattern changes during human early embryogenesis and evolutionary patterns in Drosophila species and the primate lineage using DeepRed. Our work illustrates DeepRed's state-of-the-art performance; it may decipher the hidden principles behind RNA editing, making editing detection convenient and effective.
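The AUC figures quoted above measure the probability that a true editing site outranks a non-site under the model's scores; for any scored prediction set it can be computed with the rank-sum identity. A small sketch (the function name is ours; tie handling is omitted for brevity):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)  # 1-based ranks
    n_pos, n_neg = labels.sum(), (~labels).sum()
    # Sum of positive ranks, minus its minimum possible value, over all pairs.
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Perfectly ranked sites give `auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]) == 1.0`; one misranked pair out of four gives 0.75.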

  3. Deep percolation in greenhouse-cultivated celery using the technique of subsurface film strips placement

    Directory of Open Access Journals (Sweden)

    Zhida Du

    2014-05-01

Full Text Available To reduce deep percolation during greenhouse vegetable cultivation, the technique of subsurface film strips placement was tested. Four treatments with two kinds of cross-sections (flat and U-shaped) and two different spacings (10 cm and 40 cm) of subsurface film strips were arranged in a greenhouse before planting celery. At the same time, a non-film treatment was arranged for comparison. Soil water content was measured and irrigation timing was adjusted according to the soil water content. Evapotranspiration of celery during growth was calculated by the energy-balance method, and deep percolation was calculated from the water-balance equation. Deep percolation was reduced in all experimental treatments. A greater reduction in deep percolation was observed with U-shaped cross-section strips than with flat cross-section strips, and when the spacing between the film strips was smaller. The results of this test show that subsurface film strips placement can reduce deep percolation and conserve irrigation water in greenhouse vegetable cultivation. However, the optimal layout variables for the technique need further experimental and numerical analysis.
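The water-balance bookkeeping behind the reported percolation estimates is straightforward; a sketch with hypothetical numbers (the abstract reports no specific values, so every figure below is invented for illustration):

```python
# Water balance over one irrigation interval (all depths in mm; values illustrative).
irrigation = 40.0
rainfall = 0.0             # negligible inside a greenhouse
evapotranspiration = 22.0  # estimated via the energy-balance method
delta_storage = 5.0        # change in root-zone soil water, from moisture readings

# D = I + P - ET - dS: water neither stored in the root zone nor evapotranspired
# percolates below it -- the loss the subsurface film strips aim to intercept.
deep_percolation = irrigation + rainfall - evapotranspiration - delta_storage
```

Comparing this residual term across the film-strip treatments and the non-film control is how the reduction in deep percolation is quantified.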

  4. Interleaving subthalamic nucleus deep brain stimulation to avoid side effects while achieving satisfactory motor benefits in Parkinson disease

    Science.gov (United States)

    Zhang, Shizhen; Zhou, Peizhi; Jiang, Shu; Wang, Wei; Li, Peng

    2016-01-01

Abstract Background: Deep brain stimulation (DBS) of the subthalamic nucleus is an effective treatment for advanced Parkinson disease (PD). However, achieving ideal outcomes by conventional programming can be difficult in some patients, resulting in suboptimal control of PD symptoms and stimulation-induced adverse effects. Interleaving stimulation (ILS) is a newer programming technique that can individually optimize the stimulation area, thereby improving control of PD symptoms while alleviating stimulation-induced side effects after conventional programming fails to achieve the desired results. Methods: We retrospectively reviewed PD patients who received DBS programming during the previous 4 years in our hospital. We collected clinical and demographic data from 12 patients who received ILS because of incomplete alleviation of PD symptoms or stimulation-induced adverse effects after conventional programming had proven ineffective or intolerable. Appropriate lead location was confirmed with postoperative reconstruction images. The rationale and clinical efficacy of ILS were analyzed. Results: We divided our patients into 4 groups based on the following symptoms: stimulation-induced dysarthria, choreoathetoid dyskinesias, gait disturbance, and incomplete control of parkinsonism. After treatment with ILS, patients showed satisfactory improvement in PD symptoms and alleviation of stimulation-induced side effects, with a mean improvement in Unified PD Rating Scale motor scores of 26.9%. Conclusions: ILS is a newer and effective programming strategy to maximize symptom control in PD while decreasing stimulation-induced adverse effects when conventional programming fails to achieve a satisfactory outcome. However, we should keep in mind that most DBS patients are routinely treated with conventional stimulation and that not all patients benefit from ILS. ILS is not recommended as the first choice of programming, and it is recommended only when patients have

  5. Outage reduction of Hamaoka NPS

    International Nuclear Information System (INIS)

    Hida, Shigeru; Anma, Minoru

    1999-01-01

At the Hamaoka nuclear power plant, we have worked on outage reduction since 1993. At that time, the outage length at Hamaoka was 80 days or more, far behind the excellent results of European and American plants of about 30 days. The concrete strategies to achieve this reduction were the extension of working hours, changing the unit of work schedule control to one hour, equipment improvements, and improvements to the work environment, among others. We implemented these measures one by one, reflecting the results at each stage. As a consequence, we achieved a 57-day outage in 1995. Building on this, we pursued further outage reductions step by step and achieved a 38-day outage in 1997 while maintaining the safety and reliability of the plant. We are advancing these strategies further and will aim to achieve a 30-35 day outage in the future. (author)

  6. Anticipating Deep Mapping: Tracing the Spatial Practice of Tim Robinson

    Directory of Open Access Journals (Sweden)

    Jos Smith

    2015-07-01

    Full Text Available There has been little academic research published on the work of Tim Robinson despite an illustrious career, first as an artist of the London avant-garde, then as a map-maker in the west of Ireland, and finally as an author of place. In part, this dearth is due to the difficulty of approaching these three diverse strands collectively. However, recent developments in the field of deep mapping encourage us to look back at the continuity of Robinson’s achievements in full and offer a suitable framework for doing so. Socially engaged with living communities and a depth of historical knowledge about place, but at the same time keen to contribute artistically to the ongoing contemporary culture of place, the parameters of deep mapping are broad enough to encompass the range of Robinson’s whole practice and suggest unique ways to illuminate his very unusual career. But Robinson’s achievements also encourage a reflection on the historical context of deep mapping itself, as well as on the nature of its spatial practice (especially where space comes to connote a medium to be worked rather than an area/volume. With this in mind the following article both explores Robinson’s work through deep mapping and deep mapping through the work of this unusual artist.

  7. Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks.

    Science.gov (United States)

    Burt, Jeremy R; Torosdagli, Neslisah; Khosravan, Naji; RaviPrakash, Harish; Mortazi, Aliasghar; Tissavirasingham, Fiona; Hussein, Sarfaraz; Bagci, Ulas

    2018-04-10

    Deep learning has demonstrated tremendous revolutionary changes in the computing industry and its effects in radiology and imaging sciences have begun to dramatically change screening paradigms. Specifically, these advances have influenced the development of computer-aided detection and diagnosis (CAD) systems. These technologies have long been thought of as "second-opinion" tools for radiologists and clinicians. However, with significant improvements in deep neural networks, the diagnostic capabilities of learning algorithms are approaching levels of human expertise (radiologists, clinicians etc.), shifting the CAD paradigm from a "second opinion" tool to a more collaborative utility. This paper reviews recently developed CAD systems based on deep learning technologies for breast cancer diagnosis, explains their superiorities with respect to previously established systems, defines the methodologies behind the improved achievements including algorithmic developments, and describes remaining challenges in breast cancer screening and diagnosis. We also discuss possible future directions for new CAD models that continue to change as artificial intelligence algorithms evolve.

  8. Self-compression of femtosecond deep-ultraviolet pulses by filamentation in krypton.

    Science.gov (United States)

    Adachi, Shunsuke; Suzuki, Toshinori

    2017-05-15

    We demonstrate self-compression of deep-ultraviolet (DUV) pulses by filamentation in krypton. In contrast to self-compression in the near-infrared, that in the DUV is associated with a red-shifted sub-pulse appearing in the pulse temporal profile. The achieved pulse width of 15 fs is the shortest among demonstrated sub-mJ deep-ultraviolet pulses.

  9. Biogeochemical signals from deep microbial life in terrestrial crust.

    Directory of Open Access Journals (Sweden)

    Yohey Suzuki

Full Text Available In contrast to the deep subseafloor biosphere, a volumetrically vast and stable habitat for microbial life in the terrestrial crust remains poorly explored. For the long-term sustainability of a crustal biome, high-energy fluxes derived from hydrothermal circulation and water radiolysis in uranium-enriched rocks are seemingly essential. However, crustal habitability depending on a low supply of energy is unknown. We present multi-isotopic evidence of microbially mediated sulfate reduction in a granitic aquifer, a representative habitat of the terrestrial crust. Deep meteoric groundwater was collected from underground boreholes drilled into the Cretaceous Toki granite (central Japan). A large sulfur isotopic fractionation of 20-60‰, diagnostic of microbial sulfate reduction, is associated with the investigated groundwater containing sulfate below 0.2 mM. In contrast, a small carbon isotopic fractionation (<30‰) is not indicative of methanogenesis. Except for 2011, the concentrations of H2 ranged mostly from 1 to 5 nM, which is also consistent with an aquifer where the terminal electron accepting process is dominantly controlled by ongoing sulfate reduction. High isotopic ratios of mantle-derived ³He relative to radiogenic ⁴He in groundwater and the flux of H2 along adjacent faults suggest that, in addition to low concentrations of organic matter (<70 µM), H2 from deeper sources might partly fuel metabolic activities. Our results demonstrate that the deep biosphere in the terrestrial crust is metabolically active and plays a crucial role in the formation of reducing groundwater even under low-energy fluxes.

  10. Imaging findings and significance of deep neck space infection

    International Nuclear Information System (INIS)

    Zhuang Qixin; Gu Yifeng; Du Lianjun; Zhu Lili; Pan Yuping; Li Minghua; Yang Shixun; Shang Kezhong; Yin Shankai

    2004-01-01

Objective: To study the imaging appearance of deep neck space cellulitis and abscess and to evaluate the diagnostic criteria of deep neck space infection. Methods: CT and MRI findings of 28 cases with deep neck space infection proved by clinical manifestation and pathology were analyzed, including 11 cases of retropharyngeal space infection, 5 cases of parapharyngeal space infection, 4 cases of masticator space infection, and 8 cases of multi-space infection. Results: CT and MRI could display the swelling of the soft tissues and the displacement, reduction, or disappearance of lipoid spaces in cellulitis. In inflammatory tissues, MRI demonstrated hypointense or isointense signal on T1WI and hyperintense signal on T2WI. In abscess, CT could display hypodensity in the center and boundary enhancement of the abscess. MRI could display obvious hyperintense signal on T2WI and boundary enhancement. Conclusion: CT and MRI can provide useful information for deep neck space cellulitis and abscess.

  11. Gene expression inference with deep learning.

    Science.gov (United States)

    Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui

    2016-06-15

Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture complex nonlinear relationship between expressions of genes. We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: xhx@ics.uci.edu. Supplementary data are available at Bioinformatics online.
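The LINCS-style linear baseline that D-GEX improves upon is easy to state: one least-squares map from landmark to target expression. A synthetic sketch (the data, the nonlinearity, and the dimensions, scaled far down from the real ~1000-landmark setting, are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for expression data: 200 profiles, 5 "landmark" genes
# driving 3 "target" genes through a mild nonlinearity plus noise.
L = rng.standard_normal((200, 5))
T = np.tanh(L @ rng.standard_normal((5, 3))) + 0.05 * rng.standard_normal((200, 3))

train, test = slice(0, 150), slice(150, 200)

# Baseline: linear regression from landmarks to all targets, fit by least squares.
A, *_ = np.linalg.lstsq(L[train], T[train], rcond=None)
mae_lr = float(np.abs(L[test] @ A - T[test]).mean())
```

D-GEX replaces the matrix A with a multilayer network fit by backpropagation, which is what captures the nonlinear gene-gene relationships behind the reported 15.33% relative improvement on the real data.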

  12. The onset of fabric development in deep marine sediments

    NARCIS (Netherlands)

    Maffione, Marco; Morris, Antony

    2017-01-01

Post-depositional compaction is a key stage in the formation of sedimentary rocks that results in porosity reduction, grain realignment and the production of sedimentary fabrics. The progressive time-depth evolution of the onset of fabric development in deep marine sediments is poorly constrained.

  13. Key technologies and risk management of deep tunnel construction at Jinping II hydropower station

    Directory of Open Access Journals (Sweden)

    Chunsheng Zhang

    2016-08-01

Full Text Available The four diversion tunnels at Jinping II hydropower station represent the deepest underground project yet conducted in China, with an overburden depth of 1500–2000 m and a maximum depth of 2525 m. The tunnel structure was subjected to a maximum external water pressure of 10.22 MPa and a maximum single-point groundwater inflow of 7.3 m³/s. The success of the project construction hinged on numerous challenging issues such as the stability of the rock mass surrounding the deep tunnels, strong rockburst prevention and control, and the treatment of high-pressure, large-volume groundwater infiltration. During the construction period, a series of new technologies was developed for the purpose of risk control in the deep tunnel project. Nondestructive sampling and in-situ measurement technologies were employed to fully characterize the formation and development of excavation damaged zones (EDZs) and to evaluate the mechanical behaviors of deep rocks. The time effect of marble fracture propagation, the brittle–ductile–plastic transition of marble, and the temporal development of rock mass fracture and damage induced by high geostress were characterized. The safe construction of deep tunnels was achieved under a high risk of strong rockburst using active measures, a support system comprising lining, grouting, and external water pressure reduction techniques that addressed the coupled effect of high geostress and high external water pressure, and a comprehensive early-warning system. A complete set of technologies for the treatment of high-pressure, large-volume groundwater infiltration was developed. Monitoring results indicate that the Jinping II hydropower station has been generally stable since it was put into operation in 2014.

  14. Structural damage detection using deep learning of ultrasonic guided waves

    Science.gov (United States)

    Melville, Joseph; Alguri, K. Supreet; Deemer, Chris; Harley, Joel B.

    2018-04-01

Structural health monitoring using ultrasonic guided waves relies on accurate interpretation of guided wave propagation to distinguish damage state indicators. However, traditional physics-based models do not provide an accurate representation, and classic data-driven techniques, such as a support vector machine, are too simplistic to capture the complex nature of ultrasonic guided waves. To address this challenge, this paper uses a deep learning interpretation of ultrasonic guided waves to achieve fast, accurate, and automated structural damage detection. To achieve this, full wavefield scans of thin metal plates are used, half from the undamaged state and half from the damaged state. These data are used to train our deep network to predict the damage state of a plate with 99.98% accuracy given signals from just 10 spatial locations on the plate, compared with a support vector machine (SVM), which achieved 62% accuracy.
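None of the paper's code or data is reproduced here; the following toy sketch only illustrates the data layout — per-plate records from 10 spatial locations, labelled undamaged vs. damaged — using a nearest-centroid rule as a stand-in for the deep network (the signal model, echo, and noise levels are all invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def record(damaged, n_sensors=10, n_t=64):
    """Toy guided-wave record at 10 sensor locations: damage adds a
    delayed, scattered echo on top of the direct wave packet."""
    t = np.arange(n_t)
    x = np.sin(0.4 * t) * np.exp(-0.05 * t)      # direct arrival
    X = np.tile(x, (n_sensors, 1))
    if damaged:
        X += 0.3 * np.roll(X, 12, axis=1)        # scattered echo from the damage
    return (X + 0.05 * rng.standard_normal(X.shape)).ravel()

X = np.array([record(d) for d in [0] * 40 + [1] * 40])
y = np.array([0] * 40 + [1] * 40)
train, test = np.r_[0:30, 40:70], np.r_[30:40, 70:80]

# Nearest-centroid classifier: assign each test record to the closer class mean.
c0 = X[train][y[train] == 0].mean(axis=0)
c1 = X[train][y[train] == 1].mean(axis=0)
pred = (np.linalg.norm(X[test] - c1, axis=1) <
        np.linalg.norm(X[test] - c0, axis=1)).astype(int)
acc = float((pred == y[test]).mean())
```

On real wavefield data the damage signature is far subtler and varies spatially, which is why a learned deep representation outperforms such simple rules (and the SVM baseline) in the paper.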

  15. Deep Learning for Plant Identification in Natural Environment.

    Science.gov (United States)

    Sun, Yu; Liu, Yuan; Wang, Guan; Zhang, Haiyan

    2017-01-01

Plant image identification has become an interdisciplinary focus in both botanical taxonomy and computer vision. The first plant image dataset collected by mobile phone in a natural scene is presented, containing 10,000 images of 100 ornamental plant species on the Beijing Forestry University campus. A 26-layer deep learning model consisting of 8 residual building blocks is designed for large-scale plant classification in a natural environment. The proposed model achieves a recognition rate of 91.78% on the BJFU100 dataset, demonstrating that deep learning is a promising technology for smart forestry.

  16. Deep Learning for Plant Identification in Natural Environment

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2017-01-01

    Full Text Available Plant image identification has become an interdisciplinary focus in both botanical taxonomy and computer vision. The first plant image dataset collected by mobile phone in a natural scene is presented, containing 10,000 images of 100 ornamental plant species on the Beijing Forestry University campus. A 26-layer deep learning model consisting of 8 residual building blocks is designed for large-scale plant classification in a natural environment. The proposed model achieves a recognition rate of 91.78% on the BJFU100 dataset, demonstrating that deep learning is a promising technology for smart forestry.
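
    The residual building blocks mentioned above follow the standard identity-shortcut pattern. A minimal dense-layer sketch (toy shapes; the paper's actual model is a 26-layer convolutional network, not reproduced here) shows the core computation:

    ```python
    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def residual_block(x, w1, w2):
        # Identity shortcut around a small two-layer transform: y = relu(x + F(x)).
        return relu(x + relu(x @ w1) @ w2)

    rng = np.random.default_rng(1)
    d = 8
    x = rng.standard_normal((4, d))
    w1 = 0.1 * rng.standard_normal((d, d))
    w2 = 0.1 * rng.standard_normal((d, d))

    y = residual_block(x, w1, w2)
    assert y.shape == x.shape

    # With zero weights the block reduces exactly to relu(x): each layer only
    # has to learn a residual on top of the identity, which is what makes
    # stacks of many such blocks trainable.
    identity_like = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
    assert np.allclose(identity_like, relu(x))
    ```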

  17. Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

    OpenAIRE

    Li, Xiangang; Wu, Xihong

    2014-01-01

    Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions of LSTM are investigated, considering that deep hierarchical models have turned out to be more efficient than shallow ones. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...

  18. Cocatalysts in Semiconductor-based Photocatalytic CO2 Reduction: Achievements, Challenges, and Opportunities.

    Science.gov (United States)

    Ran, Jingrun; Jaroniec, Mietek; Qiao, Shi-Zhang

    2018-02-01

    Ever-increasing fossil-fuel combustion along with massive CO2 emissions has aroused a global energy crisis and climate change. Photocatalytic CO2 reduction represents a promising strategy for clean, cost-effective, and environmentally friendly conversion of CO2 into hydrocarbon fuels by utilizing solar energy. This strategy combines the reductive half-reaction of CO2 conversion with an oxidative half-reaction, e.g., H2O oxidation, to create a carbon-neutral cycle, presenting a viable solution to global energy and environmental problems. There are three pivotal processes in photocatalytic CO2 conversion: (i) solar-light absorption, (ii) charge separation/migration, and (iii) catalytic CO2 reduction and H2O oxidation. While significant progress has been made in optimizing the first two processes, much less research has been conducted toward enhancing the efficiency of the third step, which requires the presence of cocatalysts. In general, cocatalysts play four important roles: (i) boosting charge separation/transfer, (ii) improving the activity and selectivity of CO2 reduction, (iii) enhancing the stability of photocatalysts, and (iv) suppressing side or back reactions. Herein, for the first time, all the developed CO2-reduction cocatalysts for semiconductor-based photocatalytic CO2 conversion are summarized, and their functions and mechanisms are discussed. Finally, perspectives in this emerging area are provided. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Reduction of MRI acoustic noise achieved by manipulation of scan parameters – A study using veterinary MR sequences

    International Nuclear Information System (INIS)

    Baker, Martin A.

    2013-01-01

    Sound pressure levels were measured within an MR scan room for a range of sequences employed in veterinary brain scanning, using a test phantom in an extremity coil. Variation of TR and TE, and use of a quieter gradient mode (‘whisper’ mode) were evaluated to determine their effect on sound pressure levels (SPLs). Use of a human head coil and a human brain sequence was also evaluated. Significant differences in SPL were achieved for T2, T1, T2* gradient echo and VIBE sequences by varying TR or TE, or by selecting the ‘whisper’ gradient mode. An appreciable reduction was achieved for the FLAIR sequence. Noise levels were not affected when a head coil was used in place of an extremity coil. Due to sequence parameters employed, veterinary patients and anaesthetists may be exposed to higher sound levels than those experienced in human MR examinations. The techniques described are particularly valuable in small animal MR scanning where ear protection is not routinely provided for the patient.

  20. Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.

    Science.gov (United States)

    Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi

    2018-04-12

    Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new approach based on an image-processing or deep-learning-based method for super-resolution of 3D images with such asymmetric resolution, so as to restore the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, which are subjected to the deep-learning-based method as well as conventional methods. We find that the former yields superior restoration, particularly as the asymmetric resolution is increased. Our super-resolution approach for images having asymmetric resolution enables observation time reduction.
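
    A conventional, non-learning baseline for restoring the depth axis is simple interpolation along the milling direction, which is the kind of method the deep-learning approach is compared against. A minimal numpy sketch (array shapes and the upsampling factor are invented for illustration) might look like:

    ```python
    import numpy as np

    def upsample_depth(volume, factor):
        """Linearly interpolate along axis 0 (the FIB milling direction)."""
        z = np.arange(volume.shape[0])
        z_fine = np.linspace(0, volume.shape[0] - 1,
                             factor * (volume.shape[0] - 1) + 1)
        out = np.empty((z_fine.size,) + volume.shape[1:])
        # np.interp works on 1-D arrays, so apply it along each (y, x) column.
        for iy in range(volume.shape[1]):
            for ix in range(volume.shape[2]):
                out[:, iy, ix] = np.interp(z_fine, z, volume[:, iy, ix])
        return out

    coarse = np.random.default_rng(2).random((8, 16, 16))  # low depth resolution
    fine = upsample_depth(coarse, factor=4)
    print(fine.shape)  # → (29, 16, 16): depth densified, lateral axes untouched
    ```

    The deep-learning method replaces this per-column interpolation with a model trained on the high-resolution lateral SEM slices, which is why it recovers structure interpolation cannot.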

  1. Motivation, cognitive processing and achievement in higher education

    NARCIS (Netherlands)

    Bruinsma, M.

    2004-01-01

    This study investigated the question of whether a student's expectancy, values and negative affect influenced their deep information processing approach and achievement at the end of the first and second academic year. Five hundred and sixty-five first-year students completed a self-report

  2. Adaptive deep brain stimulation in advanced Parkinson disease.

    Science.gov (United States)

    Little, Simon; Pogosyan, Alex; Neal, Spencer; Zavala, Baltazar; Zrinzo, Ludvic; Hariz, Marwan; Foltynie, Thomas; Limousin, Patricia; Ashkan, Keyoumars; FitzGerald, James; Green, Alexander L; Aziz, Tipu Z; Brown, Peter

    2013-09-01

    Brain-computer interfaces (BCIs) could potentially be used to interact with pathological brain signals to intervene and ameliorate their effects in disease states. Here, we provide proof-of-principle of this approach by using a BCI to interpret pathological brain activity in patients with advanced Parkinson disease (PD) and to use this feedback to control when therapeutic deep brain stimulation (DBS) is delivered. Our goal was to demonstrate that by personalizing and optimizing stimulation in real time, we could improve on both the efficacy and efficiency of conventional continuous DBS. We tested BCI-controlled adaptive DBS (aDBS) of the subthalamic nucleus in 8 PD patients. Feedback was provided by processing of the local field potentials recorded directly from the stimulation electrodes. The results were compared to no stimulation, conventional continuous stimulation (cDBS), and random intermittent stimulation. Both unblinded and blinded clinical assessments of motor effect were performed using the Unified Parkinson's Disease Rating Scale. Motor scores improved by 66% (unblinded) and 50% (blinded) during aDBS, which were 29% (p = 0.03) and 27% (p = 0.005) better than cDBS, respectively. These improvements were achieved with a 56% reduction in stimulation time compared to cDBS, and a corresponding reduction in energy requirements (p < 0.001). aDBS was also more effective than no stimulation and random intermittent stimulation. BCI-controlled DBS is tractable and can be more efficient and efficacious than conventional continuous neuromodulation for PD. Copyright © 2013 American Neurological Association.

  3. Endovascular treatment of iliofemoral deep vein thrombosis in pregnancy using US-guided percutaneous aspiration thrombectomy.

    Science.gov (United States)

    Gedikoglu, Murat; Oguzkurt, Levent

    2017-01-01

    We aimed to describe ultrasonography (US)-guided percutaneous aspiration thrombectomy in pregnant women with iliofemoral deep vein thrombosis. This study included nine pregnant women with acute and subacute iliofemoral deep vein thrombosis, who were severe symptomatic cases with massive swelling and pain of the leg. Patients were excluded from the study if they had only femoropopliteal deep vein thrombosis or mild symptoms of deep vein thrombosis. US-guided percutaneous aspiration thrombectomy was applied to achieve thrombus removal and uninterrupted venous flow. The treatment was considered successful if there was adequate venous patency and symptomatic relief. Complete or significant thrombus removal and uninterrupted venous flow from the puncture site up to the iliac veins were achieved in all patients at first intervention. Complete relief of leg pain was achieved immediately in seven patients (77.8%). Two patients (22.2%) had a recurrence of thrombosis in the first week postintervention. One of them underwent a second intervention, where percutaneous aspiration thrombectomy was performed again with successful removal of thrombus and establishment of in line flow. Two patients were lost to follow-up after birth. None of the remaining seven patients had rethrombosis throughout the postpartum period. Symptomatic relief was detected clinically in these patients. Endovascular treatment with US-guided percutaneous aspiration thrombectomy can be considered as a safe and effective way to remove thrombus from the deep veins in pregnant women with acute and subacute iliofemoral deep vein thrombosis.

  4. Implications of Deep Decarbonization for Carbon Cycle Science

    Science.gov (United States)

    Jones, A. D.; Williams, J.; Torn, M. S.

    2016-12-01

    The energy-system transformations required to achieve deep decarbonization in the United States, defined as a reduction of greenhouse gas emissions of 80% or more below 1990 levels by 2050, have profound implications for carbon cycle science, particularly with respect to 4 key objectives: understanding and enhancing the terrestrial carbon sink, using bioenergy sustainably, controlling non-CO2 GHGs, and emissions monitoring and verification. (1) As a source of mitigation, the terrestrial carbon sink is pivotal but uncertain, and changes in the expected sink may significantly affect the overall cost of mitigation. Yet the dynamics of the sink under changing climatic conditions, and the potential to protect and enhance the sink through land management, are poorly understood. Policy urgently requires an integrative research program that links basic science knowledge to land management practices. (2) Biomass resources can fill critical energy needs in a deeply decarbonized system, but current understanding of sustainability and lifecycle carbon aspects is limited. Mitigation policy needs better understanding of the sustainable amount, types, and cost of bioenergy feedstocks, their interactions with other land uses, and more efficient and reliable monitoring of embedded carbon. (3) As CO2 emissions from energy decrease under deep decarbonization, the relative share of non-CO2 GHGs grows larger and their mitigation more important. Because the sources tend to be distributed, variable, and uncertain, they have been under-researched. Policy needs a better understanding of mitigation priorities and costs, informed by deeper research in key areas such as fugitive CH4, fertilizer-derived N2O, and industrial F-gases. (4) The M&V challenge under deep decarbonization changes with a steep decrease in the combustion CO2 sources due to widespread electrification, while a greater share of CO2 releases is net-carbon-neutral. Similarly, gas pipelines may carry an increasing share of

  5. Deep Visual Attention Prediction

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixation in free-viewing scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements in human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of these global and local predictions. Our model is learned in a deep supervision manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
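
    The cooperation of coarse global and fine local predictions described above can be sketched as upsample-and-average. This is a simplified stand-in for the paper's learned skip-layer fusion, and the map sizes are invented for illustration:

    ```python
    import numpy as np

    def fuse_saliency(maps, size):
        """Upsample each layer's prediction to a common size and average."""
        fused = np.zeros((size, size))
        for m in maps:
            factor = size // m.shape[0]
            # Nearest-neighbour upsampling via a Kronecker product.
            fused += np.kron(m, np.ones((factor, factor)))
        return fused / len(maps)

    rng = np.random.default_rng(3)
    coarse = rng.random((4, 4))    # deep layer: global saliency
    mid = rng.random((8, 8))       # intermediate layer
    fine = rng.random((16, 16))    # shallow layer: local detail
    fused = fuse_saliency([coarse, mid, fine], size=16)
    print(fused.shape)  # → (16, 16)
    ```

    In the actual network each level is supervised directly and the fusion weights are learned rather than a uniform average.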

  6. What factors determine academic achievement in high achieving undergraduate medical students? A qualitative study.

    Science.gov (United States)

    Abdulghani, Hamza M; Al-Drees, Abdulmajeed A; Khalil, Mahmood S; Ahmad, Farah; Ponnamperuma, Gominda G; Amin, Zubair

    2014-04-01

    Medical students' academic achievement is affected by many factors such as motivational beliefs and emotions. Although students with high intellectual capacity are selected to study medicine, their academic performance varies widely. The aim of this study is to explore the high achieving students' perceptions of factors contributing to academic achievement. Focus group discussions (FGD) were carried out with 10 male and 9 female high achieving (scores more than 85% in all tests) students, from the second, third, fourth and fifth academic years. During the FGDs, the students were encouraged to reflect on their learning strategies and activities. The discussion was audio-recorded, transcribed and analysed qualitatively. Factors influencing high academic achievement include: attendance to lectures, early revision, prioritization of learning needs, deep learning, learning in small groups, mind mapping, learning in skills lab, learning with patients, learning from mistakes, time management, and family support. Internal motivation and expected examination results are important drivers of high academic performance. Management of non-academic issues like sleep deprivation, homesickness, language barriers, and stress is also important for academic success. Addressing these factors, which might be unique for a given student community, in a systematic manner would be helpful to improve students' performance.

  7. Portosystemic pressure reduction achieved with TIPPS and impact of portosystemic collaterals for the prediction of the portosystemic-pressure gradient in cirrhotic patients

    Energy Technology Data Exchange (ETDEWEB)

    Grözinger, Gerd, E-mail: gerd.groezinger@med.uni-tuebingen.de [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany); Wiesinger, Benjamin; Schmehl, Jörg; Kramer, Ulrich [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany); Mehra, Tarun [Department of Dermatology, University of Tübingen (Germany); Grosse, Ulrich; König, Claudius [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany)

    2013-12-01

    Purpose: The portosystemic pressure gradient is an important factor defining prognosis in hepatic disease. However, noninvasive prediction of the gradient and the possible reduction by establishment of a TIPSS is challenging. A cohort of patients receiving TIPSS was evaluated with regard to imaging features of collaterals in cross-sectional imaging and the achievable reduction of the pressure gradient by establishment of a TIPSS. Methods: In this study 70 consecutive patients with cirrhotic liver disease were retrospectively evaluated. Patients received either CT or MR imaging before invasive pressure measurement during TIPSS procedure. Images were evaluated with regard to esophageal and fundus varices, splenorenal collaterals, short gastric vein and paraumbilical vein. Results were correlated with Child stage, portosystemic pressure gradient and post-TIPSS reduction of the pressure gradient. Results: In 55 of the 70 patients TIPSS reduced the pressure gradient to less than 12 mmHg. The pre-interventional pressure and the pressure reduction were not significantly different between Child stages. Imaging features of varices and portosystemic collaterals did not show significant differences. The only parameter with a significant predictive value for the reduction of the pressure gradient was the pre-TIPSS pressure gradient (r = 0.8, p < 0.001). Conclusions: TIPSS allows a reliable reduction of the pressure gradient even at high pre-interventional pressure levels and a high collateral presence. In patients receiving TIPSS the presence and the characteristics of the collateral vessels seem to be too variable to draw reliable conclusions concerning the portosystemic pressure gradient.

  8. Portosystemic pressure reduction achieved with TIPPS and impact of portosystemic collaterals for the prediction of the portosystemic-pressure gradient in cirrhotic patients

    International Nuclear Information System (INIS)

    Grözinger, Gerd; Wiesinger, Benjamin; Schmehl, Jörg; Kramer, Ulrich; Mehra, Tarun; Grosse, Ulrich; König, Claudius

    2013-01-01

    Purpose: The portosystemic pressure gradient is an important factor defining prognosis in hepatic disease. However, noninvasive prediction of the gradient and the possible reduction by establishment of a TIPSS is challenging. A cohort of patients receiving TIPSS was evaluated with regard to imaging features of collaterals in cross-sectional imaging and the achievable reduction of the pressure gradient by establishment of a TIPSS. Methods: In this study 70 consecutive patients with cirrhotic liver disease were retrospectively evaluated. Patients received either CT or MR imaging before invasive pressure measurement during TIPSS procedure. Images were evaluated with regard to esophageal and fundus varices, splenorenal collaterals, short gastric vein and paraumbilical vein. Results were correlated with Child stage, portosystemic pressure gradient and post-TIPSS reduction of the pressure gradient. Results: In 55 of the 70 patients TIPSS reduced the pressure gradient to less than 12 mmHg. The pre-interventional pressure and the pressure reduction were not significantly different between Child stages. Imaging features of varices and portosystemic collaterals did not show significant differences. The only parameter with a significant predictive value for the reduction of the pressure gradient was the pre-TIPSS pressure gradient (r = 0.8, p < 0.001). Conclusions: TIPSS allows a reliable reduction of the pressure gradient even at high pre-interventional pressure levels and a high collateral presence. In patients receiving TIPSS the presence and the characteristics of the collateral vessels seem to be too variable to draw reliable conclusions concerning the portosystemic pressure gradient

  9. Deep cooling of optically trapped atoms implemented by magnetic levitation without transverse confinement

    Science.gov (United States)

    Li, Chen; Zhou, Tianwei; Zhai, Yueyang; Xiang, Jinggang; Luan, Tian; Huang, Qi; Yang, Shifeng; Xiong, Wei; Chen, Xuzong

    2017-05-01

    We report a setup for the deep cooling of atoms in an optical trap. The deep cooling is implemented by eliminating the influence of gravity using specially constructed magnetic coils. Compared to the conventional method of generating a magnetic levitating force, the lower trap frequency achieved in our setup provides a lower temperature limit and more freedom for the Bose gases, with a simpler implementation. A final temperature as low as ˜6 nK is achieved in the optical trap, and the atomic density is decreased by nearly two orders of magnitude during the second stage of evaporative cooling. This deep cooling of optically trapped atoms holds promise for many applications, such as atomic interferometers, atomic gyroscopes, and magnetometers, as well as for basic research directions such as quantum simulation and atom optics.
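
    The levitation condition the coils must satisfy is that the magnetic force on an atom balances gravity, m g = μ_eff ∂B/∂z. A back-of-the-envelope calculation gives the required field gradient; the species and state below (87Rb in |F=1, mF=−1⟩, effective moment μ_B/2) are an assumed example, not stated in this record:

    ```python
    # Field gradient needed to levitate one atom: m * g = mu_eff * dB/dz.
    u = 1.66053906660e-27      # atomic mass unit, kg
    m = 86.909 * u             # mass of 87Rb, kg
    g = 9.81                   # gravitational acceleration, m/s^2
    mu_B = 9.2740100783e-24    # Bohr magneton, J/T
    mu_eff = mu_B / 2          # |F=1, mF=-1> of 87Rb: g_F * mF = 1/2

    grad_T_per_m = m * g / mu_eff
    grad_G_per_cm = grad_T_per_m * 1e4 / 1e2   # 1 T = 1e4 G, 1 m = 1e2 cm
    print(f"{grad_G_per_cm:.1f} G/cm")          # ≈ 30.5 G/cm
    ```

    This ~30.5 G/cm figure is the standard levitation gradient for that state; the point of the setup above is to supply such a gradient without the transverse confinement (and hence added trap frequency) a conventional quadrupole levitation field introduces.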

  10. The use of radiological guidelines to achieve a sustained reduction in the number of radiographic examinations of the cervical spine, lumbar spine and knees performed for GPs

    International Nuclear Information System (INIS)

    Glaves, J.

    2005-01-01

    AIM: To determine if the use of request guidelines can achieve a sustained reduction in the number of radiographic examinations of the cervical spine, lumbar spine and knee joints performed for general practitioners (GPs). METHODS: GPs referring to three community hospitals and a district general hospital were circulated with referral guidelines for radiography of the cervical spine, lumbar spine and knee, and all requests for these three examinations were checked. Requests that did not fit the guidelines were returned to the GP with an explanatory letter and a further copy of the guidelines. Where applicable, a large-joint replacement algorithm was also enclosed. If the GP maintained the opinion that the examination was indicated, she or he had the option of supplying further justifying information in writing or speaking to a consultant radiologist. RESULTS: Overall the number of radiographic examinations fell by 68% in the first year, achieving a 79% reduction in the second year. For knees, lumbar spine and cervical spine radiographs the total reductions were 77%, 78% and 86%, respectively. CONCLUSION: The use of referral guidelines, reinforced by request checking and clinical management algorithms, can produce a dramatic and sustained reduction in the number of radiographs of the cervical spine, lumbar spine and knees performed for GPs

  11. Distributed deep learning networks among institutions for medical imaging.

    Science.gov (United States)

    Chang, Ken; Balachandar, Niranjan; Lam, Carson; Yi, Darvin; Brown, James; Beers, Andrew; Rosen, Bruce; Rubin, Daniel L; Kalpathy-Cramer, Jayashree

    2018-03-29

    Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in a performance that was comparable to that of centrally hosted patient data. We also found that there is an improvement in the performance of cyclical weight transfer heuristic with a high frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
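
    The cyclical weight transfer heuristic, in which the model rather than the patient data travels between institutions, can be simulated with any incrementally trainable model. The sketch below uses scikit-learn's SGDClassifier on synthetic data; the institution count matches the study's four sites, but everything else (dataset, cycle count, model) is invented for illustration:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    # Simulate 4 institutions that cannot pool data: the model cycles between
    # sites, continuing training on each local dataset in turn.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    sites = np.array_split(np.arange(len(Xtr)), 4)

    model = SGDClassifier(random_state=0)
    for cycle in range(10):        # frequent transfer, as the study recommends
        for idx in sites:
            model.partial_fit(Xtr[idx], ytr[idx], classes=np.array([0, 1]))

    print(f"held-out accuracy: {model.score(Xte, yte):.2f}")
    ```

    The study's finding is that this cycling approaches the accuracy of training on centrally hosted data, and improves as the transfer frequency increases.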

  12. Vietnam; Poverty Reduction Strategy Paper

    OpenAIRE

    International Monetary Fund

    2004-01-01

    This paper assesses the Poverty Reduction Strategy Paper of Vietnam, known as the Comprehensive Poverty Reduction and Growth Strategy (CPRGS). It is an action program to achieve economic growth and poverty reduction objectives. This paper reviews the objectives and tasks of socio-economic development and poverty reduction. The government of Vietnam takes poverty reduction as a cutting-through objective in the process of country socio-economic development and declares its commitment to impleme...

  13. A novel application of deep learning for single-lead ECG classification.

    Science.gov (United States)

    Mathews, Sherin M; Kambhamettu, Chandra; Barner, Kenneth E

    2018-06-04

    Detecting and classifying cardiac arrhythmias is critical to the diagnosis of patients with cardiac abnormalities. In this paper, a novel approach based on deep learning methodology is proposed for the classification of single-lead electrocardiogram (ECG) signals. We demonstrate the application of the Restricted Boltzmann Machine (RBM) and deep belief networks (DBN) for ECG classification following detection of ventricular and supraventricular heartbeats using single-lead ECG. The effectiveness of the proposed algorithm is illustrated using real ECG signals from the widely used MIT-BIH database. Simulation results demonstrate that with a suitable choice of parameters, RBM and DBN can achieve high average recognition accuracies for ventricular ectopic beats (93.63%) and supraventricular ectopic beats (95.57%) at a low sampling rate of 114 Hz. Experimental results indicate that classifiers built in this deep learning framework achieved state-of-the-art performance at lower sampling rates and with simpler features than traditional methods. Further, the features extracted at a sampling rate of 114 Hz, when combined with deep learning, provided enough discriminatory power for the classification task. Thus, our proposed deep neural network algorithm demonstrates that deep learning-based methods offer accurate ECG classification and could potentially be extended to other physiological signal classifications, such as those in arterial blood pressure (ABP), nerve conduction (EMG), and heart rate variability (HRV) studies. Copyright © 2018. Published by Elsevier Ltd.
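
    The RBM-as-feature-learner stage can be sketched with scikit-learn's BernoulliRBM feeding a logistic regression. This is a hedged illustration: the digits dataset stands in for windowed, normalized ECG beats, and none of the paper's actual features, parameters, or data are reproduced.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline

    X, y = load_digits(return_X_y=True)
    X = X / 16.0                    # BernoulliRBM expects inputs in [0, 1]
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # Unsupervised RBM feature extraction followed by a simple classifier,
    # mirroring the RBM -> classifier structure described in the abstract.
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                             n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(Xtr, ytr)
    print(f"test accuracy: {model.score(Xte, yte):.2f}")
    ```

    A full DBN stacks several such RBM layers and fine-tunes them jointly; scikit-learn only provides the single-layer building block used here.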

  14. An introduction to deep submicron CMOS for vertex applications

    CERN Document Server

    Campbell, M; Cantatore, E; Faccio, F; Heijne, Erik H M; Jarron, P; Santiard, Jean-Claude; Snoeys, W; Wyllie, K

    2001-01-01

    Microelectronics has become a key enabling technology in the development of tracking detectors for High Energy Physics. Deep submicron CMOS is likely to be extensively used in all future tracking systems. Radiation tolerance in the Mrad region has been achieved and complete readout chips comprising many millions of transistors now exist. The choice of technology is dictated by market forces but the adoption of deep submicron CMOS for tracking applications still poses some challenges. The techniques used are reviewed and some of the future challenges are discussed.

  15. An adaptive deep learning approach for PPG-based identification.

    Science.gov (United States)

    Jindal, V; Birjandtalab, J; Pouyan, M Baran; Nourani, M

    2016-08-01

    Wearable biosensors have become increasingly popular in healthcare due to their capabilities for low-cost, long-term biosignal monitoring. This paper presents a novel two-stage technique to offer biometric identification using these biosensors through Deep Belief Networks and Restricted Boltzmann Machines. Our identification approach improves robustness in current monitoring procedures within clinical, e-health, and fitness environments using photoplethysmography (PPG) signals through deep learning classification models. The approach is tested on the TROIKA dataset using 10-fold cross validation and achieved an accuracy of 96.1%.

  16. Report of the working group on achieving a fourfold reduction in greenhouse gas emissions in France by 2050

    International Nuclear Information System (INIS)

    2006-01-01

    Achieving a fourfold reduction in greenhouse gas emissions by 2050 is an ambitious and voluntary objective for France that addresses a combination of many different aspects (technical, technological, economic, social) against a backdrop of important issues and choices for public policy-makers. This document is the bilingual version of the Factor 4 group report. It discusses the Factor 4 objectives, the different proposed scenarios and the main lessons learned, the strategies to support the Factor 4 objectives (fostering changes in behavior and defining the role of public policies), the Factor 4 objective in international and European contexts (experience abroad, strategic behavior, constraints and opportunities, particularly in Europe), and recommendations. (A.L.B.)

  17. Comparative Analysis of Rote Learning on High and Low Achievers in Graduate and Undergraduate Programs

    Directory of Open Access Journals (Sweden)

    Ambreen Ahmed

    2017-06-01

    Full Text Available A survey was conducted to study the preferred learning strategies, that is, surface learning or deep learning, of undergraduate and graduate male and female students, and the impact of the preferred strategy on their academic performance. Both learning strategies help university students to score well in their examinations and to meet the demands of industry for a skilled workforce. A quantitative research method was used to determine the impact of learning strategy on academic achievement. The R-SPQ-2F questionnaire was sent to 103 students through Google Forms and hard copies using a snowball sampling technique. The results show that rote learning and academic performance are inversely related. Among high achievers, deep learning is significantly more common than among low achievers. Furthermore, a comparative analysis of learning styles between males and females showed that both preferred the deep learning strategy equally. Learning strategy is not related to students' education level, because there is no difference between the preferred learning strategies of graduate and undergraduate students.

  18. The Impacts of Budget Reductions on Indiana's Public Schools: The Impact of Budget Changes on Student Achievement, Personnel, and Class Size for Public School Corporations in the State of Indiana

    Science.gov (United States)

    Jarman, Del W.; Boyland, Lori G.

    2011-01-01

    In recent years, economic downturn and changes to Indiana's school funding have resulted in significant financial reductions in General Fund allocations for many of Indiana's public school corporations. The main purpose of this statewide study is to examine the possible impacts of these budget reductions on class size and student achievement. This…

  19. The BOS-X approach: achieving drastic cost reduction in CPV through holistic power plant level innovation

    Science.gov (United States)

    Plesniak, A.; Garboushian, V.

    2012-10-01

    In 2011, the Amonix Advanced Technology Group was awarded DOE SunShot funding in the amount of $4.5M to design a new Balance of System (BOS) architecture utilizing Amonix MegaModules™, focused on reaching the SunShot goal of $0.06-$0.08/kWhr LCOE. The project proposal presented a comprehensive re-evaluation of the cost components of a utility-scale CPV plant and identified critical areas of focus where innovation is needed to achieve cost reduction. As the world's premier manufacturer and most experienced installer of CPV power plants, Amonix is uniquely qualified to lead a rethinking of BOS architecture for CPV. The presentation will focus on the structure of the BOS-X approach, which looks for the next wave of cost reduction in CPV through evaluation of non-module subsystems and the interactions between subsystems during the lifecycle of a solar power plant. Innovation around non-module components is minimal to date because CPV companies are only now getting enough practice, through completion of large projects, to generate ideas and tests on how to improve baseline designs and processes. As CPV companies increase their installed capacity, they can utilize an approach similar to the BOS-X methodology to increase the competitiveness of their product. Through partnership with DOE, this holistic approach is expected to define a path for CPV well aligned with the goals of the SunShot Initiative.

  20. Acetogenesis in the energy-starved deep biosphere – a paradox?

    Directory of Open Access Journals (Sweden)

    Mark Alexander Lever

    2012-01-01

    Under anoxic conditions in sediments, acetogens are often thought to be outcompeted by microorganisms performing energetically more favorable metabolic pathways, such as sulfate reduction or methanogenesis. Recent evidence from deep subseafloor sediments suggesting acetogenesis in the presence of sulfate reduction and methanogenesis has called this notion into question, however. Here I argue that acetogens can successfully coexist with sulfate reducers and methanogens for multiple reasons. These include (1) substantial energy yields from most acetogenesis reactions across the wide range of conditions encountered in the subseafloor, (2) wide substrate spectra that enable niche differentiation by use of different substrates and/or pooling of energy from a broad range of energy substrates, and (3) reduced energetic cost of biosynthesis among acetogens due to use of the reductive acetyl CoA pathway for both energy production and biosynthesis, coupled with the ability to use many organic precursors to produce the key intermediate acetyl CoA. This leads to the general conclusion that, besides Gibbs free energy yields, variables such as metabolic strategy and energetic cost of biosynthesis need to be taken into account to understand microbial survival in the energy-depleted deep biosphere.

  1. The challenge of meeting Canada's greenhouse gas reduction targets

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, Larry; Chaudhry, Nikhil

    2010-09-15

    In 2007, Canada's federal government announced its medium- and long-term greenhouse gas emissions reduction plan entitled 'Turning the Corner', which proposed emission cuts of 20% below 2006 levels by 2020 and 60% to 70% below 2006 levels by 2050. A government advisory organization, the National Round Table on the Environment and the Economy, presented a set of 'fast and deep' pathways to emissions reduction through the large-scale electrification of the Canadian economy. This paper examines the likelihood of the 'fast and deep' pathways being met by considering the technical report's proposed energy systems, their associated energy sources, and the magnitude of the changes.

  2. Life Support for Deep Space and Mars

    Science.gov (United States)

    Jones, Harry W.; Hodgson, Edward W.; Kliss, Mark H.

    2014-01-01

    How should life support for deep space be developed? The International Space Station (ISS) life support system is the operational result of many decades of research and development. Long duration deep space missions, such as missions to Mars, have been expected to use matured and upgraded versions of ISS life support. Deep space life support must use the knowledge base incorporated in ISS, but it must also meet much more difficult requirements. The primary new requirement is that life support in deep space must be considerably more reliable than on ISS or anywhere in the Earth-Moon system, where emergency resupply and a quick return are possible. Due to the great distance from Earth and the long duration of deep space missions, if life support systems fail, the traditional approaches for emergency supply of oxygen and water, emergency supply of parts, and crew return to Earth or escape to a safe haven are likely infeasible. The Orbital Replacement Unit (ORU) maintenance approach used by ISS is unsuitable for deep space with ORUs as large and complex as those originally provided in ISS designs, because it minimizes opportunities for commonality of spares, requires replacement of many functional parts with each failure, and results in substantial launch mass and volume penalties. It has become impractical even for ISS after the shuttle era, resulting in the need for ad hoc repair activity at lower assembly levels with consequent crew time penalties and extended repair timelines. Less complex, more robust technical approaches may be needed to meet the difficult deep space requirements for reliability, maintainability, and reparability. Developing an entirely new life support system would neglect what has been achieved. The suggested approach is to use the ISS life support technologies as a platform to build on and to continue improving ISS subsystems while also developing new subsystems where needed to meet deep space requirements.

  3. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    Science.gov (United States)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection for wireless communication systems has attracted increasing attention due to the challenge of keeping a balance between communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the label for each attenuation-coefficient channel matrix is generated by minimizing the key performance indicator over the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues in the attenuation coefficients is learned on the training antenna systems. Finally, the trained deep convolutional neural network is used to classify the channel matrix labels of test antennas and select the optimal antenna subset. Simulation results demonstrate that our method can achieve better performance than the state-of-the-art baselines for data-driven wireless antenna selection.
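The label-generation step described above — scoring every candidate antenna subset against a key performance indicator and keeping the best — can be sketched as an exhaustive search. This is a minimal sketch, not the paper's code: the KPI used here is a simple received-power proxy (the paper's actual indicator is not specified in the abstract), and the function names are illustrative.

```python
import itertools
import random

def best_subset_label(H, k):
    """Exhaustively score all k-antenna subsets of the rows of channel
    matrix H and return the index of the best subset -- this index is the
    class label a CNN would be trained to predict from H.  KPI here is a
    received-power proxy (sum of squared gains of the selected rows)."""
    subsets = list(itertools.combinations(range(len(H)), k))
    def kpi(rows):
        return sum(h * h for i in rows for h in H[i])
    scores = [kpi(s) for s in subsets]
    return max(range(len(subsets)), key=scores.__getitem__)

# Toy example: 4 receive antennas, 2 receivers per antenna, select the best 2 antennas.
random.seed(0)
H = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]
label = best_subset_label(H, 2)  # index into the C(4,2) = 6 candidate subsets
```

A training set would pair many such channel matrices with their exhaustively computed labels; the CNN then amortizes the search at test time.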

  4. Achieving carbon emission reduction through industrial and urban symbiosis: A case of Kawasaki

    International Nuclear Information System (INIS)

    Dong, Huijuan; Ohnishi, Satoshi; Fujita, Tsuyoshi; Geng, Yong; Fujii, Minoru; Dong, Liang

    2014-01-01

    Industry and fossil fuel combustion are the main sources of urban carbon emissions. Most studies focus on reducing emissions from energy consumption and on improving energy efficiency. Material saving is also important for carbon emission reduction from a lifecycle perspective. IS (industrial symbiosis) and UrS (urban symbiosis) have been effective since both encourage byproduct exchange. However, quantitative evaluation of the carbon emission reduction from applying them is still lacking. Consequently, the purpose of this paper is to fill this gap through a case study of Kawasaki Eco-town, Japan. A hybrid LCA model was employed to evaluate the lifecycle carbon footprint. The results show that lifecycle carbon footprints with and without IS and UrS were 26.66 Mt CO2e and 30.92 Mt CO2e, respectively. Carbon emission efficiency was improved by 13.77% with the implementation of IS and UrS. The carbon emission reduction came mainly from the “iron and steel”, cement, and “paper making” industries, with figures of 2.76 Mt CO2e, 1.16 Mt CO2e and 0.34 Mt CO2e, respectively. Reuse of scrap steel, blast furnace slag and waste paper are all effective measures for promoting carbon emission reductions. Finally, policy implications on how to further promote IS and UrS are presented. - Highlights: • We evaluate the carbon emission reduction of industrial and urban symbiosis (IS/UrS). • A hybrid LCA model was used to evaluate the lifecycle carbon footprint. • Carbon emission efficiency was improved by 13.77% after applying IS/UrS. • The importance of UrS for carbon reduction is addressed in the paper

  5. DANN: a deep learning approach for annotating the pathogenicity of genetic variants.

    Science.gov (United States)

    Quang, Daniel; Chen, Yifei; Xie, Xiaohui

    2015-03-01

    Annotating genetic variants, especially non-coding variants, for the purpose of identifying pathogenic variants remains a challenge. Combined annotation-dependent depletion (CADD) is an algorithm designed to annotate both coding and non-coding variants, and has been shown to outperform other annotation algorithms. CADD trains a linear kernel support vector machine (SVM) to differentiate evolutionarily derived, likely benign, alleles from simulated, likely deleterious, variants. However, SVMs cannot capture non-linear relationships among the features, which can limit performance. To address this issue, we have developed DANN. DANN uses the same feature set and training data as CADD to train a deep neural network (DNN). DNNs can capture non-linear relationships among features and are better suited than SVMs for problems with a large number of samples and features. We exploit Compute Unified Device Architecture-compatible graphics processing units and deep learning techniques such as dropout and momentum training to accelerate the DNN training. DANN achieves about a 19% relative reduction in the error rate and about a 14% relative increase in the area under the curve (AUC) metric over CADD's SVM methodology. All data and source code are available at https://cbcl.ics.uci.edu/public_data/DANN/. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
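The two training techniques the abstract credits for accelerating the DNN — dropout and momentum — can be illustrated in isolation. This is a minimal sketch of the textbook versions (inverted dropout and classical momentum SGD), not DANN's actual implementation; the hyperparameter values are illustrative assumptions.

```python
import random

def dropout(activations, p, training=True):
    """Inverted dropout: zero each unit with probability p and scale
    survivors by 1/(1-p), so the expected activation is unchanged and
    no rescaling is needed at test time."""
    if not training or p == 0.0:
        return list(activations)
    return [0.0 if random.random() < p else a / (1.0 - p) for a in activations]

def momentum_step(params, grads, velocity, lr=0.1, mu=0.9):
    """Classical momentum update, in place: v <- mu*v - lr*g; p <- p + v.
    The velocity accumulates past gradients, smoothing the descent."""
    for i, g in enumerate(grads):
        velocity[i] = mu * velocity[i] - lr * g
        params[i] += velocity[i]

# At test time dropout is disabled, so activations pass through unchanged.
assert dropout([1.0, 2.0], p=0.5, training=False) == [1.0, 2.0]
```

In a full training loop, `dropout` would be applied to each hidden layer's activations during the forward pass, and `momentum_step` would replace a plain gradient-descent update.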

  6. A New Trend in the Management of Pediatric Deep Neck Abscess: Achievement of the Medical Treatment Alone.

    Science.gov (United States)

    Çetin, Aslı Çakır; Olgun, Yüksel; Özses, Arif; Erdağ, Taner Kemal

    2017-06-01

    Despite the traditional opinion advocating routine surgical drainage for the treatment of an abscess, case series presenting high success rates for medical therapy alone are increasing in deep neck abscesses of childhood. This research focuses on children whose deep neck abscesses fully resolved after medical treatment alone. In a retrospective study, we evaluated the medical records of 12 pediatric (<18 years old) cases diagnosed with deep neck abscess, or abscess containing suppurative lymphadenitis, and treated with medical therapy alone between 2010 and 2015, for age, gender, treatment modality, parameters related to antimicrobial agents, location of the infection, etiology, symptoms, duration of hospital stay, characteristics of the radiological and biochemical examination findings, and complications. The mean age of the 10 male and two female children was 5.9 years (range, 1-17 years). Baseline and last-control mean values of white blood cell (WBC) count, C-reactive protein, and erythrocyte sedimentation rate were 18,050/μL, 99.8 mg/L, 73.1 mm/h, and 8,166/μL, 34.1 mg/L, 35.3 mm/h, respectively. Contrast-enhanced neck computed tomography demonstrated an abscess in seven cases and an abscess containing suppurative lymphadenitis in five cases. The largest diameter of the abscess was 41 mm. All cases were given broad-spectrum empirical antibiotherapy (penicillin+metronidazole, ceftriaxone+metronidazole, or clindamycin). No medical treatment failure was experienced. Independent of age and abscess size, if the baseline WBC count is ≤25,200/μL, if only two or fewer cervical compartments are involved, if there are no complications at admission, and if the etiology is not a previous history of trauma, surgery, foreign body, or malignancy, pediatric deep neck abscess can be treated successfully with parenteral empirical broad-spectrum antibiotherapy.

  7. Motivation, Cognitive Processing and Achievement in Higher Education

    Science.gov (United States)

    Bruinsma, Marjon

    2004-01-01

    This study investigated the question of whether a student's expectancy, values and negative affect influenced their deep information processing approach and achievement at the end of the first and second academic year. Five hundred and sixty-five first-year students completed a self-report questionnaire on three different occasions. The…

  8. Cardiac and pulmonary dose reduction for tangentially irradiated breast cancer, utilizing deep inspiration breath-hold with audio-visual guidance, without compromising target coverage

    International Nuclear Information System (INIS)

    Vikstroem, Johan; Hjelstuen, Mari H.B.; Mjaaland, Ingvil; Dybvik, Kjell Ivar

    2011-01-01

    Background and purpose. Cardiac disease and pulmonary complications are documented risk factors in tangential breast irradiation. Respiratory gating radiotherapy provides a possibility to substantially reduce cardiopulmonary doses. This CT planning study quantifies the reduction of radiation doses to the heart and lung, using deep inspiration breath-hold (DIBH). Patients and methods. Seventeen patients with early breast cancer, referred for adjuvant radiotherapy, were included. For each patient two CT scans were acquired; the first during free breathing (FB) and the second during DIBH. The scans were monitored by the Varian RPM respiratory gating system. Audio coaching and visual feedback (audio-visual guidance) were used. The treatment planning of the two CT studies was performed with conformal tangential fields, focusing on good coverage (V95>98%) of the planning target volume (PTV). Dose-volume histograms were calculated and compared. Doses to the heart, left anterior descending (LAD) coronary artery, ipsilateral lung and the contralateral breast were assessed. Results. Compared to FB, the DIBH-plans obtained lower cardiac and pulmonary doses, with equal coverage of PTV. The average mean heart dose was reduced from 3.7 to 1.7 Gy and the number of patients with >5% heart volume receiving 25 Gy or more was reduced from four to one of the 17 patients. With DIBH the heart was completely out of the beam portals for ten patients, with FB this could not be achieved for any of the 17 patients. The average mean dose to the LAD coronary artery was reduced from 18.1 to 6.4 Gy. The average ipsilateral lung volume receiving more than 20 Gy was reduced from 12.2 to 10.0%. Conclusion. Respiratory gating with DIBH, utilizing audio-visual guidance, reduces cardiac and pulmonary doses for tangentially treated left sided breast cancer patients without compromising the target coverage

  10. Effectiveness of a second deep TMS in depression: a brief report.

    Science.gov (United States)

    Rosenberg, O; Isserles, M; Levkovitz, Y; Kotler, M; Zangen, A; Dannon, P N

    2011-06-01

    Deep transcranial magnetic stimulation (DTMS) is an emerging and promising treatment for major depression. In our study, we explored the effectiveness of a second antidepressant course of deep TMS in major depression. We enrolled eight patients who had previously responded well to DTMS but relapsed within 1 year in order to evaluate whether a second course of DTMS would still be effective. Eight depressive patients who relapsed after a previous successful deep TMS course expressed their wish to be treated again. Upon their request, they were recruited and treated with 20 daily sessions of DTMS at 20 Hz using the Brainsway's H1 coil. The Hamilton depression rating scale (HDRS), Hamilton anxiety rating scale (HARS) and the Beck depression inventory (BDI) were used weekly to evaluate the response to treatment. Similar to the results obtained in the first course of treatment, the second course of treatment (after relapse) induced significant reductions in HDRS, HARS and BDI scores, compared to the ratings measured prior to treatment. The magnitude of response in the second course was smaller relative to that obtained in the first course of treatment. Our results suggest that depressive patients who previously responded well to deep TMS treatment are likely to respond again. However, the slight reduction in the magnitude of the response in the second treatment raises the question of whether tolerance or resistance to this treatment may eventually develop. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. FINITE ELEMENT ANALYSIS OF DEEP BEAM UNDER DIRECT AND INDIRECT LOAD

    Directory of Open Access Journals (Sweden)

    Haleem K. Hussain

    2018-05-01

    This research studies the effect of openings in the web of deep beams loaded directly and indirectly, and the behavior of reinforced concrete deep beams with and without web reinforcement; the opening size and shear span ratio (a/d) were constant. Nonlinear analysis using the finite element method with the ANSYS release 12.0 software was used to predict the ultimate load capacity and crack propagation of reinforced concrete deep beams with openings. The adopted beam models are based on an experimental test program of reinforced concrete deep beams with and without openings, and the finite element results showed good agreement with the experiments, with only a small difference in ultimate beam capacity; the ANSYS analysis was thus fully adequate to simulate the behavior of reinforced concrete deep beams. The mid-span deflection at ultimate applied load and the inclined cracking were highly compatible with the experimental results. The model with an opening in the shear span shows a reduction in the load-carrying capacity of the beam, and adding vertical stirrups improves the ultimate load capacity of the beam.

  12. Deep Echo State Network (DeepESN): A Brief Survey

    OpenAIRE

    Gallicchio, Claudio; Micheli, Alessio

    2017-01-01

    The study of deep recurrent neural networks (RNNs) and, in particular, of deep Reservoir Computing (RC) is gaining increasing research attention in the neural networks community. The recently introduced deep Echo State Network (deepESN) model opened the way to an extremely efficient approach for designing deep neural networks for temporal data. At the same time, the study of deepESNs has shed light on the intrinsic properties of state dynamics developed by hierarchical compositions ...
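The hierarchical state dynamics at the heart of the deepESN can be sketched as a stack of fixed random leaky-integrator reservoirs, each driven by the state of the layer below; only a linear readout (omitted here) would be trained. The sizes, weight scales and leak rate below are arbitrary illustrative choices, not values from the survey.

```python
import math
import random

def make_reservoir(n_in, n, scale=0.5, seed=0):
    """Fixed random input and recurrent weight matrices for one layer."""
    rng = random.Random(seed)
    w_in = [[rng.uniform(-scale, scale) for _ in range(n_in)] for _ in range(n)]
    w = [[rng.uniform(-scale, scale) for _ in range(n)] for _ in range(n)]
    return w_in, w

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def deepesn_run(u_seq, reservoirs, leak=0.5):
    """Leaky-integrator ESN update applied layer by layer:
       x_l <- (1 - leak) * x_l + leak * tanh(W_in @ inp + W @ x_l),
    where inp is the external input for layer 0 and the state of
    layer l-1 for the higher layers.  Weights stay untrained."""
    states = [[0.0] * len(w) for (_, w) in reservoirs]
    for u in u_seq:
        inp = u
        for l, (w_in, w) in enumerate(reservoirs):
            pre = [i + r for i, r in zip(matvec(w_in, inp), matvec(w, states[l]))]
            states[l] = [(1 - leak) * x + leak * math.tanh(p)
                         for x, p in zip(states[l], pre)]
            inp = states[l]
    return states

# Two-layer stack: scalar input series -> 4-unit reservoir -> 4-unit reservoir.
res = [make_reservoir(1, 4, seed=0), make_reservoir(4, 4, seed=1)]
final = deepesn_run([[0.5], [1.0], [-0.3]], res)
```

In a complete model, the concatenated layer states would feed a ridge-regression readout, which is the only trained component in reservoir computing.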

  13. A Comprehensive Overview of CO2 Flow Behaviour in Deep Coal Seams

    Directory of Open Access Journals (Sweden)

    Mandadige Samintha Anne Perera

    2018-04-01

    Although enhanced coal bed methane recovery (ECBM) and CO2 sequestration are effective approaches for achieving lower and safer CO2 levels in the atmosphere, the effectiveness of CO2 storage is greatly influenced by the flow ability of the injected CO2 through the coal seam. A precise understanding of CO2 flow behaviour is necessary due to the various complexities generated in coal seams upon CO2 injection. This paper aims to provide a comprehensive overview of CO2 flow behaviour in deep coal seams, specifically addressing the permeability alterations associated with different in situ conditions. The low-permeability nature of natural coal seams has a significant impact on the CO2 sequestration process. One of the major causes of this low permeability is the high effective stress acting on the seams, which reduces the pore space available for fluid movement and thus impairs flow capability. Further, deep coal seams are often water-saturated, where moisture acts as a barrier to fluid movement and thus reduces seam permeability. Although the high temperatures at depth cause thermal expansion of the coal matrix, reducing permeability, extremely high temperatures may create thermal cracks, resulting in permeability enhancement. Deep coal seams preferred for CO2 sequestration are generally high-rank coals, as they have been subjected to greater pressure and temperature variations over a long period of time, which confirms the low-permeability nature of such seams. The resulting extremely low CO2 permeability creates serious issues in large-scale CO2 sequestration/ECBM projects, as critically high injection pressures are required to achieve sufficient CO2 injection into the coal seam. The situation becomes worse when CO2 is injected into such coal seams, because CO2 movement in the coal seam creates a significant influence on the natural permeability of the seams through CO2

  14. To Master or Perform? Exploring Relations between Achievement Goals and Conceptual Change Learning

    Science.gov (United States)

    Ranellucci, John; Muis, Krista R.; Duffy, Melissa; Wang, Xihui; Sampasivam, Lavanya; Franco, Gina M.

    2013-01-01

    Background: Research is needed to explore conceptual change in relation to achievement goal orientations and depth of processing. Aims: To address this need, we examined relations between achievement goals, use of deep versus shallow processing strategies, and conceptual change learning using a think-aloud protocol. Sample and Method:…

  15. Two case studies on the origin of aqueous sulphate in deep crystalline rocks

    International Nuclear Information System (INIS)

    Michelot, J.L.; Fontes, J.C.

    1987-01-01

    The paper reports preliminary results obtained from studies in Central Sweden (Stripa) and in Northern Switzerland (Boettstein). The isotopic compositions (³⁴S, ¹⁸O) of dissolved sulphates in shallow and deep groundwaters from the Stripa test site show that (1) the origins of the salinity in the shallow and in the deep groundwaters are probably different, (2) the low sulphate content of the waters collected from the upper part of the deep aquifer system could be derived from the shallow aqueous sulphate through bacterial reduction, and (3) a deeper bulk of sulphate can be identified. After examining several hypotheses, a Permian or Triassic origin is attributed to this deep sulphate. Boettstein is the first borehole drilled under the NAGRA (Swiss National Co-operative for the Storage of Radioactive Wastes) programme. In the ³⁴S versus ¹⁸O diagram, most of the points representing samples collected at different depths (from apparently different water bodies) lie along a straight line. It seems that this line can be neither a reduction line nor a precipitation line (gypsum or anhydrite). It is thus interpreted as a mixing line. The end members of this mixing line are still unknown. However, a deep brine is present at the bottom of the system, probably related to brines circulating in the Permian channel found at the same depth a few kilometres away. A working hypothesis involving this deep brine as a source for both end members of the mixing, through two different processes, is presented, along with the problem of possible connections between the different water bodies. (author). 16 refs, 5 figs, 2 tabs

  16. Exploring Ocean Animal Trajectory Pattern via Deep Learning

    KAUST Repository

    Wang, Su

    2016-01-01

    We trained a combined deep convolutional neural network to predict seals’ age (3 categories) and gender (2 categories). The entire dataset contains 110 seals with around 489 thousand location records. Most records are continuous and measured at a fixed step. We created five convolutional layers for feature representation and established two fully connected structures as the age and gender classifiers, respectively. Each classifier consists of three fully connected layers. Taking seals’ latitude and longitude as input, the entire deep learning network, which includes 780,000 neurons and 2,097,000 parameters, reaches a 70.72% accuracy rate for predicting seals’ age and simultaneously achieves 79.95% for gender estimation.

  18. Biomechanics Strategies for Space Closure in Deep Overbite

    Directory of Open Access Journals (Sweden)

    Harryanto Wijaya

    2013-07-01

    Space closure is an interesting aspect of orthodontic treatment related to principles of biomechanics. It should be tailored individually, based on the patient’s diagnosis and treatment plan. Understanding the biomechanical basis of space closure helps achieve the desired treatment objective. Overbite deepening and loss of posterior anchorage are the two most common unwanted side effects in space closure. Conventionally, correction of the overbite must be done before space closure, resulting in longer treatment. Application of proper space closure biomechanics strategies is necessary to achieve the desired treatment outcome. This case report aims to show space closure biomechanics strategies that effectively control the overbite as well as posterior anchorage in deep overbite patients without increasing treatment time. Two patients who presented with class II division 1 malocclusion were treated with fixed orthodontic appliances. The primary strategies included extraction space closure on a segmented arch employing two-step space closure, namely single canine retraction simultaneously with incisor intrusion, followed by en-masse retraction of the four incisors using the differential moment concept. These strategies successfully closed the space, corrected the deep overbite and controlled posterior anchorage simultaneously, so that treatment time was shortened. The biomechanics strategies utilized were effective in achieving the desired treatment outcome.

  19. Cardiac dose reduction with deep inspiration breath hold for left-sided breast cancer radiotherapy patients with and without regional nodal irradiation

    International Nuclear Information System (INIS)

    Yeung, Rosanna; Conroy, Leigh; Long, Karen; Walrath, Daphne; Li, Haocheng; Smith, Wendy; Hudson, Alana; Phan, Tien

    2015-01-01

    Deep inspiration breath hold (DIBH) reduces heart and left anterior descending artery (LAD) dose during left-sided breast radiation therapy (RT); however, there is limited information about which patients derive the most benefit from DIBH. The primary objective of this study was to determine which patients benefit the most from DIBH by comparing the percent reduction in mean cardiac dose conferred by DIBH for patients treated with whole breast RT ± boost (WBRT) versus those receiving breast/chest wall plus regional nodal irradiation, including internal mammary chain (IMC) nodes (B/CWRT + RNI), using a modified wide tangent technique. A secondary objective was to determine whether DIBH was required to meet a proposed heart dose constraint of Dmean < 4 Gy in these two cohorts. Twenty consecutive patients underwent CT simulation both free breathing (FB) and DIBH. Patients were grouped into two cohorts: WBRT (n = 11) and B/CWRT + RNI (n = 9). 3D-conformal plans were developed and FB was compared to DIBH for each cohort using Wilcoxon signed-rank tests for continuous variables and McNemar’s test for discrete variables. The percent relative reduction conferred by DIBH in mean heart and LAD dose, as well as lung V20, was compared between the two cohorts using Wilcoxon rank-sum tests. The significance level was set at 0.05, with Bonferroni correction for multiple testing. All patients had comparable target coverage on DIBH and FB. DIBH statistically significantly reduced mean heart and LAD dose for both cohorts. The percent reduction in mean heart and LAD dose with DIBH was significantly larger in the B/CWRT + RNI cohort than in the WBRT group (relative reduction in mean heart and LAD dose: 55.9% and 72.1% versus 29.2% and 43.5%, p < 0.02). All patients in the WBRT group and five patients (56%) in the B/CWRT + RNI group met heart Dmean < 4 Gy with FB. All patients met this constraint with DIBH. All patients receiving WBRT met heart Dmean < 4 Gy on FB, while only slightly over

  20. Realization of Chinese word segmentation based on deep learning method

    Science.gov (United States)

    Wang, Xuefei; Wang, Mingjiang; Zhang, Qiquan

    2017-08-01

    In recent years, with the rapid development of deep learning, it has been widely used in the field of natural language processing. In this paper, I use deep learning to achieve Chinese word segmentation with a large-scale corpus, eliminating the need to construct additional manual features. The first step is to process the corpus: word2vec is used to obtain embeddings, with each character represented by a 50-dimensional vector. The embedding features are then fed to a bidirectional LSTM, a linear layer is added on top of the hidden-layer output, and a CRF layer completes the model implemented in this paper. Experimental results on the 2014 People's Daily corpus show that the method achieves satisfactory accuracy.
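Before the embedding/BiLSTM/CRF pipeline can be trained, the segmented corpus must be converted into per-character tags — this is how word segmentation becomes a sequence-labeling problem. The sketch below assumes the commonly used BMES (begin/middle/end/single) scheme; the paper's exact tag set is not stated in the abstract.

```python
def words_to_bmes(words):
    """Convert a segmented sentence (a list of words) into per-character
    BMES tags: B = word-initial, M = word-internal, E = word-final,
    S = single-character word.  A BiLSTM-CRF predicts one tag per character."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def bmes_to_words(chars, tags):
    """Recover the segmentation from a character sequence and predicted tags."""
    words, cur = [], ""
    for c, t in zip(chars, tags):
        cur += c
        if t in ("S", "E"):
            words.append(cur)
            cur = ""
    if cur:  # tolerate a truncated tag sequence
        words.append(cur)
    return words
```

At inference time the CRF layer outputs the tag sequence, and `bmes_to_words` turns it back into a segmented sentence.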

  1. Detecting atrial fibrillation by deep convolutional neural networks.

    Science.gov (United States)

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we propose a novel method with high reliability and accuracy for AF detection via deep learning. The short-time Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models, corresponding to the STFT output and the SWT output, were developed. Our new method requires neither detection of P or R peaks nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
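The STFT front end — turning a 1-D ECG segment into the 2-D time-frequency matrix the CNN consumes — can be sketched with a naive windowed DFT. A real pipeline would use an FFT library and a tapering window; the window length, hop and sampling rate below are illustrative assumptions, not the paper's settings.

```python
import math

def stft_magnitude(signal, win=32, hop=16):
    """Naive short-time Fourier transform: slide a rectangular window
    over the signal and take DFT magnitudes, yielding the 2-D
    time-frequency matrix fed to a CNN.  Only non-negative frequency
    bins are kept (the input is real-valued)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):
            re = sum(s * math.cos(-2 * math.pi * k * n / win)
                     for n, s in enumerate(seg))
            im = sum(s * math.sin(-2 * math.pi * k * n / win)
                     for n, s in enumerate(seg))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames  # shape: (num_frames, win // 2 + 1)

# A 5 s segment at an assumed 64 Hz sampling rate has 320 samples.
# Synthetic tone with exactly 4 cycles per 32-sample window.
sig = [math.sin(2 * math.pi * 4 * t / 32) for t in range(320)]
spec = stft_magnitude(sig)
```

The pure tone concentrates its energy in frequency bin 4 of every frame, which is the kind of localized time-frequency structure the CNN learns to discriminate.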

  2. Formability of dual-phase steels in deep drawing of rectangular parts: Influence of blank thickness and die radius

    Science.gov (United States)

    López, Ana María Camacho; Regueras, José María Gutiérrez

    2017-10-01

    The new goals of the automotive industry related to environmental concerns, the reduction of fuel emissions and security requirements have driven new designs whose main objective is reducing weight. This can be achieved through new materials such as nano-structured materials, fibre-reinforced composites or steels with higher strength, among others. Within the last group, the Advanced High-Strength Steels (AHSS) and, particularly, dual-phase steels are in a predominant position. However, despite their special characteristics, they present manufacturability issues such as springback, splits and cracks, among others. This work focuses on the deep drawing process of rectangular shapes, a very common forming operation used to manufacture several automotive parts such as oil pans and cases. Two of the main parameters in this process that directly affect the characteristics of the final product are blank thickness (t) and die radius (Rd). The influence of t and Rd on the formability of dual-phase steels has been analysed, considering values typically used in industrial manufacturing for a wide range of dual-phase steels, using finite element modelling and simulation; concretely, the influence of these parameters on the percentage of thickness reduction pt(%), an important quantity for parts manufactured by deep drawing operations, which affects their integrity and service behaviour. The Modified Mohr-Coulomb criterion (MMC) has been used to obtain Fracture Forming Limit Diagrams (FFLD) that take into account an important failure mode in dual-phase steels: shear fracture. Finally, a relation between the thickness reduction percentage and the studied parameters has been established for dual-phase steels, obtaining a collection of equations based on the Design of Experiments (DOE) technique, which can be useful to predict approximate results.

  3. Why & When Deep Learning Works: Looking Inside Deep Learnings

    OpenAIRE

    Ronen, Ronny

    2017-01-01

    The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research from its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The outp...

  4. Residual Deep Convolutional Neural Network Predicts MGMT Methylation Status.

    Science.gov (United States)

    Korfiatis, Panagiotis; Kline, Timothy L; Lachance, Daniel H; Parney, Ian F; Buckner, Jan C; Erickson, Bradley J

    2017-10-01

    Predicting the methylation status of the O6-methylguanine methyltransferase (MGMT) gene from MRI is of high importance since it is a predictor of response and prognosis in brain tumors. In this study, we compare three different residual deep neural network (ResNet) architectures to evaluate their ability to predict MGMT methylation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50 layers) architecture was the best performing model, achieving an accuracy of 94.90% (+/- 3.92%) on the test set (classification of a slice as no tumor, methylated MGMT, or non-methylated). ResNet34 (34 layers) achieved 80.72% (+/- 13.61%), while ResNet18 (18 layers) accuracy was 76.75% (+/- 20.67%). ResNet50 performance was statistically significantly better than both the ResNet18 and ResNet34 architectures, demonstrating that deep neural architectures can be used to predict molecular biomarkers from routine medical images.
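
    What distinguishes these ResNet variants from plain CNNs is the identity shortcut: each block learns a residual that is added back to its input. A minimal numpy sketch of the idea, with illustrative dimensions and random weights (not the study's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """out = relu(x + F(x)): the skip connection lets signal bypass F."""
    return relu(x + relu(x @ w1) @ w2)

x = rng.normal(size=(1, 16))
w1 = rng.normal(size=(16, 16)) * 0.1
w2 = rng.normal(size=(16, 16)) * 0.1
out = residual_block(x, w1, w2)

# With zero weights the block reduces to the identity (after relu),
# which is why very deep stacks of such blocks remain trainable.
identity = residual_block(x, np.zeros((16, 16)), np.zeros((16, 16)))
print(np.allclose(identity, relu(x)))  # -> True
```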

  5. Application of Moessbauer spectroscopy to the study of neptunium adsorbed on deep-sea sediments

    International Nuclear Information System (INIS)

    Bennett, B.A.; Rees, L.V.C.

    1987-03-01

    A Neptunium Moessbauer spectrometer (the first in Great Britain) was constructed and the Moessbauer spectra of NpAl Laves phase alloy obtained. Neptunium was sorbed onto a calcareous deep-sea sediment from sea water, using a successive-loading technique. Sorption appeared to be by an equilibrium reaction, and because of the low solubility of neptunium in seawater, this meant that the maximum loading that could be achieved was 8 mg 237Np/g sediment. This proved to be an adequate concentration for Moessbauer measurements and a Moessbauer spectrum was obtained. This showed that most of the neptunium was in exchange sites and not present as precipitates of neptunium compounds. It was probably in the 4+ state, indicating that reduction had occurred during sorption. This work has demonstrated that Moessbauer spectroscopy has great potential as an aid to understanding the mechanism of actinide sorption in natural systems. (author)

  6. National environmental targets and international emission reduction instruments

    International Nuclear Information System (INIS)

    Morthorst, P.E.

    2003-01-01

    According to the agreed burden sharing within the European Union, the overall EU emission reduction target agreed in the Kyoto Protocol is converted into national greenhouse gas reduction targets for each of the member states. In parallel with national emission reduction initiatives, common EU policies for emission reductions are considered. Currently under discussion is the introduction of a market for tradable permits for CO2 emissions to achieve emission reductions within the power industry and other energy-intensive industries. In parallel with this, markets for green certificates to deploy renewable energy technologies seem to be appearing in a number of countries, among these Denmark, Italy, Sweden, Belgium (Flanders), England and Australia. Although these national initiatives for a green certificate market are fairly different, they could be a starting point for establishing a common EU certificate market. But interactions between national targets for greenhouse gas emissions and these international instruments for emission reduction are not a trivial matter, especially when seen in relation to the possible contributions of these instruments to achieving national GHG reduction targets. The paper is split into three parts, all taking a liberalised power market as the starting point. The first part discusses the consequences of a general deployment of renewable energy technologies, using planning initiatives or national promotion schemes (feed-in tariffs). In the second part, an international green certificate market is introduced into the liberalised power market context, substituting other national promotion schemes. Finally, in the third part, a combination of an international green certificate market (TGC) and an international emission-trading scheme for CO2 is analysed within the liberalised international power market set-up. The main conclusion is that neither the use of national renewable support schemes nor the introduction of a TGC-market into a liberalised

  7. Achieving deep cuts in the carbon intensity of U.S. automobile transportation by 2050: complementary roles for electricity and biofuels.

    Science.gov (United States)

    Scown, Corinne D; Taptich, Michael; Horvath, Arpad; McKone, Thomas E; Nazaroff, William W

    2013-08-20

    Passenger cars in the United States (U.S.) rely primarily on petroleum-derived fuels and contribute the majority of U.S. transportation-related greenhouse gas (GHG) emissions. Electricity and biofuels are two promising alternatives for reducing both the carbon intensity of automotive transportation and U.S. reliance on imported oil. However, as standalone solutions, the biofuels option is limited by land availability and the electricity option is limited by market adoption rates and technical challenges. This paper explores potential GHG emissions reductions attainable in the United States through 2050 with a county-level scenario analysis that combines ambitious plug-in hybrid electric vehicle (PHEV) adoption rates with scale-up of cellulosic ethanol production. With PHEVs achieving a 58% share of the passenger car fleet by 2050, phasing out most corn ethanol and limiting cellulosic ethanol feedstocks to sustainably produced crop residues and dedicated crops, we project that the United States could supply the liquid fuels needed for the automobile fleet with an average blend of 80% ethanol (by volume) and 20% gasoline. If electricity for PHEV charging could be supplied by a combination of renewables and natural-gas combined-cycle power plants, the carbon intensity of automotive transport would be 79 g CO2e per vehicle-kilometer traveled, a 71% reduction relative to 2013.
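
    The headline figures can be cross-checked with simple arithmetic: if 79 g CO2e per vehicle-kilometer represents a 71% reduction relative to 2013, the implied 2013 baseline intensity follows directly. The baseline value below is derived from those two numbers, not quoted from the paper:

```python
final_intensity = 79.0   # g CO2e per vehicle-km in the 2050 scenario
reduction = 0.71         # stated reduction relative to 2013

baseline = final_intensity / (1 - reduction)     # implied 2013 intensity
print(round(baseline))                           # -> 272 g CO2e per vehicle-km
print(round(1 - final_intensity / baseline, 2))  # -> 0.71, consistent with the stated cut
```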

  8. Combination of deep eutectic solvent and ionic liquid to improve biocatalytic reduction of 2-octanone with Acetobacter pasteurianus GIM1.158 cell

    Science.gov (United States)

    Xu, Pei; Du, Peng-Xuan; Zong, Min-Hua; Li, Ning; Lou, Wen-Yong

    2016-05-01

    The efficient anti-Prelog asymmetric reduction of 2-octanone with Acetobacter pasteurianus GIM1.158 cells was successfully performed in a biphasic system consisting of a deep eutectic solvent (DES) and a water-immiscible ionic liquid (IL). Various DESs exerted different effects on the synthesis of (R)-2-octanol. Choline chloride/ethylene glycol (ChCl/EG) exhibited good biocompatibility and could moderately increase cell membrane permeability, leading to better results. Adding ChCl/EG increased the optimal substrate concentration from 40 mM to 60 mM, and the product e.e. remained above 99.9%. To further improve the reaction efficiency, water-immiscible ILs were introduced into the reaction system, and an enhanced substrate concentration (1.5 M) was observed with C4MIM·PF6. Additionally, the cells showed good operational stability in the reaction system. Thus, the efficient biocatalytic process with ChCl/EG and C4MIM·PF6 is promising for the efficient synthesis of (R)-2-octanol.

  9. Low-cost, high-precision micro-lensed optical fiber providing deep-micrometer to deep-nanometer-level light focusing.

    Science.gov (United States)

    Wen, Sy-Bor; Sundaram, Vijay M; McBride, Daniel; Yang, Yu

    2016-04-15

    A new type of micro-lensed optical fiber, formed by stacking appropriate high-refractive-index microspheres at designed locations with respect to the cleaved end of an optical fiber, is numerically and experimentally demonstrated. This new type of micro-lensed optical fiber can be constructed precisely, at low cost and high speed. Deep-micrometer-scale and submicrometer-scale far-field light spots can be achieved when the optical fibers are multimode and single mode, respectively. By placing an appropriate teardrop dielectric nanoscale scatterer at the far-field spot of this new type of micro-lensed optical fiber, a deep-nanometer near-field spot can also be generated with high intensity and minimal joule heating, which is valuable for high-speed, high-resolution, high-power nanoscale detection compared with traditional near-field optical fibers containing a significant portion of metallic material.

  10. 40Ar/39Ar studies of deep sea igneous rocks

    International Nuclear Information System (INIS)

    Seidemann, D.

    1978-01-01

    An attempt to date deep-sea igneous rocks reliably was made using the 40Ar/39Ar dating technique. It was determined that the 40Ar/39Ar incremental release technique could not be used to eliminate the effects of excess radiogenic 40Ar in deep-sea basalts. Excess 40Ar is released throughout the extraction temperature range and cannot be distinguished from 40Ar generated by in situ 40K decay. The problem of the reduction of K-Ar dates associated with sea water alteration of deep-sea igneous rocks could not be resolved using the 40Ar/39Ar technique. Irradiation-induced 39Ar loss and/or redistribution in fine-grained and altered igneous rocks results in age spectra that are artifacts of the experimental procedure and only partly reflect the geologic history of the sample. Therefore, caution must be used in attributing significance to age spectra of fine-grained and altered deep-sea igneous rocks. Effects of 39Ar recoil are not important for either medium-grained (or coarser) deep-sea rocks or glasses because only a small fraction of the 39Ar recoils to channels of easy diffusion, such as intergranular boundaries or cracks, during the irradiation. (author)

  11. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification

    Directory of Open Access Journals (Sweden)

    Srdjan Sladojevic

    2016-01-01

    Full Text Available The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, using deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant diseases out of healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images to create a database assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98% for separate class tests, and 96.3% on average.

  12. Deep neural networks to enable real-time multimessenger astrophysics

    Science.gov (United States)

    George, Daniel; Huerta, E. A.

    2018-02-01

    Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering, a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and to estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering to whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves performance similar to matched filtering while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches for gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.
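
    Deep Filtering is benchmarked against matched filtering, the optimal linear filter in Gaussian noise: correlate the data stream with a known template and look for a peak. A toy numpy illustration of that baseline (the template shape, noise level, and injection point are all arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(42)

# A known waveform template (a toy "chirp") hidden in Gaussian noise.
template = np.sin(2 * np.pi * np.linspace(0, 4, 128) ** 2)
data = rng.normal(scale=0.5, size=1024)
inject_at = 600
data[inject_at:inject_at + len(template)] += template

# Matched filter: slide the template along the data and correlate.
snr = np.correlate(data, template, mode="valid")
print(int(snr.argmax()))  # peaks near the injection index (600)
```

    The correlation peak both detects the signal and localizes it in time; the appeal of a trained network is that it approaches this performance at a small fraction of the computational cost when many template shapes must be searched.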

  13. Supervised deep learning embeddings for the prediction of cervical cancer diagnosis

    Directory of Open Access Journals (Sweden)

    Kelwin Fernandes

    2018-05-01

    Full Text Available Cervical cancer remains a significant cause of mortality around the world, even though it can be prevented and cured by removing affected tissues at an early stage. Providing universal and efficient access to cervical screening programs is a challenge that requires identifying vulnerable individuals in the population, among other steps. In this work, we present a computationally automated strategy for predicting the outcome of a patient's biopsy, given risk patterns from individual medical records. We propose a machine learning technique that allows a joint and fully supervised optimization of dimensionality reduction and classification models. We also build a model able to highlight relevant properties in the low-dimensional space, to ease the classification of patients. We instantiated the proposed approach with deep learning architectures and achieved accurate prediction results (top area under the curve AUC = 0.6875), which outperform previously developed methods such as denoising autoencoders. Additionally, we explored some clinical findings from the embedding spaces, and we validated them through the medical literature, making them reliable for physicians and biomedical researchers.

  14. Dasatinib rapidly induces deep molecular response in chronic-phase chronic myeloid leukemia patients who achieved major molecular response with detectable levels of BCR-ABL1 transcripts by imatinib therapy.

    Science.gov (United States)

    Shiseki, Masayuki; Yoshida, Chikashi; Takezako, Naoki; Ohwada, Akira; Kumagai, Takashi; Nishiwaki, Kaichi; Horikoshi, Akira; Fukuda, Tetsuya; Takano, Hina; Kouzai, Yasuji; Tanaka, Junji; Morita, Satoshi; Sakamoto, Junichi; Sakamaki, Hisashi; Inokuchi, Koiti

    2017-10-01

    With the introduction of imatinib, a first-generation tyrosine kinase inhibitor (TKI) to inhibit BCR-ABL1 kinase, the outcome of chronic-phase chronic myeloid leukemia (CP-CML) has improved dramatically. However, only a small proportion of CP-CML patients subsequently achieve a deep molecular response (DMR) with imatinib. Dasatinib, a second-generation TKI, is more potent than imatinib in the inhibition of BCR-ABL1 tyrosine kinase in vitro and more effective in CP-CML patients who do not achieve an optimal response with imatinib treatment. In the present study, we attempted to investigate whether switching the treatment from imatinib to dasatinib can induce DMR in 16 CP-CML patients treated with imatinib for at least two years who achieved a major molecular response (MMR) with detectable levels of BCR-ABL1 transcripts. The rates of achievement of DMR at 1, 3, 6 and 12 months after switching to dasatinib treatment in the 16 patients were 44% (7/16), 56% (9/16), 63% (10/16) and 75% (12/16), respectively. The cumulative rate of achieving DMR at 12 months from initiation of dasatinib therapy was 93.8% (15/16). The proportion of natural killer cells and cytotoxic T cells in peripheral lymphocytes increased after switching to dasatinib. In contrast, the proportion of regulatory T cells decreased during treatment. The safety profile of dasatinib was consistent with previous studies. Switching to dasatinib would be a therapeutic option for CP-CML patients who achieved MMR but not DMR by imatinib, especially for patients who wish to discontinue TKI therapy.

  15. Early diagenesis in the sediments of the Congo deep-sea fan dominated by massive terrigenous deposits: Part II - Iron-sulfur coupling

    Science.gov (United States)

    Taillefert, Martial; Beckler, Jordon S.; Cathalot, Cécile; Michalopoulos, Panagiotis; Corvaisier, Rudolph; Kiriazis, Nicole; Caprais, Jean-Claude; Pastor, Lucie; Rabouille, Christophe

    2017-08-01

    Deep-sea fans are well-known depot centers for organic carbon that should promote sulfate reduction. At the same time, the high rates of deposition of unconsolidated metal oxides of terrigenous origin may also promote metal-reducing microbial activity. To investigate the possible coupling between the iron and sulfur cycles in these environments, shallow sediment cores from the Congo River deep-sea fan (about 5000 m depth) were profiled using a combination of geochemical methods. Interestingly, metal reduction dominated suboxic carbon remineralization processes in most of these sediments, while dissolved sulfide was absent. In some 'hotspot' patches, however, sulfate reduction produced large sulfide concentrations which supported chemosynthetic benthic megafauna. These environments were characterized by sharp geochemical boundaries compared to the iron-rich background environment, suggesting that FeS precipitation efficiently titrated iron and sulfide from the pore waters. A companion study demonstrated that methanogenesis was active in the deep sediment layers of these patchy ecosystems, suggesting that sulfate reduction was promoted by alternative anaerobic processes. These highly reduced habitats could be fueled by discrete, excess inputs of highly labile natural organic matter from Congo River turbidites or by exhumation of buried sulfide during channel flank erosion and slumping. Sulfidic conditions may be maintained by the mineralization of decomposition products from local benthic macrofauna or bacterial symbionts, or by the production of more crystalline Fe(III) oxide phases that are less thermodynamically favorable than sulfate reduction in these bioturbated sediments. Overall, the iron and sulfur biogeochemical cycling in this environment is unique and much more similar to that of a coastal ecosystem than of a deep-sea environment.

  16. A sparse autoencoder-based deep neural network for protein solvent accessibility and contact number prediction.

    Science.gov (United States)

    Deng, Lei; Fan, Chao; Zeng, Zhiwen

    2017-12-28

    Direct prediction of the three-dimensional (3D) structures of proteins from one-dimensional (1D) sequences is a challenging problem. Significant structural characteristics such as solvent accessibility and contact number are essential for deriving restraints in modeling protein folding and protein 3D structure. Thus, accurately predicting these features is a critical step for 3D protein structure building. In this study, we present DeepSacon, a computational method that can effectively predict protein solvent accessibility and contact number using a deep neural network built on a stacked autoencoder and a dropout method. The results demonstrate that our proposed DeepSacon achieves a significant improvement in prediction quality compared with the state-of-the-art methods. We obtain 0.70 three-state accuracy for solvent accessibility, and 0.33 15-state accuracy and 0.74 Pearson Correlation Coefficient (PCC) for the contact number, on the 5729 monomeric soluble globular protein dataset. We also evaluate the performance on the CASP11 benchmark dataset, where DeepSacon achieves 0.68 three-state accuracy and 0.69 PCC for solvent accessibility and contact number, respectively. We have shown that DeepSacon can reliably predict solvent accessibility and contact number with a stacked sparse autoencoder and a dropout approach.
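
    Of the two regularization ingredients, dropout is the simplest to sketch: at training time, random units are zeroed and the survivors rescaled so the expected activation is unchanged. A generic numpy illustration (inverted dropout; this is the standard technique, not DeepSacon's own code):

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: zero units with probability p, rescale the rest."""
    if not training:
        return activations                     # no-op at prediction time
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)      # rescaling keeps E[output] = input

h = np.ones(10000)
dropped = dropout(h, p=0.3)
print(round(float(dropped.mean()), 1))         # -> 1.0 (expectation preserved)
```

    Because the rescaling happens at training time, prediction needs no correction factor, which keeps the trained network's forward pass unchanged.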

  17. Framework for the analysis of the low-carbon scenario 2020 to achieve the national carbon Emissions reduction target: Focused on educational facilities

    International Nuclear Information System (INIS)

    Koo, Choongwan; Kim, Hyunjoong; Hong, Taehoon

    2014-01-01

    Since the increase in greenhouse gas emissions has increased the global warming potential, an international agreement on a carbon emissions reduction target (CERT) was formulated in the Kyoto Protocol (1997). This study aimed to develop a framework for the analysis of the low-carbon scenario 2020 to achieve the national CERT. To verify the feasibility of the proposed framework, educational facilities were used for a case study. This study was conducted in six steps: (i) selection of the target school; (ii) establishment of the reference model for the target school; (iii) energy consumption pattern analysis by target school; (iv) establishment of the energy retrofit model for the target school; (v) economic and environmental assessment through life cycle cost and life cycle CO2 analysis; and (vi) establishment of the low-carbon scenario in 2020 to achieve the national CERT. This study can help facility managers or policymakers establish the optimal retrofit strategy within a limited budget from a short-term perspective and the low-carbon scenario 2020 to achieve the national CERT from a long-term perspective. The proposed framework could also be applied to any other building type or country in the global environment.

  18. How to improve healthcare? Identify, nurture and embed individuals and teams with "deep smarts".

    Science.gov (United States)

    Eljiz, Kathy; Greenfield, David; Molineux, John; Sloan, Terry

    2018-03-19

    Purpose Unlocking and transferring skills and capabilities from individuals to the teams they work within, and across, is the key to positive organisational development and improved patient care. Using the "deep smarts" model, the purpose of this paper is to examine these issues. Design/methodology/approach The "deep smarts" model is described, reviewed and proposed as a way of transferring knowledge and capabilities within healthcare organisations. Findings Effective healthcare delivery is achieved through, and continues to require, integrative care involving numerous, dispersed service providers. In the space of overlapping organisational boundaries, there is a need for "deep smarts" people who act as "boundary spanners". These are critical integrative, networking roles employing clinical, organisational and people skills across multiple settings. Research limitations/implications Studies evaluating the barriers and enablers to the application of the deep smarts model and the 13 knowledge development strategies proposed are required. Such future research will empirically ground our understanding of organisational development in modern, complex healthcare settings. Practical implications An organisation with "deep smarts" people - in managerial, auxiliary and clinical positions - has a greater capacity for integration and for achieving improved patient-centred care. Originality/value In total, 13 developmental strategies to transfer individual capabilities into organisational capability are proposed. These strategies are applicable to the different contexts and challenges faced by individuals and teams in complex healthcare organisations.

  19. Achievement Motivation: A Rational Approach to Psychological Education

    Science.gov (United States)

    Smith, Robert L.; Troth, William A.

    1975-01-01

    Investigated the achievement motivation training component of psychological education. The subjects were 54 late-adolescent pupils. The experimental training program had as its objectives an increase in academic achievement motivation, internal feelings of control, and school performance, and a reduction of test anxiety. Results indicated…

  20. Classification of ECG beats using deep belief network and active learning.

    Science.gov (United States)

    G, Sayantan; T, Kien P; V, Kadambari K

    2018-04-12

    A new semi-supervised approach based on deep learning and active learning for the classification of electrocardiogram (ECG) signals is proposed. The objective of the proposed work is to model a scientific method for the classification of cardiac irregularities using electrocardiogram beats. The model follows the Association for the Advancement of Medical Instrumentation (AAMI) standards and consists of three phases. In phase I, a feature representation of the ECG is learnt using a Gaussian-Bernoulli deep belief network, followed by linear support vector machine (SVM) training in the consecutive phase. It yields three deep models which are based on the AAMI-defined classes, namely N, V, S, and F. In the last phase, a query generator is introduced to interact with the expert to label a few beats to improve accuracy and sensitivity. The proposed approach shows significant improvement in accuracy with minimal queries posed to the expert and fast online training, as tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database (SVDB). With 100 queries labeled by the expert in phase III, the method achieves an accuracy of 99.5% in "S" versus all classifications (SVEB) and 99.4% accuracy in "V" versus all classifications (VEB) on the MIT-BIH Arrhythmia Database. Similarly, an accuracy of 97.5% for SVEB and 98.6% for VEB is achieved on the SVDB database. Graphical Abstract: Deep belief network augmented by active learning for efficient prediction of arrhythmia.
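
    The phase III query generator can be illustrated with the standard uncertainty-sampling heuristic from active learning: ask the expert to label the beats whose predicted class probabilities have the highest entropy. This is a generic sketch under that assumption; the paper's own query strategy may differ, and the probabilities below are illustrative:

```python
import numpy as np

def select_queries(probs, n_queries=3):
    """Pick the samples the classifier is least sure about.

    probs: (n_samples, n_classes) predicted class probabilities.
    Returns indices of the n_queries highest-entropy samples.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:n_queries].tolist()

# Four beats scored over the AAMI classes N, S, V, F.
probs = np.array([[0.97, 0.01, 0.01, 0.01],   # confident "N"
                  [0.25, 0.25, 0.25, 0.25],   # maximally uncertain
                  [0.40, 0.30, 0.20, 0.10],   # fairly uncertain
                  [0.90, 0.05, 0.03, 0.02]])  # confident "N"
print(select_queries(probs, n_queries=2))     # -> [1, 2]: send these to the expert
```

    Labeling only the most ambiguous beats is what lets the approach reach high accuracy with as few as 100 expert queries.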

  1. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must support a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation-hardened/tolerant single-board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  2. Aligning Seminars with Bologna Requirements: Reciprocal Peer Tutoring, the Solo Taxonomy and Deep Learning

    Science.gov (United States)

    Lueg, Rainer; Lueg, Klarissa; Lauridsen, Ole

    2016-01-01

    Changes in public policy, such as the Bologna Process, require students to be equipped with multifunctional competencies to master relevant tasks in unfamiliar situations. Achieving this goal might imply a change in many curricula toward deeper learning. As a didactical means to achieve deep learning results, the authors suggest reciprocal peer…

  3. Starvation and recovery in the deep-sea methanotroph Methyloprofundus sedimenti.

    Science.gov (United States)

    Tavormina, Patricia L; Kellermann, Matthias Y; Antony, Chakkiath Paul; Tocheva, Elitza I; Dalleska, Nathan F; Jensen, Ashley J; Valentine, David L; Hinrichs, Kai-Uwe; Jensen, Grant J; Dubilier, Nicole; Orphan, Victoria J

    2017-01-01

    In the deep ocean, the conversion of methane into derived carbon and energy drives the establishment of diverse faunal communities. Yet specific biological mechanisms underlying the introduction of methane-derived carbon into the food web remain poorly described, due to a lack of cultured representative deep-sea methanotrophic prokaryotes. Here, the response of the deep-sea aerobic methanotroph Methyloprofundus sedimenti to methane starvation and recovery was characterized. By combining lipid analysis, RNA analysis, and electron cryotomography, it was shown that M. sedimenti undergoes discrete cellular shifts in response to methane starvation, including changes in headgroup-specific fatty acid saturation levels, and reductions in cytoplasmic storage granules. Methane starvation is associated with a significant increase in the abundance of gene transcripts pertinent to methane oxidation. Methane reintroduction to starved cells stimulates a rapid, transient extracellular accumulation of methanol, revealing a way in which methane-derived carbon may be routed to community members. This study provides new understanding of methanotrophic responses to methane starvation and recovery, and lays the initial groundwork to develop Methyloprofundus as a model chemosynthesizing bacterium from the deep sea. © 2016 John Wiley & Sons Ltd.

  4. Reduction of Subjective and Objective System Complexity

    Science.gov (United States)

    Watson, Michael D.

    2015-01-01

    Occam's razor is often used in science to define the minimum criteria needed to establish a physical or philosophical idea or relationship. Albert Einstein is credited with the saying "everything should be made as simple as possible, but not simpler". These heuristics rest on a belief that there is a minimum state or set of states for a given system or phenomenon. In looking at system complexity, they point us to the idea that complexity can be reduced to a minimum. How, then, do we approach a reduction in complexity? Complexity has been described both as a subjective concept and as an objective measure of a system. Subjective complexity is based on human cognitive comprehension of the functions and interrelationships of a system, and is defined by the ability to fully comprehend the system. Simplifying complexity, in a subjective sense, is thus gaining a deeper understanding of the system. As Apple's Jonathon Ive has stated, "It's not just minimalism or the absence of clutter. It involves digging through the depth of complexity. To be truly simple, you have to go really deep". Simplicity is not the absence of complexity but a deeper understanding of it. Subjective complexity, based on this human comprehension, cannot then be separated from the sociological concept of ignorance. The inability to comprehend a system can stem from a lack of knowledge, an inability to understand the intricacies of the system, or both. Reduction in this sense rests purely on a cognitive ability to understand the system, and no system then may be truly complex. From this view, education and experience appear to be the keys to reducing or eliminating complexity. Objective complexity is the measure of a system's functions and interrelationships, which exist independent of human comprehension. Jonathon Ive's statement does not say that complexity is removed, only that it is understood. From this standpoint, reduction of complexity can be approached

  5. Active3 noise reduction

    International Nuclear Information System (INIS)

    Holzfuss, J.

    1996-01-01

    Noise reduction is a problem encountered in a variety of applications, such as environmental noise cancellation and signal recovery and separation. Passive noise reduction is done with the help of absorbers. Active noise reduction includes the transmission of phase-inverted signals for the cancellation. This paper describes a threefold active approach to noise reduction. It includes the separation of a combined source, which consists of both a noise part and a signal part. By interacting with the source, scanning it and recording its response, a model of it as a nonlinear dynamical system is obtained. The approach employs phase-space analysis and global radial basis functions as tools for the prediction used in a subsequent cancellation procedure. Examples are given which include noise reduction of speech. copyright 1996 American Institute of Physics
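The phase-inversion principle behind active cancellation can be illustrated with a short numerical sketch; the tone, disturbance frequency, and amplitudes below are arbitrary illustrations, not data from the paper:

```python
import math

# Destructive interference: the anti-noise signal is the phase-inverted
# (negated) copy of the measured noise, so adding it to the corrupted
# measurement cancels the noise and leaves the desired signal.
t = [i / 1000.0 for i in range(1000)]
signal = [math.sin(2 * math.pi * 5 * x) for x in t]          # desired 5 Hz tone
noise = [0.8 * math.sin(2 * math.pi * 60 * x) for x in t]    # 60 Hz disturbance
measured = [s + n for s, n in zip(signal, noise)]

anti_noise = [-n for n in noise]                             # phase-inverted copy
cancelled = [m + a for m, a in zip(measured, anti_noise)]

# Residual error after cancellation (zero up to floating-point rounding)
residual = max(abs(c - s) for c, s in zip(cancelled, signal))
```

In a real active system the anti-noise cannot simply be copied; it must be predicted, e.g. from a nonlinear dynamical model of the source as described above.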

  6. Hexavalent Chromium reduction by Trichoderma inhamatum

    Energy Technology Data Exchange (ETDEWEB)

    Morales-Battera, L.; Cristiani-Urbina, E.

    2009-07-01

    Reduction of hexavalent chromium [Cr(VI)] to trivalent chromium [Cr(III)] is a useful and attractive process for the remediation of ecosystems and industrial effluents contaminated with Cr(VI). Cr(VI) reduction to Cr(III) can be achieved by both chemical and biological methods; however, biological reduction is more convenient than chemical reduction since costs are lower and sludge is generated in smaller amounts. (Author)

  7. [Perceptions of classroom goal structures, personal achievement goal orientations, and learning strategies].

    Science.gov (United States)

    Miki, Kaori; Yamauchi, Hirotsugu

    2005-08-01

    We examined the relations among students' perceptions of classroom goal structures (mastery and performance goal structures), students' achievement goal orientations (mastery, performance, and work-avoidance goals), and learning strategies (deep processing, surface processing, and self-handicapping strategies). Participants were 323 5th and 6th grade students in elementary schools. The results from structural equation modeling indicated that perceptions of classroom mastery goal structures were associated with students' mastery goal orientations, which were in turn related positively to deep processing strategies and academic achievement. Perceptions of classroom performance goal structures were associated with work-avoidance goal orientations, which were positively related to the surface processing and self-handicapping strategies. Both types of goal structures had a positive relation with students' performance goal orientations, which had significant positive effects on academic achievement. The results of this study suggest that elementary school students' perceptions of mastery goal structures are related to adaptive patterns of learning more than perceptions of performance goal structures are. The role of perceptions of classroom goal structures in promoting students' goal orientations and learning strategies is discussed.

  8. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R. Anita; V. Ganga Bharani; N. Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  9. Interleaving subthalamic nucleus deep brain stimulation to avoid side effects while achieving satisfactory motor benefits in Parkinson disease: A report of 12 cases.

    Science.gov (United States)

    Zhang, Shizhen; Zhou, Peizhi; Jiang, Shu; Wang, Wei; Li, Peng

    2016-12-01

    Deep brain stimulation (DBS) of the subthalamic nucleus is an effective treatment for advanced Parkinson disease (PD). However, achieving ideal outcomes with conventional programming can be difficult in some patients, resulting in suboptimal control of PD symptoms and stimulation-induced adverse effects. Interleaving stimulation (ILS) is a newer programming technique that can individually optimize the stimulation area, thereby improving control of PD symptoms while alleviating stimulation-induced side effects after conventional programming fails to achieve the desired results. We retrospectively reviewed PD patients who received DBS programming during the previous 4 years in our hospital. We collected clinical and demographic data from 12 patients who received ILS because of incomplete alleviation of PD symptoms or stimulation-induced adverse effects after conventional programming had proven ineffective or intolerable. Appropriate lead location was confirmed with postoperative reconstruction images. The rationale and clinical efficacy of ILS were analyzed. We divided our patients into 4 groups based on the following symptoms: stimulation-induced dysarthria, choreoathetoid dyskinesias, gait disturbance, and incomplete control of parkinsonism. After treatment with ILS, patients showed satisfactory improvement in PD symptoms and alleviation of stimulation-induced side effects, with a mean improvement in Unified PD Rating Scale motor scores of 26.9%. ILS is a newer and effective programming strategy to maximize symptom control in PD while decreasing stimulation-induced adverse effects when conventional programming fails to achieve a satisfactory outcome. However, we should keep in mind that most DBS patients are routinely treated with conventional stimulation and that not all patients benefit from ILS. ILS is not recommended as the first choice of programming; it is recommended only when patients have unsatisfactory control of PD symptoms or stimulation

  10. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    Science.gov (United States)

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets they use, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance and subset level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  11. A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction.

    Science.gov (United States)

    Spencer, Matt; Eickholt, Jesse; Jianlin Cheng

    2015-01-01

    Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly in demand due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent, and many wonder whether prediction can be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor, which we call DNSS, that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures. Graphics processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent.
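The windowed-PSSM input encoding used by predictors of this kind can be sketched as follows; the window size, layer widths, and random weights here are hypothetical stand-ins for illustration, not the trained DNSS networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy position-specific scoring matrix (PSSM): one row per residue, 20
# columns (one per amino acid). Values are random stand-ins; a real PSSM
# would come from PSI-BLAST profiles as described in the abstract.
n_residues, n_aa = 30, 20
pssm = rng.normal(size=(n_residues, n_aa))

# Encode each residue as a window of PSSM rows centred on it (zero-padded
# at the termini), the standard input for window-based SS predictors.
window = 7                       # illustrative window size, not the DNSS value
half = window // 2
padded = np.vstack([np.zeros((half, n_aa)), pssm, np.zeros((half, n_aa))])
X = np.stack([padded[i:i + window].ravel() for i in range(n_residues)])

# Untrained two-layer network mapping each window to the 3 secondary-
# structure states (H = helix, E = strand, C = coil). Weights are random:
# this sketch shows only the data flow, not a trained predictor.
W1 = rng.normal(scale=0.1, size=(window * n_aa, 32))
W2 = rng.normal(scale=0.1, size=(32, 3))
hidden = np.maximum(X @ W1, 0.0)                       # ReLU hidden layer
logits = hidden @ W2
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

states = np.array(["H", "E", "C"])[probs.argmax(axis=1)]  # per-residue call
```

A trained predictor would fit W1 and W2 (and, per the abstract, stack and combine several such networks); the windowing and per-residue softmax classification are the common scaffold.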

  12. A double-blind, randomized trial of deep repetitive transcranial magnetic stimulation (rTMS) for autism spectrum disorder.

    Science.gov (United States)

    Enticott, Peter G; Fitzgibbon, Bernadette M; Kennedy, Hayley A; Arnold, Sara L; Elliot, David; Peachey, Amy; Zangen, Abraham; Fitzgerald, Paul B

    2014-01-01

    Biomedical treatment options for autism spectrum disorder (ASD) are extremely limited. Repetitive transcranial magnetic stimulation (rTMS) is a safe and efficacious technique when targeting specific areas of cortical dysfunction in major depressive disorder, and a similar approach could yield therapeutic benefits in ASD, if applied to relevant cortical regions. The aim of this study was to examine whether deep rTMS to bilateral dorsomedial prefrontal cortex improves social relating in ASD. 28 adults diagnosed with either autistic disorder (high-functioning) or Asperger's disorder completed a prospective, double-blind, randomized, placebo-controlled design with 2 weeks of daily weekday treatment. This involved deep rTMS to bilateral dorsomedial prefrontal cortex (5 Hz, 10-s train duration, 20-s inter-train interval) for 15 min (1500 pulses per session) using a HAUT-Coil. The sham rTMS coil was encased in the same helmet as the active deep rTMS coil, but no effective field was delivered into the brain. Assessments were conducted before, after, and one month following treatment. Participants in the active condition showed a near-significant reduction in self-reported social relating symptoms from pre-treatment to one month follow-up, and a significant reduction in social relating symptoms (relative to sham participants) for both post-treatment assessments. Those in the active condition also showed a reduction in self-oriented anxiety during difficult and emotional social situations from pre-treatment to one month follow-up. There were no changes for those in the sham condition. Deep rTMS to bilateral dorsomedial prefrontal cortex yielded a reduction in social relating impairment and socially-related anxiety. Further research in this area should employ extended rTMS protocols that approximate those used in depression in an attempt to replicate and amplify the clinical response. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. How deep-sea wood falls sustain chemosynthetic life.

    Directory of Open Access Journals (Sweden)

    Christina Bienhold

    Large organic food falls to the deep sea--such as whale carcasses and wood logs--are known to serve as stepping stones for the dispersal of highly adapted chemosynthetic organisms inhabiting hot vents and cold seeps. Here we investigated the biogeochemical and microbiological processes leading to the development of sulfidic niches by deploying wood colonization experiments at a depth of 1690 m in the Eastern Mediterranean for one year. Wood-boring bivalves of the genus Xylophaga played a key role in the degradation of the wood logs, facilitating the development of anoxic zones and anaerobic microbial processes such as sulfate reduction. Fauna and bacteria associated with the wood included types reported from other deep-sea habitats including chemosynthetic ecosystems, confirming the potential role of large organic food falls as biodiversity hot spots and stepping stones for vent and seep communities. Specific bacterial communities developed on and around the wood falls within one year and were distinct from freshly submerged wood and background sediments. These included sulfate-reducing and cellulolytic bacterial taxa, which are likely to play an important role in the utilization of wood by chemosynthetic life and other deep-sea animals.

  14. How Deep-Sea Wood Falls Sustain Chemosynthetic Life

    Science.gov (United States)

    Bienhold, Christina; Pop Ristova, Petra; Wenzhöfer, Frank; Dittmar, Thorsten; Boetius, Antje

    2013-01-01

    Large organic food falls to the deep sea – such as whale carcasses and wood logs – are known to serve as stepping stones for the dispersal of highly adapted chemosynthetic organisms inhabiting hot vents and cold seeps. Here we investigated the biogeochemical and microbiological processes leading to the development of sulfidic niches by deploying wood colonization experiments at a depth of 1690 m in the Eastern Mediterranean for one year. Wood-boring bivalves of the genus Xylophaga played a key role in the degradation of the wood logs, facilitating the development of anoxic zones and anaerobic microbial processes such as sulfate reduction. Fauna and bacteria associated with the wood included types reported from other deep-sea habitats including chemosynthetic ecosystems, confirming the potential role of large organic food falls as biodiversity hot spots and stepping stones for vent and seep communities. Specific bacterial communities developed on and around the wood falls within one year and were distinct from freshly submerged wood and background sediments. These included sulfate-reducing and cellulolytic bacterial taxa, which are likely to play an important role in the utilization of wood by chemosynthetic life and other deep-sea animals. PMID:23301092

  15. Energy Information Augmented Community-Based Energy Reduction

    Directory of Open Access Journals (Sweden)

    Mark Rembert

    2012-06-01

    More than one-half of all U.S. states have instituted energy efficiency mandates requiring utilities to reduce energy use. To achieve these goals, utilities have been permitted rate structures to help them incentivize energy reduction projects. This strategy is proving only modestly successful in stemming energy consumption growth. By the same token, community energy reduction programs have achieved moderate to very significant energy reductions. The research described here offers an important tool to strengthen community energy reduction efforts: providing them energy information tailored to the energy use patterns of each building occupant. Most importantly, this information helps each individual energy customer understand their potential for energy savings and which reduction measures are most important to them. This information can be leveraged by the leading community organization to prompt greater action in its community. A number of case studies of this model are shown. Early results are promising.

  16. Interventional treatment for old thrombus in iliofemoral deep veins

    International Nuclear Information System (INIS)

    Qian Jun; Jiang Hong; Yang Yang

    2009-01-01

    Objective: To evaluate interventional management in treating old thrombus in iliofemoral deep veins. Methods: The clinical data and the interventional treatment results of 32 patients with chronic iliofemoral deep venous thrombosis were retrospectively reviewed and analyzed. Results: Technical success was achieved in 30 patients (93.8%). Twenty-nine endovascular stents were successfully placed in 25 patients. Postoperative therapeutic effects were as follows: grade I was obtained in 4 cases (12.5%), grade II in 16 cases (50.0%), grade III in 10 cases (31.3%) and grade IV in 2 cases (6.3%). Twenty-nine patients were followed up for a mean period of 13.0 ± 6.8 months, and three patients were lost to follow-up. The follow-up results were as follows: grade I was seen in 3 cases (10.3%), grade II in 14 cases (48.3%) and grade III in 12 cases (41.4%). Conclusion: Interventional management is a minimally invasive, safe and effective treatment for chronic iliofemoral deep venous thrombosis. (authors)

  17. Diagnosis of deep vein thrombosis using autologous indium-111-labelled platelets

    International Nuclear Information System (INIS)

    Fenech, A.; Hussey, J.K.; Smith, F.W.; Dendy, P.P.; Bennett, B.; Douglas, A.S.

    1981-01-01

    Forty-eight patients who had undergone surgical reduction of a fractured neck of femur or in whom deep vein thrombosis was suspected clinically were studied by ascending phlebography and imaging after injection of autologous indium-111-labelled platelets to assess the accuracy and value of the radioisotopic technique in diagnosing deep vein thrombosis. Imaging was performed with a wide-field gamma camera linked with data display facilities. Phlebography showed thrombi in 26 out of 54 limbs examined and a thrombus in the inferior vena cava of one patient; imaging the labelled platelets showed the thrombi in 24 of the 26 limbs and the thrombus in the inferior vena cava. The accumulation of indium-111 at sites corresponding to those at which venous thrombi have been shown phlebographically indicates that this radioisotopic technique is a useful addition to the methods already available for the detection of deep vein thrombosis. (author)

  18. Diagnosis of deep vein thrombosis using autologous indium-111-labelled platelets

    Energy Technology Data Exchange (ETDEWEB)

    Fenech, A.; Hussey, J.K.; Smith, F.W.; Dendy, P.P.; Bennett, B.; Douglas, A.S. (Aberdeen Univ. (UK))

    1981-03-28

    Forty-eight patients who had undergone surgical reduction of a fractured neck of femur or in whom deep vein thrombosis was suspected clinically were studied by ascending phlebography and imaging after injection of autologous indium-111-labelled platelets to assess the accuracy and value of the radioisotopic technique in diagnosing deep vein thrombosis. Imaging was performed with a wide-field gamma camera linked with data display facilities. Phlebography showed thrombi in 26 out of 54 limbs examined and a thrombus in the inferior vena cava of one patient; imaging the labelled platelets showed the thrombi in 24 of the 26 limbs and the thrombus in the inferior vena cava. The accumulation of indium-111 at sites corresponding to those at which venous thrombi have been shown phlebographically indicates that this radioisotopic technique is a useful addition to the methods already available for the detection of deep vein thrombosis.

  19. Bacteriological examination and biological characteristics of deep frozen bone preserved by gamma sterilization

    International Nuclear Information System (INIS)

    Pham Quang Ngoc; Le The Trung; Vo Van Thuan; Ho Minh Duc

    1999-01-01

    To support surgical practice in Vietnam, bone allografts of different sizes must be made available. For this reason we have developed a standard procedure for the procurement, deep freezing, packaging, and radiation sterilization of massive bone. The achievements of this effort are briefly reported. A dose of 10-15 kGy proved suitable for radiation sterilization of massive bone allografts processed under clean conditions and preserved deep-frozen. Neither deep freezing nor radiation sterilization causes any significant loss of the biochemical stability of massive bone allografts, especially when deep freezing is combined with irradiation. Neither cross-infection nor changes in biological characteristics were found after 6 months of storage following radiation treatment. In addition to the results of previous research and development of tissue grafts for medical care, deep-freezing radiation sterilization has been established for the preservation of massive bone, which is in high demand for surgery in Vietnam.

  20. Deep ocean model penetrator experiments

    International Nuclear Information System (INIS)

    Freeman, T.J.; Burdett, J.R.F.

    1986-01-01

    Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting a sediment. It is tentatively concluded that once the resistance to penetration offered by the sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site.
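An elementary embedment model of the kind referred to above can be sketched as constant-resistance deceleration; all numbers below are hypothetical illustrations, not results from the Great Meteor East or Nares Abyssal Plain trials:

```python
# A penetrator of mass m hits the sediment at impact speed v0 and is
# decelerated by a constant net resisting force R_net (penetration
# resistance minus submerged weight). All values are hypothetical.
m = 1800.0        # penetrator mass, kg
v0 = 50.0         # impact velocity, m/s
R_net = 450e3     # net resisting force in the sediment, N

# With constant deceleration a = R_net / m, the embedment depth follows
# from the kinematic relation v0**2 = 2 * a * d:
a = R_net / m
depth_analytic = v0 ** 2 / (2.0 * a)

# The same result by stepping the equation of motion, as one would do
# when the resistance varies with depth instead of being constant:
dt, v, d = 1e-4, v0, 0.0
while v > 0.0:
    d += v * dt       # advance position
    v -= a * dt       # decelerate
```

Inverting this relation, a measured embedment depth and impact velocity yield the site's resistance, which can then predict embedment for penetrators of other sizes and weights.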

  1. Are deep strategic learners better suited to PBL? A preliminary study.

    Science.gov (United States)

    Papinczak, Tracey

    2009-08-01

    The aim of this study was to determine if medical students categorized as having deep and strategic approaches to their learning find problem-based learning (PBL) enjoyable and supportive of their learning, and achieve well in the first-year course. Quantitative and qualitative data were gathered from first-year medical students (N = 213). All students completed the Medical Course Learning Questionnaire at the commencement and completion of their first year of medical studies. The instrument measured a number of different aspects of learning, including approaches to learning, preferences for different learning environments, self-efficacy, and perceptions of learning within PBL tutorials. Qualitative data were collected from written responses to open questions. Results of students' performance on two forms of examinations were obtained for those giving permission (N = 68). Two-step cluster analysis of the cohort's responses to questions about their learning approaches identified five clusters, three of which represented coherent combinations of learning approaches (deep, deep and strategic, and surface apathetic) and two of which had unusual or dissonant combinations. Deep, strategic learners represented 25.8% of the cohort. They were more efficacious, preferred learning environments which support development of understanding, and achieved significantly higher scores on the written examination. Strongly positive comments about learning in PBL tutorials came principally from members of this cluster. This preliminary study employed a technique to categorize a student cohort into subgroups on the basis of their approaches to learning. One subgroup, the deep and strategic learners, appeared to be less vulnerable to the stresses of PBL in a medical course. While variation between individual learners will always be considerable, this analysis has enabled classification of a student group that may be less likely to find PBL problematic. Implications for practice and

  2. Hydro-mechanical deep drawing of rolled magnesium sheets

    Energy Technology Data Exchange (ETDEWEB)

    Bach, F.W.; Rodman, M.; Rossberg, A. [Hannover Univ., Garbsen (Germany). Inst. of Materials Science; Behrens, B.A.; Vogt, O. [Hannover Univ., Garbsen (DE). Inst. of Metal Forming and Metal Forming Machine Tools (IFUM)

    2005-12-01

    Magnesium sheets offer high specific properties which make them very attractive for modern lightweight constructions. The main obstacles to wider usage are their high production costs, poor corrosion properties, and limited ductility. Until today, forming processes have had to be conducted at temperatures well above T = 220 °C. In the first place, this is a cost factor. Moreover, technical aspects such as grain growth or the limited use of lubrication speak against high temperatures. The first aim of the presented research work is to increase the ductility at lower temperatures by alloy modification and by an adapted rolling technology. The key factor in reaching isotropic mechanical properties and increased limit drawing ratios in deep-drawing tools is to achieve fine, homogeneous microstructures. This can be done by cross rolling at moderate temperatures. The heat treatment has to be adapted accordingly. In a second stage, hydro-mechanical deep-drawing experiments were carried out at elevated temperature. The results show that the forming behaviour of the tested Mg alloys is considerably improved compared to conventional deep drawing. (orig.)

  3. Reduction of nuclear waste with ALMRS

    International Nuclear Information System (INIS)

    Bultman, J.H.

    1993-10-01

    The Advanced Liquid Metal Reactor (ALMR) can operate on LWR discharged material. In calculating the reduction of this material in the ALMR, the inventory of the core should be taken into account. A high reduction can only be obtained if this inventory is reduced during operation of ALMRs. It is then possible to achieve a reduction of up to a factor of 100 within a few hundred years. (orig.)

  4. Technologies in deep and ultra-deep well drilling: Present status, challenges and future trend in the 13th Five-Year Plan period (2016–2020)

    Directory of Open Access Journals (Sweden)

    Haige Wang

    2017-09-01

    During the 12th Five-Year Plan period (2011–2015), CNPC independently developed a series of new drilling equipment, tools and chemical materials for deep and ultra-deep wells, including six packages of key drilling equipment: rigs for wells up to 8000 m deep, quadruple-joint-stand rigs, automatic pipe handling devices for rigs for wells 5000/7000 m deep, managed pressure drilling systems & equipment, gas/fuel alternative combustion engine units, and air/gas/underbalanced drilling systems; seven sets of key drilling tools: automatic vertical well drilling tools, downhole turbine tools, high-performance PDC bits, hybrid bits, bit jet pulsation devices, no-drilling-surprise monitoring system, & casing running devices for top drive; and five kinds of drilling fluids and cementing slurries: high temperature and high density water-based drilling fluids, oil-based drilling fluids, high temperature and large temperature difference cementing slurry, and ductile cement slurry system. These new development technologies have played an important role in supporting China's oil and gas exploration and development business. During the following 13th Five-Year Plan period (2016–2020), there are still many challenges to the drilling of deep and ultra-deep wells, such as high temperatures, high pressures, narrow pressure window, wellbore integrity and so on, as well as the enormous pressure on cost reduction and efficiency improvement. Therefore, the future development trend will be focused on the development of efficient and mobile rigs, high-performance drill bits and auxiliary tools, techniques for wellbore integrity and downhole broadband telemetry, etc. In conclusion, this study will help improve the ability and level of drilling ultra-deep wells and provide support for oil and gas exploration and development services in China. Keywords: Deep well, Ultra-deep well, Drilling techniques, Progress, Challenge, Strategy, CNPC

  5. High Pressure Reduction of Selenite by Shewanella oneidensis MR-1

    Science.gov (United States)

    Picard, A.; Daniel, I.; Testemale, D.; Letard, I.; Bleuet, P.; Cardon, H.; Oger, P.

    2007-12-01

High-pressure biotopes comprise cold deep-sea environments, hydrothermal vents, and deep subsurface or deep-sea sediments. The latter are less studied, owing to the technical difficulty of sampling at great depths without contamination. Nevertheless, microbial sulfate reduction and methanogenesis have been found to be spatially distributed in deep-sea sediments (1), and sulfate reduction has been shown to be more efficient under high hydrostatic pressure (HHP) in some sediments (2). Sulfate-reducing bacteria obtained from the Japan Sea are characterized by an increased sulfide production under pressure (3,4). Unfortunately, investigations of microbial metabolic activity as a function of pressure are extremely scarce due to the experimental difficulty of such measurements at high hydrostatic pressure. We were able to measure the reduction of selenite Se(IV) by Shewanella oneidensis MR-1 as a function of pressure, up to 150 MPa, using two different high-pressure reactors that allow in situ X-ray spectroscopy measurements at a synchrotron source. A first series of measurements was carried out in a low-pressure Diamond Anvil Cell (DAC) of our own design (5) at the ID22 beamline at the ESRF (European Synchrotron Radiation Facility); a second was performed in an autoclave (6) at the BM30B beamline at the ESRF. Selenite reduction by strain MR-1 was monitored from ambient pressure to 150 MPa over 25 hours at 30 deg C by XANES (X-ray Absorption Near-Edge Structure) spectroscopy. Spectra were recorded hourly in order to quantify the evolution of the oxidation state of selenium with time. Stationary-phase bacteria were inoculated at a high concentration into fresh growth medium containing 5 or 10 M of sodium selenite and 20 mM sodium lactate. Kinetic parameters of the Se(IV) reduction by Shewanella oneidensis strain MR-1 could be extracted from the data as a function of pressure.
They show 1) that the rate constant k of the reaction is decreased by half at high pressure
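
    The rate-constant comparison in this record can be illustrated with a minimal sketch: assuming pseudo-first-order decay of the Se(IV) fraction measured hourly by XANES, k is minus the slope of ln(fraction) versus time. The data and rate constants below are synthetic stand-ins, not the paper's measurements.

    ```python
    import numpy as np

    # Hypothetical Se(IV) fractions following first-order decay c(t) = exp(-k t);
    # the assumed rate constants stand in for ambient- and high-pressure values.
    t = np.arange(0.0, 25.0, 1.0)                # hourly XANES spectra over 25 h
    k_ambient = 0.20                             # assumed rate constant at 0.1 MPa (1/h)
    frac_ambient = np.exp(-k_ambient * t)
    frac_high_p = np.exp(-0.5 * k_ambient * t)   # "k decreased by half" at high pressure

    def fit_first_order_k(t, fraction):
        """Fit ln(c) = -k t by least squares and return k."""
        slope, _ = np.polyfit(t, np.log(fraction), 1)
        return -slope

    k1 = fit_first_order_k(t, frac_ambient)
    k2 = fit_first_order_k(t, frac_high_p)
    print(round(k1, 3), round(k2, 3), round(k2 / k1, 2))
    ```

    With real spectra the fractions would come from linear combination fitting of the Se K-edge XANES, but the extraction of k is the same log-linear fit.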

  6. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.

    Science.gov (United States)

    Neftci, Emre O; Augustine, Charles; Paul, Somnath; Detorakis, Georgios

    2017-01-01

An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons per synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
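
    A minimal sketch of the synaptic-update structure the abstract describes: the output error is projected through a fixed random feedback matrix (the defining ingredient of random BP), and each update needs only two comparisons (a boxcar gate on the postsynaptic potential) and one addition per synapse. All sizes, thresholds, and values here are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dimensions, assumed for illustration only.
    n_in, n_hid, n_out = 8, 16, 4
    W = rng.normal(0.0, 0.1, (n_hid, n_in))    # plastic feed-forward weights
    G = rng.normal(0.0, 1.0, (n_hid, n_out))   # fixed random feedback matrix
    V = rng.uniform(-1.5, 1.5, n_hid)          # postsynaptic membrane potentials
    V_MIN, V_MAX, LR = -1.0, 1.0, 0.05         # boxcar bounds and learning rate

    def erbp_update(W, pre_spikes, error, V):
        """Per spike event, each synapse needs two comparisons (boxcar gate)
        and one addition (the weight increment)."""
        T = G @ error                            # error randomly projected to hidden units
        for j in np.flatnonzero(pre_spikes):     # event-driven: only spiking inputs
            for i in range(W.shape[0]):
                if V_MIN < V[i] < V_MAX:         # two comparisons
                    W[i, j] += -LR * T[i]        # one addition
        return W

    pre = rng.random(n_in) < 0.5                 # which inputs spiked this time step
    err = np.array([0.2, -0.1, 0.0, 0.3])        # output error (prediction minus target)
    W0 = W.copy()
    W = erbp_update(W, pre, err, V)
    ```

    Synapses onto non-spiking inputs, and onto neurons whose potential lies outside the boxcar window, are untouched, which is what makes the rule cheap on event-driven hardware.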

  7. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

    Directory of Open Access Journals (Sweden)

    Emre O. Neftci

    2017-06-01

Full Text Available An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons per synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.

  8. A 481pJ/decision 3.4M decision/s Multifunctional Deep In-memory Inference Processor using Standard 6T SRAM Array

    OpenAIRE

    Kang, Mingu; Gonugondla, Sujan; Patil, Ameya; Shanbhag, Naresh

    2016-01-01

    This paper describes a multi-functional deep in-memory processor for inference applications. Deep in-memory processing is achieved by embedding pitch-matched low-SNR analog processing into a standard 6T 16KB SRAM array in 65 nm CMOS. Four applications are demonstrated. The prototype achieves up to 5.6X (9.7X estimated for multi-bank scenario) energy savings with negligible (

  9. Classifying the molecular functions of Rab GTPases in membrane trafficking using deep convolutional neural networks.

    Science.gov (United States)

    Le, Nguyen-Quoc-Khanh; Ho, Quang-Thai; Ou, Yu-Yen

    2018-06-13

Deep learning has been increasingly used to solve a number of problems with state-of-the-art performance in a wide variety of fields. In biology, deep learning can be applied to reduce feature extraction time and achieve high levels of performance. In the present work, we apply deep learning via two-dimensional convolutional neural networks and position-specific scoring matrices to classify Rab protein molecules, which are the main regulators in membrane trafficking for transferring proteins and other macromolecules throughout the cell. The loss of specific Rab molecular functions has been implicated in a variety of human diseases, e.g., choroideremia, intellectual disabilities, and cancer. Therefore, creating a precise model for classifying Rabs is crucial in helping biologists understand the molecular functions of Rabs and design drug targets according to such specific human disease information. We constructed a robust deep neural network for classifying Rabs that achieved accuracies of 99%, 99.5%, 96.3%, and 97.6% on four specific molecular functions. Our approach demonstrates superior performance to traditional artificial neural networks. Our study thus provides both an effective tool for classifying Rab proteins and a basis for further research into improving the performance of biological modeling with deep neural networks.
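
    As a rough illustration of the pipeline described (a 2D convolution applied to a position-specific scoring matrix), here is a minimal single-filter sketch in plain NumPy; the PSSM, kernel weights, and read-out are hypothetical placeholders rather than the authors' trained network:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical PSSM for a protein of length 60 over the 20 amino acids;
    # real inputs would come from PSI-BLAST profiles, not random numbers.
    pssm = rng.normal(size=(60, 20))

    def conv2d_valid(x, kern):
        """Minimal 2D convolution (valid padding), the core op of the classifier."""
        kh, kw = kern.shape
        out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kern)
        return out

    kernel = rng.normal(size=(3, 3))                     # one filter (weights assumed)
    fmap = np.maximum(conv2d_valid(pssm, kernel), 0.0)   # ReLU feature map
    feature = fmap.max()                                 # global max pooling
    prob = 1.0 / (1.0 + np.exp(-feature))                # logistic read-out for one class
    ```

    A trained model would stack many such filters and layers, but the data flow — PSSM in, convolved feature maps, pooled features, class probabilities out — is the same.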

  10. Classification of Exacerbation Frequency in the COPDGene Cohort Using Deep Learning with Deep Belief Networks.

    Science.gov (United States)

    Ying, Jun; Dutta, Joyita; Guo, Ning; Hu, Chenhui; Zhou, Dan; Sitek, Arkadiusz; Li, Quanzheng

    2016-12-21

This study aims to develop an automatic classifier based on deep learning for exacerbation frequency in patients with chronic obstructive pulmonary disease (COPD). A three-layer deep belief network (DBN) with two hidden layers and one visible layer was employed to develop the classification models, and the models' robustness to exacerbation was analyzed. Subjects from the COPDGene cohort were labeled with exacerbation frequency, defined as the number of exacerbation events per year. 10,300 subjects with 361 features each were included in the analysis. After feature selection and parameter optimization, the proposed classification method achieved an accuracy of 91.99% in a 10-fold cross-validation experiment. The analysis of the DBN weights showed a good visual spatial relationship between the underlying critical features of different layers. Our findings show that the most sensitive features obtained from the DBN weights are consistent with the consensus shown by clinical rules and standards for COPD diagnostics. We thus demonstrate that the DBN is a competitive tool for exacerbation risk assessment in patients suffering from COPD.
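
    A deep belief network of the kind used here is a stack of restricted Boltzmann machines trained greedily, one layer at a time. The sketch below shows one contrastive-divergence (CD-1) update for a single tiny RBM layer, with illustrative sizes rather than the study's 361-feature setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Tiny restricted Boltzmann machine; sizes are illustrative only.
    n_vis, n_hid = 6, 4
    W = rng.normal(0.0, 0.1, (n_vis, n_hid))
    b_vis = np.zeros(n_vis)
    b_hid = np.zeros(n_hid)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0, lr=0.1):
        """One contrastive-divergence (CD-1) update for a single visible vector."""
        global W, b_vis, b_hid
        p_h0 = sigmoid(v0 @ W + b_hid)                  # up pass
        h0 = (rng.random(n_hid) < p_h0).astype(float)   # sample hidden units
        p_v1 = sigmoid(h0 @ W.T + b_vis)                # down pass (reconstruction)
        p_h1 = sigmoid(p_v1 @ W + b_hid)                # second up pass
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_vis += lr * (v0 - p_v1)
        b_hid += lr * (p_h0 - p_h1)
        return np.mean((v0 - p_v1) ** 2)                # reconstruction error

    v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
    errors = [cd1_step(v) for _ in range(200)]
    ```

    After pre-training each layer this way, a DBN is typically fine-tuned with a supervised criterion; reconstruction error falling over the updates is the usual sign the layer is learning its input distribution.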

  11. The Effect of Deep Oscillation Therapy in Fibrocystic Breast Disease. A Randomized Controlled Clinical Trial

    Directory of Open Access Journals (Sweden)

    Solangel Hernandez

    2018-02-01

Full Text Available Introduction: Fibrocystic breast disease is the most widespread disorder in women during their phase of sexual maturity. Deep oscillation (DO) therapy has been used on patients who have undergone an operation for breast cancer as a special form of manual lymphatic drainage. Method: An experimental, prospective case-control study was conducted in 401 women diagnosed with fibrocystic breast disease. The sample was selected at random and was divided into three groups: a study group and two control groups. Results: Pain was reduced in all three therapies applied; this reduction was statistically significant in the study group. The sonography study showed a predominance of the fibrous form. Upon completion of the treatment, a resolution of the fibrosis was observed in the study group. The women were wearing their bras incorrectly. Conclusions: Pain was reduced in all three therapies applied; in the study group this reduction was statistically significant. It is possible to verify the magnitude of the resonant vibration in the connective tissue from superficial to deep layers by viewing the effect of the deep oscillations with diagnostic ultrasound. The most frequent sonographic finding was fibrosis. Deep oscillation therapy produces a tissue-relaxing, moderate vasoconstriction effect, favours local oedema reabsorption and fibrosis reduction. A factor that may affect breast pain is incorrect bra use; the majority of the women studied were wearing their bras incorrectly.

  12. Metagenomic Signatures of Microbial Communities in Deep-Sea Hydrothermal Sediments of Azores Vent Fields.

    Science.gov (United States)

    Cerqueira, Teresa; Barroso, Cristina; Froufe, Hugo; Egas, Conceição; Bettencourt, Raul

    2018-01-21

The organisms inhabiting the deep seafloor are known to play a crucial role in global biogeochemical cycles. Chemolithoautotrophic prokaryotes, which produce biomass from single-carbon molecules, constitute the primary source of nutrition for higher organisms and are critical for the sustainability of food webs and overall life in deep-sea hydrothermal ecosystems. The present study investigates the metabolic profiles of chemolithoautotrophs inhabiting the sediments of the Menez Gwen and Rainbow deep-sea vent fields on the Mid-Atlantic Ridge. Differences in microbial community structure may reflect the distinct depth, geology, and distance from vent of the studied sediments. A metagenomic sequencing approach was used to characterize the microbiome of the deep-sea hydrothermal sediments and the relevant metabolic pathways used by microbes. Both the Menez Gwen and Rainbow metagenomes contained a significant number of genes involved in carbon fixation, revealing the largely autotrophic communities thriving at both sites. Carbon fixation at the Menez Gwen site was predicted to occur mainly via the reductive tricarboxylic acid cycle, likely reflecting the dominance of sulfur-oxidizing Epsilonproteobacteria at this site, while different autotrophic pathways were identified at the Rainbow site, in particular the Calvin-Benson-Bassham cycle. Chemolithotrophy appeared to be driven primarily by the oxidation of reduced sulfur compounds, whether through the SOX-dependent pathway at the Menez Gwen site or through reverse sulfate reduction at the Rainbow site. Other energy-yielding processes, such as methane, nitrite, or ammonia oxidation, were also detected but presumably contribute less to chemolithoautotrophy. This work furthers our knowledge of the microbial ecology of deep-sea hydrothermal sediments and represents an important repository of novel genes with potential biotechnological interest.

  13. Continuous Cropping and Moist Deep Convection on the Canadian Prairies

    Directory of Open Access Journals (Sweden)

    Devon E. Worth

    2012-12-01

Full Text Available Summerfallow is cropland that is purposely kept out of production during a growing season to conserve soil moisture. On the Canadian Prairies, a trend to continuous cropping with a reduction in summerfallow began after the summerfallow area peaked in 1976. This study examined the impact of this land-use change on convective available potential energy (CAPE), a necessary but not sufficient condition for moist deep convection. All else being equal, an increase in CAPE increases the probability-of-occurrence of convective clouds and their intensity if they occur. Representative Bowen ratios for the Black, Dark Brown, and Brown soil zones were determined for 1976 (the maximum summerfallow year), 2001 (our baseline year), and 20xx (a hypothetical year with the maximum-possible annual crop area). Average mid-growing-season Bowen ratios and noon solar radiation were used to estimate the reduction in the lifted index (LI) from land-use-weighted evapotranspiration in each study year. LI is an index of CAPE, and a reduction in LI indicates an increase in CAPE. The largest reductions in LI were found for the Black soil zone: −1.61 ± 0.18, −1.77 ± 0.14 and −1.89 ± 0.16 in 1976, 2001 and 20xx, respectively. These results suggest that, all else being equal, the probability-of-occurrence of moist deep convection in the Black soil zone was lower in 1976 than in the base year 2001, and it will be higher in 20xx when the annual crop area reaches a maximum. The trend to continuous cropping had less impact in the drier Dark Brown and Brown soil zones.
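
    The link between Bowen ratio and convective potential rests on standard surface-energy partitioning: for available energy A and Bowen ratio B = H/LE, the latent heat flux is LE = A/(1+B) and the sensible flux is H = A·B/(1+B). A lower B therefore means more evapotranspiration and, all else equal, a lower (more unstable) lifted index. A sketch with hypothetical numbers, not the study's measured fluxes:

    ```python
    # Partition available surface energy using the Bowen ratio B = H / LE.
    def partition(available_w_m2, bowen):
        le = available_w_m2 / (1.0 + bowen)            # latent heat flux (W/m^2)
        h = available_w_m2 * bowen / (1.0 + bowen)     # sensible heat flux (W/m^2)
        return h, le

    # Hypothetical mid-growing-season values for illustration only:
    h_fallow, le_fallow = partition(500.0, 1.2)   # drier, summerfallow-heavy landscape
    h_crop, le_crop = partition(500.0, 0.6)       # continuously cropped landscape
    ```

    The cropped case moves roughly a third more energy into evapotranspiration, which is the moisture source that lowers LI in the study's estimates.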

  14. Crud removal with deep bed type condensate demineralizer in Tokai-2 BWR

    International Nuclear Information System (INIS)

    Abe, Ayumi; Takiguchi, Hideki; Numata, Kunio; Saito, Toshihiko

    1996-01-01

The major function of the deep-bed condensate polishers installed in BWR power plants is to remove both ionic impurities caused by sea-water leakage and suspended impurities, called crud, consisting mainly of metal oxides produced by metal corrosion. For reducing occupational radiation exposure, it is extremely important to remove the crud effectively. In recent Japanese BWR power plants, condensate pre-filters with powdered ion exchange resins or with hollow-fiber membranes have been installed upstream of the deep-bed polishers to remove the crud. In such plants, crud removal is conventionally the secondary objective of the deep-bed polishers. The Japan Atomic Power Company introduced small-particle ion exchange resin and a soak regeneration method in April 1985, and low cross-linked resin in July 1995, at Tokai-2 Power Station to improve the crud removal performance using only the deep-bed condensate demineralizer; as a result, the condensate demineralizer outlet iron level has been kept below 1 ppb since 1991.

  15. Ceramic Spheres—A Novel Solution to Deep Sea Buoyancy Modules

    Science.gov (United States)

    Jiang, Bo; Blugan, Gurdial; Sturzenegger, Philip N.; Gonzenbach, Urs T.; Misson, Michael; Thornberry, John; Stenerud, Runar; Cartlidge, David; Kuebler, Jakob

    2016-01-01

Ceramic-based hollow spheres are attractive for applications such as offshore buoyancy modules owing to their large diameter-to-wall-thickness ratio and uniform wall thickness. We have developed such thin-walled hollow spheres made of alumina using slip casting and sintering processes. A diameter as large as 50 mm with a wall thickness of 0.5–1.0 mm has been achieved in these spheres. Their material and structural properties were examined by a series of characterization tools. In particular, the feasibility of these spheres was investigated with respect to their application in deep-sea (>3000 m) buoyancy modules. Spheres sintered at 1600 °C with a wall thickness of 1.0 mm achieved a buoyancy of more than 54%; as the wall thickness was reduced to 0.5 mm, the buoyancy reached 72%. Mechanically, such spheres have shown a hydrostatic failure pressure above 150 MPa, corresponding to a rated depth of 5000 m below sea level with a safety factor of 3. The developed alumina-based ceramic spheres are amenable to low-cost, scaled-up production and show great potential at depths greater than those achievable with current deep-sea buoyancy module technologies. PMID:28773651
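
    The reported buoyancy figures can be sanity-checked from first principles: the net buoyancy fraction of a hollow sphere is one minus the ratio of shell mass to displaced seawater mass. The densities below are assumed typical values (alumina ≈ 3950 kg/m³, seawater ≈ 1025 kg/m³), not figures from the paper:

    ```python
    import math

    RHO_ALUMINA = 3950.0    # kg/m^3, assumed typical value
    RHO_SEAWATER = 1025.0   # kg/m^3, assumed typical value

    def buoyancy_fraction(outer_d_mm, wall_mm):
        """Net buoyancy fraction = 1 - shell mass / displaced seawater mass."""
        r_out = outer_d_mm / 2000.0               # outer radius in metres
        r_in = r_out - wall_mm / 1000.0           # inner radius in metres
        v_total = 4.0 / 3.0 * math.pi * r_out**3
        v_shell = v_total - 4.0 / 3.0 * math.pi * r_in**3
        mass = v_shell * RHO_ALUMINA
        displaced = v_total * RHO_SEAWATER
        return 1.0 - mass / displaced

    f_thick = buoyancy_fraction(50.0, 1.0)   # ~0.56, near the reported 54%
    f_thin = buoyancy_fraction(50.0, 0.5)    # ~0.77, near the reported 72%
    ```

    Both estimates land close to the measured 54% and 72%, with the small gap plausibly due to density differences and wall-thickness variation in the real sintered spheres.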

  16. Human-level control through deep reinforcement learning

    Science.gov (United States)

    Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis

    2015-02-01

    The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

  17. Human-level control through deep reinforcement learning.

    Science.gov (United States)

    Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A; Veness, Joel; Bellemare, Marc G; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis

    2015-02-26

    The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
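
    The learning target at the heart of DQN is the temporal-difference target y = r + γ·max_a′ Q(s′, a′), which the deep network approximates from raw pixels. A tabular sketch of the same update on a toy corridor environment (all sizes and hyperparameters illustrative, not the paper's Atari setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Tabular Q-learning on a 1-D corridor: reach state 5 for reward 1.
    N_STATES, GOAL = 6, 5
    ACTIONS = (-1, +1)             # move left / right
    GAMMA, LR, EPS = 0.9, 0.5, 0.2
    Q = np.zeros((N_STATES, 2))

    for _ in range(500):           # episodes
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection
            a = int(rng.integers(2)) if rng.random() < EPS else int(np.argmax(Q[s]))
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            # TD target: y = r + gamma * max_a' Q(s', a'), zero beyond terminal
            target = r + GAMMA * (0.0 if s2 == GOAL else np.max(Q[s2]))
            Q[s, a] += LR * (target - Q[s, a])
            s = s2

    greedy = [int(np.argmax(Q[s])) for s in range(GOAL)]   # 1 = "move right"
    ```

    DQN replaces the table with a convolutional network, adds experience replay and a periodically frozen target network for stability, but optimizes the same target.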

  18. Behaviour of Steel Fibre Reinforced Rubberized Continuous Deep Beams

    Science.gov (United States)

    Sandeep, MS; Nagarajan, Praveen; Shashikala, A. P.

    2018-03-01

Transfer girders and pier caps, which are in fact deep beams, are critical structural elements in high-rise buildings and bridges, respectively. During an earthquake, failure of lifeline structures like bridges and of critical members like transfer girders results in severe catastrophes. Ductility is the key factor that influences the resistance of any structural member against seismic action: members cast from materials with higher ductility possess higher seismic resistance. Previous research shows that concrete containing rubber particles (rubcrete) possesses better ductility and lower density than ordinary concrete. The main hindrance to the use of rubcrete is the reduction in the compressive and tensile strength of concrete due to the presence of rubber. If these undesirable properties can be controlled, a new cementitious composite with better ductility, seismic performance and economy can be developed. A combination of rubber particles and steel fibre has the potential to offset the undesirable effects of rubcrete. In this paper, the effect of rubber particles and steel fibre on the behaviour of two-span continuous deep beams is studied experimentally. Based on the results, optimum proportions of steel fibre and rubber particles for good ductile behaviour with little reduction in collapse load are determined.

  19. 30 CFR 203.60 - Who may apply for royalty relief on a case-by-case basis in deep water in the Gulf of Mexico or...

    Science.gov (United States)

    2010-07-01

    ...-case basis in deep water in the Gulf of Mexico or offshore of Alaska? 203.60 Section 203.60 Mineral... basis in deep water in the Gulf of Mexico or offshore of Alaska? You may apply for royalty relief under... REDUCTION IN ROYALTY RATES OCS Oil, Gas, and Sulfur General Royalty Relief for Pre-Act Deep Water Leases and...

  20. Global Lunar Topography from the Deep Space Gateway for Science and Exploration

    Science.gov (United States)

    Archinal, B.; Gaddis, L.; Kirk, R.; Edmundson, K.; Stone, T.; Portree, D.; Keszthelyi, L.

    2018-02-01

The Deep Space Gateway, in low lunar orbit, could be used to achieve a long-standing goal of lunar science: collecting stereo images over two months to make a complete, uniform, high-resolution, known-accuracy global topographic model of the Moon.

  1. Reduction Mammoplasty: A Comparison Between Operations Performed by Plastic Surgery and General Surgery.

    Science.gov (United States)

    Kordahi, Anthony M; Hoppe, Ian C; Lee, Edward S

    2015-01-01

Reduction mammoplasty is a procedure often performed by plastic surgeons and, increasingly, by general surgeons. The question has been posed in both the general surgical and the plastic surgical literature as to whether this procedure should remain the domain of surgical specialists. Some general surgeons are trained in breast reductions, whereas all plastic surgeons receive training in this procedure. The National Surgical Quality Improvement Project provides a unique opportunity to compare the 2 surgical specialties in an unbiased manner in terms of preoperative comorbidities and 30-day postoperative complications. The National Surgical Quality Improvement Project database was queried for the years 2005-2012. Patients were identified as having undergone a reduction mammoplasty by Current Procedural Terminology codes. Results were refined to include only females with an International Classification of Diseases, Ninth Revision, code of 611.1 (hypertrophy of breasts). Information was collected regarding age, surgical specialty performing the procedure, body mass index, and other preoperative variables. The outcomes utilized were superficial surgical site infection, deep surgical site infection, wound dehiscence, postoperative respiratory compromise, pulmonary embolism, deep vein thrombosis, perioperative transfusion, operative time, reintubation, reoperation, and length of hospital stay. During this time period, 6239 reduction mammoplasties were performed within the National Surgical Quality Improvement Project database: 339 by general surgery and 5900 by plastic surgery. No statistical differences were detected between the 2 groups with regard to superficial wound infections, deep wound infections, organ space infections, or wound dehiscence. There were no significant differences noted between the groups with regard to systemic postoperative complications.
Patients undergoing a procedure by general surgery were more likely
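
    Comparisons of complication rates between two cohorts of this kind are commonly made with a two-proportion z-test; the sketch below uses made-up counts, not NSQIP results:

    ```python
    import math

    def two_proportion_z(x1, n1, x2, n2):
        """z statistic for H0: the two complication rates are equal."""
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)                      # pooled proportion
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Hypothetical counts: e.g. 12 superficial infections in 339 general-surgery
    # cases vs 180 in 5900 plastic-surgery cases (illustrative numbers only).
    z = two_proportion_z(12, 339, 180, 5900)
    ```

    With a small cohort of 339 against 5900, even a noticeably higher observed rate can fall well inside ±1.96, which is consistent with the abstract's finding of no significant differences.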

  2. The effect of the electrode material on the electrodeposition of zinc from deep eutectic solvents

    International Nuclear Information System (INIS)

    Vieira, L.; Schennach, R.; Gollas, B.

    2016-01-01

Highlights: • Mechanistic insight into zinc electrodeposition from deep eutectic solvents. • Overpotential for hydrogen evolution affects the electrodeposition of zinc. • Electrodeposited zinc forms surface alloys on Cu, Au, and Pt. • In situ PM-IRRAS of a ZnCl_2-containing deep eutectic solvent on glassy carbon. - Abstract: The voltammetric behaviour of the ZnCl_2-containing deep eutectic solvent choline chloride/ethylene glycol 1:2 was investigated on glassy carbon, stainless steel, Au, Pt, Cu, and Zn electrodes. While cyclic voltammetry on glassy carbon and stainless steel showed a cathodic peak for zinc electrodeposition only in the anodic reverse sweep, a cathodic peak was also found in the cathodic forward sweep on Au, Pt, Cu, and Zn. This behaviour is in agreement with the proposed mechanism of zinc deposition from an intermediate species Z, whose formation depends on the cathodic reduction potential of the solvent. The voltammetric reduction of the electrolyte involves hydrogen evolution, and as a result the formation of Z and its reduction to zinc depend on the hydrogen overpotential of each electrode material. On Au, Pt, and Cu, the anodic stripping was also different from that on glassy carbon and steel, owing to the formation of surface zinc alloys with these three metals. The morphology of the zinc layers on Cu was characterised by scanning electron microscopy and focused ion beam. X-ray diffraction confirmed the presence of crystalline zinc and a Cu_4Zn phase. Spectroelectrochemistry by means of polarization modulation reflection-absorption spectroscopy (PM-IRRAS) on a glassy carbon electrode in the ZnCl_2-containing deep eutectic solvent showed characteristic potential-dependent changes. The variation of band intensities at different applied potentials correlates with the voltammetry and suggests the formation of a compact blocking layer on the electrode surface, which inhibits the electrodeposition of zinc at sufficiently negative

  3. Structure, functioning, and cumulative stressors of Mediterranean deep-sea ecosystems

    Science.gov (United States)

    Tecchio, Samuele; Coll, Marta; Sardà, Francisco

    2015-06-01

Environmental stressors, such as climate fluctuations, and anthropogenic stressors, such as fishing, are of major concern for the management of deep-sea ecosystems. Deep-water habitats are limited by primary productivity and depend mainly on the vertical input of organic matter from the surface. Global change over recent decades has imparted variations in primary productivity across the oceans, and thus affects the amount of organic matter reaching the deep seafloor. In addition, anthropogenic impacts are now reaching the deep ocean. The Mediterranean Sea, the largest enclosed basin on the planet, is no exception. However, ecosystem-level studies of the response of deep-sea ecosystems to varying food input and anthropogenic stressors are still scant. We present here a comparative ecological network analysis of three food webs of the deep Mediterranean Sea with contrasting trophic structure. After modelling the flows of these food webs with the Ecopath with Ecosim approach, we compared indicators of network structure and functioning. We then developed temporal dynamic simulations varying the organic matter input to evaluate its potential effect. Results show that, following the west-to-east gradient of marine snow input in the Mediterranean Sea, organic matter recycling increases, net production decreases to negative values, and trophic organisation is overall reduced. The levels of food-web activity followed the gradient of organic matter availability at the seafloor, confirming that deep-water ecosystems depend directly on marine snow and are therefore influenced by variations in energy input, such as climate-driven changes. In addition, simulations of varying marine snow arrival at the seafloor, combined with the hypothesis of a possible fishery expansion on the lower continental slope in the western basin, show that the trawling fishery may pose an impact which could be an order of magnitude stronger than a climate

  4. Volume fracturing of deep shale gas horizontal wells

    Directory of Open Access Journals (Sweden)

    Tingxue Jiang

    2017-03-01

Full Text Available Deep shale gas reservoirs, buried deeper than 3500 m, are characterized by high in-situ stress, large horizontal stress difference, complex distribution of bedding and natural fractures, and strong rock plasticity. During hydraulic fracturing these reservoirs therefore often exhibit difficult fracture extension, low fracture complexity, low stimulated reservoir volume (SRV), low conductivity and fast decline, which greatly hinder the economic and effective development of deep shale gas. In this paper, a specific and feasible technique for volume fracturing of deep shale gas horizontal wells is presented. In addition to planar perforation, multi-scale fracturing, full-scale fracture filling, and control over the extension of high-angle natural fractures, some supporting techniques are proposed, including multi-stage alternate injection (of acid fluid, slick water and gel) and injection of mixed- and small-grained proppant at variable viscosity and displacement. These techniques help to increase the effective stimulated reservoir volume (ESRV) for deep gas production. Some of the techniques have been successfully used in the fracturing of deep shale gas horizontal wells in the Yongchuan, Weiyuan and southern Jiaoshiba blocks of the Sichuan Basin. As a result, Wells YY1HF and WY1HF initially yielded 14.1 × 10⁴ m³/d and 17.5 × 10⁴ m³/d after fracturing. The volume fracturing of deep shale gas horizontal wells is meaningful in achieving the productivity of 50 × 10⁸ m³ of gas from the 3500–4000 m interval in Phase II development of Fuling, and also in the commercial production of huge shale gas resources at vertical depths of less than 6000 m.

  5. Climate Leadership Award for Excellence in GHG Management (Goal Achievement Award)

    Science.gov (United States)

Apply for the Climate Leadership Award for Excellence in GHG Management (Goal Achievement Award), which publicly recognizes organizations that achieve aggressive, publicly set greenhouse gas emission reduction goals.

  6. Study on the Geological Structure around KURT Using a Deep Borehole Investigation

    International Nuclear Information System (INIS)

    Park, Kyung Woo; Kim, Kyung Su; Koh, Yong Kwon; Choi, Jong Won

    2010-01-01

    To characterize the geological features of the study area for high-level radioactive waste disposal research, KAERI (Korea Atomic Energy Research Institute) has been performing several geological investigations, such as geophysical surveys and borehole drilling, since 1997. In particular, the KURT (KAERI Underground Research Tunnel) was constructed in 2006 to investigate the deep geological environment. Recently, a 500 m deep borehole was drilled at the left research module of the KURT to confirm and validate the geological model. The objective of this research was to identify the geological structures around the KURT using data obtained from the deep borehole investigation. To achieve this purpose, several geological investigations, such as geophysical and borehole fracture surveys, were carried out simultaneously. As a result, 7 fracture zones were identified in the deep borehole in the KURT. As an important part of the site characterization of the KURT area, the results will be used to revise the geological model of the study area.

  7. Enhanced deep ocean ventilation and oxygenation with global warming

    Science.gov (United States)

    Froelicher, T. L.; Jaccard, S.; Dunne, J. P.; Paynter, D.; Gruber, N.

    2014-12-01

    Twenty-first century coupled climate model simulations, observations from the recent past, and theoretical arguments suggest a consistent trend towards warmer ocean temperatures and fresher polar surface oceans in response to increased radiative forcing, resulting in increased upper ocean stratification and reduced ventilation and oxygenation of the deep ocean. Paleo-proxy records of the warming at the end of the last ice age, however, suggest a different outcome, namely a better ventilated and oxygenated deep ocean under global warming. Here we use a four thousand year global warming simulation from a comprehensive Earth System Model (GFDL ESM2M) to show that this conundrum is a consequence of different rates of warming, and that the deep ocean is actually better ventilated and oxygenated in a future warmer equilibrated climate, consistent with paleo-proxy records. The enhanced deep ocean ventilation in the Southern Ocean occurs in spite of increased positive surface buoyancy fluxes and a constancy of the Southern Hemisphere westerly winds - circumstances that would otherwise be expected to lead to a reduction in deep ocean ventilation. This ventilation recovery occurs through a global-scale interaction in which the Atlantic Meridional Overturning Circulation, after an initial century of transient decrease, undergoes a multi-centennial recovery and transports salinity-rich waters from the subtropical surface ocean to the Southern Ocean interior on multi-century timescales. The subsequent upwelling of salinity-rich waters in the Southern Ocean strips away the freshwater cap that maintains vertical stability and increases open ocean convection and the formation of Antarctic Bottom Waters. As a result, the global ocean oxygen content and the nutrient supply from the deep ocean to the surface are higher in a warmer ocean. The implications for past and future changes in ocean heat and carbon storage will be discussed.

  8. Deep learning

    CERN Document Server

    Goodfellow, Ian; Courville, Aaron

    2016-01-01

    Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing.
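
    The layered re-representation this abstract describes can be illustrated with a minimal deep feedforward network. This is a generic NumPy sketch, not code from the book; the layer sizes and initialization are arbitrary choices for illustration.

    ```python
    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def init_layer(rng, n_in, n_out):
        # Small random weights; biases start at zero.
        return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

    def forward(x, layers):
        # Each layer re-represents its input; stacking layers yields the
        # "hierarchy of concepts" the abstract describes.
        for w, b in layers[:-1]:
            x = relu(x @ w + b)
        w, b = layers[-1]
        return x @ w + b  # linear output layer

    rng = np.random.default_rng(0)
    sizes = [4, 16, 16, 3]  # input -> two hidden layers -> output
    layers = [init_layer(rng, a, b) for a, b in zip(sizes[:-1], sizes[1:])]
    y = forward(rng.normal(size=(8, 4)), layers)
    print(y.shape)  # (8, 3)
    ```

    Each hidden layer builds features out of the previous layer's features; training (omitted here) would adjust the weights by gradient descent.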

  9. Setup for in situ deep level transient spectroscopy of semiconductors during swift heavy ion irradiation.

    Science.gov (United States)

    Kumar, Sandeep; Kumar, Sugam; Katharria, Y S; Safvan, C P; Kanjilal, D

    2008-05-01

    A computerized system for in situ deep level characterization of semiconductors during irradiation has been set up and tested in the beam line for materials science studies of the 15 MV Pelletron accelerator at the Inter-University Accelerator Centre, New Delhi. This is a new facility for in situ studies of irradiation-induced deep levels, available in the beam line of an accelerator laboratory. It is based on the well-known deep level transient spectroscopy (DLTS) technique. High versatility for data manipulation is achieved through a multifunction data acquisition card and LabVIEW. In situ DLTS studies of deep levels produced by the impact of 100 MeV Si ions on an Au/n-Si(100) Schottky barrier diode are presented to illustrate the performance of the automated DLTS facility in the beam line.

  10. A Novel Text Clustering Approach Using Deep-Learning Vocabulary Network

    Directory of Open Access Journals (Sweden)

    Junkai Yi

    2017-01-01

    Full Text Available Text clustering is an effective approach to collecting and organizing text documents into meaningful groups for mining valuable information on the Internet. However, some issues remain to be tackled, such as feature extraction and data dimension reduction. To overcome these problems, we present a novel approach named the deep-learning vocabulary network. The vocabulary network is constructed based on a related-word set, which contains the “cooccurrence” relations of words or terms. We replace term frequency in feature vectors with the “importance” of words in terms of the vocabulary network and PageRank, which can generate more precise feature vectors to represent the meaning of the text. Furthermore, a sparse-group deep belief network is proposed to reduce the dimensionality of feature vectors, and we introduce the coverage rate as a similarity measure in Single-Pass clustering. To verify the effectiveness of our work, we compare the approach to representative algorithms, and experimental results show that feature vectors based on the deep-learning vocabulary network achieve better clustering performance.
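
    The abstract's idea of replacing term frequency with a PageRank-derived "importance" over a word co-occurrence graph can be sketched as follows. The graph construction, damping factor, and toy documents below are illustrative assumptions, not the authors' implementation.

    ```python
    from collections import defaultdict
    from itertools import combinations

    def pagerank(graph, damping=0.85, iters=50):
        # graph: node -> set of neighbours (undirected co-occurrence links).
        nodes = list(graph)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iters):
            new = {}
            for n in nodes:
                incoming = sum(rank[m] / len(graph[m]) for m in graph if n in graph[m])
                new[n] = (1 - damping) / len(nodes) + damping * incoming
            rank = new
        return rank

    docs = [["deep", "learning", "network"],
            ["vocabulary", "network", "clustering"],
            ["text", "clustering", "network"]]

    # Build the co-occurrence ("related-word") graph from the documents.
    graph = defaultdict(set)
    for doc in docs:
        for a, b in combinations(set(doc), 2):
            graph[a].add(b)
            graph[b].add(a)

    weights = pagerank(dict(graph))
    # "network" co-occurs with the most terms, so it gets the highest weight.
    top = max(weights, key=weights.get)
    print(top)
    ```

    These PageRank scores would then replace raw term frequencies when building each document's feature vector.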

  11. Sarcoidosis, Celiac Disease and Deep Venous Thrombosis: a Rare Association

    Directory of Open Access Journals (Sweden)

    Gökhan Çelik

    2011-11-01

    Full Text Available Sarcoidosis is a multisystem granulomatous disorder of unknown etiology, and it may rarely be associated with a second disorder. Celiac disease is an immune-mediated enteropathy characterized by malabsorption caused by gluten intolerance, and several reports indicate an association between celiac disease and sarcoidosis. In addition, although celiac disease is associated with several extraintestinal pathologies, venous thrombosis has rarely been reported. Herein we present a rare case of a patient diagnosed with sarcoidosis, celiac disease and deep venous thrombosis, given the rare association of these disorders. The patient was admitted with abdominal pain, weight loss, chronic diarrhea and a 5-day history of swelling in her right leg. A diagnosis of deep venous thrombosis was made by Doppler ultrasonographic examination. The diagnosis of celiac disease was made by biopsy of the duodenal mucosa and supported by elevated serum levels of anti-gliadin IgA and IgG, and the diagnosis of sarcoidosis was made by transbronchial needle aspiration from the subcarinal lymph node during flexible bronchoscopy.

  12. Reduction of deepwater formation in the Greenland Sea during the 1980s: Evidence from tracer data

    International Nuclear Information System (INIS)

    Schlosser, P.; Boenisch, G.; Bayer, R.; Rhein, M.

    1991-01-01

    Hydrographic observations and measurements of the concentrations of chlorofluorocarbons (CFCs) have suggested that the formation of Greenland Sea Deep Water (GSDW) slowed down considerably during the 1980s. Such a decrease is related to weakened convection in the Greenland Sea and thus could have a significant impact on the properties of the waters flowing over the Scotland-Iceland-Greenland ridge system into the deep Atlantic. Study of the variability of GSDW formation is relevant for understanding the impact of the circulation in the European Polar seas on regional and global deep water characteristics. New long-term multitracer observations from the Greenland Sea show that GSDW formation indeed was greatly reduced during the 1980s. A box model of deepwater formation and exchange in the European Polar seas, tuned by the tracer data, indicates that the reduction in the GSDW formation rate was about 80% and that the start date of the reduction was between 1978 and 1982. 24 refs., 4 figs
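
    The kind of tracer-constrained box-model reasoning the abstract describes can be sketched with a toy two-box balance: a surface box carrying a rising CFC-like tracer and a deep box renewed by convection. The renewal rate, tracer history, and cutoff year below are hypothetical stand-ins, not the calibrated model from the study.

    ```python
    import numpy as np

    # Hypothetical two-box sketch (surface box, deep box); the paper's actual
    # box model and its parameters are not reproduced here.
    years = np.arange(1960, 1991)
    surface = np.linspace(0.2, 1.0, years.size)  # rising CFC-like tracer (normalized)

    def deep_tracer(renewal_per_yr, cutoff_year=1980, reduction=0.8):
        c = 0.0
        out = []
        for yr, cs in zip(years, surface):
            k = renewal_per_yr * ((1 - reduction) if yr >= cutoff_year else 1.0)
            c += k * (cs - c)  # deep box relaxes toward surface at the renewal rate
            out.append(c)
        return np.array(out)

    with_slowdown = deep_tracer(0.1)                  # 80% cut in renewal after 1980
    no_slowdown = deep_tracer(0.1, reduction=0.0)     # constant renewal for comparison
    # Weakened convection after 1980 leaves the deep box less tagged by tracer.
    print(with_slowdown[-1] < no_slowdown[-1])  # True
    ```

    Fitting such a model to observed deep tracer concentrations is what lets the renewal-rate reduction and its start date be estimated.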

  13. The role of poverty reduction strategies in achieving the millennium development goals

    NARCIS (Netherlands)

    Bezemer, Dirk; Eggen, Andrea

    2008-01-01

    We provide a literature overview of the linkages between Poverty Reduction Strategy Papers (PRSPs) and the Millennium Development Goals (MDGs) and use novel data to examine their relation. We find that the introduction of a PRSP is associated with progress in four of the nine MDG indicators we study.

  14. Deep and tapered silicon photonic crystals for achieving anti-reflection and enhanced absorption.

    Science.gov (United States)

    Hung, Yung-Jr; Lee, San-Liang; Coldren, Larry A

    2010-03-29

    Tapered silicon photonic crystals (PhCs) with smooth sidewalls are realized using a novel single-step deep reactive ion etching process. The PhCs can significantly reduce the surface reflection over the wavelength range between the ultraviolet and near-infrared regions. Measurements with a spectrophotometer and an angle-variable spectroscopic ellipsometer show that the sub-wavelength periodic structure provides a broad and angle-independent antireflective window in the visible region for TE-polarized light. The PhCs with tapered rods can further reduce the reflection due to a gradually changing effective index. On the other hand, strong optical resonances for the TM mode are found in this structure, mainly due to the existence of full photonic bandgaps inside the material. Such resonances can enhance the optical absorption inside the silicon PhCs owing to the increased optical paths. With both antireflective and absorption-enhancing characteristics, the PhCs can be used for various applications.
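
    The anti-reflection effect of a gradually changing effective index can be illustrated with normal-incidence Fresnel reflectances. The index values and step profile below are rough assumptions (a visible-range silicon index near 4, interference effects ignored), not data from the paper.

    ```python
    def step_reflectance(n1, n2):
        # Normal-incidence Fresnel reflectance of a single abrupt interface.
        return ((n1 - n2) / (n1 + n2)) ** 2

    n_air, n_si = 1.0, 4.0  # n_si ~ 4 is a rough visible-range value for silicon
    abrupt = step_reflectance(n_air, n_si)  # 0.36 for an abrupt air/Si interface

    # Grade the index in several small steps, as the tapered rods do effectively.
    steps = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
    per_step = [step_reflectance(a, b) for a, b in zip(steps, steps[1:])]
    graded_bound = sum(per_step)  # crude upper bound, ignoring interference

    print(abrupt, graded_bound)
    ```

    Even this crude sum of per-step reflectances is several times smaller than the abrupt-interface value, which is the intuition behind the tapered-rod design.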

  15. Canadian options for greenhouse gas emission reduction (COGGER)

    International Nuclear Information System (INIS)

    Robinson, J.; Fraser, M.; Haites, E.; Harvey, D.; Jaccard, M.; Reinsch, A.; Torrie, R.

    1993-09-01

    A panel was formed to assess the feasibility and cost of energy-related greenhouse gas (GHG) emission reductions in Canada. The panel studies focused on the potential for increased energy efficiency and fuel switching and their effect in reducing CO2 emissions by reviewing the extensive literature available on those topics and assessing their conclusions. Economically feasible energy savings are estimated mostly in the range of 20-40% savings by the year 2010 relative to a reference-case projection, with a median of 23%. The panel concluded that achieving the identified economic potential for increased energy efficiency by 2010 will depend on the development of additional demand-side management or energy efficiency programs that go well beyond current policies and programs. Fuel switching will play a much smaller role in stabilizing energy-related CO2 emissions than improved energy efficiency. Technology substitution and broader structural change would enable Canada to achieve significant reductions in CO2 emissions; however, more research is needed on achieving emission reductions that would approach the levels estimated to be required globally for stabilization of atmospheric CO2 concentrations. Achieving such emission reductions would likely require a combination of significant improvements in energy efficiency, major changes in energy sources, and substantial changes in economic activity and lifestyles, relative to those projected in most reference-case forecasts. 5 refs., 1 fig., 10 tabs

  16. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study.

    Science.gov (United States)

    Rosenberg, Oded; Roth, Yiftach; Kotler, Moshe; Zangen, Abraham; Dannon, Pinhas

    2011-02-09

    Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region on auditory hallucinations. Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from Beer Ya'akov Mental Health Institution (Tel Aviv university, Israel) ambulatory clinics, as well as from other hospitals outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms scores (SAPS), Clinical Global Impressions (CGI) scale, and the Scale for Assessment of Negative Symptoms (SANS). This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and to a lesser extent improvement in SAPS results (an average reduction of 16.5% ± 20.3%). In this study, we have demonstrated the potential of deep TMS treatment over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger samples in a double-blind sham-controlled design are now being preformed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. This trial is registered with clinicaltrials.gov (identifier: NCT00564096).

  17. Comparative Single-Cell Genomics of Chloroflexi from the Okinawa Trough Deep-Subsurface Biosphere.

    Science.gov (United States)

    Fullerton, Heather; Moyer, Craig L

    2016-05-15

    Chloroflexi small-subunit (SSU) rRNA gene sequences are frequently recovered from subseafloor environments, but the metabolic potential of the phylum is poorly understood. The phylum Chloroflexi is represented by isolates with diverse metabolic strategies, including anoxic phototrophy, fermentation, and reductive dehalogenation; therefore, function cannot be attributed to these organisms based solely on phylogeny. Single-cell genomics can provide metabolic insights into uncultured organisms, like the deep-subsurface Chloroflexi. Nine SSU rRNA gene sequences were identified from single-cell sorts of whole-round core material collected from the Okinawa Trough at the Iheya North hydrothermal field as part of Integrated Ocean Drilling Program (IODP) expedition 331 (Deep Hot Biosphere). Previous studies of subsurface Chloroflexi single amplified genomes (SAGs) suggested heterotrophic or lithotrophic metabolisms and provided no evidence for growth by reductive dehalogenation. Our nine Chloroflexi SAGs (seven of which are from the order Anaerolineales) indicate that, in addition to genes for the Wood-Ljungdahl pathway, exogenous carbon sources can be actively transported into cells. At least one subunit for pyruvate ferredoxin oxidoreductase was found in four of the Chloroflexi SAGs. This protein can provide a link between the Wood-Ljungdahl pathway and other carbon anabolic pathways. Finally, one of the seven Anaerolineales SAGs contains a distinct reductive dehalogenase homolog (rdhA) gene. Through the use of single amplified genomes (SAGs), we have extended the metabolic potential of an understudied group of subsurface microbes, the Chloroflexi. These microbes are frequently detected in the subsurface biosphere, though their metabolic capabilities have remained elusive. In contrast to previously examined Chloroflexi SAGs, our genomes (several are from the order Anaerolineales) were recovered from a hydrothermally driven system and therefore provide a unique window into

  18. Silicon germanium mask for deep silicon etching

    KAUST Repository

    Serry, Mohamed

    2014-07-29

    Polycrystalline silicon germanium (SiGe) can offer excellent etch selectivity to silicon during cryogenic deep reactive ion etching in an SF.sub.6/O.sub.2 plasma. Etch selectivity of over 800:1 (Si:SiGe) may be achieved at etch temperatures from -80 degrees Celsius to -140 degrees Celsius. High aspect ratio structures with high resolution may be patterned into Si substrates using SiGe as a hard mask layer for construction of microelectromechanical systems (MEMS) devices and semiconductor devices.

  19. Silicon germanium mask for deep silicon etching

    KAUST Repository

    Serry, Mohamed; Rubin, Andrew; Refaat, Mohamed; Sedky, Sherif; Abdo, Mohammad

    2014-01-01

    Polycrystalline silicon germanium (SiGe) can offer excellent etch selectivity to silicon during cryogenic deep reactive ion etching in an SF.sub.6/O.sub.2 plasma. Etch selectivity of over 800:1 (Si:SiGe) may be achieved at etch temperatures from -80 degrees Celsius to -140 degrees Celsius. High aspect ratio structures with high resolution may be patterned into Si substrates using SiGe as a hard mask layer for construction of microelectromechanical systems (MEMS) devices and semiconductor devices.
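
    A practical consequence of the reported 800:1 Si:SiGe selectivity is that very thin SiGe hard masks suffice for deep etches. The 400 um target depth below is an illustrative figure, not one from the record.

    ```python
    def min_mask_thickness(etch_depth_um, selectivity):
        # Thickness of hard mask consumed while etching to the target depth,
        # assuming the selectivity holds uniformly through the etch.
        return etch_depth_um / selectivity

    # At the reported 800:1 Si:SiGe selectivity, a 400 um-deep silicon etch
    # consumes only 0.5 um of the SiGe mask.
    print(min_mask_thickness(400.0, 800.0))  # 0.5
    ```

    In practice one would add margin on top of this minimum, but the ratio shows why such a selectivity enables high-aspect-ratio MEMS structures with thin masks.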

  20. Self-Paced Prioritized Curriculum Learning With Coverage Penalty in Deep Reinforcement Learning.

    Science.gov (United States)

    Ren, Zhipeng; Dong, Daoyi; Li, Huaxiong; Chen, Chunlin

    2018-06-01

    In this paper, a new training paradigm is proposed for deep reinforcement learning using self-paced prioritized curriculum learning with coverage penalty. The proposed deep curriculum reinforcement learning (DCRL) takes the most advantage of experience replay by adaptively selecting appropriate transitions from replay memory based on the complexity of each transition. The criteria of complexity in DCRL consist of self-paced priority as well as coverage penalty. The self-paced priority reflects the relationship between the temporal-difference error and the difficulty of the current curriculum for sample efficiency. The coverage penalty is taken into account for sample diversity. With comparison to deep Q network (DQN) and prioritized experience replay (PER) methods, the DCRL algorithm is evaluated on Atari 2600 games, and the experimental results show that DCRL outperforms DQN and PER on most of these games. More results further show that the proposed curriculum training paradigm of DCRL is also applicable and effective for other memory-based deep reinforcement learning approaches, such as double DQN and dueling network. All the experimental results demonstrate that DCRL can achieve improved training efficiency and robustness for deep reinforcement learning.
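
    The two selection criteria the abstract names, a self-paced priority and a coverage penalty, can be sketched as a replay-sampling rule. The specific formulas, weights, and softmax sampling below are hypothetical stand-ins for the paper's criteria, not the DCRL implementation.

    ```python
    import math
    import random

    def complexity(td_error, visits, curriculum_difficulty=1.0, penalty_weight=0.1):
        # Self-paced priority: large TD errors matter, tempered by how hard
        # the transition is relative to the current curriculum stage.
        priority = abs(td_error) / curriculum_difficulty
        # Coverage penalty: frequently replayed transitions are down-weighted
        # to keep the sampled batches diverse.
        return priority - penalty_weight * math.log1p(visits)

    def sample(memory, k, rng):
        # memory: list of (td_error, visits) pairs; softmax over complexity scores.
        scores = [complexity(e, v) for e, v in memory]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        return rng.choices(range(len(memory)), weights=weights, k=k)

    rng = random.Random(0)
    memory = [(2.0, 0), (2.0, 50), (0.1, 0)]
    picks = sample(memory, 1000, rng)
    # The high-error, rarely-replayed transition 0 is chosen most often.
    print(max(set(picks), key=picks.count))  # 0
    ```

    Raising `curriculum_difficulty` over training would shift sampling toward harder transitions, mimicking a curriculum.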

  1. Arsenic migration to deep groundwater in Bangladesh influenced by adsorption and water demand

    Science.gov (United States)

    Radloff, K. A.; Zheng, Y.; Michael, H. A.; Stute, M.; Bostick, B. C.; Mihajlov, I.; Bounds, M.; Huq, M. R.; Choudhury, I.; Rahman, M. W.; Schlosser, P.; Ahmed, K. M.; van Geen, A.

    2011-11-01

    The consumption of shallow groundwater with elevated concentrations of arsenic is causing widespread disease in many parts of South and Southeast Asia. In the Bengal Basin, a growing reliance on groundwater sourced below 150-m depth--where arsenic concentrations tend to be lower--has reduced exposure. Groundwater flow simulations have suggested that these deep waters are at risk of contamination due to replenishment with high-arsenic groundwater from above, even when deep water pumping is restricted to domestic use. However, these simulations have neglected the influence of sediment adsorption on arsenic migration. Here, we inject arsenic-bearing groundwater into a deep aquifer zone in Bangladesh, and monitor the reduction in arsenic levels over time following stepwise withdrawal of the water. Arsenic concentrations in the injected water declined by 70% after 24h in the deep aquifer zone, owing to adsorption on sediments; concentrations of a co-injected inert tracer remain unchanged. We incorporate the experimentally determined adsorption properties of sands in the deep aquifer zone into a groundwater flow and transport model covering the Bengal Basin. Simulations using present and future scenarios of water-use suggest that arsenic adsorption significantly retards transport, thereby extending the area over which deep groundwater can be used with low risk of arsenic contamination. Risks are considerably lower when deep water is pumped for domestic use alone. Some areas remain vulnerable to arsenic intrusion, however, and we suggest that these be prioritized for monitoring.
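
    The retarding effect of adsorption on arsenic transport can be sketched with the standard linear-sorption retardation factor, R = 1 + (rho_b / theta) * Kd. The bulk density, porosity, and Kd values below are illustrative, not the experimentally determined sediment properties from the study.

    ```python
    def retardation_factor(bulk_density_kg_L, porosity, kd_L_kg):
        # R = 1 + (rho_b / theta) * Kd for linear equilibrium sorption.
        return 1.0 + (bulk_density_kg_L / porosity) * kd_L_kg

    # Illustrative values only (not from the study): rho_b = 1.5 kg/L,
    # porosity = 0.25, Kd = 1.0 L/kg.
    R = retardation_factor(1.5, 0.25, 1.0)

    # The sorbing solute front moves R times slower than the groundwater.
    groundwater_velocity_m_yr = 2.0
    arsenic_velocity = groundwater_velocity_m_yr / R
    print(R, arsenic_velocity)  # 7.0 ...
    ```

    A retardation factor of this magnitude is why, in the basin-scale simulations, adsorption substantially extends the area over which deep groundwater remains low-risk.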

  2. Highly efficient deep ultraviolet generation by sum-frequency mixing ...

    Indian Academy of Sciences (India)

    Generation of deep ultraviolet radiation at 210 nm by Type-I third harmonic generation is achieved in a pair of BBO crystals with conversion efficiency as high as 36%. The fundamental source is the dye laser radiation pumped by the second harmonic of a Q-switched Nd : YAG laser. A walk-off compensated configuration ...

  3. Deep waters : the Ottawa River and Canada's nuclear adventure

    International Nuclear Information System (INIS)

    Krenz, F.H.K.

    2004-01-01

    Deep Waters is an intimate account of the principal events and personalities involved in the successful development of the Canadian nuclear power system (CANDU), an achievement that is arguably one of Canada's greatest scientific and technical successes of the twentieth century. The author tells the stories of the people involved and the problems they faced and overcame and also relates the history of the development of the town of Deep River, built exclusively for the scientists and employees of the Chalk River Project and describes the impact of the Project on the traditional communities of the Ottawa Valley. Public understanding of nuclear power has remained confused, yet decisions about whether and how to use it are of vital importance to Canadians today - and will increase in importance as we seek to maintain our standard of living without doing irreparable damage to the environment around us. Deep Waters examines the issues involved in the use of nuclear power without over-emphasizing its positive aspects or avoiding its negative aspects.

  4. Simulation of deep ventilation in Crater Lake, Oregon, 1951–2099

    Science.gov (United States)

    Wood, Tamara M.; Wherry, Susan A.; Piccolroaz, Sebastiano; Girdner, Scott F

    2016-05-04

    The frequency of deep ventilation events in Crater Lake, a caldera lake in the Oregon Cascade Mountains, was simulated in six future climate scenarios, using a 1-dimensional deep ventilation model (1DDV) that was developed to simulate the ventilation of deep water initiated by reverse stratification and subsequent thermobaric instability. The model was calibrated and validated with lake temperature data collected from 1994 to 2011. Wind and air temperature data from three general circulation models and two representative concentration pathways were used to simulate the change in lake temperature and the frequency of deep ventilation events in possible future climates. The lumped model air2water was used to project lake surface temperature, a required boundary condition for the lake model, based on air temperature in the future climates.The 1DDV model was used to simulate daily water temperature profiles through 2099. All future climate scenarios projected increased water temperature throughout the water column and a substantive reduction in the frequency of deep ventilation events. The least extreme scenario projected the frequency of deep ventilation events to decrease from about 1 in 2 years in current conditions to about 1 in 3 years by 2100. The most extreme scenario considered projected the frequency of deep ventilation events to be about 1 in 7.7 years by 2100. All scenarios predicted that the temperature of the entire water column will be greater than 4 °C for increasing lengths of time in the future and that the conditions required for thermobaric instability induced mixing will become rare or non-existent.The disruption of deep ventilation by itself does not provide a complete picture of the potential ecological and water quality consequences of warming climate to Crater Lake. 
Estimating the effect of warming climate on deep water oxygen depletion and water clarity will require careful modeling studies to combine the physical mixing processes affected by

  5. Reduction of light oil usage as power fluid for jet pumping in deep heavy oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S.; Li, H.; Yang, D. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Regina Univ., SK (Canada); Zhang, Q. [China Univ. of Petroleum, Dongying, Shandong (China); He, J. [China National Petroleum Corp., Haidan District, Beijing (China). PetroChina Tarim Oilfield Co.

    2008-10-15

    In deep heavy oil reservoirs, reservoir fluid can flow relatively easily in the formation and around the bottomhole. However, along its path up the production string, the viscosity of the reservoir fluid increases dramatically due to heat loss and release of the dissolved gas, resulting in a significant pressure drop along the wellbore. Artificial lifting methods must therefore be adopted to pump the reservoir fluids to the surface. This paper discussed the development of a new technique for reducing the amount of light oil used for jet pumping in deep heavy oil wells. Two approaches were discussed. Approach A uses light oil as the power fluid first to obtain produced fluid with lower viscosity, and then the produced fluid is reinjected into the well as the power fluid. The process continues until the viscosity of the produced fluid is too high for it to be utilized. Approach B combines a portion of the produced fluid with light oil at a reasonable ratio, and the produced fluid-light oil mixture is then used as the power fluid for deep heavy oil well production. The viscosity of the blended power fluid continues to increase and eventually reaches equilibrium. The paper presented the detailed processes of both approaches in order to indicate how to apply them in field applications. Theoretical models were also developed and presented to determine the key parameters in the field operations. A field case was also presented, and the two approaches were compared and analyzed. It was concluded from the field applications that, with a given amount of light oil, the amount of reservoir fluid produced using the new technique could be 3 times higher than that of the conventional jet pumping method. 17 refs., 3 tabs., 6 figs.
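
    The blended power fluid of Approach B can be sketched with a log-linear (Arrhenius-type) viscosity mixing rule, a common first approximation for oil blends. This rule and the viscosities and mixing fraction below are assumptions for illustration, not the paper's correlation or field data.

    ```python
    import math

    def arrhenius_blend_viscosity(mu1, mu2, x1):
        # Arrhenius (log-linear) mixing rule: ln(mu) = x1*ln(mu1) + x2*ln(mu2).
        # A common first approximation; not the correlation used in the paper.
        return math.exp(x1 * math.log(mu1) + (1 - x1) * math.log(mu2))

    light_oil_mpas = 5.0       # light-oil power fluid viscosity, mPa*s (assumed)
    produced_mpas = 2000.0     # heavy produced-fluid viscosity, mPa*s (assumed)

    # 70% light oil / 30% produced fluid by volume (illustrative fractions).
    blend = arrhenius_blend_viscosity(light_oil_mpas, produced_mpas, 0.7)
    print(round(blend, 1))
    ```

    Because the rule is logarithmic, the blend stays far closer to the light-oil viscosity than a linear average would suggest, which is what makes recycling produced fluid into the power fluid plausible.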

  6. Deep remission: a new concept?

    Science.gov (United States)

    Colombel, Jean-Frédéric; Louis, Edouard; Peyrin-Biroulet, Laurent; Sandborn, William J; Panaccione, Remo

    2012-01-01

    Crohn's disease (CD) is a chronic inflammatory disorder characterized by periods of clinical remission alternating with periods of relapse defined by recurrent clinical symptoms. Persistent inflammation is believed to lead to progressive bowel damage over time, which manifests with the development of strictures, fistulae and abscesses. These disease complications frequently lead to a need for surgical resection, which in turn leads to disability. So CD can be characterized as a chronic, progressive, destructive and disabling disease. In rheumatoid arthritis, treatment paradigms have evolved beyond partial symptom control alone toward the induction and maintenance of sustained biological remission, also known as a 'treat to target' strategy, with the goal of improving long-term disease outcomes. In CD, there is currently no accepted, well-defined, comprehensive treatment goal that entails the treatment of both clinical symptoms and biologic inflammation. It is important that such a treatment concept begins to evolve for CD. A treatment strategy that delays or halts the progression of CD to increasing damage and disability is a priority. As a starting point, a working definition of sustained deep remission (that includes long-term biological remission and symptom control) with defined patient outcomes (including no disease progression) has been proposed. The concept of sustained deep remission represents a goal for CD management that may still evolve. It is not clear if the concept also applies to ulcerative colitis. Clinical trials are needed to evaluate whether treatment algorithms that tailor therapy to achieve deep remission in patients with CD can prevent disease progression and disability. Copyright © 2012 S. Karger AG, Basel.

  7. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  8. The change in deep cervical flexor activity after training is associated with the degree of pain reduction in patients with chronic neck pain.

    Science.gov (United States)

    Falla, Deborah; O'Leary, Shaun; Farina, Dario; Jull, Gwendolen

    2012-09-01

    Altered activation of the deep cervical flexors (longus colli and longus capitis) has been found in individuals with neck pain disorders, but the response to training has been variable. Therefore, this study investigated the relationship between change in deep cervical flexor muscle activity and symptoms in response to specific training. Fourteen women with chronic neck pain undertook a 6-week program of specific training that consisted of a craniocervical flexion exercise performed twice per day (10 to 20 min) for the duration of the trial. The exercise targets the deep flexor muscles of the upper cervical region. At baseline and follow-up, measures were taken of neck pain intensity (visual analogue scale, 0 to 10), perceived disability (Neck Disability Index, 0 to 50) and electromyography (EMG) of the deep cervical flexors (by a nasopharyngeal electrode suctioned over the posterior oropharyngeal wall) during performance of craniocervical flexion. After training, the activation of the deep cervical flexors increased significantly, with the greatest change observed in patients with the lowest deep cervical flexor EMG amplitude at baseline (R(2)=0.68). A significant relationship was also identified between initial pain intensity, change in pain level with training, and change in EMG amplitude for the deep cervical flexors during craniocervical flexion (R(2)=0.34). Specific training of the deep cervical flexor muscles in women with chronic neck pain reduces pain and improves the activation of these muscles, especially in those with the least activation of their deep cervical flexors before training. This finding suggests that the selection of exercise based on a precise assessment of the patients' neuromuscular control, and targeted exercise interventions based on this assessment, are likely to be the most beneficial to patients with neck pain.

  9. Deep Super Learner: A Deep Ensemble for Classification Problems

    OpenAIRE

    Young, Steven; Abdou, Tamer; Bener, Ayse

    2018-01-01

    Deep learning has become very popular for tasks such as predictive modeling and pattern recognition in handling big data. Deep learning is a powerful machine learning method that extracts lower level features and feeds them forward for the next layer to identify higher level features that improve performance. However, deep neural networks have drawbacks, which include many hyper-parameters and infinite architectures, opaqueness into results, and relatively slower convergence on smaller datase...

  10. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    Science.gov (United States)

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
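    The core idea of atrous convolution — spacing filter taps `rate` samples apart to enlarge the field of view without adding parameters — can be sketched in one dimension. This is an illustrative reimplementation, not the DeepLab code; the input and filter below are arbitrary toy values:

    ```python
    import numpy as np

    def atrous_conv1d(x, w, rate=1):
        """1-D 'atrous' (dilated) convolution: the taps of w are spaced
        `rate` samples apart, enlarging the receptive field while the
        number of weights stays fixed."""
        k = len(w)
        span = (k - 1) * rate + 1          # effective filter span
        out = np.empty(len(x) - span + 1)
        for i in range(len(out)):
            out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
        return out

    x = np.arange(10, dtype=float)         # [0, 1, ..., 9]
    w = np.array([1.0, 1.0, 1.0])

    dense = atrous_conv1d(x, w, rate=1)    # ordinary convolution, span 3
    dilated = atrous_conv1d(x, w, rate=2)  # same 3 weights, span 5
    ```

    With `rate=2` the same three weights cover five input samples, which is how atrous convolution trades feature resolution for context at no extra cost per weight.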

  11. DeepRT: deep learning for peptide retention time prediction in proteomics

    OpenAIRE

    Ma, Chunwei; Zhu, Zhiyong; Ye, Jun; Yang, Jiarui; Pei, Jianguo; Xu, Shaohang; Zhou, Ruo; Yu, Chang; Mo, Fan; Wen, Bo; Liu, Siqi

    2017-01-01

    Accurate prediction of peptide retention times (RT) in liquid chromatography has many applications in mass spectrometry-based proteomics. Herein, we present DeepRT, deep learning based software for peptide retention time prediction. DeepRT automatically learns features directly from the peptide sequences using deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models, which eliminates the need for hand-crafted features or rules. After the feature learning, pr...

  12. The importance of grid integration for achievable greenhouse gas emissions reductions from alternative vehicle technologies

    International Nuclear Information System (INIS)

    Tarroja, Brian; Shaffer, Brendan; Samuelsen, Scott

    2015-01-01

    Alternative vehicles must appropriately interface with the electric grid and renewable generation to contribute to decarbonization. This study investigates the impact of infrastructure configurations and management strategies on the vehicle–grid interface and the vehicle greenhouse gas reduction potential with regard to the goal of California's Executive Order S-21-09. Considered are battery electric vehicles, gasoline-fueled plug-in hybrid electric vehicles, hydrogen-fueled fuel cell vehicles, and plug-in hybrid fuel cell vehicles. Temporally resolved models of the electric grid, electric vehicle charging, hydrogen infrastructure, and vehicle powertrain simulations are integrated. For plug-in vehicles, consumer travel patterns can limit the greenhouse gas reductions achievable without smart charging or energy storage. For fuel cell vehicles, the fuel production mix must be optimized for minimal greenhouse gas emissions. The plug-in hybrid fuel cell vehicle has the largest emissions reduction potential: its smaller battery and fuel cell keep efficiencies high, and meeting 86% of miles by electric travel keeps hydrogen demand low. Energy storage is required to meet Executive Order S-21-09 goals in all cases. Meeting the goal requires renewable capacities of 205 GW for plug-in hybrid fuel cell vehicles and battery electric vehicle 100s, 255 GW for battery electric vehicle 200s, and 325 GW for fuel cell vehicles. - Highlights: • Consumer travel patterns limit greenhouse gas reductions with immediate charging. • Smart charging or energy storage is required for large greenhouse gas reductions. • Fuel cells as a plug-in vehicle range extender provided the largest greenhouse gas reductions. • Energy storage is required to meet greenhouse gas goals regardless of vehicle type. • Smart charging reduces the required energy storage size for a given greenhouse gas goal

  13. How to achieve emission reductions in Germany and the European Union. Energy policy, RUE with cross cutting technologies, Pinch technology

    Energy Technology Data Exchange (ETDEWEB)

    Radgen, P.

    1999-10-01

    The German presentations cover three main topics: (1) energy policy at the national level and in the European Community; (2) rational use of energy and efficiency improvements through cross-cutting technologies; (3) optimizing heat recovery and heat recovery networks with Pinch technology. Current trends in carbon dioxide emissions and scenarios forecasting future development will be presented. It will be shown that long-term agreements are widely used in the EC to obtain emission reductions. Specific attention will also be paid to burden sharing in the EC and to the other greenhouse gases. In the second part, efficiency improvements through cross-cutting technologies will be discussed for furnaces, waste heat recovery, electric motors, compressed air systems, cooling systems, lighting and heat pumps. Most of these improvement potentials are economic at present energy prices, but some barriers to their application have to be overcome, which will be discussed. In the last part a systematic method for the optimization of heat recovery networks is presented. Pinch technology, developed in the late seventies, is an easy and reliable way to quickly gain good insight into the heat flows of a process. The basics of Pinch technology will be presented with a simple example and an in-depth analysis of a fertilizer complex. (orig.)
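    Computationally, the Pinch method described above reduces to the "problem table" cascade: shift stream temperatures by ΔTmin/2, balance heat in each temperature interval, and cascade the surpluses downward to find minimum utility targets. The sketch below uses made-up stream data (supply/target temperatures and heat capacity flowrates are illustrative, not from the report):

    ```python
    # Problem-table algorithm, the core calculation behind Pinch analysis.
    # Stream data and dT_min are illustrative assumptions.
    dT_min = 10.0  # K
    # (supply T in degC, target T in degC, heat capacity flowrate CP in kW/K)
    hot = [(180.0, 60.0, 2.0), (130.0, 40.0, 1.0)]
    cold = [(50.0, 150.0, 3.5)]

    # Shift hot streams down and cold streams up by dT_min/2
    shifted = [(ts - dT_min / 2, tt - dT_min / 2, cp, 'hot') for ts, tt, cp in hot] + \
              [(ts + dT_min / 2, tt + dT_min / 2, cp, 'cold') for ts, tt, cp in cold]

    # Temperature interval boundaries, highest first
    bounds = sorted({t for ts, tt, cp, kind in shifted for t in (ts, tt)}, reverse=True)

    # Net heat surplus (+) or deficit (-) in each interval
    surpluses = []
    for hi, lo in zip(bounds, bounds[1:]):
        net = 0.0
        for ts, tt, cp, kind in shifted:
            top, bot = max(ts, tt), min(ts, tt)
            if bot <= lo and top >= hi:       # stream spans the whole interval
                net += cp * (hi - lo) * (1 if kind == 'hot' else -1)
        surpluses.append(net)

    # Cascade the surpluses; the most negative value fixes the hot utility
    cascade = [0.0]
    for s in surpluses:
        cascade.append(cascade[-1] + s)
    q_hot_min = -min(cascade)                 # minimum hot utility, kW
    q_cold_min = cascade[-1] + q_hot_min      # minimum cold utility, kW
    ```

    For these toy streams the cascade gives a minimum hot utility of 40 kW and a minimum cold utility of 20 kW; the pinch lies at the shifted temperature where the cascade reaches its minimum.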

  14. Deep reduced PEDOT films support electrochemical applications: Biomimetic color front.

    Directory of Open Access Journals (Sweden)

    Toribio Fernandez OTERO

    2015-02-01

    Most of the literature accepts, despite many controversial results, that films of conducting polymers change from electronic conductors to insulators during oxidation/reduction. Engineers and device designers are therefore forced to use metallic supports to reoxidize the material for reversible device operation. Electrochromic-front experiments are the main visual support for the claimed insulating nature of reduced conducting polymers. Here we present a different design of the biomimetic electrochromic front that corroborates the electronic and ionic conducting nature of deep reduced films. The direct PEDOT metal/electrolyte and film/electrolyte contact was protected from the electrolyte with Parafilm® up to 1 cm from the metal contact. The deep reduced PEDOT film supports the flow of high currents, promoting reaction-induced electrochromic color changes that begin 1 cm from the metal-polymer electrical contact and advance, through the reduced film, towards the metal contact. Reverse color changes during oxidation/reduction are always initiated at the film/electrolyte contact, advancing under the protecting film towards the film/metal contact. Both the reduced and oxidized states of the film demonstrate electronic and ionic conductivities high enough for electronic applications or, as self-supported electrodes, for electrochemical devices. The electrochemically stimulated conformational relaxation (ESCR) model explains these results.

  15. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    Science.gov (United States)

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Different from the traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
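    The memory savings above come from sparsifying connections. The deep adaptive network's specific criterion is not reproduced here; the sketch below shows generic magnitude pruning, the simplest way to remove most connections from a weight matrix while keeping the largest ones:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 256))        # a dense toy weight matrix

    def prune_by_magnitude(W, keep_fraction):
        """Zero out all but the largest-magnitude fraction of connections."""
        k = int(W.size * keep_fraction)
        thresh = np.sort(np.abs(W), axis=None)[-k]   # k-th largest |w|
        mask = np.abs(W) >= thresh                   # connections to keep
        return W * mask, mask

    W_sparse, mask = prune_by_magnitude(W, keep_fraction=0.01)  # keep 1%
    sparsity = 1.0 - mask.mean()           # fraction of connections removed
    ```

    Keeping 1% of the weights already yields 99% sparsity; in a sparse storage format only the surviving weights (and their indices) need to be kept, which is the kind of saving a sparse-mapping-memory architecture exploits.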

  16. Automatic feature learning using multichannel ROI based on deep structured algorithms for computerized lung cancer diagnosis.

    Science.gov (United States)

    Sun, Wenqing; Zheng, Bin; Qian, Wei

    2017-10-01

    This study aimed to analyze the ability of automatically generated features, extracted using deep structured algorithms, in lung nodule CT image diagnosis, and to compare their performance with traditional computer aided diagnosis (CADx) systems using hand-crafted features. All 1018 cases were acquired from the Lung Image Database Consortium (LIDC) public lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel ROI based deep structured algorithms were designed and implemented in this study: convolutional neural network (CNN), deep belief network (DBN), and stacked denoising autoencoder (SDAE). For comparison, we also implemented a CADx system using hand-crafted features including density features, texture features and morphological features. The performance of every scheme was evaluated by using a 10-fold cross-validation method and an assessment index of the area under the receiver operating characteristic curve (AUC). The highest observed AUC was 0.899±0.018, achieved by CNN, which was significantly higher than the traditional CADx with AUC=0.848±0.026. The results from DBN were also slightly higher than CADx, while SDAE was slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy stroke detectors, in the deep structured schemes. The study results showed that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, deep learning algorithms can have better performance than currently popular CADx. We believe deep learning algorithms with a similar data preprocessing procedure can be used in other medical image analysis areas as well. Copyright © 2017. Published by Elsevier Ltd.
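    The AUC figures above can be computed directly from classifier scores via the Mann-Whitney statistic: AUC equals the probability that a randomly chosen positive sample outscores a randomly chosen negative one, with ties counted as one half. A minimal sketch with toy labels and scores (not the study's data):

    ```python
    import numpy as np

    def auc(labels, scores):
        """Area under the ROC curve via the Mann-Whitney relationship."""
        labels = np.asarray(labels, bool)
        scores = np.asarray(scores, float)
        pos, neg = scores[labels], scores[~labels]
        wins = (pos[:, None] > neg[None, :]).sum()   # positive outscores negative
        ties = (pos[:, None] == neg[None, :]).sum()
        return (wins + 0.5 * ties) / (len(pos) * len(neg))

    y = np.array([1, 1, 1, 0, 0, 0])                 # toy ground-truth labels
    s = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])     # toy classifier scores
    ```

    Here 8 of the 9 positive/negative pairs are correctly ordered, so the AUC is 8/9 ≈ 0.889; a perfect ranking gives 1.0 and a random one about 0.5.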

  17. Phased Retrofits in Existing Homes in Florida Phase I: Shallow and Deep Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Parker, D. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Sutherland, K. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Chasar, D. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Montemurno, J. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Amos, B. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Kono, J. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States)

    2016-02-01

    The U.S. Department of Energy (DOE) Building America program, in collaboration with Florida Power and Light (FPL), conducted a phased residential energy-efficiency retrofit program. This research sought to establish impacts on annual energy and peak energy reductions from the technologies applied at two levels of retrofit - shallow and deep, with savings levels approaching the Building America program goals of reducing whole-house energy use by 40%. Under the Phased Deep Retrofit (PDR) project, we have installed phased, energy-efficiency retrofits in a sample of 56 existing, all-electric homes. End-use savings and economic evaluation results from the phased measure packages and single measures are summarized in this report.

  18. Neuropsychiatric Outcome of an Adolescent Who Received Deep Brain Stimulation for Tourette's Syndrome

    Directory of Open Access Journals (Sweden)

    S. J. Pullen

    2011-01-01

    This case study followed one adolescent patient who underwent bilateral deep brain stimulation of the centromedian parafascicular complex (CM-Pf) for debilitating, treatment-refractory Tourette's syndrome over a period of 1.5 years. Neurocognitive testing showed no significant changes between baseline and follow-up assessments. Psychiatric assessment revealed positive outcomes in overall adaptive functioning and a reduction in psychotropic medication load in this patient. Furthermore, despite significant baseline psychiatric comorbidity, this patient reported no suicidal ideation following electrode implantation. Deep brain stimulation is increasingly being used in children and adolescents. This case reports on the positive neurologic and neuropsychiatric outcome of an adolescent male with bilateral CM-Pf stimulation.

  19. Auditory processing during deep propofol sedation and recovery from unconsciousness.

    Science.gov (United States)

    Koelsch, Stefan; Heinke, Wolfgang; Sammler, Daniela; Olthoff, Derk

    2006-08-01

    Using evoked potentials, this study investigated effects of deep propofol sedation, and effects of recovery from unconsciousness, on the processing of auditory information with stimuli suited to elicit a physical MMN and a (music-syntactic) ERAN. Levels of sedation were assessed using the Bispectral Index (BIS) and the Modified Observer's Assessment of Alertness and Sedation Scale (MOAAS). EEG measurements were performed during wakefulness, deep propofol sedation (MOAAS 2-3, mean BIS=68), and a recovery period. Between deep sedation and the recovery period, the infusion rate of propofol was increased to achieve unconsciousness (MOAAS 0-1, mean BIS=35); EEG measurements of the recovery period were performed after subjects regained consciousness. During deep sedation, the physical MMN was markedly reduced, but still significant. No ERAN was observed at this level. A clear P3a was elicited during deep sedation by those deviants which were task-relevant during the awake state. As soon as subjects regained consciousness during the recovery period, a normal MMN was elicited. By contrast, the P3a was absent in the recovery period, and the P3b was markedly reduced. Results indicate that the auditory sensory memory (as indexed by the physical MMN) is still active, although strongly reduced, during deep sedation (MOAAS 2-3). The presence of the P3a indicates that attention-related processes are still operating at this level. Processes of syntactic analysis appear to be abolished during deep sedation. After propofol-induced anesthesia, the auditory sensory memory appears to operate normally as soon as subjects regain consciousness, whereas the attention-related processes indexed by P3a and P3b are markedly impaired. The results inform about the effects of sedative drugs on auditory and attention-related mechanisms.
The findings are important because these mechanisms are prerequisites for auditory awareness, auditory learning and memory, as well as language perception during anesthesia.

  20. Contrasting impacts of light reduction on sediment biogeochemistry in deep- and shallow-water tropical seagrass assemblages (Green Island, Great Barrier Reef).

    Science.gov (United States)

    Schrameyer, Verena; York, Paul H; Chartrand, Kathryn; Ralph, Peter J; Kühl, Michael; Brodersen, Kasper Elgetti; Rasheed, Michael A

    2018-05-01

    Seagrass meadows increasingly face reduced light availability as a consequence of coastal development, eutrophication, and climate-driven increases in rainfall leading to turbidity plumes. We examined the impact of reduced light on above-ground seagrass biomass and sediment biogeochemistry in tropical shallow- (∼2 m) and deep-water (∼17 m) seagrass meadows (Green Island, Australia). Artificial shading (transmitting ∼10-25% of incident solar irradiance) was applied to the shallow- and deep-water sites for up to two weeks. While above-ground biomass was unchanged, higher diffusive O2 uptake (DOU) rates, lower O2 penetration depths, and higher volume-specific O2 consumption (R) rates were found in seagrass-vegetated sediments as compared to adjacent bare sand (control) areas at the shallow-water sites. In contrast, deep-water sediment characteristics did not differ between bare sand and vegetated sites. At the vegetated shallow-water site, shading resulted in significantly lower hydrogen sulphide (H2S) levels in the sediment. No shading effects were found on sediment biogeochemistry at the deep-water site. Overall, our results show that the sediment biogeochemistry of shallow-water (Halodule uninervis, Syringodium isoetifolium, Cymodocea rotundata and C. serrulata) and deep-water (Halophila decipiens) seagrass meadows with different species differs in response to reduced light. The light-driven dynamics of the sediment biogeochemistry at the shallow-water site could indicate a microbial consortium stimulated by photosynthetically produced exudates from the seagrass, a supply that becomes limited when seagrass photosynthesis declines under shaded conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Volatile organic compounds and oxides of nitrogen. Further emission reductions

    Energy Technology Data Exchange (ETDEWEB)

    Froste, H. [comp.]

    1997-12-31

    This report presents the current status of progress towards the Swedish environmental target, set by Parliament, of reducing emissions of volatile organic compounds by 50 per cent between 1988 and 2000. Parliament also instructed the Agency to formulate proposed measures to achieve a 50 per cent reduction in emissions of nitrogen oxides between 1985 and 2005. The report presents an overall account of emission trends for volatile organic compounds (from all sectors) and nitrogen oxides (from the industry sector) and the steps proposed to achieve further emission reductions. 43 refs

  2. Volatile organic compounds and oxides of nitrogen. Further emission reductions

    Energy Technology Data Exchange (ETDEWEB)

    Froste, H. [comp.]

    1996-12-31

    This report presents the current status of progress towards the Swedish environmental target, set by Parliament, of reducing emissions of volatile organic compounds by 50 per cent between 1988 and 2000. Parliament also instructed the Agency to formulate proposed measures to achieve a 50 per cent reduction in emissions of nitrogen oxides between 1985 and 2005. The report presents an overall account of emission trends for volatile organic compounds (from all sectors) and nitrogen oxides (from the industry sector) and the steps proposed to achieve further emission reductions. 43 refs

  3. Deep Ultraviolet Light Emitters Based on (Al,Ga)N/GaN Semiconductor Heterostructures

    Science.gov (United States)

    Liang, Yu-Han

    Deep ultraviolet (UV) light sources are useful in a number of applications, including sterilization, medical diagnostics, and chemical and biological identification. However, state-of-the-art deep UV light-emitting diodes and lasers made from semiconductors still suffer from low external quantum efficiency and low output powers. These limitations make them costly and ineffective in a wide range of applications. Deep UV sources such as the lasers that currently exist are prohibitively bulky, complicated, and expensive, typically because they consist of two or three lasers assembled in tandem to perform the sequential harmonic generation that ultimately yields the desired deep UV wavelength. For semiconductor-based deep UV sources, the most challenging difficulty has been finding ways to optimally dope the (Al,Ga)N/GaN heterostructures essential for UV-C light sources. It has proven very difficult to achieve high free carrier concentrations and low resistivities in high-aluminum-content III-nitrides. As a result, p-type doped aluminum-free III-nitrides are employed as the p-type contact layers in UV light-emitting diode structures. However, because of impedance-mismatch issues, light extraction from the device, and consequently the overall external quantum efficiency, is drastically reduced. This problem is compounded by high losses and low gain when one tries to make UV nitride lasers. In this thesis, we provide a robust and reproducible approach to resolving most of these challenges. By using a liquid-metal-enabled growth mode in a plasma-assisted molecular beam epitaxy process, we show that highly doped aluminum-containing III-nitride films can be achieved. This growth mode is driven by kinetics. Using this approach, we have been able to achieve extremely high p-type and n-type doping in (Al,Ga)N films with high aluminum content. By incorporating a very high density of Mg atoms in (Al,Ga)N films, we have been able to...

  4. Analyses of the deep borehole drilling status for a deep borehole disposal system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Youl; Choi, Heui Joo; Lee, Min Soo; Kim, Geon Young; Kim, Kyung Su [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The purpose of disposal of radioactive wastes is not only to isolate them from humans, but also to inhibit leakage of any radioactive materials into the accessible environment. Because of the extremely high activity and long time scales associated with HLW (high-level radioactive waste), a mined deep geological repository, at a disposal depth of about 500 m below ground, is considered the safest method for isolating spent fuel or high-level radioactive waste from the human environment with the best technology available at present. Deep borehole disposal is therefore under consideration in a number of countries as an alternative disposal concept, in terms of its outstanding safety and cost effectiveness. In this paper, as one of the key technologies of a deep borehole disposal system, the general status of deep drilling technologies in the oil, geothermal and geoscientific fields was reviewed for deep borehole disposal of high-level radioactive wastes. Based on the results of this review, the very preliminary applicability of deep drilling technology to deep borehole disposal, including the relation between depth and diameter, drilling time, and feasibility classification, was analyzed.

  5. Deep features for efficient multi-biometric recognition with face and ear images

    Science.gov (United States)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit deep features extracted by Convolutional Neural Networks (CNNs) from face and ear images to introduce more powerful discriminative features and robust representation ability. First, the deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused using both traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
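    The "traditional concatenation" fusion used as a baseline above amounts to joining the per-modality descriptors into one vector; a common variant L2-normalises each modality first so that neither dominates. A minimal sketch (the 2- and 3-dimensional vectors are toy stand-ins for real VGG-M descriptors; DCA fusion is not reproduced here):

    ```python
    import numpy as np

    def fuse_concat(f_face, f_ear):
        """Serial feature fusion: L2-normalise each modality's descriptor,
        then concatenate into a single joint feature vector."""
        def l2(v):
            v = np.asarray(v, float)
            return v / np.linalg.norm(v)
        return np.concatenate([l2(f_face), l2(f_ear)])

    face_feat = np.array([3.0, 4.0])        # toy face descriptor
    ear_feat = np.array([1.0, 0.0, 0.0])    # toy ear descriptor
    fused = fuse_concat(face_feat, ear_feat)
    ```

    The fused vector then feeds the classifier (an SVM in the paper); per-modality normalisation guarantees each modality contributes unit energy to the joint representation.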

  6. Effective photodynamic therapy against microbial populations in human deep tissue abscess aspirates.

    Science.gov (United States)

    Haidaris, Constantine G; Foster, Thomas H; Waldman, David L; Mathes, Edward J; McNamara, Joanne; Curran, Timothy

    2013-10-01

    The primary therapy for deep tissue abscesses is drainage accompanied by systemic antimicrobial treatment. However, the long antibiotic course required increases the probability of acquired resistance, and the high incidence of polymicrobial infections in abscesses complicates treatment choices. Photodynamic therapy (PDT) is effective against multiple classes of organisms, including those displaying drug resistance, and may serve as a useful adjunct to the standard of care by reducing the abscess microbial burden following drainage. Aspirates were obtained from 32 patients who underwent image-guided percutaneous drainage of the abscess cavity. The majority of the specimens (24/32) were abdominal, with the remainder from liver and lung. Conventional microbiological techniques and nucleotide sequence analysis of rRNA gene fragments were used to characterize microbial populations from abscess aspirates. We evaluated the sensitivity of microorganisms to methylene blue-sensitized PDT in vitro, both within the context of an abscess aspirate and as individual isolates. Most isolates were bacterial, with the fungus Candida tropicalis also isolated from two specimens. We examined the sensitivity of these microorganisms to methylene blue-PDT. Complete elimination of culturable microorganisms was achieved in three different aspirates, and significant killing (P < 0.05) was achieved in the others, supporting methylene blue-PDT as an adjunct to abscess treatment. © 2013 Wiley Periodicals, Inc.

  7. Particle swarm optimization for programming deep brain stimulation arrays.

    Science.gov (United States)

    Peña, Edgar; Zhang, Simeng; Deyo, Steve; Xiao, YiZi; Johnson, Matthew D

    2017-02-01

    Deep brain stimulation (DBS) therapy relies on both precise neurosurgical targeting and systematic optimization of stimulation settings to achieve beneficial clinical outcomes. One recent advance to improve targeting is the development of DBS arrays (DBSAs) with electrodes segmented both along and around the DBS lead. However, increasing the number of independent electrodes creates the logistical challenge of optimizing stimulation parameters efficiently. Solving such complex problems with multiple solutions and objectives is well known to occur in biology, in which complex collective behaviors emerge out of swarms of individual organisms engaged in learning through social interactions. Here, we developed a particle swarm optimization (PSO) algorithm to program DBSAs using a swarm of individual particles representing electrode configurations and stimulation amplitudes. Using a finite element model of motor thalamic DBS, we demonstrate how the PSO algorithm can efficiently optimize a multi-objective function that maximizes predictions of axonal activation in regions of interest (ROI, cerebellar-receiving area of motor thalamus), minimizes predictions of axonal activation in regions of avoidance (ROA, somatosensory thalamus), and minimizes power consumption. The algorithm solved the multi-objective problem by producing a Pareto front. ROI and ROA activation predictions were consistent across swarms (<1% median discrepancy in axon activation). The algorithm was able to accommodate (1) lead displacement (1 mm) with relatively small ROI (⩽9.2%) and ROA (⩽1%) activation changes, irrespective of shift direction; (2) reduction in maximum per-electrode current (by 50% and 80%) with ROI activation decreasing by 5.6% and 16%, respectively; and (3) disabling electrodes (n  =  3 and 12) with ROI activation reduction by 1.8% and 14%, respectively. Additionally, comparison between PSO predictions and multi-compartment axon model simulations showed discrepancies...

  8. Particle Swarm Optimization for Programming Deep Brain Stimulation Arrays

    Science.gov (United States)

    Peña, Edgar; Zhang, Simeng; Deyo, Steve; Xiao, YiZi; Johnson, Matthew D.

    2017-01-01

    Objective Deep brain stimulation (DBS) therapy relies on both precise neurosurgical targeting and systematic optimization of stimulation settings to achieve beneficial clinical outcomes. One recent advance to improve targeting is the development of DBS arrays (DBSAs) with electrodes segmented both along and around the DBS lead. However, increasing the number of independent electrodes creates the logistical challenge of optimizing stimulation parameters efficiently. Approach Solving such complex problems with multiple solutions and objectives is well known to occur in biology, in which complex collective behaviors emerge out of swarms of individual organisms engaged in learning through social interactions. Here, we developed a particle swarm optimization (PSO) algorithm to program DBSAs using a swarm of individual particles representing electrode configurations and stimulation amplitudes. Using a finite element model of motor thalamic DBS, we demonstrate how the PSO algorithm can efficiently optimize a multi-objective function that maximizes predictions of axonal activation in regions of interest (ROI, cerebellar-receiving area of motor thalamus), minimizes predictions of axonal activation in regions of avoidance (ROA, somatosensory thalamus), and minimizes power consumption. Main Results The algorithm solved the multi-objective problem by producing a Pareto front. ROI and ROA activation predictions were consistent across swarms (<1% median discrepancy in axon activation). The algorithm was able to accommodate (1) lead displacement (1 mm) with relatively small ROI (≤9.2%) and ROA (≤1%) activation changes, irrespective of shift direction; (2) reduction in maximum per-electrode current (by 50% and 80%) with ROI activation decreasing by 5.6% and 16%, respectively; and (3) disabling electrodes (n=3 and 12) with ROI activation reduction by 1.8% and 14%, respectively. Additionally, comparison between PSO predictions and multi-compartment axon model simulations...
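    The particle swarm loop at the heart of the two records above is compact: each particle's velocity blends inertia, a pull toward its own personal best, and a pull toward the swarm's global best. The sketch below minimises a toy sphere function rather than the paper's multi-objective axonal-activation model; the coefficients w, c1, c2 are conventional defaults, an assumption:

    ```python
    import numpy as np

    def pso(f, dim, n_particles=30, iters=200, seed=0):
        """Minimal particle swarm optimiser (global-best topology)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
        v = np.zeros_like(x)                         # particle velocities
        pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
        g = pbest[pbest_val.argmin()].copy()         # global best position
        w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            vals = np.array([f(p) for p in x])
            improved = vals < pbest_val              # update personal bests
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[pbest_val.argmin()].copy()     # update global best
        return g, pbest_val.min()

    # Toy stand-in objective (sphere function), not the paper's model
    best_x, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=3)
    ```

    For a multi-objective problem like the DBS one, the scalar `f` is replaced by several objectives and the swarm maintains a Pareto front of non-dominated particles instead of a single global best.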

  9. How to achieve proper overbite—Lessons from natural dentoalveolar compensation

    Directory of Open Access Journals (Sweden)

    Jenny Zwei-Chieng Chang

    2013-12-01

    Conclusion: For orthodontically closing an open bite, intruding upper posteriors and extruding lower anteriors are appropriate ways to simulate the naturally occurring compensation. To eliminate deep bite in a low-mandibular-plane patient, intruding upper and lower anteriors and proclining the anteriors will achieve good overbite. Imitating the natural dentoalveolar compensation by using temporary anchorage devices at appropriate sites for intruding teeth helps to resolve orthodontic vertical problems.

  10. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study

    Directory of Open Access Journals (Sweden)

    Zangen Abraham

    2011-02-01

    Full Text Available Abstract Background Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Patients and methods Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from the ambulatory clinics of Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel), as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms (SAPS), the Clinical Global Impressions (CGI) scale, and the Scale for the Assessment of Negative Symptoms (SANS). Results This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and, to a lesser extent, improvement in SAPS results (an average reduction of 16.5% ± 20.3%). Conclusions In this study, we have demonstrated the potential of deep TMS treatment over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger double-blind, sham-controlled studies are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. Trial registration This trial is registered with clinicaltrials.gov (identifier: NCT00564096).

  11. Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.

    Science.gov (United States)

    Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li

    2016-06-07

    Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for their discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or in combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% when microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer.

  12. Multi-Site Diagnostic Classification of Schizophrenia Using Discriminant Deep Learning with Functional Connectivity MRI

    Directory of Open Access Journals (Sweden)

    Ling-Li Zeng

    2018-04-01

    Full Text Available Background: A lack of a sufficiently large sample at single sites causes poor generalizability in automatic diagnosis classification of heterogeneous psychiatric disorders such as schizophrenia based on brain imaging scans. Advanced deep learning methods may be capable of learning subtle hidden patterns from high dimensional imaging data, overcome potential site-related variation, and achieve reproducible cross-site classification. However, deep learning-based cross-site transfer classification, despite less imaging site-specificity and more generalizability of diagnostic models, has not been investigated in schizophrenia. Methods: A large multi-site functional MRI sample (n = 734, including 357 schizophrenic patients) from seven imaging resources was collected, and a deep discriminant autoencoder network, aimed at learning imaging site-shared functional connectivity features, was developed to discriminate schizophrenic individuals from healthy controls. Findings: Accuracies of approximately 85·0% and 81·0% were obtained in multi-site pooling classification and leave-site-out transfer classification, respectively. The learned functional connectivity features revealed dysregulation of the cortical-striatal-cerebellar circuit in schizophrenia, and the most discriminating functional connections were primarily located within and across the default, salience, and control networks. Interpretation: The findings imply that dysfunctional integration of the cortical-striatal-cerebellar circuit across the default, salience, and control networks may play an important role in the “disconnectivity” model underlying the pathophysiology of schizophrenia. The proposed discriminant deep learning method may be capable of learning reliable connectome patterns and help in understanding the pathophysiology and achieving accurate prediction of schizophrenia across multiple independent imaging sites. Keywords: Schizophrenia, Deep learning, Connectome, f

  13. Melting metal waste for volume reduction and decontamination

    International Nuclear Information System (INIS)

    Copeland, G.L.; Heshmatpour, B.; Heestand, R.L.

    1980-01-01

    Melt-slagging was investigated as a technique for volume reduction and decontamination of radioactively contaminated scrap metals. Experiments were conducted using several metals and slags in which the partitioning of the contaminant U or Pu to the slag was measured. Concentrations of U or Pu in the metal product of about 1 ppm were achieved for many metals. A volume reduction of 30:1 was achieved for a typical batch of mixed metal scrap. Additionally, the production of granular products was demonstrated with metal shot and crushed slag.

  14. Deep Space Telecommunications

    Science.gov (United States)

    Kuiper, T. B. H.; Resch, G. M.

    2000-01-01

    The increasing load on NASA's Deep Space Network, the new capabilities for deep space missions inherent in a next-generation radio telescope, and the potential of new telescope technology for reducing construction and operation costs suggest a natural marriage between radio astronomy and deep space telecommunications in developing advanced radio telescope concepts.

  15. Sunspot drawings handwritten character recognition method based on deep learning

    Science.gov (United States)

    Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li

    2016-05-01

    High-accuracy recognition of handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the drawings in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has proved highly successful at training multi-layer network structures. A CNN is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences and obtain the daily full-disc sunspot numbers and sunspot areas from the drawings. The experimental results show that the proposed method achieves high recognition accuracy.
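
    For readers unfamiliar with the building blocks mentioned above, here is a minimal convolution → ReLU → max-pool stage in NumPy (a toy sketch of a single CNN layer; the study's actual architecture, filters, and training procedure are not described in this record):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation), the core op of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols are cropped."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# A tiny 6x6 binary "character" image passed through one conv -> ReLU -> pool stage.
img = np.array([[0, 0, 1, 1, 0, 0],
                [0, 1, 0, 0, 1, 0],
                [1, 0, 0, 0, 0, 1],
                [1, 1, 1, 1, 1, 1],
                [1, 0, 0, 0, 0, 1],
                [1, 0, 0, 0, 0, 1]], dtype=float)
edge = np.array([[1.0, -1.0]])   # simple horizontal edge-detecting kernel
fmap = max_pool(relu(conv2d(img, edge)))
```

    A full character recognizer stacks several such stages, followed by fully connected layers trained with backpropagation.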

  16. Full-scale operating experience of deep bed denitrification filter achieving phosphorus.

    Science.gov (United States)

    Husband, Joseph A; Slattery, Larry; Garrett, John; Corsoro, Frank; Smithers, Carol; Phipps, Scott

    2012-01-01

    The Arlington County Wastewater Pollution Control Plant (ACWPCP) is located in the southern part of Arlington County, Virginia, USA and discharges to the Potomac River via the Four Mile Run. The ACWPCP was originally constructed in 1937. In 2001, Arlington County committed to expanding its 113,500 m³/d (300,000 pe) secondary treatment plant to 151,400 m³/d (400,000 pe) and to reducing effluent total nitrogen (TN) and total phosphorus (TP) to very low concentrations. This paper reviews the steps from concept through the first year of operation, including pilot and full-scale operating data and the capital cost for the denitrification filters.

  17. Bubble formation after a 20-m dive: deep-stop vs. shallow-stop decompression profiles

    NARCIS (Netherlands)

    Schellart, Nico A. M.; Corstius, Jan-Jaap Brandt; Germonpré, Peter; Sterk, Wouter

    2008-01-01

    OBJECTIVES: It is claimed that performing a "deep stop," a stop at about half of maximal diving depth (MDD), can reduce the amount of detectable precordial bubbles after the dive and may thus diminish the risk of decompression sickness. In order to ascertain whether this reduction is caused by the

  18. Emotions, Self-Regulated Learning, and Achievement in Mathematics: A Growth Curve Analysis

    Science.gov (United States)

    Ahmed, Wondimu; van der Werf, Greetje; Kuyper, Hans; Minnaert, Alexander

    2013-01-01

    The purpose of the current study was twofold: (a) to investigate the developmental trends of 4 academic emotions (anxiety, boredom, enjoyment, and pride) and (b) to examine whether changes in emotions are linked to the changes in students' self-regulatory strategies (shallow, deep, and meta-cognitive) and achievement in mathematics. Four hundred…

  19. Greedy Deep Dictionary Learning

    OpenAIRE

    Tariyal, Snigdha; Majumdar, Angshul; Singh, Richa; Vatsa, Mayank

    2016-01-01

    In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time. This requires solving a simple (shallow) dictionary learning problem, the solution to this is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state of the art supervised dictionary learning t...

  20. Generating Importance Map for Geometry Splitting using Discrete Ordinates Code in Deep Shielding Problem

    International Nuclear Information System (INIS)

    Kim, Jong Woon; Lee, Young Ouk

    2016-01-01

    When we use the MCNP code for a deep shielding problem, we prefer to use a variance reduction technique such as geometry splitting, weight windows, or source biasing to keep the relative error within a reliable confidence interval. To generate an importance map for geometry splitting in an MCNP calculation, we need the number of tracks entering each cell and the previous importance of each cell, since a new importance is calculated from this information. In a deep shielding problem where zero tracks enter a cell, we cannot generate a new importance map. In this case, a discrete ordinates code can easily provide the information needed to generate the importance map. In this paper, we use the AETIUS code as the discrete ordinates code. The importance map for MCNP is generated from the zone-averaged flux of the AETIUS calculation. The discretization of space, angle, and energy is not necessary for the MCNP calculation; this is a major advantage of MCNP over deterministic codes. However, a deterministic code (i.e., AETIUS) can provide a rough estimate of the flux throughout a problem relatively quickly, which can help MCNP by providing variance reduction parameters. Recently, the ADVANTG code was released: an automated tool for generating variance reduction parameters for fixed-source continuous-energy Monte Carlo simulations with MCNP5 v1.60.
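
    The flux-to-importance step can be illustrated in a few lines. The sketch below assumes the common heuristic of setting a cell's importance proportional to the inverse of its zone-averaged flux, normalized to the source cell, which keeps the expected track population roughly flat through the shield; the flux values are hypothetical, not actual AETIUS output:

```python
# Hypothetical zone-averaged fluxes from a deterministic (discrete ordinates)
# solve, ordered from the source cell outward through the shield.
zone_avg_flux = [1.0, 2.1e-2, 4.4e-4, 9.0e-6, 1.9e-7]

# Importance heuristic: inverse flux, normalized so the source cell has
# importance 1. Deeper cells get larger importances, so particles entering
# them are split and their weight reduced accordingly.
source_flux = zone_avg_flux[0]
importances = [source_flux / phi for phi in zone_avg_flux]
```

    In practice these values would be rounded and capped (MCNP warns about large splitting ratios between adjacent cells), but the monotone increase toward the detector side is the essential feature.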

  1. Robustness for slope stability modelling under deep uncertainty

    Science.gov (United States)

    Almeida, Susana; Holcombe, Liz; Pianosi, Francesca; Wagener, Thorsten

    2015-04-01

    Landslides can have large negative societal and economic impacts, such as loss of life and damage to infrastructure. However, the ability of slope stability assessment to guide management is limited by high levels of uncertainty in model predictions. Many of these uncertainties cannot be easily quantified, such as those linked to climate change and other future socio-economic conditions, restricting the usefulness of traditional decision analysis tools. Deep uncertainty can be managed more effectively by developing robust, but not necessarily optimal, policies that are expected to perform adequately under a wide range of future conditions. Robust strategies are particularly valuable when the consequences of taking a wrong decision are high, as is often the case when managing natural hazard risks such as landslides. In our work, a physically based numerical model of hydrologically induced slope instability (the Combined Hydrology and Stability Model - CHASM) is applied together with robust decision making to evaluate the most important uncertainties (storm events, groundwater conditions, surface cover, slope geometry, material strata and geotechnical properties) affecting slope stability. Specifically, impacts of climate change on long-term slope stability are incorporated, accounting for the deep uncertainty in future climate projections. Our findings highlight the potential of robust decision making to aid decision support for landslide hazard reduction and risk management under conditions of deep uncertainty.

  2. DeepBipolar: Identifying genomic mutations for bipolar disorder via deep learning.

    Science.gov (United States)

    Laksshman, Sundaram; Bhat, Rajendra Rana; Viswanath, Vivek; Li, Xiaolin

    2017-09-01

    Bipolar disorder, also known as manic depression, is a brain disorder that affects the brain structure of a patient. It results in extreme mood swings, severe states of depression, and overexcitement simultaneously. It is estimated that roughly 3% of the population of the United States (about 5.3 million adults) suffers from bipolar disorder. Recent research efforts like the Twin studies have demonstrated a high heritability factor for the disorder, making genomics a viable alternative for detecting and treating bipolar disorder, in addition to the conventional lengthy and costly postsymptom clinical diagnosis. Motivated by this study, leveraging several emerging deep learning algorithms, we design an end-to-end deep learning architecture (called DeepBipolar) to predict bipolar disorder based on limited genomic data. DeepBipolar adopts the Deep Convolutional Neural Network (DCNN) architecture that automatically extracts features from genotype information to predict the bipolar phenotype. We participated in the Critical Assessment of Genome Interpretation (CAGI) bipolar disorder challenge and DeepBipolar was considered the most successful by the independent assessor. In this work, we thoroughly evaluate the performance of DeepBipolar and analyze the type of signals we believe could have affected the classifier in distinguishing the case samples from the control set. © 2017 Wiley Periodicals, Inc.

  3. Deep learning? What deep learning? | Fourie | South African ...

    African Journals Online (AJOL)

    In teaching generally over the past twenty years, there has been a move towards teaching methods that encourage deep, rather than surface, approaches to learning. The reason is that students who adopt a deep approach to learning are considered to have learning outcomes of better quality and desirability ...

  4. Stabilizing Effects of Deep Eutectic Solvents on Alcohol Dehydrogenase Mediated Systems

    OpenAIRE

    Fatima Zohra Ibn Majdoub Hassani; Ivan Lavandera; Joseph Kreit

    2016-01-01

    This study explored the effects of different organic solvents, temperature, and the amount of glycerol on the alcohol dehydrogenase (ADH)-catalysed stereoselective reduction of different ketones. The conversions were then analyzed by gas chromatography. It was found that increasing the amount of deep eutectic solvent (DES) can improve the stereoselectivity of the enzyme, although it reduces the enzyme's ability to convert the substrate into the corresponding alcohol. Moreover, glycerol was fou...

  5. Sea-level and deep-sea-temperature variability over the past 5.3 million years.

    Science.gov (United States)

    Rohling, E J; Foster, G L; Grant, K M; Marino, G; Roberts, A P; Tamisiea, M E; Williams, F

    2014-04-24

    Ice volume (and hence sea level) and deep-sea temperature are key measures of global climate change. Sea level has been documented using several independent methods over the past 0.5 million years (Myr). Older periods, however, lack such independent validation; all existing records are related to deep-sea oxygen isotope (δ¹⁸O) data that are influenced by processes unrelated to sea level. For deep-sea temperature, only one continuous high-resolution (Mg/Ca-based) record exists, with related sea-level estimates, spanning the past 1.5 Myr. Here we present a novel sea-level reconstruction, with associated estimates of deep-sea temperature, which independently validates the previous 0-1.5 Myr reconstruction and extends it back to 5.3 Myr ago. We find that deep-sea temperature and sea level generally decreased through time, but distinctly out of synchrony, which is remarkable given the importance of ice-albedo feedbacks on the radiative forcing of climate. In particular, we observe a large temporal offset during the onset of Plio-Pleistocene ice ages, between a marked cooling step at 2.73 Myr ago and the first major glaciation at 2.15 Myr ago. Last, we tentatively infer that ice sheets may have grown largest during glacials with more modest reductions in deep-sea temperature.

  6. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    Science.gov (United States)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  7. Deep-learning: investigating deep neural networks hyper-parameters and comparison of performance to shallow methods for modeling bioactivity data.

    Science.gov (United States)

    Koutsoukas, Alexios; Monaghan, Keith J; Li, Xiaoli; Huan, Jun

    2017-06-28

    In recent years, research in artificial neural networks has resurged, now under the deep-learning umbrella, and grown extremely popular. Recently reported success of DL techniques in crowd-sourced QSAR and predictive toxicology competitions has showcased these methods as powerful tools in drug-discovery and toxicology research. The aim of this work was twofold: first, a large number of hyper-parameter configurations were explored to investigate how they affect the performance of DNNs and to provide starting points when tuning DNNs; second, their performance was compared to popular methods widely employed in the field of cheminformatics, namely Naïve Bayes, k-nearest neighbor, random forest, and support vector machines. Moreover, the robustness of the machine learning methods to different levels of artificially introduced noise was assessed. The open-source Caffe deep-learning framework and modern NVidia GPU units were utilized to carry out this study, allowing a large number of DNN configurations to be explored. We show that feed-forward deep neural networks are capable of achieving strong classification performance and outperform shallow methods across diverse activity classes when optimized. Hyper-parameters that were found to play a critical role are the activation function, dropout regularization, number of hidden layers, and number of neurons. When compared to the other methods, tuned DNNs were found to outperform them statistically (p < 0.01, Wilcoxon test). On average, DNNs achieved an MCC 0.149 units higher than NB, 0.092 higher than kNN, 0.052 higher than SVM with a linear kernel, 0.021 higher than RF, and 0.009 higher than SVM with a radial basis function kernel. When exploring robustness to noise, non-linear methods were found to perform well when dealing with low levels of noise, lower than or equal to 20%, however when dealing with higher levels of noise, higher than 30%, the Naïve Bayes method was found to perform well and even outperform at the highest level of
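
    The MCC differences quoted above can be made concrete with the standard Matthews correlation coefficient formula; the confusion-matrix counts below are illustrative, not from the study:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect),
    and is robust to class imbalance, which is why it is a common comparison
    metric for classifiers.
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical classifier results on a 200-compound test set.
score = mcc(tp=80, tn=70, fp=20, fn=30)   # ~0.503
```

    A difference of 0.149 MCC units, as reported between the tuned DNNs and Naïve Bayes, is therefore a substantial share of the usable 0-to-1 range.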

  8. Optimization of spatiotemporally fractionated radiotherapy treatments with bounds on the achievable benefit

    Science.gov (United States)

    Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid

    2018-01-01

    Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest
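
    The BED quantity driving these optimization models is the standard linear-quadratic expression BED = n·d·(1 + d/(α/β)). A small sketch shows why hypofractionating part of the tumor raises its BED for the same physical dose; the fraction counts and α/β value below are illustrative, not the paper's planning parameters:

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose for n fractions of dose d (Gy),
    linear-quadratic model with tissue parameter alpha/beta (Gy)."""
    return n * d * (1.0 + d / alpha_beta)

# Same 50 Gy physical dose, hypofractionated vs. conventionally fractionated
# (alpha/beta = 10 Gy, a value often assumed for tumors):
hypo = bed(n=5, d=10.0, alpha_beta=10.0)   # 5 x 10 Gy  -> BED = 100 Gy
conv = bed(n=25, d=2.0, alpha_beta=10.0)   # 25 x 2 Gy  -> BED = 60 Gy
```

    Because BED is quadratic in the fraction dose d, the optimization problems over per-fraction dose distributions become nonconvex QCQPs, which is exactly why the paper needs local solvers plus a semidefinite relaxation for rigorous bounds.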

  9. Detector for deep well logging

    International Nuclear Information System (INIS)

    1976-01-01

    A substantial improvement in the useful life and efficiency of a deep-well scintillation detector is achieved by a unique construction wherein the steel cylinder enclosing the sodium iodide scintillation crystal is provided with a tapered recess to receive a glass window that has a high transmittance at the critical wavelength and, for glass, a high coefficient of thermal expansion. A special high-temperature epoxy adhesive composition is employed to form a relatively thick sealing annulus that keeps the glass window in the tapered recess and compensates for the differences in coefficients of expansion between the container and the glass, so as to maintain a hermetic seal as the unit is subjected to a wide range of temperatures.

  10. GMSK Modulation for Deep Space Applications

    Science.gov (United States)

    Shambayati, Shervin; Lee, Dennis K.

    2012-01-01

    Due to the scarcity of spectrum in the 8.42 GHz deep space X-band allocation, many deep space missions are now considering higher-order modulation schemes instead of the traditional binary phase shift keying (BPSK). One such scheme is pre-coded Gaussian minimum shift keying (GMSK). GMSK is an excellent candidate for deep space missions: it is a constant-envelope, bandwidth-efficient modulation whose frame error rate (FER) performance with perfect carrier tracking and a proper receiver structure is nearly identical to that of BPSK. There are several issues that need to be addressed with GMSK, however. Specifically, we are interested in the combined effects of spectrum limitations and receiver structure on the coded performance of the X-band link using GMSK. The receivers typically used for GMSK demodulation are variations on offset quadrature phase shift keying (OQPSK) receivers. In this paper we consider three receivers: the standard DSN OQPSK receiver, the DSN OQPSK receiver with filtered input, and an optimum OQPSK receiver with filtered input. For the DSN OQPSK receiver we show experimental results with (8920, 1/2), (8920, 1/3) and (8920, 1/6) turbo codes in terms of their error rate performance. We also consider the tracking performance of this receiver as a function of data rate, channel code, and the carrier loop signal-to-noise ratio (SNR). For the other two receivers we derive theoretical results showing that for a given loop bandwidth, receiver structure, and channel code, there is a lower data rate limit on GMSK below which a higher SNR than what is required to achieve the required FER on the link is needed. These limits stem from the minimum loop signal-to-noise ratio requirements for the receivers to achieve lock. As a result, for a given channel code and a given FER, there could be a gap between the maximum data rate that BPSK can support without violating the spectrum limits and the minimum data rate that GMSK can support

  11. The Relationships among Middle School Students' Motivational Orientations, Learning Strategies, and Academic Achievement

    Science.gov (United States)

    McClintic-Gilbert, Megan S.; Corpus, Jennifer Henderlong; Wormington, Stephanie V.; Haimovitz, Kyla

    2013-01-01

    The present study examined the extent to which middle school students' (N = 90) learning strategies mediated the relationship between their motivational orientations and academic achievement. Survey data revealed that higher degrees of intrinsic motivation predicted the use of both deep and surface learning strategies, whereas higher degrees of…

  12. Cultural Relevance and Working with Inner City Youth Populations to Achieve Civic Engagement

    Science.gov (United States)

    Ward, Shakoor; Webster, Nicole

    2011-01-01

    This article helps Extension professionals consider the culturally relevant needs of inner city residents in hopes of achieving ongoing civic engagement and appropriate program activities in these communities. Having a deep understanding of how the various dimensions of marginalized community life among inner city populations affect participation in…

  13. Low-complexity object detection with deep convolutional neural network for embedded systems

    Science.gov (United States)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well-known that consolidation of an embedded system for CNN-based object detection is more challenging due to computation and memory requirement comparing with problems like image classification. To achieve these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully-convolutional deep neural network for generic object detection task inspired by one of the fastest framework, YOLO.1 The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box as in YOLO. Hence, the network is able to detect any objects without any limitations in the size of the objects. However, unlike YOLO, all the layers in the proposed network is fully-convolutional. Thus, it is able to take input images of any size. We pick face detection as an use case. We evaluate the proposed model for face detection on FDDB dataset and Widerface dataset. As another use case of generic object detection, we evaluate its performance on PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves comparative accuracy comparing with the state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory-BW by 3 - 4× comparing with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF-model provides additional 4× memory reduction while keeping the accuracy nearly as good as the floating-point model. Moreover, the fixed- point model is capable of achieving 20× faster inference speed comparing with the floating-point model. Thus, the proposed method is promising for embedded implementations.

  14. Phased Retrofits in Existing Homes In Florida Phase I: Shallow and Deep Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Parker, D. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Sutherland, K. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Chasar, D. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Montemurno, J. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Amos, B. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States); Kono, J. [Building America Partnership for Improved Residential Construction, Cocoa, FL (United States)

    2016-02-04

    The U.S. Department of Energy (DOE) Building America program, in collaboration with Florida Power and Light (FPL), conducted a phased residential energy-efficiency retrofit program. This research sought to establish the annual energy and peak demand reductions achieved by the technologies applied at two levels of retrofit - shallow and deep - with savings levels approaching the Building America program goal of reducing whole-house energy use by 40%. Under the Phased Deep Retrofit (PDR) project, we installed phased energy-efficiency retrofits in a sample of 56 existing, all-electric homes. End-use savings and economic evaluation results from the phased measure packages and single measures are summarized in this report. Project results will be of interest to utility program designers, weatherization evaluators, and the housing remodel industry. Shallow retrofits were conducted in all homes from March to June 2013. The measures for this phase were chosen based on ease of installation, targeting lighting (CFLs and LED lamps), domestic hot water (wraps and showerheads), refrigeration (cleaning of coils), the pool pump (reduction of operating hours), and the home entertainment center (smart plugs). Deep retrofits were conducted on a subset of ten PDR homes from May 2013 through March 2014. Measures included new air source heat pumps, duct repair, ceiling insulation, heat pump water heaters, variable speed pool pumps and learning thermostats. Major appliances such as refrigerators and dishwashers were replaced where they were old and inefficient.

  15. Deep-sea environment and biodiversity of the West African Equatorial margin

    Science.gov (United States)

    Sibuet, Myriam; Vangriesheim, Annick

    2009-12-01

    The long-term BIOZAIRE multidisciplinary deep-sea environmental program on the West Equatorial African margin organized in partnership between Ifremer and TOTAL aimed at characterizing the benthic community structure in relation with physical and chemical processes in a region of oil and gas interest. The morphology of the deep Congo submarine channel and the sedimentological structures of the deep-sea fan were established during the geological ZAIANGO project and helped to select study sites ranging from 350 to 4800 m water depth inside or near the channel and away from its influence. Ifremer conducted eight deep-sea cruises on board research vessels between 2000 and 2005. Standardized methods of sampling together with new technologies such as the ROV Victor 6000 and its associated instrumentation were used to investigate this poorly known continental margin. In addition to the study of sedimentary environments more or less influenced by turbidity events, the discovery of one of the largest cold seeps near the Congo channel and deep coral reefs extends our knowledge of the different habitats of this margin. This paper presents the background, objectives and major results of the BIOZAIRE Program. It highlights the work achieved in the 16 papers in this special issue. This synthesis paper describes the knowledge acquired at a regional and local scale of the Equatorial East Atlantic margin, and tackles new interdisciplinary questions to be answered in the various domains of physics, chemistry, taxonomy and ecology to better understand the deep-sea environment in the Gulf of Guinea.

  16. Sex-work harm reduction.

    Science.gov (United States)

    Rekart, Michael L

    2005-12-17

    Sex work is an extremely dangerous profession. The use of harm-reduction principles can help to safeguard sex workers' lives in the same way that drug users have benefited from drug-use harm reduction. Sex workers are exposed to serious harms: drug use, disease, violence, discrimination, debt, criminalisation, and exploitation (child prostitution, trafficking for sex work, and exploitation of migrants). Successful and promising harm-reduction strategies are available: education, empowerment, prevention, care, occupational health and safety, decriminalisation of sex workers, and human-rights-based approaches. Successful interventions include peer education, training in condom-negotiating skills, safety tips for street-based sex workers, male and female condoms, the prevention-care synergy, occupational health and safety guidelines for brothels, self-help organisations, and community-based child protection networks. Straightforward and achievable steps are available to improve the day-to-day lives of sex workers while they continue to work. Conceptualising and debating sex-work harm reduction as a new paradigm can hasten this process.

  17. Exploring Students' Reflective Thinking Practice, Deep Processing Strategies, Effort, and Achievement Goal Orientations

    Science.gov (United States)

    Phan, Huy Phuong

    2009-01-01

    Recent research indicates that study processing strategies, effort, reflective thinking practice, and achievement goals are important factors contributing to the prediction of students' academic success. Very few studies have combined these theoretical orientations within one conceptual model. This study tested a conceptual model that included, in…

  18. A case of deep vein thrombosis with postthrombotic syndrome cured by homoeopathic therapy

    Directory of Open Access Journals (Sweden)

    Gyandas G Wadhwani

    2015-01-01

    A 46-year-old woman consulted for right-sided deep vein thrombosis in external iliac, common femoral, superficial femoral and popliteal veins with extension along with postthrombotic syndrome. After homoeopathic consultation, she was prescribed Argentum nitricum in ascending LM potencies. Symptomatic relief was reported within 2 weeks of treatment, and gradually the quality of life improved after simultaneous reduction in pain due to other complaints of sciatica and osteoarthrosis. Venous Doppler studies repeated a year later showed complete resolution of the medical condition with homoeopathic drug therapy alone. The physical examination also revealed a reduction in limb circumference.

  19. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...

  20. Deep learning evaluation using deep linguistic processing

    OpenAIRE

    Kuhnle, Alexander; Copestake, Ann

    2017-01-01

    We discuss problems with the standard approaches to evaluation for tasks like visual question answering, and argue that artificial data can be used to address these as a complement to current practice. We demonstrate that with the help of existing 'deep' linguistic processing technology we are able to create challenging abstract datasets, which enable us to investigate the language understanding abilities of multimodal deep learning models in detail, as compared to a single performance value ...

  1. Some Remarks on Practical Aspects of Laboratory Testing of Deep Soil Mixing Composites Achieved in Organic Soils

    Science.gov (United States)

    Kanty, Piotr; Rybak, Jarosław; Stefaniuk, Damian

    2017-10-01

    This paper presents the results of laboratory testing of organic soil-cement samples. The research program continues the authors' previously reported experience with cement-fly ash-soil sample testing. Over 100 compression tests and a dozen tension tests have been carried out altogether; several samples were not tested to failure until more than one year after they were formed. Several factors (the large number of tested samples, the long observation time, testing in complex loading cycles, and the possibility of registering loads and deformations in the axial and lateral directions) made it possible to consider numerous interdependencies, three of which are presented in this work: the increase of compressive strength over time, the stiffness of soil-cement in relation to strength, and the tensile strength. Compressive strength, elastic modulus and tensile resistance of cubic samples were examined. Samples were mixed and stored under laboratory conditions. Numerical analyses in the Finite Element Method code Z_Soil were then performed on the basis of the laboratory test results. The computations show that cement-based stabilization of organic soil carries serious risks (in terms of material capacity and stiffness) and that Deep Soil Mixing technology should not be recommended for it. The numerical analysis presented in the study below includes only one type of organic and sandy soil and several possible geometric combinations. Despite that, it clearly indicates that designing DSM columns in organic soil may involve considerable risk and that settlement may reach excessive values. During in situ mixing, the organic material surrounded by sand layers surely mixes with it in certain areas; however, this has not been examined, and it is difficult to assume such mixing already at the design stage. In case of designing the DSM columns which goes through a

  2. Evaluation of a deep learning architecture for MR imaging prediction of ATRX in glioma patients

    Science.gov (United States)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J.

    2018-02-01

    Predicting mutation/loss of the alpha-thalassemia/mental retardation syndrome X-linked (ATRX) gene from MR imaging is of high importance, since it is a predictor of response and prognosis in brain tumors. In this study, we compare a deep neural network approach based on a residual deep neural network (ResNet) architecture with one based on a classical machine learning approach, and evaluate their ability to predict ATRX mutation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50 layers) architecture, pre-trained on ImageNet data, was the best-performing model, achieving an f1 score of 0.91 on the test set of 35 cases (classification of a slice as no tumor, ATRX mutated, or mutated). The SVM classifier achieved 0.63 for differentiating the FLAIR signal abnormality regions of the test patients based on their mutation status. We report a method that alleviates the need for extensive preprocessing and serves as a proof of concept that deep neural network architectures can be used to predict molecular biomarkers from routine medical images.
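    For reference, the f1 score used above combines precision and recall into a single number. A minimal per-class computation, with toy counts that are not the study's data:

    ```python
    # Per-class F1 score from confusion counts (toy values, not the study's).
    # F1 is the harmonic mean of precision and recall, equivalently
    # 2*TP / (2*TP + FP + FN).

    def f1_score(tp, fp, fn):
        precision = tp / (tp + fp)   # fraction of predicted positives that are right
        recall = tp / (tp + fn)      # fraction of actual positives that are found
        return 2 * precision * recall / (precision + recall)

    # e.g. a class with 90 true positives, 10 false positives, 8 false negatives
    print(round(f1_score(tp=90, fp=10, fn=8), 3))  # → 0.909
    ```

    In a multi-class setting such as the three-way slice classification above, a per-class F1 is typically computed one-vs-rest and then averaged.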

  3. A preliminary examination of the diagnostic value of deep learning in hip osteoarthritis.

    Directory of Open Access Journals (Sweden)

    Yanping Xue

    Hip osteoarthritis (OA) is a common disease among middle-aged and elderly people. Conventionally, hip OA is diagnosed by manually assessing X-ray images. This study took the hip joint as the object of observation and explored the diagnostic value of deep learning in hip osteoarthritis. A deep convolutional neural network (CNN) was trained and tested on 420 hip X-ray images to automatically diagnose hip OA. Compared against the chief physicians, this CNN model achieved a balance of high sensitivity (95.0%) and high specificity (90.7%), as well as an accuracy of 92.8%. The CNN model's performance is comparable to that of an attending physician with 10 years of experience. The results of this study indicate that deep learning has promising potential in the field of intelligent medical image diagnosis.
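    Sensitivity, specificity and accuracy all derive from the 2×2 confusion matrix. A minimal sketch, with hypothetical counts chosen so the rates match the figures reported above (the study's actual counts are not given):

    ```python
    # Diagnostic metrics from a 2x2 confusion matrix.
    # Hypothetical counts chosen to reproduce the reported rates;
    # not the study's actual data.

    def diagnostic_metrics(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)               # true positive rate
        specificity = tn / (tn + fp)               # true negative rate
        accuracy = (tp + tn) / (tp + fn + tn + fp) # overall agreement
        return sensitivity, specificity, accuracy

    sens, spec, acc = diagnostic_metrics(tp=190, fn=10, tn=195, fp=20)
    print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
    # → sensitivity=95.0% specificity=90.7% accuracy=92.8%
    ```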

  4. Mechanical pre-planting weed control in short rotation coppice using deep forestry ploughing techniques

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    This report describes a trial by Border biofuels to investigate the deep forestry plough as a mechanical pre-planting weed control method to reduce weed infestations in willow coppice and thus contribute to improved establishment and eventual yield. The results suggest that there was a considerable increase in biomass productivity from the deep ploughed area compared to the conventionally cultivated area at all three SRC sites tested. This technique also suggests that the deep forestry ploughing provides the benefit of much reduced levels of seed germination of many annual weed species and a reduction in levels of perennial weed infestation. It is not possible at this stage to predict the longer term benefits in terms of harvestable biomass productivity but it may be considered that the improved establishment and lack of weed competition would consistently produce higher yields of biomass than plantations which suffer from persistent and invasive weed competition. (author)

  5. Deep Drawing of High-Strength Tailored Blanks by Using Tailored Tools

    Directory of Open Access Journals (Sweden)

    Thomas Mennecart

    2016-01-01

    In most forming processes based on tailored blanks, the tool material remains the same as that of sheet metal blanks without tailored properties. A novel concept of lightweight construction for deep drawing tools is presented in this work to improve the forming behavior of tailored blanks. The investigations presented here deal with the forming of tailored blanks of dissimilar strengths using tailored dies made of two different materials. In the area of the steel blank with higher strength, typical tool steel is used. In the area of the low-strength steel, a hybrid tool made out of a polymer and a fiber-reinforced surface replaces the steel half. Cylindrical cups of DP600/HX300LAD are formed and analyzed regarding their formability. The use of two different halves of tool materials shows improved blank thickness distribution, weld-line movement and pressure distribution compared to the use of two steel halves. An improvement in strain distribution is also observed by the inclusion of springs in the polymer side of tools, which is implemented to control the material flow in the die. Furthermore, a reduction in tool weight of approximately 75% can be achieved by using this technique. An accurate finite element modeling strategy is developed to analyze the problem numerically and is verified experimentally for the cylindrical cup. This strategy is then applied to investigate the thickness distribution and weld-line movement for a complex geometry, and its transferability is validated. The inclusion of springs in the hybrid tool leads to better material flow, which results in reduction of weld-line movement by around 60%, leading to more uniform thickness distribution.

  6. Acute and chronic changes in brain activity with deep brain stimulation for refractory depression.

    Science.gov (United States)

    Conen, Silke; Matthews, Julian C; Patel, Nikunj K; Anton-Rodriguez, José; Talbot, Peter S

    2018-04-01

    Deep brain stimulation is a potential option for patients with treatment-refractory depression. Deep brain stimulation benefits have been reported when targeting either the subgenual cingulate or ventral anterior capsule/nucleus accumbens. However, not all patients respond, and the optimum stimulation site is uncertain. We compared deep brain stimulation of the subgenual cingulate and ventral anterior capsule/nucleus accumbens separately and combined in the same seven treatment-refractory depression patients, and investigated regional cerebral blood flow changes associated with acute and chronic deep brain stimulation. Deep brain stimulation response was defined as reduction in Montgomery-Asberg Depression Rating Scale score from baseline of ≥50%, and remission as a Montgomery-Asberg Depression Rating Scale score ≤8. Changes in regional cerebral blood flow were assessed using [15O]water positron emission tomography. Remitters had higher relative regional cerebral blood flow in the prefrontal cortex at baseline and all subsequent time-points compared to non-remitters and non-responders, with prefrontal cortex regional cerebral blood flow generally increasing with chronic deep brain stimulation. These effects were consistent regardless of stimulation site. Overall, no significant regional cerebral blood flow changes were apparent when deep brain stimulation was acutely interrupted. Deep brain stimulation improved treatment-refractory depression severity in the majority of patients, with consistent changes in local and distant brain regions regardless of target stimulation. Remission of depression was reached in patients with higher baseline prefrontal regional cerebral blood flow. Because of the small sample size these results are preliminary, and further evaluation is necessary to determine whether prefrontal cortex regional cerebral blood flow could be a predictive biomarker of treatment response.
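    The response and remission criteria stated in the abstract translate directly into a decision rule. A minimal sketch (the thresholds are the abstract's; the helper function and example scores are ours):

    ```python
    # Outcome classification per the abstract's definitions:
    # remission = MADRS score <= 8; response = >=50% reduction from baseline.
    # The function and example scores are illustrative, not the study's data.

    def classify_outcome(baseline_madrs, follow_up_madrs):
        if follow_up_madrs <= 8:
            return "remission"
        if follow_up_madrs <= 0.5 * baseline_madrs:
            return "response"
        return "non-response"

    print(classify_outcome(34, 7))    # remission
    print(classify_outcome(34, 16))   # response (>=50% reduction)
    print(classify_outcome(34, 25))   # non-response
    ```

    Note that remission is checked first, since a score ≤8 will almost always also meet the 50%-reduction criterion.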

  7. A PILOT FOR A VERY LARGE ARRAY H I DEEP FIELD

    International Nuclear Information System (INIS)

    Fernández, Ximena; Van Gorkom, J. H.; Schiminovich, David; Hess, Kelley M.; Pisano, D. J.; Kreckel, Kathryn; Momjian, Emmanuel; Popping, Attila; Oosterloo, Tom; Chomiuk, Laura; Verheijen, M. A. W.; Henning, Patricia A.; Bershady, Matthew A.; Wilcots, Eric M.; Scoville, Nick

    2013-01-01

    High-resolution 21 cm H I deep fields provide spatially and kinematically resolved images of neutral hydrogen at different redshifts, which are key to understanding galaxy evolution across cosmic time and testing predictions of cosmological simulations. Here we present results from a pilot for an H I deep field done with the Karl G. Jansky Very Large Array (VLA). We take advantage of the newly expanded capabilities of the telescope to probe the redshift interval 0 < z < 0.193 in one observation. We observe the COSMOS field for 50 hr, which contains 413 galaxies with optical spectroscopic redshifts in the imaged field of 34' × 34' and the observed redshift interval. We have detected neutral hydrogen gas in 33 galaxies in different environments spanning the probed redshift range, including three without a previously known spectroscopic redshift. The detections have a range of H I and stellar masses, indicating the diversity of galaxies we are probing. We discuss the observations, data reduction, results, and highlight interesting detections. We find that the VLA's B-array is the ideal configuration for H I deep fields since its long spacings mitigate radio frequency interference. This pilot shows that the VLA is ready to carry out such a survey, and serves as a test for future H I deep fields planned with other Square Kilometer Array pathfinders.
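    Covering 0 < z < 0.193 in one observation means the receiver must span the corresponding range of redshifted 21 cm line frequencies, via the standard relation f_obs = f_rest / (1 + z). A quick sketch of the band edges (the relation is standard; the survey's exact tuning is not stated here):

    ```python
    # Observed frequency of the redshifted 21 cm H I line: f_obs = f_rest / (1 + z).
    # Shows the frequency band a 0 < z < 0.193 survey must cover.

    F_REST_MHZ = 1420.405751768   # rest frequency of the H I hyperfine transition

    def observed_freq_mhz(z):
        return F_REST_MHZ / (1.0 + z)

    for z in (0.0, 0.1, 0.193):
        print(f"z = {z:5.3f}  ->  {observed_freq_mhz(z):7.1f} MHz")
    ```

    The band runs from about 1420 MHz at z = 0 down to roughly 1191 MHz at z = 0.193, which is why a single wideband VLA observation can probe the whole interval.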

  8. A PILOT FOR A VERY LARGE ARRAY H I DEEP FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, Ximena; Van Gorkom, J. H.; Schiminovich, David [Department of Astronomy, Columbia University, New York, NY 10027 (United States); Hess, Kelley M. [Department of Astronomy, Astrophysics, Cosmology and Gravity Centre, University of Cape Town, Private Bag X3, Rondebosch 7701 (South Africa); Pisano, D. J. [Department of Physics, West Virginia University, P.O. Box 6315, Morgantown, WV 26506 (United States); Kreckel, Kathryn [Max Planck Institute for Astronomy, Koenigstuhl 17, D-69117 Heidelberg (Germany); Momjian, Emmanuel [National Radio Astronomy Observatory, Socorro, NM 87801 (United States); Popping, Attila [International Centre for Radio Astronomy Research (ICRAR), The University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009 (Australia); Oosterloo, Tom [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, NL-7990 AA Dwingeloo (Netherlands); Chomiuk, Laura [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Verheijen, M. A. W. [Kapteyn Astronomical Institute, University of Groningen, Postbus 800, NL-9700 AV Groningen (Netherlands); Henning, Patricia A. [Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131 (United States); Bershady, Matthew A.; Wilcots, Eric M. [Department of Astronomy, University of Wisconsin-Madison, Madison, WI 53706 (United States); Scoville, Nick, E-mail: ximena@astro.columbia.edu [Department of Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States)

    2013-06-20

    High-resolution 21 cm H I deep fields provide spatially and kinematically resolved images of neutral hydrogen at different redshifts, which are key to understanding galaxy evolution across cosmic time and testing predictions of cosmological simulations. Here we present results from a pilot for an H I deep field done with the Karl G. Jansky Very Large Array (VLA). We take advantage of the newly expanded capabilities of the telescope to probe the redshift interval 0 < z < 0.193 in one observation. We observe the COSMOS field for 50 hr, which contains 413 galaxies with optical spectroscopic redshifts in the imaged field of 34' × 34' and the observed redshift interval. We have detected neutral hydrogen gas in 33 galaxies in different environments spanning the probed redshift range, including three without a previously known spectroscopic redshift. The detections have a range of H I and stellar masses, indicating the diversity of galaxies we are probing. We discuss the observations, data reduction, results, and highlight interesting detections. We find that the VLA's B-array is the ideal configuration for H I deep fields since its long spacings mitigate radio frequency interference. This pilot shows that the VLA is ready to carry out such a survey, and serves as a test for future H I deep fields planned with other Square Kilometer Array pathfinders.

  9. Compression of a Deep Competitive Network Based on Mutual Information for Underwater Acoustic Targets Recognition

    Directory of Open Access Journals (Sweden)

    Sheng Shen

    2018-04-01

    The accuracy of underwater acoustic target recognition from limited ship-radiated noise can be improved by a deep neural network trained with a large number of unlabeled samples. However, redundant features learned by a deep neural network have negative effects on recognition accuracy and efficiency. A compressed deep competitive network is proposed to learn and extract features from ship-radiated noise. The core ideas of the algorithm are: (1) competitive learning: by integrating competitive learning into the restricted Boltzmann machine learning algorithm, the hidden units share weights within each predefined group; (2) network pruning: pruning based on mutual information is deployed to remove redundant parameters and further compress the network. Experiments on real ship-radiated noise show that the network can increase recognition accuracy with fewer, more informative features. The compressed deep competitive network achieves a classification accuracy of 89.1%, which is 5.3% higher than the deep competitive network and 13.1% higher than state-of-the-art signal-processing feature extraction methods.
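    The mutual-information pruning idea can be sketched in miniature: estimate the mutual information between each hidden unit's (binarized) activations and the class labels, then drop the units carrying little information. The toy data and thresholding below are ours; the paper's exact estimator is not specified here:

    ```python
    # Pruning-criterion sketch: mutual information between a binary hidden
    # unit's activations and class labels. Units with near-zero MI carry no
    # label information and are candidates for pruning. Toy data only.
    import math
    from collections import Counter

    def mutual_information(xs, ys):
        """I(X;Y) in bits from paired discrete observations."""
        n = len(xs)
        pxy = Counter(zip(xs, ys))
        px, py = Counter(xs), Counter(ys)
        return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    labels      = [0, 0, 0, 0, 1, 1, 1, 1]
    informative = [0, 0, 0, 1, 1, 1, 1, 1]   # tracks the label closely -> keep
    redundant   = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label -> prune

    print(mutual_information(informative, labels))  # high (about 0.55 bits)
    print(mutual_information(redundant, labels))    # 0.0 bits
    ```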

  10. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    Science.gov (United States)

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is a growing interest in applying machine learning (ML) techniques to clinical problems, the use of deep learning in healthcare has gained attention only recently. Deep learning, such as the deep neural network (DNN), has achieved impressive results in speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to interpret due to the complexity of its framework, and it had not yet been demonstrated to outperform other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare a DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that the DNN and a gradient boosting decision tree (GBDT) yield similarly high prediction accuracies, better than the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, the DNN achieves its optimal results using less patient data than the GBDT method.

  11. Closed loop deep brain stimulation: an evolving technology.

    Science.gov (United States)

    Hosain, Md Kamal; Kouzani, Abbas; Tye, Susannah

    2014-12-01

    Deep brain stimulation is an effective and safe medical treatment for a variety of neurological and psychiatric disorders, including Parkinson's disease, essential tremor, dystonia, and treatment-resistant obsessive compulsive disorder. A closed-loop deep brain stimulation (CLDBS) system automatically adjusts stimulation parameters in real time based on the brain's response. CLDBS continues to evolve with advances in brain stimulation technologies. This paper surveys the existing systems developed for CLDBS, highlighting the issues associated with them, including feedback-signal recording and processing, stimulation parameter setting, control algorithms, wireless telemetry, size, and power consumption. The benefits and limitations of the existing CLDBS systems are also presented. While robust clinical proof of the technology's benefits remains to be achieved, it has the potential to offer several advantages over open-loop DBS: CLDBS can improve the efficiency and efficacy of therapy, eliminate the lengthy start-up period for programming and adjustment, provide personalized treatment, and make parameter setting automatic and adaptive.

  12. A deep convolutional neural network for recognizing foods

    Science.gov (United States)

    Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec

    2015-12-01

    Controlling food intake is an efficient way for individuals to tackle the obesity problem seen in countries worldwide. This is achievable by developing a smartphone application able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction such as HOG and Gabor. Recent advances on large-scale object recognition datasets such as ImageNet have revealed that deep convolutional neural networks (CNNs) possess more representational power than hand-crafted features. The main challenge with CNNs is finding the appropriate architecture for each problem. In this paper, we propose a deep CNN consisting of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods, improving on the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs trained in two separate runs, we are able to improve the classification performance by 21.5%.
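    Parameter counts like the 769,988 quoted above are tallied layer by layer: a convolutional layer with a k×k kernel, c_in input channels, and c_out filters has (k·k·c_in + 1)·c_out weights, the +1 being the bias. A sketch with hypothetical layer sizes (not the paper's actual network):

    ```python
    # How CNN parameter counts are tallied. Each conv layer contributes
    # (kernel_h * kernel_w * in_channels + 1) * out_channels parameters,
    # where the +1 is the per-filter bias. Hypothetical layers, not the
    # paper's architecture.

    def conv_params(kh, kw, cin, cout, bias=True):
        return (kh * kw * cin + (1 if bias else 0)) * cout

    layers = [
        (3, 3, 3, 32),     # 3x3 conv on an RGB input, 32 filters -> 896 params
        (3, 3, 32, 64),    # -> 18,496 params
        (3, 3, 64, 128),   # -> 73,856 params
    ]
    total = sum(conv_params(*layer) for layer in layers)
    print(total)  # → 93248
    ```

    Keeping the total small, as the paper does, is what makes the model practical for a smartphone application.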

  13. Superplasticity and Micro-arrayed Deep-Drawing Behavior of Ni-Co/GO Nanocomposite

    Science.gov (United States)

    Wang, Guofeng; Zhao, Shanshan; Li, You; Yang, Chao; Liu, Siyu

    2017-10-01

    In this article, Ni-Co/GO nanocomposite was fabricated by the AC pulse electrodeposition method. The room-temperature strength and the superplasticity of the nanocomposite were investigated by tensile tests. A 5 × 5 micro-arrayed deep-drawing die was designed to explore the feasibility of micro-forming. The as-deposited material has a narrow grain size distribution with a mean grain size of 50 nm. The addition of GO as a reinforcing phase can effectively enhance the room-temperature tensile strength of the nanocomposite, but reduces the plasticity. When adding GO to the plating bath, a maximum elongation of 467% was observed for the specimen with a GO content of 0.01 g/L at 773 K and a strain rate of 1.67 × 10⁻³ s⁻¹. Micro-arrayed deep-drawing tests were subsequently performed with a male die diameter of 0.58 mm and a female die diameter of 0.8 mm. The experimental relative drawing height values were measured and compared with deep-drawing parts formed without the GO additive. It is found that micro-arrayed deep-drawing with a rigid male die at high temperature is feasible, and parts with good shape could be obtained. The thickness distribution analysis of the deep-drawing parts showed that wall thickness ranged from 53 to 95 μm, with the thickness reduction most pronounced at the punch fillet.

  14. Multiple sclerosis deep grey matter: the relation between demyelination, neurodegeneration, inflammation and iron.

    Science.gov (United States)

    Haider, Lukas; Simeonidou, Constantina; Steinberger, Günther; Hametner, Simon; Grigoriadis, Nikolaos; Deretzi, Georgia; Kovacs, Gabor G; Kutzelnigg, Alexandra; Lassmann, Hans; Frischer, Josa M

    2014-12-01

    In multiple sclerosis (MS), diffuse degenerative processes in the deep grey matter have been associated with clinical disabilities. We performed a systematic study in MS deep grey matter with a focus on the incidence and topographical distribution of lesions in relation to white matter and cortex in a total sample of 75 MS autopsy patients and 12 controls. In addition, detailed analyses of inflammation, acute axonal injury, iron deposition and oxidative stress were performed. MS deep grey matter was affected by two different processes: the formation of focal demyelinating lesions and diffuse neurodegeneration. Deep grey matter demyelination was most prominent in the caudate nucleus and hypothalamus and could already be seen in early MS stages. Lesions developed on the background of inflammation. Deep grey matter inflammation was intermediate between low inflammatory cortical lesions and active white matter lesions. Demyelination and neurodegeneration were associated with oxidative injury. Iron was stored primarily within oligodendrocytes and myelin fibres and released upon demyelination. In addition to focal demyelinated plaques, the MS deep grey matter also showed diffuse and global neurodegeneration. This was reflected by a global reduction of neuronal density, the presence of acutely injured axons, and the accumulation of oxidised phospholipids and DNA in neurons, oligodendrocytes and axons. Neurodegeneration was associated with T cell infiltration, expression of inducible nitric oxide synthase in microglia and profound accumulation of iron. Thus, both focal lesions as well as diffuse neurodegeneration in the deep grey matter appeared to contribute to the neurological disabilities of MS patients.

  15. White blood cells identification system based on convolutional deep neural learning networks.

    Science.gov (United States)

    Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A

    2017-11-16

White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, classification of a limited balanced WBC dataset is performed using WBCsNet as a pre-trained network. During our experiments, three different public WBC datasets (2551 images) containing 5 healthy WBC types have been used. The overall accuracy achieved by the proposed WBCsNet is 96.1%, higher than the different transfer learning approaches and the previous traditional identification systems. We also present visualizations of the WBCsNet activations, which show a stronger response than those of the pre-trained networks. In conclusion, a novel WBC identification system based on deep learning is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.
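The first transfer-learning route the record describes, extracting deep activation features from a frozen pre-trained network and feeding them to a traditional classifier, can be sketched as follows. This is a minimal NumPy illustration in which a fixed random projection stands in for a real pre-trained CNN; all data, labels, and dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": two WBC classes whose pixel statistics differ.
n, d = 200, 64
X = rng.normal(size=(n, d))
y = (np.arange(n) % 2).astype(float)     # alternating class labels
X[y == 1] += 1.0                         # shift class-1 images apart

# Frozen "pre-trained" feature extractor: a fixed random projection
# plus ReLU, standing in for the activations of a real CNN layer.
W = rng.normal(size=(d, 32))
def deep_activation_features(images):
    return np.maximum(images @ W, 0.0)   # ReLU activations

# Traditional classifier (ridge-regularized least squares) trained on
# the extracted features only; the "network" weights W are never updated.
feats = deep_activation_features(X)
A = np.hstack([feats, np.ones((n, 1))])  # add bias column
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y)
acc = ((A @ w > 0.5) == (y == 1)).mean()
print(f"training accuracy on activation features: {acc:.2f}")
```

The second route, fine-tuning, would instead continue updating the extractor's weights on the new dataset.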

  16. Achieving emissions reduction through oil sands cogeneration in Alberta’s deregulated electricity market

    International Nuclear Information System (INIS)

    Ouellette, A.; Rowe, A.; Sopinka, A.; Wild, P.

    2014-01-01

The province of Alberta faces the challenge of balancing its commitment to reduce CO2 emissions and the growth of its energy-intensive oil sands industry. Currently, these operations rely on the Alberta electricity system and on-site generation to satisfy their steam and electricity requirements. Most of the on-site generation units produce steam and electricity through the process of cogeneration. It is unclear to what extent new and existing operations will continue to develop cogeneration units or rely on electricity from the Alberta grid to meet their energy requirements in the near future. This study explores the potential for reductions in fuel usage and CO2 emissions by increasing the penetration of oil sands cogeneration in the provincial generation mixture. EnergyPLAN is used to perform scenario analyses on Alberta’s electricity system in 2030 with a focus on transmission conditions to the oil sands region. The results show that up to 15–24% of the CO2 reductions prescribed by the 2008 Alberta Climate Strategy are possible. Furthermore, the policy implications of these scenarios within a deregulated market are discussed. - Highlights: • High levels of cogeneration in the oil sands significantly reduce the total fuel usage and CO2 emissions for the province. • Beyond a certain threshold, the emissions reduction intensity per MW of cogeneration installed is reduced. • The cost difference between scenarios is not significant. • Policy which gives an advantage to a particular technology goes against the ideology of a deregulated market. • Alberta will need significant improvements to its transmission system in order for oil sands cogeneration to persist

  17. Radical Transformation in the Human - Nature Perception: Deep Ecology

    Directory of Open Access Journals (Sweden)

    Hasan YAYLI

    2015-07-01

Full Text Available There have been numerous endeavors to date the green thought. As environmental problems began to become apparent in the aftermath of the Second World War, a traumatic incident was noted in 1952, when more than four thousand people died due to air pollution in London, while in 1970 the Club of Rome, in collaboration with the Massachusetts Institute of Technology (MIT), initiated the Project on the Predicament of Mankind, whose famed report put forward the zero-growth thesis. Both the former and the latter ignited environmental awareness and are regarded as points of origin for the green thought. Regardless of where it begins, ecological movements have mainly followed two movements of thought and have tried to develop their paradigms on the basis of these two main currents. The environmentalists named socialist or Marxist assert that the prevention of environmental degradation can be achieved only through a radical transformation in which the capitalist mode of production is abandoned, whereas the environmentalists who follow the capitalist paradigm believe the protection of the environment can be achieved by means of sustainability in terms of the natural resource pool and waste-disposal practices. Looked at closely, both of these movements of thought are anthropocentric. An alternative ecological movement of thought was proposed in 1973 by the Norwegian philosopher Arne Naess in his work "The Shallow and the Deep, Long-Range Ecology Movement: A Summary". This Deep Ecology approach proceeds from a commitment to the inherent value of nature apart from mankind and in this way differs from anthropocentric approaches. Within forty-two years, Deep Ecology has led to various discussions. Themes such as "ecosophy", proposed to define the movement itself, and the conception of "bio-regions", put forward to actualize its philosophy, could be counted among the reference points of the

  18. High-Resolution Ultrasound-Switchable Fluorescence Imaging in Centimeter-Deep Tissue Phantoms with High Signal-To-Noise Ratio and High Sensitivity via Novel Contrast Agents.

    Science.gov (United States)

    Cheng, Bingbing; Bandi, Venugopal; Wei, Ming-Yuan; Pei, Yanbo; D'Souza, Francis; Nguyen, Kytai T; Hong, Yi; Yuan, Baohong

    2016-01-01

For many years, investigators have sought high-resolution fluorescence imaging in centimeter-deep tissue because many interesting in vivo phenomena, such as the presence of immune system cells, tumor angiogenesis, and metastasis, may be located deep in tissue. Previously, we developed a new imaging technique, continuous-wave ultrasound-switchable fluorescence (CW-USF), to achieve high spatial resolution in sub-centimeter-deep tissue phantoms. The principle is to use a focused ultrasound wave to externally and locally switch the fluorophore emission on and off within a small volume (close to the ultrasound focal volume). By making improvements in three aspects of this technique: excellent near-infrared USF contrast agents, a sensitive frequency-domain USF imaging system, and an effective signal-processing algorithm, this study has for the first time achieved high spatial resolution (~900 μm) in 3-centimeter-deep tissue phantoms with high signal-to-noise ratio (SNR) and high sensitivity (3.4 picomoles of fluorophore in a volume of 68 nanoliters can be detected). We have achieved these results in both tissue-mimicking phantoms and porcine muscle tissues. We have also demonstrated multi-color USF to image and distinguish two fluorophores with different wavelengths, which might be very useful for simultaneous imaging of multiple targets and observation of their interactions in the future. This work has opened the door for future studies of high-resolution centimeter-deep tissue fluorescence imaging.

  19. Deep frying

    NARCIS (Netherlands)

    Koerten, van K.N.

    2016-01-01

    Deep frying is one of the most used methods in the food processing industry. Though practically any food can be fried, French fries are probably the most well-known deep fried products. The popularity of French fries stems from their unique taste and texture, a crispy outside with a mealy soft

  20. Academic Self-Concept and Learning Strategies: Direction of Effect on Student Academic Achievement

    Science.gov (United States)

    McInerney, Dennis M.; Cheng, Rebecca Wing-yi; Mok, Magdalena Mo Ching; Lam, Amy Kwok Hap

    2012-01-01

    This study examined the prediction of academic self-concept (English and Mathematics) and learning strategies (deep and surface), and their direction of effect, on academic achievement (English and Mathematics) of 8,354 students from 16 secondary schools in Hong Kong. Two competing models were tested to ascertain the direction of effect: Model A…

  1. Deep carbon reductions in California require electrification and integration across economic sectors

    International Nuclear Information System (INIS)

    Wei, Max; Greenblatt, Jeffery B; McMahon, James E; Nelson, James H; Mileva, Ana; Johnston, Josiah; Jones, Chris; Kammen, Daniel M; Ting, Michael; Yang, Christopher

    2013-01-01

    Meeting a greenhouse gas (GHG) reduction target of 80% below 1990 levels in the year 2050 requires detailed long-term planning due to complexity, inertia, and path dependency in the energy system. A detailed investigation of supply and demand alternatives is conducted to assess requirements for future California energy systems that can meet the 2050 GHG target. Two components are developed here that build novel analytic capacity and extend previous studies: (1) detailed bottom-up projections of energy demand across the building, industry and transportation sectors; and (2) a high-resolution variable renewable resource capacity planning model (SWITCH) that minimizes the cost of electricity while meeting GHG policy goals in the 2050 timeframe. Multiple pathways exist to a low-GHG future, all involving increased efficiency, electrification, and a dramatic shift from fossil fuels to low-GHG energy. The electricity system is found to have a diverse, cost-effective set of options that meet aggressive GHG reduction targets. This conclusion holds even with increased demand from transportation and heating, but the optimal levels of wind and solar deployment depend on the temporal characteristics of the resulting load profile. Long-term policy support is found to be a key missing element for the successful attainment of the 2050 GHG target in California. (letter)

  2. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene

    2018-05-02

Background: Prioritization of variants in personal genomic data is a major challenge. Recently, computational methods that rely on comparing phenotype similarity have been shown to be useful for identifying causative variants. In these methods, pathogenicity prediction is combined with a semantic similarity measure to prioritize not only variants that are likely to be dysfunctional but also those that are likely involved in the pathogenesis of a patient's phenotype. Results: We have developed DeepPVP, a variant prioritization method that combines automated inference with deep neural networks to identify the likely causative variants in whole exome or whole genome sequence data. We demonstrate that DeepPVP performs significantly better than existing methods, including phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp Conclusions: DeepPVP further improves on existing variant prioritization methods in terms of both speed and accuracy.
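The core idea, combining a pathogenicity score with a phenotype-similarity score so that causative variants rank highly on both axes, can be illustrated with a toy composite ranking. All variant names and scores below are hypothetical, and a fixed product stands in for DeepPVP's learned neural combination:

```python
import numpy as np

# Hypothetical candidate variants with a pathogenicity score (0-1)
# and a phenotype-similarity score (patient phenotype vs. the
# phenotypes associated with each variant's gene, 0-1).
variants = ["var_A", "var_B", "var_C", "var_D"]
pathogenicity = np.array([0.90, 0.40, 0.85, 0.10])
phenotype_sim = np.array([0.20, 0.95, 0.80, 0.90])

# Simple composite: a causative variant should score high on BOTH
# dysfunction and phenotype relevance, so the product demotes
# variants that are strong on only one axis (e.g. var_A).
composite = pathogenicity * phenotype_sim
ranking = [variants[i] for i in np.argsort(-composite)]
print(ranking)   # var_C leads: high on both axes
```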

  3. Using a model of human visual perception to improve deep learning.

    Science.gov (United States)

    Stettler, Michael; Francis, Gregory

    2018-04-17

    Deep learning algorithms achieve human-level (or better) performance on many tasks, but there still remain situations where humans learn better or faster. With regard to classification of images, we argue that some of those situations are because the human visual system represents information in a format that promotes good training and classification. To demonstrate this idea, we show how occluding objects can impair performance of a deep learning system that is trained to classify digits in the MNIST database. We describe a human inspired segmentation and interpolation algorithm that attempts to reconstruct occluded parts of an image, and we show that using this reconstruction algorithm to pre-process occluded images promotes training and classification performance. Copyright © 2018 Elsevier Ltd. All rights reserved.
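The pre-processing step the authors describe, reconstructing occluded parts of an image before classification, can be approximated by simple iterative neighbor averaging. This is a crude sketch, not the paper's segmentation-and-interpolation algorithm; the toy image and occlusion mask are made up:

```python
import numpy as np

def interpolate_occlusion(img, mask, n_iter=200):
    """Fill occluded pixels (mask==True) by repeatedly averaging their
    4-connected neighbours -- a stand-in for the human-inspired
    reconstruction described in the record."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()        # coarse initial guess
    for _ in range(n_iter):
        # average of the four shifted copies of the image
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
              np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = nb[mask]             # update occluded pixels only
    return out

# Smooth toy "digit" image with a square occlusion over its center.
yy, xx = np.mgrid[0:28, 0:28]
img = np.exp(-((yy - 14) ** 2 + (xx - 14) ** 2) / 50.0)
mask = np.zeros_like(img, dtype=bool)
mask[10:18, 10:18] = True
occluded = img.copy()
occluded[mask] = 0.0

restored = interpolate_occlusion(occluded, mask)
err_before = np.abs(img - occluded)[mask].mean()
err_after = np.abs(img - restored)[mask].mean()
print(err_before, err_after)
```

Feeding `restored` rather than `occluded` to the classifier is the pre-processing step that, per the record, improves training and classification.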

  4. Speech reconstruction using a deep partially supervised neural network.

    Science.gov (United States)

    McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R

    2017-08-01

    Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.

  5. The Mechanism and Application of Deep-Hole Precracking Blasting on Rockburst Prevention

    Directory of Open Access Journals (Sweden)

    Zhenhua Ouyang

    2015-01-01

Full Text Available The mechanism of preventing rockburst through deep-hole precracking blasting was studied based on experimental tests, numerical simulation, and field testing. The study results indicate that deep-hole precracking can change the bursting proneness and stress state of the coal-rock mass, thereby preventing the occurrence of rockburst. The bursting proneness of the whole composite structure can be weakened by deep-hole precracking blasting. The change of stress state in the process of precracking blasting is achieved in two ways: (1) artificially breaking the roof apart, thus weakening the continuity of the roof strata, effectively inducing roof caving while reducing its impact strength; and (2) the dynamic shattering and air pressure generated by the blasting can structurally change the properties of the coal-rock mass by mitigating high stress generation and high elastic energy accumulation, thus breaking the conditions of energy transfer and rockburst occurrence.

  6. Multispectral embedding-based deep neural network for three-dimensional human pose recovery

    Science.gov (United States)

    Yu, Jialin; Sun, Jifeng

    2018-01-01

Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses using the corresponding two-dimensional image features. Therefore, the pose recovery performance highly depends on the image representations. We propose a multispectral embedding-based deep neural network (MSEDNN) to automatically obtain the most discriminative features from multiple deep convolutional neural networks and then embed their penultimate fully connected layers into a low-dimensional manifold. This compact manifold can exploit not only the optimum output from multiple deep networks but also their complementary properties. Furthermore, the distribution of each hierarchical discriminative manifold is sufficiently smooth that the training process of our MSEDNN can be effectively implemented using only a few labeled data. Our proposed network contains a body joint detector and a human pose regressor that are jointly trained. Extensive experiments conducted on four databases show that our proposed MSEDNN can achieve the best recovery performance compared with the state-of-the-art methods.
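The embedding idea, concatenating the penultimate fully connected activations of several networks and projecting them into a low-dimensional manifold, can be sketched with plain PCA standing in for MSEDNN's learned embedding. Dimensions and data here are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                      # number of input images

# Penultimate fully connected activations of three hypothetical
# pre-trained CNNs (feature widths are arbitrary for illustration).
feats = [rng.normal(size=(n, d)) for d in (256, 128, 512)]

# Concatenate the per-network features ...
stacked = np.concatenate(feats, axis=1)          # shape (100, 896)

# ... and embed into a low-dimensional space; plain PCA via SVD
# stands in for the learned embedding used by MSEDNN.
centered = stacked - stacked.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 16                                           # manifold dimension
embedding = centered @ Vt[:k].T                  # shape (100, 16)
print(embedding.shape)
```

A pose regressor would then be trained on `embedding` instead of on any single network's features.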

  7. Growth temperature dependence of Si doping efficiency and compensating deep level defect incorporation in Al0.7Ga0.3N

    International Nuclear Information System (INIS)

    Armstrong, Andrew M.; Moseley, Michael W.; Allerman, Andrew A.; Crawford, Mary H.; Wierer, Jonathan J.

    2015-01-01

The growth temperature dependence of Si doping efficiency and deep level defect formation was investigated for n-type Al0.7Ga0.3N. It was observed that dopant compensation was greatly reduced with reduced growth temperature. Deep level optical spectroscopy and lighted capacitance-voltage were used to understand the role of acceptor-like deep level defects on doping efficiency. Deep level defects were observed at 2.34 eV, 3.56 eV, and 4.74 eV below the conduction band minimum. The latter two deep levels were identified as the major compensators because the reduction in their concentrations at reduced growth temperature correlated closely with the concomitant increase in free electron concentration. Possible mechanisms for the strong growth temperature dependence of deep level formation are considered, including thermodynamically driven compensating defect formation that can arise for a semiconductor with very large band gap energy, such as Al0.7Ga0.3N.

  8. Guidance levels, achievable doses and expectation levels

    International Nuclear Information System (INIS)

    Li, Lianbo; Meng, Bing

    2002-01-01

The National Radiological Protection Board (NRPB), the International Atomic Energy Agency (IAEA) and the Commission of the European Communities (CEC) published their guidance levels and reference doses for typical X-ray examinations and nuclear medicine in their documents in 1993, 1994 and 1996, respectively. From then on, the concept of guidance levels or reference doses has been applied to different examinations in the field of radiology and has proved effective for the reduction of patient doses. But guidance levels or reference doses have some shortcomings and can do little to further reduce patient doses in radiology departments where doses are already below them. For this reason, the NRPB proposed a concept named achievable doses, which are based on the mean dose observed for a selected sample of radiology departments. This paper will review and discuss the concepts of guidance levels and achievable doses, and propose a new concept, referred to as Expectation Levels, that will encourage radiology departments where patient doses are already below the guidance levels to keep patient doses as low as reasonably achievable. Some examples of expectation levels based on the data published by a few countries are also illustrated in this paper.
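The distinction between the two dose quantities can be made concrete: reference (guidance) levels are conventionally set at the third quartile of a dose survey, while achievable doses are based on the sample mean. A small illustration with entirely hypothetical survey values:

```python
import numpy as np

# Hypothetical entrance surface doses (mGy) for one examination type,
# one value per surveyed radiology department.
doses = np.array([1.2, 0.8, 2.5, 1.9, 0.6, 3.1, 1.4, 2.2, 0.9, 1.7])

# Guidance (reference) level: conventionally the third quartile of the
# surveyed distribution -- departments above it should investigate.
guidance_level = np.percentile(doses, 75)

# Achievable dose: the mean observed for the selected sample, a
# tighter target for departments already below the guidance level.
achievable_dose = doses.mean()

print(f"guidance level : {guidance_level:.2f} mGy")
print(f"achievable dose: {achievable_dose:.2f} mGy")
```

An "expectation level" in the paper's sense would sit between these two, encouraging further reduction without being a hard limit.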

  9. Designed Er(3+)-singly doped NaYF4 with double excitation bands for simultaneous deep macroscopic and microscopic upconverting bioimaging.

    Science.gov (United States)

    Wen, Xuanyuan; Wang, Baoju; Wu, Ruitao; Li, Nana; He, Sailing; Zhan, Qiuqiang

    2016-06-01

Simultaneous deep macroscopic imaging and microscopic imaging is in urgent demand but is challenging to achieve experimentally due to the lack of proper fluorescent probes. Herein, we have designed and successfully synthesized simplex Er(3+)-doped upconversion nanoparticles (UCNPs) with double excitation bands for simultaneous deep macroscopic and microscopic imaging. The material structure and the excitation wavelength of the Er(3+)-singly doped UCNPs were further optimized to enhance the upconversion emission efficiency. After optimization, we found that NaYF4:30%Er(3+)@NaYF4:2%Er(3+) could simultaneously achieve efficient two-photon-excitation (2PE) macroscopic tissue imaging and three-photon-excitation (3PE) deep microscopic imaging when excited by 808 nm and 1480 nm continuous-wave (CW) lasers, respectively. In vitro cell imaging and in vivo imaging have also been implemented to demonstrate the feasibility and potential of the proposed simplex Er(3+)-doped UCNPs as a bioprobe.

  10. Hot, deep origin of petroleum: deep basin evidence and application

    Science.gov (United States)

    Price, Leigh C.

    1978-01-01

Use of the model of a hot deep origin of oil places rigid constraints on the migration and entrapment of crude oil. Specifically, oil originating from depth migrates vertically up faults and is emplaced in traps at shallower depths. Review of petroleum-producing basins worldwide shows that oil occurrence in these basins conforms to these constraints and therefore supports the hypothesis. Most of the world's oil is found in the very deepest sedimentary basins, and production over or adjacent to the deep basin is cut by or directly updip from faults dipping into the basin deep. Generally, the greater the fault throw, the greater the reserves. Fault-block highs next to deep sedimentary troughs are the best target areas by the present concept. Traps along major basin-forming faults are quite prospective. The structural style of a basin governs the distribution, types, and amounts of hydrocarbons expected and hence the exploration strategy. Production in delta depocenters (Niger) is in structures cut by or updip from major growth faults, and structures not associated with such faults are barren. Production in block fault basins is on horsts next to deep sedimentary troughs (Sirte, North Sea). In basins whose sediment thickness, structure and geologic history are known to a moderate degree, the main oil occurrences can be specifically predicted by analysis of fault systems and possible hydrocarbon migration routes. Use of the concept permits the identification of significant targets which have either been downgraded or ignored in the past, such as production in or just updip from thrust belts, stratigraphic traps over the deep basin associated with major faulting, production over the basin deep, and regional stratigraphic trapping updip from established production along major fault zones.

  11. Deep ocean nutrients during the Last Glacial Maximum deduced from sponge silicon isotopic compositions

    Science.gov (United States)

    Hendry, Katharine R.; Georg, R. Bastian; Rickaby, Rosalind E. M.; Robinson, Laura F.; Halliday, Alex N.

    2010-04-01

The relative importance of biological and physical processes within the Southern Ocean for the storage of carbon and atmospheric pCO2 on glacial-interglacial timescales remains uncertain. Understanding the impact of surface biological production on carbon export in the past relies on the reconstruction of the nutrient supply from upwelling deep waters. In particular, the upwelling of silicic acid (Si(OH)4) is tightly coupled to carbon export in the Southern Ocean via diatom productivity. Here, we address how changes in deep water Si(OH)4 concentrations can be reconstructed using the silicon isotopic composition of deep-sea sponges. We report δ30Si of modern deep-sea sponge spicules and show that they reflect seawater Si(OH)4 concentration. The fractionation factor of sponge δ30Si compared to seawater δ30Si shows a positive relationship with Si(OH)4, which may be a growth rate effect. Application of this proxy in two down-core records from the Scotia Sea reveals that Si(OH)4 concentrations in the deep Southern Ocean during the Last Glacial Maximum (LGM) were no different than today. Our result does not support a coupling of carbon and nutrient build-up in an isolated deep ocean reservoir during the LGM. Our data, combined with records of stable isotopes from diatoms, are only consistent with enhanced LGM Southern Ocean nutrient utilization if there was also a concurrent reduction in diatom silicification or a shift from siliceous to organic-walled phytoplankton.
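The proxy logic, calibrating the sponge fractionation against ambient Si(OH)4 in modern samples and then inverting that calibration for down-core measurements, can be sketched as follows. All numbers are invented for illustration, and a simple linear fit stands in for the actual published calibration:

```python
import numpy as np

# Hypothetical modern calibration data: ambient silicic acid (umol/kg)
# and apparent fractionation D30Si = d30Si_sponge - d30Si_seawater (permil).
si_oh4  = np.array([10., 25., 50., 75., 100., 125.])
delta30 = np.array([-1.2, -2.0, -3.1, -3.8, -4.3, -4.6])

# The record reports a systematic relationship between fractionation
# and Si(OH)4; a least-squares line captures that trend for this sketch.
slope, intercept = np.polyfit(si_oh4, delta30, 1)

# Invert the calibration to reconstruct paleo-Si(OH)4 from a
# down-core sponge spicule measurement.
def reconstruct_si(delta_measured):
    return (delta_measured - intercept) / slope

print(slope, intercept, reconstruct_si(-3.0))
```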

  12. Incorporating deep learning with convolutional neural networks and position specific scoring matrices for identifying electron transport proteins.

    Science.gov (United States)

    Le, Nguyen-Quoc-Khanh; Ho, Quang-Thai; Ou, Yu-Yen

    2017-09-05

In recent years, deep learning has become a modern machine learning technique used in a variety of fields with state-of-the-art performance. Utilizing deep learning to enhance performance is therefore an important avenue for the current bioinformatics field as well. In this study, we use deep learning via convolutional neural networks and position-specific scoring matrices to identify electron transport proteins, an important molecular function in transmembrane proteins. Our deep learning method can approach a precise model for identifying electron transport proteins, with an achieved sensitivity of 80.3%, specificity of 94.4%, and accuracy of 92.3%, with an MCC of 0.71 on an independent dataset. The proposed technique can serve as a powerful tool for identifying electron transport proteins and can help biologists understand their function. Moreover, this study provides a basis for further research that can enrich the field of applying deep learning in bioinformatics. © 2017 Wiley Periodicals, Inc.
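The input encoding the record relies on, a position-specific scoring matrix (PSSM) treated as a fixed-size 2-D "image" for the CNN, can be sketched like this. The PSSM here is randomly generated rather than produced by PSI-BLAST, and the window width is an arbitrary assumption:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_pssm(seq, rng):
    """Stand-in for a real PSI-BLAST PSSM: a (len(seq), 20) score
    matrix, here random apart from a bonus on the observed residue."""
    m = rng.normal(size=(len(seq), 20))
    for i, aa in enumerate(seq):
        m[i, AMINO_ACIDS.index(aa)] += 2.0
    return m

def to_fixed_window(pssm, width=400):
    """Zero-pad or crop to a fixed (width, 20) 'image', so every
    protein yields a CNN input of identical shape."""
    out = np.zeros((width, 20))
    n = min(width, pssm.shape[0])
    out[:n] = pssm[:n]
    return out

rng = np.random.default_rng(2)
seq = "MKVLAAGIT" * 10                   # hypothetical 90-residue protein
x = to_fixed_window(toy_pssm(seq, rng))  # (400, 20) CNN input channel
print(x.shape)
```

A convolutional network then slides filters over this matrix exactly as it would over an image, learning motifs regardless of their position in the sequence.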

  13. Reflected Sunlight Reduction and Characterization for a Deep-Space Optical Receiver Antenna (DSORA)

    Science.gov (United States)

    Clymer, B. D.

    1990-01-01

A baffle system for the elimination of first-order specular and diffuse reflection of sunlight from the sunshade of a deep-space optical receiver telescope is presented. This baffle system consists of rings of 0.5 cm blades spaced 2.5 cm apart on the walls of GO hexagonal sunshade tubes that combine to form the telescope sunshade. The shadow cast by the blades, walls, and rims of the tubes prevents all first-order reflections of direct sunlight from reaching the primary mirror of the telescope. A reflection model of the sunshade without baffles is also presented for comparison. Since manufacturers of absorbing surfaces do not measure data near grazing incidence, the reflection properties at the anticipated angles of incidence must be characterized. A description of reflection from matte surfaces in terms of the bidirectional reflectance distribution function (BRDF) is presented, along with a discussion of measuring BRDF near grazing incidence.
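The BRDF mentioned at the end relates reflected radiance to incident irradiance, f_r = dL_r/dE_i; for an ideal matte (Lambertian) surface it is the constant ρ/π. A short sketch of why grazing incidence matters for a sunshade coating (the albedo and solar irradiance values are illustrative, not from the record):

```python
import numpy as np

def lambertian_brdf(albedo):
    """Ideal matte (Lambertian) BRDF: constant rho/pi [1/sr]."""
    return albedo / np.pi

def reflected_radiance(albedo, E_normal, theta_i_deg):
    """Radiance reflected by a matte surface under irradiance E_normal
    arriving at incidence angle theta_i (measured from the normal).
    The cos(theta_i) projection drives the reflected radiance toward
    zero as the incidence approaches grazing (theta_i -> 90 deg)."""
    theta = np.radians(theta_i_deg)
    E_i = E_normal * np.cos(theta)      # projected irradiance
    return lambertian_brdf(albedo) * E_i

# Solar irradiance ~1360 W/m^2 and a dark baffle coating (albedo 0.05):
for ang in (0.0, 60.0, 85.0):
    print(ang, reflected_radiance(0.05, 1360.0, ang))
```

Real matte coatings deviate from the Lambertian constant near grazing incidence, which is exactly why the record calls for characterizing BRDF there.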

  14. Human Capital Formation And Poverty Reduction Strategies In ...

    African Journals Online (AJOL)

This study articulates the development thrust of the Nigerian government (1999–2003) in the area of human capital formation and poverty reduction, the policies to achieve its objectives, and the gains of such policies for the common man in Nigeria. To achieve its objectives, the government emphasized macroeconomic ...

  15. School Segregation and Racial Academic Achievement Gaps

    Directory of Open Access Journals (Sweden)

    Sean F. Reardon

    2016-09-01

    Full Text Available Although it is clear that racial segregation is linked to academic achievement gaps, the mechanisms underlying this link have been debated since James Coleman published his eponymous 1966 report. In this paper, I examine sixteen distinct measures of segregation to determine which is most strongly associated with academic achievement gaps. I find clear evidence that one aspect of segregation in particular—the disparity in average school poverty rates between white and black students’ schools—is consistently the single most powerful correlate of achievement gaps, a pattern that holds in both bivariate and multivariate analyses. This implies that high-poverty schools are, on average, much less effective than lower-poverty schools and suggests that strategies that reduce the differential exposure of black, Hispanic, and white students to poor schoolmates may lead to meaningful reductions in academic achievement gaps.

  16. Reduction of Aldehydes and Ketones by Sodium Dithionite

    NARCIS (Netherlands)

    Vries, Johannes G. de; Kellogg, Richard M.

    1980-01-01

    Conditions have been developed for the effective reduction of aldehydes and ketones by sodium dithionite, Na2S2O4. Complete reduction of simple aldehydes and ketones can be achieved with excess Na2S2O4 in H2O/dioxane mixtures at reflux temperature. Some aliphatic ketones, for example, pentanone and

  17. Deep web query interface understanding and integration

    CERN Document Server

    Dragut, Eduard C; Yu, Clement T

    2012-01-01

    There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech

  18. Welfare Effects of Tariff Reduction Formulas

    DEFF Research Database (Denmark)

    Guldager, Jan G.; Schröder, Philipp J.H.

WTO negotiations rely on tariff reduction formulas. It has been argued that formula approaches are of increasing importance in trade talks because of the large number of countries involved, the wider dispersion in initial tariffs (e.g. tariff peaks) and the gaps between bound and applied tariff rates. ... No single formula dominates for all conditions. The ranking of the three tools depends on the degree of product differentiation in the industry and the achieved reduction in the average tariff.

  19. Simultaneous nitrate reduction and acetaminophen oxidation using the continuous-flow chemical-less VUV process as an integrated advanced oxidation and reduction process

    Energy Technology Data Exchange (ETDEWEB)

    Moussavi, Gholamreza, E-mail: moussavi@modares.ac.ir; Shekoohiyan, Sakine

    2016-11-15

Highlights: • Simultaneous advanced oxidation and reduction processes were explored in the VUV system. • Complete reduction of nitrate to N2 was achieved in the presence of acetaminophen. • Complete degradation of acetaminophen was achieved in the presence of nitrate. • Over 95% of acetaminophen was mineralized in the VUV photoreactor. • VUV is a chemical-less advanced process for treating emerging water contaminants. - Abstract: This work was aimed at investigating the performance of the continuous-flow VUV photoreactor as a novel chemical-less advanced process for simultaneously oxidizing acetaminophen (ACT), as a model pharmaceutical, and reducing nitrate in a single reactor. Solution pH was an important parameter affecting the performance of VUV; the highest ACT oxidation and nitrate reduction were attained at solution pH between 6 and 8. The ACT was oxidized mainly by HO· while aqueous electrons were the main working agents in the reduction of nitrate. The performance of the VUV photoreactor improved with increasing hydraulic retention time (HRT); complete degradation of ACT and ~99% reduction of nitrate with 100% N2 selectivity were achieved at an HRT of 80 min. The VUV effluent concentrations of nitrite and ammonium at an HRT of 80 min were below the drinking water standards. A real water sample contaminated with ACT and nitrate was efficiently treated in the VUV photoreactor. Therefore, the VUV photoreactor is a chemical-less advanced process in which both advanced oxidation and advanced reduction reactions are accomplished. This unique feature makes the VUV photoreactor a promising method of treating water contaminated with both pharmaceuticals and nitrate.

  20. Simultaneous nitrate reduction and acetaminophen oxidation using the continuous-flow chemical-less VUV process as an integrated advanced oxidation and reduction process

    International Nuclear Information System (INIS)

    Moussavi, Gholamreza; Shekoohiyan, Sakine

    2016-01-01

Highlights: • Simultaneous advanced oxidation and reduction processes were explored in a VUV system. • Complete reduction of nitrate to N₂ was achieved in the presence of acetaminophen. • Complete degradation of acetaminophen was achieved in the presence of nitrate. • Over 95% of acetaminophen was mineralized in the VUV photoreactor. • VUV is a chemical-less advanced process for treating emerging contaminants in water. - Abstract: This work was aimed at investigating the performance of the continuous-flow VUV photoreactor as a novel chemical-less advanced process for simultaneously oxidizing acetaminophen (ACT), as a model pharmaceutical, and reducing nitrate in a single reactor. Solution pH was an important parameter affecting the performance of VUV; the highest ACT oxidation and nitrate reduction were attained at solution pH between 6 and 8. ACT was oxidized mainly by HO·, while aqueous electrons were the main working agents in the reduction of nitrate. The performance of the VUV photoreactor improved with increasing hydraulic retention time (HRT); complete degradation of ACT and ∼99% reduction of nitrate with 100% N₂ selectivity were achieved at an HRT of 80 min. The VUV effluent concentrations of nitrite and ammonium at an HRT of 80 min were below drinking water standards. A real water sample contaminated with ACT and nitrate was efficiently treated in the VUV photoreactor. Therefore, the VUV photoreactor is a chemical-less advanced process in which both advanced oxidation and advanced reduction reactions are accomplished. This unique feature makes the VUV photoreactor a promising method for treating water contaminated with both pharmaceuticals and nitrate.

  1. Development of a Hybrid Deep Drawing Process to Reduce Springback of AHSS

    Science.gov (United States)

    Boskovic, Vladimir; Sommitsch, Christoph; Kicin, Mustafa

    2017-09-01

In the future, steel manufacturers will strive for the implementation of Advanced High Strength Steels (AHSS) in the automotive industry to reduce mass and improve structural performance. A key challenge is the definition of optimal and cost-effective processes, as well as solutions for introducing complex steel products in cold forming. However, the application of these AHSS often leads to formability problems such as springback. One promising approach to minimizing springback is the relaxation of stress through targeted heating of the material in the radius area after the deep drawing process. In this study, experiments are conducted on a Dual Phase (DP) and a TWinning Induced Plasticity (TWIP) steel for the process feasibility study. This work analyses the influence of various heat treatment temperatures on the springback reduction of deep-drawn AHSS.

  2. The modulatory effect of adaptive deep brain stimulation on beta bursts in Parkinson's disease.

    Science.gov (United States)

    Tinkhauser, Gerd; Pogosyan, Alek; Little, Simon; Beudel, Martijn; Herz, Damian M; Tan, Huiling; Brown, Peter

    2017-04-01

    Adaptive deep brain stimulation uses feedback about the state of neural circuits to control stimulation rather than delivering fixed stimulation all the time, as currently performed. In patients with Parkinson's disease, elevations in beta activity (13-35 Hz) in the subthalamic nucleus have been demonstrated to correlate with clinical impairment and have provided the basis for feedback control in trials of adaptive deep brain stimulation. These pilot studies have suggested that adaptive deep brain stimulation may potentially be more effective, efficient and selective than conventional deep brain stimulation, implying mechanistic differences between the two approaches. Here we test the hypothesis that such differences arise through differential effects on the temporal dynamics of beta activity. The latter is not constantly increased in Parkinson's disease, but comes in bursts of different durations and amplitudes. We demonstrate that the amplitude of beta activity in the subthalamic nucleus increases in proportion to burst duration, consistent with progressively increasing synchronization. Effective adaptive deep brain stimulation truncated long beta bursts shifting the distribution of burst duration away from long duration with large amplitude towards short duration, lower amplitude bursts. Critically, bursts with shorter duration are negatively and bursts with longer duration positively correlated with the motor impairment off stimulation. Conventional deep brain stimulation did not change the distribution of burst durations. Although both adaptive and conventional deep brain stimulation suppressed mean beta activity amplitude compared to the unstimulated state, this was achieved by a selective effect on burst duration during adaptive deep brain stimulation, whereas conventional deep brain stimulation globally suppressed beta activity. 
We posit that the relatively selective effect of adaptive deep brain stimulation provides a rationale for why this approach could

  3. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.

  4. Deep Learning versus Professional Healthcare Equipment: A Fine-Grained Breathing Rate Monitoring Model

    Directory of Open Access Journals (Sweden)

    Bang Liu

    2018-01-01

Full Text Available In the mHealth field, accurate breathing-rate monitoring techniques have benefited a broad array of healthcare-related applications. Many approaches try to use a smartphone or wearable device with a fine-grained monitoring algorithm to accomplish a task that previously could be performed only by professional medical equipment. However, such schemes usually perform poorly in comparison to professional medical equipment. In this paper, we propose DeepFilter, a deep learning-based fine-grained breathing rate monitoring algorithm that works on smartphones and achieves professional-level accuracy. DeepFilter is a bidirectional recurrent neural network (RNN) stacked with convolutional layers and speeded up by batch normalization. Moreover, we collect 16.17 GB of breathing sound recordings (248 hours) from 109 volunteers to train our model and from another 10 volunteers to test it. The results show a reasonably good accuracy of breathing rate monitoring.
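As background to the architecture described in the abstract, batch normalization (the component credited with speeding up DeepFilter's training) standardizes each feature over a mini-batch before applying a learnable scale and shift. A minimal NumPy illustration — not DeepFilter's actual code; the function and toy activations are invented:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature of a mini-batch to zero mean / unit variance,
    then apply a learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
acts = rng.normal(loc=3.0, scale=2.0, size=(256, 8))  # toy layer activations
out = batch_norm(acts)
print(np.allclose(out.mean(axis=0), 0.0, atol=1e-6),
      np.allclose(out.std(axis=0), 1.0, atol=1e-3))  # True True
```

In a real network `gamma` and `beta` are trained per feature; keeping activations in this standardized range is what stabilizes and accelerates training of deep stacks like the one described.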

  5. Relations between Goals, Self-Efficacy, Critical Thinking and Deep Processing Strategies: A Path Analysis

    Science.gov (United States)

    Phan, Huy Phuong

    2009-01-01

    Research exploring students' academic learning has recently amalgamated different motivational theories within one conceptual framework. The inclusion of achievement goals, self-efficacy, deep processing and critical thinking has been cited in a number of studies. This article discusses two empirical studies that examined these four theoretical…

  6. The Research on Subsidence Prediction of Soils Around Deep Foundation Pit

    Directory of Open Access Journals (Sweden)

    Ge LIU

    2014-12-01

Full Text Available A deep foundation pit will cause settlement of surrounding buildings in the process of excavation. When the settlement is excessive, it will give rise to safety issues. Subsidence monitoring has therefore become an important measure to ensure the safety of deep foundation pits. In current subsidence monitoring engineering, however, the costs of wiring, unwiring and installation are particularly high. This paper proposes a portable wireless data transmission device for forecasting and early warning of settlement deformation of soils around deep foundation pits. We replace the cable transmission link with wireless communication. The device does not rely on any personal computer. Instead, it directly processes the collected data through a grey prediction GM(1,1) mathematical model, a neural network and an interpolation model to give short-term, medium-term and long-term forecasts, respectively. Additionally, a threshold value can be set; once the forecast data reach the threshold, the device issues an alert to remind technicians, so as to provide a reliable basis for preventing and reducing disasters.
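As an illustrative sketch of the grey prediction GM(1,1) model the device uses for short-term forecasts: GM(1,1) fits an exponential law to the accumulated (cumulative-sum) series and differences it back to forecast the raw series. The function and toy settlement readings below are invented for illustration, not taken from the device's firmware:

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Grey prediction GM(1,1): fit on the series x0, forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # mean sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # fitted accumulated series
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[n:]                            # the forecast portion only

# Hypothetical settlement readings (mm); near-exponential growth suits GM(1,1).
readings = [2.0, 2.2, 2.5, 2.8, 3.1]
print(gm11_forecast(readings, steps=2))
```

The device's threshold alert then reduces to comparing `gm11_forecast(...)` against the preset limit.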

  7. Does Class-Size Reduction Close the Achievement Gap? Evidence from TIMSS 2011

    Science.gov (United States)

    Li, Wei; Konstantopoulos, Spyros

    2017-01-01

    Policies about reducing class size have been implemented in the US and Europe in the past decades. Only a few studies have discussed the effects of class size at different levels of student achievement, and their findings have been mixed. We employ quantile regression analysis, coupled with instrumental variables, to examine the causal effects of…
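For context on the method named above: quantile regression minimizes the asymmetric "pinball" loss rather than squared error, which is what lets a study examine class-size effects at different points of the achievement distribution instead of only at the mean. A toy illustration with invented numbers (not TIMSS data):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: the asymmetric penalty minimized by the
    tau-th conditional quantile rather than by the conditional mean."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.maximum(tau * err, (tau - 1) * err))

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# For the median (tau = 0.5) the loss is half the mean absolute error.
print(pinball_loss(y, np.full(5, 3.0), 0.5))
# At tau = 0.9, under-prediction is penalized ~9x more than over-prediction,
# so predicting near the top of the distribution scores better.
print(pinball_loss(y, np.full(5, 3.0), 0.9) > pinball_loss(y, np.full(5, 5.0), 0.9))  # True
```

Fitting one regression per `tau` (e.g. 0.1, 0.5, 0.9) is how such studies estimate effects for low, median, and high achievers separately.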

  8. Meridional overturning circulation conveys fast acidification to the deep Atlantic Ocean

    Science.gov (United States)

    Perez, Fiz F.; Fontela, Marcos; García-Ibáñez, Maribel I.; Mercier, Herlé; Velo, Anton; Lherminier, Pascale; Zunino, Patricia; de La Paz, Mercedes; Alonso-Pérez, Fernando; Guallart, Elisa F.; Padin, Xose A.

    2018-02-01

Since the Industrial Revolution, the North Atlantic Ocean has been accumulating anthropogenic carbon dioxide (CO₂) and experiencing ocean acidification, that is, an increase in the concentration of hydrogen ions (a reduction in pH) and a reduction in the concentration of carbonate ions. The latter causes the ‘aragonite saturation horizon’—below which waters are undersaturated with respect to a particular calcium carbonate, aragonite—to move to shallower depths (to shoal), exposing corals to corrosive waters. Here we use a database analysis to show that the present rate of supply of acidified waters to the deep Atlantic could cause the aragonite saturation horizon to shoal by 1,000-1,700 metres in the subpolar North Atlantic within the next three decades. We find that, during 1991-2016, a decrease in the concentration of carbonate ions in the Irminger Sea caused the aragonite saturation horizon to shoal by about 10-15 metres per year, and the volume of aragonite-saturated waters to reduce concomitantly. Our determination of the transport of the excess of carbonate over aragonite saturation (xc[CO₃²⁻])—an indicator of the availability of aragonite to organisms—by the Atlantic meridional overturning circulation shows that the present-day transport of carbonate ions towards the deep ocean is about 44 per cent lower than it was in preindustrial times. We infer that a doubling of atmospheric anthropogenic CO₂ levels—which could occur within three decades according to a ‘business-as-usual scenario’ for climate change—could reduce the transport of xc[CO₃²⁻] by 64-79 per cent of that in preindustrial times, which could severely endanger cold-water coral habitats. The Atlantic meridional overturning circulation would also export this acidified deep water southwards, spreading corrosive waters to the world ocean.

  9. Justice policy reform for high-risk juveniles: using science to achieve large-scale crime reduction.

    Science.gov (United States)

    Skeem, Jennifer L; Scott, Elizabeth; Mulvey, Edward P

    2014-01-01

    After a distinctly punitive era, a period of remarkable reform in juvenile crime regulation has begun. Practical urgency has fueled interest in both crime reduction and research on the prediction and malleability of criminal behavior. In this rapidly changing context, high-risk juveniles--the small proportion of the population where crime becomes concentrated--present a conundrum. Research indicates that these are precisely the individuals to treat intensively to maximize crime reduction, but there are both real and imagined barriers to doing so. Mitigation principles (during early adolescence, ages 10-13) and institutional placement or criminal court processing (during mid-late adolescence, ages 14-18) can prevent these juveniles from receiving interventions that would best protect public safety. In this review, we synthesize relevant research to help resolve this challenge in a manner that is consistent with the law's core principles. In our view, early adolescence offers unique opportunities for risk reduction that could (with modifications) be realized in the juvenile justice system in cooperation with other social institutions.

  10. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu

    2017-12-23

Motivation: Oxford Nanopore sequencing is a sequencing technology that has developed rapidly in recent years. To keep pace with the explosion of downstream data analysis tools, a versatile Nanopore sequencing simulator is needed to complement the experimental data as well as to benchmark those newly developed tools. However, all the currently available simulators are based on simple statistics of the produced reads and have difficulty capturing the complex nature of the Nanopore sequencing procedure, the main task of which is the generation of raw electrical current signals. Results: Here we propose a deep learning based simulator, DeepSimulator, to mimic the entire pipeline of Nanopore sequencing. Starting from a given reference genome or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. Thorough experiments performed across four species show that the signals generated by our context-dependent model are more similar to the experimentally obtained signals than the ones generated by the official context-independent pore model. In terms of the simulated reads, we provide a parameter interface to users so that they can obtain reads with accuracies ranging from 83% to 97%. The reads generated with the default parameters have almost the same properties as the real data. Two case studies demonstrate the application of DeepSimulator to benefit the development of tools in de novo assembly and in low-coverage SNP detection. Availability: The software can be accessed freely at: https://github.com/lykaust15/DeepSimulator.

  11. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared...... to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all....

  12. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.

    Science.gov (United States)

    Hosseinyalamdary, Siavash

    2018-04-24

Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations have remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed that our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.

  13. Approaches to achieving inherently safe fusion power plants

    International Nuclear Information System (INIS)

    Piet, S.J.

    1986-01-01

Achieving inherently safe fusion facilities and conceptual designs is a challenge to the fusion community. Success should provide fusion with important competitive advantages versus other energy technologies. Inherent safety should mean a facility designed with passive safety features such that the public is protected from any acute fatalities under all credible accidental circumstances. A key aspect of inherent safety is demonstrability - the ability to prove that a design is as safe as claimed. Three complementary approaches to achieving inherent safety are examined: toxin inventory reduction, energy source reduction and design fault tolerance. Four levels of assurance are defined, associated with uncertainty in the words 'credible' and 'demonstrable.' Sound reasons exist for believing that inherent safety puts a modest upper bound on all accident consequences; it should be considered part of the collection of safety and environmental issues, which also includes lower-consequence accidents, waste management, and effluent control

  14. Characterizations of geothermal springs along the Moxi deep fault in the western Sichuan plateau, China

    Science.gov (United States)

    Qi, Jihong; Xu, Mo; An, Chengjiao; Wu, Mingliang; Zhang, Yunhui; Li, Xiao; Zhang, Qiang; Lu, Guoping

    2017-02-01

Abundant geothermal springs occur along the Moxi fault located in western Sichuan Province (the eastern edge of the Qinghai-Tibet plateau), highlighted by geothermal water outflow with an unusually high temperature of 218 °C at 21.5 MPa from a 2010-m borehole in Laoyulin, Kangding. Earthquake activity occurs relatively frequently in the region and is considered to be related to the strong hydrothermal activity. Geothermal waters hosted by a deep fault may provide evidence regarding the deep underground; their aqueous chemistry and isotopic information can indicate the mechanism of the thermal springs. Cyclical variations of geothermal water outflows are thought to occur under the effect of solid earth tides and can contribute to understanding conditions and processes in underground geo-environments. This paper studies the origin and variations of the geothermal spring group controlled by the Moxi fault and discusses conditions in the deep ground. Flow variation monitoring of a series of parameters was performed to study the geothermal responses to solid tides. Geothermal reservoir temperatures are evaluated with Na-K-Mg data. The abundant sulfite content, dissolved oxygen (DO) and oxidation-reduction potential (ORP) data are discussed to study the oxidation-reduction states. Strontium isotopes are used to trace the water source. The results demonstrate that geothermal water could flow quickly through the Moxi fault; the depth of the geothermal reservoir influences the thermal reservoir temperature, where supercritical hot water is mixed with circulating groundwater and can reach 380 °C. Southward along the fault, the circulation of geothermal waters becomes shallower, and the waters may have reacted with metamorphic rock to some extent. Our results provide a conceptual deep heat source model for geothermal flow and the reservoir characteristics of the Moxi fault and indicate that the faulting may well connect the deep heat source to shallower depths. The

  15. Suggesting alternatives for reinforced concrete deep beams by reinforcing struts and ties

    Directory of Open Access Journals (Sweden)

    Saleem Abdul-Razzaq Khattab

    2017-01-01

Full Text Available This paper studied reinforcing struts and ties in deep beams based on the Strut-and-Tie Model (STM) of ACI 318M-14. The study comprised tests of 9 simply supported specimens, divided into 3 groups. The difference between the groups was the loading type: two concentrated forces, one concentrated force, or a uniformly distributed load. Each group contained three specimens. The first specimen in each group was a conventional deep beam serving as a reference, with a length of 1400 mm, a height of 400 mm and a width of 150 mm. The second specimens had the same dimensions as the references but with the shoulders removed; in addition, only the paths of the struts and ties of the STM were reinforced, as compression and tension members, respectively. The third specimens were frames that took their dimensions from the STM of ACI 318M-14. The experimental work showed that the proposed frames were good alternatives to the references despite a small loss in ultimate capacity; indeed, the proposed frames carried loads greater than the factored design loads of the STM. In comparison with the references, the proposed frames provided a 41-51% reduction in weight and a 4-27% reduction in cost, besides providing a front-side area of about 46-55%.

  16. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction — a phantom study

    Science.gov (United States)

    Dodge, Cristina T.; Tamm, Eric P.; Cody, Dianna D.; Liu, Xinming; Jensen, Corey T.; Wei, Wei; Kundra, Vikas

    2016-01-01

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model‐based iterative reconstruction (MBIR), over a range of typical to low‐dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat‐equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back‐projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low‐contrast detectability were evaluated from noise and contrast‐to‐noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five‐fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high‐contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial
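The low-contrast detectability comparison above rests on the contrast-to-noise ratio, CNR = |ROI mean − background mean| / background noise. A synthetic sketch of how a noise reduction translates into CNR follows; the HU values, noise levels, and ROI sizes are invented, and only the roughly five-fold FBP-to-MBIR noise ratio mirrors the reported result:

```python
import numpy as np

rng = np.random.default_rng(42)

def cnr(roi, background):
    """Contrast-to-noise ratio between an insert ROI and a background ROI."""
    return abs(roi.mean() - background.mean()) / background.std()

insert_hu, bg_hu = 35.0, 20.0        # hypothetical Hounsfield units
fbp_noise, mbir_noise = 25.0, 5.0    # ~5x noise reduction, as reported

# Synthetic low-dose image patches for each reconstruction.
roi_fbp = rng.normal(insert_hu, fbp_noise, (64, 64))
bg_fbp = rng.normal(bg_hu, fbp_noise, (64, 64))
roi_mbir = rng.normal(insert_hu, mbir_noise, (64, 64))
bg_mbir = rng.normal(bg_hu, mbir_noise, (64, 64))

print(cnr(roi_fbp, bg_fbp))    # low CNR with noisy FBP at low dose
print(cnr(roi_mbir, bg_mbir))  # roughly five-fold higher with MBIR
```

Because the contrast (15 HU here) is fixed by the object and dose, a five-fold drop in noise alone yields the roughly five-fold CNR gain the phantom study measured.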

  17. Analysis for Behavior of Reinforcement Lap Splices in Deep Beams

    Directory of Open Access Journals (Sweden)

    Ammar Yaser Ali

    2018-03-01

Full Text Available The present study includes an experimental and theoretical investigation of reinforced concrete deep beams containing tensile reinforcement lap splices in the constant-moment zone under static load. The study included two stages. In the first, the experimental work included testing of eight simply supported RC deep beams having a total length (L) of 2000 mm, overall depth (h) of 600 mm and width (b) of 150 mm. The tested specimens were divided into three groups to study the effect of the main variables: lap length, location of the splice, internal confinement (stirrups) and external confinement (strengthening by CFRP laminates). The experimental results showed that the use of CFRP as external strengthening in deep beams with lap splices gives the best behavior: increased stiffness, decreased deflection, delayed crack appearance and reduced crack width. The reduction in deflection was about 14-21% relative to the unstrengthened beam and about 5-14% relative to the beam with continuous bars near the ultimate load. It was also observed that the beams with unstrengthened tensile reinforcement lap splices had three types of cracks - flexural, flexural-shear and splitting cracks - while the beams with strengthened lap splices or continuous bars showed no splitting cracks. In the second stage, a three-dimensional finite element analysis was utilized to explore the behavior of the RC deep beams with tensile reinforcement lap splices, in addition to a parametric study of several variables. The comparison between the experimental and theoretical results showed reasonable agreement. The average difference in deflection at service load was less than 5%.

  18. Infiltration of surface mined land reclaimed by deep tillage treatments

    International Nuclear Information System (INIS)

    Chong, S.K.; Cowsert, P.

    1994-01-01

Surface mining of coal leads to the drastic disturbance of soils. Compaction of replaced subsoil and topsoil resulting from hauling, grading, and leveling procedures produces a poor rooting medium for crop growth. Soil compaction results in high bulk density, low macroporosity, poor water infiltration capacity, and reduced elongation of plant roots. In the United States, Public Law 95-87 mandates that the rooting medium of mined soils have specific textural characteristics and be graded and shaped to a topography similar to premining conditions. Also, crop productivity levels equivalent to those prior to mining must be achieved, especially for prime farmland. Alleviation of compaction has been the major focus of reclamation, and recently new techniques to augment the rooting zone with deep-ripping and loosening equipment have come to the forefront. Several surface mine operators in the Illinois coal basin are using deep tillage equipment that is capable of loosening soils to greater depths than is possible with conventional farm tillage equipment. Information on the beneficial effects of these loosening procedures on soil hydrological properties, such as infiltration, runoff potential, erosion, and water retention, is extremely important for future mined land management. However, such information is lacking. In view of the current yield demonstration regulation for prime farmland and other unmined soils, it is important that as much information as possible be obtained concerning the effect of deep tillage on soil hydrologic properties. The objectives of this study are: (1) to compare infiltration rates and related soil physical properties of mined soils reclaimed by various deep tillage treatments and (2) to study the temporal variability of infiltration and related physical properties of the reclaimed mined soil after deep tillage treatment.

  19. STIMULATION TECHNOLOGIES FOR DEEP WELL COMPLETIONS

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2003-06-01

The Department of Energy (DOE) is sponsoring a Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a project to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. Phase 1 was recently completed and consisted of assessing deep gas well drilling activity (1995-2007) and an industry survey on deep gas well stimulation practices by region. Of the 29,000 oil, gas and dry holes drilled in 2002, about 300 were deep wells; 25% were dry, 50% were high temperature/high pressure completions and 25% were simply deep completions. South Texas has about 30% of these wells, Oklahoma 20%, the Gulf of Mexico Shelf 15% and the Gulf Coast about 15%. The Rockies represent only 2% of deep drilling. Of the 60 operators who drill deep and HTHP wells, the top 20 drill almost 80% of the wells. Six operators drill half the U.S. deep wells. Deep drilling peaked at 425 wells in 1998 and fell to 250 in 1999. Drilling is expected to rise through 2004, after which it should cycle down as overall drilling declines.

  20. A trial of scheduled deep brain stimulation for Tourette syndrome: moving away from continuous deep brain stimulation paradigms.

    Science.gov (United States)

    Okun, Michael S; Foote, Kelly D; Wu, Samuel S; Ward, Herbert E; Bowers, Dawn; Rodriguez, Ramon L; Malaty, Irene A; Goodman, Wayne K; Gilbert, Donald M; Walker, Harrison C; Mink, Jonathan W; Merritt, Stacy; Morishita, Takashi; Sanchez, Justin C

    2013-01-01

Objective: To collect the information necessary to design the methods and outcome variables for a larger trial of scheduled deep brain stimulation (DBS) for Tourette syndrome. Design: We performed a small National Institutes of Health-sponsored clinical trials planning study of the safety and preliminary efficacy of implanted DBS in the bilateral centromedian thalamic region. The study used a cranially contained constant-current device and a scheduled, rather than the classic continuous, DBS paradigm. Baseline vs 6-month outcomes were collected and analyzed. In addition, we compared acute scheduled vs acute continuous vs off DBS. Setting: A university movement disorders center. Patients: Five patients with implanted DBS. Main outcome measure: A 50% improvement in the Yale Global Tic Severity Scale (YGTSS) total score. Results: Participating subjects had a mean age of 34.4 (range, 28-39) years and a mean disease duration of 28.8 years. No significant adverse events or hardware-related issues occurred. Baseline vs 6-month data revealed that reductions in the YGTSS total score did not achieve the prestudy criterion of a 50% improvement on scheduled stimulation settings. However, statistically significant improvements were observed in the YGTSS total score (mean [SD] change, -17.8 [9.4]; P=.01), impairment score (-11.3 [5.0]; P=.007), and motor score (-2.8 [2.2]; P=.045); the Modified Rush Tic Rating Scale total score (-5.8 [2.9]; P=.01); and the phonic tic severity score (-2.2 [2.6]; P=.04). Continuous, off, and scheduled stimulation conditions were assessed blindly in an acute experiment at 6 months after implantation. The scores in all 3 conditions showed a trend for improvement, with the continuous and scheduled conditions performing better than the off condition. Tic suppression was commonly seen at ventral (deep) contacts, and programming settings resulting in tic suppression were commonly associated with a subjective feeling of calmness. 
This study provides

  1. Thirty-six Cases of Pseudobulbar Palsy Treated by Needling with Prompt and Deep Insertion

    Institute of Scientific and Technical Information of China (English)

    Chen Hong

    2006-01-01

    Pseudobulbar palsy refers to bulbar paralysis due to upper motor neuron injury and is one of the severe complications of cerebrovascular diseases. The author has treated 36 cases of the disease with acupuncture using a prompt and deep insertion technique and achieved satisfactory therapeutic results. A report follows.

  2. GHG emission scenarios in Asia and the world: The key technologies for significant reduction

    International Nuclear Information System (INIS)

    Akashi, Osamu; Hijioka, Yasuaki; Masui, Toshihiko; Hanaoka, Tatsuya; Kainuma, Mikiko

    2012-01-01

    In this paper, we explore GHG emission scenarios up to 2050 in Asia and the world as part of the Asian Modeling Exercise and assess technology options for meeting a 2.6 W/m2 radiative forcing target using AIM/Enduse[Global] and AIM/Impact[Policy]. To meet this target, global GHG emissions in 2050 must be reduced by 72% relative to a reference scenario, which corresponds to a 57% reduction from the 2005 level. Energy intensity improvement contributes substantially to curbing CO2 emissions in the short term, while carbon intensity reduction and CO2 capture play a large role in further emission reduction in the mid to long term. The top five key technologies in terms of reduction amount are CCS, solar power generation, wind power generation, biomass power generation and biofuel, which, in total, account for about 60% of global GHG emissions reduction in 2050. We implement additional model runs, each of which enforces limited availability of one of the key technologies. The results show that the 2.6 W/m2 target up to 2050 is achievable even if the availability of any one of the key technologies is limited to half the level achieved in the default simulation. However, if the use of CCS or biomass is limited, the cumulative GHG abatement cost until 2050 increases considerably. Therefore CCS and biomass have a vital role in curbing costs to achieve significant emission reductions. - Highlights: ► We explore GHG emission scenarios up to 2050 in Asia and the world. ► Significant GHG emission reduction is required to limit radiative forcing at a low level. ► We assess technology options for achieving significant GHG emission reduction. ► CCS, solar power, wind power, and biomass are the key technologies for reduction. ► In particular, CCS and biomass play a vital role in curbing costs to achieve significant emission reductions.
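The two baselines quoted in this abstract (72% below the reference scenario, 57% below the 2005 level) jointly pin down how much reference-scenario emissions grow by 2050. A quick consistency check, using a hypothetical 2005 emission index of 100 (not a value from the paper):

```python
# Consistency check of the two reduction baselines quoted in the abstract.
# E_2005 is a hypothetical emission index for 2005, not data from the paper.
E_2005 = 100.0

# Target emissions in 2050: 57% below the 2005 level.
E_target = E_2005 * (1 - 0.57)

# The same target is 72% below the 2050 reference scenario, so the
# implied reference-scenario emissions in 2050 are:
E_ref_2050 = E_target / (1 - 0.72)

# The two figures are mutually consistent only if reference emissions
# grow ~54% over the 2005 level by 2050.
growth_over_2005 = E_ref_2050 / E_2005 - 1
print(round(growth_over_2005, 3))  # → 0.536
```

The check shows the two percentages imply roughly 54% emission growth in the no-policy reference by 2050, a plausible figure for a global baseline.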

  3. Multi-level deep supervised networks for retinal vessel segmentation.

    Science.gov (United States)

    Mo, Juan; Zhang, Lei

    2017-12-01

    Changes in the appearance of retinal blood vessels are an important indicator for various ophthalmologic and cardiovascular diseases, including diabetes, hypertension, arteriosclerosis, and choroidal neovascularization. Vessel segmentation from retinal images is very challenging because of low blood vessel contrast, intricate vessel topology, and the presence of pathologies such as microaneurysms and hemorrhages. To overcome these challenges, we propose a neural network-based method for vessel segmentation. A deep supervised fully convolutional network is developed by leveraging multi-level hierarchical features of the deep networks. To improve the discriminative capability of features in lower layers of the deep network and guide the gradient back propagation to overcome gradient vanishing, deep supervision with auxiliary classifiers is incorporated in some intermediate layers of the network. Moreover, the transferred knowledge learned from other domains is used to alleviate the issue of insufficient medical training data. The proposed approach does not rely on hand-crafted features and needs no problem-specific preprocessing or postprocessing, which reduces the impact of subjective factors. We evaluate the proposed method on three publicly available databases, the DRIVE, STARE, and CHASE_DB1 databases. Extensive experiments demonstrate that our approach achieves better or comparable performance to state-of-the-art methods with a much faster processing speed, making it suitable for real-world clinical applications. The results of cross-training experiments demonstrate its robustness with respect to the training set. The proposed approach segments retinal vessels accurately with a much faster processing speed and can be easily applied to other biomedical segmentation tasks.
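The deep-supervision scheme described here, auxiliary classifiers at intermediate layers whose losses are added to the main segmentation objective, can be sketched independently of any particular framework. The auxiliary weight and the toy predictions below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, a standard loss for vessel masks."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def deep_supervision_loss(main_pred, aux_preds, target, aux_weight=0.5):
    """Main loss plus down-weighted losses from auxiliary classifiers.

    The auxiliary terms inject gradient signal directly into lower
    layers, which is how deep supervision counteracts vanishing
    gradients during backpropagation.
    """
    loss = bce(main_pred, target)
    for p in aux_preds:
        loss += aux_weight * bce(p, target)
    return loss

rng = np.random.default_rng(0)
target = (rng.random((8, 8)) > 0.5).astype(float)      # toy vessel mask
main = np.clip(target * 0.8 + 0.1, 0.0, 1.0)           # confident main output
aux = [np.full((8, 8), 0.5), np.full((8, 8), 0.5)]     # uncertain side outputs
total = deep_supervision_loss(main, aux, target)
```

In training, all terms are backpropagated jointly; at inference, only the main output is used.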

  4. Deep learning in TMVA Benchmarking Benchmarking TMVA DNN Integration of a Deep Autoencoder

    CERN Document Server

    Huwiler, Marc

    2017-01-01

    The TMVA library in ROOT is dedicated to multivariate analysis, and in particular offers numerous machine learning algorithms in a standardized framework. It is widely used in High Energy Physics for data analysis, mainly to perform regression and classification. To keep up to date with the state of the art in deep learning, a new deep learning module was developed this summer, offering deep neural networks, convolutional neural networks, and autoencoders. TMVA did not yet have any autoencoder method, and the present project consists in implementing the TMVA autoencoder class based on the deep learning module. It also includes some benchmarking performed on the current deep neural network implementation, in comparison to the Keras framework with TensorFlow and Theano backends.

  5. Air Layer Drag Reduction

    Science.gov (United States)

    Ceccio, Steven; Elbing, Brian; Winkel, Eric; Dowling, David; Perlin, Marc

    2008-11-01

    A set of experiments has been conducted at the US Navy's Large Cavitation Channel to investigate skin-friction drag reduction with the injection of air into a high Reynolds number turbulent boundary layer. Testing was performed on a 12.9 m long flat-plate test model with the surface hydraulically smooth and fully rough at downstream-distance-based Reynolds numbers to 220 million and at speeds to 20 m/s. Local skin-friction, near-wall bulk void fraction, and near-wall bubble imaging were monitored along the length of the model. The instrument suite was used to assess the requirements necessary to achieve air layer drag reduction (ALDR). Injection of air over a wide range of air fluxes showed that three drag reduction regimes exist when injecting air: (1) bubble drag reduction that has poor downstream persistence, (2) a transitional regime with a steep rise in drag reduction, and (3) the ALDR regime, where the drag reduction plateaus at 90% ± 10% over the entire model length with large void fractions in the near-wall region. These investigations revealed several requirements for ALDR: sufficient volumetric air fluxes that increase approximately with the square of the free-stream speed; slightly higher air fluxes when the surface tension is reduced; higher air fluxes for rough surfaces; and sensitivity of ALDR formation to the inlet condition.
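The reported scaling, that the air flux required for ALDR grows roughly with the square of the free-stream speed and is higher for rough surfaces, can be captured in a small helper. The proportionality constant and roughness factor here are placeholders, not values fitted to the experiments:

```python
def aldr_air_flux(u_freestream, k=1.0, roughness_factor=1.0):
    """Approximate volumetric air flux required to sustain ALDR.

    Sketch of the reported trend q ~ k * U^2, where 'k' is a
    hypothetical constant and 'roughness_factor' (> 1 for rough
    surfaces) reflects the finding that rough surfaces need more air.
    """
    return k * roughness_factor * u_freestream ** 2

# Doubling the free-stream speed roughly quadruples the required flux.
ratio = aldr_air_flux(20.0) / aldr_air_flux(10.0)
```

The quadratic growth is why ALDR becomes expensive to sustain at ship-scale speeds, and why the transitional regime's steep rise matters for choosing an injection rate.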

  6. Accounting for Natural Reduction of Nitrogen

    DEFF Research Database (Denmark)

    Højberg, A L; Refsgaard, J. C.; Hansen, A.L.

    the same restriction for all areas independent of drainage schemes, hydrogeochemical conditions in the subsurface and retention in surface waters. Although significant reductions have been achieved this way, general measures are not cost-effective, as nitrogen retention (primarily as denitrification...

  7. A patient-specific model of the biomechanics of hip reduction for neonatal Developmental Dysplasia of the Hip: Investigation of strategies for low to severe grades of Developmental Dysplasia of the Hip.

    Science.gov (United States)

    Huayamave, Victor; Rose, Christopher; Serra, Sheila; Jones, Brendan; Divo, Eduardo; Moslehy, Faissal; Kassab, Alain J; Price, Charles T

    2015-07-16

    A physics-based computational model of neonatal Developmental Dysplasia of the Hip (DDH) following treatment with the Pavlik Harness (PV) was developed to obtain muscle force contribution in order to elucidate biomechanical factors influencing the reduction of dislocated hips. Clinical observation suggests that reduction occurs in deep sleep involving passive muscle action. Consequently, a set of five (5) adductor muscles was identified as mediators of reduction using the PV. A Fung/Hill-type model was used to characterize muscle response. Four grades (1-4) of dislocation were considered, with one (1) being a low subluxation and four (4) a severe dislocation. A three-dimensional model of the pelvis-femur lower limb of a representative 10-week-old female was generated based on CT-scans with the aid of anthropomorphic scaling of anatomical landmarks. The model was calibrated to achieve equilibrium at 90° flexion and 80° abduction. The hip was computationally dislocated according to the grade under investigation, the femur was restrained to move in an envelope consistent with PV restraints, and the dynamic response under passive muscle action and the effect of gravity was resolved. Model results with an anteversion angle of 50° show successful reduction for Grades 1-3, while Grade 4 failed to reduce with the PV. These results are consistent with a previous study based on a simplified anatomically-consistent synthetic model and clinical reports of very low success of the PV for Grade 4. However, our model indicated that it is possible to achieve reduction of Grade 4 dislocation by hyperflexion and the resultant external rotation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Cost-Reduction Roadmap for Residential Solar Photovoltaics (PV), 2017-2030

    Energy Technology Data Exchange (ETDEWEB)

    Cook, Jeffrey J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ardani, Kristen B. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Margolis, Robert M. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fu, Ran [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-03

    The installed cost of solar photovoltaics (PV) has fallen rapidly in recent years and is expected to continue declining in the future. In this report, we focus on the potential for continued PV cost reductions in the residential market. From 2010 to 2017, the levelized cost of energy (LCOE) for residential PV declined from 52 cents per kilowatt-hour (cents/kWh) to 16 cents/kWh (Fu et al. 2017). The U.S. Department of Energy's (DOE's) Solar Energy Technologies Office (SETO) recently set new LCOE targets for 2030, including a target of 5 cents/kWh for residential PV. We present a roadmap for achieving the SETO 2030 residential PV target. Because the 2030 target likely will not be achieved under business-as-usual trends (NREL 2017), we examine two key market segments that demonstrate significant opportunities for cost savings and market growth: installing PV at the time of roof replacement and installing PV as part of the new home construction process. Within both market segments, we identify four key cost-reduction opportunities: market maturation, business model integration, product innovation, and economies of scale. To assess the potential impact of these cost reductions, we compare modeled residential PV system prices in 2030 to the National Renewable Energy Laboratory's (NREL's) quarter one 2017 (Q1 2017) residential PV system price benchmark (Fu et al. 2017). We use a bottom-up accounting framework to model all component and project-development costs incurred when installing a PV system. The result is a granular accounting for 11 direct and indirect costs associated with installing a residential PV system in 2030. All four modeled pathways demonstrate significant installed-system price savings over the Q1 2017 benchmark, with the visionary pathways yielding the greatest price benefits. The largest modeled cost savings are in the supply chain, sales and marketing, overhead, and installation labor cost categories. When we translate these

  9. A Meta-Analysis of Single-Family Deep Energy Retrofit Performance in the U.S.

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan; Walker, Iain

    2014-03-01

    The current state of Deep Energy Retrofit (DER) performance in the U.S. has been assessed in 116 homes in the United States (US), using actual and simulated data gathered from the available domestic literature. Substantial airtightness reductions averaging 63% (n=48) were reported (two- to three-times more than in conventional retrofits), with average post-retrofit airtightness of 4.7 Air Changes per Hour at 50 Pascals (ACH50) (n=94). Yet, mechanical ventilation was not installed consistently. In order to avoid indoor air quality (IAQ) issues, all future DERs should comply with ASHRAE 62.2-2013 requirements or equivalent. Projects generally achieved good energy results, with average annual net-site and net-source energy savings of 47%±20% and 45%±24% (n=57 and n=35), respectively, and carbon emission reductions of 47%±22% (n=23). Net-energy reductions did not vary reliably with house age, airtightness, or reported project costs, but pre-retrofit energy usage was correlated with total reductions (MMBtu). Annual energy costs were reduced $1,283±$804 (n=31), from a pre-retrofit average of $2,738±$1,065 to $1,588±$561 post-retrofit (n=25 and n=39). The average reported incremental project cost was $40,420±$30,358 (n=59). When financed on a 30-year term, the median change in net-homeownership cost was only $1.00 per month, ranging from $149 in savings to an increase of $212 (mean=$15.67±$87.74; n=28), and almost half of the projects resulted in reductions in net-cost. The economic value of a DER may be much greater than is suggested by these net-costs, because DERs entail substantial non-energy benefits (NEBs), and retrofit measures may add value to a home at resale similarly to general remodeling, PV panel installation, and green/energy efficient home labels. These results provide estimates of the potential of DERs to address energy use in existing homes across climate zones that can be used in future estimates of the technical potential to reduce household

  10. DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites

    Science.gov (United States)

    Kalia, S.; Li, S.; Ganguly, S.; Nemani, R. R.

    2017-12-01

    Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. Given the challenges associated with data acquired at very high frequency (10-15 mins per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The success of most existing algorithms depends on spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of a proper threshold is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We train CloudCNN on a multi-GPU Nvidia Devbox cluster and deploy the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multi-spectral GOES-16 or Himawari-8 Full Disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event prediction.

  11. Deep subsurface microbial processes

    Science.gov (United States)

    Lovley, D.R.; Chapelle, F.H.

    1995-01-01

    Information on the microbiology of the deep subsurface is necessary in order to understand the factors controlling the rate and extent of the microbially catalyzed redox reactions that influence the geophysical properties of these environments. Furthermore, there is an increasing threat that deep aquifers, an important drinking water resource, may be contaminated by man's activities, and there is a need to predict the extent to which microbial activity may remediate such contamination. Metabolically active microorganisms can be recovered from a diversity of deep subsurface environments. The available evidence suggests that these microorganisms are responsible for catalyzing the oxidation of organic matter coupled to a variety of electron acceptors just as microorganisms do in surface sediments, but at much slower rates. The technical difficulties in aseptically sampling deep subsurface sediments and the fact that microbial processes in laboratory incubations of deep subsurface material often do not mimic in situ processes frequently necessitate that microbial activity in the deep subsurface be inferred through nonmicrobiological analyses of ground water. These approaches include measurements of dissolved H2, which can predict the predominant microbially catalyzed redox reactions in aquifers, as well as geochemical and groundwater flow modeling, which can be used to estimate the rates of microbial processes. Microorganisms recovered from the deep subsurface have the potential to affect the fate of toxic organics and inorganic contaminants in groundwater. Microbial activity also greatly influences the chemistry of many pristine groundwaters and contributes to such phenomena as porosity development in carbonate aquifers, accumulation of undesirably high concentrations of dissolved iron, and production of methane and hydrogen sulfide. Although the last decade has seen a dramatic increase in interest in deep subsurface microbiology, in comparison with the study of

  12. Processes governing transient responses of the deep ocean buoyancy budget to a doubling of CO2

    Science.gov (United States)

    Palter, J. B.; Griffies, S. M.; Hunter Samuels, B. L.; Galbraith, E. D.; Gnanadesikan, A.

    2012-12-01

    Recent observational analyses suggest there is a temporal trend and high-frequency variability in deep ocean buoyancy over the last twenty years, a phenomenon reproduced even in low-mixing models. Here we use an earth system model (GFDL's ESM2M) to evaluate the physical processes that influence the buoyancy (and thus steric sea level) budget of the deep ocean in quasi-steady state and under a doubling of CO2. A new suite of model diagnostics allows us to quantitatively assess every process that influences the buoyancy budget and its temporal evolution, revealing surprising dynamics governing both the equilibrium budget and its transient response to climate change. The results suggest that the temporal evolution of the deep ocean contribution to sea level rise is due to a diversity of processes at high latitudes, whose net effect is then advected in the Eulerian mean flow to mid and low latitudes. In the Southern Ocean, a slowdown in convection and spin up of the residual mean advection are approximately equal players in the deep steric sea level rise. In the North Atlantic, the region of greatest deep steric sea level variability in our simulations, a decrease in mixing of cold, dense waters from the marginal seas and a reduction in open ocean convection causes an accumulation of buoyancy in the deep subpolar gyre, which is then advected equatorward.

  13. Final report - Reduction of mercury in saturated subsurface sediments and its potential to mobilize mercury in its elemental form

    Energy Technology Data Exchange (ETDEWEB)

    Bakray, Tamar [Rutgers University]

    2013-06-13

    The goal of our project was to investigate Hg(II) reduction in the deep subsurface. We focused on microbial and abiotic pathways of reduction and explored how it affected the toxicity and mobility of Hg in this unique environment. The project’s tasks included: 1. Examining the role of mer activities in the reduction of Hg(II) in denitrifying enrichment cultures; 2. Investigating the biotic/abiotic reduction of Hg(II) under iron reducing conditions; 3. Examining Hg(II) redox transformations under anaerobic conditions in subsurface sediments from DOE sites.

  14. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network.

    Science.gov (United States)

    Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval

    2018-02-26

    Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
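The training objective behind a Cox-model network such as DeepSurv is the negative partial log-likelihood of the Cox model, with the network output playing the role of the log-risk score. A minimal NumPy version (toy risk scores, ties ignored for simplicity; a sketch, not the authors' implementation):

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood.

    risk  : model output h(x) per patient (higher = higher hazard)
    time  : observed time (event or censoring)
    event : 1 if the event occurred, 0 if censored

    For each observed event, the risk set contains every patient
    still under observation at that time (time >= event time).
    """
    risk = np.asarray(risk, dtype=float)
    time = np.asarray(time, dtype=float)
    event = np.asarray(event)
    ll = 0.0
    for i in np.where(event == 1)[0]:
        risk_set = risk[time >= time[i]]
        ll += risk[i] - np.log(np.sum(np.exp(risk_set)))
    return -ll

# Toy cohort: assigning higher risk scores to patients who fail early
# (concordant ordering) should give a lower loss than the reverse.
time = [2.0, 4.0, 6.0, 8.0]
event = [1, 1, 0, 1]
good = cox_neg_log_partial_likelihood([3.0, 2.0, 1.0, 0.0], time, event)
bad = cox_neg_log_partial_likelihood([0.0, 1.0, 2.0, 3.0], time, event)
```

Minimizing this loss with gradient descent over the network weights is what lets a neural network replace the linear predictor of the classical Cox model.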

  15. Pathogenesis of deep endometriosis.

    Science.gov (United States)

    Gordts, Stephan; Koninckx, Philippe; Brosens, Ivo

    2017-12-01

    The pathophysiology of (deep) endometriosis is still unclear. As originally suggested by Cullen, the definition "deeper than 5 mm" should be changed to "adenomyosis externa." With the discovery of the old European literature on uterine bleeding in 5%-10% of neonates and histologic evidence that the bleeding represents decidual shedding, it is hypothesized that endometrial stem/progenitor cells, implanted in the pelvic cavity after birth, may be at the origin of adolescent and even the occasional premenarcheal pelvic endometriosis. Endometriosis in the adolescent is characterized by angiogenic and hemorrhagic peritoneal and ovarian lesions. The development of deep endometriosis at a later age suggests that deep infiltrating endometriosis is a delayed stage of endometriosis. Another hypothesis is that the endometriotic cell has undergone genetic or epigenetic changes and those specific changes determine the development into deep endometriosis. This is compatible with the hereditary aspects, and with the clonality of deep and cystic ovarian endometriosis. It explains the predisposition and an eventual causal effect by dioxin or radiation. Specific genetic/epigenetic changes could explain the various expressions, and thus typical, cystic, and deep endometriosis become three different diseases. Subtle lesions are not a disease until (epi)genetic changes occur. A classification should reflect that deep endometriosis is a specific disease. In conclusion, the pathophysiology of deep endometriosis remains debated, and the mechanisms of disease progression, as well as the role of genetics and epigenetics in the process, still need to be unraveled. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  16. Study on method and mechanism of deep well circulation for the growth control of Microcystis in aquaculture pond.

    Science.gov (United States)

    Cong, Haibing; Sun, Feng; Wu, Jun; Zhou, Yue; Yan, Qi; Ren, Ao; Xu, Hu

    2017-06-01

    In order to control the growth of Microcystis in aquaculture ponds and reduce its adverse effect on water quality and aquaculture, a production-scale experiment of deep well circulation treatment was carried out in an aquaculture pond with a water surface area of 63,000 m2 and a water depth of 1.6-2.0 m. Compared with the control pond, the experiment pond had better water quality, as indicated by a 64.2% reduction in chlorophyll a and an 81.1% reduction in algal cells. The chemical oxygen demand, total nitrogen, and total phosphorus concentration were reduced by 55.1%, 57.5%, and 50.8%, respectively. The treatment efficiency is mainly due to the growth control of Microcystis (i.e., a 96.4% reduction in cells). The collapse of gas vesicles under water pressure was suggested as the mechanism for Microcystis suppression by the deep well circulation treatment. Microcystis lost its buoyancy after the gas vesicles collapsed and settled to the bottom of the aquaculture pond. As a result, algae reproduction was suppressed because algae can only grow in areas with enough sunlight (i.e., water depth less than 1 m).

  17. DeepSpark: A Spark-Based Distributed Deep Learning Framework for Commodity Clusters

    OpenAIRE

    Kim, Hanjoo; Park, Jaehong; Jang, Jaehee; Yoon, Sungroh

    2016-01-01

    The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that exploits Apache Spark on commodity clusters. To support parallel operation...

  18. A Deep Learning Approach for Fault Diagnosis of Induction Motors in Manufacturing

    Science.gov (United States)

    Shao, Si-Yu; Sun, Wen-Jun; Yan, Ru-Qiang; Wang, Peng; Gao, Robert X.

    2017-11-01

    Extracting features from original signals is a key procedure for traditional fault diagnosis of induction motors, as it directly influences the performance of fault recognition. However, high-quality features require expert knowledge and human intervention. In this paper, a deep learning approach based on deep belief networks (DBN) is developed to learn features from the frequency distribution of vibration signals with the purpose of characterizing the working status of induction motors. It combines the feature extraction procedure with the classification task to achieve automated and intelligent fault diagnosis. The DBN model is built by stacking multiple restricted Boltzmann machines (RBMs) and is trained using a layer-by-layer pre-training algorithm. Compared with traditional diagnostic approaches where feature extraction is needed, the presented approach has the ability to learn hierarchical representations, which are suitable for fault classification, directly from the frequency distribution of the measurement data. The structure of the DBN model is investigated, as the scale and depth of the DBN architecture directly affect its classification performance. An experimental study conducted on a machine fault simulator verifies the effectiveness of the deep learning approach for fault diagnosis of induction motors. This research proposes an intelligent diagnosis method for induction motors that utilizes a deep learning model to automatically learn features from sensor data and recognize working status.
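The layer-by-layer pre-training mentioned above trains each RBM in the stack with contrastive divergence. A toy NumPy sketch of a single binary RBM trained with CD-1 (dimensions, data, and learning rate are illustrative; bias terms are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(v, W):
    """Deterministic visible -> hidden -> visible pass."""
    return sigmoid(sigmoid(v @ W) @ W.T)

def cd1_step(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) weight update for a binary RBM."""
    h0 = sigmoid(v0 @ W)                           # positive phase
    h0_sample = (rng.random(h0.shape) < h0) * 1.0  # stochastic hidden states
    v1 = sigmoid(h0_sample @ W.T)                  # one-step reconstruction
    h1 = sigmoid(v1 @ W)                           # negative phase
    return W + lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

# Toy "spectra": two binary prototype patterns standing in for the
# frequency distributions of two machine states.
protos = np.array([[1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1]], dtype=float)
v = protos[rng.integers(0, 2, size=64)]

W = rng.normal(scale=0.01, size=(6, 4))
err_before = float(np.mean((v - reconstruct(v, W)) ** 2))
for _ in range(300):
    W = cd1_step(v, W)
err_after = float(np.mean((v - reconstruct(v, W)) ** 2))
```

In a DBN, the hidden activations of this trained RBM become the visible data for the next RBM in the stack, and a supervised classifier is fine-tuned on top.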

  19. Multi-Site Diagnostic Classification of Schizophrenia Using Discriminant Deep Learning with Functional Connectivity MRI.

    Science.gov (United States)

    Zeng, Ling-Li; Wang, Huaning; Hu, Panpan; Yang, Bo; Pu, Weidan; Shen, Hui; Chen, Xingui; Liu, Zhening; Yin, Hong; Tan, Qingrong; Wang, Kai; Hu, Dewen

    2018-04-01

    A lack of a sufficiently large sample at single sites causes poor generalizability in automatic diagnosis classification of heterogeneous psychiatric disorders such as schizophrenia based on brain imaging scans. Advanced deep learning methods may be capable of learning subtle hidden patterns from high dimensional imaging data, overcome potential site-related variation, and achieve reproducible cross-site classification. However, deep learning-based cross-site transfer classification, despite less imaging site-specificity and more generalizability of diagnostic models, has not been investigated in schizophrenia. A large multi-site functional MRI sample (n = 734, including 357 schizophrenic patients from seven imaging resources) was collected, and a deep discriminant autoencoder network, aimed at learning imaging site-shared functional connectivity features, was developed to discriminate schizophrenic individuals from healthy controls. Accuracies of approximately 85.0% and 81.0% were obtained in multi-site pooling classification and leave-site-out transfer classification, respectively. The learned functional connectivity features revealed dysregulation of the cortical-striatal-cerebellar circuit in schizophrenia, and the most discriminating functional connections were primarily located within and across the default, salience, and control networks. The findings imply that dysfunctional integration of the cortical-striatal-cerebellar circuit across the default, salience, and control networks may play an important role in the "disconnectivity" model underlying the pathophysiology of schizophrenia. The proposed discriminant deep learning method may be capable of learning reliable connectome patterns and help in understanding the pathophysiology and achieving accurate prediction of schizophrenia across multiple independent imaging sites. Copyright © 2018 German Center for Neurodegenerative Diseases (DZNE). Published by Elsevier B.V. All rights reserved.

  20. Industrialisation, Trade Policy and Poverty Reduction: Evidence from Asia

    OpenAIRE

    Peter Warr

    2003-01-01

    Over recent decades, most of the developing economies of Asia achieved reductions in absolute poverty incidence, but these reductions varied greatly in size. Differences in the rate of aggregate economic growth explain part, but not all of these differences. One factor that would be important is the sectoral composition of the growth. This paper examines the relationship between poverty reduction outcomes and the rate of growth in the agricultural, industrial and services sectors. It assemble...

  1. The role of anthropogenic aerosol emission reduction in achieving the Paris Agreement's objective

    Science.gov (United States)

    Hienola, Anca; Pietikäinen, Joni-Pekka; O'Donnell, Declan; Partanen, Antti-Ilari; Korhonen, Hannele; Laaksonen, Ari

    2017-04-01

    The Paris agreement reached in December 2015 under the auspices of the United Nations Framework Convention on Climate Change (UNFCCC) aims at holding the global temperature increase to well below 2°C above preindustrial levels and "to pursue efforts to limit the temperature increase to 1.5°C above preindustrial levels". Limiting warming to any level implies that the total amount of carbon dioxide (CO2) - the dominant driver of long-term temperatures - that can ever be emitted into the atmosphere is finite. Essentially, this means that global CO2 emissions need to become net zero. CO2 is not the only pollutant causing warming, although it is the most persistent. Short-lived, non-CO2 climate forcers must also be considered. Whereas much effort has been put into defining a threshold for temperature increase and zero net carbon emissions, surprisingly little attention has been paid to the non-CO2 climate forcers, including not just the non-CO2 greenhouse gases (methane (CH4), nitrous oxide (N2O), halocarbons etc.) but also the anthropogenic aerosols like black carbon (BC), organic carbon (OC) and sulfate. This study investigates the possibility of limiting the temperature increase to 1.5°C by the end of the century under different future scenarios of anthropogenic aerosol emissions, simulated with the very simplistic MAGICC climate carbon cycle model as well as with ECHAM6.1-HAM2.2-SALSA + UVic ESCM. The simulations include two different CO2 scenarios: RCP3PD as control and a CO2 reduction leading to 1.5°C (which translates into reaching net zero CO2 emissions by the mid-2040s followed by negative emissions by the end of the century); each CO2 scenario also includes two aerosol pollution control cases, denoted CLE (current legislation) and MFR (maximum feasible reduction). The main result of the above scenarios is that the stronger the anthropogenic aerosol emission reduction is, the more significant the temperature increase by 2100 relative to pre

  2. Reduction operator for wide-SIMDs reconsidered

    NARCIS (Netherlands)

    Waeijen, L.J.W.; She, D.; Corporaal, H.; He, Y.

    2014-01-01

    It has been shown that wide Single Instruction Multiple Data architectures (wide-SIMDs) can achieve high energy efficiency, especially in domains such as image and vision processing. In these and various other application domains, reduction is a frequently encountered operation, where multiple input
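
    The reduction operation referred to above can be illustrated with a logarithmic tree reduction, the pattern a wide-SIMD machine evaluates across its lanes in lockstep (a minimal Python sketch; the function name and structure are illustrative, not taken from the paper):

```python
def tree_reduce(values, op):
    """Combine all elements with a binary operator in O(log n) parallel
    steps: each level pairs adjacent elements, as SIMD lanes would in
    lockstep, instead of accumulating sequentially in O(n) steps."""
    data = list(values)
    while len(data) > 1:
        paired = [op(data[i], data[i + 1]) for i in range(0, len(data) - 1, 2)]
        if len(data) % 2:          # odd element is carried to the next level
            paired.append(data[-1])
        data = paired
    return data[0]

print(tree_reduce([1, 2, 3, 4, 5, 6, 7, 8], lambda a, b: a + b))  # 36
print(tree_reduce([3, 1, 4, 1, 5], max))                          # 5
```

    An 8-element sum finishes in 3 levels rather than 7 sequential additions, which is why reduction maps well onto wide-SIMD datapaths.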

  3. The dynamic interplay among EFL learners’ ambiguity tolerance, adaptability, cultural intelligence, learning approach, and language achievement

    Directory of Open Access Journals (Sweden)

    Shadi Alahdadi

    2017-01-01

    Full Text Available A key objective of education is to prepare individuals to be fully functioning learners. This entails developing cognitive, metacognitive, motivational, cultural, and emotional competencies. The present study aimed to examine the interrelationships among adaptability, tolerance of ambiguity, cultural intelligence, learning approach, and language achievement as manifestations of the above competencies within a single model. The participants comprised one hundred eighty BA and MA Iranian university students studying English language teaching and translation. The instruments used in this study consisted of the translated versions of four questionnaires: the second language tolerance of ambiguity scale, the adaptability scale taken from an emotional intelligence inventory, a cultural intelligence (CQ) inventory, and the revised study process questionnaire measuring surface and deep learning. The results, estimated via structural equation modeling (SEM), revealed that the proposed model containing the variables under study had a good fit with the data. It was found that all the variables except adaptability directly influenced language achievement, with the deep approach having the highest impact and ambiguity tolerance the lowest. In addition, ambiguity tolerance was a positive and significant predictor of the deep approach. CQ was found to be influenced by both ambiguity tolerance and adaptability. The findings are discussed in light of these results.

  4. Insect Repellents and Associated Personal Protection for a Reduction in Human Disease

    Science.gov (United States)

    2012-01-01

    of topical repellents and scrub typhus was reduced through the use of treated clothing. Successful reduction of leishmaniasis was achieved through the...epidemic typhus, scrub typhus, plague and malaria. The result was the development of many of the modern strategies for vector control that we take for...prevent bites and disease. They gave examples that represented well-documented disease reduction achieved with repellent clothing (scrub typhus; McCulloch

  5. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene; Kulmanov, Maxat; Schofield, Paul N; Gkoutos, Georgios V; Hoehndorf, Robert

    2018-01-01

    phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp. Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well

  6. Challenges and Opportunities To Achieve 50% Energy Savings in Homes. National Laboratory White Papers

    Energy Technology Data Exchange (ETDEWEB)

    Bianchi, Marcus V.A. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2011-07-01

    This report summarizes the key opportunities, gaps, and barriers identified by researchers from four national laboratories (Lawrence Berkeley National Laboratory, National Renewable Energy Laboratory, Oak Ridge National Laboratory, and Pacific Northwest National Laboratory) that must be addressed to achieve Building America's longer-term 50% savings goal, while ensuring coordination with the Building America industry teams whose research focuses on systems to achieve the near-term 30% savings goal. Although new construction was included, the effort focused on deep energy retrofits of existing homes.

  7. DeepARG: a deep learning approach for predicting antibiotic resistance genes from metagenomic data.

    Science.gov (United States)

    Arango-Argoty, Gustavo; Garner, Emily; Pruden, Amy; Heath, Lenwood S; Vikesland, Peter; Zhang, Liqing

    2018-02-01

    Growing concerns about increasing rates of antibiotic resistance call for expanded and comprehensive global monitoring. Advancing methods for monitoring of environmental media (e.g., wastewater, agricultural waste, food, and water) is especially needed for identifying potential sources of novel antibiotic resistance genes (ARGs), hot spots for gene exchange, and pathways for the spread of ARGs and human exposure. Next-generation sequencing now enables direct access and profiling of the total metagenomic DNA pool, where ARGs are typically identified or predicted based on the "best hits" of sequence searches against existing databases. Unfortunately, this approach produces a high rate of false negatives. To address such limitations, we propose here a deep learning approach, taking into account a dissimilarity matrix created using all known categories of ARGs. Two deep learning models, DeepARG-SS and DeepARG-LS, were constructed for short read sequences and full gene length sequences, respectively. Evaluation of the deep learning models over 30 antibiotic resistance categories demonstrates that the DeepARG models can predict ARGs with both high precision (> 0.97) and recall (> 0.90). The models displayed an advantage over the typical best-hit approach, yielding consistently lower false negative rates and thus higher overall recall (> 0.9). As more data become available for under-represented ARG categories, the DeepARG models' performance can be expected to be further enhanced due to the nature of the underlying neural networks. Our newly developed ARG database, DeepARG-DB, encompasses ARGs predicted with a high degree of confidence and extensive manual inspection, greatly expanding current ARG repositories. The deep learning models developed here offer more accurate antimicrobial resistance annotation relative to current bioinformatics practice. DeepARG does not require strict cutoffs, which enables identification of a much broader diversity of ARGs. The
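
    The precision and recall figures quoted above are the standard set-based metrics; a small sketch of how they are computed for a predicted set of ARG labels against a reference set (a hypothetical helper for illustration, not part of DeepARG; the gene names are arbitrary examples):

```python
def precision_recall(predicted, reference):
    """Precision: fraction of predictions that are correct.
    Recall: fraction of the reference set that was recovered.
    A low false-negative rate corresponds to high recall."""
    true_positives = len(predicted & reference)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    return precision, recall

p, r = precision_recall({"sul1", "tetM", "blaTEM"}, {"sul1", "tetM", "ermB", "vanA"})
print(p, r)  # 2 of 3 predictions correct, 2 of 4 reference genes recovered
```

    The "best hit" strategy criticized in the abstract trades recall for precision by discarding borderline matches; the DeepARG models aim to keep both high.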

  8. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    International Nuclear Information System (INIS)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-01-01

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single-iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed that has achieved, in actual field demonstrations, results unattainable to date with industry-standard tools.

  9. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study

    Directory of Open Access Journals (Sweden)

    Siavash Hosseinyalamdary

    2018-04-01

    Full Text Available Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations have remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed that our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.
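
    The idea of adding a modelling step to the Kalman filter's predict and update steps can be sketched in one dimension. Here the learned "model" is only a scalar sensor bias estimated online from residuals; the paper learns a full IMU error model, so every name and constant below is an illustrative assumption:

```python
def kalman_with_error_model(measurements, q=1e-3, r=0.25, alpha=0.05):
    """1D Kalman filter (random-walk state) with an extra modelling step:
    a sensor bias is learned from the post-update residuals and removed
    from each measurement before the update."""
    x, P, bias = 0.0, 1.0, 0.0
    estimates = []
    for z in measurements:
        # predict
        x_pred, P_pred = x, P + q
        # update, using the bias-corrected measurement
        K = P_pred / (P_pred + r)
        x = x_pred + K * ((z - bias) - x_pred)
        P = (1 - K) * P_pred
        # modelling step: absorb the remaining systematic residual into the bias
        bias += alpha * ((z - bias) - x)
        estimates.append(x)
    return estimates, bias

est, b = kalman_with_error_model([2.0] * 200)
# state estimate plus learned bias converges to the constant measurement
```

    In the GNSS/IMU setting the bias term would be replaced by a richer error model, with GNSS fixes providing the observability that a scalar example lacks.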

  10. Should the U.S. proceed to consider licensing deep geological disposal of high-level nuclear waste

    International Nuclear Information System (INIS)

    Curtiss, J.R.

    1993-01-01

    The United States, as well as other countries facing the question of how to handle high-level nuclear waste, has decided that the most appropriate means of disposal is a deep geologic repository. In recent years, the Radioactive Waste Management Committee of the Nuclear Energy Agency has developed several position papers on the technical achievability of deep geologic disposal, demonstrating the serious consideration of deep geologic disposal in the international community. The Committee has not, as yet, formally endorsed disposal in a deep geologic repository as the preferred method of handling high-level nuclear waste. The United States, on the other hand, has studied the various methods of disposing of high-level nuclear waste and has determined that deep geologic disposal is the method that should be developed. The purpose of this paper is to present a review of the United States' decision to select deep geologic disposal as the preferred method of addressing the high-level waste problem. It presents a short history of the steps taken by the U.S. in determining what method to use, discusses the NRC's Waste Confidence Decision, and provides information on other issues in the U.S. program, such as reconsideration of the final disposal standard and the growing inventory of spent fuel in storage.

  11. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-09-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a study to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. An assessment of historical deep gas well drilling activity and forecast of future trends was completed during the first six months of the project; this segment of the project was covered in Technical Project Report No. 1. The second progress report covers the next six months of the project during which efforts were primarily split between summarizing rock mechanics and fracture growth in deep reservoirs and contacting operators about case studies of deep gas well stimulation.

  12. Deep learning for biomarker regression: application to osteoporosis and emphysema on chest CT scans

    Science.gov (United States)

    González, Germán.; Washko, George R.; San José Estépar, Raúl

    2018-03-01

    Introduction: Biomarker computation using deep learning often relies on a two-step process, where the deep learning algorithm segments the region of interest and then the biomarker is measured. We propose an alternative paradigm, where the biomarker is estimated directly using a regression network. We showcase this image-to-biomarker paradigm using two biomarkers: the estimation of bone mineral density (BMD) and the estimation of lung percentage of emphysema from CT scans. Materials and methods: We use a large database of 9,925 CT scans to train, validate and test the network, for which reference-standard BMD and percentage emphysema have already been computed. First, the 3D dataset is reduced to a set of canonical 2D slices where the organ of interest is visible (either the spine for BMD or the lungs for emphysema). This data reduction is performed using an automatic object detector. Second, the regression neural network is composed of three convolutional layers, followed by a fully connected and an output layer. The network is optimized using a momentum optimizer with an exponential decay rate, using the root mean squared error as cost function. Results: The Pearson correlation coefficients obtained against the reference standards are r = 0.940 (p < 0.00001) and r = 0.976 (p < 0.00001) for BMD and percentage emphysema, respectively. Conclusions: The deep-learning regression architecture can learn biomarkers from images directly, without indicating the structures of interest. This approach simplifies the development of biomarker extraction algorithms. The proposed data reduction based on object detectors conveys enough information to compute the biomarkers of interest.
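
    The optimiser configuration described (momentum with an exponentially decaying learning rate, RMSE as the cost function) can be sketched on a toy linear model; the convolutional layers are omitted and all hyperparameter values below are illustrative assumptions, not the paper's:

```python
import math

def train_rmse_momentum(xs, ys, lr0=0.05, decay=0.98, momentum=0.8, epochs=300):
    """Fit y ~ w*x + b by gradient descent on the RMSE cost, with a
    momentum term and an exponentially decayed learning rate."""
    w = b = vw = vb = 0.0
    n = len(xs)
    for epoch in range(epochs):
        lr = lr0 * decay ** epoch                # exponential decay schedule
        residuals = [w * x + b - y for x, y in zip(xs, ys)]
        rmse = math.sqrt(sum(e * e for e in residuals) / n)
        if rmse == 0.0:                          # perfect fit: gradient undefined
            break
        # d(RMSE)/dw = sum(e_i * x_i) / (n * rmse), and similarly for b
        gw = sum(e * x for e, x in zip(residuals, xs)) / (n * rmse)
        gb = sum(residuals) / (n * rmse)
        vw = momentum * vw - lr * gw
        vb = momentum * vb - lr * gb
        w, b = w + vw, b + vb
    return w, b, rmse

w, b, rmse = train_rmse_momentum([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])  # y = 2x + 1
```

    The decaying learning rate anneals the momentum oscillations, so the parameters settle near the least-squares fit as the RMSE cost flattens.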

  13. Emergency Closed Reduction of a C4/5 Fracture Dislocation with Complete Paraplegia Resulting in Profound Neurologic Recovery

    Directory of Open Access Journals (Sweden)

    Christian W. Müller

    2013-01-01

    Full Text Available Introduction. Cervical spinal cord injuries due to traumatic fractures are associated with persistent neurological deficits. Although clinical evidence is weak, early decompression, defined as <24–72 h, has been frequently proposed. Animal studies show better outcomes after early decompression within one hour or less, which can hardly ever be achieved in clinical practice. Case Presentation. A 37-year-old patient was hospitalized after being hit by a shying horse. After diagnosis of a C4/5 fracture dislocation and complete paraplegia, she was intubated and sedated with deep relaxation. Emergency reduction was performed approximately 120 minutes after trauma. Subsequently, a standard anterior decompression, discectomy, and fusion were carried out. She was then transferred to a specialized rehabilitation hospital. Her neurologic function improved from AIS grade A on admission to grade B postoperatively and grade D after four months of rehabilitation. One year after the accident, she was ambulatory without walking aids and had restarted horse riding. Discussion and Conclusion. Only rarely in clinical practice can decompression of the spinal canal be performed as early as in this case. This case highlights the potential benefit of the earliest possible reduction in cervical fracture dislocations with compression of the spinal cord.

  14. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs.

    Science.gov (United States)

    Li, Zhixi; He, Yifan; Keel, Stuart; Meng, Wei; Chang, Robert T; He, Mingguang

    2018-03-02

    To assess the performance of a deep learning algorithm for detecting referable glaucomatous optic neuropathy (GON) based on color fundus photographs. A deep learning system for the classification of GON was developed for automated classification of GON on color fundus photographs. We retrospectively included 48,116 fundus photographs for the development and validation of a deep learning algorithm. This study recruited 21 trained ophthalmologists to classify the photographs. Referable GON was defined as vertical cup-to-disc ratio of 0.7 or more and other typical changes of GON. The reference standard was established once 3 graders achieved agreement. A separate validation dataset of 8000 fully gradable fundus photographs was used to assess the performance of this algorithm. The area under the receiver operating characteristic curve (AUC) with sensitivity and specificity was applied to evaluate the efficacy of the deep learning algorithm in detecting referable GON. In the validation dataset, this deep learning system achieved an AUC of 0.986 with sensitivity of 95.6% and specificity of 92.0%. The most common reasons for false-negative grading (n = 87) were GON with coexisting eye conditions (n = 44 [50.6%]), including pathologic or high myopia (n = 37 [42.6%]), diabetic retinopathy (n = 4 [4.6%]), and age-related macular degeneration (n = 3 [3.4%]). The leading reason for false-positive results (n = 480) was having other eye conditions (n = 458 [95.4%]), mainly including physiologic cupping (n = 267 [55.6%]). Misclassification as false-positive results amidst a normal-appearing fundus occurred in only 22 eyes (4.6%). A deep learning system can detect referable GON with high sensitivity and specificity. Coexistence of high or pathologic myopia is the most common cause of false-negative results. Physiologic cupping and pathologic myopia were the most common reasons for false-positive results.
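
    The AUC, sensitivity and specificity reported above are standard score-threshold metrics; a compact sketch of how they are computed from raw classifier scores (illustrative only, unrelated to the authors' implementation):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive outscores a random negative,
    with ties counted as half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def sensitivity_specificity(pos_scores, neg_scores, threshold):
    """Sensitivity: positives correctly flagged at the threshold.
    Specificity: negatives correctly passed at the threshold."""
    sens = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    spec = sum(s < threshold for s in neg_scores) / len(neg_scores)
    return sens, spec

print(auc([0.9, 0.8, 0.7], [0.1, 0.2, 0.75]))                     # 8/9
print(sensitivity_specificity([0.9, 0.8, 0.7], [0.1, 0.2, 0.75], 0.5))
```

    Unlike sensitivity and specificity, AUC is threshold-free, which is why it is reported alongside one chosen operating point.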

  15. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2005-06-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies conducted a study to evaluate the stimulation of deep wells. The objective of the project was to review U.S. deep well drilling and stimulation activity, review rock mechanics and fracture growth in deep, high-pressure/temperature wells and evaluate stimulation technology in several key deep plays. This report documents results from this project.

  16. DeepPicker: A deep learning approach for fully automated particle picking in cryo-EM.

    Science.gov (United States)

    Wang, Feng; Gong, Huichao; Liu, Gaochao; Li, Meijing; Yan, Chuangye; Xia, Tian; Li, Xueming; Zeng, Jianyang

    2016-09-01

    Particle picking is a time-consuming step in single-particle analysis and often requires significant interventions from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously-analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on the recently-published cryo-EM data of three complexes have demonstrated that our deep learning based scheme can successfully accomplish the human-level particle picking process and identify a sufficient number of particles that are comparable to those picked manually by human experts. These results indicate that DeepPicker can provide a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination. DeepPicker is released as an open-source program, which can be downloaded from https://github.com/nejyeah/DeepPicker-python. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. DEEP LEARNING MODEL FOR BILINGUAL SENTIMENT CLASSIFICATION OF SHORT TEXTS

    Directory of Open Access Journals (Sweden)

    Y. B. Abdullin

    2017-01-01

    Full Text Available Sentiment analysis of short texts such as Twitter messages and comments in news portals is challenging due to the lack of contextual information. We propose a deep neural network model that uses bilingual word embeddings to solve the sentiment classification problem for a given pair of languages. We apply our approach to two corpora of two different language pairs: English-Russian and Russian-Kazakh. We show how to train a classifier in one language and predict in another. Our approach achieves 73% accuracy for English and 74% accuracy for Russian. For Kazakh sentiment analysis, we propose a baseline method that achieves 60% accuracy, and a method to learn bilingual embeddings from a large unlabeled corpus using bilingual word pairs.

  18. Shelf-to-basin iron shuttling enhances vivianite formation in deep Baltic Sea sediments

    Science.gov (United States)

    Reed, Daniel C.; Gustafsson, Bo G.; Slomp, Caroline P.

    2016-01-01

    Coastal hypoxia is a growing and persistent problem largely attributable to enhanced terrestrial nutrient (i.e., nitrogen and phosphorus) loading. Recent studies suggest phosphorus removal through burial of iron (II) phosphates, putatively vivianite, plays an important role in nutrient cycling in the Baltic Sea - the world's largest anthropogenic dead zone - yet the dynamics of iron (II) phosphate formation are poorly constrained. To address this, a reactive-transport model was used to reconstruct the diagenetic and depositional history of sediments in the Fårö basin, a deep anoxic and sulphidic region of the Baltic Sea where iron (II) phosphates have been observed. Simulations demonstrate that transport of iron from shelf sediments to deep basins enhances vivianite formation while sulphide concentrations are low, but that pyrite forms preferentially over vivianite when sulphate reduction intensifies due to elevated organic loading. Episodic reoxygenation events, associated with major inflows of oxic waters, encourage the retention of iron oxyhydroxides and iron-bound phosphorus in sediments, increasing vivianite precipitation as a result. Results suggest that artificial reoxygenation of the Baltic Sea bottom waters could sequester up to 3% of the annual external phosphorus loads as iron (II) phosphates, but this is negligible when compared to potential internal phosphorus loads due to dissolution of iron oxyhydroxides when low oxygen conditions prevail. Thus, enhancing vivianite formation through artificial reoxygenation of deep waters is not a viable engineering solution to eutrophication in the Baltic Sea. Finally, simulations suggest that regions with limited sulphate reduction and hypoxic intervals, such as eutrophic estuaries, could act as important phosphorus sinks by sequestering vivianite. This could potentially alleviate eutrophication in shelf and slope environments.

  19. Dose reduction and optimization studies (ALARA) at nuclear power facilities

    International Nuclear Information System (INIS)

    Baum, J.W.; Meinhold, C.B.

    1983-01-01

    Brookhaven National Laboratory (BNL) has been commissioned by the Nuclear Regulatory Commission (NRC) to study dose-reduction techniques and the effectiveness of as-low-as-reasonably-achievable (ALARA) planning at LWR plants. These studies have the following objectives: identify high-dose maintenance tasks; identify dose-reduction techniques; examine incentives for dose reduction; evaluate cost-effectiveness and optimization of dose-reduction techniques; and compile an ALARA handbook on data, engineering modifications, cost-effectiveness calculations, and other information of interest to ALARA practitioners.

  20. Pathways to a low-carbon economy for the UK with the macro-econometric E3MG model

    International Nuclear Information System (INIS)

    Dagoumas, A.S.; Barker, T.S.

    2010-01-01

    This paper examines different carbon pathways for achieving deep CO2 reduction targets for the UK using a macro-econometric hybrid model, E3MG, which stands for Energy-Economy-Environment Model at the Global level. E3MG, with the UK as one of its regions, combines a top-down approach for modeling the global economy and for estimating aggregate and disaggregate energy demand with a bottom-up approach (the Energy Technology subModel, ETM) for simulating the power sector, which then provides feedback to the energy demand equations and the whole economy. The ETM submodel uses a probabilistic approach and historical data for estimating the penetration levels of the different technologies, considering their economic, technical and environmental characteristics. Three pathway scenarios (CFH, CLC and CAM) simulate CO2 reductions of 40%, 60% and 80% by 2050 compared to 1990 levels, respectively, and are compared with a reference scenario (REF) with no reduction target. The targets are modeled as the UK contribution to an international mitigation effort, such as achieving the G8 reduction targets, which is a more realistic political framework for the UK to move towards deep reductions than moving alone. This paper aims to provide modeling evidence that deep reduction targets can be met through different carbon pathways, while also assessing the macroeconomic effects of the pathways on GDP and investment.
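
    The scenario targets reduce to simple arithmetic on the 1990 baseline: a cut of fraction f leaves (1 - f) of baseline emissions as the 2050 cap. A sketch (the baseline figure is a placeholder, not the UK's actual 1990 emissions):

```python
def target_emissions(baseline_1990, reduction_fraction):
    """2050 emissions cap implied by a percentage cut from 1990 levels."""
    return baseline_1990 * (1.0 - reduction_fraction)

baseline = 100.0  # placeholder units (e.g. MtCO2); illustrative only
for name, cut in [("CFH", 0.40), ("CLC", 0.60), ("CAM", 0.80)]:
    print(name, target_emissions(baseline, cut))
```

    With a 100-unit baseline the three scenarios cap 2050 emissions at 60, 40 and 20 units, respectively.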

  1. DeepBase: annotation and discovery of microRNAs and other noncoding RNAs from deep-sequencing data.

    Science.gov (United States)

    Yang, Jian-Hua; Qu, Liang-Hu

    2012-01-01

    Recent advances in high-throughput deep-sequencing technology have produced large numbers of short and long RNA sequences and enabled the detection and profiling of known and novel microRNAs (miRNAs) and other noncoding RNAs (ncRNAs) at unprecedented sensitivity and depth. In this chapter, we describe the use of deepBase, a database that we have developed to integrate all public deep-sequencing data and to facilitate the comprehensive annotation and discovery of miRNAs and other ncRNAs from these data. deepBase provides an integrative, interactive, and versatile web graphical interface to evaluate miRBase-annotated miRNA genes and other known ncRNAs, explores the expression patterns of miRNAs and other ncRNAs, and discovers novel miRNAs and other ncRNAs from deep-sequencing data. deepBase also provides a deepView genome browser to comparatively analyze these data at multiple levels. deepBase is available at http://deepbase.sysu.edu.cn/.

  2. Critical Low-Noise Technologies Being Developed for Engine Noise Reduction Systems Subproject

    Science.gov (United States)

    Grady, Joseph E.; Civinskas, Kestutis C.

    2004-01-01

    NASA's previous Advanced Subsonic Technology (AST) Noise Reduction Program delivered the initial technologies for meeting a 10-year goal of a 10-dB reduction in total aircraft system noise. Technology Readiness Levels achieved for the engine-noise-reduction technologies ranged from 4 (rig scale) to 6 (engine demonstration). The current Quiet Aircraft Technology (QAT) project is building on those AST accomplishments to achieve the additional noise reduction needed to meet the Aerospace Technology Enterprise's 10-year goal, again validated through a combination of laboratory rig and engine demonstration tests. In order to meet the Aerospace Technology Enterprise goal for future aircraft of a 50-percent reduction in the perceived noise level, reductions of 4 dB are needed in both fan and jet noise. The primary objectives of the Engine Noise Reduction Systems (ENRS) subproject are, therefore, to develop technologies to reduce both fan and jet noise by 4 dB, to demonstrate these technologies in engine tests, and to develop and experimentally validate Computational Aero Acoustics (CAA) computer codes that will improve our ability to predict engine noise.

  3. MOVEMENT AND MANEUVER IN DEEP SPACE: A Framework to Leverage Advanced Propulsion

    Science.gov (United States)

    2018-04-01

    the Casimir force—which is analogous to a pressure imbalance created by a reduction in air density (think Bernoulli's principle). Because the..."many bets" scenario. If the bets are well vetted, like the BPP model, then even a null or sub-optimal result is a valuable pay-off in terms of...we must think of deep space exploration as imperative–too important to be relegated to simple political interest.

  4. Simulation technology used for risky assessment in deep exploration project in China

    Science.gov (United States)

    jiao, J.; Huang, D.; Liu, J.

    2013-12-01

    the real world or process, which can provide new insight into the equipment, meet requests arising from the application and construction process, and facilitate direct perception and understanding of the installation, debugging and experimental processes of key equipment for deep exploration. Finally, the objective of project cost conservation and risk reduction can be reasonably approached. Risk assessment can be used to quantitatively evaluate the possible degree of impact. During the research and development stage, information from the installation, debugging and simulation demonstration of the experimental process of key instruments and equipment is used to evaluate the fatigue and safety of the devices. This requires a full understanding of the controllable and uncontrollable risk factors in the process, followed by adjustment and improvement of the unsafe risk factors identified through risk assessment and prediction. In combination with professional geoscience software to process and interpret the environment and obtain evaluation parameters, simulation modeling comes closer to the exploration target, for which more detailed evaluations are needed. Safety and risk assessment from both micro and macro perspectives can thus be achieved, serving the purpose of reducing the risk of equipment development and avoiding unnecessary losses along the way.

  5. PNL size reduction and decontamination facilities and capabilities

    International Nuclear Information System (INIS)

    Allen, R.P.; Fetrow, L.K.; McCoy, M.W.

    1983-07-01

    Studies sponsored by the US Department of Energy at Pacific Northwest Laboratory (PNL) have resulted in the development of an effective, integrated size reduction and decontamination system for transuranically contaminated components. Using this system, a reduction of more than 95% in the volume of transuranic waste requiring interim storage and eventual geologic disposal has been achieved for typical plutonium-contaminated glove boxes. This paper describes the separate preparation, size reduction, decontamination and waste treatment operations and facilities that have been developed and demonstrated as part of this work.

  6. Deep Borehole Disposal as an Alternative Concept to Deep Geological Disposal

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jongyoul; Lee, Minsoo; Choi, Heuijoo; Kim, Kyungsu [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this paper, the general concept and key technologies for deep borehole disposal of spent fuels or HLW, as an alternative to the mined geological disposal method, were reviewed. An analysis of the distance between boreholes for the disposal of HLW was then carried out. Based on the results, a disposal area was calculated approximately and compared with that of mined geological disposal. These results will be used as input for analyses of the applicability of DBD in Korea. The disposal safety of this system has been demonstrated with underground research laboratories, and some advanced countries such as Finland and Sweden are implementing their disposal projects at the commercial stage. However, if spent fuels or high-level radioactive wastes can be disposed of at depths of 3-5 km in more stable rock formations, there are several advantages. Therefore, as an alternative to the mined deep geological disposal concept (DGD), very deep borehole disposal (DBD) technology is under consideration in a number of countries in terms of its outstanding safety and cost effectiveness. In this paper, the general concept of deep borehole disposal for spent fuels or high-level radioactive wastes was reviewed, and the key technologies, such as large-diameter borehole drilling technology, packaging and emplacement technology, sealing technology and performance/safety analysis technologies, and their challenges in the development of a deep borehole disposal system, were analyzed. Also, a very preliminary deep borehole disposal concept, including a disposal canister concept, was developed according to the nuclear environment in Korea.

  7. Deep Borehole Disposal as an Alternative Concept to Deep Geological Disposal

    International Nuclear Information System (INIS)

    Lee, Jongyoul; Lee, Minsoo; Choi, Heuijoo; Kim, Kyungsu

    2016-01-01

    In this paper, the general concept and key technologies for deep borehole disposal (DBD) of spent fuel or high-level radioactive waste (HLW), as an alternative to mined geological disposal, were reviewed. An analysis of the required distance between disposal boreholes was then carried out and, based on the results, the disposal area was estimated and compared with that of mined geological disposal. These results will be used as input for analyses of the applicability of DBD in Korea. The safety of the mined disposal concept has been demonstrated in underground research laboratories, and some advanced countries, such as Finland and Sweden, are implementing their disposal projects at the commercial stage. However, disposing of spent fuel or HLW at depths of 3-5 km, in more stable rock formations, offers several advantages. Therefore, as an alternative to the mined deep geological disposal (DGD) concept, very deep borehole disposal technology is under consideration in a number of countries for its outstanding safety and cost effectiveness. The key technologies, such as large-diameter borehole drilling, packaging and emplacement, sealing, and performance/safety assessment, were analyzed together with the challenges in developing a DBD system. In addition, a very preliminary DBD concept, including a disposal canister concept, was developed for the nuclear environment in Korea.
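
As a rough illustration of the disposal-area comparison described above, the footprint of a borehole field can be approximated by allotting each hole one grid cell. The function and the numbers below are hypothetical assumptions for illustration, not values taken from the paper; they only show how the estimate scales with borehole spacing.

```python
def disposal_area_km2(n_boreholes: int, spacing_m: float) -> float:
    """Approximate footprint of a square grid of disposal boreholes:
    each hole is allotted one spacing_m x spacing_m cell, so the total
    area scales as n * d^2 (edge effects ignored)."""
    return n_boreholes * spacing_m ** 2 / 1e6  # m^2 -> km^2

# Hypothetical example: 100 boreholes at 200 m spacing occupy ~4 km^2;
# halving the spacing would cut the required surface area fourfold.
area = disposal_area_km2(100, 200.0)
```

Because the area grows with the square of the spacing, the inter-borehole distance analysis mentioned above directly drives the footprint estimate.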

  8. Applying a Consumer Behavior Lens to Salt Reduction Initiatives.

    Science.gov (United States)

    Regan, Áine; Kent, Monique Potvin; Raats, Monique M; McConnon, Áine; Wall, Patrick; Dubois, Lise

    2017-08-18

    Reformulation of food products to reduce salt content has been a central strategy for achieving population-level salt reduction. In this paper, we reflect on current reformulation strategies and consider how consumer behavior determines their ultimate success. We weigh the merits of a silent, 'health by stealth' approach to reformulation against a communications strategy that draws on labeling initiatives in tandem with reformulation efforts. We end by calling for a multi-actor approach that uses co-design and participatory tools to involve all stakeholders, including, and especially, consumers, in deciding how best to achieve population-level salt reduction.

  9. Deep Neural Architectures for Mapping Scalp to Intracranial EEG.

    Science.gov (United States)

    Antoniades, Andreas; Spyrou, Loukianos; Martin-Lopez, David; Valentin, Antonio; Alarcon, Gonzalo; Sanei, Saeid; Took, Clive Cheong

    2018-03-19

    Data are often plagued by noise, which encumbers machine learning of clinically useful biomarkers, and electroencephalogram (EEG) data are no exception. Intracranial EEG (iEEG) data enhance the training of deep learning models of the human brain, yet are often prohibitive to acquire due to the invasive recording process. A more convenient alternative is to record brain activity using scalp electrodes. However, the inherent noise associated with scalp EEG (sEEG) data often impedes the learning of neural models, leading to substandard performance. Here, an ensemble deep learning architecture for nonlinearly mapping scalp to iEEG data is proposed. The proposed architecture exploits the information from a limited number of joint scalp-intracranial recordings to establish a novel methodology for detecting epileptic discharges from the sEEG of a general population of subjects. Statistical tests and qualitative analysis revealed that the generated pseudo-intracranial data are highly correlated with the true intracranial data. This facilitated the detection of interictal epileptic discharges (IEDs) from the scalp recordings, where such waveforms are often not visible. As a real-world clinical application, these pseudo-iEEGs are then used by a convolutional neural network for the automated classification of IED and non-IED trials in the context of epilepsy analysis. Although the aim of this work was to circumvent the unavailability of iEEG and the limitations of sEEG, we achieved a classification accuracy of 68%, an increase of 6% over the previously proposed linear regression mapping.
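
The linear-regression mapping that the abstract cites as its baseline can be sketched with ordinary least squares. The channel counts and synthetic signals below are illustrative assumptions, not the study's data; the paper's contribution replaces this linear map with an ensemble of deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint recording: 16 scalp channels, 8 intracranial
# channels, 1000 time samples of synthetic data.
n_samples, n_scalp, n_ieeg = 1000, 16, 8
scalp = rng.standard_normal((n_samples, n_scalp))
mixing = rng.standard_normal((n_scalp, n_ieeg))
ieeg = scalp @ mixing + 0.1 * rng.standard_normal((n_samples, n_ieeg))

# Least-squares mapping W such that scalp @ W ~= ieeg (the linear baseline)
W, *_ = np.linalg.lstsq(scalp, ieeg, rcond=None)
pseudo_ieeg = scalp @ W

# Per-channel Pearson correlation between pseudo- and true iEEG
corrs = [np.corrcoef(pseudo_ieeg[:, c], ieeg[:, c])[0, 1] for c in range(n_ieeg)]
```

The same correlation check is the kind of statistic used above to argue that generated pseudo-intracranial data track the true recordings.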

  10. Novel real-time tumor-contouring method using deep learning to prevent mistracking in X-ray fluoroscopy.

    Science.gov (United States)

    Terunuma, Toshiyuki; Tokui, Aoi; Sakae, Takeji

    2018-03-01

    Robustness to obstacles is the most important requirement for accurate tumor tracking without fiducial markers. Some high-density structures, such as bone, are enhanced on X-ray fluoroscopic images and cause tumor mistracking. Tumor tracking should therefore be performed with controlled "importance recognition": the understanding that soft tissue is an important tracking feature and bone structure is unimportant. We propose a new real-time tumor-contouring method that uses deep learning with importance recognition control. The novelty of the proposed method is the combination of the devised random overlay method and supervised deep learning to induce the recognition of structures in tumor contouring as important or unimportant. This method can be used for tumor contouring because it uses deep learning to perform image segmentation. Our results from a simulated fluoroscopy model showed accurate tracking of a low-visibility tumor with an error of approximately 1 mm, even when enhanced bone structure acted as an obstacle. A high similarity of approximately 0.95 on the Jaccard index was observed between the segmented and ground-truth tumor regions. A short processing time of 25 ms was achieved. These results support the feasibility of robust real-time tumor contouring with fluoroscopy. Further studies using clinical fluoroscopy are highly anticipated.
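
The Jaccard similarity the authors report, and the spirit of their random-overlay idea, can be sketched as follows. The mask shapes and the blending scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard index (intersection over union) of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter / union) if union else 1.0

def random_overlay(image: np.ndarray, rng, strength: float = 0.5) -> np.ndarray:
    """Toy version of the random-overlay augmentation: blend a random
    pattern over the image so a segmentation model learns to treat
    high-intensity obstacles (e.g. bone) as unimportant."""
    overlay = rng.random(image.shape)
    return (1.0 - strength) * image + strength * overlay

rng = np.random.default_rng(1)
gt = np.zeros((32, 32), dtype=bool)
gt[8:24, 8:24] = True            # ground-truth tumor region
pred = np.zeros((32, 32), dtype=bool)
pred[9:24, 8:24] = True          # slightly shifted prediction
iou = jaccard(pred, gt)          # 240/256 = 0.9375, near the paper's ~0.95
aug = random_overlay(np.zeros((8, 8)), rng)
```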

  11. Strength Reduction of Coal Pillar after CO2 Sequestration in Abandoned Coal Mines

    Directory of Open Access Journals (Sweden)

    Qiuhao Du

    2017-02-01

    Full Text Available CO2 geosequestration is currently considered the most effective and economical method to dispose of anthropogenic greenhouse gases. A large number of coal mines in China will be abandoned, some of them located in deep formations, so CO2 storage in abandoned coal mines is a potential option for greenhouse gas disposal. However, CO2 trapped in deep coal pillars induces swelling of the coal matrix. Adsorption-induced swelling not only modifies the volume and permeability of the coal mass, but also changes basic physical and mechanical properties such as the elastic modulus and Poisson's ratio, ultimately reducing pillar strength. Based on the fractional swelling as a function of time under different loading pressure steps, the relationship between volumetric stress and the adsorption pressure increment is acquired. This paper then presents a theoretical model to analyze the reduction in pillar strength after CO2 adsorption. The model quantitatively describes the interrelation of volumetric strain, swelling stress, and mechanical strength reduction after gas adsorption under step-by-step pressure loading and a non-Langmuir isothermal model. The model has important implications for predicting the swelling stress and mechanical behavior of coal pillars during CO2 sequestration in abandoned coal mines.
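
As a hedged illustration of adsorption-induced swelling under stepwise pressure loading, the classical Langmuir-type strain curve below can stand in for the paper's (non-Langmuir) isotherm. The parameters eps_max and p_l are made-up values, not fitted coal data.

```python
def langmuir_swelling(p: float, eps_max: float = 0.02, p_l: float = 3.0) -> float:
    """Classical Langmuir-type adsorption swelling strain:
    eps(p) = eps_max * p / (p + P_L), with p and P_L in MPa."""
    return eps_max * p / (p + p_l)

# Step-by-step pressure loading (MPa) and the swelling strain at each step
pressures = [1.0, 2.0, 4.0, 8.0]
strains = [langmuir_swelling(p) for p in pressures]

# Strain increment contributed by each loading step
increments = [b - a for a, b in zip([0.0] + strains[:-1], strains)]
```

The strain saturates toward eps_max at high pressure, which is the qualitative behavior the fractional-swelling measurements above are meant to capture.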

  12. What can the food and drink industry do to help achieve the 5% free sugars goal?

    Science.gov (United States)

    Gibson, Sigrid; Ashwell, Margaret; Arthur, Jenny; Bagley, Lindsey; Lennox, Alison; Rogers, Peter J; Stanner, Sara

    2017-07-01

    To contribute evidence and make recommendations to assist in achieving free sugars reduction, with due consideration to the broader picture of weight management and dietary quality. An expert workshop in July 2016 addressed options outlined in the Public Health England report 'Sugar reduction: The evidence for action' that related directly to the food industry. Panel members contributed expertise in food technology, public health nutrition, marketing, communications, psychology and behaviour. Recommendations were directed towards reformulation, reduced portion sizes, labelling and consumer education. These were evaluated based on their feasibility, likely consumer acceptability, efficacy and cost. The panel agreed that the 5% target for energy from free sugars is unlikely to be achievable by the UK population in the near future, but a gradual reduction from the current average level of intake is feasible. Progress requires collaborations between government, food industry, non-government organisations, health professionals, educators and consumers. Reformulation should start with the main contributors of free sugars in the diet, prioritising those products high in free sugars and relatively low in micronutrients. There is most potential for replacing free sugars in beverages using high-potency sweeteners and possibly via gradual reduction in sweetness levels. However, reformulation alone, with its inherent practical difficulties, will not achieve the desired reduction in free sugars. Food manufacturers and the out-of-home sector can help consumers by providing smaller portions. Labelling of free sugars would extend choice and encourage reformulation; however, government needs to assist industry by addressing current analytical and regulatory problems. There are also opportunities for multi-agency collaboration to develop tools/communications based on the Eatwell Guide, to help consumers understand the principles of a varied, healthy, balanced diet. Multiple strategies

  13. PEDLA: predicting enhancers with a deep learning-based algorithmic framework.

    Science.gov (United States)

    Liu, Feng; Li, Hao; Ren, Chao; Bo, Xiaochen; Shu, Wenjie

    2016-06-22

    Transcriptional enhancers are non-coding segments of DNA that play a central role in the spatiotemporal regulation of gene expression programs. However, systematically and precisely predicting enhancers remains a major challenge. Although existing methods have achieved some success in enhancer prediction, they still suffer from many issues. We developed a deep learning-based algorithmic framework named PEDLA (https://github.com/wenjiegroup/PEDLA), which can directly learn an enhancer predictor from massively heterogeneous data and generalize in ways that are mostly consistent across various cell types/tissues. We first trained PEDLA with 1,114-dimensional heterogeneous features in H1 cells, and demonstrated that the PEDLA framework integrates diverse heterogeneous features and gives state-of-the-art performance relative to five existing methods for enhancer prediction. We further extended PEDLA to iteratively learn from 22 training cell types/tissues. Our results showed that PEDLA manifested superior performance consistency in both training and independent test sets. On average, PEDLA achieved 95.0% accuracy and a 96.8% geometric mean (GM) of sensitivity and specificity across 22 training cell types/tissues, as well as 95.7% accuracy and a 96.8% GM across 20 independent test cell types/tissues. Together, our work illustrates the power of harnessing state-of-the-art deep learning techniques to consistently identify regulatory elements at a genome-wide scale from massively heterogeneous data across diverse cell types/tissues.
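
The geometric mean (GM) of sensitivity and specificity reported above is a standard class-imbalance-robust summary of binary-classifier performance. A minimal sketch, with made-up confusion-matrix counts rather than PEDLA's results, looks like this:

```python
import math

def gm_sens_spec(tp: int, fn: int, tn: int, fp: int) -> float:
    """Geometric mean of sensitivity and specificity:
    GM = sqrt(TP/(TP+FN) * TN/(TN+FP))."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)

# Hypothetical counts: 95/100 enhancers and 98/100 non-enhancers correct
gm = gm_sens_spec(tp=95, fn=5, tn=98, fp=2)  # ~0.965
```

Unlike plain accuracy, the GM stays low when either class is predicted poorly, which is why it is reported alongside accuracy for genome-wide enhancer calls where negatives vastly outnumber positives.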

  14. Hydrochemistry, origin and residence time of deep groundwater in the Yuseong area

    International Nuclear Information System (INIS)

    Koh, Yong Kwon; Kim, Geon Young; Bae, Dae Seok; Park, Kyung Woo

    2005-01-01

    As a part of the radioactive waste disposal research program in Korea, geological, hydrogeological and hydrogeochemical investigations have been carried out in the Yuseong area (KAERI). The groundwater temperature is measured up to 24°C, and a thermal gradient of 0.26°C/100 m is obtained. The pH of groundwater in the upper section is about 7, while below 200 m depth it reaches an almost constant value of 9.9-10.3. The redox potential of groundwater varies with depth, with more negative values in deep groundwater; the redox potential of deep groundwater, a main factor in U solubility, was measured down to -150 mV. These high-pH, reducing conditions indicate that the maximum U concentration in groundwater would be limited by the equilibrium solubility of U minerals. The chemistry of shallow groundwater is of Ca-HCO3 or Ca-Na-HCO3 type, whereas the deep groundwater belongs to the typical Na-HCO3 type. The groundwater chemistry below 250 m is constant with depth, indicating that the extent of water-rock reaction is almost uniform, controlled by the residence time of the groundwater. The carbon isotope data (δ13C) of groundwater show a contribution of carbon from either microbial oxidation of organic matter or carbon dioxide from plant respiration. The measurement and interpretation of C-14 indicate that the residence time of deep borehole groundwater ranges from about 2,000 to 6,000 yr BP. The high δ34S(SO4) values of groundwater indicate that sulfate reduction might have occurred in the deep environment.
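
The residence times quoted above come from radiocarbon dating. A minimal sketch of the conventional age equation, using the Libby mean life and an illustrative activity rather than the study's measurements, is:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional radiocarbon timescale

def c14_age(a_measured: float, a_initial: float = 100.0) -> float:
    """Conventional radiocarbon age t = -8033 * ln(A/A0),
    with activities expressed in percent modern carbon (pmC)."""
    return -LIBBY_MEAN_LIFE * math.log(a_measured / a_initial)

# Hypothetical example: a deep groundwater sample at ~60 pmC dates to
# roughly 4,100 yr BP, within the 2,000-6,000 yr range reported above.
age = c14_age(60.0)
```

In practice the measured activity must first be corrected for dead carbon added by water-rock reaction, so the raw equation gives only an apparent age.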

  15. Predicting Long-Term Growth in Students' Mathematics Achievement: The Unique Contributions of Motivation and Cognitive Strategies

    Science.gov (United States)

    Murayama, Kou; Pekrun, Reinhard; Lichtenfeld, Stephanie; vom Hofe, Rudolf

    2013-01-01

    This research examined how motivation (perceived control, intrinsic motivation, and extrinsic motivation), cognitive learning strategies (deep and surface strategies), and intelligence jointly predict long-term growth in students' mathematics achievement over 5 years. Using longitudinal data from six annual waves (Grades 5 through 10;…

  16. Deep Mapping and Spatial Anthropology

    Directory of Open Access Journals (Sweden)

    Les Roberts

    2016-01-01

    Full Text Available This paper provides an introduction to the Humanities Special Issue on “Deep Mapping”. It sets out the rationale for the collection and explores the broad-ranging nature of perspectives and practices that fall within the “undisciplined” interdisciplinary domain of spatial humanities. Sketching a cross-current of ideas that have begun to coalesce around the concept of “deep mapping”, the paper argues that rather than attempting to outline a set of defining characteristics and “deep” cartographic features, a more instructive approach is to pay closer attention to the multivalent ways deep mapping is performatively put to work. Casting a critical and reflexive gaze over the developing discourse of deep mapping, it is argued that what deep mapping “is” cannot be reduced to the otherwise a-spatial and a-temporal fixity of the “deep map”. In this respect, as an undisciplined survey of this increasingly expansive field of study and practice, the paper explores the ways in which deep mapping can engage broader discussion around questions of spatial anthropology.

  17. Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection

    Science.gov (United States)

    Cabrera-Vives, Guillermo; Reyes, Ignacio; Förster, Francisco; Estévez, Pablo A.; Maureira, Juan-Carlos

    2017-02-01

    We introduce Deep-HiTS, a rotation-invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RFs). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately one-fifth. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope. We have made all our code and data available to the community for the sake of allowing further developments and comparisons at https://github.com/guille-c/Deep-HiTS. Deep-HiTS is licensed under the terms of the GNU General Public License v3.0.
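
Rotation invariance of the kind Deep-HiTS targets can be approximated at inference time by averaging a classifier's score over the four 90-degree rotations of the input. The scoring function below is a toy stand-in, not the paper's CNN.

```python
import numpy as np

def rotation_averaged_score(image: np.ndarray, score_fn) -> float:
    """Average a classifier's score over the four 90-degree rotations
    of the input, so the decision cannot depend on stamp orientation."""
    return float(np.mean([score_fn(np.rot90(image, k)) for k in range(4)]))

# Toy score: fraction of bright pixels. It is rotation-invariant by
# construction, so the averaged score equals the plain score exactly.
rng = np.random.default_rng(2)
img = rng.random((21, 21))  # a hypothetical 21x21 candidate stamp
score = lambda im: float((im > 0.5).mean())
avg = rotation_averaged_score(img, score)
```

Deep-HiTS itself bakes the invariance in during training by feeding rotated copies of each candidate stamp to the CNN; the averaging trick above is the test-time analogue of that augmentation.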

  18. Deep Vein Thrombosis

    African Journals Online (AJOL)

    OWNER

    Deep Vein Thrombosis: Risk Factors and Prevention in Surgical Patients. Deep vein thrombosis is a source of preventable morbidity and mortality in hospitalized surgical patients, particularly the elderly; prevention depends on the assessed risk level, not only intra-operatively but also in the post-operative period.

  19. Design features and cost reduction potential of JSFR

    International Nuclear Information System (INIS)

    Katoh, Atsushi; Hayafune, Hiroki; Kotake, Shoji

    2014-01-01

    Highlights: • The Japan Sodium-cooled Fast Reactor (JSFR) is designed to reduce plant commodities. • The cost reduction effectiveness of innovative designs is estimated by a bottom-up method. • JSFR achieves a 76% construction cost reduction compared with Monju through design effort. • Commercial JSFR construction cost could be less than that of a conventional LWR. - Abstract: To improve the economic competitiveness of the Japan Sodium-cooled Fast Reactor (JSFR), several innovative designs have been introduced, e.g., a reduced number of main cooling loops, a shorter pipe arrangement enabled by a thermally durable material (high-chromium ferrite steel), a compact reactor vessel (RV), and integration of the primary pump with the intermediate heat exchanger (IHX). Since these had not been introduced in past or existing reactors, a new approach to construction cost estimation was introduced to handle innovative technologies, for example, different kinds of materials and equipment fabrication processes. As a result of JSFR construction cost estimations based on the new method and the latest conceptual JSFR design, the economic goals of Generation IV nuclear energy systems can be achieved through the following cost reduction effects: commodity reduction by innovative design, economy of scale from increased power generation, learning effects, etc. The analysis shows quantitatively that the feasibility of the innovative designs is essential for the economic competitiveness of JSFR.

  20. Design features and cost reduction potential of JSFR

    Energy Technology Data Exchange (ETDEWEB)

    Katoh, Atsushi, E-mail: kato.atsushi@jaea.go.jp [Japan Atomic Energy Agency, 4002 Narita, Oarai-machi, Higashi-ibaraki-gun, Ibaraki-ken 311-1393 (Japan); Hayafune, Hiroki [Japan Atomic Energy Agency, 4002 Narita, Oarai-machi, Higashi-ibaraki-gun, Ibaraki-ken 311-1393 (Japan); Kotake, Shoji [The Japan Atomic Power Company, 1-1 Kanda-midoricyo, Chiyoda-ku, Tokyo-to 101-0053 (Japan)

    2014-12-15

    Highlights: • The Japan Sodium-cooled Fast Reactor (JSFR) is designed to reduce plant commodities. • The cost reduction effectiveness of innovative designs is estimated by a bottom-up method. • JSFR achieves a 76% construction cost reduction compared with Monju through design effort. • Commercial JSFR construction cost could be less than that of a conventional LWR. - Abstract: To improve the economic competitiveness of the Japan Sodium-cooled Fast Reactor (JSFR), several innovative designs have been introduced, e.g., a reduced number of main cooling loops, a shorter pipe arrangement enabled by a thermally durable material (high-chromium ferrite steel), a compact reactor vessel (RV), and integration of the primary pump with the intermediate heat exchanger (IHX). Since these had not been introduced in past or existing reactors, a new approach to construction cost estimation was introduced to handle innovative technologies, for example, different kinds of materials and equipment fabrication processes. As a result of JSFR construction cost estimations based on the new method and the latest conceptual JSFR design, the economic goals of Generation IV nuclear energy systems can be achieved through the following cost reduction effects: commodity reduction by innovative design, economy of scale from increased power generation, learning effects, etc. The analysis shows quantitatively that the feasibility of the innovative designs is essential for the economic competitiveness of JSFR.

  1. Deep molecular responses for treatment-free remission in chronic myeloid leukemia.

    Science.gov (United States)

    Dulucq, Stéphanie; Mahon, Francois-Xavier

    2016-09-01

    Several clinical trials have demonstrated that some patients with chronic myeloid leukemia in chronic phase (CML-CP) who achieve sustained deep molecular responses on tyrosine kinase inhibitor (TKI) therapy can safely suspend therapy and attempt treatment-free remission (TFR). Many TFR studies to date have enrolled imatinib-treated patients; however, the feasibility of TFR following nilotinib or dasatinib has also been demonstrated. In this review, we discuss available data from TFR trials and what these data reveal about the molecular biology of TFR. With an increasing number of ongoing TFR clinical trials, TFR may become an achievable goal for patients with CML-CP. © 2016 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  2. Abordagem profunda e abordagem superficial à aprendizagem: diferentes perspectivas do rendimento escolar Deep and surface approach to learning: different perspectives about academic achievement

    Directory of Open Access Journals (Sweden)

    Cristiano Mauro Assis Gomes

    2011-01-01

    Full Text Available This study investigates the relationship between the deep and surface approaches to learning in explaining academic achievement. Questions are outlined to verify the role of each approach in students' proficiency across different grades. Data from 684 students, from junior high school (6th year) to high school (12th year) at a private school in Belo Horizonte, Minas Gerais, were analyzed. A model was designed to compare the grades through structural equation modeling. The model showed a good fit (χ² = 427.12; df = 182; CFI = .95; RMSEA = .04) for the full sample and for each grade. The results show a distinct contribution of each approach to academic achievement in the different grades. Implications of the results for learning-approach theory are discussed.

  3. Greenhouse gas reduction benefits and costs of a large-scale transition to hydrogen in the USA

    International Nuclear Information System (INIS)

    Dougherty, William; Kartha, Sivan; Lazarus, Michael; Fencl, Amanda; Rajan, Chella; Bailie, Alison; Runkle, Benjamin

    2009-01-01

    Hydrogen is an energy carrier able to be produced from domestic, zero-carbon sources and consumed by zero-pollution devices. A transition to a hydrogen-based economy could therefore potentially respond to climate, air quality, and energy security concerns. In a hydrogen economy, both mobile and stationary energy needs could be met through the reaction of hydrogen (H2) with oxygen (O2). This study applies a full fuel cycle approach to quantify the energy, greenhouse gas emissions (GHGs), and cost implications associated with a large transition to hydrogen in the United States. It explores a national and four metropolitan-area transitions in two contrasting policy contexts: a 'business-as-usual' (BAU) context with continued reliance on fossil fuels, and a 'GHG-constrained' context with policies aimed at reducing greenhouse gas emissions. A transition in either policy context faces serious challenges, foremost among them from the highly inertial investments over the past century or so in technology and infrastructure based on petroleum, natural gas, and coal. A hydrogen transition in the USA could contribute to an effective response to climate change by helping to achieve deep reductions in GHG emissions by mid-century across all sectors of the economy; however, these reductions depend on the use of hydrogen to exploit clean, zero-carbon energy supply options. (author)

  4. Greenhouse gas reduction benefits and costs of a large-scale transition to hydrogen in the USA

    Energy Technology Data Exchange (ETDEWEB)

    Dougherty, William; Kartha, Sivan; Lazarus, Michael; Fencl, Amanda [Stockholm Environment Institute - US Center, 11 Curtis Avenue, Somerville, MA 02143 (United States); Rajan, Chella [Indian Institute of Technology Madras, I.I.T. Post Office, Chennai 600 036 (India); Bailie, Alison [The Pembina Institute, 200, 608 - 7th Street, S.W. Calgary, AB (Canada); Runkle, Benjamin [Department of Civil and Environmental Engineering, University of California, Berkeley, CA 94720 (United States)

    2009-01-15

    Hydrogen is an energy carrier able to be produced from domestic, zero-carbon sources and consumed by zero-pollution devices. A transition to a hydrogen-based economy could therefore potentially respond to climate, air quality, and energy security concerns. In a hydrogen economy, both mobile and stationary energy needs could be met through the reaction of hydrogen (H2) with oxygen (O2). This study applies a full fuel cycle approach to quantify the energy, greenhouse gas emissions (GHGs), and cost implications associated with a large transition to hydrogen in the United States. It explores a national and four metropolitan-area transitions in two contrasting policy contexts: a 'business-as-usual' (BAU) context with continued reliance on fossil fuels, and a 'GHG-constrained' context with policies aimed at reducing greenhouse gas emissions. A transition in either policy context faces serious challenges, foremost among them from the highly inertial investments over the past century or so in technology and infrastructure based on petroleum, natural gas, and coal. A hydrogen transition in the USA could contribute to an effective response to climate change by helping to achieve deep reductions in GHG emissions by mid-century across all sectors of the economy; however, these reductions depend on the use of hydrogen to exploit clean, zero-carbon energy supply options. (author)

  5. Spontaneous and Widespread Electricity Generation in Natural Deep-Sea Hydrothermal Fields.

    Science.gov (United States)

    Yamamoto, Masahiro; Nakamura, Ryuhei; Kasaya, Takafumi; Kumagai, Hidenori; Suzuki, Katsuhiko; Takai, Ken

    2017-05-15

    Deep-sea hydrothermal vents discharge abundant reductive energy into oxidative seawater. Herein, we demonstrated that in situ measurements of redox potentials on the surfaces of active hydrothermal mineral deposits were more negative than the surrounding seawater potential, driving electrical current generation. We also demonstrated that negative potentials in the surface of minerals were widespread in the hydrothermal fields, regardless of the proximity to hydrothermal fluid discharges. Lab experiments verified that the negative potential of the mineral surface was induced by a distant electron transfer from the hydrothermal fluid through the metallic and catalytic properties of minerals. These results indicate that electric current is spontaneously and widely generated in natural mineral deposits in deep-sea hydrothermal fields. Our discovery provides important insights into the microbial communities that are supported by extracellular electron transfer and the prebiotic chemical and metabolic evolution of the ocean hydrothermal systems. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network

    Science.gov (United States)

    2018-01-01

    Skin lesions are a serious global health problem. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and a coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracy of our frameworks: 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3. PMID:29439500

  7. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network

    Directory of Open Access Journals (Sweden)

    Yuexiang Li

    2018-02-01

    Full Text Available Skin lesions are a serious global health problem. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and a coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracy of our frameworks: 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3.

  8. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network.

    Science.gov (United States)

    Li, Yuexiang; Shen, Linlin

    2018-02-11

    Skin lesions are a severe disease globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful to increase the accuracy and efficiency of pathologists. In this paper, we proposed two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating the distance heat-map. A straight-forward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks, i.e., 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3 were achieved.
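The distance heat-map that the LICU uses to refine the coarse classification can be illustrated with a minimal numpy sketch. This is a brute-force illustration of the idea only; the function name and the exact distance definition are assumptions, not the authors' implementation:

```python
import numpy as np

def distance_heatmap(mask):
    """Brute-force distance map: for each pixel, the Euclidean distance
    to the nearest lesion pixel (0 on the lesion itself)."""
    lesion = np.argwhere(mask > 0)          # coordinates of lesion pixels
    h, w = mask.shape
    heat = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            d2 = ((lesion - np.array([i, j])) ** 2).sum(axis=1)
            heat[i, j] = np.sqrt(d2.min())
    return heat

# Toy 5x5 segmentation mask with a single lesion pixel at the centre.
mask = np.zeros((5, 5))
mask[2, 2] = 1
heat = distance_heatmap(mask)
print(heat[2, 2], round(heat[0, 0], 3))  # 0.0 at the lesion, 2.828 at a corner
```

In practice a linear-time distance transform (e.g. `scipy.ndimage.distance_transform_edt`) would replace the double loop.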

  9. pDeep: Predicting MS/MS Spectra of Peptides with Deep Learning.

    Science.gov (United States)

    Zhou, Xie-Xuan; Zeng, Wen-Feng; Chi, Hao; Luo, Chunjie; Liu, Chao; Zhan, Jianfeng; He, Si-Min; Zhang, Zhifei

    2017-12-05

    In tandem mass spectrometry (MS/MS)-based proteomics, search engines rely on comparison between an experimental MS/MS spectrum and the theoretical spectra of the candidate peptides. Hence, accurate prediction of the theoretical spectra of peptides appears to be particularly important. Here, we present pDeep, a deep neural network-based model for the spectrum prediction of peptides. Using the bidirectional long short-term memory (BiLSTM), pDeep can predict higher-energy collisional dissociation, electron-transfer dissociation, and electron-transfer and higher-energy collision dissociation MS/MS spectra of peptides with >0.9 median Pearson correlation coefficients. Further, we showed that the intermediate layer of the neural network could reveal physicochemical properties of amino acids, for example, the similarities of fragmentation behaviors between amino acids. We also showed the potential of pDeep to distinguish extremely similar peptides (peptides that contain isobaric amino acids, for example, GG = N, AG = Q, or even I = L), which were very difficult to distinguish using traditional search engines.
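The >0.9 median Pearson correlation reported above refers to the agreement between predicted and measured fragment-intensity vectors. A minimal sketch of the metric (the intensity vectors here are illustrative, not pDeep's actual predictions):

```python
import numpy as np

def spectrum_pearson(pred, obs):
    """Pearson correlation coefficient between a predicted and an
    observed fragment-intensity vector (the metric pDeep reports)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    pc = pred - pred.mean()                 # centre both vectors
    oc = obs - obs.mean()
    return (pc @ oc) / np.sqrt((pc @ pc) * (oc @ oc))

predicted = [0.1, 0.8, 0.3, 0.0, 0.5]       # hypothetical fragment intensities
observed  = [0.12, 0.75, 0.35, 0.02, 0.48]
print(round(spectrum_pearson(predicted, observed), 3))  # 0.997
```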

  10. Radiative cooling to deep sub-freezing temperatures through a 24-h day-night cycle

    Science.gov (United States)

    Chen, Zhen; Zhu, Linxiao; Raman, Aaswath; Fan, Shanhui

    2016-12-01

    Radiative cooling technology utilizes the atmospheric transparency window (8-13 μm) to passively dissipate heat from Earth into outer space (3 K). This technology has attracted broad interest from both fundamental science and real-world applications, ranging from passive building cooling to renewable energy harvesting and passive refrigeration in arid regions. However, the temperature reduction experimentally demonstrated thus far has been relatively modest. Here we theoretically show that an ultra-large temperature reduction of as much as 60 °C from ambient is achievable by using a selective thermal emitter and by eliminating parasitic thermal load, and experimentally demonstrate a temperature reduction that far exceeds previous works. In a populous area at sea level, we have achieved an average temperature reduction of 37 °C from the ambient air temperature through a 24-h day-night cycle, with a maximal reduction of 42 °C that occurs when the experimental set-up enclosing the emitter is exposed to peak solar irradiance.
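The role of the parasitic thermal load can be sketched with a grey-body energy balance. This is a deliberate simplification: a real analysis uses the emitter's spectral selectivity and the angular atmospheric emissivity, both of which this toy model ignores:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(t_emitter, t_ambient, eps=0.95, h_parasitic=0.0):
    """Idealised grey-body net cooling power per unit area (W m^-2):
    radiation emitted minus radiation absorbed from the surroundings,
    minus a parasitic conductive/convective load h*(T_amb - T_emit)."""
    p_rad = eps * SIGMA * t_emitter ** 4    # power radiated by the emitter
    p_sky = eps * SIGMA * t_ambient ** 4    # crude grey-sky back-radiation
    p_parasitic = h_parasitic * (t_ambient - t_emitter)
    return p_rad - p_sky - p_parasitic

print(net_cooling_power(300.0, 300.0))        # 0.0: no net cooling at ambient
print(net_cooling_power(260.0, 300.0) < 0.0)  # True: grey body cannot hold sub-ambient
```

In this grey-body model the steady state is the ambient temperature, which is exactly why a spectrally selective emitter radiating through the 8-13 μm window is needed to reach deep sub-ambient temperatures.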

  11. Radiative cooling to deep sub-freezing temperatures through a 24-h day-night cycle.

    Science.gov (United States)

    Chen, Zhen; Zhu, Linxiao; Raman, Aaswath; Fan, Shanhui

    2016-12-13

    Radiative cooling technology utilizes the atmospheric transparency window (8-13 μm) to passively dissipate heat from Earth into outer space (3 K). This technology has attracted broad interest from both fundamental science and real-world applications, ranging from passive building cooling to renewable energy harvesting and passive refrigeration in arid regions. However, the temperature reduction experimentally demonstrated thus far has been relatively modest. Here we theoretically show that an ultra-large temperature reduction of as much as 60 °C from ambient is achievable by using a selective thermal emitter and by eliminating parasitic thermal load, and experimentally demonstrate a temperature reduction that far exceeds previous works. In a populous area at sea level, we have achieved an average temperature reduction of 37 °C from the ambient air temperature through a 24-h day-night cycle, with a maximal reduction of 42 °C that occurs when the experimental set-up enclosing the emitter is exposed to peak solar irradiance.

  12. Bubble-induced skin-friction drag reduction and the abrupt transition to air-layer drag reduction

    Science.gov (United States)

    Elbing, Brian R.; Winkel, Eric S.; Lay, Keary A.; Ceccio, Steven L.; Dowling, David R.; Perlin, Marc

    To investigate the phenomena of skin-friction drag reduction in a turbulent boundary layer (TBL) at large scales and high Reynolds numbers, a set of experiments has been conducted at the US Navy's William B. Morgan Large Cavitation Channel (LCC). Drag reduction was achieved by injecting gas (air) from a line source through the wall of a nearly zero-pressure-gradient TBL that formed on a flat-plate test model that was either hydraulically smooth or fully rough. Two distinct drag-reduction phenomena were investigated: bubble drag reduction (BDR) and air-layer drag reduction (ALDR). The streamwise distribution of skin-friction drag reduction was monitored with six skin-friction balances at downstream-distance-based Reynolds numbers to 220 million and at test speeds to 20.0 m s⁻¹. These results indicated that there are three distinct regions associated with drag reduction with air injection: Region I, BDR; Region II, transition between BDR and ALDR; and Region III, ALDR. In addition, once ALDR was established, friction drag reduction in excess of 80% was observed over the entire smooth model for speeds to 15.3 m s⁻¹ and with the surface fully roughened (though approximately 50% greater volumetric air flux was required); and ALDR was sensitive to the inflow conditions. The sensitivity to the inflow conditions can be mitigated by employing a small faired step (10 mm height in the experiment) that helps to create a fixed separation line.
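The downstream-distance-based Reynolds number quoted above follows from Re_x = U·x/ν. A quick sketch of that arithmetic (the 11 m distance and the water viscosity value are assumptions for illustration, not quantities taken from the paper):

```python
def reynolds_x(speed, distance, nu=1.0e-6):
    """Downstream-distance-based Reynolds number Re_x = U * x / nu.
    nu ~ 1e-6 m^2/s is an assumed kinematic viscosity of water at ~20 degC."""
    return speed * distance / nu

# At the 20.0 m/s top speed, Re_x reaches roughly 220 million about 11 m
# downstream of the leading edge.
print(f"{reynolds_x(20.0, 11.0):.2e}")  # 2.20e+08
```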

  13. Influence of aging on the retention of elemental radioiodine by deep bed carbon filters under accident conditions

    International Nuclear Information System (INIS)

    Deuber, H.

    1985-01-01

    No significant difference was found in the retention of I-131, loaded as I₂, by various impregnated activated carbons that had been aged in the containment exhaust air of a pressurized water reactor over a period of 12 months. In all cases, the I-131 passing through deep beds of carbon was in a nonelemental form. It was concluded that a minimum retention of 99.99%, as required by new guidelines for certain accident filters, can be achieved equally well with various carbons in deep beds.
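A 99.99% retention requirement is equivalent to a decontamination factor (DF) of 10⁴, since only 0.01% of the activity penetrates the bed. A small sketch of that conversion (the function names are illustrative):

```python
def decontamination_factor(retention_pct):
    """DF = (activity entering) / (activity leaving) for a given retention."""
    penetration = 1.0 - retention_pct / 100.0   # fraction passing the bed
    return 1.0 / penetration

def retention_from_df(df):
    """Inverse conversion: retention percentage for a given DF."""
    return 100.0 * (1.0 - 1.0 / df)

print(round(decontamination_factor(99.99)))     # 10000
print(round(retention_from_df(10_000), 2))      # 99.99
```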

  14. Deep ensemble learning of sparse regression models for brain disease diagnosis.

    Science.gov (United States)

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2017-04-01

    Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data but with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have been making great successes by outperforming the state-of-the-art performances in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with different values of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we thus call 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared with the previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
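The two-stage idea (an ensemble of differently regularized regressions whose outputs feed a second-stage learner) can be sketched as follows, with ridge regression standing in for the paper's sparse regression (lasso has no closed form) and a linear combiner standing in for the convolutional network:

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression weights (a stand-in for the paper's
    sparse regression models)."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

# Toy data: 100 samples, 10 features, sparse true weights plus noise.
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=100)

# Stage 1: an ensemble of models, one per regularization strength.
alphas = [0.01, 0.1, 1.0, 10.0]
stage1 = np.column_stack([X @ ridge_fit(X, y, a) for a in alphas])

# Stage 2: combine the ensemble's predictions (a linear combiner here,
# where the paper uses a convolutional network).
w2 = np.linalg.lstsq(stage1, y, rcond=None)[0]
final = stage1 @ w2
print("ensemble MSE:", round(float(np.mean((final - y) ** 2)), 4))
```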

  15. Deep inelastic structure functions in the chiral bag model

    International Nuclear Information System (INIS)

    Sanjose, V.; Vento, V.; Centro Mixto CSIC/Valencia Univ., Valencia

    1989-01-01

    We calculate the structure functions for deep inelastic scattering on baryons in the cavity approximation to the chiral bag model. The behavior of these structure functions is analyzed in the Bjorken limit. We conclude that scaling is satisfied, but not Regge behavior. A trivial extension as a parton model can be achieved by introducing the structure function for the pion in a convolution picture. In this extended version of the model not only scaling but also Regge behavior is satisfied. Conclusions are drawn from the comparison of our results with experimental data. (orig.)
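The convolution picture can be written schematically as follows (a generic pion-cloud convolution; Z is the bare-bag normalization and f_π(y) the probability of finding a pion carrying light-cone momentum fraction y — the precise form used by the authors may differ):

```latex
F_2^{\mathrm{phys}}(x) \;=\; Z\,F_2^{\mathrm{bag}}(x)
  \;+\; \int_x^1 \frac{dy}{y}\, f_\pi(y)\, F_2^{\pi}\!\left(\frac{x}{y}\right)
```

The second term is what restores Regge behavior at small x in the extended version of the model.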

  16. Deep inelastic structure functions in the chiral bag model

    Energy Technology Data Exchange (ETDEWEB)

    Sanjose, V. (Valencia Univ. (Spain). Dept. de Didactica de las Ciencias Experimentales); Vento, V. (Valencia Univ. (Spain). Dept. de Fisica Teorica; Centro Mixto CSIC/Valencia Univ., Valencia (Spain). Inst. de Fisica Corpuscular)

    1989-10-02

    We calculate the structure functions for deep inelastic scattering on baryons in the cavity approximation to the chiral bag model. The behavior of these structure functions is analyzed in the Bjorken limit. We conclude that scaling is satisfied, but not Regge behavior. A trivial extension as a parton model can be achieved by introducing the structure function for the pion in a convolution picture. In this extended version of the model not only scaling but also Regge behavior is satisfied. Conclusions are drawn from the comparison of our results with experimental data. (orig.).

  17. Gearbox fault diagnosis based on deep random forest fusion of acoustic and vibratory signals

    Science.gov (United States)

    Li, Chuan; Sanchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego; Vásquez, Rafael E.

    2016-08-01

    Fault diagnosis is an effective tool to guarantee safe operations in gearboxes. Acoustic and vibratory measurements in such mechanical devices are all sensitive to the existence of faults. This work addresses the use of a deep random forest fusion (DRFF) technique to improve fault diagnosis performance for gearboxes by using measurements from an acoustic emission (AE) sensor and an accelerometer that monitor the gearbox condition simultaneously. The statistical parameters of the wavelet packet transform (WPT) are first produced from the AE signal and the vibratory signal, respectively. Two deep Boltzmann machines (DBMs) are then developed for deep representations of the WPT statistical parameters. A random forest is finally suggested to fuse the outputs of the two DBMs as the integrated DRFF model. The proposed DRFF technique is evaluated using gearbox fault diagnosis experiments under different operational conditions, and achieves a classification rate of 97.68% across 11 different condition patterns. Compared with peer algorithms, the addressed method exhibits the best performance. The results indicate that the deep learning fusion of acoustic and vibratory signals may improve fault diagnosis capabilities for gearboxes.
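The fusion input can be sketched as statistical features computed per signal and concatenated. This is a simplified stand-in: the statistics are taken over raw segments rather than WPT coefficient bands, and the DBM representation stage is omitted entirely:

```python
import numpy as np

def stat_features(signal):
    """Statistical parameters of the kind computed from signal bands:
    mean, standard deviation, skewness, kurtosis and RMS."""
    x = np.asarray(signal, float)
    m, s = x.mean(), x.std()
    skew = ((x - m) ** 3).mean() / s ** 3
    kurt = ((x - m) ** 4).mean() / s ** 4
    rms = np.sqrt((x ** 2).mean())
    return np.array([m, s, skew, kurt, rms])

rng = np.random.default_rng(1)
vib = rng.normal(size=1024)        # stand-in vibration segment
ae = rng.normal(size=1024) ** 2    # stand-in AE envelope segment

# One fused feature vector per segment pair; a classifier such as a
# random forest would be trained on rows like this one.
fused = np.concatenate([stat_features(vib), stat_features(ae)])
print(fused.shape)  # (10,)
```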

  18. Achieving a competitive advantage in managed care.

    Science.gov (United States)

    Stahl, D A

    1998-02-01

    When building a competitive advantage to thrive in the managed care arena, subacute care providers are urged to be revolutionary rather than reactionary, proactive rather than passive, optimistic rather than pessimistic and growth-oriented rather than cost-reduction oriented. Weaknesses must be addressed aggressively. To achieve a competitive edge, assess the facility's strengths, understand the marketplace and comprehend key payment methods.

  19. Building Extraction in Very High Resolution Remote Sensing Imagery Using Deep Learning and Guided Filters

    Directory of Open Access Journals (Sweden)

    Yongyang Xu

    2018-01-01

    Full Text Available Very high resolution (VHR) remote sensing imagery has been used for land cover classification, and the field is moving from land-use classification toward pixel-level semantic segmentation. Inspired by the recent success of deep learning and of filtering methods in computer vision, this work provides a segmentation model that builds an image segmentation neural network on deep residual networks and uses a guided filter to extract buildings from remote sensing imagery. Our method includes the following steps: first, the VHR remote sensing imagery is preprocessed and some hand-crafted features are calculated. Second, a designed deep network architecture is trained with the urban district remote sensing image to extract buildings at the pixel level. Third, a guided filter is employed to optimize the classification map produced by deep learning; at the same time, some salt-and-pepper noise is removed. Experimental results based on the Vaihingen and Potsdam datasets demonstrate that our method, which benefits from neural networks and guided filtering, achieves a higher overall accuracy when compared with other machine learning and deep learning methods. The proposed method shows outstanding performance in terms of building extraction from diversified objects in the urban district.
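The guided-filter refinement step can be sketched in numpy. This is a single-channel guided filter in the usual local-linear-model form; the radius and eps values are illustrative, not the paper's settings:

```python
import numpy as np

def box(img, r):
    """Mean filter over a (2r+1)x(2r+1) window, edge-padded, via integral image."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))         # zero row/col so window sums index cleanly
    n = 2 * r + 1
    h, w = img.shape
    return (c[n:n + h, n:n + w] - c[:h, n:n + w]
            - c[n:n + h, :w] + c[:h, :w]) / n ** 2

def guided_filter(guide, src, r=2, eps=1e-3):
    """Single-channel guided filter: smooth src while following edges in guide."""
    mean_i, mean_p = box(guide, r), box(src, r)
    var_i = box(guide * guide, r) - mean_i ** 2
    cov_ip = box(guide * src, r) - mean_i * mean_p
    a = cov_ip / (var_i + eps)              # local linear coefficients
    b = mean_p - a * mean_i
    return box(a, r) * guide + box(b, r)

rng = np.random.default_rng(0)
guide = rng.random((16, 16))                # stand-in for a grayscale image patch
prob = np.full((16, 16), 0.5)               # a flat classification map
out = guided_filter(guide, prob)
print(out.shape)                            # (16, 16); a constant map stays ~0.5
```

In the pipeline described above, `guide` would be the VHR image band and `src` the network's building-probability map.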

  20. Optimal Risk Reduction in the Railway Industry by Using Dynamic Programming

    OpenAIRE

    Michael Todinov; Eberechi Weli

    2013-01-01

    The paper suggests for the first time the use of dynamic programming techniques for optimal risk reduction in the railway industry. It is shown that by using the concept ‘amount of removed risk by a risk reduction option’, the problem related to optimal allocation of a fixed budget to achieve a maximum risk reduction in the railway industry can be reduced to an optimisation problem from dynamic programming. For n risk reduction options and size of the available risk reduction budget B (expres...
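With integer costs, the budget-allocation problem described above reduces to a 0/1 knapsack solvable by dynamic programming. A minimal sketch (the option costs and removed-risk values below are hypothetical, not taken from the paper):

```python
def max_risk_removed(options, budget):
    """0/1 knapsack over risk-reduction options: each option has a cost
    and an 'amount of removed risk'; maximise removed risk within budget.
    Costs are assumed integer so they can index the DP table."""
    best = [0.0] * (budget + 1)
    for cost, removed in options:
        # Iterate budgets downwards so each option is used at most once.
        for b in range(budget, cost - 1, -1):
            best[b] = max(best[b], best[b - cost] + removed)
    return best[budget]

# Hypothetical options: (cost, amount of removed risk), in consistent units.
options = [(30, 120.0), (50, 150.0), (40, 160.0), (20, 70.0)]
print(max_risk_removed(options, 90))  # 350.0: options costing 30 + 40 + 20
```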