WorldWideScience

Sample records for achieving deep reductions

  1. Fundamental research on sintering technology with super deep bed achieving energy saving and reduction of emissions

    International Nuclear Information System (INIS)

    Hongliang Han; Shengli Wu; Gensheng Feng; Luowen Ma; Weizhong Jiang

    2012-01-01

Within the general framework of energy saving, environmental protection and the concept of a circular economy, fundamental research was carried out on super-deep-bed sintering technology for energy saving and emission reduction. First, the characteristics of the process and of exhaust emissions in super-deep-bed sintering were established by studying the influence of different bed depths on the sintering process. Then, the bed permeability and the fuel combustion were considered: their influence on sinter yield and quality, and their potential for energy saving and emission reduction, were studied. The results show that improving the bed permeability and the fuel combustibility, separately and simultaneously, improves the sintering technical indices and yields energy savings and emission reductions under super-deep-bed conditions. At a 1000 mm bed depth, with appropriate countermeasures, it is possible to decrease the solid fuel consumption and the emissions of CO2, SO2 and NOx by 10.08%, 11.20%, 22.62% and 25.86%, respectively; at a 700 mm bed depth, the solid fuel consumption and the emissions of CO2, SO2 and NOx can be reduced by 20.71%, 22.01%, 58.86% and 13.13%, respectively. This research provides the theoretical and technical basis for the new super-deep-bed sintering technology, achieving energy saving and emission reduction. (authors)

  2. Achieving deep reductions in US transport greenhouse gas emissions: Scenario analysis and policy implications

    International Nuclear Information System (INIS)

    McCollum, David; Yang, Christopher

    2009-01-01

This paper investigates the potential for making deep cuts in US transportation greenhouse gas (GHG) emissions in the long term (50-80% below 1990 levels by 2050). Scenarios are used to envision how such a significant decarbonization might be achieved through the application of advanced vehicle technologies and fuels, and various options for behavioral change. A Kaya framework that decomposes GHG emissions into the product of four major drivers is used to analyze emissions and mitigation options. In contrast to most previous studies, a relatively simple, easily adaptable modeling methodology is used, which can incorporate insights from other modeling studies and organize them in a way that is easy for policymakers to understand. Also, a wider range of transportation subsectors is considered here: light- and heavy-duty vehicles, aviation, rail, marine, agriculture, off-road, and construction. This analysis investigates scenarios with multiple options (increased efficiency, lower-carbon fuels, and travel demand management) across the various subsectors and confirms the notion that there are no 'silver bullet' strategies for making deep cuts in transport GHGs. If substantial emission reductions are to be made, considerable action is needed on all fronts, and no subsector can be ignored. Light-duty vehicles offer the greatest potential for emission reductions; while deep reductions in other subsectors are also possible, there are more limitations in the types of fuels and propulsion systems that can be used. In all cases travel demand management strategies are critical; deep emission cuts will likely not be possible without slowing growth in travel demand across all modes.
Even though these scenarios represent only a small subset of the potential futures in which deep reductions might be achieved, they provide a sense of the magnitude of changes required in our transportation system and the need for early and aggressive action if long-term targets are to be met.
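The four-driver Kaya decomposition the paper uses can be sketched numerically. The function and all factor values below are our own illustrative round numbers, not figures from the study:

```python
# Illustrative Kaya-style decomposition of transport GHG emissions into four
# drivers: population, travel demand, energy intensity and carbon intensity.
# All numbers are hypothetical, chosen only to show the arithmetic.
def transport_ghg(population, travel_per_capita, energy_per_km, carbon_per_mj):
    """GHG = P x (vkm/person) x (MJ/vkm) x (gCO2e/MJ)."""
    return population * travel_per_capita * energy_per_km * carbon_per_mj

baseline = transport_ghg(300e6, 15000, 3.0, 70.0)   # stylized present day
scenario = transport_ghg(400e6, 12000, 1.0, 20.0)   # efficiency + low-carbon
                                                    # fuels + demand management
cut = 1.0 - scenario / baseline
print(f"reduction despite population growth: {cut:.0%}")
```

Even with a growing population, combining modest demand reduction with deep efficiency and fuel-carbon improvements yields roughly a 90% cut in this toy calculation, which is the multiplicative logic behind the "no silver bullet" conclusion.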

  3. Modeling transitions in the California light-duty vehicles sector to achieve deep reductions in transportation greenhouse gas emissions

    International Nuclear Information System (INIS)

    Leighty, Wayne; Ogden, Joan M.; Yang, Christopher

    2012-01-01

    California’s target for reducing economy-wide greenhouse gas (GHG) emissions is 80% below 1990 levels by 2050. We develop transition scenarios for meeting this goal in California’s transportation sector, with focus on light-duty vehicles (LDVs). We explore four questions: (1) what options are available to reduce transportation sector GHG emissions 80% below 1990 levels by 2050; (2) how rapidly would transitions in LDV markets, fuels, and travel behaviors need to occur over the next 40 years; (3) how do intermediate policy goals relate to different transition pathways; (4) how would rates of technological change and market adoption between 2010 and 2050 impact cumulative GHG emissions? We develop four LDV transition scenarios to meet the 80in50 target through a combination of travel demand reduction, fuel economy improvements, and low-carbon fuel supply, subject to restrictions on trajectories of technological change, potential market adoption of new vehicles and fuels, and resource availability. These scenarios exhibit several common themes: electrification of LDVs, rapid improvements in vehicle efficiency, and future fuels with less than half the carbon intensity of current gasoline and diesel. Availability of low-carbon biofuels and the level of travel demand reduction are “swing factors” that influence the degree of LDV electrification required. - Highlights: ► We model change in California LDVs for deep reduction in transportation GHG emissions. ► Reduced travel demand, improved fuel economy, and low-carbon fuels are all needed. ► Transitions must begin soon and occur quickly in order to achieve the 80in50 goal. ► Low-C biofuel supply and travel demand influence the need for rapid LDV electrification. ► Cumulative GHG emissions from LDVs can differ between strategies by up to 40%.

  4. Cost reduction in deep water production systems

    International Nuclear Information System (INIS)

    Beltrao, R.L.C.

    1995-01-01

This paper describes a cost reduction program that Petrobras has conceived for its deep water fields. Beginning with the floating production unit, a new FPSO concept was established in which a simple system, designed for long-term testing, can be upgraded on location to become the definitive production unit. Regarding the subsea system, the following projects are considered. (1) Subsea manifolds: two 8-well diverless manifolds designed for 1,000 meters are presently under construction, and after a value analysis a new design was achieved for the next generation. Both projects will be discussed and a cost evaluation will also be provided. (2) Subsea pipelines: Petrobras has just started a large program aiming to reduce the cost of this important item. There are several projects, such as hybrid (flexible and rigid) pipes of large diameter for deep water, alternative laying methods, rigid risers on FPSs, new materials, etc. The authors intend to provide an overview of each project

  5. Deep sedation during pneumatic reduction of intussusception.

    Science.gov (United States)

    Ilivitzki, Anat; Shtark, Luda Glozman; Arish, Karin; Engel, Ahuva

    2012-05-01

Pneumatic reduction of intussusception under fluoroscopic guidance is a routine procedure. An unsedated child may resist the procedure, which may lengthen its duration and increase the radiation dose. We use deep sedation during the procedure to overcome these difficulties. The purpose of this study was to summarize our experience with deep sedation during fluoroscopic reduction of intussusception and to assess the added value and complication rate of deep sedation. All children with intussusception who underwent pneumatic reduction in our hospital between January 2004 and June 2011 were included in this retrospective study. Anesthetists sedated the children using propofol. The fluoroscopic studies, ultrasound (US) studies and the children's charts were reviewed. One hundred thirty-one attempted reductions were performed in 119 children, of which 121 (92%) were successful and 10 (8%) failed. Two perforations (1.5%) occurred during attempted reduction. Average fluoroscopy time was 1.5 minutes. No sedation-related complications were recorded. Deep sedation with propofol did not add any complications to the pneumatic reduction. The fluoroscopy time was short. The success rate of reduction was high, raising the possibility that sedation is beneficial, possibly through smooth muscle relaxation.

  6. The DEEP-South: Scheduling and Data Reduction Software System

    Science.gov (United States)

    Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team

    2015-08-01

The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope, located at CTIO in Chile. While the primary objective of the DEEP-South is the physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem called "The DEEP-South Scheduling and Data reduction System" (the DEEP-South SDS) is currently being designed and implemented for observation planning, data reduction and the analysis of huge amounts of data with minimal human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using the Database Management System (DBMS). The LDR is designed to detect moving objects in CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on the analyses made at the LDR and the MDR, the DSS schedules follow-up observations to be conducted at the other KMTNet stations. By the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also plan to improve the SDS in 2016 to accomplish a finely tuned observation strategy and more efficient data reduction.

  7. Are Reductions in Population Sodium Intake Achievable?

    Directory of Open Access Journals (Sweden)

    Jessica L. Levings

    2014-10-01

The vast majority of Americans consume too much sodium, primarily from packaged and restaurant foods. The evidence linking sodium intake with direct health outcomes indicates a positive relationship between higher levels of sodium intake and cardiovascular disease risk, consistent with the relationship between sodium intake and blood pressure. Despite communication and educational efforts focused on lowering sodium intake over the last three decades, data suggest average US sodium intake has remained remarkably elevated, leading some to argue that current sodium guidelines are unattainable. The IOM in 2010 recommended gradual reductions in the sodium content of packaged and restaurant foods as a primary strategy to reduce US sodium intake, and research since that time suggests gradual, downward shifts in mean population sodium intake are achievable and can move the population toward current sodium intake guidelines. The current paper reviews recent evidence indicating that: (1) significant reductions in mean population sodium intake can be achieved with gradual sodium reduction in the food supply, (2) gradual sodium reduction in certain cases can be achieved without a noticeable change in taste or consumption of specific products, and (3) lowering mean population sodium intake can move us toward meeting the current individual guidelines for sodium intake.

  8. Criteria for achieving actinide reduction goals

    International Nuclear Information System (INIS)

    Liljenzin, J.O.

    1996-01-01

In order to discuss various criteria for achieving actinide reduction goals, the goals themselves must first be defined. In this context the term 'actinides' is taken to mean plutonium and the so-called 'minor actinides' (neptunium, americium and curium), but also protactinium. Some possible goals, and the reasons behind them, are presented. On the basis of the suggested goals it is possible to analyze various types of devices for the production of nuclear energy from uranium or thorium, such as thermal or fast reactors and accelerator-driven systems, together with their associated fuel cycles, with regard to their ability to reach the actinide reduction goals. The relation between the necessary single-cycle burn-up values, fuel cycle processing losses and losses to waste is defined and discussed. Finally, an attempt is made to rank the possible systems in order of performance with regard to their potential to reduce the actinide inventory and the actinide losses to wastes. (author). 3 refs, 3 figs, 2 tabs

  9. Deep Belief Networks for dimensionality reduction

    NARCIS (Netherlands)

    Noulas, A.K.; Kröse, B.J.A.

    2008-01-01

    Deep Belief Networks are probabilistic generative models which are composed by multiple layers of latent stochastic variables. The top two layers have symmetric undirected connections, while the lower layers receive directed top-down connections from the layer above. The current state-of-the-art
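The greedy layer-wise construction described above can be illustrated with a deliberately tiny sketch: two restricted Boltzmann machines trained with one-step contrastive divergence (CD-1), each feeding its hidden activities to the next layer. This is a toy in pure Python on made-up binary data, not the authors' implementation (biases are left untrained for brevity):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid overflow in math.exp for extreme activations.
    if x < -60.0:
        return 0.0
    if x > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    """Toy binary restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = [[random.gauss(0.0, 0.1) for _ in range(n_hid)]
                  for _ in range(n_vis)]
        self.b_vis = [0.0] * n_vis
        self.b_hid = [0.0] * n_hid
        self.lr = lr

    def hid_probs(self, v):
        return [sigmoid(self.b_hid[j] +
                        sum(v[i] * self.W[i][j] for i in range(len(v))))
                for j in range(len(self.b_hid))]

    def vis_probs(self, h):
        return [sigmoid(self.b_vis[i] +
                        sum(h[j] * self.W[i][j] for j in range(len(h))))
                for i in range(len(self.b_vis))]

    def cd1(self, v0):
        # One contrastive-divergence step: up, sample, down, up again;
        # only the weights are updated (biases kept fixed for brevity).
        h0 = self.hid_probs(v0)
        h0s = [1.0 if random.random() < p else 0.0 for p in h0]
        v1 = self.vis_probs(h0s)
        h1 = self.hid_probs(v1)
        for i in range(len(v0)):
            for j in range(len(h0)):
                self.W[i][j] += self.lr * (v0[i] * h0[j] - v1[i] * h1[j])

# Greedy layer-wise stacking: each trained layer's hidden activities become
# the "data" for the next layer, compressing 8-dim inputs to a 2-dim code.
data = [[1, 1, 1, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 1, 1, 1]] * 20
layers = [RBM(8, 4), RBM(4, 2)]
inputs = data
for rbm in layers:
    for _ in range(30):
        for v in inputs:
            rbm.cd1(v)
    inputs = [rbm.hid_probs(v) for v in inputs]

codes = inputs  # low-dimensional representation of each input vector
```

After stacking, each 8-dimensional input is summarized by a 2-dimensional code, which is the dimensionality-reduction use of DBNs the record refers to.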

  10. Policy packages to achieve demand reduction

    International Nuclear Information System (INIS)

    Boardman, Brenda

    2005-01-01

In many sectors and many countries, energy demand is still increasing, despite decades of policies to reduce demand. Controlling climate change is becoming more urgent, so there is a need to devise policies that will virtually guarantee demand reduction. This has to come from policy, at least in the UK, as the conditions do not yet exist under which consumers will 'pull' the market for energy efficiency or manufacturers will use technological development to 'push' it. That virtuous circle has to be created by a mixture of consumer education and restrictions on manufacturers (for instance, permission to manufacture). The wider policy options include higher prices for energy and stronger product policies. An assessment of the effectiveness of different policy packages indicates some guiding principles, for instance that improved product policy must precede higher prices, otherwise consumers are unable to react effectively to price rises. The ways in which national and EU policies can reinforce, duplicate or undermine each other will be assessed. Another area of examination is timescales: what is the time lag between the implementation of a policy (whether price or product based) and the point of maximum reductions? In addition, the emphasis given to factors such as equity, raising investment funds and speed of delivery also influences policy design and the extent to which absolute carbon reductions can be expected

  11. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
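Why deep-penetration problems make variance reduction indispensable can be shown with a one-line physics toy (ours, not one of the course codes): estimating the transmission exp(-20) of a purely absorbing slab 20 mean free paths thick, analog versus importance-sampled path lengths.

```python
import math
import random

random.seed(1)

DEPTH = 20.0                 # slab thickness in mean free paths
exact = math.exp(-DEPTH)     # analytic transmission, ~2.1e-9
N = 100_000

# Analog Monte Carlo: sample a free path from exp(1), score a crossing.
# With ~2e-9 transmission, 1e5 histories essentially never score.
hits = sum(1 for _ in range(N) if random.expovariate(1.0) > DEPTH)
analog = hits / N

# Exponential-transform-style importance sampling: draw path lengths from
# the stretched density g(x) = 0.1*exp(-0.1*x) and carry the statistical
# weight w(x) = f(x)/g(x) = 10*exp(-0.9*x) so the tally stays unbiased.
total = 0.0
for _ in range(N):
    x = random.expovariate(0.1)
    if x > DEPTH:
        total += 10.0 * math.exp(-0.9 * x)
biased = total / N
```

The analog tally scores essentially nothing, while the weighted estimate typically lands within a few percent of the analytic value with the same number of histories; this gap is what the variance reduction methods of the course are designed to bridge in realistic shielding geometries.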

  12. Achieving 80% greenhouse gas reduction target in Saudi Arabia under low and medium oil prices

    International Nuclear Information System (INIS)

    Alshammari, Yousef M.; Sarathy, S. Mani

    2017-01-01

COP 21 led to a global agreement to limit the earth's temperature rise to less than 2 °C. This will require countries to act on climate change and achieve significant reductions in their greenhouse gas emissions, which will play a pivotal role in shaping future energy systems. Saudi Arabia is the world's largest exporter of crude oil, and the 11th largest CO2 emitter. Understanding the Kingdom's role in global greenhouse gas reduction is critical in shaping the future of fossil fuels. Hence, this work presents an optimisation study of how Saudi Arabia can meet CO2 reduction targets to achieve an 80% reduction in the power generation sector. It is found that the implementation of energy efficiency measures is necessary to meet the 80% target, and would also lower the costs of the transition to a low-carbon energy system while maintaining cleaner use of hydrocarbons with CCS. Setting very deep GHG reduction targets may be economically uncompetitive in view of the energy supply requirements. In addition, we determine the breakeven price of crude oil needed to make CCS economically viable. The results highlight the pricing of CO2 and the role of CCS compared with alternative sources of energy. - Highlights: • Energy efficiency measures are needed to achieve the 80% reduction. • Nuclear appears as an important option to achieve deep cuts in CO2 by 2050. • Technology improvement can enable using heavy fuel oil with CCS until 2050. • IGCC requires a lower net CO2 footprint in order to be competitive. • Nuclear power causes a sharp increase in the CO2 avoidance costs.

  13. Achieving CO2 Emissions Reduction Goals with Energy Infrastructure Projects

    International Nuclear Information System (INIS)

    Eberlinc, M.; Medved, K.; Simic, J.

    2013-01-01

The EU has set its short-term goals in the Europe 2020 Strategy (a 20% reduction in CO2 emissions, a 20% increase in energy efficiency, and a 20% share of renewables in final energy). The analyses show that the EU Member States in general are on the right track to achieving these goals; they are even ahead of schedule (including Slovenia). But setting long-term goals for 2050 is a tougher challenge. Achieving CO2 emissions reduction goes hand in hand with increasing the share of renewables and strategically planning projects that exploit the potential of renewable sources of energy (e.g. hydropower). In Slovenia, the expected share of hydropower from large HPPs in electricity production within the renewables share by 2030 is one third. The paper presents a hydro power plant project on the middle Sava river in Slovenia and its specifics (influenced by the expansion of the Natura 2000 protected sites and, on the other hand, by the changes in the Environment Protection Law, which implements the EU Industrial Emissions Directive and the ETS Directive). Studies show the importance of the HPPs in terms of CO2 emissions reduction. The main conclusion of the paper is the importance of energy infrastructure projects, which contribute on the one hand to CO2 emissions reduction and on the other to an increased share of renewables. (author)

  14. Deep Learning in Distance Education: Are We Achieving the Goal?

    Science.gov (United States)

    Shearer, Rick L.; Gregg, Andrea; Joo, K. P.

    2015-01-01

    As educators, one of our goals is to help students arrive at deeper levels of learning. However, how is this accomplished, especially in online courses? This design-based research study explored the concept of deep learning through a series of design changes in a graduate education course. A key question that emerged was through what learning…

  15. Variation of Strike Incentives in Deep Reductions

    International Nuclear Information System (INIS)

G. H. Canavan

    2001-01-01

This note studies the sensitivity of strike incentives to deep offensive force reductions using exchange, cost, and game-theoretic decision models derived and discussed in companion reports. As forces fall, weapon allocations shift from military to high-value targets, with the shift half complete at about 1,000 weapons. By 500 weapons, first and second strikes are almost totally on high value. The dominant cost of striking first is that of damage to one's own high value, which is near total absent other constraints, and hence proportional to the preference for survival of high value. Changes in military costs are largely offsetting, so total first-strike costs change little. The resulting costs at decision nodes are well above the costs of inaction, so the preferred course is inaction for all offensive reductions studied. Because the dominant cost of striking first is proportional to the preference for survival of high value, there is a wide gap between the first-strike cost and that of inaction for the parameters studied here. These conclusions should be insensitive to significant reductions in the preference for survival of high value, which is the most sensitive parameter

  16. Deep greenhouse gas emission reductions in Europe: Exploring different options

    International Nuclear Information System (INIS)

    Deetman, Sebastiaan; Hof, Andries F.; Pfluger, Benjamin; Vuuren, Detlef P. van; Girod, Bastien; Ruijven, Bas J. van

    2013-01-01

    Most modelling studies that explore emission mitigation scenarios only look into least-cost emission pathways, induced by a carbon tax. This means that European policies targeting specific – sometimes relatively costly – technologies, such as electric cars and advanced insulation measures, are usually not evaluated as part of cost-optimal scenarios. This study explores an emission mitigation scenario for Europe up to 2050, taking as a starting point specific emission reduction options instead of a carbon tax. The purpose is to identify the potential of each of these policies and identify trade-offs between sectoral policies in achieving emission reduction targets. The reduction options evaluated in this paper together lead to a reduction of 65% of 1990 CO 2 -equivalent emissions by 2050. More bottom-up modelling exercises, like the one presented here, provide a promising starting point to evaluate policy options that are currently considered by policy makers. - Highlights: ► We model the effects of 15 climate change mitigation measures in Europe. ► We assess the greenhouse gas emission reduction potential in different sectors. ► The measures could reduce greenhouse gas emissions by 60% below 1990 levels in 2050. ► The approach allows to explore arguably more relevant climate policy scenarios
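The bottom-up accounting this scenario uses, adding up named measures instead of letting a carbon tax pick them, can be caricatured in a few lines. Every sector baseline and reduction fraction below is invented for illustration; measures within a sector are applied sequentially to the remaining emissions so overlapping savings are not double-counted:

```python
# Hypothetical sector baselines (MtCO2e) and per-measure reduction fractions;
# none of these numbers come from the paper.
sectors = {
    "power":     (1500.0, [0.40, 0.30]),  # e.g. renewables, then CCS
    "transport": (1000.0, [0.25, 0.20]),  # e.g. efficiency, then modal shift
    "buildings": (800.0,  [0.30, 0.15]),  # e.g. insulation, then heat pumps
}

def apply_measures(baseline, fractions):
    # Measures act sequentially on the remaining emissions, so overlapping
    # savings within a sector are not double-counted.
    remaining = baseline
    for f in fractions:
        remaining *= (1.0 - f)
    return remaining

total_base = sum(base for base, _ in sectors.values())
total_left = sum(apply_measures(base, fr) for base, fr in sectors.values())
overall_cut = 1.0 - total_left / total_base   # ~48% in this toy setup
```

The sequential multiplication is the key bookkeeping choice: adding the fractions instead (40% + 30% in the power sector) would overstate the combined potential of overlapping measures.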

  17. Positioning Reduction of Deep Space Probes Based on VLBI Tracking

    Science.gov (United States)

    Qiao, S. B.

    2011-11-01

In the context of the Chinese Lunar Exploration Project and the Yinghuo Project, through theoretical analysis, algorithm study, software development, data simulation, real data processing and so on, the positioning reductions of the European lunar satellite Smart-1 and the Mars Express (MEX) satellite, as well as of the Chinese Chang'e-1 (CE-1) and Chang'e-2 (CE-2) satellites, are accomplished in this dissertation using VLBI and USB tracking data. Progress is made in various aspects, including the development of the theoretical model, the construction of the observation equation, the analysis of the condition of the normal equation, the selection and determination of the constraint, the analysis of data simulation, the detection of outliers in observations, the maintenance of the stability of the parameter solution, the development of the practical software system, and the processing of the real tracking data. The details of the research progress are as follows: (1) The algorithm for the positioning reduction of deep space probes based on VLBI tracking data is analyzed. Through data simulation, the effects of bias in the predicted orbit, of white noise and systematic errors in VLBI delays, and of USB ranges on the positioning reduction of the spacecraft are analyzed. Results show that it is preferable to suppress the dispersion of positioning data points by applying a constraint on the geocentric distance of the spacecraft when only VLBI tracking data are available. The positioning solution is a biased estimate when observations from three VLBI stations are used. For the case of four tracking stations, the uncertainty of the constraint should be in accordance with the bias in the predicted orbit. White noise in delays and ranges mainly results in dispersion of the sequence of positioning data points.
If there is the systematic error of observations, the systematic offset of the positioning results is caused, and there are trend jumps in the shape of
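The stabilizing role of a constraint on a near-singular normal equation, as described in the abstract above, can be reproduced with a two-parameter toy problem. All numbers are invented, and Cramer's rule stands in for the dissertation's solver:

```python
def solve2(A, b):
    """Cramer's rule for a 2x2 linear system."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Two nearly parallel observation equations make the problem ill-conditioned.
A = [[1.0, 1.0], [1.0, 1.0001]]
x_true = [2.0, 3.0]
noise = [0.001, -0.001]
b = [A[0][0] * x_true[0] + A[0][1] * x_true[1] + noise[0],
     A[1][0] * x_true[0] + A[1][1] * x_true[1] + noise[1]]

# Unconstrained solution: tiny observation noise is amplified enormously.
x_free = solve2(A, b)

# Normal equations N x = A^T b with one pseudo-observation x1 ~ 2.0 (unit
# weight) added, playing the role of the geocentric-distance constraint.
# (The entries below use the fact that the first column of A is all ones.)
N = [[2.0 + 1.0, A[0][1] + A[1][1]],
     [A[0][1] + A[1][1], A[0][1]**2 + A[1][1]**2]]
rhs = [b[0] + b[1] + 1.0 * 2.0,
       A[0][1] * b[0] + A[1][1] * b[1]]
x_con = solve2(N, rhs)   # lands close to x_true again
```

The unconstrained estimate is thrown tens of units off by millimetre-scale noise, while the constrained solution returns to the truth; this mirrors the finding that a geocentric-distance constraint suppresses the dispersion of VLBI-only positioning solutions.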

  18. Achieving 80% greenhouse gas reduction target in Saudi Arabia under low and medium oil prices

    KAUST Repository

    Alshammari, Yousef Mohammad

    2016-11-10

COP 21 led to a global agreement to limit the earth's rising temperature to less than 2 °C. This will require countries to act upon climate change and achieve a significant reduction in their greenhouse gas emissions which will play a pivotal role in shaping future energy systems. Saudi Arabia is the World's largest exporter of crude oil, and the 11th largest CO2 emitter. Understanding the Kingdom's role in global greenhouse gas reduction is critical in shaping the future of fossil fuels. Hence, this work presents an optimisation study to understand how Saudi Arabia can meet the CO2 reduction targets to achieve the 80% reduction in the power generation sector. It is found that the implementation of energy efficiency measures is necessary to enable meeting the 80% target, and it would also lower costs of transition to low carbon energy system while maintaining cleaner use of hydrocarbons with CCS. Setting very deep GHG reduction targets may be economically uncompetitive in consideration of the energy supply requirements. In addition, we determine the breakeven price of crude oil needed to make CCS economically viable. Results show important dimension for pricing CO2 and the role of CCS compared with alternative sources of energy.

  19. A core framework and scenario for deep GHG reductions at the city scale

    International Nuclear Information System (INIS)

    Lazarus, Michael; Chandler, Chelsea; Erickson, Peter

    2013-01-01

    Trends in increasing urbanization, paired with a lack of ambitious action on larger scales, uniquely position cities to resume leadership roles in climate mitigation. While many cities have adopted ambitious long-term emission reduction goals, few have articulated how to reach them. This paper presents one of the first long-term scenarios of deep greenhouse gas abatement for a major U.S. city. Using a detailed, bottom-up scenario analysis, we investigate how Seattle might achieve its recently stated goal of carbon neutrality by the year 2050. The analysis demonstrates that a series of ambitious strategies could achieve per capita GHG reductions of 34% in 2020, and 91% in 2050 in Seattle's “core” emissions from the buildings, transportation, and waste sectors. We examine the pros and cons of options to get to, or beyond, net zero emissions in these sectors. We also discuss methodological innovations for community-scale emissions accounting frameworks, including a “core” emissions focus that excludes industrial activity and a consumption perspective that expands the emissions footprint and scope of policy solutions. As in Seattle, other communities may find the mitigation strategies and analytical approaches presented here are useful for crafting policies to achieve deep GHG-reduction goals. - Highlights: ► Cities can play a pivotal role in mitigating climate change. ► Strategies modeled achieve per-capita GHG reductions of 91% by 2050 in Seattle. ► We discuss methodological innovations in community-scale accounting frameworks. ► We weigh options for getting to, or beyond, zero GHG emissions. ► Other cities may adapt these measures and analytical approaches to curb emissions

  20. Modelling the potential to achieve deep carbon emission cuts in existing UK social housing: The case of Peabody

    International Nuclear Information System (INIS)

    Reeves, Andrew; Taylor, Simon; Fleming, Paul

    2010-01-01

    As part of the UK's effort to combat climate change, deep cuts in carbon emissions will be required from existing housing over the coming decades. The viability of achieving such emission cuts for the UK social housing sector has been explored through a case study of Peabody, a housing association operating in London. Various approaches to stock refurbishment were modelled for Peabody's existing stock up to the year 2030, incorporating insulation, communal heating and micro-generation technologies. Outputs were evaluated under four future socio-economic scenarios. The results indicate that the Greater London Authority's target of a 60% carbon emission cut by 2025 can be achieved if extensive stock refurbishment is coupled with a background of wider societal efforts to reduce carbon emissions. The two key external requirements identified are a significant reduction in the carbon intensity of grid electricity and a stabilisation or reduction in householder demand for energy. A target of achieving zero net carbon emissions across Peabody stock by 2030 can only be achieved if grid electricity becomes available from entirely zero-carbon sources. These results imply that stronger action is needed from both social landlords and Government to enable deep emission cuts to be achieved in UK social housing.

  1. Achieving 80% greenhouse gas reduction target in Saudi Arabia under low and medium oil prices

    KAUST Repository

    Alshammari, Yousef Mohammad; Sarathy, Mani

    2016-01-01

    meeting the 80% target, and it would also lower costs of transition to low carbon energy system while maintaining cleaner use of hydrocarbons with CCS. Setting very deep GHG reduction targets may be economically uncompetitive in consideration of the energy

  2. Final LDRD report : science-based solutions to achieve high-performance deep-UV laser diodes.

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Andrew M.; Miller, Mary A.; Crawford, Mary Hagerott; Alessi, Leonard J.; Smith, Michael L.; Henry, Tanya A.; Westlake, Karl R.; Cross, Karen Charlene; Allerman, Andrew Alan; Lee, Stephen Roger

    2011-12-01

    We present the results of a three year LDRD project that has focused on overcoming major materials roadblocks to achieving AlGaN-based deep-UV laser diodes. We describe our growth approach to achieving AlGaN templates with greater than ten times reduction of threading dislocations which resulted in greater than seven times enhancement of AlGaN quantum well photoluminescence and 15 times increase in electroluminescence from LED test structures. We describe the application of deep-level optical spectroscopy to AlGaN epilayers to quantify deep level energies and densities and further correlate defect properties with AlGaN luminescence efficiency. We further review our development of p-type short period superlattice structures as an approach to mitigate the high acceptor activation energies in AlGaN alloys. Finally, we describe our laser diode fabrication process, highlighting the development of highly vertical and smooth etched laser facets, as well as characterization of resulting laser heterostructures.

  3. Deep learning methods for CT image-domain metal artifact reduction

    Science.gov (United States)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge

    2017-09-01

    Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.
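
    The interpolation step that NMAR builds on, replacing metal-corrupted projection bins with values synthesized from their neighbors, can be sketched in a few lines. This is an illustrative toy (the function name and the choice of row-wise linear interpolation are assumptions, not the paper's implementation):

```python
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """Replace metal-corrupted sinogram bins by 1D linear interpolation
    along each projection row (a classical pre-deep-learning baseline)."""
    out = sinogram.copy()
    for row, mask in zip(out, metal_mask):
        if mask.any():
            idx = np.arange(mask.size)
            # interpolate the corrupted bins from the surrounding clean ones
            row[:] = np.interp(idx, idx[~mask], row[~mask])
    return out
```

    A CNN correction stage such as the one described above would then be trained on the residual artifacts that this interpolation leaves behind.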

  4. Drag Reduction of an Airfoil Using Deep Learning

    Science.gov (United States)

    Jiang, Chiyu; Sun, Anzhu; Marcus, Philip

    2017-11-01

    We reduced the drag of a 2D airfoil, starting from a NACA-0012 profile, using deep learning methods. We created a database consisting of simulations of 2D external flow over randomly generated shapes. We then developed a machine learning framework for external flow field inference given input shapes. Past work that utilized machine learning in Computational Fluid Dynamics focused on estimation of specific flow parameters, but this work is novel in the inference of entire flow fields. We further showed that learned flow patterns are transferable to cases that share certain similarities. This study illustrates the prospects of deeper integration of data-based modeling into current CFD simulation frameworks for faster flow inference and more accurate flow modeling.

  5. A deep 3D residual CNN for false-positive reduction in pulmonary nodule detection.

    Science.gov (United States)

    Jin, Hongsheng; Li, Zongyao; Tong, Ruofeng; Lin, Lanfen

    2018-05-01

    The automatic detection of pulmonary nodules using CT scans improves the efficiency of lung cancer diagnosis, and false-positive reduction plays a significant role in the detection. In this paper, we focus on the false-positive reduction task and propose an effective method for this task. We construct a deep 3D residual CNN (convolutional neural network) to reduce false-positive nodules from candidate nodules. The proposed network is much deeper than the traditional 3D CNNs used in medical image processing. Specifically, in the network, we design a spatial pooling and cropping (SPC) layer to extract multilevel contextual information of CT data. Moreover, we employ an online hard sample selection strategy in the training process to make the network better fit hard samples (e.g., nodules with irregular shapes). Our method is evaluated on 888 CT scans from the dataset of the LUNA16 Challenge. The free-response receiver operating characteristic (FROC) curve shows that the proposed method achieves a high detection performance. Our experiments confirm that our method is robust and that the SPC layer helps increase the prediction accuracy. Additionally, the proposed method can easily be extended to other 3D object detection tasks in medical image processing. © 2018 American Association of Physicists in Medicine.
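
    The online hard sample selection strategy mentioned above can be illustrated with a minimal sketch; the function name and the keep ratio are assumptions for illustration, not the authors' code:

```python
import numpy as np

def select_hard_samples(losses, keep_ratio=0.5):
    """Online hard-sample mining: keep the indices of the highest-loss
    fraction of a mini-batch so updates focus on difficult candidates."""
    k = max(1, int(len(losses) * keep_ratio))
    order = np.argsort(losses)[::-1]   # sort descending by loss
    return np.sort(order[:k])          # indices of the hard samples
```

    Only the returned indices would contribute to the gradient update for that mini-batch.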

  6. Deep vein thrombus formation induced by flow reduction in mice is determined by venous side branches.

    Science.gov (United States)

    Brandt, Moritz; Schönfelder, Tanja; Schwenk, Melanie; Becker, Christian; Jäckel, Sven; Reinhardt, Christoph; Stark, Konstantin; Massberg, Steffen; Münzel, Thomas; von Brühl, Marie-Luise; Wenzel, Philip

    2014-01-01

    Interaction between vascular wall abnormalities, inflammatory leukocytes, platelets, coagulation factors and hemorheology in the pathogenesis of deep vein thrombosis (DVT) is incompletely understood, requiring well defined animal models of human disease. We subjected male C57BL/6 mice to ligation of the inferior vena cava (IVC) as a flow reduction model to induce DVT. Thrombus size and weight were analyzed macroscopically and sonographically by B-mode, pulse wave (pw) Doppler and power Doppler imaging (PDI) using high frequency ultrasound. Thrombus size varied substantially between individual procedures and mice, irrespective of the flow reduction achieved by the ligature. Interestingly, PDI accurately predicted thrombus size in a very robust fashion (r2 = 0.9734, p thrombus weight (r2 = 0.5597, p thrombus formation. Occlusion of side branches prior to ligation of IVC did not increase thrombus size, probably due to patent side branches inaccessible to surgery. Venous side branches influence thrombus size in experimental DVT and might therefore prevent thrombus formation. This renders vessel anatomy and hemorheology important determinants in mouse models of DVT, which should be controlled for.

  7. Biogenic Properties of Deep Waters from the Black Sea Reduction (Hydrogen Sulphide) Zone for Marine Algae

    OpenAIRE

    Polikarpov, Gennady G.; Lazorenko, Galina E.; Tereschenko, Natalya N.

    2015-01-01

    Generalized data from investigations of the biogenic properties of deep waters from the Black Sea reduction zone for marine algae are presented. It is shown, on board and in the laboratory, that after pre-oxidation of hydrogen sulphide by intensive aeration of the deep waters lifted to the surface of the sea, they are ready to be used for cultivation of the Black Sea unicellular (planktonic) and multicellular (benthic) algae instead of artificial medium. Naturally balanced micro- and macroeleme...

  8. Iron oxide reduction in methane-rich deep Baltic Sea sediments

    DEFF Research Database (Denmark)

    Egger, Matthias; Hagens, Mathilde; Sapart, Celia J.

    2017-01-01

    /L transition. Our results reveal a complex interplay between production, oxidation and transport of methane showing that besides organoclastic Fe reduction, oxidation of downward migrating methane with Fe oxides may also explain the elevated concentrations of dissolved ferrous Fe in deep Baltic Sea sediments...... profiles and numerical modeling, we propose that a potential coupling between Fe oxide reduction and methane oxidation likely affects deep Fe cycling and related biogeochemical processes, such as burial of phosphorus, in systems subject to changes in organic matter loading or bottom water salinity....

  9. Chronic sleep reduction, functioning at school and school achievement in preadolescents.

    Science.gov (United States)

    Meijer, Anne Marie

    2008-12-01

    This study investigates the relationship between chronic sleep reduction, functioning at school and school achievement of boys and girls. To establish individual consequences of chronic sleep reduction (tiredness, sleepiness, loss of energy and emotional instability) the Chronic Sleep Reduction Questionnaire has been developed. A total of 436 children (219 boys, 216 girls, 1 [corrected] missing; mean age = 11 years and 5 months) from the seventh and eighth grades of 12 elementary schools participated in this study. The inter-item reliability (Cronbach's alpha = 0.84) and test-retest reliability (r = 0.78) of the Chronic Sleep Reduction Questionnaire were satisfactory. The construct validity of the questionnaire as measured by a confirmative factor analysis was acceptable as well (CMIN/DF = 1.49; CFI = 0.94; RMSEA = 0.034). Cronbach's alphas of the scales measuring functioning at school (teacher's influence, self-image as pupil, and achievement motivation) were 0.69, 0.86 and 0.79. School achievement was based on self-reported marks concerning six school subjects. To test the models concerning the relations of chronic sleep reduction, functioning at school, and school achievement, the covariance matrix of these variables was analysed by means of structural equation modelling. To test for differences between boys and girls a multi-group model was used. The models representing the relations between chronic sleep reduction - school achievement and chronic sleep reduction - functioning at school - school achievement fitted the data quite well. The impact of chronic sleep reduction on school achievement and functioning at school appeared to be different for boys and girls. Based on the results of this study, it may be concluded that chronic sleep reduction may affect school achievement directly and indirectly via functioning at school, with worse school marks as a consequence.
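
    For reference, the Cronbach's alpha reliabilities reported above follow the standard item-variance formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total score); a minimal sketch (not the study's own code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)
```

    Perfectly correlated items give alpha = 1; uncorrelated items drive it toward 0.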

  10. Bacterial Sulfate Reduction Above 100-Degrees-C in Deep-Sea Hydrothermal Vent Sediments

    DEFF Research Database (Denmark)

    JØRGENSEN, BB; ISAKSEN, MF; JANNASCH, HW

    1992-01-01

    -reducing bacteria was done in hot deep-sea sediments at the hydrothermal vents of the Guaymas Basin tectonic spreading center in the Gulf of California. Radiotracer studies revealed that sulfate reduction can occur at temperatures up to 110-degrees-C, with an optimum rate at 103-degrees to 106-degrees......-C. This observation expands the upper temperature limit of this process in deep-ocean sediments by 20-degrees-C and indicates the existence of an unknown group of hyperthermophilic bacteria with a potential importance for the biogeochemistry of sulfur above 100-degrees-C....

  11. Association of Kinesthetic and Read-Write Learner with Deep Approach Learning and Academic Achievement

    Directory of Open Access Journals (Sweden)

    Latha Rajendra Kumar

    2011-06-01

    Background: The main purpose of the present study was to further investigate study processes, learning styles, and academic achievement in medical students. Methods: A total of 214 (mean age 22.5 years) first- and second-year students - preclinical years - at the Asian Institute of Medical Science and Technology (AIMST) University School of Medicine in Malaysia participated.  There were 119 women (55.6%) and 95 men (44.4%).  The Biggs questionnaire for determining learning approaches and the VARK questionnaire for determining learning styles were used.  These were compared to the students' performance in the assessment examinations. Results: The major findings were (1) the majority of students prefer to study alone, (2) most students employ a superficial study approach, and (3) students with high kinesthetic and read-write scores performed better on examinations and approached the subject by the deep approach method compared to students with low scores.  Furthermore, there was a correlation between superficial approach scores and visual learners' scores. Discussion: Read-write and kinesthetic learners who adopt a deep approach learning strategy perform better academically than do the auditory, visual learners that employ superficial study strategies.  Perhaps visual and auditory learners can be encouraged to adopt kinesthetic and read-write styles to enhance their performance in the exams.

  12. The Path to Deep Nuclear Reductions. Dealing with American Conventional Superiority

    Energy Technology Data Exchange (ETDEWEB)

    Gormley, D.M.

    2009-07-01

    The transformation of the U.S. conventional capabilities has begun to have a substantial and important impact on counter-force strike missions particularly as they affect counter-proliferation requirements. So too have improvements in ballistic missile defense programs, which are also critically central to U.S. counter-proliferation objectives. These improved conventional capabilities come at a time when thinking about the prospects of eventually achieving a nuclear disarmed world has never been so promising. Yet, the path toward achieving that goal, or making substantial progress towards it, is fraught with pitfalls, including domestic political, foreign, and military ones. Two of the most important impediments to deep reductions in U.S. and Russian nuclear arsenals - no less a nuclear disarmed world - are perceived U.S. advantages in conventional counter-force strike capabilities working in combination with even imperfect but growing missile defense systems. The Barack Obama administration has already toned down the George W. Bush administration's rhetoric surrounding many of these new capabilities. Nevertheless, it is likely to affirm that it is a worthy goal to pursue a more conventionally oriented denial strategy as America further weans itself from its reliance on nuclear weapons. The challenge is to do so in the context of a more multilateral or collective security environment in which transparency plays the role it once did during the Cold War as a necessary adjunct to arms control agreements. Considerable thought has already been devoted to assessing many of the challenges along the way to a nuclear-free world, including verifying arsenals when they reach very low levels, more effective management of the civilian nuclear programs that remain, enforcement procedures, and what, if anything, might be needed to deal with latent capacities to produce nuclear weapons. But far less thought has been expended on why Russia - whose cooperation is absolutely

  13. The Path to Deep Nuclear Reductions. Dealing with American Conventional Superiority

    International Nuclear Information System (INIS)

    Gormley, D.M.

    2009-01-01

    The transformation of the U.S. conventional capabilities has begun to have a substantial and important impact on counter-force strike missions particularly as they affect counter-proliferation requirements. So too have improvements in ballistic missile defense programs, which are also critically central to U.S. counter-proliferation objectives. These improved conventional capabilities come at a time when thinking about the prospects of eventually achieving a nuclear disarmed world has never been so promising. Yet, the path toward achieving that goal, or making substantial progress towards it, is fraught with pitfalls, including domestic political, foreign, and military ones. Two of the most important impediments to deep reductions in U.S. and Russian nuclear arsenals - no less a nuclear disarmed world - are perceived U.S. advantages in conventional counter-force strike capabilities working in combination with even imperfect but growing missile defense systems. The Barack Obama administration has already toned down the George W. Bush administration's rhetoric surrounding many of these new capabilities. Nevertheless, it is likely to affirm that it is a worthy goal to pursue a more conventionally oriented denial strategy as America further weans itself from its reliance on nuclear weapons. The challenge is to do so in the context of a more multilateral or collective security environment in which transparency plays the role it once did during the Cold War as a necessary adjunct to arms control agreements. Considerable thought has already been devoted to assessing many of the challenges along the way to a nuclear-free world, including verifying arsenals when they reach very low levels, more effective management of the civilian nuclear programs that remain, enforcement procedures, and what, if anything, might be needed to deal with latent capacities to produce nuclear weapons. But far less thought has been expended on why Russia - whose cooperation is absolutely

  14. Achievable peak electrode voltage reduction by neurostimulators using descending staircase currents to deliver charge.

    Science.gov (United States)

    Halpern, Mark

    2011-01-01

    This paper considers the achievable reduction in peak voltage across two driving terminals of an RC circuit when delivering charge using a stepped current waveform, comprising a chosen number of steps of equal duration, compared with using a constant current over the total duration. This work has application to the design of neurostimulators giving reduced peak electrode voltage when delivering a given electric charge over a given time duration. Exact solutions for the greatest possible peak voltage reduction using two and three steps are given. Furthermore, it is shown that the achievable peak voltage reduction, for any given number of steps is identical for simple series RC circuits and parallel RC circuits, for appropriate different values of RC. It is conjectured that the maximum peak voltage reduction cannot be improved using a more complicated RC circuit.
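
    The peak-voltage comparison described above can be checked numerically for the series RC case; a sketch with illustrative component values (the helper name and the discretization are mine):

```python
import numpy as np

def peak_voltage(currents, T, R, C, n=100_000):
    """Peak terminal voltage of a series RC load driven by a staircase of
    equal-duration current steps; total charge is sum(currents) * T / len(currents)."""
    dt = T / n
    t = np.arange(n) * dt
    step = np.minimum((t / T * len(currents)).astype(int), len(currents) - 1)
    i = np.asarray(currents, dtype=float)[step]
    q = np.cumsum(i) * dt               # charge delivered up to time t
    return (R * i + q / C).max()        # v(t) = R*i(t) + q(t)/C
```

    With R = C = T = 1 and unit charge, a single constant step peaks near 2.0, while the descending two-step staircase [1.2, 0.8] peaks near 1.8, illustrating the kind of reduction the paper quantifies exactly.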

  15. Assessing Multiple Pathways for Achieving China’s National Emissions Reduction Target

    Directory of Open Access Journals (Sweden)

    Mingyue Wang

    2018-06-01

    In order to achieve China's 2030 carbon intensity reduction target, there is a need to identify a scientific pathway and feasible strategies. In this study, we used a stochastic frontier analysis method of energy efficiency, incorporating energy structure, economic structure, human capital, capital stock and potential energy efficiency, to identify an efficient pathway for achieving the emissions reduction target. We set up 96 scenarios, including single-factor scenarios and multi-factor combination scenarios, for the simulation. The effects of each scenario on achieving the carbon intensity reduction target are then evaluated. It is found that: (1) potential energy efficiency has the greatest contribution to the carbon intensity reduction target; (2) the 2030 carbon intensity reduction target of 60% is unlikely to be reached by optimizing a single factor alone; (3) in order to achieve the 2030 target, several aspects have to be adjusted: the fossil fuel ratio must be lower than 80%, and its average growth rate must be decreased by 2.2%; the service sector ratio in GDP must be higher than 58.3%, while the growth rate of non-service sectors must be lowered by 2.4%; and both human capital and capital stock must achieve and maintain stable growth, with a 1% annual increase in energy efficiency. Finally, specific recommendations are discussed: continually improving energy efficiency; accelerating the upgrading of China's industrial structure; cutting emissions at the root of energy sources; establishing multi-level investment mechanisms in education and training to build up human capital; and investing in new equipment while accelerating the closure of backward production capacity to accumulate capital stock.
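
    Since carbon intensity here means emissions per unit of GDP, the reduction implied by any scenario is simple arithmetic; a sketch (function and variable names are mine):

```python
def intensity_reduction(e0, g0, e1, g1):
    """Percent fall in carbon intensity (emissions / GDP) between a base
    year (e0, g0) and a target year (e1, g1)."""
    return (1.0 - (e1 / g1) / (e0 / g0)) * 100.0
```

    For example, emissions falling from 100 to 80 while GDP doubles from 100 to 200 is a 60% intensity reduction, even though absolute emissions fell only 20%.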

  16. Application of variance reduction techniques of Monte-Carlo method to deep penetration shielding problems

    International Nuclear Information System (INIS)

    Rawat, K.K.; Subbaiah, K.V.

    1996-01-01

    General purpose Monte Carlo code MCNP is being widely employed for solving deep penetration problems by applying variance reduction techniques. These techniques depend on the nature and type of the problem being solved. Application of the geometry splitting and implicit capture methods is examined to study the deep penetration problems of neutron, gamma and coupled neutron-gamma in thick shielding materials. The typical problems chosen are: i) point isotropic monoenergetic gamma ray source of 1 MeV energy in nearly infinite water medium, ii) 252Cf spontaneous source at the centre of 140 cm thick water and concrete and iii) 14 MeV fast neutrons incident on the axis of 100 cm thick concrete disk. (author). 7 refs., 5 figs
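
    Implicit capture, one of the variance reduction techniques examined above, replaces analog absorption with a survival weight at each collision. A toy 1D forward-streaming transmission estimate (an illustrative sketch with arbitrary cross-sections, not MCNP's implementation):

```python
import numpy as np

def transmission_implicit_capture(sigma_t, sigma_a, L, n=20_000, seed=1):
    """Toy 1D forward-streaming transport: instead of killing particles at
    absorptions, multiply the weight by sigma_s / sigma_t at each collision."""
    rng = np.random.default_rng(seed)
    survive = (sigma_t - sigma_a) / sigma_t
    total = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += rng.exponential(1.0 / sigma_t)   # sample a free path
            if x >= L:
                total += w                        # history transmitted
                break
            w *= survive                          # weight reduction replaces analog kill
    return total / n
```

    For sigma_t = 1, sigma_a = 0.5, L = 2 the expected transmission is exp(-0.5 * 2) ≈ 0.368, and every history now contributes to the tally instead of most being killed by absorption.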

  17. Construction of System for Seismic Observation in Deep Borehole (SODB) - Overview and Achievement Status of the Project

    International Nuclear Information System (INIS)

    Kobayashi, Genyu

    2014-01-01

    The seismic responses of each unit at the Kashiwazaki-Kariwa NPP differed greatly during the 2007 Niigata-ken Chuetsu-oki Earthquake; the deep sedimentary structure around the site greatly affected these differences. To clarify underground structure and to evaluate ground motion amplification and attenuation effects more accurately in accordance with deep sedimentary structure, JNES initiated the SODB project. Deployment of a vertical seismometer array in a 3000-meter deep borehole was completed in June 2012 on the premises of NIIT. Horizontal arrays were also placed on the ground surface. Experiences and achievements in the JNES project were introduced, including development of seismic observation technology in deep boreholes, site amplification measurements from logging data, application of borehole observation data to maintenance of nuclear power plant safety, and so on. Afterwards, the relationships to other presentations in this workshop were explained. (authors)

  18. Analytical results of variance reduction characteristics of biased Monte Carlo for deep-penetration problems

    International Nuclear Information System (INIS)

    Murthy, K.P.N.; Indira, R.

    1986-01-01

    An analytical formulation is presented for calculating the mean and variance of transmission for a model deep-penetration problem. With this formulation, the variance reduction characteristics of two biased Monte Carlo schemes are studied. The first is the usual exponential biasing wherein it is shown that the optimal biasing parameter depends sensitively on the scattering properties of the shielding medium. The second is a scheme that couples exponential biasing to the scattering angle biasing proposed recently. It is demonstrated that the coupled scheme performs better than exponential biasing
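
    Exponential biasing of the kind analyzed above stretches the sampled free-path distribution toward the detector and corrects with statistical weights; a minimal pure-absorber sketch (parameter values are illustrative, and as the abstract notes the best stretch depends on the medium):

```python
import numpy as np

def biased_transmission(sigma_t, L, stretch, n=50_000, seed=2):
    """Exponential (path-length) biasing for a pure absorber: sample free
    paths from a stretched exponential and correct with statistical weights."""
    rng = np.random.default_rng(seed)
    sigma_b = sigma_t * stretch                       # biased cross-section
    s = rng.exponential(1.0 / sigma_b, size=n)        # biased free paths
    # weight = true pdf / biased pdf, evaluated at the sampled path length
    w = (sigma_t / sigma_b) * np.exp(-(sigma_t - sigma_b) * s)
    return np.mean(np.where(s > L, w, 0.0))           # tally transmitted weight
```

    With sigma_t = 1 and a slab of L = 5 mean free paths, the analog transmission probability is exp(-5) ≈ 0.0067, so analog scoring is rare; stretching with sigma_b = 0.2 makes most samples cross the slab while the weights keep the estimator unbiased.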

  19. THE DEEP2 GALAXY REDSHIFT SURVEY: DESIGN, OBSERVATIONS, DATA REDUCTION, AND REDSHIFTS

    International Nuclear Information System (INIS)

    Newman, Jeffrey A.; Cooper, Michael C.; Davis, Marc; Faber, S. M.; Guhathakurta, Puragra; Koo, David C.; Phillips, Andrew C.; Conroy, Charlie; Harker, Justin J.; Lai, Kamson; Coil, Alison L.; Dutton, Aaron A.; Finkbeiner, Douglas P.; Gerke, Brian F.; Rosario, David J.; Weiner, Benjamin J.; Willmer, C. N. A.; Yan Renbin; Kassin, Susan A.; Konidaris, N. P.

    2013-01-01

    We describe the design and data analysis of the DEEP2 Galaxy Redshift Survey, the densest and largest high-precision redshift survey of galaxies at z ∼ 1 completed to date. The survey was designed to conduct a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude M_B = -20 at z ∼ 1 via ∼90 nights of observation on the Keck telescope. The survey covers an area of 2.8 deg^2 divided into four separate fields observed to a limiting apparent magnitude of R_AB = 24.1. Photometric pre-selection allows objects with z ≳ 0.7 to be targeted ∼2.5 times more efficiently than in a purely magnitude-limited sample. Approximately 60% of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets that fail to yield secure redshifts are blue objects that lie beyond z ∼ 1.45, where the [O II] 3727 Å doublet lies in the infrared. The DEIMOS 1200 line mm^-1 grating used for the survey delivers high spectral resolution (R ∼ 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the DEEP2 DEIMOS data reduction pipelines. Extensive details are provided on object selection, mask design, biases in target selection and redshift measurements, the spec2d two-dimensional data-reduction pipeline, the spec1d automated redshift pipeline, and the zspec visual redshift verification process, along with examples of instrumental signatures or other artifacts that in some cases remain after data reduction. Redshift errors and catastrophic failure rates are assessed through more than 2000 objects with duplicate

  20. Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features

    Science.gov (United States)

    Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang

    2017-12-01

    Color is one of the most stable attributes of vehicles and is often used as a valuable cue in some important applications. Various complex environmental factors, such as illumination, weather and noise, result in considerable diversity in the visual characteristics of vehicle color. Vehicle color recognition in complex environments has been a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method has been proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on the recognition accuracy; (2) a dual-orientational dimensionality reduction strategy has been designed to greatly reduce the dimensionality of deep features that are learnt from the CNN, which greatly mitigates the storage and computational burden of the subsequent processing, while improving the recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. The experimental results on a public dataset demonstrate that the proposed method can achieve superior recognition performance over the state-of-the-art methods.
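
    The abstract does not spell out the dual-orientational reduction itself; as an assumed illustration of the idea, a 2D feature map can be compressed along both orientations by projecting onto each side's leading singular directions:

```python
import numpy as np

def two_way_reduce(F, k):
    """Compress an (h, w) deep-feature map along both orientations by
    projecting onto the top-k singular directions of each side."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    return U[:, :k].T @ F @ Vt[:k, :].T   # (k, k) compressed features
```

    Because F = U S V^T, this projection reduces an (h, w) map to a (k, k) summary, here the diagonal matrix of the top-k singular values.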

  1. Deep Learning-Based Noise Reduction Approach to Improve Speech Intelligibility for Cochlear Implant Recipients.

    Science.gov (United States)

    Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui

    2018-01-20

    We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal to noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules: noise classifier (NC) and deep denoising autoencoder (DDAE), thus termed (NC + DDAE). In a series of comprehensive experiments, we conduct qualitative and quantitative analyses on the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise and construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches by the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) the normalized covariance measure to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted by nine Mandarin-speaking CI recipients. These nine CI recipients use their own clinical speech processors during testing. The experimental results of objective evaluation and listening test indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two compared classical NR techniques, under both matched and mismatched training-testing conditions. When compared to the two well-known conventional NR techniques under challenging listening conditions, the proposed NC + DDAE NR approach has superior noise suppression capabilities and gives less distortion

  2. On the Reduction of Computational Complexity of Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Partha Maji

    2018-04-01

    Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy.
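
    The Toom–Cook construction mentioned above can be illustrated with the classic F(2,3) tile: two outputs of a 3-tap filter computed from four inputs using four elementwise multiplications instead of six. The transform matrices below are the standard F(2,3) ones, not taken from the paper:

```python
import numpy as np

# Winograd/Toom-Cook F(2,3) transforms: input (Bt), filter (G), output (At).
Bt = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])
At = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """y[i] = sum_j g[j] * d[i + j] for i in {0, 1} (correlation form),
    using 4 elementwise multiplications in the transform domain."""
    return At @ ((G @ g) * (Bt @ d))
```

    Tiling a long signal into overlapping 4-sample windows extends this to full-length 1D convolution.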

  3. DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS ,

    International Nuclear Information System (INIS)

    Nonino, M.; Cristiani, S.; Vanzella, E.; Dickinson, M.; Reddy, N.; Rosati, P.; Grazian, A.; Giavalisco, M.; Kuntschner, H.; Fosbury, R. A. E.; Daddi, E.; Cesarsky, C.

    2009-01-01

    We present deep imaging in the U band covering an area of 630 arcmin^2 centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U_lim ∼ 29.8 (AB, 1σ, in a 1'' radius aperture), and have good image quality, with full width at half-maximum ∼0.''8. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 ≲ z ≲ 4. Deep R-band imaging, reaching a magnitude limit ∼29 (AB, 1σ, 1'' radius aperture) with image quality ∼0.''75, is also presented. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.

  4. Visualizing histopathologic deep learning classification and anomaly detection using nonlinear feature space dimensionality reduction.

    Science.gov (United States)

    Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias

    2018-05-16

    There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
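
    The step of discretizing relationships on the t-SNE plot can be mimicked with a toy grid vote over an already-computed 2D embedding; this is a sketch of the idea only, with hypothetical names, not the authors' workflow:

```python
import numpy as np
from collections import Counter

def cell_majority(embedding, labels, bins=4):
    """Discretize a 2D embedding into a bins x bins grid and assign each
    occupied cell the majority label of the points that fall in it."""
    lo, hi = embedding.min(axis=0), embedding.max(axis=0)
    ij = np.floor((embedding - lo) / (hi - lo + 1e-12) * bins).astype(int)
    ij = np.clip(ij, 0, bins - 1)
    cells = {}
    for (i, j), lab in zip(map(tuple, ij), labels):
        cells.setdefault((i, j), []).append(lab)
    # majority vote per occupied cell
    return {c: Counter(v).most_common(1)[0][0] for c, v in cells.items()}
```

    Sampled regions of a test image could then be classified by the distribution of the cells their embedded patches land in.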

  5. Achieving CO2 reductions in Colombia: Effects of carbon taxes and abatement targets

    International Nuclear Information System (INIS)

    Calderón, Silvia; Alvarez, Andrés Camilo; Loboguerrero, Ana María; Arango, Santiago; Calvin, Katherine; Kober, Tom; Daenzer, Kathryn; Fisher-Vanden, Karen

    2016-01-01

In this paper we investigate CO2 emission scenarios for Colombia and the effects of implementing carbon taxes and abatement targets on the energy system. By comparing baseline and policy scenario results from two integrated assessment partial equilibrium models, TIAM-ECN and GCAM, and two general equilibrium models, Phoenix and MEG4C, we provide an indication of future developments and dynamics in the Colombian energy system. Currently, the carbon intensity of the energy system in Colombia is low compared to other countries in Latin America. However, this trend may change given the projected rapid growth of the economy and the potential increase in the use of carbon-based technologies. Climate policy in Colombia is under development and has yet to consider economic instruments such as taxes and abatement targets. This paper shows how taxes or abatement targets can achieve significant CO2 reductions in Colombia. Though abatement may be achieved through different pathways, taxes and targets promote the entry of cleaner energy sources into the market and reduce final energy demand through energy efficiency improvements and other demand-side responses. The electric power sector plays an important role in achieving CO2 emission reductions in Colombia, through the increase of hydropower, the introduction of wind technologies, and the deployment of biomass, coal and natural gas with CO2 capture and storage (CCS). Uncertainty over the prevailing mitigation pathway reinforces the importance of climate policy to guide sectors toward low-carbon technologies. This paper also assesses the economy-wide implications of mitigation policies such as potential losses in GDP and consumption. An assessment of the legal, institutional, social and environmental barriers to economy-wide mitigation policies is critical yet beyond the scope of this paper. - Highlights: • Four energy and economy-wide models under carbon mitigation scenarios are compared. • Baseline results show that CO

  6. How to Achieve CO2 Emission Reduction Goals: What 'Jazz' and 'Symphony' Can Offer

    International Nuclear Information System (INIS)

    Rose, K.

    2013-01-01

Achieving CO2 emission reduction goals remains one of today's most challenging tasks. Global energy demand will grow for many decades to come. In many regions of the world cheap fossil fuels seem to be the way forward to meet ever growing energy demand. However, there are negative consequences to this, most notably increasing emission levels. Politicians and industry must therefore accept that hard choices need to be made in this generation to bring about real changes for future generations and the planet, limiting CO2 emissions and climate change. In his presentation, Prof. Rose will provide an insight into how CO2 emission reduction goals can be set and achieved, and how a balance between future energy needs and supply can be realised in the long run up to 2050, both globally and regionally. This will be done based on WEC's own leading analysis in this area, namely its recently launched World Energy Scenarios: composing energy futures to 2050 report and WEC's scenarios, Jazz and Symphony. WEC's full analysis, the complete report and supporting material is available online at: http://www.worldenergy.org/publications/2013/world-energy-scenarios-composing-energy-futures-to-2050.(author)

  7. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    Science.gov (United States)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.
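For contrast with eMSOT, the conventional linear unmixing it improves on can be sketched in a few lines: each per-pixel spectrum is decomposed into HbO2 and Hb by least squares, ignoring exactly the wavelength-dependent fluence that corrupts spectra at depth. The absorption matrix below uses illustrative placeholder values, not tabulated hemoglobin spectra:

```python
import numpy as np

# Illustrative two-wavelength absorption matrix: rows = wavelengths,
# columns = [HbO2, Hb]. Real work would use published molar spectra.
A = np.array([[2.0, 7.0],    # e.g. ~750 nm
              [6.0, 3.0]])   # e.g. ~850 nm

def linear_unmix_so2(spectrum):
    """Least-squares unmixing of a per-pixel optoacoustic spectrum into
    HbO2/Hb concentrations, then sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    c, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    c = np.clip(c, 0.0, None)
    return c[0] / c.sum()

# a noise-free spectrum synthesized from 80% oxygenation
p = A @ np.array([0.8, 0.2])
so2 = linear_unmix_so2(p)
```

eMSOT's contribution is, in effect, to correct the measured spectra for depth-dependent fluence so that this kind of inversion remains accurate deep in tissue.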

  8. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues.

    Science.gov (United States)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-30

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  9. Thermodynamic and achievable efficiencies for solar-driven electrochemical reduction of carbon dioxide to transportation fuels

    Science.gov (United States)

    Singh, Meenesh R.; Clark, Ezra L.; Bell, Alexis T.

    2015-11-01

    Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32-42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0-0.9 V, 0.9-1.95 V, and 1.95-3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. We show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices.
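The headline STF numbers follow from a simple energy-balance definition: chemical power stored in the fuel divided by incident solar power. A minimal sketch, with illustrative operating values (10 mA/cm², 100% Faradaic efficiency, 1-sun input) rather than the paper's device parameters:

```python
# eta_STF = (j_op * FE * E_fuel) / P_in, with current density in mA/cm2,
# the fuel's thermodynamic potential in V, and solar input in mW/cm2.
def stf_efficiency(j_op, e_fuel, faradaic_eff=1.0, p_in=100.0):
    return (j_op * faradaic_eff * e_fuel) / p_in

eta = stf_efficiency(10.0, 1.34)   # CO2 -> CO, E = 1.34 V (illustrative j)
```

At these made-up numbers the efficiency comes out near 13%, the same order of magnitude as the 18.4% syngas ceiling quoted in the abstract.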

  10. Thermodynamic and achievable efficiencies for solar-driven electrochemical reduction of carbon dioxide to transportation fuels.

    Science.gov (United States)

    Singh, Meenesh R; Clark, Ezra L; Bell, Alexis T

    2015-11-10

    Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32-42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0-0.9 V, 0.9-1.95 V, and 1.95-3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. We show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices.

  11. Thermodynamic and achievable efficiencies for solar-driven electrochemical reduction of carbon dioxide to transportation fuels

    Science.gov (United States)

    Singh, Meenesh R.; Clark, Ezra L.; Bell, Alexis T.

    2015-01-01

    Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32–42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0–0.9 V, 0.9–1.95 V, and 1.95–3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. We show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices. PMID:26504215

  12. The removal of the deep lateral wall in orbital decompression: Its contribution to exophthalmos reduction and influence on consecutive diplopia

    NARCIS (Netherlands)

    Baldeschi, Lelio; Macandie, Kerr; Hintschich, Christoph; Wakelkamp, Iris M. M. J.; Prummel, Mark F.; Wiersinga, Wilmar M.

    2005-01-01

    PURPOSE: To evaluate the contribution of maximal removal of the deep lateral wall of the orbit to exophthalmos reduction in Graves' orbitopathy and its influence on the onset of consecutive diplopia. DESIGN: Case-control study. METHODS: The medical records of two cohorts of patients affected by

  13. Green Infrastructure Simulation and Optimization to Achieve Combined Sewer Overflow Reductions in Philadelphia's Mill Creek Sewershed

    Science.gov (United States)

    Cohen, J. S.; McGarity, A. E.

    2017-12-01

Mass deployment of green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid in the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. To bring more physical and social realism into previous simulation-optimization work on GSI deployment strategies (McGarity et al., 2016), this study's models included land use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to obtain formulas describing the annual CSO reduction in the sewershed based on the deployed GSI practices. These non-linear hydrologic response formulas were then implemented into the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices on the watershed scale. By saturating the avenue with GSI, not only will CSOs from the sewershed into the Schuylkill River be reduced, but ancillary social and economic benefits of GSI will also be achieved. The effectiveness of these ancillary benefits changes based on the type of GSI practice and the type of land use in which the GSI is implemented. Thus, the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation).
The results give a GSI deployment strategy that achieves desired annual CSO reductions at a minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land
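The cost-minimizing deployment idea can be illustrated with a toy greedy allocator: rank GSI practices by CSO reduction per dollar and buy units until the target is met. All costs, effectiveness values, and caps below are hypothetical, and the real StormWISE model solves a constrained optimization over non-linear hydrologic response formulas rather than this linear greedy rule:

```python
def plan_gsi(practices, target_reduction):
    """practices: dict name -> (cost_per_unit, reduction_per_unit, max_units).
    Greedily buys the most cost-effective units first."""
    plan, total_red, total_cost = {}, 0.0, 0.0
    order = sorted(practices, key=lambda n: practices[n][0] / practices[n][1])
    for name in order:
        cost, red, cap = practices[name]
        n = 0
        while n < cap and total_red < target_reduction:
            n += 1
            total_red += red
            total_cost += cost
        if n:
            plan[name] = n
        if total_red >= target_reduction:
            break
    return plan, total_red, total_cost

# hypothetical $/unit, CSO reduction (MGal/yr) per unit, and unit caps
practices = {
    "rain_barrel": (100.0, 0.5, 50),
    "rain_garden": (2000.0, 15.0, 10),
    "tree_trench": (8000.0, 40.0, 5),
}
plan, red, cost = plan_gsi(practices, 100.0)
```

A greedy rule is only optimal while the per-unit response stays linear; once the hydrologic response formulas bend (diminishing returns), a solver such as StormWISE is needed.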

  14. Thermodynamic and achievable efficiencies for solar-driven electrochemical reduction of carbon dioxide to transportation fuels

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Meenesh R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Center for Artificial Photosynthesis, Material Science Division; Clark, Ezra L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Center for Artificial Photosynthesis, Material Science Division; Univ. of California, Berkeley, CA (United States). Dept. of Chemical & Biomolecular Engineering; Bell, Alexis T. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Center for Artificial Photosynthesis, Material Science Division; Univ. of California, Berkeley, CA (United States). Dept. of Chemical & Biomolecular Engineering

    2015-10-26

    Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32–42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0–0.9 V, 0.9–1.95 V, and 1.95–3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. Finally, we show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices.

  15. Exploring Students' Reflective Thinking Practice, Deep Processing Strategies, Effort, and Achievement Goal Orientations

    Science.gov (United States)

    Phan, Huy Phuong

    2009-01-01

    Recent research indicates that study processing strategies, effort, reflective thinking practice, and achievement goals are important factors contributing to the prediction of students' academic success. Very few studies have combined these theoretical orientations within one conceptual model. This study tested a conceptual model that included, in…

  16. Observations of the Hubble Deep Field with the Infrared Space Observatory .1. Data reduction, maps and sky coverage

    DEFF Research Database (Denmark)

    Serjeant, S.B.G.; Eaton, N.; Oliver, S.J.

    1997-01-01

    We present deep imaging at 6.7 and 15 mu m from the CAM instrument on the Infrared Space Observatory (ISO), centred on the Hubble Deep Field (HDF). These are the deepest integrations published to date at these wavelengths in any region of sky. We discuss the observational strategy and the data...... reduction. The observed source density appears to approach the CAM confusion limit at 15 mu m, and fluctuations in the 6.7-mu m sky background may be identifiable with similar spatial fluctuations in the HDF galaxy counts. ISO appears to be detecting comparable field galaxy populations to the HDF, and our...

  17. Achieving Realistic Energy and Greenhouse Gas Emission Reductions in U.S. Cities

    Science.gov (United States)

    Blackhurst, Michael F.

    2011-12-01

Recognizing that energy markets and greenhouse gas emissions are significantly influenced by local factors, this research examines opportunities for achieving realistic energy and greenhouse gas emission reductions in U.S. cities through the provision of more sustainable infrastructure. Greenhouse gas reduction opportunities are examined through the lens of a public program administrator charged with reducing emissions given realistic financial constraints and authority over emissions reductions and energy use. Opportunities are evaluated with respect to traditional public policy metrics, such as benefit-cost analysis, net benefit analysis, and cost-effectiveness. Section 2 summarizes current practices used to estimate greenhouse gas emissions from communities. I identify improved and alternative emissions inventory techniques, such as disaggregating the sectors reported, reporting inventory uncertainty, and aligning inventories with local organizations that could facilitate emissions mitigation. The potential advantages and challenges of supplementing inventories with comparative benchmarks are also discussed. Finally, I highlight the need to integrate growth (population and economic) and business-as-usual implications (such as changes to electricity supply grids) into climate action planning. I demonstrate how these techniques could improve decision making when planning reductions, help communities set meaningful emission reduction targets, and facilitate CAP implementation and progress monitoring. Section 3 estimates the costs and benefits of building energy efficiency as a means of reducing greenhouse gas emissions in Pittsburgh, PA and Austin, TX. Two policy objectives were evaluated: maximizing GHG reductions given initial budget constraints, or maximizing social savings given target GHG reductions.
This approach explicitly evaluates the trade-offs between three primary and often conflicting program design parameters: initial capital constraints, social savings

  18. Deep and tapered silicon photonic crystals for achieving anti-reflection and enhanced absorption.

    Science.gov (United States)

    Hung, Yung-Jr; Lee, San-Liang; Coldren, Larry A

    2010-03-29

    Tapered silicon photonic crystals (PhCs) with smooth sidewalls are realized using a novel single-step deep reactive ion etching. The PhCs can significantly reduce the surface reflection over the wavelength range between the ultra-violet and near-infrared regions. From the measurements using a spectrophotometer and an angle-variable spectroscopic ellipsometer, the sub-wavelength periodic structure can provide a broad and angular-independent antireflective window in the visible region for the TE-polarized light. The PhCs with tapered rods can further reduce the reflection due to a gradually changed effective index. On the other hand, strong optical resonances for TM-mode can be found in this structure, which is mainly due to the existence of full photonic bandgaps inside the material. Such resonance can enhance the optical absorption inside the silicon PhCs due to its increased optical paths. With the help of both antireflective and absorption-enhanced characteristics in this structure, the PhCs can be used for various applications.
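The benefit of the taper comes from removing the abrupt index step at a bare silicon surface. As a baseline, the normal-incidence Fresnel reflectance of that step (scalar approximation, with an approximate visible-range index n ≈ 3.9) is already about 35%:

```python
# Normal-incidence Fresnel reflectance at a single interface,
# R = ((n1 - n2) / (n1 + n2))**2. Shows why untreated silicon
# needs an antireflective structure.
def fresnel_r(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

r_si = fresnel_r(1.0, 3.9)   # air -> bare silicon
```

Grading the effective index from 1 toward n over a wavelength-scale distance, as the tapered rods do, removes this step and suppresses the reflection broadband.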

  19. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    Science.gov (United States)

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
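The selection step at the heart of the NSGA-II search can be sketched as a plain non-dominated filter over (MRE, MCE) pairs, both minimized; the candidate scores below are made up for illustration:

```python
def pareto_front(points):
    """Return indices of non-dominated solutions, minimizing every
    objective. A point is dominated if another point is <= in all
    objectives and strictly < in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (MRE, MCE) pairs for five hypothetical auto-encoder candidates
scores = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.15), (0.30, 0.30), (0.12, 0.25)]
front = pareto_front(scores)
```

Point 3 is dominated by point 2 (worse on both objectives) and drops out; the survivors are the trade-off set from which a final model would be chosen.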

  20. Achieving reductions in greenhouse gases in the US road transportation sector

    International Nuclear Information System (INIS)

    Kay, Andrew I.; Noland, Robert B.; Rodier, Caroline J.

    2014-01-01

    It is well established that GHG emissions must be reduced 50 to 80% by 2050 in order to limit global temperature increase to 2 °C. Achieving reductions of this magnitude in the transportation sector is a challenge and requires a multitude of policies and technology options. The research presented here analyzes three scenarios: changes in the perceived price of travel, land use intensification, and increases in transit. Elasticity estimates are derived using an activity-based travel model for the state of California and broadly representative of the US. The VISION model is used to forecast changes in technology and fuel options that are currently forecast to occur in the US for the period 2000–2040, providing a life-cycle GHG forecast for the road transportation sector. Results suggest that aggressive policy action is required, especially pricing policies, but also more on the technology side, especially increases in the carbon efficiency of medium and heavy-duty vehicles. - Highlights: • Travel elasticities are calculated for policy scenarios using an activity-based travel model. • These elasticities are used to estimate changes in total life-cycle greenhouse gas emissions. • Current technology and fuel policy and the strongest behavioral policy will not meet targets. • Heavy and medium-duty trucks need more aggressive technology and fuel options
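How an elasticity turns a pricing policy into a GHG estimate can be shown in two lines; the elasticity, price change, and baseline emissions below are hypothetical, not values from the study:

```python
# Percent change in vehicle-miles traveled from a price change, via a
# constant elasticity, then the proportional effect on emissions.
def vmt_change(price_elasticity, pct_price_change):
    return price_elasticity * pct_price_change

def new_emissions(base_mt_co2e, pct_vmt_change):
    return base_mt_co2e * (1 + pct_vmt_change / 100.0)

dvmt = vmt_change(-0.3, 20.0)       # a 20% price rise cuts VMT by 6%
ghg = new_emissions(1500.0, dvmt)   # hypothetical baseline, Mt CO2e
```

The study's point is that even strong behavioral responses like this leave a gap that must be closed by vehicle technology and fuel-carbon improvements.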

  1. U.S. electric power sector transitions required to achieve 80% reductions in economy-wide greenhouse gas emissions: Results based on a state-level model of the U.S. energy system

    Energy Technology Data Exchange (ETDEWEB)

    Iyer, Gokul C.; Clarke, Leon E.; Edmonds, James A.; Kyle, Gordon P.; Ledna, Catherine M.; McJeon, Haewon C.; Wise, M. A.

    2017-05-01

The United States has articulated a deep decarbonization strategy for achieving a reduction in economy-wide greenhouse gas (GHG) emissions of 80% below 2005 levels by 2050. Achieving such deep emissions reductions will entail a major transformation of the energy system, and of the electric power sector in particular. This study uses a detailed state-level model of the U.S. energy system embedded within a global integrated assessment model (GCAM-USA) to demonstrate pathways for the evolution of the U.S. electric power sector that achieve 80% economy-wide reductions in GHG emissions by 2050. The pathways presented in this report are based on feedback received during a workshop of experts organized by the U.S. Department of Energy's Office of Energy Policy and Systems Analysis. Our analysis demonstrates that achieving deep decarbonization by 2050 will require substantial decarbonization of the electric power sector, resulting in increased deployment of zero-carbon and low-carbon technologies such as renewables and carbon capture, utilization and storage. The results also show that the degree to which the electric power sector will need to decarbonize, and low-carbon technologies will need to deploy, depends on the nature of technological advances in the energy sector, the ability of end-use sectors to electrify, and the level of electricity demand.

  2. Full-scale operating experience of deep bed denitrification filter achieving phosphorus.

    Science.gov (United States)

    Husband, Joseph A; Slattery, Larry; Garrett, John; Corsoro, Frank; Smithers, Carol; Phipps, Scott

    2012-01-01

The Arlington County Wastewater Pollution Control Plant (ACWPCP) is located in the southern part of Arlington County, Virginia, USA and discharges to the Potomac River via the Four Mile Run. The ACWPCP was originally constructed in 1937. In 2001, Arlington County committed to expanding its 113,500 m³/d (300,000 pe) secondary treatment plant to 151,400 m³/d (400,000 pe) and to reducing effluent total nitrogen (TN) and total phosphorus (TP) to very low concentrations. This paper will review the steps from concept through the first year of operation, including pilot and full-scale operating data and the capital cost for the denitrification filters.

  3. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction — a phantom study

    Science.gov (United States)

    Dodge, Cristina T.; Tamm, Eric P.; Cody, Dianna D.; Liu, Xinming; Jensen, Corey T.; Wei, Wei; Kundra, Vikas

    2016-01-01

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model‐based iterative reconstruction (MBIR), over a range of typical to low‐dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat‐equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back‐projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low‐contrast detectability were evaluated from noise and contrast‐to‐noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five‐fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high‐contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial
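The contrast-to-noise ratio used above to judge low-contrast detectability is straightforward to compute from two regions of interest; the HU samples below are illustrative, not phantom measurements:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| divided
    by the background standard deviation (the image noise)."""
    return abs(roi.mean() - background.mean()) / background.std()

# HU values sampled from a low-contrast insert and its surround
roi = np.array([40.0, 42.0, 38.0, 40.0])
bg = np.array([30.0, 34.0, 26.0, 30.0])
value = cnr(roi, bg)
```

MBIR's five-fold noise reduction at fixed contrast raises this ratio by the same factor, which is why it preserves detectability at the low-dose settings.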

  4. Effect of the antimicrobial photodynamic therapy on microorganism reduction in deep caries lesions: a systematic review and meta-analysis

    Science.gov (United States)

    Ornellas, Pâmela Oliveira; Antunes, Leonardo Santos; Fontes, Karla Bianca Fernandes da Costa; Póvoa, Helvécio Cardoso Corrêa; Küchler, Erika Calvano; Iorio, Natalia Lopes Pontes; Antunes, Lívia Azeredo Alves

    2016-09-01

This study aimed to perform a systematic review to assess the effectiveness of antimicrobial photodynamic therapy (aPDT) in the reduction of microorganisms in deep carious lesions. An electronic search was conducted in Pubmed, Web of Science, Scopus, Lilacs, and Cochrane Library, followed by a manual search. The MeSH terms, MeSH synonyms, related terms, and free terms were used in the search. As eligibility criteria, only clinical studies were included. Initially, 227 articles were identified in the electronic search, and 152 studies remained after analysis and exclusion of the duplicated studies; 6 remained after application of the eligibility criteria; and 3 additional studies were found in the manual search. After access to the full articles, three were excluded, leaving six for evaluation by the criteria of the Cochrane Collaboration's tool for assessing risk of bias. Of these, five presented some risk of bias. All results from the selected studies showed a significant reduction of microorganisms in deep carious lesions for both primary and permanent teeth. The meta-analysis demonstrated a significant reduction in microorganism counts in all analyses (p<0.00001). Based on these findings, there is scientific evidence emphasizing the effectiveness of aPDT in reducing microorganisms in deep carious lesions.

  5. Exploring the effects of dimensionality reduction in deep networks for force estimation in robotic-assisted surgery

    Science.gov (United States)

    Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia

    2016-03-01

The Robotic-Assisted Surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose different constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which the 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor to reduce complexity and improve the network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in Robotic-Assisted Surgeries. According to the results, we demonstrate positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches to force estimation.

  6. Deep vadose zone remediation: technical and policy challenges, opportunities, and progress in achieving cleanup endpoints

    International Nuclear Information System (INIS)

    Wellman, D.M.; Freshley, M.D.; Truex, M.J.; Lee, M.H.

    2013-01-01

    Current requirements for site remediation and closure are standards-based and are often overly conservative, costly, and in some cases, technically impractical. Use of risk-informed alternate endpoints provides a means to achieve remediation goals that are permitted by regulations and are protective of human health and the environment. Alternate endpoints enable the establishment of a path for cleanup that may include intermediate remedial milestones and transition points and/or regulatory alternatives to standards-based remediation. A framework is presented that is centered around developing and refining conceptual models in conjunction with assessing risks and potential endpoints as part of a system-based assessment that integrates site data with scientific understanding of processes that control the distribution and transport of contaminants in the subsurface and pathways to receptors. This system-based assessment and subsequent implementation of the remediation strategy with appropriate monitoring are targeted at providing a holistic approach to addressing risks to human health and the environment. This holistic approach also enables effective predictive analysis of contaminant behavior to provide defensible criteria and data for making long-term decisions. Developing and implementing an alternate endpoint-based approach for remediation and waste site closure presents a number of challenges and opportunities. Categories of these challenges include scientific and technical, regulatory, institutional, and budget and resource allocation issues. Opportunities exist for developing and implementing systems-based approaches with respect to supportive characterization, monitoring, predictive modeling, and remediation approaches. (authors)

  7. Deep vadose zone remediation: technical and policy challenges, opportunities, and progress in achieving cleanup endpoints

    Energy Technology Data Exchange (ETDEWEB)

    Wellman, D.M.; Freshley, M.D.; Truex, M.J.; Lee, M.H. [Pacific Northwest National Laboratory, Richland, Washington (United States)

    2013-07-01

    Current requirements for site remediation and closure are standards-based and are often overly conservative, costly, and in some cases, technically impractical. Use of risk-informed alternate endpoints provides a means to achieve remediation goals that are permitted by regulations and are protective of human health and the environment. Alternate endpoints enable the establishment of a path for cleanup that may include intermediate remedial milestones and transition points and/or regulatory alternatives to standards-based remediation. A framework is presented that is centered around developing and refining conceptual models in conjunction with assessing risks and potential endpoints as part of a system-based assessment that integrates site data with scientific understanding of processes that control the distribution and transport of contaminants in the subsurface and pathways to receptors. This system-based assessment and subsequent implementation of the remediation strategy with appropriate monitoring are targeted at providing a holistic approach to addressing risks to human health and the environment. This holistic approach also enables effective predictive analysis of contaminant behavior to provide defensible criteria and data for making long-term decisions. Developing and implementing an alternate endpoint-based approach for remediation and waste site closure presents a number of challenges and opportunities. Categories of these challenges include scientific and technical, regulatory, institutional, and budget and resource allocation issues. Opportunities exist for developing and implementing systems-based approaches with respect to supportive characterization, monitoring, predictive modeling, and remediation approaches. (authors)

  8. Interleaving subthalamic nucleus deep brain stimulation to avoid side effects while achieving satisfactory motor benefits in Parkinson disease

    Science.gov (United States)

    Zhang, Shizhen; Zhou, Peizhi; Jiang, Shu; Wang, Wei; Li, Peng

    2016-01-01

    Abstract Background: Deep brain stimulation (DBS) of the subthalamic nucleus is an effective treatment for advanced Parkinson disease (PD). However, achieving ideal outcomes through conventional programming can be difficult in some patients, resulting in suboptimal control of PD symptoms and stimulation-induced adverse effects. Interleaving stimulation (ILS) is a newer programming technique that can individually optimize the stimulation area, thereby improving control of PD symptoms while alleviating stimulation-induced side effects after conventional programming fails to achieve the desired results. Methods: We retrospectively reviewed PD patients who received DBS programming during the previous 4 years in our hospital. We collected clinical and demographic data from 12 patients who received ILS because of incomplete alleviation of PD symptoms or stimulation-induced adverse effects after conventional programming had proven ineffective or intolerable. Appropriate lead location was confirmed with postoperative reconstruction images. The rationale and clinical efficacy of ILS were analyzed. Results: We divided our patients into 4 groups based on the following symptoms: stimulation-induced dysarthria, choreoathetoid dyskinesias, gait disturbance, and incomplete control of parkinsonism. After treatment with ILS, patients showed satisfactory improvement in PD symptoms and alleviation of stimulation-induced side effects, with a mean improvement in Unified PD Rating Scale motor scores of 26.9%. Conclusions: ILS is a newer and effective programming strategy for maximizing symptom control in PD while decreasing stimulation-induced adverse effects when conventional programming fails to achieve a satisfactory outcome. However, we should keep in mind that most DBS patients are routinely treated with conventional stimulation and that not all patients benefit from ILS. ILS is not recommended as the first choice of programming, and it is recommended only when patients have

  9. Achieving carbon emission reduction through industrial and urban symbiosis: A case of Kawasaki

    International Nuclear Information System (INIS)

    Dong, Huijuan; Ohnishi, Satoshi; Fujita, Tsuyoshi; Geng, Yong; Fujii, Minoru; Dong, Liang

    2014-01-01

    Industry and fossil fuel combustion are the main sources of urban carbon emissions. Most studies focus on reducing energy-consumption emissions and improving energy efficiency. Material saving is also important for carbon emission reduction from a lifecycle perspective. IS (industrial symbiosis) and UrS (urban symbiosis) have been effective since both encourage byproduct exchange. However, quantitative evaluation of the carbon emission reduction achieved by applying them is still lacking. Consequently, the purpose of this paper is to fill that gap through a case study of Kawasaki Eco-town, Japan. A hybrid LCA model was employed to evaluate the lifecycle carbon footprint. The results show that the lifecycle carbon footprints with and without IS and UrS were 26.66 Mt CO2e and 30.92 Mt CO2e, respectively. Carbon emission efficiency was improved by 13.77% with the implementation of IS and UrS. The carbon emission reduction came mainly from the iron and steel, cement, and paper-making industries, with figures of 2.76 Mt CO2e, 1.16 Mt CO2e and 0.34 Mt CO2e, respectively. Reuse of scrap steel, blast furnace slag and waste paper are all effective measures for promoting carbon emission reductions. Finally, policy implications on how to further promote IS and UrS are presented. - Highlights: • We evaluate the carbon emission reduction of industrial and urban symbiosis (IS/UrS). • A hybrid LCA model was used to evaluate the lifecycle carbon footprint. • Carbon emission efficiency was improved by 13.77% after applying IS/UrS. • The importance of UrS to carbon reduction is addressed in the paper
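    The reported efficiency improvement can be reproduced directly from the two lifecycle footprints quoted in the abstract (a back-of-the-envelope check, not part of the original study):

    ```python
    with_symbiosis = 26.66     # Mt CO2e, lifecycle footprint with IS and UrS
    without_symbiosis = 30.92  # Mt CO2e, baseline footprint without IS and UrS

    reduction = without_symbiosis - with_symbiosis
    improvement_pct = 100 * reduction / without_symbiosis
    print(f"{reduction:.2f} Mt CO2e avoided, {improvement_pct:.2f}% improvement")
    # → 4.26 Mt CO2e avoided, 13.78% improvement
    ```

    This matches the paper's 13.77% figure to within rounding.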

  10. Cocatalysts in Semiconductor-based Photocatalytic CO2 Reduction: Achievements, Challenges, and Opportunities.

    Science.gov (United States)

    Ran, Jingrun; Jaroniec, Mietek; Qiao, Shi-Zhang

    2018-02-01

    Ever-increasing fossil-fuel combustion along with massive CO2 emissions has aroused a global energy crisis and climate change. Photocatalytic CO2 reduction represents a promising strategy for clean, cost-effective, and environmentally friendly conversion of CO2 into hydrocarbon fuels by utilizing solar energy. This strategy combines the reductive half-reaction of CO2 conversion with an oxidative half-reaction, e.g., H2O oxidation, to create a carbon-neutral cycle, presenting a viable solution to global energy and environmental problems. There are three pivotal processes in photocatalytic CO2 conversion: (i) solar-light absorption, (ii) charge separation/migration, and (iii) catalytic CO2 reduction and H2O oxidation. While significant progress has been made in optimizing the first two processes, much less research has been conducted toward enhancing the efficiency of the third step, which requires the presence of cocatalysts. In general, cocatalysts play four important roles: (i) boosting charge separation/transfer, (ii) improving the activity and selectivity of CO2 reduction, (iii) enhancing the stability of photocatalysts, and (iv) suppressing side or back reactions. Herein, for the first time, all the developed CO2-reduction cocatalysts for semiconductor-based photocatalytic CO2 conversion are summarized, and their functions and mechanisms are discussed. Finally, perspectives in this emerging area are provided. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Reductions in energy use and environmental emissions achievable with utility-based cogeneration: Simplified illustrations for Ontario

    International Nuclear Information System (INIS)

    Rosen, M.A.

    1998-01-01

    Significant reductions in energy use and environmental emissions are demonstrated to be achievable when electrical utilities use cogeneration. Simplified illustrations of these reductions are presented for the province of Ontario, based on applying cogeneration to the facilities of the main provincial electrical utility. Three cogeneration illustrations are considered: (i) fuel cogeneration is substituted for fuel electrical generation and fuel heating, (ii) nuclear cogeneration is substituted for nuclear electrical generation and fuel heating, and (iii) fuel cogeneration is substituted for fuel electrical generation and electrical heating. The substitution of cogeneration for separate electrical and heat generation processes for all illustrations considered leads to significant reductions in fuel energy consumption (24-61%), which lead to approximately proportional reductions in emissions. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
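    The scale of fuel saving reported above follows from standard cogeneration accounting: compare the fuel needed to meet an electricity demand and a heat demand separately with the fuel a combined unit needs. A minimal sketch, with illustrative efficiencies that are assumptions rather than values from the paper:

    ```python
    def fuel_saving_pct(e_demand, q_demand, eta_e=0.36, eta_b=0.85, eta_chp=0.80):
        """Percentage fuel saving from meeting electricity demand e_demand and
        heat demand q_demand with one cogeneration unit (overall efficiency
        eta_chp) instead of separate generation (eta_e) plus boiler heating (eta_b)."""
        fuel_separate = e_demand / eta_e + q_demand / eta_b
        fuel_cogen = (e_demand + q_demand) / eta_chp
        return 100 * (fuel_separate - fuel_cogen) / fuel_separate

    # 1 unit of electricity plus 1.5 units of heat (arbitrary demand mix)
    print(f"{fuel_saving_pct(1.0, 1.5):.1f}% fuel saving")  # → 31.2% fuel saving
    ```

    With these assumed efficiencies the saving lands inside the 24-61% range the abstract reports; the exact figure depends on the demand mix and the generation technology substituted.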

  12. The importance of grid integration for achievable greenhouse gas emissions reductions from alternative vehicle technologies

    International Nuclear Information System (INIS)

    Tarroja, Brian; Shaffer, Brendan; Samuelsen, Scott

    2015-01-01

    Alternative vehicles must appropriately interface with the electric grid and renewable generation to contribute to decarbonization. This study investigates the impact of infrastructure configurations and management strategies on the vehicle–grid interface and vehicle greenhouse gas reduction potential with regard to California's Executive Order S-21-09 goal. Considered are battery electric vehicles, gasoline-fueled plug-in hybrid electric vehicles, hydrogen-fueled fuel cell vehicles, and plug-in hybrid fuel cell vehicles. Temporally resolved models of the electric grid, electric vehicle charging, hydrogen infrastructure, and vehicle powertrain simulations are integrated. For plug-in vehicles, consumer travel patterns can limit the greenhouse gas reductions without smart charging or energy storage. For fuel cell vehicles, the fuel production mix must be optimized for minimal greenhouse gas emissions. The plug-in hybrid fuel cell vehicle has the largest potential for emissions reduction due to smaller battery and fuel cells keeping efficiencies higher and meeting 86% of miles on electric travel keeping the hydrogen demand low. Energy storage is required to meet Executive Order S-21-09 goals in all cases. Meeting the goal requires renewable capacities of 205 GW for plug-in hybrid fuel cell vehicles and battery electric vehicle 100s, 255 GW for battery electric vehicle 200s, and 325 GW for fuel cell vehicles. - Highlights: • Consumer travel patterns limit greenhouse gas reductions with immediate charging. • Smart charging or energy storage are required for large greenhouse gas reductions. • Fuel cells as a plug-in vehicle range extender provided the most greenhouse gas reductions. • Energy storage is required to meet greenhouse gas goals regardless of vehicle type. • Smart charging reduces the required energy storage size for a given greenhouse gas goal

  13. The role of poverty reduction strategies in achieving the millennium development goals

    NARCIS (Netherlands)

    Bezemer, Dirk; Eggen, Andrea

    2008-01-01

    We provide a literature overview of the linkages between Poverty Reduction Strategy Papers (PRSPs) and the Millennium Development Goals (MDGs) and use novel data to examine their relation. We find that introduction of a PRSP is associated with progress in four of the nine MDG indicators we study.

  14. Developments in greenhouse gas emissions and net energy use in Danish agriculture - How to achieve substantial CO2 reductions?

    International Nuclear Information System (INIS)

    Dalgaard, T.; Olesen, J.E.; Petersen, S.O.; Petersen, B.M.; Jorgensen, U.; Kristensen, T.; Hutchings, N.J.; Gyldenkaerne, S.; Hermansen, J.E.

    2011-01-01

    Greenhouse gas (GHG) emissions from agriculture are a significant contributor to total Danish emissions. Consequently, much effort is currently given to the exploration of potential strategies to reduce agricultural emissions. This paper presents results from a study estimating agricultural GHG emissions in the form of methane, nitrous oxide and carbon dioxide (including carbon sources and sinks, and the impact of energy consumption/bioenergy production) from Danish agriculture in the years 1990-2010. An analysis of possible measures to reduce the GHG emissions indicated that a 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable, including mitigation measures in relation to the handling of manure and fertilisers, optimization of animal feeding, cropping practices, and land use changes with more organic farming, afforestation and energy crops. In addition, the bioenergy production may be increased significantly without reducing the food production, whereby Danish agriculture could achieve a positive energy balance. - Highlights: → GHG emissions from Danish agriculture 1990-2010 are calculated, including carbon sequestration. → Effects of measures to further reduce GHG emissions are listed. → Land use scenarios for a substantially reduced GHG emission by 2050 are presented. → A 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable. → Via bioenergy production Danish agriculture could achieve a positive energy balance. - Scenario studies of greenhouse gas mitigation measures illustrate the possible realization of CO2 reductions for Danish agriculture by 2050, sustaining current food production.

  15. Reduction of light oil usage as power fluid for jet pumping in deep heavy oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S.; Li, H.; Yang, D. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Regina Univ., SK (Canada); Zhang, Q. [China Univ. of Petroleum, Dongying, Shandong (China); He, J. [China National Petroleum Corp., Haidan District, Beijing (China). PetroChina Tarim Oilfield Co.

    2008-10-15

    In deep heavy oil reservoirs, reservoir fluid can flow relatively easily in the formation and around the bottomhole. However, along its path up the production string, the viscosity of the reservoir fluid increases dramatically due to heat loss and release of the dissolved gas, resulting in a significant pressure drop along the wellbore. Artificial lift methods must therefore be adopted to pump the reservoir fluids to the surface. This paper discussed the development of a new technique for reducing the amount of light oil used for jet pumping in deep heavy oil wells. Two approaches were discussed. Approach A first uses light oil as the power fluid to obtain produced fluid with lower viscosity, and then the produced fluid is reinjected into the well as the power fluid. The process continues until the viscosity of the produced fluid is too high to be utilized. Approach B combines a portion of the produced fluid with light oil at a suitable ratio, and the produced fluid-light oil mixture is then used as the power fluid for deep heavy oil well production. The viscosity of the blended power fluid continues to increase and eventually reaches equilibrium. The paper presented the detailed processes of both approaches in order to indicate how to apply them in field applications. Theoretical models were also developed and presented to determine the key parameters in field operations. A field case was also presented, and a comparative analysis of the two approaches was discussed. It was concluded from the field applications that, with a given amount of light oil, the amount of reservoir fluid produced using the new technique could be 3 times higher than with the conventional jet pumping method. 17 refs., 3 tabs., 6 figs.
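    The equilibrium behavior of Approach B can be sketched as a fixed-point iteration. Everything here is an illustrative assumption rather than the paper's model: the Arrhenius (log-linear) blending rule for mixture viscosity, the 50/50 mixing fractions, and the viscosity values themselves:

    ```python
    import math

    def mix_viscosity(mu_a, mu_b, frac_a):
        """Arrhenius (log-linear) blending rule for the viscosity of a mixture."""
        return math.exp(frac_a * math.log(mu_a) + (1 - frac_a) * math.log(mu_b))

    mu_heavy, mu_light = 5000.0, 5.0   # mPa*s, illustrative values only
    r = 0.5                            # fraction of produced fluid recycled into the power fluid

    mu_produced = mu_heavy
    for _ in range(50):
        # power fluid = recycled produced fluid blended with fresh light oil
        mu_power = mix_viscosity(mu_produced, mu_light, r)
        # produced fluid = reservoir heavy oil diluted 50/50 by the power fluid
        mu_produced = mix_viscosity(mu_heavy, mu_power, 0.5)

    print(round(mu_produced, 1))  # → 500.0 (the equilibrium viscosity)
    ```

    The iteration converges to a finite equilibrium viscosity, mirroring the abstract's statement that the blended power fluid's viscosity "continues to increase and eventually reaches equilibrium".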

  16. Does Class-Size Reduction Close the Achievement Gap? Evidence from TIMSS 2011

    Science.gov (United States)

    Li, Wei; Konstantopoulos, Spyros

    2017-01-01

    Policies about reducing class size have been implemented in the US and Europe in the past decades. Only a few studies have discussed the effects of class size at different levels of student achievement, and their findings have been mixed. We employ quantile regression analysis, coupled with instrumental variables, to examine the causal effects of…

  17. Achieving emissions reduction through oil sands cogeneration in Alberta’s deregulated electricity market

    International Nuclear Information System (INIS)

    Ouellette, A.; Rowe, A.; Sopinka, A.; Wild, P.

    2014-01-01

    The province of Alberta faces the challenge of balancing its commitment to reduce CO2 emissions and the growth of its energy-intensive oil sands industry. Currently, these operations rely on the Alberta electricity system and on-site generation to satisfy their steam and electricity requirements. Most of the on-site generation units produce steam and electricity through the process of cogeneration. It is unclear to what extent new and existing operations will continue to develop cogeneration units or rely on electricity from the Alberta grid to meet their energy requirements in the near future. This study explores the potential for reductions in fuel usage and CO2 emissions by increasing the penetration of oil sands cogeneration in the provincial generation mixture. EnergyPLAN is used to perform scenario analyses on Alberta’s electricity system in 2030 with a focus on transmission conditions to the oil sands region. The results show that up to 15–24% of CO2 reductions prescribed by the 2008 Alberta Climate Strategy are possible. Furthermore, the policy implications of these scenarios within a deregulated market are discussed. - Highlights: • High levels of cogeneration in the oil sands significantly reduce the total fuel usage and CO2 emissions for the province. • Beyond a certain threshold, the emissions reduction intensity per MW of cogeneration installed is reduced. • The cost difference between scenarios is not significant. • Policy which gives an advantage to a particular technology goes against the ideology of a deregulated market. • Alberta will need significant improvements to its transmission system in order for oil sands cogeneration to persist

  18. Reduction of surface subsidence risk by fly ash exploitation as filling material in deep mining areas

    Czech Academy of Sciences Publication Activity Database

    Trčková, Jiřina; Šperl, Jan

    2010-01-01

    Roč. 53, č. 2 (2010), s. 251-258 ISSN 0921-030X Institutional research plan: CEZ:AV0Z30460519 Keywords : undermining * subsidence of the surface * impact reduction Subject RIV: DO - Wilderness Conservation Impact factor: 1.398, year: 2010 www.springerlink.com/content/y8257893528lp56w/

  19. Sulfate reduction controlled by organic matter availability in deep sediment cores from the saline, alkaline Lake Van (Eastern Anatolia, Turkey

    Directory of Open Access Journals (Sweden)

    Clemens eGlombitza

    2013-07-01

    As part of the International Continental Drilling Program (ICDP) deep lake drilling project PaleoVan, we investigated sulfate reduction (SR) in deep sediment cores of the saline, alkaline (salinity 21.4 ‰, alkalinity 155 mEq L-1, pH 9.81) Lake Van, Turkey. The cores were retrieved in the Northern Basin (NB) and at Ahlat Ridge (AR) and reached a maximum depth of 220 m. Additionally, 65-75 cm long gravity cores were taken at both sites. Sulfate reduction rates (SRR) were low (≤ 22 nmol cm-3 d-1) compared to lakes with higher salinity and alkalinity, indicating that salinity and alkalinity are not limiting SR in Lake Van. The two sites differ significantly in rates and depth distribution of SR. In NB, SRR are up to 10 times higher than at AR. SR could be detected down to 19 meters below lake floor (mblf) at NB and down to 13 mblf at AR. Although SRR were lower at AR than at NB, organic matter (OM) concentrations were higher. In contrast, dissolved OM in the pore water at AR contained more macromolecular OM and less low-molecular-weight OM. We thus suggest that OM content alone cannot be used to infer microbial activity at Lake Van and that the quality of OM has an important impact as well. These differences suggest that biogeochemical processes in lacustrine sediments react very sensitively to small variations in geological, physical or chemical parameters over relatively short distances.

  20. Reflected Sunlight Reduction and Characterization for a Deep-Space Optical Receiver Antenna (DSORA)

    Science.gov (United States)

    Clymer, B. D.

    1990-01-01

    A baffle system for the elimination of first-order specular and diffuse reflection of sunlight from the sunshade of a deep-space optical receiver telescope is presented. This baffle system consists of rings of 0.5 cm blades spaced 2.5 cm apart on the walls of 60 hexagonal sunshade tubes that combine to form the telescope sunshade. The shadow cast by the blades, walls, and rims of the tubes prevents all first-order reflections of direct sunlight from reaching the primary mirror of the telescope. A reflection model of the sunshade without baffles is also presented for comparison. Since manufacturers of absorbing surfaces do not measure data near grazing incidence, the reflection properties at the anticipated angles of incidence must be characterized. A description of reflection from matte surfaces in terms of the bidirectional reflectance distribution function (BRDF) is presented, along with a discussion of measuring BRDF near grazing incidence.
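    For reference, the simplest BRDF is the Lambertian (ideal diffuse) one, f_r = ρ/π, where ρ is the albedo. The sketch below, with an illustrative albedo for a dark matte baffle coating, checks that integrating the cosine-weighted BRDF over the hemisphere recovers the albedo (the hemispherical integral of cos θ over solid angle equals π):

    ```python
    import math

    def lambertian_brdf(albedo):
        """Constant BRDF of an ideal diffuse surface: f_r = albedo / pi."""
        return albedo / math.pi

    albedo = 0.04                 # illustrative dark matte coating, not a measured value
    f_r = lambertian_brdf(albedo)

    # Hemispherical reflectance: integral of f_r * cos(theta) d(omega) = f_r * pi
    reflectance = f_r * math.pi
    print(round(reflectance, 6))  # → 0.04, recovering the albedo
    ```

    Real matte coatings deviate from this constant-BRDF model, especially near grazing incidence, which is exactly why the abstract stresses measuring BRDF at those angles.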

  1. How to Achieve Transparency in Public-Private Partnerships Engaged in Hunger and Malnutrition Reduction.

    Science.gov (United States)

    Eggersdorfer, Manfred; Bird, Julia K

    2016-01-01

    Multi-stakeholder partnerships are important facilitators of improving nutrition in developing countries to achieve the United Nations' Sustainable Development Goals. Often, the role of industry is challenged and questions are raised as to the ethics of involving for-profit companies in humanitarian projects. The Second International Conference on Nutrition placed great emphasis on the role of the private sector, including industry, in multi-stakeholder partnerships to reduce hunger and malnutrition. Governments have to establish regulatory frameworks and institutions to guarantee fair competition and invest in infrastructure that makes investments for private companies attractive, eventually leading to economic growth. Civil society organizations can contribute by delivering nutrition interventions and behavioral change-related communication to consumers, providing capacity, and holding governments and private sector organizations accountable. Industry provides technical support, innovation, and access to markets and the supply chain. The greatest progress and impact can be achieved if all stakeholders cooperate in multi-stakeholder partnerships aimed at improving nutrition, thereby strengthening local economies and reducing poverty and inequality. Successful examples of public-private partnerships exist, as well as examples in which these partnerships did not achieve mutually agreed objectives. The key requirements for productive alliances between industry and civil society organizations are the establishment of rules of engagement, transparency and mutual accountability. The Global Social Observatory performed a consultation on conflicts of interest related to the Scaling Up Nutrition movement and provided recommendations to prevent, identify, manage and monitor potential conflicts of interest. Multi-stakeholder partnerships can be successful models in improving nutrition if they meet societal demand with transparent decision-making and execution. 

  2. The role of anthropogenic aerosol emission reduction in achieving the Paris Agreement's objective

    Science.gov (United States)

    Hienola, Anca; Pietikäinen, Joni-Pekka; O'Donnell, Declan; Partanen, Antti-Ilari; Korhonen, Hannele; Laaksonen, Ari

    2017-04-01

    The Paris Agreement reached in December 2015 under the auspices of the United Nations Framework Convention on Climate Change (UNFCCC) aims at holding the global temperature increase to well below 2 °C above preindustrial levels and "to pursue efforts to limit the temperature increase to 1.5 °C above preindustrial levels". Limiting warming to any level implies that the total amount of carbon dioxide (CO2) - the dominant driver of long-term temperatures - that can ever be emitted into the atmosphere is finite. Essentially, this means that global CO2 emissions need to become net zero. CO2 is not the only pollutant causing warming, although it is the most persistent. Short-lived, non-CO2 climate forcers must also be considered. Whereas much effort has been put into defining a threshold for temperature increase and zero net carbon emissions, surprisingly little attention has been paid to the non-CO2 climate forcers, including not just the non-CO2 greenhouse gases (methane (CH4), nitrous oxide (N2O), halocarbons, etc.) but also anthropogenic aerosols like black carbon (BC), organic carbon (OC) and sulfate. This study investigates the possibility of limiting the temperature increase to 1.5 °C by the end of the century under different future scenarios of anthropogenic aerosol emissions, simulated with the very simplistic MAGICC climate carbon cycle model as well as with ECHAM6.1-HAM2.2-SALSA + UVic ESCM. The simulations include two different CO2 scenarios: RCP3PD as control and a CO2 reduction leading to 1.5 °C (which translates into reaching net zero CO2 emissions by the mid 2040s, followed by negative emissions by the end of the century); each CO2 scenario also includes two aerosol pollution control cases, denoted CLE (current legislation) and MFR (maximum feasible reduction). The main result of the above scenarios is that the stronger the anthropogenic aerosol emission reduction is, the more significant the temperature increase by 2100 relative to pre

  3. Key Questions for Achieving EU Emission Reductions without Abandoning Other Energy Goals

    International Nuclear Information System (INIS)

    Stang, G.

    2014-01-01

    What considerations must be addressed to ensure that efforts to achieve the EU's new 2030 emissions and renewables targets are compatible with the other energy goals of the EU and its member states: energy security, and energy affordability? How should these other energy goals be addressed when pursuing energy efficiency improvements, upgrading electricity systems to handle different renewable energy sources, and developing policies to reduce overall CO2 emissions? Markets have been defined as being central to achieving all of Europe's energy goals - both the creation of an EU internal energy market and the use of the Emissions Trading System (ETS) to allow a market for managing a portion of the continent's greenhouse gas emissions. But once these markets are in place and operational, there will still be great variances among the goals, instruments, and level of market integration available for the different countries and regions of Europe. Choosing the most cost effective mechanisms for pursuing the new goals will require effective use of the flexibility that is available - an improved ETS, tradable national targets for non-ETS emissions, and a rapidly widening array of cost-effective renewable energy options. Sufficient use of this flexibility should facilitate the flow of energy investments toward energy system improvements where there is low-hanging fruit - anywhere in the continent - without requiring that local or continental energy security goals be sacrificed. (author).

  4. The influence of biopreparations on the reduction of energy consumption and CO2 emissions in shallow and deep soil tillage.

    Science.gov (United States)

    Naujokienė, Vilma; Šarauskis, Egidijus; Lekavičienė, Kristina; Adamavičienė, Aida; Buragienė, Sidona; Kriaučiūnienė, Zita

    2018-06-01

    The application of innovation in agricultural technologies is very important for increasing the efficiency of agricultural production, ensuring high plant productivity, production quality, farm profitability, a positive balance of energy used, and compliance with environmental protection requirements. A persistent scientific problem is that solid soil surfaces covered with plant residue have a negative impact on the work, traction resistance, energy consumption, and environmental pollution of tillage machines. The objective of this work was to determine how the reduction of energy consumption and CO2 gas emissions depends on different biopreparations. Experimental research was carried out on a control (SC1) and seven scenarios using different biopreparations (SC2-SC8), applying bacterial and non-bacterial biopreparations in different consistencies (with essential and mineral oils, extracts of various grasses and sea algae, phosphorus, potassium, humic and gibberellic acids, copper, zinc, manganese, iron, and calcium), with discing and plowing taken as the energy consumption parameters of shallow and deep soil tillage machines, respectively. CO2 emissions were determined by evaluating soil characteristics (such as hardness, total porosity and density). Meteorological conditions, such as average daily temperatures (2015-20.3 °C; 2016-16.90 °C) and precipitation (2015-6.9 mm; 2016-114.9 mm) during the month, strongly influenced the differing results in 2015 and 2016. Substantial differences in average energy consumption were identified in approximately 62% of the scenarios created from biological preparation combinations. Experimental research established that crop field treatments with biological preparations at the beginning of vegetation could reduce the energy consumption of shallow tillage machines by up to approximately 23%, whereas the energy consumption of deep tillage could be reduced by up to approximately 19.2% compared with the control

  5. Weight gain after subthalamic nucleus deep brain stimulation in Parkinson's disease is influenced by dyskinesias' reduction and electrodes' position.

    Science.gov (United States)

    Balestrino, Roberta; Baroncini, Damiano; Fichera, Mario; Donofrio, Carmine Antonio; Franzin, Alberto; Mortini, Pietro; Comi, Giancarlo; Volontè, Maria Antonietta

    2017-12-01

    Parkinson's disease is a common neurodegenerative disease that can be treated with pharmacological or surgical therapy. Subthalamic nucleus (STN) deep brain stimulation (DBS) is a commonly used surgical option. A reported side effect of STN-DBS is weight gain: the aim of our study was to identify the factors that determine weight gain, through a year-long observation of 32 patients who underwent surgery in our centre. During the follow-up, we considered: anthropometric features, hormonal levels, motor outcome, neuropsychological and quality-of-life outcomes, therapeutic parameters and electrode position. The majority (84%) of our patients gained weight (6.7 kg in 12 months); more than half of the cohort became overweight. At the 12th month, weight gain showed a correlation with dyskinesia reduction, electrode voltage and distance on the lateral axis. In the multivariate regression analysis, the determinants of weight gain were dyskinesia reduction and electrode position. In this study, we identified dyskinesia reduction and the distance between the active electrodes and the third ventricle as determining factors of weight gain after STN-DBS implantation in PD patients. The first finding could be linked to a decrease in energy consumption, while the second could be due to a lower stimulation of the lateral hypothalamic area, known for its important role in metabolism and body weight control. Weight gain is a common finding after STN-DBS implantation, and it should be carefully monitored given the potentially harmful consequences of excess weight.
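
The multivariate regression step mentioned above can be illustrated with a minimal ordinary-least-squares sketch. Everything below is synthetic and illustrative: the predictor names merely echo the study's candidate factors, and NumPy's `lstsq` stands in for whatever statistics package was actually used.

```python
import numpy as np

# Illustrative multivariate linear regression (OLS) relating an outcome
# (weight gain, kg) to candidate predictors. Data are made up.
rng = np.random.default_rng(0)
n = 32  # cohort size, as in the abstract
X = np.column_stack([
    np.ones(n),          # intercept
    rng.normal(size=n),  # "dyskinesia reduction" (standardized, hypothetical)
    rng.normal(size=n),  # "lateral electrode distance" (standardized, hypothetical)
])
true_beta = np.array([6.7, 1.5, -0.8])
y = X @ true_beta        # noise-free for the sketch

# Least-squares fit recovers the coefficients
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With real data one would add a noise term and inspect p-values or confidence intervals, which plain `lstsq` does not provide.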

  6. Possible pathways for dealing with Japan's post-Fukushima challenge and achieving CO2 emission reduction targets in 2030

    International Nuclear Information System (INIS)

    Su, Xuanming; Zhou, Weisheng; Sun, Faming; Nakagami, Ken'Ichi

    2014-01-01

    Considering the unclear nuclear future of Japan after the Fukushima Dai-ichi nuclear power plant accident of Mar. 11, 2011, this study assesses a series of energy consumption scenarios for Japan in 2030, including a reference scenario, nuclear-limited scenarios and a current-nuclear-use-level scenario, using the G-CEEP (Glocal Century Energy Environment Planning) model. The simulation result for each scenario is first presented in terms of primary energy consumption, electricity generation, CO2 emission, marginal abatement cost and GDP (gross domestic product) loss. According to the results, energy saving contributes the biggest share of total CO2 emission reduction, regardless of the nuclear use level and the CO2 emission reduction level. A certain amount of coal generation can be retained in the nuclear-limited scenarios owing to the application of CCS (carbon capture and storage). The discussion indicates that Japan needs to improve energy use efficiency, increase renewable energy and introduce CCS in order to reduce its dependence on nuclear power and achieve its CO2 emission reduction target in 2030. In addition, it is ambitious for Japan to achieve the zero-nuclear scenario with a 30% CO2 emission reduction, which would entail a marginal abatement cost of 383 USD/tC and up to a −2.54% GDP loss relative to the reference scenario. In dealing with the nuclear power issue, Japan faces a challenge as well as an opportunity. - Highlights: • Nuclear use limited and carbon emission reduction scenarios for Japan in 2030. • Contributions of different abatement options to carbon emissions. • CCS for reducing dependence on nuclear power

  7. Reduction in training time of a deep learning model in detection of lesions in CT

    Science.gov (United States)

    Makkinejad, Nazanin; Tajbakhsh, Nima; Zarshenas, Amin; Khokhar, Ashfaq; Suzuki, Kenji

    2018-02-01

    Deep learning (DL) has emerged as a powerful tool for object detection and classification in medical images. Building a well-performing DL model, however, requires a huge number of images for training, and it takes days to train a DL model even on a cutting-edge high-performance computing platform. This study is aimed at developing a method for selecting a "small" number of representative samples from a large collection of training samples to train a DL model that can be used to detect polyps in CT colonography (CTC), without compromising the classification performance. Our proposed method for representative sample selection (RSS) consists of a K-means clustering algorithm. For the performance evaluation, we applied the proposed method to select samples for the training of a massive-training artificial neural network based DL model, to be used for the classification of polyps and non-polyps in CTC. Our results show that the proposed method reduces the training time by a factor of 15, while maintaining classification performance equivalent to that of the model trained using the full training set. We compared the performance using the area under the receiver-operating-characteristic curve (AUC).
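
As a rough illustration of the K-means-based representative sample selection (RSS) idea, the sketch below clusters feature vectors and keeps the sample nearest each centroid. The feature dimensionality, the value of `k`, and the nearest-to-centroid selection rule are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def select_representatives(features, k, n_iter=50, seed=0):
    """Cluster samples with a plain K-means loop and keep, per cluster,
    the index of the sample closest to the centroid."""
    rng = np.random.default_rng(seed)
    X = np.asarray(features, dtype=float)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # distances: (n_samples, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):                 # guard against empty clusters
                centroids[j] = members.mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return sorted({int(d[:, j].argmin()) for j in range(k)})

# e.g. 200 hypothetical samples with 16 features each, reduced to <= 12
X = np.random.default_rng(1).normal(size=(200, 16))
idx = select_representatives(X, k=12)
```

Only the samples at the returned indices would then be used to train the detector, in place of the full set.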

  8. Emissions trading potential: achieving emission reductions in a cost-effective manner

    International Nuclear Information System (INIS)

    Fay, K.

    1998-01-01

    The issue of emissions trading as a viable tool for developed countries to reduce greenhouse gas emissions was discussed. The essence of the author's argument was that emissions trading alone will not solve the climate change problem and that the details of the program are hazy at best. In order to have any hope of meeting the emission reduction targets, it is essential to begin working out the details now, and to coordinate them with the Clean Development Mechanism (CDM) and Joint Implementation (JI) plans, since all three of these flexibility mechanisms will operate alongside one another and therefore need to be consistent. Work on a general set of draft principles by the International Climate Change Partnership (ICCP), a coalition headquartered in Washington, DC, was summarized. Essentially, the ICCP favors voluntary programs, incentives for participation, no quantitative limits on trading, and no limits on sources and sinks. The ICCP believes that trading should be allowed at the company level and that liability should not devolve on the buyer alone; rather, it should be negotiated between buyers and sellers. Credits for early action should also be tradable and, above all, the trading program should be simple, to allow active participation by industry, and free of bureaucratic impediments

  9. Deep carbon reductions in California require electrification and integration across economic sectors

    International Nuclear Information System (INIS)

    Wei, Max; Greenblatt, Jeffery B; McMahon, James E; Nelson, James H; Mileva, Ana; Johnston, Josiah; Jones, Chris; Kammen, Daniel M; Ting, Michael; Yang, Christopher

    2013-01-01

    Meeting a greenhouse gas (GHG) reduction target of 80% below 1990 levels in the year 2050 requires detailed long-term planning due to complexity, inertia, and path dependency in the energy system. A detailed investigation of supply and demand alternatives is conducted to assess requirements for future California energy systems that can meet the 2050 GHG target. Two components are developed here that build novel analytic capacity and extend previous studies: (1) detailed bottom-up projections of energy demand across the building, industry and transportation sectors; and (2) a high-resolution variable renewable resource capacity planning model (SWITCH) that minimizes the cost of electricity while meeting GHG policy goals in the 2050 timeframe. Multiple pathways exist to a low-GHG future, all involving increased efficiency, electrification, and a dramatic shift from fossil fuels to low-GHG energy. The electricity system is found to have a diverse, cost-effective set of options that meet aggressive GHG reduction targets. This conclusion holds even with increased demand from transportation and heating, but the optimal levels of wind and solar deployment depend on the temporal characteristics of the resulting load profile. Long-term policy support is found to be a key missing element for the successful attainment of the 2050 GHG target in California. (letter)
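
The core optimization in a capacity-planning model such as SWITCH, minimizing the cost of electricity subject to a GHG constraint, can be caricatured with a toy two-technology search. All numbers below are invented placeholders; a real model solves a far larger linear program over many resources, hours, and years.

```python
# Toy cost-minimizing generation mix under an emissions cap (illustrative
# only; not SWITCH). Units: energy in TWh/yr, emissions in MtCO2/yr,
# cost in $M per TWh. All values are made up.
DEMAND = 100.0                            # annual demand to be met
CAP = 10.0                                # allowed emissions
COST = {"gas": 60.0, "wind": 80.0}        # generation cost
EMIS = {"gas": 0.4, "wind": 0.0}          # emissions intensity

best = None                               # (cost, gas TWh, wind TWh)
for gas in range(0, int(DEMAND) + 1):     # brute-force search in 1 TWh steps
    wind = DEMAND - gas
    if gas * EMIS["gas"] > CAP:           # emissions cap binds
        continue
    cost = gas * COST["gas"] + wind * COST["wind"]
    if best is None or cost < best[0]:
        best = (cost, gas, wind)
```

Because the cheaper technology (gas) is emissions-limited, the optimum uses exactly as much of it as the cap allows (25 TWh) and fills the rest with wind, mirroring how a cap pushes the least-cost mix toward low-GHG resources.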

  10. Some Remarks on Practical Aspects of Laboratory Testing of Deep Soil Mixing Composites Achieved in Organic Soils

    Science.gov (United States)

    Kanty, Piotr; Rybak, Jarosław; Stefaniuk, Damian

    2017-10-01

    This paper presents the results of laboratory testing of organic soil-cement samples. The research program continues the authors' previously reported experience with cement-fly ash-soil sample testing. Over 100 compression tests and a dozen tension tests have been carried out altogether. Several samples awaited failure testing for over a year after they were formed. Several factors - the large number of tested samples, the long observation time, carrying out the tests in complex loading cycles, and the possibility of registering loads and deformations in the axial and lateral directions - have made it possible to take into consideration numerous interdependencies, three of which are presented in this work: the increase of compression strength, the stiffness of soil-cement in relation to strength, and the tensile strength. Compressive strength, elastic modulus and tensile resistance of cubic samples were examined. Samples were mixed and stored in laboratory conditions. Numerical analyses in the finite element code Z_Soil were then performed on the basis of the laboratory test results. Computations prove that cement-based stabilization of organic soil brings serious risks (in terms of material capacity and stiffness), and Deep Soil Mixing technology should not be recommended for it. The numerical analysis presented in the study below includes only one type of organic and sandy soil and several possible geometric combinations. Despite that, it clearly points to the fact that designing DSM columns in organic soil may be linked with considerable risk and that settlement may reach excessive values. During in situ mixing, the organic material surrounded by sand layers surely mixes with it in certain areas. However, this has not been examined, and it is difficult to assume such mixing already at the designing stage. In case of designing the DSM columns which goes through a

  11. High frequency of phylogenetically diverse reductive dehalogenase-homologous genes in deep subseafloor sedimentary metagenomes

    Directory of Open Access Journals (Sweden)

    Mikihiko eKawai

    2014-03-01

    Marine subsurface sediments on the Pacific margin harbor diverse microbial communities, even at depths of several hundred meters below the seafloor (mbsf) or more. Previous PCR-based molecular analysis showed the presence of diverse reductive dehalogenase gene (rdhA) homologs in marine subsurface sediment, suggesting that anaerobic respiration of organohalides is one of the possible energy-yielding pathways in this organic-rich sedimentary habitat. However, primer-independent molecular characterization of rdhA had remained to be demonstrated. Here, we studied the diversity and frequency of rdhA homologs by metagenomic analysis of five different depth horizons (0.8, 5.1, 18.6, 48.5 and 107.0 mbsf) at Site C9001 off the Shimokita Peninsula of Japan. In all metagenomic pools, remarkably diverse rdhA-homologous sequences, some of which are affiliated with novel clusters, were observed with high frequency. As a comparison, we also examined the frequency of dissimilatory sulfite reductase genes (dsrAB), key functional genes for microbial sulfate reduction. The dsrAB genes were also widely observed in the metagenomic pools, although their frequency was generally lower than that of rdhA-homologous genes. The phylogenetic composition of rdhA-homologous genes was similar among the five depth horizons. Our metagenomic data revealed that subseafloor rdhA homologs are more diverse than previously identified by PCR-based molecular studies. The spatial distribution of similar rdhA homologs across wide depositional ages indicates that the heterotrophic metabolic processes mediated by these genes can be ecologically important, functioning in the organic-rich subseafloor sedimentary biosphere.

  12. Automated detection of masses on whole breast volume ultrasound scanner: false positive reduction using deep convolutional neural network

    Science.gov (United States)

    Hiramatsu, Yuya; Muramatsu, Chisako; Kobayashi, Hironobu; Hara, Takeshi; Fujita, Hiroshi

    2017-03-01

    Breast cancer screening with mammography and ultrasonography is expected to improve sensitivity compared with mammography alone, especially for women with dense breasts. An automated breast volume scanner (ABVS) provides operator-independent whole-breast data which facilitate double reading and comparison with past exams, the contralateral breast, and multimodality images. However, the large volumetric data in screening practice increase radiologists' workload. Therefore, our goal is to develop a computer-aided detection scheme for breast masses in ABVS data to assist radiologists' diagnosis and comparison with mammographic findings. In this study, a false positive (FP) reduction scheme using a deep convolutional neural network (DCNN) was investigated. For training the DCNN, true positive and FP samples were obtained from the result of our initial mass detection scheme using the vector convergence filter. Regions of interest including the detected regions were extracted from the multiplanar reconstruction slices. We investigated methods to select effective FP samples for training the DCNN. Based on the free-response receiver operating characteristic analysis, simple random sampling from the entire candidate set was most effective in this study. Using the DCNN, the number of FPs could be reduced by 60%, while retaining 90% of true masses. The result indicates the potential usefulness of DCNN for FP reduction in automated mass detection on ABVS images.

  13. Thermochemical sulfate reduction in deep petroleum reservoirs: a molecular approach; Thermoreduction des sulfates dans les reservoirs petroliers: approche moleculaire

    Energy Technology Data Exchange (ETDEWEB)

    Hanin, S.

    2002-11-01

    The thermochemical sulfate reduction (TSR) is a set of chemical reactions leading to hydrocarbon oxidation and production of carbon dioxide and sour gas (H{sub 2}S), observed in deep petroleum reservoirs enriched in anhydrite (calcium sulfate). Molecular and isotopic studies were conducted on several crude oil samples to determine which types of compounds could have been produced during TSR. We have shown that the main molecules formed by TSR are organo-sulfur compounds. Indeed, sulfur isotopic measurements of alkyl-dibenzothiophenes, diaryl disulfides and thia-diamondoids (identified by NMR or by synthesis of standards) show that they are formed during TSR, as their values approach that of the anhydrite sulfur. Moreover, thia-diamondoids are apparently formed exclusively during this phenomenon and can thus be considered true molecular markers of TSR. In a second part, we investigated the formation mechanism of the molecules produced during TSR through laboratory experiments. A first model showed that sulfur incorporation into the organic matter occurs via mineral sulfur species of low oxidation degree. The use of {sup 34}S made it possible to show that sulfate reduction occurred during these simulations. Finally, experiments on polycyclic hydrocarbons, sulfurized or not, established that thia-diamondoids could be formed by acid-catalysed rearrangements at high temperatures, in a way similar to the diamondoids. (author)

  14. Radiation dose reduction in digital breast tomosynthesis (DBT) by means of deep-learning-based supervised image processing

    Science.gov (United States)

    Liu, Junchi; Zarshenas, Amin; Qadir, Ammar; Wei, Zheng; Yang, Limin; Fajardo, Laurie; Suzuki, Kenji

    2018-03-01

    To reduce cumulative radiation exposure and lifetime risks for radiation-induced cancer from breast cancer screening, we developed a deep-learning-based supervised image-processing technique called neural network convolution (NNC) for radiation dose reduction in DBT. NNC employed patch-based neural network regression in a convolutional manner to convert lower-dose (LD) to higher-dose (HD) tomosynthesis images. We trained our NNC with quarter-dose (25% of the standard dose: 12 mAs at 32 kVp) raw projection images and corresponding "teaching" higher-dose (HD) images (200% of the standard dose: 99 mAs at 32 kVp) of a breast cadaver phantom acquired with a DBT system (Selenia Dimensions, Hologic, CA). Once trained, NNC no longer requires HD images. It converts new LD images to images that look like HD images; thus the term "virtual" HD (VHD) images. We reconstructed tomosynthesis slices on a research DBT system. To determine a dose reduction rate, we acquired 4 studies of another test phantom at 4 different radiation doses (1.35, 2.7, 4.04, and 5.39 mGy entrance dose). The Structural SIMilarity (SSIM) index was used to evaluate image quality. For testing, we collected half-dose (50% of the standard dose: 32+/-14 mAs at 33+/-5 kVp) and full-dose (standard dose: 68+/-23 mAs at 33+/-5 kVp) images of 10 clinical cases with the DBT system at University of Iowa Hospitals and Clinics. NNC converted the half-dose DBT images of the 10 clinical cases to VHD DBT images that were equivalent to full-dose DBT images. Our cadaver phantom experiment demonstrated a 79% dose reduction.
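
The SSIM index used above for image-quality evaluation has a standard closed form. Below is a minimal single-window variant: the published metric is normally computed over local (e.g. 11x11 Gaussian) windows and averaged, so this global version is a simplification; the constants follow the conventional K1=0.01, K2=0.03 choices.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM between two equally sized images.

    SSIM = ((2*mu_x*mu_y + C1)(2*cov_xy + C2)) /
           ((mu_x^2 + mu_y^2 + C1)(var_x + var_y + C2))
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# hypothetical demo images
ref = np.linspace(0, 1, 64).reshape(8, 8)
score_same = ssim_global(ref, ref)                 # identical -> ~1.0
score_degraded = ssim_global(ref, np.zeros_like(ref))
```

For real evaluations, a windowed implementation such as scikit-image's `structural_similarity` would be the usual choice.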

  15. How to achieve emission reductions in Germany and the European Union. Energy policy, RUE with cross cutting technologies, Pinch technology

    Energy Technology Data Exchange (ETDEWEB)

    Radgen, P.

    1999-10-01

    The German presentations will cover three main topics: (1) Energy policy at the national level and in the European Community. (2) Rational use of energy and efficiency improvements through cross-cutting technologies. (3) Optimizing heat recovery and heat recovery networks with Pinch technology. The actual development of carbon dioxide emissions, and scenarios forecasting future development, will be presented. It will be shown that long-term agreements are widely used in the EC to obtain a reduction of emissions. Specific attention will also be paid to burden sharing in the EC and to the other GHGs. In the second part, efficiency improvement through cross-cutting technologies will be discussed for furnaces, waste heat recovery, electric motors, compressed air systems, cooling systems, lighting and heat pumps. Most of these improvement potentials are economic at present energy prices, but some barriers to their application have to be overcome, which will be discussed. In the last part, a systematic method for the optimization of heat recovery networks is presented. Pinch technology, developed in the late seventies, is an easy and reliable way to quickly obtain good insight into the heat flows of a process. The basics of Pinch technology will be presented with a simple example and an in-depth analysis of a fertilizer complex. (orig.)
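
The "problem table" cascade at the heart of Pinch technology can be shown in a few lines: shift stream temperatures by dTmin/2, balance heat in each temperature interval, and cascade the surpluses downward; the most negative point fixes the minimum hot utility and the pinch. The two streams and dTmin below are invented for illustration.

```python
# Minimal problem-table (cascade) calculation for Pinch analysis.
# Temperatures in deg C, heat-capacity flowrate CP in kW/K. Toy data.
DTMIN = 10.0
hot = [(170.0, 60.0, 3.0)]    # (supply T, target T, CP)
cold = [(30.0, 170.0, 2.0)]

# Shift hot streams down and cold streams up by dTmin/2; sign marks direction.
shifted = [(ts - DTMIN/2, tt - DTMIN/2, cp, +1) for ts, tt, cp in hot] + \
          [(ts + DTMIN/2, tt + DTMIN/2, cp, -1) for ts, tt, cp in cold]

bounds = sorted({t for ts, tt, _, _ in shifted for t in (ts, tt)}, reverse=True)

# Cascade interval heat balances from the hottest boundary downwards.
cascade = [0.0]
for hi, lo in zip(bounds, bounds[1:]):
    net_cp = sum(sign * cp for ts, tt, cp, sign in shifted
                 if min(ts, tt) <= lo and max(ts, tt) >= hi)
    cascade.append(cascade[-1] + net_cp * (hi - lo))

q_hot_min = max(0.0, -min(cascade))    # minimum hot utility, kW
q_cold_min = cascade[-1] + q_hot_min   # minimum cold utility, kW
pinch_shifted = bounds[cascade.index(min(cascade))]  # pinch (shifted T)
```

For these toy streams the cascade gives a 20 kW minimum hot utility, a 70 kW minimum cold utility, and a pinch at a shifted temperature of 165 deg C (175 deg C hot side / 155 deg C cold side... here 170/160 in unshifted terms).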

  16. A New Trend in the Management of Pediatric Deep Neck Abscess: Achievement of the Medical Treatment Alone.

    Science.gov (United States)

    Çetin, Aslı Çakır; Olgun, Yüksel; Özses, Arif; Erdağ, Taner Kemal

    2017-06-01

    Despite the traditional opinion that advocates routine surgical drainage for the treatment of an abscess, case series presenting high success rates for medical therapy alone are increasingly reported for deep neck abscesses of childhood. This research focuses on children whose deep neck abscesses fully resolved after medical treatment alone. In a retrospective study, we evaluated the medical records of 12 pediatric (<18 years old) cases diagnosed with deep neck abscess, or abscess accompanying suppurative lymphadenitis, and treated with medical therapy alone between 2010 and 2015, for age, gender, treatment modality, parameters related to antimicrobial agents, location of the infection, etiology, symptoms, duration of hospital stay, characteristics of the radiological and biochemical examination findings, and complications. The mean age of the 10 male and two female children was 5.9 years (range, 1-17 years). The mean values of white blood cell (WBC) count, C-reactive protein, and erythrocyte sedimentation rate were 18,050/μL, 99.8 mg/L, and 73.1 mm/h at baseline, and 8,166/μL, 34.1 mg/L, and 35.3 mm/h at the last control, respectively. Contrast-enhanced neck computed tomography demonstrated an abscess in seven cases and an abscess containing suppurative lymphadenitis in five cases. The largest diameter of the abscess was 41 mm. All cases were given broad-spectrum empirical antibiotherapy (penicillin+metronidazole, ceftriaxone+metronidazole, or clindamycin). No medical treatment failure was experienced. Independent of age and abscess size, if the baseline WBC count is ≤25,200/μL, if only two or fewer cervical compartments are involved, if there are no complications at admission, and if the etiology is not a previous history of trauma, surgery, foreign body, or malignancy, pediatric deep neck abscess can be treated successfully with parenteral empirical wide-spectrum antibiotherapy.

  17. Report of the working group on achieving a fourfold reduction in greenhouse gas emissions in France by 2050

    International Nuclear Information System (INIS)

    2006-01-01

    Achieving a fourfold reduction in greenhouse gas emissions by 2050 is an ambitious and voluntary objective for France that combines many different aspects (technical, technological, economic, social) against a backdrop of important issues and choices for public policy-makers. This document is the bilingual version of the Factor 4 group report. It discusses the Factor 4 objectives, the different proposed scenarios and the main lessons learned, the strategies to support the Factor 4 objectives (fostering changes in behavior and defining the role of public policies), the Factor 4 objective in international and European contexts (experience abroad, strategic behavior, constraints and opportunities, particularly in Europe), and recommendations. (A.L.B.)

  18. The BOS-X approach: achieving drastic cost reduction in CPV through holistic power plant level innovation

    Science.gov (United States)

    Plesniak, A.; Garboushian, V.

    2012-10-01

    In 2011, the Amonix Advanced Technology Group was awarded DOE SunShot funding in the amount of $4.5M to design a new Balance of System (BOS) architecture utilizing Amonix MegaModules™, focused on reaching the SunShot goal of $0.06-$0.08/kWh LCOE. The project proposal presented a comprehensive re-evaluation of the cost components of a utility-scale CPV plant and identified critical areas of focus where innovation is needed to achieve cost reduction. As the world's premier manufacturer and most experienced installer of CPV power plants, Amonix is uniquely qualified to lead a rethinking of BOS architecture for CPV. The presentation will focus on the structure of the BOS-X approach, which looks for the next wave of cost reduction in CPV through evaluation of non-module subsystems and the interactions between subsystems during the lifecycle of a solar power plant. Innovation around non-module components is minimal to date because CPV companies are only now getting enough practice, through the completion of large projects, to generate and test ideas on how to improve baseline designs and processes. As CPV companies increase their installed capacity, they can utilize an approach similar to the BOS-X methodology to increase the competitiveness of their product. Through partnership with DOE, this holistic approach is expected to define a path for CPV well aligned with the goals of the SunShot Initiative.

  19. Reduction of MRI acoustic noise achieved by manipulation of scan parameters – A study using veterinary MR sequences

    International Nuclear Information System (INIS)

    Baker, Martin A.

    2013-01-01

    Sound pressure levels (SPLs) were measured within an MR scan room for a range of sequences employed in veterinary brain scanning, using a test phantom in an extremity coil. Variation of TR and TE, and use of a quieter gradient mode ('whisper' mode), were evaluated to determine their effect on SPLs. Use of a human head coil and a human brain sequence was also evaluated. Significant reductions in SPL were achieved for T2, T1, T2* gradient echo and VIBE sequences by varying TR or TE, or by selecting the 'whisper' gradient mode. An appreciable reduction was achieved for the FLAIR sequence. Noise levels were not affected when a head coil was used in place of an extremity coil. Due to the sequence parameters employed, veterinary patients and anaesthetists may be exposed to higher sound levels than those experienced in human MR examinations. The techniques described are particularly valuable in small animal MR scanning, where ear protection is not routinely provided for the patient.
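
Sound pressure level in decibels is defined relative to the 20 µPa threshold of hearing; a one-line conversion from measured RMS pressure (the example pressures are hypothetical, not the study's measurements):

```python
import math

P_REF = 20e-6  # Pa, reference pressure for dB SPL (threshold of hearing)

def spl_db(p_rms_pa):
    """Convert an RMS acoustic pressure in pascals to dB SPL."""
    return 20.0 * math.log10(p_rms_pa / P_REF)

quiet = spl_db(20e-6)  # pressure equal to the reference -> 0 dB SPL
loud = spl_db(0.2)     # 0.2 Pa RMS -> 80 dB SPL
```

Because the scale is logarithmic, halving the pressure (e.g. by a quieter gradient mode) lowers the level by about 6 dB.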

  20. An intensive nurse-led, multi-interventional clinic is more successful in achieving vascular risk reduction targets than standard diabetes care.

    LENUS (Irish Health Repository)

    MacMahon Tone, J

    2009-06-01

    The aim of this research was to determine whether an intensive, nurse-led clinic could achieve recommended vascular risk reduction targets in patients with type 2 diabetes as compared to standard diabetes management.

  1. Solutions and business opportunities to achieve emissions reduction objectives; Solutions et opportunites d'affaires pour atteindre vos objectifs de reduction d'emissions

    Energy Technology Data Exchange (ETDEWEB)

    Lefebvre, J.-F. [GRAME, Montreal, PQ (Canada)

    2003-07-01

    Launched in 1989, GRAME is a non-profit organization dedicated to the development of management and analysis tools for sustainable development. It is involved in greenhouse gas (GHG) management, sustainable transportation, sustainable economy and the environment. The author reviewed the events that led to the development of the Kyoto Protocol. The principal factor influencing the development of the Kyoto Protocol was climate change and global warming, caused in large part by GHG emissions. The ratification of the Kyoto Protocol presents business opportunities in both Canada and Quebec, such as the export of products and services based on the expertise developed in the country; promotion of sustainable transportation options; and, investment in energy efficiency and energy substitution. An action plan must include a combination of complementary measures, take into consideration the impact of those measures on the life cycle, and compare the options offering an equivalent service. There are a number of companies that can assist in energy efficiency and energy demand management, including Hydro-Quebec, Gaz Metropolitain, and the Agence de l'efficacite energetique. GRAME has launched an awareness campaign called ClimAction, designed to promote the implementation of concrete measures for the reduction of GHG emissions and energy savings. Successes have been achieved in the renewable energy sector, such as the Paix des Braves agreement between the Inuit and the Quebec government, Hydro-Quebec's commitment to buy 1000 megawatts of wind energy in 10 years, and recent advances in solar water and space heating. The remaining challenges include: government support, separation of duties, and regulations. The factors influencing a Canadian emissions trading market are price of permits, withdrawal of permits, allocation, and when to begin implementation. tabs., figs.

  2. Beyond Measurement-Driven Instruction: Achieving Deep Learning Based on Constructivist Learning Theory, Integrated Assessment, and a Flipped Classroom Approach

    Science.gov (United States)

    Bernauer, James A.; Fuller, Richard G.

    2017-01-01

    The authors focus on the critical role of assessment within a flipped classroom environment where instruction is based on constructivist learning theory and where desired student outcomes are at the higher levels of Bloom's Taxonomy. While assessment is typically thought of in terms of providing summative measures of performance or achievement, it…

  3. Homoacetogenesis in Deep-Sea Chloroflexi, as Inferred by Single-Cell Genomics, Provides a Link to Reductive Dehalogenation in Terrestrial Dehalococcoidetes.

    Science.gov (United States)

    Sewell, Holly L; Kaster, Anne-Kristin; Spormann, Alfred M

    2017-12-19

    The deep marine subsurface is one of the largest unexplored biospheres on Earth and is widely inhabited by members of the phylum Chloroflexi. In this report, we investigated genomes of single cells obtained from deep-sea sediments of the Peruvian Margin, which are enriched in such Chloroflexi. 16S rRNA gene sequence analysis placed two of these single-cell-derived genomes (DscP3 and Dsc4) in a clade of subphylum I Chloroflexi previously recovered from deep-sea sediment in the Okinawa Trough, and a third (DscP2-2) as a member of the previously reported DscP2 population from Peruvian Margin site 1230. The presence of genes encoding enzymes of a complete Wood-Ljungdahl pathway, glycolysis/gluconeogenesis, a Rhodobacter nitrogen fixation (Rnf) complex, glycosyltransferases, and formate dehydrogenases in the single-cell genomes of DscP3 and Dsc4, and the presence of an NADH-dependent reduced ferredoxin:NADP oxidoreductase (Nfn) and Rnf in the genome of DscP2-2, imply a homoacetogenic lifestyle of these abundant marine Chloroflexi. We also report here the first complete pathway for anaerobic benzoate oxidation to acetyl coenzyme A (CoA) in the phylum Chloroflexi (DscP3 and Dsc4), including a class I benzoyl-CoA reductase. Of remarkable evolutionary significance, we discovered a gene encoding a formate dehydrogenase (FdnI) with reciprocal closest identity to the formate dehydrogenase-like protein (complex iron-sulfur molybdoenzyme [CISM], DET0187) of terrestrial Dehalococcoides/Dehalogenimonas spp. This formate dehydrogenase-like protein has been shown to lack formate dehydrogenase activity in Dehalococcoides/Dehalogenimonas spp. and is instead hypothesized to couple the HupL hydrogenase to a reductive dehalogenase in the catabolic reductive dehalogenation pathway. This finding of a close functional homologue provides an important missing link for understanding the origin and the metabolic core of terrestrial Dehalococcoides/Dehalogenimonas spp. and of reductive

  4. Cardiac and pulmonary dose reduction for tangentially irradiated breast cancer, utilizing deep inspiration breath-hold with audio-visual guidance, without compromising target coverage

    International Nuclear Information System (INIS)

    Vikstroem, Johan; Hjelstuen, Mari H.B.; Mjaaland, Ingvil; Dybvik, Kjell Ivar

    2011-01-01

    Background and purpose. Cardiac disease and pulmonary complications are documented risk factors in tangential breast irradiation. Respiratory gating radiotherapy provides a possibility to substantially reduce cardiopulmonary doses. This CT planning study quantifies the reduction of radiation doses to the heart and lung, using deep inspiration breath-hold (DIBH). Patients and methods. Seventeen patients with early breast cancer, referred for adjuvant radiotherapy, were included. For each patient two CT scans were acquired; the first during free breathing (FB) and the second during DIBH. The scans were monitored by the Varian RPM respiratory gating system. Audio coaching and visual feedback (audio-visual guidance) were used. The treatment planning of the two CT studies was performed with conformal tangential fields, focusing on good coverage (V95>98%) of the planning target volume (PTV). Dose-volume histograms were calculated and compared. Doses to the heart, left anterior descending (LAD) coronary artery, ipsilateral lung and the contralateral breast were assessed. Results. Compared to FB, the DIBH-plans obtained lower cardiac and pulmonary doses, with equal coverage of PTV. The average mean heart dose was reduced from 3.7 to 1.7 Gy and the number of patients with >5% heart volume receiving 25 Gy or more was reduced from four to one of the 17 patients. With DIBH the heart was completely out of the beam portals for ten patients, with FB this could not be achieved for any of the 17 patients. The average mean dose to the LAD coronary artery was reduced from 18.1 to 6.4 Gy. The average ipsilateral lung volume receiving more than 20 Gy was reduced from 12.2 to 10.0%. Conclusion. Respiratory gating with DIBH, utilizing audio-visual guidance, reduces cardiac and pulmonary doses for tangentially treated left sided breast cancer patients without compromising the target coverage
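
Dose-volume metrics like those quoted above (the heart volume receiving 25 Gy or more, the lung volume above 20 Gy) read directly off the cumulative dose-volume histogram. A minimal sketch of that computation, assuming equal-volume voxels and entirely made-up doses:

```python
import numpy as np

def v_dose(voxel_doses_gy, threshold_gy):
    """Percent of a structure's volume receiving at least `threshold_gy`,
    assuming each voxel represents an equal volume."""
    d = np.asarray(voxel_doses_gy, dtype=float)
    return 100.0 * (d >= threshold_gy).mean()

# hypothetical per-voxel heart doses (Gy): 2 of 4 voxels are >= 25 Gy
heart_v25 = v_dose([0.5, 3.0, 26.0, 30.0], 25.0)
```

A plan comparison like the one in the abstract would evaluate such V-metrics on the free-breathing and DIBH dose grids and report the difference per structure.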

  5. Cardiac and pulmonary dose reduction for tangentially irradiated breast cancer, utilizing deep inspiration breath-hold with audio-visual guidance, without compromising target coverage

    Energy Technology Data Exchange (ETDEWEB)

    Vikstroem, Johan; Hjelstuen, Mari H.B.; Mjaaland, Ingvil; Dybvik, Kjell Ivar (Dept. of Radiotherapy, Stavanger Univ. Hospital, Stavanger (Norway)), e-mail: vijo@sus.no

    2011-01-15

    Background and purpose. Cardiac disease and pulmonary complications are documented risk factors in tangential breast irradiation. Respiratory gating radiotherapy provides a possibility to substantially reduce cardiopulmonary doses. This CT planning study quantifies the reduction of radiation doses to the heart and lung, using deep inspiration breath-hold (DIBH). Patients and methods. Seventeen patients with early breast cancer, referred for adjuvant radiotherapy, were included. For each patient two CT scans were acquired; the first during free breathing (FB) and the second during DIBH. The scans were monitored by the Varian RPM respiratory gating system. Audio coaching and visual feedback (audio-visual guidance) were used. The treatment planning of the two CT studies was performed with conformal tangential fields, focusing on good coverage (V95>98%) of the planning target volume (PTV). Dose-volume histograms were calculated and compared. Doses to the heart, left anterior descending (LAD) coronary artery, ipsilateral lung and the contralateral breast were assessed. Results. Compared to FB, the DIBH-plans obtained lower cardiac and pulmonary doses, with equal coverage of PTV. The average mean heart dose was reduced from 3.7 to 1.7 Gy and the number of patients with >5% heart volume receiving 25 Gy or more was reduced from four to one of the 17 patients. With DIBH the heart was completely out of the beam portals for ten patients, with FB this could not be achieved for any of the 17 patients. The average mean dose to the LAD coronary artery was reduced from 18.1 to 6.4 Gy. The average ipsilateral lung volume receiving more than 20 Gy was reduced from 12.2 to 10.0%. Conclusion. Respiratory gating with DIBH, utilizing audio-visual guidance, reduces cardiac and pulmonary doses for tangentially treated left sided breast cancer patients without compromising the target coverage

  6. Combination of deep eutectic solvent and ionic liquid to improve biocatalytic reduction of 2-octanone with Acetobacter pasteurianus GIM1.158 cell

    OpenAIRE

    Pei Xu; Peng-Xuan Du; Min-Hua Zong; Ning Li; Wen-Yong Lou

    2016-01-01

The efficient anti-Prelog asymmetric reduction of 2-octanone with Acetobacter pasteurianus GIM1.158 cells was successfully performed in a biphasic system consisting of deep eutectic solvent (DES) and water-immiscible ionic liquid (IL). Various DESs exerted different effects on the synthesis of (R)-2-octanol. Choline chloride/ethylene glycol (ChCl/EG) exhibited good biocompatibility and could moderately increase the cell membrane permeability, thus leading to better results. Adding ChCl/EG ...

  7. Identifying the appropriate time for deep brain stimulation to achieve spatial memory improvement on the Morris water maze.

    Science.gov (United States)

    Jeong, Da Un; Lee, Jihyeon; Chang, Won Seok; Chang, Jin Woo

    2017-03-07

    The possibility of using deep brain stimulation (DBS) for memory enhancement has recently been reported, but the precise underlying mechanisms of its effects remain unknown. Our previous study suggested that spatial memory improvement by medial septum (MS)-DBS may be associated with cholinergic regulation and neurogenesis. However, the affected stage of memory could not be distinguished because the stimulation was delivered during the execution of all memory processes. Therefore, this study was performed to determine the stage of memory affected by MS-DBS. Rats were administered 192 IgG-saporin to lesion cholinergic neurons. Stimulation was delivered at different times in different groups of rats: 5 days before the Morris water maze test (pre-stimulation), 5 days during the training phase of the Morris water maze test (training-stimulation), and 2 h before the Morris water maze probe test (probe-stimulation). A fourth group of rats was lesioned but received no stimulation. These four groups were compared with a normal (control) group. The most effective memory restoration occurred in the pre-stimulation group. Moreover, the pre-stimulation group exhibited better recall of the platform position than the other stimulation groups. An increase in the level of brain derived neurotrophic factor (BDNF) was observed in the pre-stimulation group; this increase was maintained for 1 week. However, acetylcholinesterase activity in the pre-stimulation group was not significantly different from the lesion group. Memory impairment due to cholinergic denervation can be improved by DBS. The improvement is significantly correlated with the up-regulation of BDNF expression and neurogenesis. Based on the results of this study, the use of MS-DBS during the early stage of disease may restore spatial memory impairment.

  8. How to Achieve CO2 Emission Reduction Goals by 2050. Abstracts of the 22nd Forum: Energy Day in Croatia

    International Nuclear Information System (INIS)

    2013-01-01

This year's annual Forum is the twenty-second held in a row. Analysing energy development to 2050 takes into consideration the nature and complexity of energy development, the long period needed to prepare plants and facilities, the long service life of plants, the dimensions of technological development and the continuous growth of energy demand. New reasons for a long-term view of energy development, at least to 2050, are climate change and the radical reduction of emissions of carbon dioxide and other greenhouse gases required by the EU Energy Policy to 2050. The impact of the permitted level of carbon dioxide emissions on energy production and consumption is drastic and fundamentally changes the structure of energy production and consumption. A new legal and economic approach, as well as new technological development and political determination, are required to implement an energy policy that should lead to a radical reduction of carbon dioxide emissions. Time is also an important factor, because postponing the definition of this new policy approach decreases the possibility of its realisation. An additional argument for new technological development is the mobilization of science and industry to achieve that new step in technological development, which is required in order to develop the energy industry with minimal or no greenhouse gas emissions. Furthermore, such development is very important for renewable energy, for carbon dioxide capture and storage, for energy efficiency across the whole range of activities (from production, transmission and distribution to the devices and equipment used by consumers), for smart grids, and for energy storage and vehicles. Reducing the energy consumption of buildings is an energy, economic, architectural and organizational project which has to include every commercial and residential building. It is important to eliminate all disadvantages that incurred in the period when

  9. Deep Energy Retrofit

    DEFF Research Database (Denmark)

    Zhivov, Alexander; Lohse, Rüdiger; Rose, Jørgen

Deep Energy Retrofit – A Guide to Achieving Significant Energy Use Reduction with Major Renovation Projects contains recommendations for characteristics of some of the core technologies and measures that are based on studies conducted by national teams associated with the International Energy Agency...... Energy Conservation in Buildings and Communities Program (IEA-EBC) Annex 61 (Lohse et al. 2016, Case, et al. 2016, Rose et al. 2016, Yao, et al. 2016, Dake 2014, Stankevica et al. 2016, Kiatreungwattana 2014). Results of these studies provided a base for setting minimum requirements to the building...... envelope-related technologies to make Deep Energy Retrofit feasible and, in many situations, cost effective. Use of energy efficiency measures (EEMs) in addition to core technologies bundle and high-efficiency appliances will foster further energy use reduction. This Guide also provides best practice...

  10. Interleaving subthalamic nucleus deep brain stimulation to avoid side effects while achieving satisfactory motor benefits in Parkinson disease: A report of 12 cases.

    Science.gov (United States)

    Zhang, Shizhen; Zhou, Peizhi; Jiang, Shu; Wang, Wei; Li, Peng

    2016-12-01

Deep brain stimulation (DBS) of the subthalamic nucleus is an effective treatment for advanced Parkinson disease (PD). However, achieving ideal outcomes by conventional programming can be difficult in some patients, resulting in suboptimal control of PD symptoms and stimulation-induced adverse effects. Interleaving stimulation (ILS) is a newer programming technique that can individually optimize the stimulation area, thereby improving control of PD symptoms while alleviating stimulation-induced side effects after conventional programming fails to achieve the desired results. We retrospectively reviewed PD patients who received DBS programming during the previous 4 years in our hospital. We collected clinical and demographic data from 12 patients who received ILS because of incomplete alleviation of PD symptoms or stimulation-induced adverse effects after conventional programming had proven ineffective or intolerable. Appropriate lead location was confirmed with postoperative reconstruction images. The rationale and clinical efficacy of ILS were analyzed. We divided our patients into 4 groups based on the following symptoms: stimulation-induced dysarthria and choreoathetoid dyskinesias, gait disturbance, and incomplete control of parkinsonism. After treatment with ILS, patients showed satisfactory improvement in PD symptoms and alleviation of stimulation-induced side effects, with a mean improvement in Unified PD Rating Scale motor scores of 26.9%. ILS is a newer and effective programming strategy for maximizing symptom control in PD while decreasing stimulation-induced adverse effects when conventional programming fails to achieve a satisfactory outcome. However, we should keep in mind that most DBS patients are routinely treated with conventional stimulation and that not all patients benefit from ILS. ILS is not recommended as the first choice of programming, and it is recommended only when patients have unsatisfactory control of PD symptoms or stimulation

  11. Achieving high field-effect mobility in amorphous indium-gallium-zinc oxide by capping a strong reduction layer.

    Science.gov (United States)

    Zan, Hsiao-Wen; Yeh, Chun-Cheng; Meng, Hsin-Fei; Tsai, Chuang-Chuang; Chen, Liang-Hao

    2012-07-10

An effective approach to reduce defects and increase electron mobility in a-IGZO thin-film transistors (a-IGZO TFTs) is introduced. A strong reduction layer, calcium, is capped onto the back interface of the a-IGZO TFT. After calcium capping, the effective electron mobility of the a-IGZO TFT increases from 12 cm^2 V^-1 s^-1 to 160 cm^2 V^-1 s^-1. This high mobility sets a new record and implies that the proposed defect reduction effect is key to improving electron transport in oxide semiconductor materials. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Developments in greenhouse gas emissions and net energy use in Danish agriculture - How to achieve substantial CO{sub 2} reductions?

    Energy Technology Data Exchange (ETDEWEB)

    Dalgaard, T., E-mail: tommy.dalgaard@agrsci.dk [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark); Olesen, J.E.; Petersen, S.O.; Petersen, B.M.; Jorgensen, U.; Kristensen, T.; Hutchings, N.J. [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark); Gyldenkaerne, S. [Aarhus University, National Environmental Research Institute, Frederiksborgvej 399, DK-4000 Roskilde (Denmark); Hermansen, J.E. [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark)

    2011-11-15

    Greenhouse gas (GHG) emissions from agriculture are a significant contributor to total Danish emissions. Consequently, much effort is currently given to the exploration of potential strategies to reduce agricultural emissions. This paper presents results from a study estimating agricultural GHG emissions in the form of methane, nitrous oxide and carbon dioxide (including carbon sources and sinks, and the impact of energy consumption/bioenergy production) from Danish agriculture in the years 1990-2010. An analysis of possible measures to reduce the GHG emissions indicated that a 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable, including mitigation measures in relation to the handling of manure and fertilisers, optimization of animal feeding, cropping practices, and land use changes with more organic farming, afforestation and energy crops. In addition, the bioenergy production may be increased significantly without reducing the food production, whereby Danish agriculture could achieve a positive energy balance. - Highlights: > GHG emissions from Danish agriculture 1990-2010 are calculated, including carbon sequestration. > Effects of measures to further reduce GHG emissions are listed. > Land use scenarios for a substantially reduced GHG emission by 2050 are presented. > A 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable. > Via bioenergy production Danish agriculture could achieve a positive energy balance. - Scenario studies of greenhouse gas mitigation measures illustrate the possible realization of CO{sub 2} reductions for Danish agriculture by 2050, sustaining current food production.

  13. Homoacetogenesis in Deep-Sea Chloroflexi, as Inferred by Single-Cell Genomics, Provides a Link to Reductive Dehalogenation in Terrestrial Dehalococcoidetes

    Directory of Open Access Journals (Sweden)

    Holly L. Sewell

    2017-12-01

The deep marine subsurface is one of the largest unexplored biospheres on Earth and is widely inhabited by members of the phylum Chloroflexi. In this report, we investigated genomes of single cells obtained from deep-sea sediments of the Peruvian Margin, which are enriched in such Chloroflexi. 16S rRNA gene sequence analysis placed two of these single-cell-derived genomes (DscP3 and Dsc4) in a clade of subphylum I Chloroflexi which were previously recovered from deep-sea sediment in the Okinawa Trough, and a third (DscP2-2) as a member of the previously reported DscP2 population from Peruvian Margin site 1230. The presence of genes encoding enzymes of a complete Wood-Ljungdahl pathway, glycolysis/gluconeogenesis, a Rhodobacter nitrogen fixation (Rnf) complex, glycosyltransferases, and formate dehydrogenases in the single-cell genomes of DscP3 and Dsc4, and the presence of an NADH-dependent reduced ferredoxin:NADP oxidoreductase (Nfn) and Rnf in the genome of DscP2-2, imply a homoacetogenic lifestyle of these abundant marine Chloroflexi. We also report here the first complete pathway for anaerobic benzoate oxidation to acetyl coenzyme A (CoA) in the phylum Chloroflexi (DscP3 and Dsc4), including a class I benzoyl-CoA reductase. Of remarkable evolutionary significance, we discovered a gene encoding a formate dehydrogenase (FdnI) with reciprocal closest identity to the formate dehydrogenase-like protein (complex iron-sulfur molybdoenzyme [CISM], DET0187) of terrestrial Dehalococcoides/Dehalogenimonas spp. This formate dehydrogenase-like protein has been shown to lack formate dehydrogenase activity in Dehalococcoides/Dehalogenimonas spp. and is instead hypothesized to couple the HupL hydrogenase to a reductive dehalogenase in the catabolic reductive dehalogenation pathway. This finding of a close functional homologue provides an important missing link for understanding the origin and the metabolic core of terrestrial Dehalococcoides/Dehalogenimonas spp. and of

  14. Preparation of porous lead from shape-controlled PbO bulk by in situ electrochemical reduction in ChCl-EG deep eutectic solvent

    Science.gov (United States)

    Ru, Juanjian; Hua, Yixin; Xu, Cunying; Li, Jian; Li, Yan; Wang, Ding; Zhou, Zhongren; Gong, Kai

    2015-12-01

Porous lead with different shapes was first prepared from controlled geometries of solid PbO bulk by in situ electrochemical reduction in choline chloride-ethylene glycol deep eutectic solvents at a cell voltage of 2.5 V and 353 K. The electrochemical behavior of PbO powders on a cavity microelectrode was investigated by cyclic voltammetry. The results indicate that solid PbO can be directly reduced to metal in the solvent and that a nucleation loop is apparent. Constant-voltage electrolysis demonstrates that a PbO pellet can be completely converted to metal within 13 h, with a current efficiency of about 87.79% and a specific energy consumption of about 736.82 kWh t^-1. As electro-deoxidation proceeds, the reduction rate is fastest at the pellet surface and decreases with distance from the surface to the inner center. The metallic products are porous and mainly consist of uniform particles connected to each other by finer strip-shaped grains, so that the geometry and macroscopic size remain unchanged. In addition, an empirical model of the electro-deoxidation process from spherical PbO bulk to porous lead is also proposed. These findings provide a novel and simple route for the preparation of porous metals from oxide precursors in deep eutectic solvents at room temperature.
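    The reported figures can be cross-checked with Faraday's law. The sketch below is not from the paper; it assumes the overall two-electron reduction PbO + 2e- -> Pb + O^2- and takes the cell voltage (2.5 V) and current efficiency (87.79%) from the abstract, reproducing the quoted specific energy consumption of about 736.8 kWh per tonne:

    ```python
    # Back-of-envelope Faraday's-law check of the figures quoted in the abstract.
    # Assumed overall reaction (not stated explicitly here): PbO + 2e- -> Pb + O^2-
    F = 96485.0           # Faraday constant, C/mol
    M_PB = 207.2          # molar mass of Pb, g/mol
    N_ELECTRONS = 2       # electrons transferred per Pb atom
    CELL_VOLTAGE = 2.5    # applied cell voltage, V (from the abstract)
    CURRENT_EFF = 0.8779  # reported current efficiency

    # Theoretical charge to produce one tonne (1e6 g) of lead, in ampere-hours
    q_theory_ah = N_ELECTRONS * F * 1e6 / M_PB / 3600.0
    # Actual charge is larger, since only ~87.79% of the current reduces PbO
    q_actual_ah = q_theory_ah / CURRENT_EFF
    # Specific energy consumption, kWh per tonne of lead
    energy_kwh_per_t = CELL_VOLTAGE * q_actual_ah / 1000.0
    print(round(energy_kwh_per_t, 1))  # ~736.7, consistent with the reported 736.82 kWh/t
    ```

    The small residual difference from 736.82 kWh t^-1 comes from rounding of the reported current efficiency.
    
    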

  15. Repeated application of fuel reduction treatments in the southern Appalachian Mountains, USA: implications for achieving management goals

    Science.gov (United States)

    Thomas A. Waldrop; Donald L. Hagan; Dean M. Simon

    2016-01-01

    Fire and resource managers of the southern Appalachian Mountains, USA, have many questions about the use of prescribed fire and mechanical treatments to meet various land management objectives. Three common objectives include restoration to an open woodland, oak regeneration, and fuel reduction. This paper provides information about reaching each of these three...

  16. Justice policy reform for high-risk juveniles: using science to achieve large-scale crime reduction.

    Science.gov (United States)

    Skeem, Jennifer L; Scott, Elizabeth; Mulvey, Edward P

    2014-01-01

    After a distinctly punitive era, a period of remarkable reform in juvenile crime regulation has begun. Practical urgency has fueled interest in both crime reduction and research on the prediction and malleability of criminal behavior. In this rapidly changing context, high-risk juveniles--the small proportion of the population where crime becomes concentrated--present a conundrum. Research indicates that these are precisely the individuals to treat intensively to maximize crime reduction, but there are both real and imagined barriers to doing so. Mitigation principles (during early adolescence, ages 10-13) and institutional placement or criminal court processing (during mid-late adolescence, ages 14-18) can prevent these juveniles from receiving interventions that would best protect public safety. In this review, we synthesize relevant research to help resolve this challenge in a manner that is consistent with the law's core principles. In our view, early adolescence offers unique opportunities for risk reduction that could (with modifications) be realized in the juvenile justice system in cooperation with other social institutions.

  17. Portosystemic pressure reduction achieved with TIPPS and impact of portosystemic collaterals for the prediction of the portosystemic-pressure gradient in cirrhotic patients

    Energy Technology Data Exchange (ETDEWEB)

    Grözinger, Gerd, E-mail: gerd.groezinger@med.uni-tuebingen.de [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany); Wiesinger, Benjamin; Schmehl, Jörg; Kramer, Ulrich [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany); Mehra, Tarun [Department of Dermatology, University of Tübingen (Germany); Grosse, Ulrich; König, Claudius [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany)

    2013-12-01

    Purpose: The portosystemic pressure gradient is an important factor defining prognosis in hepatic disease. However, noninvasive prediction of the gradient and the possible reduction by establishment of a TIPSS is challenging. A cohort of patients receiving TIPSS was evaluated with regard to imaging features of collaterals in cross-sectional imaging and the achievable reduction of the pressure gradient by establishment of a TIPSS. Methods: In this study 70 consecutive patients with cirrhotic liver disease were retrospectively evaluated. Patients received either CT or MR imaging before invasive pressure measurement during TIPSS procedure. Images were evaluated with regard to esophageal and fundus varices, splenorenal collaterals, short gastric vein and paraumbilical vein. Results were correlated with Child stage, portosystemic pressure gradient and post-TIPSS reduction of the pressure gradient. Results: In 55 of the 70 patients TIPSS reduced the pressure gradient to less than 12 mmHg. The pre-interventional pressure and the pressure reduction were not significantly different between Child stages. Imaging features of varices and portosystemic collaterals did not show significant differences. The only parameter with a significant predictive value for the reduction of the pressure gradient was the pre-TIPSS pressure gradient (r = 0.8, p < 0.001). Conclusions: TIPSS allows a reliable reduction of the pressure gradient even at high pre-interventional pressure levels and a high collateral presence. In patients receiving TIPSS the presence and the characteristics of the collateral vessels seem to be too variable to draw reliable conclusions concerning the portosystemic pressure gradient.

  18. Portosystemic pressure reduction achieved with TIPPS and impact of portosystemic collaterals for the prediction of the portosystemic-pressure gradient in cirrhotic patients

    International Nuclear Information System (INIS)

    Grözinger, Gerd; Wiesinger, Benjamin; Schmehl, Jörg; Kramer, Ulrich; Mehra, Tarun; Grosse, Ulrich; König, Claudius

    2013-01-01

    Purpose: The portosystemic pressure gradient is an important factor defining prognosis in hepatic disease. However, noninvasive prediction of the gradient and the possible reduction by establishment of a TIPSS is challenging. A cohort of patients receiving TIPSS was evaluated with regard to imaging features of collaterals in cross-sectional imaging and the achievable reduction of the pressure gradient by establishment of a TIPSS. Methods: In this study 70 consecutive patients with cirrhotic liver disease were retrospectively evaluated. Patients received either CT or MR imaging before invasive pressure measurement during TIPSS procedure. Images were evaluated with regard to esophageal and fundus varices, splenorenal collaterals, short gastric vein and paraumbilical vein. Results were correlated with Child stage, portosystemic pressure gradient and post-TIPSS reduction of the pressure gradient. Results: In 55 of the 70 patients TIPSS reduced the pressure gradient to less than 12 mmHg. The pre-interventional pressure and the pressure reduction were not significantly different between Child stages. Imaging features of varices and portosystemic collaterals did not show significant differences. The only parameter with a significant predictive value for the reduction of the pressure gradient was the pre-TIPSS pressure gradient (r = 0.8, p < 0.001). Conclusions: TIPSS allows a reliable reduction of the pressure gradient even at high pre-interventional pressure levels and a high collateral presence. In patients receiving TIPSS the presence and the characteristics of the collateral vessels seem to be too variable to draw reliable conclusions concerning the portosystemic pressure gradient

  19. Electrochemical CO2 Reduction by Ni-containing Iron Sulfides: How Is CO2 Electrochemically Reduced at Bisulfide-Bearing Deep-sea Hydrothermal Precipitates?

    International Nuclear Information System (INIS)

    Yamaguchi, Akira; Yamamoto, Masahiro; Takai, Ken; Ishii, Takumi; Hashimoto, Kazuhito; Nakamura, Ryuhei

    2014-01-01

The discovery of deep-sea hydrothermal vents in the late 1970s has led to many hypotheses concerning chemical evolution in the prebiotic ocean and the early evolution of energy metabolism on ancient Earth. Such studies pursue the question of how bioenergetic evolution came to utilize reducing chemicals such as H2 for CO2 reduction and carbon assimilation. In addition to the direct reaction of H2 and CO2, the electrical current passing across a bisulfide-bearing chimney structure has pointed to possible electrocatalytic CO2 reduction at the cold ocean-vent interface (R. Nakamura, et al. Angew. Chem. Int. Ed. 2010, 49, 7692−7694). To confirm the validity of this hypothesis, here we examined the energetics of electrocatalytic CO2 reduction by iron sulfide (FeS) deposits at slightly acidic pH. Although FeS deposits reduced CO2 inefficiently, the efficiency of the reaction was substantially improved by substituting Fe with Ni to form FeNi2S4 (violarite), whose surface was further modified with amine compounds. The potential-dependent activity of CO2 reduction demonstrated that CO2 reduction by H2 in hydrothermal fluids involves a strongly endergonic electron transfer reaction, suggesting that a naturally occurring proton-motive force (PMF) as high as 200 mV would be established across the hydrothermal vent chimney wall. However, in the chimney structures, H2 generation competes with CO2 reduction for electrical current, resulting in rapid consumption of the PMF. Therefore, to maintain the PMF and the electrosynthesis of organic compounds in hydrothermal vent mineral deposits, we propose a homeostatic pH regulation mechanism of FeS deposits, in which elemental hydrogen stored in the hydrothermal mineral deposits is used to balance the consumption of the electrochemical gradient by H2 generation

  20. Biotic and a-biotic Mn and Fe cycling in deep sediments across a gradient of sulfate reduction rates along the California margin

    Science.gov (United States)

    Schneider-Mor, A.; Steefel, C.; Maher, K.

    2011-12-01

The coupling between the biotic and abiotic processes controlling trace metals in deep marine sediments is not well understood, although the fluxes of elements and trace metals across the sediment-water interface can be a major contribution to ocean water. Four marine sediment profiles (ODP Leg 167 sites 1011, 1017, 1018 and 1020) were examined to evaluate and quantify the biotic and abiotic reaction networks and fluxes that occur in deep marine sediments. We compared biogeochemical processes across a gradient of sulfate reduction (SR) rates with the objective of studying the processes that control these rates and how they affect the redistribution of major elements as well as trace metals. The rates of sulfate reduction, methanogenesis and anaerobic methane oxidation (AMO) were constrained using a multicomponent reactive transport model (CrunchFlow). Constraints for the model include sediment and pore water concentrations, as well as %CaCO3, %biogenic silica, wt% carbon and δ13C of total organic carbon (TOC), particulate organic matter (POC) and mineral-associated carbon (MAC). The sites are distinguished by the depth of AMO: a shallow zone is observed at sites 1018 (9 to 19 meters composite depth (mcd)) and 1017 (19 to 30 mcd), while deeper zones occur at sites 1011 (56 to 76 mcd) and 1020 (101 to 116 mcd). Sulfate reduction rates at the shallow AMO sites are on the order of 1×10^-16 mol/L/yr, much faster than the rates in the deeper sulfate reduction zones (1-3×10^-17 mol/L/yr), as expected. The dissolved metal ion concentrations varied between the sites, with Fe (0.01-7 μM) and Mn (0.01-57 μM) concentrations highest at site 1020 and lowest at site 1017. The highest Fe and Mn concentrations occurred at various depths and were not directly correlated with the rates of sulfate reduction or the maximum alkalinity values. The main processes that control the cycling of Fe are the production of sulfide from sulfate reduction and the distribution of Fe-oxides. The Mn distribution

  1. Microbial Sulfate Reduction in Deep-Sea Sediments at the Guaymas Basin - Hydrothermal Vent Area - Influence of Temperature and Substrates

    DEFF Research Database (Denmark)

    ELSGAARD, L.; ISAKSEN, MF; JØRGENSEN, BB

    1994-01-01

    Microbial sulfate reduction was studied by a S-35 tracer technique in sediments from the hydrothermal vent site in Guaymas Basin, Gulf of California, Mexico. In situ temperatures ranged from 2.7-degrees-C in the overlying seawater to > 120-degrees-C at 30 cm depth in the hydrothermal sediment...

  2. The use of radiological guidelines to achieve a sustained reduction in the number of radiographic examinations of the cervical spine, lumbar spine and knees performed for GPs

    International Nuclear Information System (INIS)

    Glaves, J.

    2005-01-01

    AIM: To determine if the use of request guidelines can achieve a sustained reduction in the number of radiographic examinations of the cervical spine, lumbar spine and knee joints performed for general practitioners (GPs). METHODS: GPs referring to three community hospitals and a district general hospital were circulated with referral guidelines for radiography of the cervical spine, lumbar spine and knee, and all requests for these three examinations were checked. Requests that did not fit the guidelines were returned to the GP with an explanatory letter and a further copy of the guidelines. Where applicable, a large-joint replacement algorithm was also enclosed. If the GP maintained the opinion that the examination was indicated, she or he had the option of supplying further justifying information in writing or speaking to a consultant radiologist. RESULTS: Overall the number of radiographic examinations fell by 68% in the first year, achieving a 79% reduction in the second year. For knees, lumbar spine and cervical spine radiographs the total reductions were 77%, 78% and 86%, respectively. CONCLUSION: The use of referral guidelines, reinforced by request checking and clinical management algorithms, can produce a dramatic and sustained reduction in the number of radiographs of the cervical spine, lumbar spine and knees performed for GPs

  3. Achieving deep cuts in the carbon intensity of U.S. automobile transportation by 2050: complementary roles for electricity and biofuels.

    Science.gov (United States)

    Scown, Corinne D; Taptich, Michael; Horvath, Arpad; McKone, Thomas E; Nazaroff, William W

    2013-08-20

    Passenger cars in the United States (U.S.) rely primarily on petroleum-derived fuels and contribute the majority of U.S. transportation-related greenhouse gas (GHG) emissions. Electricity and biofuels are two promising alternatives for reducing both the carbon intensity of automotive transportation and U.S. reliance on imported oil. However, as standalone solutions, the biofuels option is limited by land availability and the electricity option is limited by market adoption rates and technical challenges. This paper explores potential GHG emissions reductions attainable in the United States through 2050 with a county-level scenario analysis that combines ambitious plug-in hybrid electric vehicle (PHEV) adoption rates with scale-up of cellulosic ethanol production. With PHEVs achieving a 58% share of the passenger car fleet by 2050, phasing out most corn ethanol and limiting cellulosic ethanol feedstocks to sustainably produced crop residues and dedicated crops, we project that the United States could supply the liquid fuels needed for the automobile fleet with an average blend of 80% ethanol (by volume) and 20% gasoline. If electricity for PHEV charging could be supplied by a combination of renewables and natural-gas combined-cycle power plants, the carbon intensity of automotive transport would be 79 g CO2e per vehicle-kilometer traveled, a 71% reduction relative to 2013.
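    As a quick arithmetic check (not part of the paper), the two headline numbers quoted above imply a 2013 baseline carbon intensity of roughly 272 g CO2e per vehicle-kilometer:

    ```python
    # Implied 2013 baseline from the two headline figures in the abstract.
    projected_2050 = 79.0   # g CO2e per vehicle-km in the 2050 scenario
    reduction = 0.71        # reported 71% reduction relative to 2013

    baseline_2013 = projected_2050 / (1.0 - reduction)
    print(round(baseline_2013))  # ~272 g CO2e per vehicle-km
    ```
    
    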

  4. Ultradeep Near-Infrared ISAAC Observations of the Hubble Deep Field South: Observations, Reduction, Multicolor Catalog, and Photometric Redshifts

    Science.gov (United States)

    Labbé, Ivo; Franx, Marijn; Rudnick, Gregory; Schreiber, Natascha M. Förster; Rix, Hans-Walter; Moorwood, Alan; van Dokkum, Pieter G.; van der Werf, Paul; Röttgering, Huub; van Starkenburg, Lottie; van der Wel, Arjen; Kuijken, Konrad; Daddi, Emanuele

    2003-03-01

    We present deep near-infrared (NIR) Js-, H-, and Ks-band ISAAC imaging of the Wide Field Planetary Camera 2 (WFPC2) field of the Hubble Deep Field South (HDF-S). The 2.5′×2.5′ high Galactic latitude field was observed with the Very Large Telescope under the best seeing conditions, with integration times amounting to 33.6 hr in Js, 32.3 hr in H, and 35.6 hr in Ks. We reach total AB magnitudes for point sources of 26.8, 26.2, and 26.2, respectively (3 σ), making this the deepest ground-based NIR observation to date and the deepest Ks-band data in any field. The effective seeing of the co-added images is ~0.45" in Js, ~0.48" in H, and ~0.46" in Ks. Using published WFPC2 optical data, we constructed a Ks-limited multicolor catalog containing 833 sources down to the Ks-band detection limit, among which we identify a population of objects with red NIR colors Js−Ks > 2.3 (in Johnson magnitudes). Because they are extremely faint in the observed optical, they would be missed by ultraviolet-optical selection techniques, such as the U-dropout method. Based on service mode observations collected at the European Southern Observatory, Paranal, Chile (ESO Program 164.O-0612). Also based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS 5-26555.

  5. City-specific vehicle emission control strategies to achieve stringent emission reduction targets in China's Yangtze River Delta region.

    Science.gov (United States)

    Zhang, Shaojun; Wu, Ye; Zhao, Bin; Wu, Xiaomeng; Shu, Jiawei; Hao, Jiming

    2017-01-01

    The Yangtze River Delta (YRD) region is one of the most prosperous and densely populated regions in China and is facing tremendous pressure to mitigate vehicle emissions and improve air quality. Our assessment has revealed that mitigating vehicle emissions of NOx would be more difficult than reducing the emissions of other major vehicular pollutants (e.g., CO, HC and PM2.5) in the YRD region. Even in Shanghai, where the emission controls implemented are more stringent than those in Jiangsu and Zhejiang, we observed little to no reduction in NOx emissions from 2000 to 2010. Emission-reduction targets for HC, NOx and PM2.5 are determined using a response surface modeling tool for better air quality. We design city-specific emission control strategies for three vehicle-populated cities in the YRD region: Shanghai, and Nanjing and Wuxi in Jiangsu. Our results indicate that even if stringent emission controls, consisting of the Euro 6/VI standards, limits on vehicle population and usage, and the scrappage of older vehicles, are applied, Nanjing and Wuxi will not be able to meet the NOx emissions target by 2020. Therefore, additional control measures are proposed for Nanjing and Wuxi to further mitigate NOx emissions from heavy-duty diesel vehicles. Copyright © 2016. Published by Elsevier B.V.

  6. A study on the effect of flaw detection probability assumptions on risk reduction achieved by non-destructive inspection

    International Nuclear Information System (INIS)

    Cronvall, O.; Simola, K.; Männistö, I.; Gunnars, J.; Alverlind, L.; Dillström, P.; Gandossi, L.

    2012-01-01

    Leakages and ruptures of piping components lead to reduction or loss of the pressure retaining capability of the system, and thus contribute to the overall risk associated with nuclear power plants. In-service inspection (ISI) aims at verifying that defects are not present in components of the pressure boundary or, if defects are present, ensuring that these are detected before they affect the safe operation of the plant. Reliability estimates of piping are needed e.g., in probabilistic safety assessment (PSA) studies, risk-informed ISI (RI-ISI) applications, and other structural reliability assessments. Probabilistic fracture mechanics models can account for ISI reliability, but a quantitative estimate for the latter is needed. This is normally expressed in terms of probability of detection (POD) curves, which correlate the probability of detecting a flaw with flaw size. A detailed POD curve is often difficult (or practically impossible) to obtain. If sufficient risk reduction can be shown by using simplified (but reasonably conservative) POD estimates, more complex PODs are not needed. This paper summarises the results of a study on the effect of piping inspection reliability assumptions on failure probability using structural reliability models. The main interest was to investigate whether it is justifiable to use a simplified POD curve. Further, the study compared various structural reliability calculation approaches for a set of analysis cases. The results indicate that the use of a simplified POD could be justifiable in RI-ISI applications.
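    The simplified POD curves discussed above correlate detection probability with flaw size through a simple saturating function; a minimal illustrative sketch (the exponential form, the function name, and the parameter values are our assumptions for illustration, not the curves used in the study):

    ```python
    import math

    def simplified_pod(a_mm, a90_mm=10.0, pod_max=0.95):
        """Illustrative simplified POD curve: probability of detecting a flaw
        of depth a_mm (mm), saturating at pod_max, with the curve reaching
        90% of pod_max at flaw depth a90_mm."""
        if a_mm <= 0:
            return 0.0
        # Exponential saturating form; the rate is chosen so that
        # simplified_pod(a90_mm) == 0.9 * pod_max.
        rate = -math.log(0.1) / a90_mm
        return pod_max * (1.0 - math.exp(-rate * a_mm))
    ```

    A reasonably conservative curve of this shape (never reaching 100% detection, rising with flaw size) is the kind of input a probabilistic fracture mechanics model can take in place of a detailed experimentally derived POD.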

  7. Abordagem profunda e abordagem superficial à aprendizagem: diferentes perspectivas do rendimento escolar Deep and surface approach to learning: different perspectives about academic achievement

    Directory of Open Access Journals (Sweden)

    Cristiano Mauro Assis Gomes

    2011-01-01

    This study investigates the relationship between the surface and deep approaches to learning in explaining academic achievement. Several questions are outlined, aiming to verify the role of each approach in students' proficiency at different grade levels. Data from 684 students, from the 6th grade of elementary school to the final year of high school at a private school in Belo Horizonte, Minas Gerais, were analyzed. A model was designed to compare the school grades through structural equation modeling. The model showed a good fit (χ² = 427.12; df = 182; CFI = .95; RMSEA = .04) for the full sample and for each grade. The results show that the surface and deep approaches make distinct contributions to academic achievement at the different grade levels. Implications of the results for the theory of learning approaches are discussed.

  8. Cardiac dose reduction with deep inspiration breath hold for left-sided breast cancer radiotherapy patients with and without regional nodal irradiation.

    Science.gov (United States)

    Yeung, Rosanna; Conroy, Leigh; Long, Karen; Walrath, Daphne; Li, Haocheng; Smith, Wendy; Hudson, Alana; Phan, Tien

    2015-09-22

    Deep inspiration breath hold (DIBH) reduces heart and left anterior descending artery (LAD) dose during left-sided breast radiation therapy (RT); however there is limited information about which patients derive the most benefit from DIBH. The primary objective of this study was to determine which patients benefit the most from DIBH by comparing percent reduction in mean cardiac dose conferred by DIBH for patients treated with whole breast RT ± boost (WBRT) versus those receiving breast/chest wall plus regional nodal irradiation, including internal mammary chain (IMC) nodes (B/CWRT + RNI), using a modified wide tangent technique. A secondary objective was to determine if DIBH was required to meet a proposed heart dose constraint of Dmean < 4 Gy in these two cohorts.

  9. Painless, safe, and efficacious noninvasive skin tightening, body contouring, and cellulite reduction using multisource 3DEEP radiofrequency.

    Science.gov (United States)

    Harth, Yoram

    2015-03-01

    In the last decade, radiofrequency (RF) energy has proven to be safe and highly efficacious for face and neck skin tightening, body contouring, and cellulite reduction. In contrast to first-generation monopolar/bipolar and "X-Polar" RF systems, which use one RF generator connected to one or more skin electrodes, multisource radiofrequency devices use six independent RF generators, allowing efficient dermal heating to 52-55°C with no pain or risk of other side effects. In this review, the basic science and clinical results of body contouring and cellulite treatment using a multisource radiofrequency system (Endymed PRO, Endymed, Cesarea, Israel) will be discussed and analyzed. © 2015 Wiley Periodicals, Inc.

  10. Identification of the microbes mediating Fe reduction in a deep saline aquifer and their influence during managed aquifer recharge.

    Science.gov (United States)

    Ko, Myoung-Soo; Cho, Kyungjin; Jeong, Dawoon; Lee, Seunghak

    2016-03-01

    In this study, indigenous microbes enabling Fe reduction under saline groundwater conditions were identified, and their potential contribution to Fe release from aquifer sediments during managed aquifer recharge (MAR) was evaluated. Sediment and groundwater samples were collected from a MAR feasibility test site in Korea, where adjacent river water will be injected into the confined aquifer. The residual groundwater had a high salinity over 26.0 psu, as well as strong reducing conditions (low dissolved oxygen, DO). The microbes mediating Fe reduction in the aquifer were found to be Citrobacter sp. However, column experiments to simulate field operation scenarios indicated that additional Fe release would be limited during MAR, as the dominant microbial community in the sediment would shift from Citrobacter sp. to Pseudomonas sp. and Limnohabitans sp. as river water injection alters the pore water chemistry. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Cardiac dose reduction with deep inspiration breath hold for left-sided breast cancer radiotherapy patients with and without regional nodal irradiation

    International Nuclear Information System (INIS)

    Yeung, Rosanna; Conroy, Leigh; Long, Karen; Walrath, Daphne; Li, Haocheng; Smith, Wendy; Hudson, Alana; Phan, Tien

    2015-01-01

    Deep inspiration breath hold (DIBH) reduces heart and left anterior descending artery (LAD) dose during left-sided breast radiation therapy (RT); however there is limited information about which patients derive the most benefit from DIBH. The primary objective of this study was to determine which patients benefit the most from DIBH by comparing percent reduction in mean cardiac dose conferred by DIBH for patients treated with whole breast RT ± boost (WBRT) versus those receiving breast/chest wall plus regional nodal irradiation, including internal mammary chain (IMC) nodes (B/CWRT + RNI), using a modified wide tangent technique. A secondary objective was to determine if DIBH was required to meet a proposed heart dose constraint of Dmean < 4 Gy in these two cohorts. Twenty consecutive patients underwent CT simulation both free breathing (FB) and DIBH. Patients were grouped into two cohorts: WBRT (n = 11) and B/CWRT + RNI (n = 9). 3D-conformal plans were developed and FB was compared to DIBH for each cohort using Wilcoxon signed-rank tests for continuous variables and McNemar's test for discrete variables. The percent relative reduction conferred by DIBH in mean heart and LAD dose, as well as lung V20, were compared between the two cohorts using Wilcoxon rank-sum testing. The significance level was set at 0.05 with Bonferroni correction for multiple testing. All patients had comparable target coverage on DIBH and FB. DIBH statistically significantly reduced mean heart and LAD dose for both cohorts. Percent reduction in mean heart and LAD dose with DIBH was significantly larger in the B/CWRT + RNI cohort than in the WBRT group (relative reduction in mean heart and LAD dose: 55.9% and 72.1% versus 29.2% and 43.5%, p < 0.02). All patients in the WBRT group, but only five patients (56%) in the B/CWRT + RNI group, met heart Dmean < 4 Gy with FB; all patients met this constraint with DIBH.
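    The percent relative reduction statistic compared between the two cohorts is simple arithmetic; a minimal sketch (the function name and the sample doses are illustrative, not the study's data):

    ```python
    def relative_reduction_pct(dose_fb, dose_dibh):
        """Percent reduction in mean dose conferred by DIBH
        relative to the free-breathing (FB) plan."""
        return 100.0 * (dose_fb - dose_dibh) / dose_fb

    # Example: a mean heart dose falling from 4.0 Gy (FB) to 1.8 Gy (DIBH)
    # is a 55% relative reduction.
    ```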

  12. Framework for the analysis of the low-carbon scenario 2020 to achieve the national carbon Emissions reduction target: Focused on educational facilities

    International Nuclear Information System (INIS)

    Koo, Choongwan; Kim, Hyunjoong; Hong, Taehoon

    2014-01-01

    Since the increase in greenhouse gas emissions has increased the global warming potential, an international agreement on carbon emissions reduction target (CERT) was formulated in the Kyoto Protocol (1997). This study aimed to develop a framework for the analysis of the low-carbon scenario 2020 to achieve the national CERT. To verify the feasibility of the proposed framework, educational facilities were used for a case study. This study was conducted in six steps: (i) selection of the target school; (ii) establishment of the reference model for the target school; (iii) energy consumption pattern analysis by target school; (iv) establishment of the energy retrofit model for the target school; (v) economic and environmental assessment through life cycle cost and life cycle CO2 analysis; and (vi) establishment of the low-carbon scenario in 2020 to achieve the national CERT. This study can help facility managers or policymakers establish the optimal retrofit strategy within a limited budget from a short-term perspective and the low-carbon scenario 2020 to achieve the national CERT from a long-term perspective. The proposed framework could also be applied to any other building type or country in the global environment.

  13. Combination of deep eutectic solvent and ionic liquid to improve biocatalytic reduction of 2-octanone with Acetobacter pasteurianus GIM1.158 cell

    Science.gov (United States)

    Xu, Pei; Du, Peng-Xuan; Zong, Min-Hua; Li, Ning; Lou, Wen-Yong

    2016-05-01

    The efficient anti-Prelog asymmetric reduction of 2-octanone with Acetobacter pasteurianus GIM1.158 cells was successfully performed in a biphasic system consisting of deep eutectic solvent (DES) and water-immiscible ionic liquid (IL). Various DESs exerted different effects on the synthesis of (R)-2-octanol. Choline chloride/ethylene glycol (ChCl/EG) exhibited good biocompatibility and could moderately increase the cell membrane permeability thus leading to the better results. Adding ChCl/EG increased the optimal substrate concentration from 40 mM to 60 mM and the product e.e. kept above 99.9%. To further improve the reaction efficiency, water-immiscible ILs were introduced to the reaction system and an enhanced substrate concentration (1.5 M) was observed with C4MIM·PF6. Additionally, the cells manifested good operational stability in the reaction system. Thus, the efficient biocatalytic process with ChCl/EG and C4MIM·PF6 was promising for efficient synthesis of (R)-2-octanol.

  14. Effects of reduction in porosity and permeability with depth on storage capacity and injectivity in deep saline aquifers: A case study from the Mount Simon Sandstone aquifer

    Science.gov (United States)

    Medina, C.R.; Rupp, J.A.; Barnes, D.A.

    2011-01-01

    The Upper Cambrian Mount Simon Sandstone is recognized as a deep saline reservoir that has significant potential for geological sequestration in the Midwestern region of the United States. Porosity and permeability values collected from core analyses in rocks from this formation and its lateral equivalents in Indiana, Kentucky, Michigan, and Ohio indicate a predictable relationship with depth owing to a reduction in the pore structure due to the effects of compaction and/or cementation, primarily as quartz overgrowths. The regional trend of decreasing porosity with depth is described by the equation: φ(d) = 16.36 · e^(−0.00039·d), where φ is the porosity (%) and d is the depth in m. The decrease of porosity with depth generally holds true on a basinwide scale. Bearing in mind local variations in lithologic and petrophysical character within the Mount Simon Sandstone, the source data that were used to predict porosity were utilized to estimate the pore volume available within the reservoir that could potentially serve as storage space for injected CO2. The potential storage capacity estimated for the Mount Simon Sandstone in the study area, using efficiency factors of 1%, 5%, 10%, and 15%, is 23,680, 118,418, 236,832, and 355,242 million metric tons of CO2, respectively. © 2010 Elsevier Ltd.
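    The depth–porosity trend above is a single-parameter exponential and is easy to evaluate; a minimal sketch (the function name is ours, the coefficients are from the abstract):

    ```python
    import math

    def mount_simon_porosity(depth_m):
        """Regional porosity trend (percent) versus depth (m) from the
        abstract: phi(d) = 16.36 * e^(-0.00039 * d)."""
        return 16.36 * math.exp(-0.00039 * depth_m)

    # Under this trend, porosity roughly halves for every ~1800 m of burial.
    ```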

  15. The Impacts of Budget Reductions on Indiana's Public Schools: The Impact of Budget Changes on Student Achievement, Personnel, and Class Size for Public School Corporations in the State of Indiana

    Science.gov (United States)

    Jarman, Del W.; Boyland, Lori G.

    2011-01-01

    In recent years, economic downturn and changes to Indiana's school funding have resulted in significant financial reductions in General Fund allocations for many of Indiana's public school corporations. The main purpose of this statewide study is to examine the possible impacts of these budget reductions on class size and student achievement. This…

  16. Catalogue of methods of calculation, interpolation, smoothing, and reduction for the physical, chemical, and biological parameters of deep hydrology (CATMETH) (NODC Accession 7700442)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The document presents the methods, formulas and citations used by the BNDO to process physical, chemical, and biological data for deep hydrology including...

  17. The change in deep cervical flexor activity after training is associated with the degree of pain reduction in patients with chronic neck pain.

    Science.gov (United States)

    Falla, Deborah; O'Leary, Shaun; Farina, Dario; Jull, Gwendolen

    2012-09-01

    Altered activation of the deep cervical flexors (longus colli and longus capitis) has been found in individuals with neck pain disorders, but the response to training has been variable. Therefore, this study investigated the relationship between change in deep cervical flexor muscle activity and symptoms in response to specific training. Fourteen women with chronic neck pain undertook a 6-week program of specific training that consisted of a craniocervical flexion exercise performed twice per day (10 to 20 min) for the duration of the trial. The exercise targets the deep flexor muscles of the upper cervical region. At baseline and follow-up, measures were taken of neck pain intensity (visual analogue scale, 0 to 10), perceived disability (Neck Disability Index, 0 to 50) and electromyography (EMG) of the deep cervical flexors (by a nasopharyngeal electrode suctioned over the posterior oropharyngeal wall) during performance of craniocervical flexion. After training, the activation of the deep cervical flexors increased significantly, with the greatest change observed in the patients with the lowest deep cervical flexor EMG amplitude at baseline (R(2)=0.68). A significant relationship was also identified between initial pain intensity, change in pain level with training, and change in EMG amplitude for the deep cervical flexors during craniocervical flexion (R(2)=0.34). Thus, specific training of the deep cervical flexor muscles in women with chronic neck pain reduces pain and improves the activation of these muscles, especially in those with the least activation of their deep cervical flexors before training. This finding suggests that the selection of exercise based on a precise assessment of the patients' neuromuscular control and targeted exercise interventions based on this assessment are likely to be the most beneficial to patients with neck pain.

  18. Dasatinib rapidly induces deep molecular response in chronic-phase chronic myeloid leukemia patients who achieved major molecular response with detectable levels of BCR-ABL1 transcripts by imatinib therapy.

    Science.gov (United States)

    Shiseki, Masayuki; Yoshida, Chikashi; Takezako, Naoki; Ohwada, Akira; Kumagai, Takashi; Nishiwaki, Kaichi; Horikoshi, Akira; Fukuda, Tetsuya; Takano, Hina; Kouzai, Yasuji; Tanaka, Junji; Morita, Satoshi; Sakamoto, Junichi; Sakamaki, Hisashi; Inokuchi, Koiti

    2017-10-01

    With the introduction of imatinib, a first-generation tyrosine kinase inhibitor (TKI) to inhibit BCR-ABL1 kinase, the outcome of chronic-phase chronic myeloid leukemia (CP-CML) has improved dramatically. However, only a small proportion of CP-CML patients subsequently achieve a deep molecular response (DMR) with imatinib. Dasatinib, a second-generation TKI, is more potent than imatinib in the inhibition of BCR-ABL1 tyrosine kinase in vitro and more effective in CP-CML patients who do not achieve an optimal response with imatinib treatment. In the present study, we attempted to investigate whether switching the treatment from imatinib to dasatinib can induce DMR in 16 CP-CML patients treated with imatinib for at least two years who achieved a major molecular response (MMR) with detectable levels of BCR-ABL1 transcripts. The rates of achievement of DMR at 1, 3, 6 and 12 months after switching to dasatinib treatment in the 16 patients were 44% (7/16), 56% (9/16), 63% (10/16) and 75% (12/16), respectively. The cumulative rate of achieving DMR at 12 months from initiation of dasatinib therapy was 93.8% (15/16). The proportion of natural killer cells and cytotoxic T cells in peripheral lymphocytes increased after switching to dasatinib. In contrast, the proportion of regulatory T cells decreased during treatment. The safety profile of dasatinib was consistent with previous studies. Switching to dasatinib would be a therapeutic option for CP-CML patients who achieved MMR but not DMR by imatinib, especially for patients who wish to discontinue TKI therapy.

  19. Pathways to deep decarbonization - Interim 2014 Report

    International Nuclear Information System (INIS)

    2014-01-01

    The interim 2014 report by the Deep Decarbonization Pathways Project (DDPP), coordinated and published by IDDRI and the Sustainable Development Solutions Network (SDSN), presents preliminary findings of the pathways developed by the DDPP Country Research Teams with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C. The DDPP is a knowledge network comprising 15 Country Research Teams and several Partner Organizations who develop and share methods, assumptions, and findings related to deep decarbonization. Each DDPP Country Research Team has developed an illustrative road-map for the transition to a low-carbon economy, with the intent of taking into account national socio-economic conditions, development aspirations, infrastructure stocks, resource endowments, and other relevant factors. The interim 2014 report focuses on technically feasible pathways to deep decarbonization

  20. Training-induced changes in physical performance can be achieved without body mass reduction after eight week of strength and injury prevention oriented programme in volleyball female players

    Directory of Open Access Journals (Sweden)

    M Lehnert

    2017-04-01

    The purpose of the study was to analyse the changes in muscle strength, power, and somatic parameters in elite volleyball players after a specific pre-season training programme aimed at improving jumping and strength performance and injury prevention. Twelve junior female volleyball players participated in an 8-week training programme. Anthropometric characteristics, isokinetic peak torque (PT) in single-joint knee flexion (H) and extension (Q) at 60º/s and 180º/s, counter movement jump (CMJ), squat jump (SJ), and reactive strength index (RSI) were measured before and after the intervention. Significant moderate effects were found in flexor concentric PT at 60º/s and at 180º/s in the dominant leg (DL) (18.3±15.1%, likely; 17.8±11.2%, very likely) and in extensor concentric PT at 180º/s in the DL (7.4±7.8%, very likely). In the non-dominant leg (NL), significant moderate effects were found in flexor concentric PT at 60º/s and at 180º/s (13.7±11.3%, likely; 13.4±8.0%, very likely) and in extensor concentric PT at 180º/s (10.7±11.5%, very likely). Small to moderate changes were observed for H/QCONV in the DL at 60º/s and 180º/s (15.9±14.1%; 9.6±10.4%, both likely) and in the NL at 60º/s (moderate change, 9.6±11.8%, likely), and small to moderate decreases were detected for H/QFUNC at 180º/s in both the DL and NL (-7.0±8.3%, likely; -9.5±10.0%, likely). Training-induced changes in jumping performance were trivial (for RSI) to small (for CMJ and SJ). The applied pre-season training programme induced a number of positive changes in physical performance and risk of injury, despite a lack of changes in body mass and composition. CITATION: Lehnert M, Sigmund M, Lipinska P et al. Training-induced changes in physical performance can be achieved without body mass reduction after eight week of strength and injury prevention oriented programme in volleyball female players. Biol Sport. 2017;34(2):205-213.

  1. Contrasting impacts of light reduction on sediment biogeochemistry in deep- and shallow-water tropical seagrass assemblages (Green Island, Great Barrier Reef).

    Science.gov (United States)

    Schrameyer, Verena; York, Paul H; Chartrand, Kathryn; Ralph, Peter J; Kühl, Michael; Brodersen, Kasper Elgetti; Rasheed, Michael A

    2018-05-01

    Seagrass meadows increasingly face reduced light availability as a consequence of coastal development, eutrophication, and climate-driven increases in rainfall leading to turbidity plumes. We examined the impact of reduced light on above-ground seagrass biomass and sediment biogeochemistry in tropical shallow- (∼2 m) and deep-water (∼17 m) seagrass meadows (Green Island, Australia). Artificial shading (transmitting ∼10-25% of incident solar irradiance) was applied to the shallow- and deep-water sites for up to two weeks. While above-ground biomass was unchanged, higher diffusive O 2 uptake (DOU) rates, lower O 2 penetration depths, and higher volume-specific O 2 consumption (R) rates were found in seagrass-vegetated sediments as compared to adjacent bare sand (control) areas at the shallow-water sites. In contrast, deep-water sediment characteristics did not differ between bare sand and vegetated sites. At the vegetated shallow-water site, shading resulted in significantly lower hydrogen sulphide (H 2 S) levels in the sediment. No shading effects were found on sediment biogeochemistry at the deep-water site. Overall, our results show that the sediment biogeochemistry of shallow-water (Halodule uninervis, Syringodium isoetifolium, Cymodocea rotundata and C. serrulata) and deep-water (Halophila decipiens) seagrass meadows with different species differ in response to reduced light. The light-driven dynamics of the sediment biogeochemistry at the shallow-water site could suggest the presence of a microbial consortium, which might be stimulated by photosynthetically produced exudates from the seagrass, which becomes limited due to lower seagrass photosynthesis under shaded conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. NBS for Drought risks reduction in the Algarve (Portugal): selected achievements from PT FCT ProWaterMan and from EU FP7 MARSOL projects

    Science.gov (United States)

    Lobo-Ferreira, João-Paulo

    2017-04-01

    region. It included the evaluation of willingness of farmers to collaborate and pay for the use of Managed Aquifer Recharge (MAR) as a nature-based solution to minimize the drought impacts and to manage flood risk in the area. Close cooperation has been established between EIP Water Action Group MARsolutions and FP7 MARSOL INNO_DEMO (http://www.eip-water.eu/close-cooperation-between-eip-marsolutions-and-fp7-marsol-inno-demo-project ). In http://www.eip-water.eu/sites/default/files/Rel%20101_15.pdf a LNEC report is available, presenting a descriptive analysis of the responses to a survey about protection and preservation of groundwater conducted with a sample of Portuguese farmers of the Algarve region. It is possible that Direção Regional de Agricultura e Pescas do Algarve is willing to participate on the implementation the nature-based solutions as they will decrease the risk for agriculture losses. The Portuguese Water Agency has precipitation and flow bulletins for the Algarve, e.g. for Faro and Albufeira areas, in http://snirh.pt/index.php?idMain=1&idItem=1.1 . Concerning the climate change impact in Querença-Silves (QS) Aquifer, LNEC/University of Algarve MARSOL project teams presented descriptions regarding respectively groundwater recharge and flow simulations of future scenarios. E.g. Stigter et al. (2009, 2014) summarized achieved conclusions were "(1) (2020-2050) changes in recharge, particularly due to a reduction in autumn rainfall resulting in a longer dry period. More frequent droughts are predicted at the QS aquifer; (2) toward the end of the century (2069-2099), results indicate a significant decrease (mean 25 %) in recharge at QS aquifer, with an high decrease in absolute terms (mean 134 mm/year); and, (3) scenario modelling of groundwater flow shows its response to the predicted decreases in recharge and increases in pumping rates, with strongly reduced outflow into the coastal wetlands, whereas changes due to sea level rise are negligible". 

  3. Path towards achieving of China's 2020 carbon emission reduction target-A discussion of low-carbon energy policies at province level

    International Nuclear Information System (INIS)

    Wang Run; Liu Wenjuan; Xiao Lishan; Liu Jian; Kao, William

    2011-01-01

    Following the announcement of the China's 2020 national target for the reduction of the intensity of carbon dioxide emissions per unit of GDP by 40-45% compared with 2005 levels, Chinese provincial governments prepared to restructure provincial energy policy and plan their contribution to realizing the State reduction target. Focusing on Fujian and Anhui provinces as case studies, this paper reviews two contrasting policies as a means for meeting the national reduction target. That of the coastal province of Fujian proposes to do so largely through the development of nuclear power, whilst the coal-rich province of Anhui proposes to do so through its energy consumption rate rising at a lower rate than that of the rise in GDP. In both cases renewable energy makes up a small proportion of their proposed 2020 energy structures. The conclusion discusses in depth concerns about nuclear power policy, energy efficiency, energy consumption strategy and problems in developing renewable energy. - Research Highlights: → We review two contrasting policies as a means for meeting the national reduction target of carbon emission in two provinces. → Scenario review of energy structure in Fujian and Anhui Provinces to 2020. → We discuss concerns about nuclear power policy, energy efficiency, energy consumption strategy and problems in developing renewable energy.

  4. Post2012 climate regime options for global GHG emission reduction. Analysis and evaluation of regime options and reduction potential for achieving the 2 degree target with respect to environmental effectiveness, costs and institutional aspects

    Energy Technology Data Exchange (ETDEWEB)

    Schumacher, Katja; Graichen, Jakob; Healy, Sean [Oeko-Institut, Inst. fuer Angewandte Oekologie e.V., Freiburg im Breisgau (Germany); Schleich, Joachim; Duscha, Vicki [Fraunhofer-Institut fuer Systemtechnik und Innovationsforschung (ISI), Karlsruhe (Germany)

    2011-08-15

    This report explores the environmental and economic effects of the pledges submitted by industrialized and major developing countries for 2020 under the Copenhagen Accord and provides an in-depth comparison with results arrived at in other model analyses. Two scenarios reflect the lower ("weak") and upper ("ambitious") bounds of the Copenhagen pledges. In addition, two scenarios in accordance with the IPCC range for reaching a 2 °C target are analyzed, with industrialized countries in aggregate reducing their CO2 emissions by 30% in 2020 compared to 1990 levels. For all four policy scenarios the effects of emission paths leading to a global reduction target of 50% below 1990 levels in 2050 are also simulated for 2030. In addition, a separate scenario estimates the costs of an unconditional EU 30% emission reduction target, i.e. where the EU adopts a 30% emission reduction target in 2020 (rather than a 20% reduction target), while all other countries stick with their "weak" pledges. Not included in the calculations is possible financial support for developing countries from industrialized countries, as currently discussed in the climate change negotiations and laid out in the Copenhagen Accord. (orig.)

  5. Infrared variation reduction by simultaneous background suppression and target contrast enhancement for deep convolutional neural network-based automatic target recognition

    Science.gov (United States)

    Kim, Sungho

    2017-06-01

    Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. Directly applying RGB-CNN to the IR ATR problem fails because of two IR database problems: limited database size and IR image variations. An IR variation-reduced deep CNN (IVR-CNN) is presented to cope with these problems. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem of IR variations is mitigated by the proposed shifted ramp function-based intensity transformation, which suppresses the background and enhances the target contrast simultaneously. The experimental results on the synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
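
    The shifted ramp idea can be sketched as a simple piecewise-linear intensity map. The cut-off values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def shifted_ramp_transform(image, low, high, out_max=255.0):
    """Piecewise-linear intensity map: values below `low` go to 0
    (background suppression), values above `high` saturate at `out_max`,
    and the band in between is stretched linearly (contrast enhancement)."""
    out = (image.astype(np.float64) - low) / (high - low) * out_max
    return np.clip(out, 0.0, out_max)

# Toy IR frame: cool background (~40) with a warmer target patch (~180).
frame = np.full((8, 8), 40.0)
frame[3:5, 3:5] = 180.0
enhanced = shifted_ramp_transform(frame, low=100.0, high=200.0)
# Background pixels map to 0; target pixels stretch to ~204.
```

    Shifting the ramp so that it starts above typical background intensities is what zeroes out clutter while the target band keeps the full output range.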

  6. A Class Size Reduction (CSR) Implementation Plan Based on an Evaluative Study of CSRs for the Improvement of Third Grade Reading Achievement

    Science.gov (United States)

    Vandyke, Barbara Adrienne

    2009-01-01

    For too long, educators have been left to their own devices when implementing educational policies, initiatives, strategies, and interventions, and they have longed to see the full benefits of these programs, especially in reading achievement. However, instead of determining whether a policy/initiative is working, educators have been asked to…

  7. Sulfate reduction and methane oxidation activity below the sulfate-methane transition zone in Alaskan Beaufort Sea continental margin sediments: Implications for deep sulfur cycling

    Science.gov (United States)

    Treude, Tina; Krause, Stefan; Maltby, Johanna; Dale, Andrew W.; Coffin, Richard; Hamdan, Leila J.

    2014-11-01

    Two ∼6 m long sediment cores were collected along the ∼300 m isobath on the Alaskan Beaufort Sea continental margin. Both cores showed distinct sulfate-methane transition zones (SMTZ) at 105 and 120 cm below seafloor (cmbsf). Sulfate was not completely depleted below the SMTZ but remained between 30 and 500 μM. Sulfate reduction and anaerobic oxidation of methane (AOM) determined by radiotracer incubations were active throughout the methanogenic zone. Although a mass balance could not explain the source of sulfate below the SMTZ, geochemical profiles and correlation network analyses of biotic and abiotic data suggest a cryptic sulfur cycle involving iron, manganese and barite. Inhibition experiments with molybdate and 2-bromoethanesulfonate (BES) indicated decoupling of sulfate reduction and AOM and competition between sulfate reducers and methanogens for substrates. While correlation network analyses predicted coupling of AOM to iron reduction, the addition of manganese or iron did not stimulate AOM. Since none of the classical archaeal anaerobic methanotrophs (ANME) were abundant, the involvement of unknown or unconventional phylotypes in AOM is conceivable. The resistance of AOM activity to inhibitors implies deviation from conventional enzymatic pathways. This work suggests that the classical redox cascade of electron acceptor utilization based on Gibbs energy yields does not always hold in diffusion-dominated systems, and instead biotic processes may be more strongly coupled to mineralogy.

  8. Achieving consistent image quality and overall radiation dose reduction for coronary CT angiography with body mass index-dependent tube voltage and tube current selection

    International Nuclear Information System (INIS)

    Wang, G.; Gao, J.; Zhao, S.; Sun, X.; Chen, X.; Cui, X.

    2014-01-01

    Aim: To develop a quantitative body mass index (BMI)-dependent tube voltage and tube current selection method for obtaining consistent image quality and overall dose reduction in computed tomography coronary angiography (CTCA). Methods and materials: The images of 190 consecutive patients (group A) who underwent CTCA with fixed protocols (100 kV/193 mAs for 100 patients with a BMI of <27 and 120 kV/175 mAs for 90 patients with a BMI of >27) were retrospectively analysed and reconstructed with an adaptive statistical iterative reconstruction (ASIR) algorithm at 50% blending. Image noise was measured and the relationship to BMI was studied to establish BMI-dependent tube current for obtaining CTCA images with user-specified image noise. One hundred additional cardiac patients (group B) were examined using prospective triggering with the BMI-dependent tube voltage/current. CTCA image-quality score, image noise, and effective dose from groups B and C (subgroup of A of 100 patients examined with prospective triggering only) were obtained and compared. Results: There was a linear relationship between image noise and BMI in group A. Using a BMI-dependent tube current in group B, an average CTCA image noise of 27.7 HU (target 28 HU) and 31.7 HU (target 33 HU) was obtained for the subgroups of patients with BMIs of >27 and of <27, respectively, and was independent of patient BMI. There was no difference in image-quality scores between groups B and C (4.52 versus 4.60, p > 0.05). The average effective dose for group B (2.56 mSv) was 42% lower than group C (4.38 mSv; p < 0.01). Conclusion: BMI-dependent tube voltage/current selection in CTCA provides an individualized protocol that generates consistent image quality and helps to reduce overall patient radiation dose. - Highlights: • BMI-dependent kVp and mA selection method may be established in CCTA. • BMI-dependent kVp and mA enables consistent CCTA image quality. • Overall dose reduction of 40% can
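
    The selection rule implied by a linear noise-vs-BMI fit can be sketched as follows. The fit coefficients and the 1/sqrt(mAs) noise scaling are generic CT assumptions for illustration, not values reported in the study:

```python
def predicted_noise(bmi, a, b):
    """Linear image-noise-vs-BMI fit measured at a reference tube
    current-time product (as the group A analysis establishes)."""
    return a * bmi + b

def tube_current_for_target(bmi, target_noise, ref_mas, a, b):
    """Image noise scales roughly as 1/sqrt(mAs), so hitting a target
    noise means scaling the reference mAs by (predicted/target)**2."""
    sigma = predicted_noise(bmi, a, b)
    return ref_mas * (sigma / target_noise) ** 2

# Hypothetical fit (not from the paper): noise = 0.9*BMI + 4 HU at 175 mAs.
mas = tube_current_for_target(bmi=30, target_noise=28.0,
                              ref_mas=175.0, a=0.9, b=4.0)
# A heavier patient needs more than the reference 175 mAs here (~214 mAs).
```

    Because the target noise, not the mAs, is held fixed, the resulting image quality becomes independent of patient BMI, which is the behaviour the study reports for group B.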

  9. Revising the role of pH and thermal treatments in aflatoxin content reduction during the tortilla and deep frying processes.

    Science.gov (United States)

    Torres, P; Guzmán-Ortiz, M; Ramírez-Wong, B

    2001-06-01

    Naturally aflatoxin-contaminated corn (Zea mays L.) was made into tortillas, tortilla chips, and corn chips by the traditional and commercial alkaline cooking processes. The traditional nixtamalization (alkaline-cooking) process involved cooking and steeping the corn, whereas the commercial nixtamalization process only steeps the corn in a hot alkaline solution (initially boiling). A pilot plant that includes the cooker, stone grinder, celorio cutter, and oven was used for the experiments. The traditional process eliminated 51.7, 84.5, and 78.8% of the aflatoxin content in tortillas, tortilla chips, and corn chips, respectively. The commercial process was less effective: it removed 29.5, 71.2, and 71.2% of the aflatoxins in the same products. Intermediate and final products did not reach a high enough pH to allow permanent aflatoxin reduction during thermal processing. The cooking or steeping liquor (nejayote) is the only component of the system with a sufficiently high pH (10.2-10.7) to allow modification and detoxification of aflatoxins present in the corn grain. The importance of removal of tip, pericarp, and germ during nixtamalization for aflatoxin reduction in tortillas is evident.

  10. What Really is Deep Learning Doing?

    OpenAIRE

    Xiong, Chuyu

    2017-01-01

    Deep learning has achieved a great success in many areas, from computer vision to natural language processing, to game playing, and much more. Yet, what deep learning is really doing is still an open question. There are a lot of works in this direction. For example, [5] tried to explain deep learning by group renormalization, and [6] tried to explain deep learning from the view of functional approximation. In order to address this very crucial question, here we see deep learning from perspect...

  11. Deep Reinforcement Learning: An Overview

    OpenAIRE

    Li, Yuxi

    2017-01-01

    We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsuperv...

  12. Consistent Reduction in Periprocedural Myocardial Infarction With Cangrelor as Assessed by Multiple Definitions: Findings From CHAMPION PHOENIX (Cangrelor Versus Standard Therapy to Achieve Optimal Management of Platelet Inhibition).

    Science.gov (United States)

    Cavender, Matthew A; Bhatt, Deepak L; Stone, Gregg W; White, Harvey D; Steg, Ph Gabriel; Gibson, C Michael; Hamm, Christian W; Price, Matthew J; Leonardi, Sergio; Prats, Jayne; Deliargyris, Efthymios N; Mahaffey, Kenneth W; Harrington, Robert A

    2016-09-06

    Cangrelor is an intravenous P2Y12 inhibitor approved to reduce periprocedural ischemic events in patients undergoing percutaneous coronary intervention not pretreated with a P2Y12 inhibitor. A total of 11 145 patients were randomized to cangrelor or clopidogrel in the CHAMPION PHOENIX trial (Cangrelor versus Standard Therapy to Achieve Optimal Management of Platelet Inhibition). We explored the effects of cangrelor on myocardial infarction (MI) using different definitions and performed sensitivity analyses on the primary end point of the trial. A total of 462 patients (4.2%) undergoing percutaneous coronary intervention had an MI as defined by the second universal definition. The majority of these MIs (n=433, 93.7%) were type 4a. Treatment with cangrelor reduced the incidence of MI at 48 hours (3.8% versus 4.7%; odds ratio [OR], 0.80; 95% confidence interval [CI], 0.67-0.97; P=0.02). When the Society of Coronary Angiography and Intervention definition of periprocedural MI was applied to potential ischemic events, there were fewer total MIs (n=134); however, the effects of cangrelor on MI remained significant (OR, 0.65; 95% CI, 0.46-0.92; P=0.01). Similar effects were seen in the evaluation of the effects of cangrelor on MIs with peak creatinine kinase-MB ≥10 times the upper limit of normal (OR, 0.64; 95% CI, 0.45-0.91) and those with peak creatinine kinase-MB ≥10 times the upper limit of normal, ischemic symptoms, or ECG changes (OR, 0.63; 95% CI, 0.48-0.84). MIs defined by any of these definitions were associated with increased risk of death at 30 days. Treatment with cangrelor reduced the composite end point of death, MI (Society of Coronary Angiography and Intervention definition), ischemia-driven revascularization, or Academic Research Consortium definite stent thrombosis (1.4% versus 2.1%; OR, 0.69; 95% CI, 0.51-0.92). MI in patients undergoing percutaneous coronary intervention, regardless of definition, remains associated with increased risk of death

  13. Deep frying

    NARCIS (Netherlands)

    Koerten, van K.N.

    2016-01-01

    Deep frying is one of the most used methods in the food processing industry. Though practically any food can be fried, French fries are probably the most well-known deep fried products. The popularity of French fries stems from their unique taste and texture, a crispy outside with a mealy soft

  14. Deep learning

    CERN Document Server

    Goodfellow, Ian; Courville, Aaron

    2016-01-01

    Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language proces...

  15. Achievement report for fiscal 2000 on New Sunshine Project aiding program. Development of hot water utilizing power generation plant (Development of deep seated geothermal resource collection technologies - development of deep seated geothermal resource production technologies); 2000 nendo nessui riyo hatsuden plant to kaihatsu seika hokokusho. Shinbu chinetsu shigen saishu gijutsu no kaihatsu (Shinbu chinetsu shigen seisan gijutu no kaihatsu)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    Items of information about deep-seated geothermal resource production technologies were collected, and tests and studies were performed using actual wells. This paper summarizes the achievements in fiscal 2000. In developing the PTDS logging technology, actual well tests verified that the measured density of a D probe is consistent with the theoretical density and that the accuracy is satisfactory. Extended-time measurement at fixed points of the temperatures, pressures, flow rates, and densities of fluids in the wells identified chronological changes in the characteristics of the fluids, including the enthalpy, proving the measurements to be effective for well control. In developing the PTC monitoring technology, a fluid extracting machine for the downhole fluid sampler was fabricated, which successfully collected hot water in the actual well in two out of seven attempts. In developing the high-temperature tracer monitoring technology, experiments were performed using vapor-phase and liquid-phase tracers, and re-discharge of all the tracer materials was confirmed. In developing the scale preventing and removing technology, a silica recovering device capable of treating hot water at a maximum of 0.6 ton per hour was fabricated, and site tests were performed using a cation-based coagulant. (NEDO)

  16. A Global Survey and Interactive Map Suite of Deep Underground Facilities; Examples of Geotechnical and Engineering Capabilities, Achievements, Challenges: (Mines, Shafts, Tunnels, Boreholes, Sites and Underground Facilities for Nuclear Waste and Physics R&D)

    Science.gov (United States)

    Tynan, M. C.; Russell, G. P.; Perry, F.; Kelley, R.; Champenois, S. T.

    2017-12-01

    This global survey presents a synthesis of some notable geotechnical and engineering information reflected in four interactive layer maps for selected: 1) deep mines and shafts; 2) existing, considered or planned radioactive waste management deep underground studies, sites, or disposal facilities; 3) deep large diameter boreholes, and 4) physics underground laboratories and facilities from around the world. These data are intended to facilitate user access to basic information and references regarding deep underground "facilities", history, activities, and plans. In general, the interactive maps and database [http://gis.inl.gov/globalsites/] provide each facility's approximate site location, geology, and engineered features (e.g.: access, geometry, depth, diameter, year of operations, groundwater, lithology, host unit name and age, basin; operator, management organization, geographic data, nearby cultural features, other). Although the survey is not all encompassing, it is a comprehensive review of many of the significant existing and historical underground facilities discussed in the literature addressing radioactive waste management and deep mined geologic disposal safety systems. The global survey is intended to support and to inform: 1) interested parties and decision makers; 2) radioactive waste disposal and siting option evaluations, and 3) safety case development as a communication tool applicable to any mined geologic disposal facility as a demonstration of historical and current engineering and geotechnical capabilities available for use in deep underground facility siting, planning, construction, operations and monitoring.

  17. Achieving Cost Reduction Through Data Analytics.

    Science.gov (United States)

    Rocchio, Betty Jo

    2016-10-01

    The reimbursement structure of the US health care system is shifting from a volume-based system to a value-based system. Adopting a comprehensive data analytics platform has become important to health care facilities, in part to navigate this shift. Hospitals generate plenty of data, but actionable analytics are necessary to help personnel interpret and apply data to improve practice. Perioperative services is an important revenue-generating department for hospitals, and each perioperative service line requires a tailored approach to be successful in managing outcomes and controlling costs. Perioperative leaders need to prepare to use data analytics to reduce variation in supplies, labor, and overhead. Mercy, based in Chesterfield, Missouri, adopted a perioperative dashboard that helped perioperative leaders collaborate with surgeons and perioperative staff members to organize and analyze health care data, which ultimately resulted in significant cost savings. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  18. Auxiliary Deep Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    2016-01-01

    Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables, which improves the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge faster with better results. We show state-of-the-art performance within semi-supervised learning on the MNIST (0.96%), SVHN (16.61%) and NORB (9.40%) datasets.

  19. Development of batch electrolytic enrichment cells with 100-fold volume reduction, control electronic units and neutralization/distillation unit, to enable better sensitivity to be achieved in low-level tritium measurements when liquid scintillation counting follows the enrichment process

    International Nuclear Information System (INIS)

    Taylor, C.B.

    1980-06-01

    Full details of the batch-cell tritium enrichment system design are provided, including electronic control circuits specially developed for these cells. The system incorporates a new type of concentric-electrode cell (outer cathode of mild steel, anode of stainless steel, inner cathode of mild steel) with a volume reduction capability of 1 l to ca. 9 ml. Electrolysis of 20 cells is performed in 2 steps. Down to a sample volume of ca. 20 ml, the cells are series-connected at constant currents up to 14.5 A; in the 2nd step, each cell is connected to its own individual current supply (2 A) and control circuit. Automatic shut-off at the desired final volume is achieved by sensing the drop in current through the inner cathode as the electrolyte level falls below a PTFE insulator. The large electrode surface area and careful dimensioning at the foot of the cell allow operation with a low starting electrolyte concentration of 1.5 g Na2O2·l-1. After electrolysis, quantitative recovery as distilled water of all hydrogen from the enriched residue is achieved by CO2 neutralisation and vacuum distillation at 100 °C in a distillation unit which handles 20 cells simultaneously
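
    For orientation, the tritium enrichment obtainable from such a volume reduction can be estimated with the standard electrolytic enrichment relation E = (V0/Vf)^(1 - 1/beta). The separation factor beta = 20 below is an assumed typical value for mild steel cathodes, not a figure taken from this report:

```python
def tritium_enrichment_factor(v_initial_ml, v_final_ml, beta=20.0):
    """Standard electrolytic enrichment relation E = (V0/Vf)**(1 - 1/beta),
    where beta is the tritium/protium separation factor of the cathode.
    beta -> infinity gives the ideal case E = V0/Vf."""
    return (v_initial_ml / v_final_ml) ** (1.0 - 1.0 / beta)

# Quoted volume reduction of 1 l -> ca. 9 ml, with an assumed beta = 20.
E = tritium_enrichment_factor(1000.0, 9.0)   # roughly 88-fold enrichment
```

    The 100-fold volume reduction thus translates into somewhat less than 100-fold tritium enrichment, since some tritium is inevitably electrolysed along with the protium.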

  20. Scenarios for Deep Carbon Emission Reductions from Electricity by 2050 in Western North America using the Switch Electric Power Sector Planning Model: California's Carbon Challenge Phase II, Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, James; Mileva, Ana; Johnston, Josiah; Kammen, Daniel; Wei, Max; Greenblatt, Jeffrey

    2014-01-01

    This study used a state-of-the-art planning model called SWITCH for the electric power system to investigate the evolution of the power systems of California and western North America from present-day to 2050 in the context of deep decarbonization of the economy. Researchers concluded that drastic power system carbon emission reductions were feasible by 2050 under a wide range of possible futures. The average cost of power in 2050 would range between $149 to $232 per megawatt hour across scenarios, a 21 to 88 percent increase relative to a business-as-usual scenario, and a 38 to 115 percent increase relative to the present-day cost of power. The power system would need to undergo sweeping change to rapidly decarbonize. Between present-day and 2030 the evolution of the Western Electricity Coordinating Council power system was dominated by implementing aggressive energy efficiency measures, installing renewable energy and gas-fired generation facilities and retiring coal-fired generation. Deploying wind, solar and geothermal power in the 2040 timeframe reduced power system emissions by displacing gas-fired generation. This trend continued for wind and solar in the 2050 timeframe but was accompanied by large amounts of new storage and long-distance high-voltage transmission capacity. Electricity storage was used primarily to move solar energy from the daytime into the night to charge electric vehicles and meet demand from electrified heating. Transmission capacity over the California border increased by 40 - 220 percent by 2050, implying that transmission siting, permitting, and regional cooperation will become increasingly important. California remained a net electricity importer in all scenarios investigated. Wind and solar power were key elements in power system decarbonization in 2050 if no new nuclear capacity was built. The amount of installed gas capacity remained relatively constant between present-day and 2050, although carbon capture and sequestration was
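
    As a quick arithmetic check, the quoted 2050 cost range and percentage increases imply mutually consistent reference costs; this back-of-envelope calculation is ours, not part of the study:

```python
def implied_baseline(cost, pct_increase):
    """Back out the reference cost implied by a quoted cost and its
    percentage increase over that reference."""
    return cost / (1.0 + pct_increase / 100.0)

# Quoted 2050 costs ($/MWh) and increases vs. business-as-usual / present-day.
bau_low  = implied_baseline(149, 21)    # ~123 $/MWh business-as-usual
bau_high = implied_baseline(232, 88)    # ~123 $/MWh, consistent with above
now_low  = implied_baseline(149, 38)    # ~108 $/MWh present-day
now_high = implied_baseline(232, 115)   # ~108 $/MWh, consistent with above
```

    Both ends of the quoted range imply the same business-as-usual (~$123/MWh) and present-day (~$108/MWh) reference costs, so the reported percentages are internally consistent.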

  1. Deep Learning and Bayesian Methods

    OpenAIRE

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such meth...

  2. Deep Learning

    DEFF Research Database (Denmark)

    Jensen, Morten Bornø; Bahnsen, Chris Holmberg; Nasrollahi, Kamal

    2018-01-01

    Over the past 10 years, artificial neural networks have gone from being a dusty, cast-off technology to playing a leading role in the development of artificial intelligence. This phenomenon is called deep learning and is inspired by the structure of the brain.

  3. Deep geothermics

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The hot dry rocks located at 3-4 km depth are low-permeability rocks carrying a large amount of heat. Extracting this heat usually requires artificial hydraulic fracturing of the rock to increase its permeability before water injection. Hot-dry-rock geothermics, or deep geothermics, is not yet a commercial activity but a field of scientific and technological research. The Soultz-sous-Forets site (Northern Alsace, France) is characterized by a 6 degrees per meter geothermal gradient and is used as a natural laboratory for deep geothermal and geological studies in the framework of a European research program. Two boreholes have been drilled to a depth of 3600 m in the highly fractured granite massif beneath the site. The aim is to create a deep heat exchanger using only the natural fracturing for water transfer. A consortium of German, French and Italian industrial companies (Pfalzwerke, Badenwerk, EdF and Enel) has been created for more active participation in the pilot phase. (J.S.). 1 fig., 2 photos

  4. Model project for reduction of electric power consumption in cement plant. Report for fiscal 1999 on achievements of commissioned operation; Cement shosei plant denryoku shohi sakugen model jigyo 1999 nendo itaku gyomu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    With the objectives of utilizing energy efficiently and serving environmental improvement, a model project for the reduction of electric power consumption in a cement plant was implemented in Vietnam. This paper reports the achievements in fiscal 1999. The project installs waste heat boilers in the waste gas lines of the preheating process in the cement plant; the recovered heat raises steam to generate electric power using steam turbines. The current fiscal year executed the following activities: design of turbines, condensers, and oil units; discussions on the arrangement drawings thereof obtained from the Vietnamese side; design of piping and provision of the detailed drawings thereof to the Vietnamese side; planning of the electric cable routes; planning of instrumentation wiring routes; and grounding and interlocks. Results of the discussions on the proposed plant operation methods were reflected in the system design of the monitoring devices. Furthermore, the turbines were fabricated, and the associated facilities, valves, and piping materials were procured based on the detailed design. The piping materials were given pre-shipment inspections before being transported to the site. (NEDO)

  5. Streaming Reduction Circuit

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus; Kuper, Jan; Kokkeler, Andre B.J.; Molenkamp, Egbert

    2009-01-01

    Reduction circuits are used to reduce rows of floating point values to single values. Binary floating point operators often have deep pipelines, which may cause hazards when many consecutive rows have to be reduced. We present an algorithm by which any number of consecutive rows of arbitrary lengths
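
    The hazard the abstract alludes to can be illustrated with a toy cycle-count model: when a deeply pipelined adder must accumulate into its own output, each add waits for the previous partial sum to emerge. The pipeline depth and row lengths below are illustrative, and this is not the authors' scheduling algorithm:

```python
def naive_reduction_cycles(row_lengths, pipeline_depth):
    """Cycles needed if each add must wait for the previous partial sum
    to leave the adder pipeline (read-after-write hazard): every one of
    the n-1 adds in a row costs `pipeline_depth` cycles."""
    return sum((n - 1) * pipeline_depth for n in row_lengths)

def ideal_cycles(row_lengths, pipeline_depth):
    """Lower bound with perfect scheduling: one add issued per cycle
    across all rows, plus one final pipeline drain."""
    return sum(n - 1 for n in row_lengths) + pipeline_depth

rows = [4, 3, 5]          # consecutive short rows to reduce
depth = 8                 # illustrative FP adder pipeline depth
stalled = naive_reduction_cycles(rows, depth)   # 9 adds * 8 cycles = 72
ideal = ideal_cycles(rows, depth)               # 9 + 8 = 17 cycles
```

    The gap between the stalled and ideal counts is what a reduction circuit's scheduler closes, by interleaving adds from different rows so the pipeline stays full.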

  6. Pathways to Deep Decarbonization in the United States

    Science.gov (United States)

    Torn, M. S.; Williams, J.

    2015-12-01

    Limiting anthropogenic warming to less than 2°C will require a reduction in global net greenhouse gas (GHG) emissions on the order of 80% below 1990 levels by 2050. Thus, there is a growing need to understand what would be required to achieve deep decarbonization (DD) in different economies. We examined the technical and economic feasibility of such a transition in the United States, evaluating the infrastructure and technology changes required to reduce U.S. GHG emissions in 2050 by 80% below 1990 levels. Using the PATHWAYS and GCAM models, we found that this level of decarbonization in the U.S. can be accomplished with existing commercial or near-commercial technologies, while providing the same level of energy services and economic growth as a reference case based on the U.S. DOE Annual Energy Outlook. Reductions are achieved through high levels of energy efficiency, decarbonization of electric generation, electrification of most end uses, and switching the remaining end uses to lower carbon fuels. Incremental energy system cost would be equivalent to roughly 1% of gross domestic product, not including potential non-energy benefits such as avoided human and infrastructure costs of climate change. Starting now on the deep decarbonization path would allow infrastructure stock turnover to follow natural replacement rates, which reduces costs, eases demand on manufacturing, and allows gradual consumer adoption. Energy system changes must be accompanied by reductions in non-energy and non-CO2 GHG emissions.

  7. Deep smarts.

    Science.gov (United States)

    Leonard, Dorothy; Swap, Walter

    2004-09-01

    When a person sizes up a complex situation and rapidly comes to a decision that proves to be not just good but brilliant, you think, "That was smart." After you watch him do this a few times, you realize you're in the presence of something special. It's not raw brainpower, though that helps. It's not emotional intelligence, either, though that, too, is often involved. It's deep smarts. Deep smarts are not philosophical--they're not"wisdom" in that sense, but they're as close to wisdom as business gets. You see them in the manager who understands when and how to move into a new international market, in the executive who knows just what kind of talk to give when her organization is in crisis, in the technician who can track a product failure back to an interaction between independently produced elements. These are people whose knowledge would be hard to purchase on the open market. Their insight is based on know-how more than on know-what; it comprises a system view as well as expertise in individual areas. Because deep smarts are experienced based and often context specific, they can't be produced overnight or readily imported into an organization. It takes years for an individual to develop them--and no time at all for an organization to lose them when a valued veteran walks out the door. They can be taught, however, with the right techniques. Drawing on their forthcoming book Deep Smarts, Dorothy Leonard and Walter Swap say the best way to transfer such expertise to novices--and, on a larger scale, to make individual knowledge institutional--isn't through PowerPoint slides, a Web site of best practices, online training, project reports, or lectures. Rather, the sage needs to teach the neophyte individually how to draw wisdom from experience. Companies have to be willing to dedicate time and effort to such extensive training, but the investment more than pays for itself.

  8. pathways to deep decarbonization - 2014 report

    International Nuclear Information System (INIS)

    Sachs, Jeffrey; Guerin, Emmanuel; Mas, Carl; Schmidt-Traub, Guido; Tubiana, Laurence; Waisman, Henri; Colombier, Michel; Bulger, Claire; Sulakshana, Elana; Zhang, Kathy; Barthelemy, Pierre; Spinazze, Lena; Pharabod, Ivan

    2014-09-01

    The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (deg. C). Achieving the 2 deg. C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. This will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization.' A successful transition to a low-carbon economy will require unprecedented global cooperation, including a global cooperative effort to accelerate the development and diffusion of some key low carbon technologies. As underscored throughout this report, the results of the DDPP analyses remain preliminary and incomplete. The DDPP proceeds in two phases. This 2014 report describes the DDPP's approach to deep decarbonization at the country level and presents preliminary findings on technically feasible pathways to deep decarbonization, utilizing technology assumptions and timelines provided by the DDPP Secretariat. At this stage we have not yet considered the economic and social costs and benefits of deep decarbonization, which will be the topic for the next report. The DDPP is issuing this 2014 report to the UN Secretary-General Ban Ki-moon in support of the Climate Leaders' Summit at the United Nations on September 23, 2014. This 2014 report by the Deep Decarbonization Pathway Project (DDPP) summarizes preliminary findings of the technical pathways developed by the DDPP Country Research Partners with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C., without, at this stage, consideration of economic and social costs and benefits. 
The DDPP is a knowledge

  9. Deep Visual Attention Prediction

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements on human attention prediction, CNN-based attention models still need to leverage multi-scale features more efficiently. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of previous approaches of providing supervision only at the output layer and propagating this supervision back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
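
    The skip-layer fusion described in this record can be illustrated minimally: a coarse saliency map from a deep layer is upsampled to the resolution of a fine map from a shallow layer, and the two predictions are combined. This is a sketch under assumed shapes and a simple averaging rule, not the authors' network.

```python
# Illustrative sketch of skip-layer saliency fusion (not the authors' code):
# a coarse global map from a deep layer is upsampled and averaged with a
# fine local map from a shallow layer to form the final prediction.

def upsample_nearest(coarse, factor):
    """Nearest-neighbour upsampling of a 2D map given as a list of lists."""
    return [[coarse[i // factor][j // factor]
             for j in range(len(coarse[0]) * factor)]
            for i in range(len(coarse) * factor)]

def fuse_saliency(coarse, fine, factor):
    """Combine a global (coarse) and a local (fine) saliency prediction."""
    up = upsample_nearest(coarse, factor)
    return [[(up[i][j] + fine[i][j]) / 2.0
             for j in range(len(fine[0]))]
            for i in range(len(fine))]

coarse = [[0.2, 0.8],
          [0.4, 0.6]]                    # 2x2 global saliency (deep layer)
fine = [[0.0] * 4 for _ in range(4)]     # 4x4 local saliency (shallow layer)
fused = fuse_saliency(coarse, fine, 2)   # final 4x4 prediction
```

    In the real architecture the combination weights are learned and supervision is applied at every level; the averaging here only shows the data flow.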

  10. FOSTERING DEEP LEARNING AMONGST ENTREPRENEURSHIP ...

    African Journals Online (AJOL)

    An important prerequisite for this objective to be achieved is that lecturers ensure that students adopt a deep learning approach towards the entrepreneurship courses being taught, as this will enable them to truly understand key entrepreneurial concepts and strategies and how they can be implemented in the real ...

  11. Radon reduction

    International Nuclear Information System (INIS)

    Hamilton, M.A.

    1990-01-01

    During a radon gas screening program, elevated levels of radon gas were detected in homes on Mackinac Island, Mich. Six homes on foundations with crawl spaces were selected for a research project aimed at reducing radon gas concentrations, which ranged from 12.9 to 82.3 pCi/l. Using isolation and ventilation techniques, and variations thereof, radon concentrations were reduced to less than 1 pCi/l. This paper reports that these reductions were achieved using 3.5 mil cross laminated or 10 mil high density polyethylene plastic as a barrier without sealing to the foundation or support piers, solid and/or perforated plastic pipe and mechanical fans. Wind turbines were found to be ineffective at reducing concentrations to acceptable levels. Homeowners themselves installed all materials

  12. DeepPy: Pythonic deep learning

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This technical report introduces DeepPy – a deep learning framework built on top of NumPy with GPU acceleration. DeepPy bridges the gap between high-performance neural networks and the ease of development from Python/NumPy. Users with a background in scientific computing in Python will quickly be able to understand and change the DeepPy codebase as it is mainly implemented using high-level NumPy primitives. Moreover, DeepPy supports complex network architectures by letting the user compose mathematical expressions as directed graphs. The latest version is available at http...

  13. Deep Energy Retrofits - Eleven California Case Studies

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fisher, Jeremy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-10-01

    This research documents and demonstrates viable approaches using existing materials, tools and technologies in owner-conducted deep energy retrofits (DERs). These retrofits are meant to reduce energy use by 70% or more, and include extensive upgrades to the building enclosure, heating, cooling and hot water equipment, and often incorporate appliance and lighting upgrades as well as the addition of renewable energy. In this report, 11 Northern California (IECC climate zone 3) DER case studies are described and analyzed in detail, including building diagnostic tests and end-use energy monitoring results. All projects recognized the need to improve the home and its systems approximately to current building-code levels, and then pursued deeper energy reductions through either enhanced technology/building enclosure measures, or through occupant conservation efforts, both of which achieved impressive energy performance and reductions. The beyond-code incremental DER costs averaged $25,910 for the six homes where cost data were available. DERs were affordable when these incremental costs were financed as part of a remodel, averaging a $30 per month increase in the net cost of home ownership.

  14. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  15. Vietnam; Poverty Reduction Strategy Paper

    OpenAIRE

    International Monetary Fund

    2004-01-01

    This paper assesses the Poverty Reduction Strategy Paper of Vietnam, known as the Comprehensive Poverty Reduction and Growth Strategy (CPRGS). It is an action program to achieve economic growth and poverty reduction objectives. This paper reviews the objectives and tasks of socio-economic development and poverty reduction. The government of Vietnam takes poverty reduction as a cutting-through objective in the process of country socio-economic development and declares its commitment to impleme...

  16. Greedy Deep Dictionary Learning

    OpenAIRE

    Tariyal, Snigdha; Majumdar, Angshul; Singh, Richa; Vatsa, Mayank

    2016-01-01

    In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time. This requires solving a simple (shallow) dictionary learning problem, the solution to this is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state of the art supervised dictionary learning t...

  17. Response to deep TMS in depressive patients with previous electroconvulsive treatment.

    Science.gov (United States)

    Rosenberg, Oded; Zangen, Abraham; Stryjer, Rafael; Kotler, Moshe; Dannon, Pinhas N

    2010-10-01

    The efficacy of transcranial magnetic stimulation (TMS) in the treatment of major depression has already been shown. Novel TMS coils allowing stimulation of deeper brain regions have recently been developed and studied. Our study is aimed at exploring the possible efficacy of deep TMS in patients with resistant depression, who previously underwent electroconvulsive therapy (ECT). Using Brainsway's deep TMS H1 coil, six patients who previously underwent ECT were treated with 120% power of the motor threshold at a frequency of 20 Hz. Patients underwent five sessions per week, up to 4 weeks. Before the study, patients were evaluated using the Hamilton depression rating scale (HDRS, 24 items), the Hamilton anxiety scale, and the Beck depression inventory and were again evaluated after 5, 10, 15, and 20 daily treatments. Response to treatment was considered a reduction in the HDRS of at least 50%, and remission was considered a reduction of the HDRS-24 below 10 points. Two of six patients responded to the treatment with deep TMS, including one who achieved full remission. Our results suggest the possibility of a subpopulation of depressed patients who may benefit from deep TMS treatment, including patients who did not respond to ECT previously. However, the power of the study is small, and larger samples are needed. Copyright © 2010 Elsevier Inc. All rights reserved.
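
    The response and remission criteria used in this study reduce to two simple predicates; the sketch below restates them (function names are illustrative, not from the paper):

```python
# Outcome criteria as described in the record: response is a drop of at
# least 50% in HDRS-24 from baseline; remission is a final HDRS-24 below 10.
# Function names are assumptions for illustration, not from the study.

def is_response(baseline_hdrs, final_hdrs):
    """At least a 50% reduction from the baseline score."""
    return final_hdrs <= 0.5 * baseline_hdrs

def is_remission(final_hdrs):
    """Final HDRS-24 score below 10 points."""
    return final_hdrs < 10

# A patient dropping from 30 to 8 both responds and remits;
# a drop from 30 to 16 does neither.
```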

  18. The anaerobic degradation of organic matter in Danish coastal sediments: iron reduction, manganese reduction, and sulfate reduction

    DEFF Research Database (Denmark)

    Canfield, Donald Eugene; Thamdrup, B; Hansen, Jens Würgler

    1993-01-01

    ... In the deep portion of the basin, surface Mn enrichments reached 3.5 wt%, and Mn reduction was the only important anaerobic carbon oxidation process in the upper 10 cm of the sediment. In the less Mn-rich sediments from intermediate depths in the basin, Fe reduction ranged from somewhat less to far more ... We speculate that in shallow sediments of the Skagerrak, surface Mn oxides are present in a somewhat reduced oxidation level ...

  19. Global Emissions of Nitrous Oxide: Key Source Sectors, their Future Activities and Technical Opportunities for Emission Reduction

    Science.gov (United States)

    Winiwarter, W.; Höglund-Isaksson, L.; Klimont, Z.; Schöpp, W.; Amann, M.

    2017-12-01

    Nitrous oxide originates primarily from natural biogeochemical processes, but its atmospheric concentrations have been strongly affected by human activities. According to IPCC, it is the third largest contributor to the anthropogenic greenhouse gas emissions (after carbon dioxide and methane). Deep decarbonization scenarios, which are able to constrain global temperature increase within 1.5°C, require strategies to cut methane and nitrous oxide emissions on top of phasing out carbon dioxide emissions. Employing the Greenhouse gas and Air pollution INteractions and Synergies (GAINS) model, we have estimated global emissions of nitrous oxide until 2050. Using explicitly defined emission reduction technologies we demonstrate that, by 2030, about 26% ± 9% of the emissions can be avoided assuming full implementation of currently existing reduction technologies. Nearly a quarter of this mitigation can be achieved at marginal costs lower than 10 Euro/t CO2-eq with the chemical industry sector offering important reductions. Overall, the largest emitter of nitrous oxide, agriculture, also provides the largest emission abatement potentials. Emission reduction may be achieved by precision farming methods (variable rate technology) as well as by agrochemistry (nitrification inhibitors). Regionally, the largest emission reductions are achievable where intensive agriculture and industry are prevalent (production and application of mineral fertilizers): Centrally Planned Asia including China, North and Latin America, and South Asia including India. Further deep cuts in nitrous oxide emissions will require extending reduction efforts beyond strictly technological solutions, i.e., considering behavioral changes, including widespread adoption of "healthy diets" minimizing excess protein consumption.
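
    The headline mitigation figure above (about 26% ± 9% of emissions avoidable by 2030) is a simple proportional range; a back-of-envelope check, where the emissions total is a placeholder rather than a value from the study:

```python
# Back-of-envelope check of the abatement range quoted above (26% +/- 9%).
# The total-emissions figure is an arbitrary placeholder, not from the paper.

def abatable_range(total_emissions, central=0.26, spread=0.09):
    """Range of emissions avoidable with full uptake of current technology."""
    return ((central - spread) * total_emissions,
            (central + spread) * total_emissions)

low, high = abatable_range(100.0)  # for 100 units of annual N2O emissions
```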

  20. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

  1. Deep Energy Retrofit Guidance for the Building America Solutions Center

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-01-01

    The U.S. DOE Building America program has established a research agenda targeting market-relevant strategies to achieve 40% reductions in existing home energy use by 2030. Deep Energy Retrofits (DERs) are part of the strategy to meet and exceed this goal. DERs are projects that create new, valuable assets from existing residences, by bringing homes into alignment with the expectations of the 21st century. Ideally, high energy using, dated homes that are failing to provide adequate modern services to their owners and occupants (e.g., comfortable temperatures, acceptable humidity, clean, healthy), are transformed through comprehensive upgrades to the building envelope, services and miscellaneous loads into next generation high performance homes. These guidance documents provide information to aid in the broader market adoption of DERs. They are intended for inclusion in the online resource the Building America Solutions Center (BASC). This document is an assemblage of multiple entries in the BASC, each of which addresses a specific aspect of Deep Energy Retrofit best practices for projects targeting at least 50% energy reductions. The contents are based upon a review of actual DERs in the U.S., as well as a mixture of engineering judgment, published guidance from DOE research in technologies and DERs, simulations of cost-optimal DERs, Energy Star and Consortium for Energy Efficiency (CEE) product criteria, and energy codes.

  2. Deep learning aided decision support for pulmonary nodules diagnosing: a review.

    Science.gov (United States)

    Yang, Yixin; Feng, Xiaoyi; Chi, Wenhao; Li, Zhengyang; Duan, Wenzhe; Liu, Haiping; Liang, Wenhua; Wang, Wei; Chen, Ping; He, Jianxing; Liu, Bo

    2018-04-01

    Deep learning techniques have recently emerged as promising decision-support approaches to automatically analyze medical images for different clinical diagnosing purposes. Diagnosing of pulmonary nodules by computer-assisted diagnosis has received considerable theoretical, computational, and empirical research attention, and numerous methods have been developed for detection and classification of pulmonary nodules on different formats of images including chest radiographs, computed tomography (CT), and positron emission tomography over the past five decades. The recent remarkable and significant progress in deep learning for pulmonary nodules achieved in both academia and industry has demonstrated that deep learning techniques seem to be promising alternative decision-support schemes to effectively tackle the central issues in pulmonary nodule diagnosing, including feature extraction, nodule detection, false-positive reduction, and benign-malignant classification for the huge volume of chest scan data. The main goal of this investigation is to provide a comprehensive state-of-the-art review of deep learning aided decision support for pulmonary nodule diagnosing. As far as the authors know, this is the first time that a review is devoted exclusively to deep learning techniques for pulmonary nodule diagnosing.

  3. Reduction redux.

    Science.gov (United States)

    Shapiro, Lawrence

    2018-04-01

    Putnam's criticisms of the identity theory attack a straw man. Fodor's criticisms of reduction attack a straw man. Properly interpreted, Nagel offered a conception of reduction that captures everything a physicalist could want. I update Nagel, introducing the idea of overlap, and show why multiple realization poses no challenge to reduction so construed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Taoism and Deep Ecology.

    Science.gov (United States)

    Sylvan, Richard; Bennett, David

    1988-01-01

    Contrasted are the philosophies of Deep Ecology and ancient Chinese Taoism. Discusses the cosmology, morality, lifestyle, views of power, politics, and environmental philosophies of each. Concludes that Deep Ecology could gain much from Taoism. (CW)

  5. Hello World Deep Learning in Medical Imaging.

    Science.gov (United States)

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    Machine learning, notably deep learning, has recently become popular in medical imaging, achieving state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.

  6. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  7. Deep Space Telecommunications

    Science.gov (United States)

    Kuiper, T. B. H.; Resch, G. M.

    2000-01-01

    The increasing load on NASA's Deep Space Network, the new capabilities for deep space missions inherent in a next-generation radio telescope, and the potential of new telescope technology for reducing construction and operation costs suggest a natural marriage between radio astronomy and deep space telecommunications in developing advanced radio telescope concepts.

  8. Active3 noise reduction

    International Nuclear Information System (INIS)

    Holzfuss, J.

    1996-01-01

    Noise reduction is a problem being encountered in a variety of applications, such as environmental noise cancellation, signal recovery and separation. Passive noise reduction is done with the help of absorbers. Active noise reduction includes the transmission of phase inverted signals for the cancellation. This paper is about a threefold active approach to noise reduction. It includes the separation of a combined source, which consists of both a noise and a signal part. With the help of interaction with the source by scanning it and recording its response, modeling as a nonlinear dynamical system is achieved. The analysis includes phase space analysis and global radial basis functions as tools for the prediction used in a subsequent cancellation procedure. Examples are given which include noise reduction of speech. copyright 1996 American Institute of Physics
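
    The core of active cancellation, transmitting a phase-inverted estimate of the noise, can be sketched in a few lines. The sketch assumes the ideal case of a perfect noise estimate; the record's nonlinear-dynamics modeling and radial-basis prediction are not reproduced.

```python
# Minimal sketch of phase-inverted noise cancellation. A perfect noise
# estimate is assumed; the paper's nonlinear modeling step is omitted.
import math

n = 256
signal = [0.5 * math.sin(2 * math.pi * 5 * t / n) for t in range(n)]   # wanted
noise = [0.8 * math.sin(2 * math.pi * 13 * t / n) for t in range(n)]   # unwanted
mixture = [s + v for s, v in zip(signal, noise)]

# The canceller adds the phase-inverted noise estimate to the mixture.
cancelled = [m + (-v) for m, v in zip(mixture, noise)]

def rms(x):
    """Root-mean-square amplitude."""
    return math.sqrt(sum(v * v for v in x) / len(x))

# rms(cancelled) matches rms(signal): the noise term has been removed.
```

    In practice the quality of cancellation is limited by how well the noise waveform can be predicted, which is why the paper models the source as a nonlinear dynamical system.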

  9. Cough event classification by pretrained deep neural network.

    Science.gov (United States)

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases. In the measurement of cough severity, an accurate and objective cough monitor is expected by the respiratory disease society. This paper aims to introduce a better performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information in the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step is then a back-propagation pass tuning the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally achieved by force-alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs respectively. The final decision is made based on the Viterbi decoding algorithm that generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset that was collected from 22 patients with respiratory diseases. Patient dependent (PD) and patient independent (PI) experimental settings were used to evaluate the models. Five criteria, sensitivity, specificity, F1, macro average and micro average are shown to depict different aspects of the models. On the overall evaluation criteria, the DNN based methods are superior to the traditional GMM-HMM based method on F1 and micro average, with maximal 14% and 11% error reduction in PD and 7% and 10% in PI, meanwhile keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity with maximal 14% error reduction on both PD and PI. 
In this paper, we tried pretrained deep neural network in
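
    The decision rule described in this record (label a sample as cough if a cough HMM appears in the most likely state sequence) can be illustrated with a toy two-state Viterbi decoder. All probabilities below are invented for the example, and a single cough state stands in for the paper's three cough HMMs.

```python
# Toy Viterbi decoder illustrating the cough/noncough decision rule.
# States, transitions and emissions are invented for the example.
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence (log domain)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("cough", "noncough")
start_p = {"cough": 0.5, "noncough": 0.5}
trans_p = {"cough": {"cough": 0.7, "noncough": 0.3},
           "noncough": {"cough": 0.3, "noncough": 0.7}}
# Observations are quantized audio frames: "burst" frames favour cough.
emit_p = {"cough": {"burst": 0.8, "quiet": 0.2},
          "noncough": {"burst": 0.2, "quiet": 0.8}}

seq = viterbi(["quiet", "burst", "burst", "quiet"], states, start_p, trans_p, emit_p)
is_cough = "cough" in seq  # label the sample as cough if a cough state appears
```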

  10. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...

  11. Deep learning evaluation using deep linguistic processing

    OpenAIRE

    Kuhnle, Alexander; Copestake, Ann

    2017-01-01

    We discuss problems with the standard approaches to evaluation for tasks like visual question answering, and argue that artificial data can be used to address these as a complement to current practice. We demonstrate that with the help of existing 'deep' linguistic processing technology we are able to create challenging abstract datasets, which enable us to investigate the language understanding abilities of multimodal deep learning models in detail, as compared to a single performance value ...

  12. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors still existed, indicating that our approach still needs to be improved.

  14. A Roadmap for Achieving Sustainable Energy Conversion and Storage: Graphene-Based Composites Used Both as an Electrocatalyst for Oxygen Reduction Reactions and an Electrode Material for a Supercapacitor

    Directory of Open Access Journals (Sweden)

    Peipei Huo

    2018-01-01

    Full Text Available Based on its unique features, including 2D planar geometry, high specific surface area and electron conductivity, graphene has been intensively studied as an oxygen reduction reaction (ORR) electrocatalyst and a supercapacitor material. On the one hand, graphene possesses standalone electrocatalytic activity. It can also provide a good support for combining with other materials to generate graphene-based electrocatalysts, where the catalyst-support structure improves the stability and performance of electrocatalysts for ORR. On the other hand, graphene itself and its derivatives demonstrate a promising electrochemical capability as supercapacitors, including electric double-layer capacitors (EDLCs) and pseudosupercapacitors. A hybrid supercapacitor (HS) is highlighted and its advantages are elaborated. Graphene endows many materials that are capable of faradaic redox reactions with an outstanding pseudocapacitance behavior. In addition, the characteristics of graphene-based composites are also utilized in many respects to provide a porous 3D structure, formulate a novel supercapacitor with innovative design, and construct a flexible and tailorable device. In this review, we will present an overview of the use of graphene-based composites for sustainable energy conversion and storage.

  15. Reduced impact of induced gate noise on inductively degenerated LNAs in deep submicron CMOS technologies

    DEFF Research Database (Denmark)

    Rossi, P.; Svelto, F.; Mazzanti, A.

    2005-01-01

    Designers of radio-frequency inductively-degenerated CMOS low-noise amplifiers have usually not followed the guidelines for achieving minimum noise figure. Nonetheless, state-of-the-art implementations display noise figure values very close to the theoretical minimum. In this paper, we point out that this is due to the effect of the parasitic overlap capacitances in the MOS device. In particular, we show that overlap capacitances lead to a significant induced-gate-noise reduction, especially when deep sub-micron CMOS processes are used.

  16. Outcomes of the DeepWind conceptual design

    NARCIS (Netherlands)

    Paulsen, US; Borg, M.; Madsen, HA; Pedersen, TF; Hattel, J; Ritchie, E.; Simao Ferreira, C.; Svendsen, H.; Berthelsen, P.A.; Smadja, C.

    2015-01-01

    DeepWind has been presented as a novel floating offshore wind turbine concept with cost reduction potentials. Twelve international partners developed a Darrieus type floating turbine with new materials and technologies for deep-sea offshore environment. This paper summarizes results of the 5 MW

  17. Acetogenesis in the energy-starved deep biosphere - a paradox?

    DEFF Research Database (Denmark)

    Lever, Mark

    2012-01-01

    Under anoxic conditions in sediments, acetogens are often thought to be outcompeted by microorganisms performing energetically more favorable metabolic pathways, such as sulfate reduction or methanogenesis. Recent evidence from deep subseafloor sediments suggesting acetogenesis in the presence of ... to be taken into account to understand microbial survival in the energy-depleted deep biosphere.

  18. Deep Energy Retrofit Performance Metric Comparison: Eight California Case Studies

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fisher, Jeremy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-06-01

    In this paper we will present the results of monitored annual energy use data from eight residential Deep Energy Retrofit (DER) case studies using a variety of performance metrics. For each home, the details of the retrofits were analyzed, diagnostic tests to characterize the home were performed and the homes were monitored for total and individual end-use energy consumption for approximately one year. Annual performance in site and source energy, as well as carbon dioxide equivalent (CO2e) emissions were determined on a per house, per person and per square foot basis to examine the sensitivity to these different metrics. All eight DERs showed consistent success in achieving substantial site energy and CO2e reductions, but some projects achieved very little, if any source energy reduction. This problem emerged in those homes that switched from natural gas to electricity for heating and hot water, resulting in energy consumption dominated by electricity use. This demonstrates the crucial importance of selecting an appropriate metric to be used in guiding retrofit decisions. Also, due to the dynamic nature of DERs, with changes in occupancy, size, layout, and comfort, several performance metrics might be necessary to understand a project’s success.
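
    The site-versus-source discrepancy described above is simple arithmetic once site-to-source multipliers are fixed. A minimal sketch with assumed typical US multipliers (about 3.0 for grid electricity, 1.1 for natural gas) and invented usage numbers, showing how an all-electric retrofit can cut site energy sharply while barely reducing source energy:

```python
# Illustrative sketch (not the paper's monitored data): why a gas-to-electric
# retrofit can cut site energy while leaving source energy nearly unchanged.
# Site-to-source multipliers are assumed typical US values.
SITE_TO_SOURCE = {"electricity": 3.0, "gas": 1.1}

def site_energy(use_by_fuel):
    """Total site energy (kWh) summed over fuels."""
    return sum(use_by_fuel.values())

def source_energy(use_by_fuel):
    """Total source (primary) energy, weighting each fuel by its multiplier."""
    return sum(SITE_TO_SOURCE[fuel] * kwh for fuel, kwh in use_by_fuel.items())

pre = {"electricity": 5000.0, "gas": 15000.0}   # invented pre-retrofit usage
post = {"electricity": 9000.0, "gas": 0.0}      # invented all-electric usage

site_cut = 1 - site_energy(post) / site_energy(pre)        # 55% site reduction
source_cut = 1 - source_energy(post) / source_energy(pre)  # only ~14% at source
```

    With these assumed inputs the retrofit looks excellent by the site metric and marginal by the source metric, which is the sensitivity to metric choice the case studies report.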

  19. Could US mayors achieve the entire US Paris climate target?

    Science.gov (United States)

    Gurney, K. R.; Huang, J.; Hutchins, M.; Liang, J.

    2017-12-01

    After the recent US Federal Administration announcement not to adhere to the Paris Accords, 359 mayors (and counting) in the US pledged to maintain their commitments, reducing emissions within their jurisdictions by 26-28% from their 2005 levels by the year 2025. While important, this leaves a large portion of the US landscape, and a large amount of US emissions, outside of the Paris commitment. With Federal US policy looking unlikely to change, could additional effort by US cities overcome the gap in national policy and achieve the equivalent US national Paris commitment? How many cities would be required and how deep would reductions need to be? Until now, this question could not be reliably resolved due to lack of data at the urban scale. Here, we answer this question with new data - the Vulcan V3.0 FFCO2 emissions data product - through examination of the total US energy-related CO2 emissions from cities. We find that the top 500 urban areas in the US could meet the national US commitment to the Paris Accords with a reduction of roughly 30% below their 2015 levels by the year 2025. This is driven by the share of US emissions emanating from cities, particularly the largest cohort. Indeed, as the number of urban areas taking on CO2 reduction targets grows, the reduction burden on any individual city shrinks. In this presentation, we provide an analysis of US urban CO2 emissions and US climate policy, accounting for varying definitions of urban areas, emitting sectors, and the tradeoff between the number of policy-active cities and the CO2 reduction burden.
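
    The tradeoff between the number of policy-active cities and the per-city reduction burden reduces to a ratio. A back-of-envelope sketch with assumed round numbers (not Vulcan data); only the resulting ~30% figure echoes the abstract:

```python
# Back-of-envelope sketch with invented numbers: if participating cities cover
# a fraction `urban_share` of national emissions, how deep must their cuts be
# to deliver the whole national Paris reduction on their own?
def required_urban_cut(national_emissions, national_cut_fraction, urban_share):
    """Fractional reduction the covered cities must achieve if only they act."""
    needed = national_emissions * national_cut_fraction  # absolute cut required
    urban = national_emissions * urban_share             # emissions they control
    return needed / urban

# Assumed: a 27% national cut, with the covered cities emitting ~90% of the
# total (the share implied by the abstract's ~30% result, not measured here).
cut = required_urban_cut(5000.0, 0.27, 0.90)  # 0.27 / 0.90 = 0.30
```

    The ratio also shows the stated tradeoff directly: as `urban_share` grows (more cities participate), the required per-city cut shrinks toward the national target fraction.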

  20. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
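
    The non-differentiability the abstract works around can be sketched in a few lines. A minimal illustration of the general surrogate-gradient idea for spiking neurons; the triangular pseudo-derivative and all names here are our assumptions, not the authors' exact scheme:

```python
import numpy as np

# Sketch: the spike nonlinearity is a step function whose derivative is zero
# almost everywhere, so backpropagation substitutes a smooth surrogate
# derivative of the membrane potential near threshold (assumed triangular here).
THRESHOLD = 1.0

def spike(v):
    """Forward pass: a neuron fires iff its membrane potential reaches threshold."""
    return (v >= THRESHOLD).astype(float)

def surrogate_grad(v, width=0.5):
    """Backward pass: triangular pseudo-derivative centred on the threshold."""
    return np.maximum(0.0, 1.0 - np.abs(v - THRESHOLD) / width)

v = np.array([0.2, 0.9, 1.0, 1.4])  # membrane potentials
s = spike(v)                        # binary spike outputs
g = surrogate_grad(v)               # nonzero gradient only near threshold
```

    The forward pass stays event-based (binary spikes), while the backward pass sees a continuous signal, which is the essence of treating spike discontinuities as noise on an otherwise differentiable membrane potential.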

  1. WFIRST: Science from Deep Field Surveys

    Science.gov (United States)

    Koekemoer, Anton; Foley, Ryan; WFIRST Deep Field Working Group

    2018-01-01

    WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would for example yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed on locations in the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and present here a summary of the various properties of different locations in the sky that may be considered for future deep fields with WFIRST.

  2. Pathways to deep decarbonization - 2015 report

    International Nuclear Information System (INIS)

    Ribera, Teresa; Colombier, Michel; Waisman, Henri; Bataille, Chris; Pierfederici, Roberta; Sachs, Jeffrey; Schmidt-Traub, Guido; Williams, Jim; Segafredo, Laura; Hamburg Coplan, Jill; Pharabod, Ivan; Oury, Christian

    2015-12-01

    In September 2015, the Deep Decarbonization Pathways Project published the Executive Summary of the Pathways to Deep Decarbonization: 2015 Synthesis Report. The full 2015 Synthesis Report was launched in Paris on December 3, 2015, at a technical workshop with the Mitigation Action Plans and Scenarios (MAPS) program. The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (deg. C). Achieving the 2 deg. C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. In turn, this will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization'

  3. Applications of Deep Learning in Biomedicine.

    Science.gov (United States)

    Mamoshina, Polina; Vieira, Armando; Putin, Evgeny; Zhavoronkov, Alex

    2016-05-02

    Increases in throughput and installed base of biomedical research equipment led to a massive accumulation of -omics data known to be highly variable, high-dimensional, and sourced from multiple often incompatible data platforms. While this data may be useful for biomarker identification and drug discovery, the bulk of it remains underutilized. Deep neural networks (DNNs) are efficient algorithms based on the use of compositional layers of neurons, with advantages well matched to the challenges -omics data presents. While achieving state-of-the-art results and even surpassing human accuracy in many challenging tasks, the adoption of deep learning in biomedicine has been comparatively slow. Here, we discuss key features of deep learning that may give this approach an edge over other machine learning methods. We then consider limitations and review a number of applications of deep learning in biomedical studies demonstrating proof of concept and practical utility.

  4. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    We train a Recurrent Neural Network (RNN) on existing relevant information for that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is, on average, ranked most relevant of all.

  5. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed Deep Galaxy architecture consists of 8 layers: one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...

  6. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. Hand-designing an effective feature is a lengthy process, and for new applications deep learning makes it possible to acquire effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the designers' prior knowledge and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, with models including millions of parameters. This review first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.

  7. DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks.

    Science.gov (United States)

    Li, Chao; Wang, Xinggang; Liu, Wenyu; Latecki, Longin Jan

    2018-04-01

    Mitotic count is a critical predictor of tumor aggressiveness in breast cancer diagnosis. Nowadays mitosis counting is mainly performed by pathologists manually, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting mitotic cells from histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating the mitosis region when only a weak label is given (i.e., only the centroid pixel of mitosis is annotated), an elaborately designed deep detection network for localizing mitosis by using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from the ICPR 2012 grand challenge merely using the deep detection network. For the ICPR 2014 MITOSIS dataset, which only provides the centroid location of mitosis, we employ the segmentation model to estimate the bounding box annotation for training the deep detection network. We also apply the verification model to eliminate some false positives produced by the detection model. By fusing scores of the detection and verification models, we achieve state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Staged Inference using Conditional Deep Learning for energy efficient real-time smart diagnosis.

    Science.gov (United States)

    Parsa, Maryam; Panda, Priyadarshini; Sen, Shreyas; Roy, Kaushik

    2017-07-01

    Recent progress in biosensor technology and wearable devices has created a formidable opportunity for remote healthcare monitoring systems as well as real-time diagnosis and disease prevention. The use of data mining techniques is indispensable for analysis of the large pool of data generated by the wearable devices. Deep learning is among the promising methods for analyzing such data for healthcare applications and disease diagnosis. However, the conventional deep neural networks are computationally intensive and it is impractical to use them in real-time diagnosis with low-powered on-body devices. We propose Staged Inference using Conditional Deep Learning (SICDL), as an energy efficient approach for creating healthcare monitoring systems. For smart diagnostics, we observe that all diagnoses are not equally challenging. The proposed approach thus decomposes the diagnoses into preliminary analysis (such as healthy vs unhealthy) and detailed analysis (such as identifying the specific type of cardio disease). The preliminary diagnosis is conducted real-time with a low complexity neural network realized on the resource-constrained on-body device. The detailed diagnosis requires a larger network that is implemented remotely in cloud and is conditionally activated only for detailed diagnosis (unhealthy individuals). We evaluated the proposed approach using available physiological sensor data from Physionet databases, and achieved 38% energy reduction in comparison to the conventional deep learning approach.
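
    The staging described above amounts to a conditional branch around the expensive model. A toy sketch of the control flow only; the classifier stubs, field names, and threshold are invented stand-ins, not SICDL's actual networks:

```python
# Conceptual sketch of staged (conditional) inference: a cheap on-device model
# screens every sample, and the costly detailed model runs only on samples the
# screener flags as unhealthy, saving energy on the common healthy case.
def staged_diagnosis(sample, screener, detailed, threshold=0.5):
    p_unhealthy = screener(sample)        # stage 1: low-complexity, on-body
    if p_unhealthy < threshold:
        return "healthy", None            # detailed stage never invoked
    return "unhealthy", detailed(sample)  # stage 2: conditional, in the cloud

# Stand-in models for illustration: a toy risk score and a fixed diagnosis.
screener = lambda s: s["risk_score"]
detailed = lambda s: "arrhythmia"

label, detail = staged_diagnosis({"risk_score": 0.2}, screener, detailed)
# low-risk sample: the expensive second stage is skipped entirely
```

    The energy saving comes from how rarely the second branch is taken, which is why the reported gains depend on the mix of healthy and unhealthy samples.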

  9. Deep Vein Thrombosis

    African Journals Online (AJOL)

    OWNER

    Deep Vein Thrombosis: Risk Factors and Prevention in Surgical Patients. Deep Vein ... preventable morbidity and mortality in hospitalized surgical patients ... the elderly ... It is very rare before the age ... depends on the risk level; therefore an ... but also in the post-operative period ... is continuing uncertainty regarding ...

  10. Reduction corporoplasty.

    Science.gov (United States)

    Hakky, Tariq S; Martinez, Daniel; Yang, Christopher; Carrion, Rafael E

    2015-01-01

    Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. We performed bilateral elliptical incisions over the lateral corpora to manage the aneurysmal dilation of the corpora and correct the phallic disfigurement. The patient tolerated the procedure well and had resolution of his corporal disfigurement. Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation for correcting phallic disfigurement.

  11. Deep learning methods for protein torsion angle prediction.

    Science.gov (United States)

    Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin

    2017-09-18

    Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30° on an independent dataset. The MAE of the phi angle is comparable to existing methods, but the MAE of the psi angle is 29°, 2° lower than existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
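
    One computational detail behind MAE figures like those above is that torsion-angle errors are periodic: an error of 359° is really an error of 1°. A small sketch of angular MAE with 360° wraparound; the wraparound convention is a standard choice for angle comparison, not taken from the paper:

```python
# Mean absolute error for angles in degrees, respecting 360-degree periodicity:
# the error between two angles is the shorter way around the circle.
def angular_mae(pred, true):
    errors = []
    for p, t in zip(pred, true):
        d = abs(p - t) % 360.0
        errors.append(min(d, 360.0 - d))  # take the shorter arc
    return sum(errors) / len(errors)

# Invented example angles: naive |350 - 5| would give 345, but the true
# circular errors are 15 and 30 degrees, so the MAE is 22.5 degrees.
mae = angular_mae([350.0, 10.0], [5.0, 40.0])
```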

  12. Deep Echo State Network (DeepESN): A Brief Survey

    OpenAIRE

    Gallicchio, Claudio; Micheli, Alessio

    2017-01-01

    The study of deep recurrent neural networks (RNNs) and, in particular, of deep Reservoir Computing (RC) is gaining increasing research attention in the neural networks community. The recently introduced deep Echo State Network (deepESN) model opened the way to an extremely efficient approach for designing deep neural networks for temporal data. At the same time, the study of deepESNs has allowed researchers to shed light on the intrinsic properties of state dynamics developed by hierarchical compositions ...

  13. Air Layer Drag Reduction

    Science.gov (United States)

    Ceccio, Steven; Elbing, Brian; Winkel, Eric; Dowling, David; Perlin, Marc

    2008-11-01

    A set of experiments has been conducted at the US Navy's Large Cavitation Channel to investigate skin-friction drag reduction with the injection of air into a high Reynolds number turbulent boundary layer. Testing was performed on a 12.9 m long flat-plate test model with the surface hydraulically smooth and fully rough, at downstream-distance-based Reynolds numbers up to 220 million and at speeds up to 20 m/s. Local skin-friction, near-wall bulk void fraction, and near-wall bubble imaging were monitored along the length of the model. The instrument suite was used to assess the requirements necessary to achieve air layer drag reduction (ALDR). Injection of air over a wide range of air fluxes showed that three drag reduction regimes exist when injecting air: (1) bubble drag reduction, which has poor downstream persistence; (2) a transitional regime with a steep rise in drag reduction; and (3) the ALDR regime, where the drag reduction plateaus at 90% ± 10% over the entire model length with large void fractions in the near-wall region. These investigations revealed several requirements for ALDR: sufficient volumetric air fluxes that increase approximately with the square of the free-stream speed; slightly higher air fluxes when the surface tension is reduced; higher air fluxes for rough surfaces; and sensitivity of ALDR formation to the inlet condition.
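
    The reported scaling of required air flux with speed can be written down directly. In this sketch only the quadratic scaling and the rough-surface penalty come from the abstract; the reference point, units, and roughness factor are invented for illustration:

```python
# Sketch of the reported ALDR requirement: the volumetric air flux needed
# grows approximately with the square of the free-stream speed, and rough
# surfaces need somewhat more. Reference values are arbitrary placeholders.
def required_air_flux(speed, ref_speed=10.0, ref_flux=1.0, roughness_factor=1.0):
    """Approximate required air flux (arbitrary units) at a given speed (m/s)."""
    return roughness_factor * ref_flux * (speed / ref_speed) ** 2

flux_10 = required_air_flux(10.0)  # reference condition
flux_20 = required_air_flux(20.0)  # doubling speed quadruples the air flux
```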

  14. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Deep subsurface microbial processes

    Science.gov (United States)

    Lovley, D.R.; Chapelle, F.H.

    1995-01-01

    Information on the microbiology of the deep subsurface is necessary in order to understand the factors controlling the rate and extent of the microbially catalyzed redox reactions that influence the geophysical properties of these environments. Furthermore, there is an increasing threat that deep aquifers, an important drinking water resource, may be contaminated by man's activities, and there is a need to predict the extent to which microbial activity may remediate such contamination. Metabolically active microorganisms can be recovered from a diversity of deep subsurface environments. The available evidence suggests that these microorganisms are responsible for catalyzing the oxidation of organic matter coupled to a variety of electron acceptors just as microorganisms do in surface sediments, but at much slower rates. The technical difficulties in aseptically sampling deep subsurface sediments and the fact that microbial processes in laboratory incubations of deep subsurface material often do not mimic in situ processes frequently necessitate that microbial activity in the deep subsurface be inferred through nonmicrobiological analyses of ground water. These approaches include measurements of dissolved H2, which can predict the predominant microbially catalyzed redox reactions in aquifers, as well as geochemical and groundwater flow modeling, which can be used to estimate the rates of microbial processes. Microorganisms recovered from the deep subsurface have the potential to affect the fate of toxic organics and inorganic contaminants in groundwater. Microbial activity also greatly influences the chemistry of many pristine groundwaters and contributes to such phenomena as porosity development in carbonate aquifers, accumulation of undesirably high concentrations of dissolved iron, and production of methane and hydrogen sulfide. Although the last decade has seen a dramatic increase in interest in deep subsurface microbiology, in comparison with the study of

  16. A Global Survey of Deep Underground Facilities; Examples of Geotechnical and Engineering Capabilities, Achievements, Challenges (Mines, Shafts, Tunnels, Boreholes, Sites and Underground Facilities for Nuclear Waste and Physics R&D): A Guide to Interactive Global Map Layers, Table Database, References and Notes

    International Nuclear Information System (INIS)

    Tynan, Mark C.; Russell, Glenn P.; Perry, Frank V.; Kelley, Richard E.; Champenois, Sean T.

    2017-01-01

    These associated tables, references, notes, and report present a synthesis of some notable geotechnical and engineering information used to create four interactive layer maps for selected: 1) deep mines and shafts; 2) existing, considered or planned radioactive waste management deep underground studies or disposal facilities; 3) deep large diameter boreholes; and 4) physics underground laboratories and facilities from around the world. These data are intended to facilitate user access to basic information and references regarding “deep underground” facilities, history, activities, and plans. In general, the interactive maps and database provide each facility’s approximate site location, geology, and engineered features (e.g.: access, geometry, depth, diameter, year of operations, groundwater, lithology, host unit name and age, basin; operator, management organization, geographic data, nearby cultural features, other). Although the survey is not comprehensive, it is representative of many of the significant existing and historical underground facilities discussed in the literature addressing radioactive waste management and deep mined geologic disposal safety systems. The global survey is intended to support and to inform: 1) interested parties and decision makers; 2) radioactive waste disposal and siting option evaluations, and 3) safety case development applicable to any mined geologic disposal facility as a demonstration of historical and current engineering and geotechnical capabilities available for use in deep underground facility siting, planning, construction, operations and monitoring.

  17. A Global Survey of Deep Underground Facilities; Examples of Geotechnical and Engineering Capabilities, Achievements, Challenges (Mines, Shafts, Tunnels, Boreholes, Sites and Underground Facilities for Nuclear Waste and Physics R&D): A Guide to Interactive Global Map Layers, Table Database, References and Notes

    Energy Technology Data Exchange (ETDEWEB)

    Tynan, Mark C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Russell, Glenn P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Perry, Frank V. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kelley, Richard E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Champenois, Sean T. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-13

    These associated tables, references, notes, and report present a synthesis of some notable geotechnical and engineering information used to create four interactive layer maps for selected: 1) deep mines and shafts; 2) existing, considered or planned radioactive waste management deep underground studies or disposal facilities; 3) deep large diameter boreholes; and 4) physics underground laboratories and facilities from around the world. These data are intended to facilitate user access to basic information and references regarding “deep underground” facilities, history, activities, and plans. In general, the interactive maps and database provide each facility’s approximate site location, geology, and engineered features (e.g.: access, geometry, depth, diameter, year of operations, groundwater, lithology, host unit name and age, basin; operator, management organization, geographic data, nearby cultural features, other). Although the survey is not comprehensive, it is representative of many of the significant existing and historical underground facilities discussed in the literature addressing radioactive waste management and deep mined geologic disposal safety systems. The global survey is intended to support and to inform: 1) interested parties and decision makers; 2) radioactive waste disposal and siting option evaluations, and 3) safety case development applicable to any mined geologic disposal facility as a demonstration of historical and current engineering and geotechnical capabilities available for use in deep underground facility siting, planning, construction, operations and monitoring.

  18. Applications of Should Cost to Achieve Cost Reductions

    Science.gov (United States)

    2014-04-01

    CAS: Control Actuation Section; CATM: Captive Air Training Missile; CATM BIT: Captive Air Training Missile ... ; ... Integrated Fire Control Network; IMU: Inertial Measurement Unit; IOC: Initial Operational Capability; IOT: Initial ...

  19. Managing Air Quality - Control Strategies to Achieve Air Pollution Reduction

    Science.gov (United States)

    Considerations in designing an effective control strategy related to air quality, controlling pollution sources, need for regional or national controls, steps to developing a control strategy, and additional EPA resources.

  20. Life Support for Deep Space and Mars

    Science.gov (United States)

    Jones, Harry W.; Hodgson, Edward W.; Kliss, Mark H.

    2014-01-01

    How should life support for deep space be developed? The International Space Station (ISS) life support system is the operational result of many decades of research and development. Long duration deep space missions such as Mars have been expected to use matured and upgraded versions of ISS life support. Deep space life support must use the knowledge base incorporated in ISS, but it must also meet much more difficult requirements. The primary new requirement is that life support in deep space must be considerably more reliable than on ISS or anywhere in the Earth-Moon system, where emergency resupply and a quick return are possible. Due to the great distance from Earth and the long duration of deep space missions, if life support systems fail, the traditional approaches for emergency supply of oxygen and water, emergency supply of parts, and crew return to Earth or escape to a safe haven are likely infeasible. The Orbital Replacement Unit (ORU) maintenance approach used by ISS is unsuitable for deep space with ORUs as large and complex as those originally provided in ISS designs, because it minimizes opportunities for commonality of spares, requires replacement of many functional parts with each failure, and results in substantial launch mass and volume penalties. It has become impractical even for ISS after the shuttle era, resulting in the need for ad hoc repair activity at lower assembly levels with consequent crew time penalties and extended repair timelines. Less complex, more robust technical approaches may be needed to meet the difficult deep space requirements for reliability, maintainability, and reparability. Developing an entirely new life support system would neglect what has been achieved. The suggested approach is to use the ISS life support technologies as a platform to build on and to continue to improve ISS subsystems, while also developing new subsystems where needed to meet deep space requirements.

  1. Nuclear structure in deep-inelastic reactions

    International Nuclear Information System (INIS)

    Rehm, K.E.

    1986-01-01

    The paper concentrates on recent deep inelastic experiments conducted at Argonne National Laboratory and the nuclear structure effects evident in reactions between super heavy nuclei. Experiments indicate that these reactions evolve gradually from simple transfer processes, which have been studied extensively for lighter nuclei such as 16O, suggesting a theoretical approach connecting the one-step DWBA theory to the multistep statistical models of nuclear reactions. This transition between quasi-elastic and deep inelastic reactions is achieved by a simple random walk model. Some typical examples of nuclear structure effects are shown. 24 refs., 9 figs

  2. Outage reduction of Hamaoka NPS

    International Nuclear Information System (INIS)

    Hida, Shigeru; Anma, Minoru

    1999-01-01

    At the Hamaoka nuclear power plant, we have worked on outage reduction since 1993. At that time, the outage length at Hamaoka was 80 days or more, far from the excellent results of European and American plants of about 30 days. Concrete strategies to achieve the reduction included extending working hours, changing the unit of work schedule control to single hours, equipment improvements, and improvements of the work environment. We executed them one by one, reflecting the results of each. As a result, we achieved a 57-day outage in 1995. Building on this, we pursued further reductions step by step and achieved a 38-day outage in 1997 while maintaining the safety and reliability of the plant. We will advance these strategies further, aiming at a 30-35 day outage in the future. (author)

  3. Reduction of Subjective and Objective System Complexity

    Science.gov (United States)

    Watson, Michael D.

    2015-01-01

    Occam's razor is often used in science to define the minimum criteria to establish a physical or philosophical idea or relationship. Albert Einstein is attributed the saying "everything should be made as simple as possible, but not simpler". These heuristic ideas are based on a belief that there is a minimum state or set of states for a given system or phenomenon. In looking at system complexity, these heuristics point us to the idea that complexity can be reduced to a minimum. How, then, do we approach a reduction in complexity? Complexity has been described as both a subjective concept and an objective measure of a system. Subjective complexity is based on human cognitive comprehension of the functions and interrelationships of a system, and is defined by the ability to fully comprehend the system. Simplifying complexity, in a subjective sense, is thus gaining a deeper understanding of the system. As Apple's Jonathon Ive has stated, "It's not just minimalism or the absence of clutter. It involves digging through the depth of complexity. To be truly simple, you have to go really deep." Simplicity is not the absence of complexity but a deeper understanding of complexity. Subjective complexity, based on this human comprehension, cannot then be discerned from the sociological concept of ignorance. The inability to comprehend a system can be either a lack of knowledge, an inability to understand the intricacies of a system, or both. Reduction in this sense is based purely on the cognitive ability to understand the system, and no system then may be truly complex. From this view, education and experience seem to be the keys to reducing or eliminating complexity. Objective complexity is the measure of the system's functions and interrelationships which exist independent of human comprehension. Jonathon Ive's statement does not say that complexity is removed, only that the complexity is understood. From this standpoint, reduction of complexity can be approached

  4. Reduction Corporoplasty

    Directory of Open Access Journals (Sweden)

    Tariq S. Hakky

    2015-04-01

    Full Text Available. Objective: Here we present the first video demonstration of reduction corporoplasty in the management of phallic disfigurement in a 17-year-old man with a history of sickle cell disease and priapism. Introduction: Surgical management of aneurysmal dilation of the corpora has yet to be defined in the literature. Materials and Methods: We performed bilateral elliptical incisions over the lateral corpora as management of aneurysmal dilation of the corpora to correct phallic disfigurement. Results: The patient tolerated the procedure well and had resolution of his corporal disfigurement. Conclusions: Reduction corporoplasty using bilateral lateral elliptical incisions in the management of aneurysmal dilation of the corpora is a safe and feasible operation in the management of phallic disfigurement.

  5. Deep Water Survey Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The deep water biodiversity surveys explore and describe the biodiversity of the bathy- and bentho-pelagic nekton using Midwater and bottom trawls centered in the...

  6. Deep Space Habitat Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Deep Space Habitat was closed out at the end of Fiscal Year 2013 (September 30, 2013). Results and select content have been incorporated into the new Exploration...

  7. Deep Learning in Neuroradiology.

    Science.gov (United States)

    Zaharchuk, G; Gong, E; Wintermark, M; Rubin, D; Langlotz, C P

    2018-02-01

    Deep learning is a form of machine learning using a convolutional neural network architecture that shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstration in computer vision applications to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method. © 2018 by American Journal of Neuroradiology.

  8. Deep inelastic lepton scattering

    International Nuclear Information System (INIS)

    Nachtmann, O.

    1977-01-01

    Deep inelastic electron (muon) nucleon and neutrino nucleon scattering as well as electron positron annihilation into hadrons are reviewed from a theoretical point of view. The emphasis is placed on comparisons of quantum chromodynamics with the data. (orig.) [de

  9. Neuromorphic Deep Learning Machines

    OpenAIRE

    Neftci, E; Augustine, C; Paul, S; Detorakis, G

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide...

  10. Pathogenesis of deep endometriosis.

    Science.gov (United States)

    Gordts, Stephan; Koninckx, Philippe; Brosens, Ivo

    2017-12-01

    The pathophysiology of (deep) endometriosis is still unclear. As originally suggested by Cullen, the definition "deeper than 5 mm" may be changed to "adenomyosis externa." With the discovery of the old European literature on uterine bleeding in 5%-10% of the neonates and histologic evidence that the bleeding represents decidual shedding, it is postulated/hypothesized that endometrial stem/progenitor cells, implanted in the pelvic cavity after birth, may be at the origin of adolescent and even the occasionally premenarcheal pelvic endometriosis. Endometriosis in the adolescent is characterized by angiogenic and hemorrhagic peritoneal and ovarian lesions. The development of deep endometriosis at a later age suggests that deep infiltrating endometriosis is a delayed stage of endometriosis. Another hypothesis is that the endometriotic cell has undergone genetic or epigenetic changes and those specific changes determine the development into deep endometriosis. This is compatible with the hereditary aspects, and with the clonality of deep and cystic ovarian endometriosis. It explains the predisposition and an eventual causal effect by dioxin or radiation. Specific genetic/epigenetic changes could explain the various expressions, and thus typical, cystic, and deep endometriosis become three different diseases. Subtle lesions are not a disease until (epi)genetic changes occur. A classification should reflect that deep endometriosis is a specific disease. In conclusion, the pathophysiology of deep endometriosis remains debated, and the mechanisms of disease progression, as well as the role of genetics and epigenetics in the process, still need to be unraveled. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  11. Report of the working group on achieving a fourfold reduction in greenhouse gas emissions in France by 2050; Rapport du groupe de travail division par 4 des emissions de gaz a effet de serre de la France a l'horizon 2050

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    Achieving a fourfold reduction in greenhouse gas emissions by 2050 is an ambitious and voluntary objective for France that addresses a combination of many different aspects (technical, technological, economic, social) against a backdrop of important issues and choices for public policy-makers. This document is the bilingual version of the Factor 4 group report. It discusses the Factor 4 objectives, the different proposed scenarios and the main lessons learned, the strategies to support the Factor 4 objectives (fostering changes in behavior and defining the role of public policies), the Factor 4 objective in international and European contexts (experience abroad, strategic behavior, constraints and opportunities, particularly in Europe), and recommendations. (A.L.B.)

  13. Adaptive deep brain stimulation in advanced Parkinson disease.

    Science.gov (United States)

    Little, Simon; Pogosyan, Alex; Neal, Spencer; Zavala, Baltazar; Zrinzo, Ludvic; Hariz, Marwan; Foltynie, Thomas; Limousin, Patricia; Ashkan, Keyoumars; FitzGerald, James; Green, Alexander L; Aziz, Tipu Z; Brown, Peter

    2013-09-01

    Brain-computer interfaces (BCIs) could potentially be used to interact with pathological brain signals to intervene and ameliorate their effects in disease states. Here, we provide proof-of-principle of this approach by using a BCI to interpret pathological brain activity in patients with advanced Parkinson disease (PD) and to use this feedback to control when therapeutic deep brain stimulation (DBS) is delivered. Our goal was to demonstrate that by personalizing and optimizing stimulation in real time, we could improve on both the efficacy and efficiency of conventional continuous DBS. We tested BCI-controlled adaptive DBS (aDBS) of the subthalamic nucleus in 8 PD patients. Feedback was provided by processing of the local field potentials recorded directly from the stimulation electrodes. The results were compared to no stimulation, conventional continuous stimulation (cDBS), and random intermittent stimulation. Both unblinded and blinded clinical assessments of motor effect were performed using the Unified Parkinson's Disease Rating Scale. Motor scores improved by 66% (unblinded) and 50% (blinded) during aDBS, which were 29% (p = 0.03) and 27% (p = 0.005) better than cDBS, respectively. These improvements were achieved with a 56% reduction in stimulation time compared to cDBS, and a corresponding reduction in energy requirements. aDBS was also more effective than random intermittent stimulation. BCI-controlled DBS is tractable and can be more efficient and efficacious than conventional continuous neuromodulation for PD. Copyright © 2013 American Neurological Association.
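    The closed-loop idea in this record, stimulating only while a pathological signal marker exceeds a threshold, can be sketched in a few lines. The sketch below is purely illustrative and is not the authors' algorithm: the synthetic LFP, the 200 ms smoothing window, and the 75th-percentile threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                       # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Synthetic LFP: background noise plus intermittent 20 Hz "beta" bursts
# present roughly 30% of the time.
lfp = 0.5 * rng.standard_normal(t.size)
burst = (np.sin(2 * np.pi * 0.2 * t) > 0.6).astype(float)
lfp += 2.0 * burst * np.sin(2 * np.pi * 20 * t)

# Crude beta-power estimate: squared signal smoothed over a 200 ms window.
window = int(0.2 * fs)
power = np.convolve(lfp ** 2, np.ones(window) / window, mode="same")

threshold = np.percentile(power, 75)   # policy choice, not from the paper
stimulate = power > threshold          # stimulation gate, sample by sample

duty_cycle = stimulate.mean()
print(f"stimulation delivered {duty_cycle:.0%} of the time")
```

    With a 75th-percentile threshold the stimulator is on about a quarter of the time, which mirrors the paper's point that adaptive triggering cuts total stimulation time relative to continuous DBS.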

  14. A mediation analysis of achievement motives, goals, learning strategies, and academic achievement.

    Science.gov (United States)

    Diseth, Age; Kobbeltvedt, Therese

    2010-12-01

    Previous research is inconclusive regarding antecedents and consequences of achievement goals, and there is a need for more research in order to examine the joint effects of different types of motives and learning strategies as predictors of academic achievement. To investigate the relationship between achievement motives, achievement goals, learning strategies (deep, surface, and strategic), and academic achievement in a hierarchical model. Participants were 229 undergraduate students (mean age: 21.2 years) of psychology and economics at the University of Bergen, Norway. Variables were measured by means of items from the Achievement Motives Scale (AMS), the Approaches and Study Skills Inventory for Students, and an achievement goal scale. Correlation analysis showed that academic achievement (examination grade) was positively correlated with performance-approach goal, mastery goal, and strategic learning strategies, and negatively correlated with performance-avoidance goal and surface learning strategy. A path analysis (structural equation model) showed that achievement goals were mediators between achievement motives and learning strategies, and that strategic learning strategies mediated the relationship between achievement goals and academic achievement. This study integrated previous findings from several studies and provided new evidence on the direct and indirect effects of different types of motives and learning strategies as predictors of academic achievement.
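    The mediation logic in this record (motives acting on achievement through goals and strategies, with the indirect effect as a product of path coefficients) can be illustrated on simulated data. Everything below is a toy sketch: the variable names, effect sizes, and sample size are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated causal chain: achievement motive -> strategic learning -> grade.
motive = rng.standard_normal(n)
strategy = 0.6 * motive + 0.8 * rng.standard_normal(n)           # a-path
grade = 0.5 * strategy + 0.1 * motive + rng.standard_normal(n)   # b-path + direct

def ols(y, *cols):
    """Least-squares slopes of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

(a,) = ols(strategy, motive)           # motive -> strategy
b, direct = ols(grade, strategy, motive)  # strategy -> grade, controlling motive
indirect = a * b                       # classic product-of-coefficients estimate
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, direct={direct:.2f}")
```

    The recovered indirect effect is close to the simulated 0.6 x 0.5 = 0.3, which is the quantity a structural equation model of this kind estimates.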

  15. Why & When Deep Learning Works: Looking Inside Deep Learnings

    OpenAIRE

    Ronen, Ronny

    2017-01-01

    The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research from its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The outp...

  16. Snubber reduction

    International Nuclear Information System (INIS)

    Olson, D.E.; Singh, A.K.

    1986-01-01

    Many safety-related piping systems in nuclear power plants have been oversupported. Since snubbers make up a large percentage of the pipe supports or restraints used in a plant, a plant's snubber population is much larger than required to adequately restrain the piping. This has resulted in operating problems and unnecessary expenses for maintenance and inservice inspections (ISIs) of snubbers. This paper presents an overview of snubber reduction, including: the incentives for removing snubbers, a historical perspective on how piping became oversupported, why it is possible to remove snubbers, and the costs and benefits of doing so

  17. Deep remission: a new concept?

    Science.gov (United States)

    Colombel, Jean-Frédéric; Louis, Edouard; Peyrin-Biroulet, Laurent; Sandborn, William J; Panaccione, Remo

    2012-01-01

    Crohn's disease (CD) is a chronic inflammatory disorder characterized by periods of clinical remission alternating with periods of relapse defined by recurrent clinical symptoms. Persistent inflammation is believed to lead to progressive bowel damage over time, which manifests with the development of strictures, fistulae and abscesses. These disease complications frequently lead to a need for surgical resection, which in turn leads to disability. So CD can be characterized as a chronic, progressive, destructive and disabling disease. In rheumatoid arthritis, treatment paradigms have evolved beyond partial symptom control alone toward the induction and maintenance of sustained biological remission, also known as a 'treat to target' strategy, with the goal of improving long-term disease outcomes. In CD, there is currently no accepted, well-defined, comprehensive treatment goal that entails the treatment of both clinical symptoms and biologic inflammation. It is important that such a treatment concept begins to evolve for CD. A treatment strategy that delays or halts the progression of CD to increasing damage and disability is a priority. As a starting point, a working definition of sustained deep remission (that includes long-term biological remission and symptom control) with defined patient outcomes (including no disease progression) has been proposed. The concept of sustained deep remission represents a goal for CD management that may still evolve. It is not clear if the concept also applies to ulcerative colitis. Clinical trials are needed to evaluate whether treatment algorithms that tailor therapy to achieve deep remission in patients with CD can prevent disease progression and disability. Copyright © 2012 S. Karger AG, Basel.

  18. Fiscal 1984 Sunshine Program achievement report. Development for practical application of photovoltaic system (Verification of experimental low cost silicon refining - Development of technology for chlorosilane hydrogen-reduction process); 1984 nendo taiyoko hatsuden system jitsuyoka gijutsu kaihatsu seika hokokusho. Tei cost silicon jikken seisei kensho (chlorosilane no suiso kangen kotei no gijutsu kaihatsu)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1985-03-01

    In fiscal 1984, studies are conducted for long-time stabilized operation of the device (extension of reactor tube service life, elimination of flaws in the granule extraction process, control of deposited silicon), improved granule quality (measures to deal with Cu pollution), cost reduction (improvement on yield of silicon after reaction, reduction in electric power consumption rate), and so forth. The results are found to be satisfactory in outline. The problem of strengthening the reactor tube material itself remains to be solved, however, with many knotty issues to settle before practical application. In the effort to deliberate these difficulties, as many as 4,377 hours (reaction hours) in total are spent in fiscal 1984. During the operation, 8.3 tons of granules are produced of which approximately 7 tons are fed to the next stage of processing. The quality of granules produced in this way is stable thanks to efforts to prevent pollution and to prolong continuous reaction time, and is found to satisfy NEDO (New Energy and Industrial Technology Development Organization) specifications. Furthermore, the cast cell and the ribbon cell using thus-produced granules achieve photoelectric conversion rates of 10% and 9%, respectively, thereby meeting the target goal of 9%. (NEDO)

  19. BIBLIOGRAPHY ON ACHIEVEMENT.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Graduate School of Education.

    THIS BIBLIOGRAPHY LISTS MATERIAL ON VARIOUS ASPECTS OF ACHIEVEMENT. APPROXIMATELY 40 UNANNOTATED REFERENCES ARE PROVIDED TO DOCUMENTS DATING FROM 1952 TO 1965. JOURNALS, BOOKS, AND REPORT MATERIALS ARE LISTED. SUBJECT AREAS INCLUDED ARE BEHAVIOR TESTS, ACHIEVEMENT BEHAVIOR, ACADEMIC ACHIEVEMENT, AND SOCIAL-CLASS BACKGROUND. A RELATED REPORT IS ED…

  20. DeepQA: improving the estimation of single protein model quality with deep belief networks.

    Science.gov (United States)

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-12-05

    Protein quality assessment (QA) useful for ranking and selecting protein models has long been viewed as one of the major challenges for protein tertiary structure prediction. Especially, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method DeepQA based on deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves the state-of-the-art performance on CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single model quality assessment and protein structure prediction. The source code, executable, document and training/test datasets of DeepQA for Linux is freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .
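    As a rough illustration of single-model quality assessment, the sketch below trains a small feed-forward regressor (a stand-in for the paper's deep belief network) on synthetic model-quality features and uses it to select the best model from a pool. The features, labels, and network size are all assumptions for illustration, not DeepQA's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: 200 candidate structure models, 4 quality
# features each (imagine an energy term, a clash score, etc.). The "true"
# quality is a hidden nonlinear function of the features plus noise.
X = rng.standard_normal((200, 4))
y = 0.5 * np.tanh(X @ np.array([0.8, -0.5, 0.3, 0.1])) + 0.5
y = y + 0.02 * rng.standard_normal(200)

# One-hidden-layer regressor trained with full-batch gradient descent.
W1 = 0.1 * rng.standard_normal((4, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal(16); b2 = 0.0
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                              # gradient of 0.5 * MSE
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)     # backprop through tanh
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

scores = np.tanh(X @ W1 + b1) @ W2 + b2
mse = float(np.mean((scores - y) ** 2))
best = int(np.argmax(scores))                   # model we would select
print(f"fit MSE={mse:.4f}, selected model #{best}")
```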

  1. Deep ocean model penetrator experiments

    International Nuclear Information System (INIS)

    Freeman, T.J.; Burdett, J.R.F.

    1986-01-01

    Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting with a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site

  2. Dijet production in diffractive deep-inelastic scattering in next-to-next-to-leading order QCD arXiv

    CERN Document Server

    Britzger, D.; Gehrmann, T.; Huss, A.; Niehues, J.; Žlebčík, R.

    Hard processes in diffractive deep-inelastic scattering can be described by a factorisation into parton-level subprocesses and diffractive parton distributions. In this framework, cross sections for inclusive dijet production in diffractive deep-inelastic electron-proton scattering (DIS) are computed to next-to-next-to-leading order (NNLO) QCD accuracy and compared to a comprehensive selection of data. Predictions for the total cross sections, 39 single-differential and four double-differential distributions for six measurements at HERA by the H1 and ZEUS collaborations are calculated. In the studied kinematical range, the NNLO corrections are found to be sizeable and positive. The NNLO predictions typically exceed the data, while the kinematical shape of the data is described better at NNLO than at next-to-leading order (NLO). A significant reduction of the scale uncertainty is achieved in comparison to NLO predictions. Our results use the currently available NLO diffractive parton distributions, and the dis...

  3. AN EFFICIENT METHOD FOR DEEP WEB CRAWLER BASED ON ACCURACY -A REVIEW

    OpenAIRE

    Pranali Zade; S. W. Mohod

    2018-01-01

    As the deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a challenging issue. We propose a three-stage framework for efficiently harvesting deep web interfaces. Experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framew...
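    A best-first frontier is a common ingredient of crawlers that prioritize promising links, which is the general idea this record reviews. The sketch below shows only that generic idea with a toy in-memory "web"; the keyword scorer and the page graph are invented stand-ins and do not reflect the paper's actual three-stage framework.

```python
import heapq

def relevance(url: str, keywords=("search", "query", "form")) -> int:
    """Toy link scorer: count keyword hits in the URL (a stand-in for a
    learned ranker)."""
    return sum(k in url for k in keywords)

def crawl(seeds, fetch, budget=10):
    """Best-first crawl: always expand the most promising known link next."""
    frontier = [(-relevance(u), u) for u in seeds]
    heapq.heapify(frontier)
    seen, visited = set(seeds), []
    while frontier and len(visited) < budget:
        _, url = heapq.heappop(frontier)
        visited.append(url)
        for link in fetch(url):                  # fetch() returns out-links
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-relevance(link), link))
    return visited

# Tiny in-memory "web" so the sketch runs without network access.
web = {
    "a.com": ["a.com/about", "a.com/search"],
    "a.com/search": ["a.com/search?query=1"],
    "a.com/about": [],
    "a.com/search?query=1": [],
}
order = crawl(["a.com"], lambda u: web.get(u, []))
print(order)
```

    Note how the search-form-like URLs are visited before the plain "about" page, the behavior a deep-web crawler wants when its budget is limited.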

  4. Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

    OpenAIRE

    Li, Xiangang; Wu, Xihong

    2014-01-01

    Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions on LSTM are investigated considering that deep hierarchical model has turned out to be more efficient than a shallow one. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...
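    The "deep extension" described here, stacking LSTM layers so each layer's hidden state becomes the next layer's input, can be sketched with a minimal NumPy forward pass (no training, standard gate equations, and not the authors' exact architecture):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM layer (forward pass only), standard gate equations."""
    def __init__(self, n_in, n_hid):
        s = 1.0 / np.sqrt(n_in + n_hid)
        self.W = rng.uniform(-s, s, (4 * n_hid, n_in + n_hid))
        self.b = np.zeros(4 * n_hid)
        self.n_hid = n_hid

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g              # cell state update
        h = o * np.tanh(c)             # hidden state / layer output
        return h, c

def run_stack(layers, inputs):
    """Feed a sequence through stacked LSTM layers; each layer's hidden
    state is the next layer's input at every time step."""
    states = [(np.zeros(l.n_hid), np.zeros(l.n_hid)) for l in layers]
    outputs = []
    for x in inputs:
        for k, layer in enumerate(layers):
            h, c = layer.step(x, *states[k])
            states[k] = (h, c)
            x = h                      # deep extension: feed the layer above
        outputs.append(x)
    return np.array(outputs)

seq = [rng.standard_normal(8) for _ in range(20)]   # 20 frames, 8 features
stack = [LSTMCell(8, 32), LSTMCell(32, 32)]         # two-layer deep LSTM
out = run_stack(stack, seq)
print(out.shape)
```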

  5. Detector for deep well logging

    International Nuclear Information System (INIS)

    1976-01-01

    A substantial improvement in the useful life and efficiency of a deep-well scintillation detector is achieved by a unique construction wherein the steel cylinder enclosing the sodium iodide scintillation crystal is provided with a tapered recess to receive a glass window which has a high transmittance at the critical wavelength and, for glass, a high coefficient of thermal expansion. A special high-temperature epoxy adhesive composition is employed to form a relatively thick sealing annulus which keeps the glass window in the tapered recess and compensates for the differences in coefficients of expansion between the container and glass so as to maintain a hermetic seal as the unit is subjected to a wide range of temperature

  6. Deep Learning from Crowds

    DEFF Research Database (Denmark)

    Rodrigues, Filipe; Pereira, Francisco Camara

    Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels...
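    The EM idea described here, alternating between a posterior over the true labels and per-annotator reliabilities, can be illustrated in its simplest symmetric binary form (a Dawid-Skene-style toy, not the paper's network-based EM; all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(4)
n_items, n_annot = 300, 5
true = rng.integers(0, 2, n_items)
acc_true = np.array([0.9, 0.85, 0.8, 0.6, 0.55])   # hidden reliabilities

# Each annotator reports the true label with his own probability.
noisy = np.where(rng.random((n_items, n_annot)) < acc_true,
                 true[:, None], 1 - true[:, None])

# EM: alternate a posterior over the true label (E-step) with
# per-annotator accuracy estimates (M-step).
mu = noisy.mean(axis=1)                 # init: fraction of votes for class 1
for _ in range(20):
    agree = mu[:, None] * noisy + (1 - mu)[:, None] * (1 - noisy)
    acc = agree.mean(axis=0)            # M-step: estimated reliabilities
    pi = mu.mean()                      # class prior
    log1 = np.log(pi) + (noisy * np.log(acc)
                         + (1 - noisy) * np.log(1 - acc)).sum(1)
    log0 = np.log(1 - pi) + ((1 - noisy) * np.log(acc)
                             + noisy * np.log(1 - acc)).sum(1)
    mu = 1.0 / (1.0 + np.exp(log0 - log1))   # E-step: posterior P(z=1)

est = (mu > 0.5).astype(int)
print("EM label accuracy:", (est == true).mean())
print("estimated reliabilities:", acc.round(2))
```

    Because reliable annotators end up weighted more heavily, the EM labels beat a plain majority vote, which is the same motivation behind learning annotator reliabilities jointly with the network.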

  7. Deep boreholes; Tiefe Bohrloecher

    Energy Technology Data Exchange (ETDEWEB)

    Bracke, Guido [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH Koeln (Germany); Charlier, Frank [NSE international nuclear safety engineering gmbh, Aachen (Germany); Geckeis, Horst [Karlsruher Institut fuer Technologie (Germany). Inst. fuer Nukleare Entsorgung; and others

    2016-02-15

    The report on deep boreholes covers the following subject areas: methods for safe enclosure of radioactive wastes, requirements concerning the geological conditions of possible boreholes, reversibility of decisions and retrievability, and the status of drilling technology. The introduction covers national and international activities. Further chapters deal with the following issues: the basic concept of storage in deep boreholes, the status of drilling technology, safe enclosure, geomechanics and stability, reversibility of decisions, risk scenarios, compliance with safety requirements and site selection criteria, and research and development needs.

  8. Deep Water Acoustics

    Science.gov (United States)

    2016-06-28

    Collaborators in the Deep Water project who participate in the NPAL Workshops include Art Baggeroer (MIT), J. Beron-Vera (UMiami), M. Brown (UMiami), T... Kathleen E. Wage, The North Pacific Acoustic Laboratory deep-water acoustic propagation experiments in the Philippine Sea, J. Acoust. Soc. Am., 134(4)... An estimate of the angle α during PhilSea09 was made from ADCP measurements at the site of the DVLA.

  9. New optimized drill pipe size for deep-water, extended reach and ultra-deep drilling

    Energy Technology Data Exchange (ETDEWEB)

    Jellison, Michael J.; Delgado, Ivanni [Grant Prideco, Inc., Houston, TX (United States); Falcao, Jose Luiz; Sato, Ademar Takashi [PETROBRAS, Rio de Janeiro, RJ (Brazil); Moura, Carlos Amsler [Comercial Perfuradora Delba Baiana Ltda., Rio de Janeiro, RJ (Brazil)

    2004-07-01

    A new drill pipe size, 5-7/8 in. OD, represents enabling technology for Extended Reach Drilling (ERD), deep water and other deep well applications. Most world-class ERD and deep water wells have traditionally been drilled with 5-1/2 in. drill pipe or a combination of 6-5/8 in. and 5-1/2 in. drill pipe. The hydraulic performance of 5-1/2 in. drill pipe can be a major limitation in substantial ERD and deep water wells resulting in poor cuttings removal, slower penetration rates, diminished control over well trajectory and more tendency for drill pipe sticking. The 5-7/8 in. drill pipe provides a significant improvement in hydraulic efficiency compared to 5-1/2 in. drill pipe and does not suffer from the disadvantages associated with use of 6-5/8 in. drill pipe. It represents a drill pipe assembly that is optimized dimensionally and on a performance basis for casing and bit programs that are commonly used for ERD, deep water and ultra-deep wells. The paper discusses the engineering philosophy behind 5-7/8 in. drill pipe, the design challenges associated with development of the product and reviews the features and capabilities of the second-generation double-shoulder connection. The paper provides drilling case history information on significant projects where the pipe has been used and details results achieved with the pipe. (author)
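    The hydraulic argument in this record can be made concrete with a back-of-the-envelope scaling: for fully turbulent flow with a roughly constant friction factor, the frictional pressure gradient inside the pipe scales as Q²/D⁵, so at equal flow rate the loss ratio depends only on the inside diameters. The IDs below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope: dP/dL ~ Q^2 / D^5 (turbulent flow, constant
# friction factor). At equal flow rate Q, only the IDs matter.
# Hypothetical inside diameters, inches (assumed, not from the paper):
id_55 = 4.778    # 5-1/2 in drill pipe
id_578 = 5.045   # 5-7/8 in drill pipe

ratio = (id_55 / id_578) ** 5      # dP(5-7/8) / dP(5-1/2)
saving = 1.0 - ratio
print(f"5-7/8 in string: about {saving:.0%} lower internal friction loss")
```

    Under these assumptions the larger bore cuts internal friction losses by roughly a quarter, which is the kind of hydraulic headroom the paper credits for better hole cleaning and penetration rates.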

  10. The challenge of meeting Canada's greenhouse gas reduction targets

    International Nuclear Information System (INIS)

    Hughes, Larry; Chaudhry, Nikhil

    2011-01-01

    In 2007, the Government of Canada announced its medium- and long-term greenhouse gas (GHG) emissions reduction plan entitled Turning the Corner, proposed emission cuts of 20% below 2006 levels by 2020 and 60-70% below 2006 levels by 2050. A report from a Canadian government advisory organization, the National Round Table on Environment and Economy (NRTEE), Achieving 2050: A carbon pricing policy for Canada, recommended 'fast and deep' energy pathways to emissions reduction through large-scale electrification of Canada's economy by relying on a major expansion of hydroelectricity, adoption of carbon capture and storage for coal and natural gas, and increasing the use of nuclear. This paper examines the likelihood of the pathways being met by considering the report's proposed energy systems, their associated energy sources, and the magnitude of the changes. It shows that the pathways assume some combination of technological advances, access to secure energy supplies, or rapid installation in order to meet both the 2020 and 2050 targets. This analysis suggests that NRTEE's projections are optimistic and unlikely to be achieved. The analysis described in this paper can be applied to other countries to better understand and develop strategies that can help reduce global greenhouse gas emissions. - Research highlights: → An analysis of a Canadian government advisory organization's GHG reduction plans. → Hydroelectricity and wind development is overly optimistic. → Declining coal and natural gas supplies and lack of CO2 storage may hamper CCS. → Changing precipitation patterns may limit nuclear and hydroelectricity. → Bioenergy and energy reduction policies largely ignored despite their promise.
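    For reference, the Turning the Corner targets are simple arithmetic on the 2006 baseline. The baseline figure below is an illustrative round number, not an official inventory value.

```python
# Turning the Corner targets as arithmetic on the 2006 baseline.
baseline_2006 = 720.0                          # Mt CO2e, illustrative figure

target_2020 = baseline_2006 * (1 - 0.20)       # 20% below 2006 by 2020
target_2050_low = baseline_2006 * (1 - 0.70)   # 70% below 2006 by 2050
target_2050_high = baseline_2006 * (1 - 0.60)  # 60% below 2006 by 2050

print(f"2020: {target_2020:.0f} Mt; "
      f"2050: {target_2050_low:.0f}-{target_2050_high:.0f} Mt")
```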

  11. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which relies on a deep neural network and image-patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of the patch clustering algorithm. Finally, the results of simulation experiments show that the proposed methods can simultaneously attain a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods in the situation of low-bitrate transmission.
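The zero-reconstruction-error property of a linear autoencoder holds whenever the bottleneck width matches the data's intrinsic rank, because the optimal linear encoder/decoder pair is given in closed form by a truncated SVD (equivalently, PCA). A minimal NumPy sketch of that property follows; it is a generic illustration under these assumptions, not the paper's DLA or its patch-clustering front end:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 "image patches" of 16 pixels that lie exactly in a rank-3 subspace
basis = rng.normal(size=(3, 16))
patches = rng.normal(size=(200, 3)) @ basis

# Optimal linear autoencoder: encoder/decoder from the truncated SVD
U, s, Vt = np.linalg.svd(patches, full_matrices=False)
k = 3                              # bottleneck width = intrinsic rank
encode = lambda X: X @ Vt[:k].T    # k-D code per patch
decode = lambda Z: Z @ Vt[:k]

recon = decode(encode(patches))
err = np.max(np.abs(recon - patches))
print(err < 1e-9)  # True: reconstruction is exact up to float precision
```

If the bottleneck is narrower than the intrinsic rank, the same construction gives the best achievable (but nonzero) linear reconstruction error, which is why clustering patches into near-1-D groups first matters.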

  12. Deep diode atomic battery

    International Nuclear Information System (INIS)

    Anthony, T.R.; Cline, H.E.

    1977-01-01

    A deep diode atomic battery is made from a bulk semiconductor crystal containing three-dimensional arrays of columnar and lamellar P-N junctions. The battery is powered by gamma rays and x-ray emission from a radioactive source embedded in the interior of the semiconductor crystal

  13. Deep Learning Policy Quantization

    NARCIS (Netherlands)

    van de Wolfshaar, Jos; Wiering, Marco; Schomaker, Lambertus

    2018-01-01

    We introduce a novel type of actor-critic approach for deep reinforcement learning which is based on learning vector quantization. We replace the softmax operator of the policy with a more general and more flexible operator that is similar to the robust soft learning vector quantization algorithm.
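The prototype-based policy head can be pictured as replacing softmax-over-logits with a softmax over negative squared distances to per-action prototype vectors, so states near an action's prototype receive high probability. The sketch below is a hedged illustration of that idea only, not the authors' robust soft LVQ formulation; the prototypes and temperature `tau` are invented for the example:

```python
import numpy as np

def lvq_policy(state, prototypes, tau=1.0):
    """Action probabilities from distances to per-action prototypes:
    the closer the prototype, the higher the probability
    (softmax over negative squared Euclidean distances)."""
    d2 = ((prototypes - state) ** 2).sum(axis=1)
    z = np.exp(-d2 / tau)
    return z / z.sum()

# One prototype vector per action (illustrative values)
protos = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
p = lvq_policy(np.array([0.1, 0.0]), protos)
print(p.round(3))  # action 0, whose prototype is closest, gets most mass
```

In an actor-critic setting, the prototypes would be trained alongside the rest of the network by gradients of the policy objective.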

  14. Deep-sea fungi

    Digital Repository Service at National Institute of Oceanography (India)

    Raghukumar, C; Damare, S.R.

    significant in terms of carbon sequestration (5, 8). In light of this, the diversity, abundance, and role of fungi in deep-sea sediments may form an important link in the global C biogeochemistry. This review focuses on issues related to collection...

  15. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    Deep inelastic lepton-nucleon interaction experiments are renewed. Singlet and non-singlet structure functions are measured and the consistency of the different results is checked. A detailed analysis of the scaling violation is performed in terms of the quantum chromodynamics predictions

  16. Deep Vein Thrombosis

    Centers for Disease Control (CDC) Podcasts

    2012-04-05

    This podcast discusses the risk for deep vein thrombosis in long-distance travelers and ways to minimize that risk.  Created: 4/5/2012 by National Center for Emerging and Zoonotic Infectious Diseases (NCEZID).   Date Released: 4/5/2012.

  17. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair; Gorocs, Zoltan; Gunaydin, Harun; Zhang, Yibo; Wang, Hongda; Ozcan, Aydogan

    2017-01-01

    regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably

  18. The deep universe

    CERN Document Server

    Sandage, AR; Longair, MS

    1995-01-01

    Discusses the concept of the deep universe from two conflicting theoretical viewpoints: firstly as a theory embracing the evolution of the universe from the Big Bang to the present; and secondly through observations gleaned over the years on stars, galaxies and clusters.

  19. Teaching for Deep Learning

    Science.gov (United States)

    Smith, Tracy Wilson; Colby, Susan A.

    2007-01-01

    The authors have been engaged in research focused on students' depth of learning as well as teachers' efforts to foster deep learning. Findings from a study examining the teaching practices and student learning outcomes of sixty-four teachers in seventeen different states (Smith et al. 2005) indicated that most of the learning in these classrooms…

  20. Deep Trawl Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Otter trawl (36' Yankee and 4-seam net deepwater gear) catches from mid-Atlantic slope and canyons at 200 - 800 m depth. Deep-sea (200-800 m depth) flat otter trawls...

  1. [Deep vein thrombosis prophylaxis].

    Science.gov (United States)

    Sandoval-Chagoya, Gloria Alejandra; Laniado-Laborín, Rafael

    2013-01-01

    Background: despite the proven effectiveness of preventive therapy for deep vein thrombosis, a significant proportion of patients at risk for thromboembolism do not receive prophylaxis during hospitalization. Our objective was to determine the adherence to thrombosis prophylaxis guidelines in a general hospital as a quality control strategy. Methods: a random audit of clinical charts was conducted at the Tijuana General Hospital, Baja California, Mexico, to determine the degree of adherence to deep vein thrombosis prophylaxis guidelines. The instrument used was the Caprini's checklist for thrombosis risk assessment in adult patients. Results: the sample included 300 patient charts; 182 (60.7 %) were surgical patients and 118 were medical patients. Forty six patients (15.3 %) received deep vein thrombosis pharmacologic prophylaxis; 27.1 % of medical patients received deep vein thrombosis prophylaxis versus 8.3 % of surgical patients (p < 0.0001). Conclusions: our results show that adherence to DVT prophylaxis at our hospital is extremely low. Only 15.3 % of our patients at risk received treatment, and even patients with very high risk received treatment in less than 25 % of the cases. We have implemented strategies to increase compliance with clinical guidelines.

  2. DEWS (DEep White matter hyperintensity Segmentation framework): A fully automated pipeline for detecting small deep white matter hyperintensities in migraineurs.

    Science.gov (United States)

    Park, Bo-Yong; Lee, Mi Ji; Lee, Seung-Hak; Cha, Jihoon; Chung, Chin-Sang; Kim, Sung Tae; Park, Hyunjin

    2018-01-01

    Migraineurs show an increased load of white matter hyperintensities (WMHs) and more rapid deep WMH progression. Previous methods for WMH segmentation have limited efficacy to detect small deep WMHs. We developed a new fully automated detection pipeline, DEWS (DEep White matter hyperintensity Segmentation framework), for small and superficially-located deep WMHs. A total of 148 non-elderly subjects with migraine were included in this study. The pipeline consists of three components: 1) white matter (WM) extraction, 2) WMH detection, and 3) false positive reduction. In WM extraction, we adjusted the WM mask to re-assign misclassified WMHs back to WM using many sequential low-level image processing steps. In WMH detection, the potential WMH clusters were detected using an intensity based threshold and region growing approach. For false positive reduction, the detected WMH clusters were classified into final WMHs and non-WMHs using the random forest (RF) classifier. Size, texture, and multi-scale deep features were used to train the RF classifier. DEWS successfully detected small deep WMHs with a high positive predictive value (PPV) of 0.98 and true positive rate (TPR) of 0.70 in the training and test sets. Similar performance of PPV (0.96) and TPR (0.68) was attained in the validation set. DEWS showed a superior performance in comparison with other methods. Our proposed pipeline is freely available online to help the research community in quantifying deep WMHs in non-elderly adults.
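The intensity-threshold-plus-region-growing step in component 2 can be pictured as a breadth-first flood fill that expands a cluster from a bright seed voxel while neighbors stay within an intensity tolerance. The 2-D sketch below is a generic illustration, not the DEWS implementation; the 4-neighbor connectivity and `tol` criterion are simplifying assumptions:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a cluster from `seed`, adding 4-connected neighbors whose
    intensity stays within `tol` of the seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    ref = img[seed]
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

img = np.zeros((7, 7))
img[2:5, 2:5] = 1.0                       # a bright 3x3 "lesion"
mask = region_grow(img, seed=(3, 3), tol=0.1)
print(mask.sum())  # 9: exactly the 3x3 lesion, nothing outside it
```

In a pipeline like the one described, the clusters produced this way would then be passed to a classifier for false-positive reduction.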

  3. Plant Species Identification by Bi-channel Deep Convolutional Networks

    Science.gov (United States)

    He, Guiqing; Xia, Zhaoqiang; Zhang, Qiqi; Zhang, Haixi; Fan, Jianping

    2018-04-01

    Plant species identification has attracted much attention recently, as it has potential applications in environmental protection and human life. Although deep learning techniques can be directly applied to plant species identification, they still need to be designed for this specific task to obtain state-of-the-art performance. In this paper, a bi-channel deep learning framework is developed for identifying plant species. In the framework, two different sub-networks are fine-tuned over their pretrained models respectively, and then a stacking layer is used to fuse the outputs of the two sub-networks. We construct a plant dataset of the Orchidaceae family for algorithm evaluation. Our experimental results demonstrate that our bi-channel deep network achieves very competitive accuracy rates compared to existing deep learning algorithms.

  4. DRREP: deep ridge regressed epitope predictor.

    Science.gov (United States)

    Sher, Gene; Zhi, Degui; Zhang, Shaojie

    2017-10-03

    The ability to predict epitopes plays an enormous role in vaccine development in terms of our ability to zero in on where to do a more thorough in-vivo analysis of the protein in question. Though for the past decade there have been numerous advancements and improvements in epitope prediction, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domains of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained deep neural network using string kernels, tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state-of-the-art epitope predictors, including the most recently published predictors, LBtope and DMNLBE. Using area under the ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state-of-the-art predictors.
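"Analytically trained" here means the output layer is solved in one step via the closed-form ridge regression solution, w = (XᵀX + λI)⁻¹Xᵀy, rather than by iterative gradient descent. A minimal sketch of that one-step solve follows; it is generic ridge regression on synthetic data, not DRREP's string-kernel features:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: solve (X'X + lam*I) w = X'y.
    This single linear solve is the 'analytic training' step."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w                      # noiseless synthetic targets
w = ridge_fit(X, y, lam=1e-8)
print(np.round(w, 3))               # recovers true_w up to a tiny ridge bias
```

With a larger `lam`, the same solve trades a little bias for stability when features are correlated, which is the usual reason for preferring ridge over plain least squares.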

  5. Achieving Public Schools

    Science.gov (United States)

    Abowitz, Kathleen Knight

    2011-01-01

    Public schools are functionally provided through structural arrangements such as government funding, but public schools are achieved in substance, in part, through local governance. In this essay, Kathleen Knight Abowitz explains the bifocal nature of achieving public schools; that is, that schools are both subject to the unitary Public compact of…

  6. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

    Science.gov (United States)

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
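CNN-to-SNN conversion rests on rate coding: an integrate-and-fire neuron driven by a constant input spikes at a rate that approximates a ReLU of that input, so the converted network's spike rates track the original network's activations. The toy sketch below illustrates this under the reset-by-subtraction scheme commonly used in conversion work; the threshold and simulation length are illustrative, not the paper's settings:

```python
def if_rate(x, T=1000, v_thresh=1.0):
    """Simulate an integrate-and-fire neuron for T steps with constant
    input x; its spike rate approximates max(0, x) (rate coding)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x
        if v >= v_thresh:
            v -= v_thresh          # reset by subtraction, not to zero
            spikes += 1
    return spikes / T

# Negative inputs never spike (rate 0); positive inputs give rate ~ x
for x in (-0.3, 0.0, 0.25, 0.7):
    print(x, if_rate(x))
```

Running longer (larger `T`) tightens the approximation, which is the operations-versus-error trade-off the abstract describes: fewer timesteps means fewer synaptic operations but a coarser rate estimate.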

  7. Deep-learning-based classification of FDG-PET data for Alzheimer's disease categories

    Science.gov (United States)

    Singh, Shibani; Srivastava, Anant; Mi, Liang; Caselli, Richard J.; Chen, Kewei; Goradia, Dhruman; Reiman, Eric M.; Wang, Yalin

    2017-11-01

    Fluorodeoxyglucose (FDG) positron emission tomography (PET) measures the decline in the regional cerebral metabolic rate for glucose, offering a reliable metabolic biomarker even in presymptomatic Alzheimer's disease (AD) patients. PET scans provide functional information that is unique and unavailable using other types of imaging. However, the computational efficacy of FDG-PET data alone, for the classification of the various AD diagnostic categories, has not been well studied. This motivates us to correctly discriminate the various AD diagnostic categories using FDG-PET data. Deep learning has improved state-of-the-art classification accuracies in the areas of speech, signal, image, video, and text mining and recognition. We propose novel methods that involve probabilistic principal component analysis on max-pooled data and mean-pooled data for dimensionality reduction, and a multilayer feed-forward neural network which performs binary classification. Our experimental dataset consists of baseline data of subjects including 186 cognitively unimpaired (CU) subjects, 336 mild cognitive impairment (MCI) subjects with 158 late MCI and 178 early MCI, and 146 AD patients from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. We measured F1-measure, precision, recall, and negative and positive predictive values with a 10-fold cross-validation scheme. Our results indicate that our designed classifiers achieve competitive results, with max pooling achieving better classification performance than mean-pooled features. Our deep-model-based research may advance FDG-PET analysis by demonstrating its potential as an effective imaging biomarker of AD.
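Max-pooling, the better-performing of the two pooling schemes compared above, keeps only the largest response in each non-overlapping window, shrinking the data before dimensionality reduction. A minimal NumPy sketch of 2-D max-pooling follows; it is a generic illustration, not the paper's FDG-PET preprocessing:

```python
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k x k max-pooling of a 2-D array
    (both dimensions must be divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

scan = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(scan))
# [[ 5.  7.]
#  [13. 15.]]
```

Mean-pooling is obtained by replacing `.max(axis=(1, 3))` with `.mean(axis=(1, 3))`; the abstract's finding is that the max variant preserved more discriminative signal for their classifiers.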

  8. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    Directory of Open Access Journals (Sweden)

    Bodo Rueckauer

    2017-12-01

    Full Text Available Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.

  9. Assisted Diagnosis Research Based on Improved Deep Autoencoder

    Directory of Open Access Journals (Sweden)

    Ke Zhang-Han

    2017-01-01

    Full Text Available Deep autoencoders have the powerful ability to learn features from a large number of unlabeled samples and a small number of labeled samples. In this work, we improve the network structure of the general deep autoencoder and apply it to auxiliary disease diagnosis. The resulting network takes specific physical-examination indicators as input and predicts whether the patient suffers from liver disease; real physical examination data are used for training and validation. Compared with traditional semi-supervised machine learning algorithms, the deep autoencoder achieves higher accuracy.

  10. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  11. Achieving excellence in training

    International Nuclear Information System (INIS)

    Mangin, A.M.; Solymossy, J.M.

    1983-01-01

    Operating a nuclear power plant is a uniquely challenging activity, requiring a high degree of competence from all who are involved. Achieving and maintaining this competence requires excellence in training. But what does excellence mean, and how do we achieve it? Based on the experience gained by INPO in plant training evaluations and accreditation activities, this paper describes some of the actions that can be taken to achieve the quality appropriate for nuclear power plant training. These actions are discussed in relation to the four phases of a performance-based training system: (1) needs analysis, (2) program design and development, (3) implementation, and (4) evaluation and improvement

  12. Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks.

    Science.gov (United States)

    Burt, Jeremy R; Torosdagli, Neslisah; Khosravan, Naji; RaviPrakash, Harish; Mortazi, Aliasghar; Tissavirasingham, Fiona; Hussein, Sarfaraz; Bagci, Ulas

    2018-04-10

    Deep learning has demonstrated tremendous revolutionary changes in the computing industry and its effects in radiology and imaging sciences have begun to dramatically change screening paradigms. Specifically, these advances have influenced the development of computer-aided detection and diagnosis (CAD) systems. These technologies have long been thought of as "second-opinion" tools for radiologists and clinicians. However, with significant improvements in deep neural networks, the diagnostic capabilities of learning algorithms are approaching levels of human expertise (radiologists, clinicians etc.), shifting the CAD paradigm from a "second opinion" tool to a more collaborative utility. This paper reviews recently developed CAD systems based on deep learning technologies for breast cancer diagnosis, explains their superiorities with respect to previously established systems, defines the methodologies behind the improved achievements including algorithmic developments, and describes remaining challenges in breast cancer screening and diagnosis. We also discuss possible future directions for new CAD models that continue to change as artificial intelligence algorithms evolve.

  13. Fiscal 1982 Sunshine Program achievement report. Development for practical application of photovoltaic system (Verification of experimental low cost silicon refining - Development of technology for chlorosilane hydrogen-reduction process); 1982 nendo taiyoko hatsuden system jitsuyoka gijutsu kaihatsu seika hokokusho. Tei cost silicon jikken seisei kensho (Chlorosilane no suiso kangen kotei no gijutsu kaihatsu)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1983-03-01

    The effort aims to develop a reactor, its peripheral devices and the associated process management technology, and to establish chlorosilane hydrogen-reduction process technology, as part of the endeavors to develop a low-cost production process for silicon for photovoltaic cells, with the purpose of building a model plant capable of approximately 10 tons/year of SOG (solar-grade) silicon. The installation of a 10 tons/year class model plant for SOG-Si production is completed in July 1982. Flaws are removed after a test run, and four reaction runs are accomplished without damage to the reactor tube in the period from the end of February to the beginning of March, marking 550 hours of operation in total. In these four runs, 1,086 kg of granules are experimentally produced and an electric power consumption rate of 30.6 kWh/kg Si is achieved; control of the circulating particle amount by reactor differential pressure and control of the yield by composition analysis of the recovered silane liquid are accomplished, and the essentials of the operation control technology are grasped. In an experimental apparatus for seed production, Si is crushed by a roll crusher and then subjected to separation in a quartz-made air elutriator. A high yield of 130 kg is obtained after crushing for 75 hours. (NEDO)

  14. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Zakharov, V.I.

    1977-01-01

    The present status of the quark-parton-gluon picture of deep inelastic scattering is reviewed. The general framework is mostly theoretical and covers investigations since 1970. Predictions of the parton model and of the asymptotically free field theories are compared with experimental data available. The valence quark approximation is concluded to be valid in most cases, but fails to account for the data on the total momentum transfer. On the basis of gluon corrections introduced to the parton model certain predictions concerning both the deep inelastic structure functions and form factors are made. The contributions of gluon exchanges and gluon bremsstrahlung are highlighted. Asymptotic freedom is concluded to be very attractive and provide qualitative explanation to some experimental observations (scaling violations, breaking of the Drell-Yan-West type relations). Lepton-nuclear scattering is pointed out to be helpful in probing the nature of nuclear forces and studying the space-time picture of the parton model

  15. Deep groundwater chemistry

    International Nuclear Information System (INIS)

    Wikberg, P.; Axelsen, K.; Fredlund, F.

    1987-06-01

    Since 1977, a number of places in Sweden have been investigated in order to collect the geological, hydrogeological and chemical data needed for safety analyses of repositories in deep bedrock systems. Only crystalline rock is considered; in many cases this has been gneisses of sedimentary origin, but granites and gabbros are also represented. Core-drilled holes have been made at nine sites. Up to 15 holes may be core drilled at one site, the deepest down to 1000 m. In addition, a number of boreholes are percussion drilled at each site to depths of about 100 m. When possible, drilling water is taken from percussion-drilled holes. The first objective is to survey the hydraulic conditions. Core-drilled boreholes and the sections selected for sampling of deep groundwater are summarized. (orig./HP)

  16. Dose reduction - the radiologist's view

    International Nuclear Information System (INIS)

    Russell, J.G.B.

    1984-01-01

    The magnitude of the exposure to ionising radiation dominates radiological practice in only three fields, i.e. foetal radiography, mammography and computed tomography. The balance between risk and benefit is briefly examined. The types of hazard considered are carcinogenesis, genetic injury and organogenesis. Ways of achieving a reduction of the dose to the patient are also briefly discussed. (U.K.)

  17. Deep Reinforcement Fuzzing

    OpenAIRE

    Böttinger, Konstantin; Godefroid, Patrice; Singh, Rishabh

    2018-01-01

    Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions...

  18. Achieving Organizational Excellence Through

    OpenAIRE

    Mehdi Abzari; Mohammadreza Dalvi

    2009-01-01

    Abstract: Today, in order to create motivation and desirable behavior in employees, to obtain organizational goals, to increase human resources productivity and finally to achieve organizational excellence, top managers of organizations apply new and effective strategies. One of these strategies to achieve organizational excellence is creating a desirable corporate culture. This research has been conducted to identify the path to reach organizational excellence by creating corporate culture according...

  19. Achieving a competitive advantage in managed care.

    Science.gov (United States)

    Stahl, D A

    1998-02-01

    When building a competitive advantage to thrive in the managed care arena, subacute care providers are urged to be revolutionary rather than reactionary, proactive rather than passive, optimistic rather than pessimistic and growth-oriented rather than cost-reduction oriented. Weaknesses must be addressed aggressively. To achieve a competitive edge, assess the facility's strengths, understand the marketplace and comprehend key payment methods.

  20. Formability of paperboard during deep-drawing with local steam application

    Science.gov (United States)

    Franke, Wilken; Stein, Philipp; Dörsam, Sven; Groche, Peter

    2018-05-01

    The use of paperboard can significantly improve the environmental compatibility of everyday products such as packages. Nevertheless, most packages are currently made of plastics, since the three-dimensional shaping of paperboard is possible only to a limited extent. In order to increase the forming possibilities, deep drawing of paperboard has been intensively investigated for more than a decade. An improvement with regard to increased forming limits has been achieved by heating the tool parts, which leads to a softening of paperboard constituents such as lignin. A further approach is moistening of the samples, whereby the hydrogen bonds between the fibers are weakened, resulting in an increase in formability. It is expected that a combination of both approaches will result in a significant increase in forming capacity and in shape accuracy. For this reason, a new tool concept is introduced within the scope of this work which makes it possible to moisten samples during the deep drawing process by means of a steam supply. The conducted investigations show that spring-back in the preferred fiber direction can be reduced by 38 %. Orthogonal to the preferred fiber direction, a reduction of spring-back of up to 79 % is determined, which corresponds to a perfect shape. Moreover, it is determined that the steam duration and the initial moisture content have an influence on the final shape. In addition to the increased dimensional accuracy, an optimized wrinkle compression compared to conventional deep drawing is found. According to the results, it can be summarized that steam application in the deep drawing of paperboard significantly improves part quality.

  1. Key technologies and risk management of deep tunnel construction at Jinping II hydropower station

    Directory of Open Access Journals (Sweden)

    Chunsheng Zhang

    2016-08-01

    Full Text Available The four diversion tunnels at Jinping II hydropower station represent the deepest underground project yet conducted in China, with an overburden depth of 1500–2000 m and a maximum depth of 2525 m. The tunnel structure was subjected to a maximum external water pressure of 10.22 MPa and the maximum single-point groundwater inflow of 7.3 m3/s. The success of the project construction was related to numerous challenging issues such as the stability of the rock mass surrounding the deep tunnels, strong rockburst prevention and control, and the treatment of high-pressure, large-volume groundwater infiltration. During the construction period, a series of new technologies was developed for the purpose of risk control in the deep tunnel project. Nondestructive sampling and in-situ measurement technologies were employed to fully characterize the formation and development of excavation damaged zones (EDZs), and to evaluate the mechanical behaviors of deep rocks. The time effect of marble fracture propagation, the brittle–ductile–plastic transition of marble, and the temporal development of rock mass fracture and damage induced by high geostress were characterized. The safe construction of deep tunnels was achieved under a high risk of strong rockburst using active measures, a support system comprised of lining, grouting, and external water pressure reduction techniques that addressed the coupled effect of high geostress, high external water pressure, and a comprehensive early-warning system. A complete set of technologies for the treatment of high-pressure and large-volume groundwater infiltration was developed. Monitoring results indicated that the Jinping II hydropower station has been generally stable since it was put into operation in 2014.

  2. Construction of Neural Networks for Realization of Localized Deep Learning

    Directory of Open Access Journals (Sweden)

    Charles K. Chui

    2018-05-01

    Full Text Available The subject of deep learning has recently attracted users of machine learning from various disciplines, including: medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, theoretical development of deep learning is still in its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustration, we only focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to deal with outliers. The main theoretical result in this paper is the order O(m^(-2s/(2s+d))) of approximation of the regression function with regularity s, in terms of the number m of sample points, where the (unknown) manifold dimension d replaces the dimension D of the sampling (Euclidean) space for shallow nets.

  3. Deep TMS in a resistant major depressive disorder: a brief report.

    Science.gov (United States)

    Rosenberg, O; Shoenfeld, N; Zangen, A; Kotler, M; Dannon, P N

    2010-05-01

    Repetitive transcranial magnetic stimulation (rTMS) has proven effective in the treatment of depression. Recently, a coil achieving greater intracranial penetration has been developed. We tested the efficacy of the coil in the treatment of resistant major depression. Our sample included seven patients suffering from major depression who were treated using Brainsway's H1-coil connected to a Magstim Rapid 2 stimulator. Deep TMS treatment was given to each patient in five sessions per week over a period of 4 weeks. Patients were treated at 120% of the motor threshold intensity and a frequency of 20 Hz, with a total of 1,680 pulses per session. Five patients completed 20 sessions: one attained remission (Hamilton Depression Rating Scale (HDRS) = 9); three patients reached a reduction of more than 50% in their pre-treatment HDRS; and one patient achieved a partial response (i.e., the HDRS score dropped from 21 to 12). The average HDRS score dropped to 12.6 and the average Hamilton Anxiety Rating Scale score dropped to 9. Two patients dropped out: one due to insomnia and the second due to a lack of response. Compared to the pooled response and remission rates when treating major depression with rTMS, deep TMS as used in this study is at least similarly effective. Still, a severe limitation of this study is its small sample size, which makes a comparison of the two methods in terms of their effectiveness or side effects impossible. Greater numbers of subjects should be studied to achieve this aim. An H1 deep TMS coil could be used as an alternative treatment for major depressive disorder.

  4. Economic Evaluation of SMART Deployment in the MENA Region using DEEP 5.0

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Han-Ok; Lee, Man-Ki; Zee, Sung-Kyun; Kim, Young-In; Kim, Keung Koo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Some countries have officially announced that the development of atomic energy is essential to meet the nation's growing requirements for energy to generate electricity, produce desalinated water, and reduce reliance on depleting hydrocarbon resources. SMART (system-integrated modular advanced reactor) is a small-sized advanced integral reactor with a rated thermal power of 330 MW. It can produce 100 MW of electricity, or 90 MW of electricity and 40,000 tons of desalinated water concurrently, which is sufficient for 100,000 residents. It is an integral-type reactor with a sensible mixture of proven technologies and advanced design features. SMART aims at achieving enhanced safety and improved economics; the enhancement of safety and reliability is realized by incorporating inherent safety-improving features and reliable passive safety systems. The improvement in economics is achieved through system simplification, component modularization, reduction of construction time, and high plant availability. The standard design approval assures the safety of the SMART system. In this study, the economics of SMART deployment in the MENA region are evaluated using the DEEP 5.0 software. Technical input data are prepared on the basis of the local environmental conditions of the MENA region, and the collected technical and economic data are used as input to the DEEP program to calculate the power and water costs. The results show that the SMART plant can supply 94 MWe to an external grid system along with 40,000 m³/d of fresh water. The power and water costs are calculated for various specific construction costs.
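    As a rough illustration of the kind of levelized-cost arithmetic a tool like DEEP performs, the sketch below uses the standard capital-recovery-factor formula; all cost figures, the discount rate, the lifetime, and the 90% availability are assumptions, not DEEP 5.0 inputs or outputs (only the 94 MWe figure echoes the record).

```python
# Standard capital-recovery-factor arithmetic for a levelized cost of
# electricity. All numbers are illustrative assumptions.

def levelized_cost(capital, fixed_om, fuel, output_per_year, rate, years):
    """Cost per MWh: annualized capital plus yearly costs, over yearly output."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_cost = capital * crf + fixed_om + fuel
    return annual_cost / output_per_year

mwh_per_year = 94 * 8760 * 0.9  # 94 MWe at an assumed 90% availability
lcoe = levelized_cost(capital=400e6, fixed_om=20e6, fuel=10e6,
                      output_per_year=mwh_per_year, rate=0.07, years=40)
print(round(lcoe, 1))  # $/MWh under these assumptions
```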

  5. Economic Evaluation of SMART Deployment in the MENA Region using DEEP 5.0

    International Nuclear Information System (INIS)

    Kang, Han-Ok; Lee, Man-Ki; Zee, Sung-Kyun; Kim, Young-In; Kim, Keung Koo

    2014-01-01

    Some countries have officially announced that the development of atomic energy is essential to meet the nation's growing requirements for energy to generate electricity, produce desalinated water, and reduce reliance on depleting hydrocarbon resources. SMART (system-integrated modular advanced reactor) is a small-sized advanced integral reactor with a rated thermal power of 330 MW. It can produce 100 MW of electricity, or 90 MW of electricity and 40,000 tons of desalinated water concurrently, which is sufficient for 100,000 residents. It is an integral-type reactor with a sensible mixture of proven technologies and advanced design features. SMART aims at achieving enhanced safety and improved economics; the enhancement of safety and reliability is realized by incorporating inherent safety-improving features and reliable passive safety systems. The improvement in economics is achieved through system simplification, component modularization, reduction of construction time, and high plant availability. The standard design approval assures the safety of the SMART system. In this study, the economics of SMART deployment in the MENA region are evaluated using the DEEP 5.0 software. Technical input data are prepared on the basis of the local environmental conditions of the MENA region, and the collected technical and economic data are used as input to the DEEP program to calculate the power and water costs. The results show that the SMART plant can supply 94 MWe to an external grid system along with 40,000 m³/d of fresh water. The power and water costs are calculated for various specific construction costs.

  6. Implications of Deep Decarbonization for Carbon Cycle Science

    Science.gov (United States)

    Jones, A. D.; Williams, J.; Torn, M. S.

    2016-12-01

    The energy-system transformations required to achieve deep decarbonization in the United States, defined as a reduction of greenhouse gas emissions of 80% or more below 1990 levels by 2050, have profound implications for carbon cycle science, particularly with respect to four key objectives: understanding and enhancing the terrestrial carbon sink, using bioenergy sustainably, controlling non-CO2 GHGs, and emissions monitoring and verification (M&V). (1) As a source of mitigation, the terrestrial carbon sink is pivotal but uncertain, and changes in the expected sink may significantly affect the overall cost of mitigation. Yet the dynamics of the sink under changing climatic conditions, and the potential to protect and enhance the sink through land management, are poorly understood. Policy urgently requires an integrative research program that links basic science knowledge to land management practices. (2) Biomass resources can fill critical energy needs in a deeply decarbonized system, but current understanding of sustainability and lifecycle carbon aspects is limited. Mitigation policy needs better understanding of the sustainable amount, types, and cost of bioenergy feedstocks, their interactions with other land uses, and more efficient and reliable monitoring of embedded carbon. (3) As CO2 emissions from energy decrease under deep decarbonization, the relative share of non-CO2 GHGs grows larger and their mitigation more important. Because the sources tend to be distributed, variable, and uncertain, they have been under-researched. Policy needs a better understanding of mitigation priorities and costs, informed by deeper research in key areas such as fugitive CH4, fertilizer-derived N2O, and industrial F-gases. (4) The M&V challenge under deep decarbonization changes with a steep decrease in combustion CO2 sources due to widespread electrification, while a greater share of CO2 releases is net-carbon-neutral. Similarly, gas pipelines may carry an increasing share of

  7. Imaging findings and significance of deep neck space infection

    International Nuclear Information System (INIS)

    Zhuang Qixin; Gu Yifeng; Du Lianjun; Zhu Lili; Pan Yuping; Li Minghua; Yang Shixun; Shang Kezhong; Yin Shankai

    2004-01-01

    Objective: To study the imaging appearance of deep neck space cellulitis and abscess and to evaluate the diagnostic criteria of deep neck space infection. Methods: CT and MRI findings of 28 cases with deep neck space infection proved by clinical manifestation and pathology were analyzed, including 11 cases of retropharyngeal space infection, 5 cases of parapharyngeal space infection, 4 cases of masticator space infection, and 8 cases of multi-space infection. Results: CT and MRI could display the swelling of the soft tissues and the displacement, reduction, or disappearance of lipoid space in cellulitis. In inflammatory tissues, MRI demonstrated hypointense or isointense signal on T1WI and hyperintense signal changes on T2WI. In abscess, CT could display hypodensity in the center and boundary enhancement of the abscess. MRI could display obvious hyperintense signal on T2WI and boundary enhancement. Conclusion: CT and MRI could provide useful information for deep neck space cellulitis and abscess.

  8. Application of Moessbauer spectroscopy to the study of neptunium adsorbed on deep-sea sediments

    International Nuclear Information System (INIS)

    Bennett, B.A.; Rees, L.V.C.

    1987-03-01

    A neptunium Moessbauer spectrometer (the first in Great Britain) was constructed and the Moessbauer spectra of an NpAl Laves phase alloy were obtained. Neptunium was sorbed onto a calcareous deep-sea sediment from sea water, using a successive-loading technique. Sorption appeared to proceed by an equilibrium reaction and, because of the low solubility of neptunium in seawater, the maximum loading that could be achieved was 8 mg ²³⁷Np/g sediment. This proved to be an adequate concentration for Moessbauer measurements and a Moessbauer spectrum was obtained. It showed that most of the neptunium occupied exchange sites and was not present as precipitates of neptunium compounds. It was probably in the 4+ state, indicating that reduction had occurred during sorption. This work has demonstrated that Moessbauer spectroscopy has great potential as an aid to understanding the mechanism of actinide sorption in natural systems. (author)

  9. The onset of fabric development in deep marine sediments

    NARCIS (Netherlands)

    Maffione, Marco; Morris, Antony

    2017-01-01

    Post-depositional compaction is a key stage in the formation of sedimentary rocks that results in porosity reduction, grain realignment and the production of sedimentary fabrics. The progressive time-depth evolution of the onset of fabric development in deep marine sediments is poorly constrained.

  10. Gene expression inference with deep learning.

    Science.gov (United States)

    Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui

    2016-06-15

    Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture complex nonlinear relationships between expressions of genes. We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: xhx@ics.uci.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
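    As a concrete illustration of the inference setup (not the D-GEX code or LINCS data), the sketch below fits the linear-regression baseline this record compares against, on synthetic landmark/target expressions, and reports the mean-absolute-error metric used in the study; the data sizes and nonlinear relationship are assumptions.

```python
import numpy as np

# Toy version of the inference task: predict one "target" gene from a few
# "landmark" genes using the ordinary-least-squares baseline, then score it
# with mean absolute error (MAE). Synthetic data; sizes are assumptions.

rng = np.random.default_rng(1)

n_samples, n_landmark = 200, 10
L = rng.normal(size=(n_samples, n_landmark))           # landmark expressions
t = np.tanh(L[:, 0] * L[:, 1]) + 0.1 * L.sum(axis=1)   # nonlinear target gene

# Linear-regression baseline: least squares with an intercept column.
X = np.hstack([L, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)
pred = X @ coef

mae = np.mean(np.abs(pred - t))  # the error metric reported in the record
print(round(float(mae), 3))
```

    A deep network would replace the linear map `X @ coef` with a learned nonlinear function; the record reports that this lowers MAE for the vast majority of target genes.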

  11. Feasibility and Costs of Natural Gas as a Bridge to Deep Decarbonization in the United States

    Science.gov (United States)

    Jones, A. D.; McJeon, H. C.; Muratori, M.; Shi, W.

    2015-12-01

    Achieving emissions reductions consistent with a 2 degree Celsius global warming target requires nearly complete replacement of traditional fossil fuel combustion with near-zero carbon energy technologies in the United States by 2050. There are multiple technological change pathways consistent with this deep decarbonization, including strategies that rely on renewable energy, nuclear, and carbon capture and storage (CCS) technologies. The replacement of coal-fired power plants with natural gas-fired power plants has also been suggested as a bridge strategy to achieve near-term emissions reduction targets. These gas plants, however, would need to be replaced by near-zero energy technologies or retrofitted with CCS by 2050 in order to achieve longer-term targets. Here we examine the costs and feasibility of a natural gas bridge strategy. Using the Global Change Assessment Model (GCAM), we develop multiple scenarios that each meet the recent US Intended Nationally Determined Contribution (INDC) to reduce GHG emissions by 26%-28% below its 2005 levels in 2025, as well as a deep decarbonization target of 80% emissions reductions below 1990 levels by 2050. We find that the gas bridge strategy requires that gas plants be retired on average 20 years earlier than their designed lifetime of 45 years, a potentially challenging outcome to achieve from a policy perspective. Using a more idealized model, we examine the net energy system costs of this gas bridge strategy compared to one in which near-zero energy technologies are deployed in the near term. We explore the sensitivity of these cost results to four factors: the discount rate applied to future costs, the length (or start year) of the gas bridge, the relative capital cost of natural gas vs. near-zero energy technology, and the fuel price of natural gas. The discount rate and cost factors are found to be more important than the length of the bridge. However, we find an important interaction as well.
At low discount rates
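    The sensitivity to the discount rate can be made concrete with a toy net-present-cost comparison (the numbers below are illustrative assumptions, not GCAM results): a gas bridge is cheap up front but incurs a replacement cost mid-century, while near-zero technology costs more today.

```python
# Toy net-present-value comparison of a "gas bridge" versus immediate
# near-zero deployment. Cashflows are (year, cost) pairs; all figures are
# illustrative assumptions.

def npv(cashflows, rate):
    """Net present value of a list of (year, cost) pairs."""
    return sum(cost / (1 + rate) ** year for year, cost in cashflows)

gas_bridge = [(0, 50), (25, 120)]  # cheap gas plant now, costly replacement later
near_zero = [(0, 130)]             # expensive near-zero technology today

for r in (0.01, 0.07):
    print(r, round(npv(gas_bridge, r), 1), round(npv(near_zero, r), 1))
```

    With these assumed figures, the low discount rate makes the future replacement cost loom large (favoring near-zero deployment now), while the high discount rate shrinks it (favoring the bridge), illustrating the kind of interaction the record describes.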

  12. Deep Red (Profondo Rosso)

    CERN Multimedia

    Cine Club

    2015-01-01

    Wednesday 29 April 2015 at 20:00 CERN Council Chamber    Deep Red (Profondo Rosso) Directed by Dario Argento (Italy, 1975) 126 minutes A psychic who can read minds picks up the thoughts of a murderer in the audience and soon becomes a victim. An English pianist gets involved in solving the murders, but finds many of his avenues of inquiry cut off by new murders, and he begins to wonder how the murderer can track his movements so closely. Original version Italian; English subtitles

  13. Reversible deep disposal

    International Nuclear Information System (INIS)

    2009-10-01

    This presentation, given by the French national agency for radioactive waste management (ANDRA) at the October 8, 2009 meeting of the high committee for nuclear safety transparency and information (HCTISN), describes the concept of reversible deep disposal for high-level/long-lived radioactive wastes, as considered by ANDRA in the framework of the program law of June 28, 2006 on the sustainable management of radioactive materials and wastes. The document presents the social and political reasons for reversibility, the technical means considered (containers, disposal cavities, monitoring system, test facilities and industrial prototypes), and the decisional process (progressive development and closure of the facility, public information and debate). (J.S.)

  14. Deep inelastic neutron scattering

    International Nuclear Information System (INIS)

    Mayers, J.

    1989-03-01

    The report is based on an invited talk given at the conference "Neutron Scattering at ISIS: Recent Highlights in Condensed Matter Research", held in Rome in 1988, and is intended as an introduction to the techniques of deep inelastic neutron scattering. The subject is discussed under the following topic headings: the impulse approximation (I.A.), scaling behaviour, kinematical consequences of energy and momentum conservation, examples of measurements, derivation of the I.A., the I.A. in a harmonic system, and validity of the I.A. in neutron scattering. (U.K.)

  15. [Deep mycoses rarely described].

    Science.gov (United States)

    Charles, D

    1986-01-01

    Besides the well-known deep mycoses (histoplasmosis, candidosis, cryptococcosis), there are other mycoses that are less frequently described. Some are endemic in certain countries: South American blastomycosis in Brazil, coccidioidomycosis in California. Others are cosmopolitan and may affect anyone (sporotrichosis) or only immunodeficient persons (mucormycosis). They do not spare Africa, where one may encounter basidiobolomycosis, rhinophycomycosis, dermatophytosis, sporotrichosis and, more recently reported, rhinosporidiosis. Important therapeutic progress has been made with amphotericin B and with antifungal imidazole compounds (miconazole and ketoconazole). Surgical intervention is sometimes recommended in chromomycosis and rhinosporidiosis.

  16. Deep penetration calculations

    International Nuclear Information System (INIS)

    Thompson, W.L.; Deutsch, O.L.; Booth, T.E.

    1980-04-01

    Several Monte Carlo techniques are compared in the transport of neutrons of different source energies through two different deep-penetration problems, each with two parts. The first problem involves transmission through a 200-cm concrete slab. The second problem is a 90° bent pipe jacketed by concrete. In one case the pipe is void, and in the other it is filled with liquid sodium. Calculations are made with two different Los Alamos Monte Carlo codes: the continuous-energy code MCNP and the multigroup code MCMG.

  17. NCLB: Achievement Robin Hood?

    Science.gov (United States)

    Bracey, Gerald W.

    2008-01-01

    In his "Wall Street Journal" op-ed on the 25th anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the…

  18. Reducing the Achievement Gap.

    Science.gov (United States)

    McCombs, Barbara L.

    2000-01-01

    Reviews the College Board's report, "Reaching the Top," which addresses educational underrepresentation of high-achieving minority students, examining how social sciences, psychology, and education research contribute to an understanding of the feasibility of the report's recommendations and noting implications of these recommendations…

  19. Explorations in achievement motivation

    Science.gov (United States)

    Helmreich, Robert L.

    1982-01-01

    Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.

  20. Schooling and Social Achievement.

    Science.gov (United States)

    Kim, Byong-sung; And Others

    Until the 1960s schooling in Korea was looked upon quite favorably as a means of achieving equal social and economic opportunities. In the 1970s, however, many began to raise the question of whether the expansion of educational opportunities really did reduce social inequalities. This report discusses research that analyzes available evidence…

  1. Correlates of Achievement Motivation.

    Science.gov (United States)

    Whiteside, Marilyn

    1978-01-01

    Undergraduates given a self-concept scale, a sentence completion exercise, and story cues related to academic achievement generally expressed positive attitudes toward success; but students of both sexes with high self-esteem tended to associate success with a male, and those with lower self-esteem attributed success to a female. (Author)

  2. Achieving Quality Integrated Education.

    Science.gov (United States)

    Hawley, Willis D.; Rosenholtz, Susan J.

    While desegregation is neither a necessary nor a sufficient condition for ensuring either equity or quality education for minorities, the evidence is convincing that it is "educationally more difficult" to improve student achievement in segregated schools. Desegregation offers the opportunity to enhance the quality of education, particularly when…

  3. Deep Super Learner: A Deep Ensemble for Classification Problems

    OpenAIRE

    Young, Steven; Abdou, Tamer; Bener, Ayse

    2018-01-01

    Deep learning has become very popular for tasks such as predictive modeling and pattern recognition in handling big data. Deep learning is a powerful machine learning method that extracts lower level features and feeds them forward for the next layer to identify higher level features that improve performance. However, deep neural networks have drawbacks, which include many hyper-parameters and infinite architectures, opaqueness into results, and relatively slower convergence on smaller datasets...

  4. Particle swarm optimization for programming deep brain stimulation arrays.

    Science.gov (United States)

    Peña, Edgar; Zhang, Simeng; Deyo, Steve; Xiao, YiZi; Johnson, Matthew D

    2017-02-01

    Deep brain stimulation (DBS) therapy relies on both precise neurosurgical targeting and systematic optimization of stimulation settings to achieve beneficial clinical outcomes. One recent advance to improve targeting is the development of DBS arrays (DBSAs) with electrodes segmented both along and around the DBS lead. However, increasing the number of independent electrodes creates the logistical challenge of optimizing stimulation parameters efficiently. Solving such complex problems with multiple solutions and objectives is well known to occur in biology, in which complex collective behaviors emerge out of swarms of individual organisms engaged in learning through social interactions. Here, we developed a particle swarm optimization (PSO) algorithm to program DBSAs using a swarm of individual particles representing electrode configurations and stimulation amplitudes. Using a finite element model of motor thalamic DBS, we demonstrate how the PSO algorithm can efficiently optimize a multi-objective function that maximizes predictions of axonal activation in regions of interest (ROI, cerebellar-receiving area of motor thalamus), minimizes predictions of axonal activation in regions of avoidance (ROA, somatosensory thalamus), and minimizes power consumption. The algorithm solved the multi-objective problem by producing a Pareto front. ROI and ROA activation predictions were consistent across swarms (<1% median discrepancy in axon activation). The algorithm was able to accommodate for (1) lead displacement (1 mm) with relatively small ROI (≤9.2%) and ROA (≤1%) activation changes, irrespective of shift direction; (2) reduction in maximum per-electrode current (by 50% and 80%) with ROI activation decreasing by 5.6% and 16%, respectively; and (3) disabling electrodes (n = 3 and 12) with ROI activation reduction by 1.8% and 14%, respectively. Additionally, comparison between PSO predictions and multi-compartment axon model simulations showed discrepancies
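    A minimal particle swarm optimization sketch in the spirit of this record, with the multi-objective trade-off scalarized into one weighted score; the toy activation and power models, electrode count, amplitude bounds, and PSO weights are all assumptions (the paper instead produces a Pareto front from finite-element field predictions).

```python
import numpy as np

# Minimal PSO over candidate stimulation amplitudes for 4 hypothetical
# electrodes. The objective rewards a stand-in "ROI activation" and penalizes
# "ROA activation" and power; every model function here is an illustrative
# assumption, not the authors' DBS model.

rng = np.random.default_rng(2)

def objective(amps):
    roi = np.tanh(amps.sum())         # stand-in for ROI axonal activation
    roa = 0.3 * np.tanh(amps[0])      # stand-in for ROA activation
    power = 0.05 * np.sum(amps ** 2)  # power-consumption penalty
    return roi - roa - power          # scalarized multi-objective score

n_particles, n_dims, iters = 20, 4, 50
pos = rng.uniform(0.0, 3.0, size=(n_particles, n_dims))  # amplitudes (mA, assumed)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # common inertia / cognitive / social weights
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 3.0)  # respect per-electrode current limit
    vals = np.array([objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(np.round(gbest, 2))  # best amplitude vector found by the swarm
```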

  5. Particle Swarm Optimization for Programming Deep Brain Stimulation Arrays

    Science.gov (United States)

    Peña, Edgar; Zhang, Simeng; Deyo, Steve; Xiao, YiZi; Johnson, Matthew D.

    2017-01-01

    Objective Deep brain stimulation (DBS) therapy relies on both precise neurosurgical targeting and systematic optimization of stimulation settings to achieve beneficial clinical outcomes. One recent advance to improve targeting is the development of DBS arrays (DBSAs) with electrodes segmented both along and around the DBS lead. However, increasing the number of independent electrodes creates the logistical challenge of optimizing stimulation parameters efficiently. Approach Solving such complex problems with multiple solutions and objectives is well known to occur in biology, in which complex collective behaviors emerge out of swarms of individual organisms engaged in learning through social interactions. Here, we developed a particle swarm optimization (PSO) algorithm to program DBSAs using a swarm of individual particles representing electrode configurations and stimulation amplitudes. Using a finite element model of motor thalamic DBS, we demonstrate how the PSO algorithm can efficiently optimize a multi-objective function that maximizes predictions of axonal activation in regions of interest (ROI, cerebellar-receiving area of motor thalamus), minimizes predictions of axonal activation in regions of avoidance (ROA, somatosensory thalamus), and minimizes power consumption. Main Results The algorithm solved the multi-objective problem by producing a Pareto front. ROI and ROA activation predictions were consistent across swarms (<1% median discrepancy in axon activation). The algorithm was able to accommodate for (1) lead displacement (1 mm) with relatively small ROI (≤9.2%) and ROA (≤1%) activation changes, irrespective of shift direction; (2) reduction in maximum per-electrode current (by 50% and 80%) with ROI activation decreasing by 5.6% and 16%, respectively; and (3) disabling electrodes (n=3 and 12) with ROI activation reduction by 1.8% and 14%, respectively. Additionally, comparison between PSO predictions and multi-compartment axon model simulations

  6. Motivation, Cognitive Processing and Achievement in Higher Education

    Science.gov (United States)

    Bruinsma, Marjon

    2004-01-01

    This study investigated the question of whether a student's expectancy, values and negative affect influenced their deep information processing approach and achievement at the end of the first and second academic year. Five hundred and sixty-five first-year students completed a self-report questionnaire on three different occasions. The…

  7. Motivation, cognitive processing and achievement in higher education

    NARCIS (Netherlands)

    Bruinsma, M.

    2004-01-01

    This study investigated the question of whether a student's expectancy, values and negative affect influenced their deep information processing approach and achievement at the end of the first and second academic year. Five hundred and sixty-five first-year students completed a self-report questionnaire on three different occasions.

  8. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics

    Science.gov (United States)

    Wehmeyer, Christoph; Noé, Frank

    2018-06-01

    Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder type deep neural network to the task of dimension reduction of molecular dynamics data. We can show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes—beyond the capabilities of linear dimension reduction techniques.
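    A linear stand-in for the time-lagged autoencoder idea (the paper uses a nonlinear deep network): learn a bottlenecked map that reconstructs x(t+τ) from x(t). With a linear encoder/decoder this amounts to a rank-constrained regression, sketched here via SVD truncation of the least-squares solution; the synthetic slow/fast process is an assumption.

```python
import numpy as np

# Linear time-lagged "autoencoder" sketch: fit the full linear map from x(t)
# to x(t+tau), then impose a 1-D bottleneck by rank-1 SVD truncation. On a
# process with one slow and one fast coordinate, the bottleneck should retain
# the slow dynamics. Synthetic data; tau and dimensions are assumptions.

rng = np.random.default_rng(3)

T, tau = 2000, 5
slow = np.cumsum(0.01 * rng.normal(size=T))  # slowly varying coordinate
fast = rng.normal(size=T)                    # fast noise coordinate
X = np.stack([slow, fast], axis=1)

X0, Xtau = X[:-tau], X[tau:]                 # time-lagged pairs

A, *_ = np.linalg.lstsq(X0, Xtau, rcond=None)  # full linear map (2x2)
U, s, Vt = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vt[0])           # 1-D bottleneck version

err_full = np.mean((X0 @ A - Xtau) ** 2)
err_rank1 = np.mean((X0 @ A1 - Xtau) ** 2)
print(err_rank1 >= err_full)  # True: a bottleneck cannot beat the full map
```

    The deep version replaces the linear encoder/decoder with neural networks, which is what lets it find nonlinear slow collective variables.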

  9. Deep sea biophysics

    International Nuclear Information System (INIS)

    Yayanos, A.A.

    1982-01-01

    A collection of deep-sea bacterial cultures was completed. Procedures were instituted to shelter the culture collection from accidental warming. A substantial database on the rates of reproduction of more than 100 strains of bacteria from that collection was obtained from experiments, and analysis of the data was begun. The data on the rates of reproduction were obtained under conditions of temperature and pressure found in the deep sea. The experiments were facilitated by inexpensively fabricated pressure vessels, by the streamlining of the methods for the study of kinetics at high pressures, and by computer-assisted methods. A polybarothermostat was used to study the growth of bacteria along temperature gradients at eight distinct pressures. This device should allow for the study of microbial processes in the temperature field simulating the environment around buried high-level waste (HLW). It is small enough to allow placement in a radiation field in future studies. A flow fluorocytometer was fabricated. This device will be used to determine the DNA content per cell in bacteria grown in laboratory culture and in microorganisms in samples from the ocean. The technique will be tested for its rapidity in determining the concentration of cells (standing stock of microorganisms) in samples from the ocean.

  10. Deep Learning in Radiology.

    Science.gov (United States)

    McBee, Morgan P; Awan, Omer A; Colucci, Andrew T; Ghobadi, Comeron W; Kadom, Nadja; Kansagra, Akash P; Tridandapani, Srini; Auffermann, William F

    2018-03-29

    As radiology is inherently a data-driven specialty, it is especially conducive to utilizing data processing techniques. One such technique, deep learning (DL), has become a remarkably powerful tool for image processing in recent years. In this work, the Association of University Radiologists Radiology Research Alliance Task Force on Deep Learning provides an overview of DL for the radiologist. This article aims to present an overview of DL in a manner that is understandable to radiologists; to examine past, present, and future applications; as well as to evaluate how radiologists may benefit from this remarkable new tool. We describe several areas within radiology in which DL techniques are having the most significant impact: lesion or disease detection, classification, quantification, and segmentation. The legal and ethical hurdles to implementation are also discussed. By taking advantage of this powerful tool, radiologists can become increasingly more accurate in their interpretations with fewer errors and spend more time to focus on patient care. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  11. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair

    2017-05-12

    We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field-of-view and depth-of-field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably better resolution, matching the performance of higher numerical aperture lenses, also significantly surpassing their limited field-of-view and depth-of-field. These results are transformative for various fields that use microscopy tools, including e.g., life sciences, where optical microscopy is considered as one of the most widely used and deployed techniques. Beyond such applications, our presented approach is broadly applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better and better as they continue to image specimen and establish new transformations among different modes of imaging.

  12. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios, so that their distributions are the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations, minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the outputs of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
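    The loss structure described above (pull same-class samples together, push different-class samples apart, and align the source and target feature distributions) can be sketched in a few lines of numpy. This is an illustrative toy version, not the authors' implementation; the linear-kernel MMD term and the weights `alpha`/`beta` are simplifying assumptions.

```python
import numpy as np

def pairwise_sq_dists(X):
    # squared Euclidean distances between all rows of X
    sq = np.sum(X**2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

def mmd(source, target):
    # maximum mean discrepancy between two feature sets (linear kernel):
    # squared distance between the domain means
    return np.sum((source.mean(axis=0) - target.mean(axis=0))**2)

def dtml_objective(feats, labels, target_feats, alpha=1.0, beta=1.0):
    # intra-class variation: mean distance between same-class pairs (minimize)
    # inter-class variation: mean distance between different-class pairs (maximize)
    D = pairwise_sq_dists(feats)
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    diff = ~same
    np.fill_diagonal(diff, False)
    intra = D[same].mean()
    inter = D[diff].mean()
    return intra - alpha * inter + beta * mmd(feats, target_feats)
```

    A metric network trained on this objective would drive it downward: compact classes, separated classes, and matched source/target statistics.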

  13. Remarkable reduction of thermal conductivity in phosphorene phononic crystal

    International Nuclear Information System (INIS)

    Xu, Wen; Zhang, Gang

    2016-01-01

    Phosphorene has received much attention due to its interesting physical and chemical properties, and its potential applications such as thermoelectricity. In thermoelectric applications, low thermal conductivity is essential for achieving a high figure of merit. In this work, we propose to reduce the thermal conductivity of phosphorene by adopting the phononic crystal structure, phosphorene nanomesh. With equilibrium molecular dynamics simulations, we find that the thermal conductivity is remarkably reduced in the phononic crystal. Our analysis shows that the reduction is due to the depressed phonon group velocities induced by Brillouin zone folding, and the reduced phonon lifetimes in the phononic crystal. Interestingly, it is found that the anisotropy ratio of thermal conductivity could be tuned by the ‘non-square’ pores in the phononic crystal, as the phonon group velocities in the direction with larger projection of pores is more severely suppressed, leading to greater reduction of thermal conductivity in this direction. Our work provides deep insight into thermal transport in phononic crystals and proposes a new strategy to reduce the thermal conductivity of monolayer phosphorene. (paper)
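    In equilibrium molecular dynamics, thermal conductivity is typically extracted with the Green-Kubo relation, κ = V/(k_B T²) ∫⟨J(0)J(t)⟩ dt. A minimal numpy sketch of that post-processing step for a single heat-flux component follows; the flux time series itself would come from the MD run, and the trapezoidal integration is a generic illustration, not the authors' code.

```python
import numpy as np

def autocorrelation(J, max_lag):
    # average <J(0) J(t)> over time origins, for lags 0..max_lag-1
    n = len(J)
    return np.array([np.mean(J[:n - lag] * J[lag:]) for lag in range(max_lag)])

def green_kubo_kappa(J, dt, volume, temperature, kB=1.380649e-23, max_lag=None):
    # kappa = V / (kB T^2) * integral of <J(0)J(t)> dt  (one Cartesian component)
    if max_lag is None:
        max_lag = len(J) // 2
    acf = autocorrelation(J, max_lag)
    # trapezoidal rule over the autocorrelation function
    integral = dt * (np.sum(acf) - 0.5 * (acf[0] + acf[-1]))
    return volume / (kB * temperature**2) * integral
```

    Depressed group velocities and shortened phonon lifetimes both show up here as a faster-decaying autocorrelation, hence a smaller integral and lower κ.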

  14. Deep Feature Consistent Variational Autoencoder

    OpenAIRE

    Hou, Xianxu; Shen, Linlin; Sun, Ke; Qiu, Guoping

    2016-01-01

    We present a novel method for constructing Variational Autoencoder (VAE). Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE, which ensures the VAE's output to preserve the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning works such as style transfer, we employ a pre-trained deep convolutional neural net...
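    The core idea, replacing a pixel-wise reconstruction loss with a loss on the hidden features of a pre-trained network, can be sketched as follows. The random-weight "feature extractor" is a stand-in for a pre-trained deep CNN; everything here is illustrative rather than the authors' implementation.

```python
import numpy as np

def toy_feature_extractor(x, weights):
    # stand-in for a pre-trained network: a stack of fixed linear maps + ReLU,
    # returning the activations of every layer (hypothetical, for illustration)
    feats = []
    h = x
    for W in weights:
        h = np.maximum(0.0, h @ W)
        feats.append(h)
    return feats

def feature_consistency_loss(x, x_hat, weights):
    # sum of squared differences between the feature maps of the input and
    # the reconstruction, replacing a pixel-by-pixel loss
    fx = toy_feature_extractor(x, weights)
    fr = toy_feature_extractor(x_hat, weights)
    return sum(np.sum((a - b)**2) for a, b in zip(fx, fr))
```

    Because the deep features encode spatial correlations, penalizing their mismatch pushes the VAE toward perceptually natural reconstructions.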

  15. Learning Motivation and Achievements

    Institute of Scientific and Technical Information of China (English)

    冯泽野

    2016-01-01

    It is known to all that motivation is one of the most important elements in EFL learning. This study analyzes the types of English learning motivation and the learning achievements of non-English-major students (bilingual program in Highway School and Architecture) at Chang'an University, who study English as a foreign language. The thesis intends to put forward certain strategies for promoting foreign language teaching.

  16. Achieving maximum baryon densities

    International Nuclear Information System (INIS)

    Gyulassy, M.

    1984-01-01

    In continuing work on nuclear stopping power in the energy range E_lab ≈ 10 GeV/nucleon, calculations were made of the energy and baryon densities that could be achieved in uranium-uranium collisions. Results are shown. The energy density reached could exceed 2 GeV/fm³ and baryon densities could reach as high as ten times normal nuclear density

  17. Hexavalent Chromium reduction by Trichoderma inhamatum

    Energy Technology Data Exchange (ETDEWEB)

    Morales-Battera, L.; Cristiani-Urbina, E.

    2009-07-01

    Reduction of hexavalent chromium [Cr(VI)] to trivalent chromium [Cr(III)] is a useful and attractive process for remediation of ecosystems and industrial effluents contaminated with Cr(VI). Cr(VI) reduction to Cr(III) can be achieved by both chemical and biological methods; however, biological reduction is more convenient than the chemical one since costs are lower and sludge is generated in smaller amounts. (Author)

  18. Reduction of nuclear waste with ALMRS

    International Nuclear Information System (INIS)

    Bultman, J.H.

    1993-10-01

    The Advanced Liquid Metal Reactor (ALMR) can operate on LWR discharged material. In calculating the reduction of this material in the ALMR, the inventory of the core should be taken into account. A high reduction can only be obtained if this inventory is reduced during operation of ALMRs. It is then possible to achieve a high reduction, up to a factor of 100, within a few hundred years. (orig.)

  19. School Segregation and Racial Academic Achievement Gaps

    Directory of Open Access Journals (Sweden)

    Sean F. Reardon

    2016-09-01

    Full Text Available Although it is clear that racial segregation is linked to academic achievement gaps, the mechanisms underlying this link have been debated since James Coleman published his eponymous 1966 report. In this paper, I examine sixteen distinct measures of segregation to determine which is most strongly associated with academic achievement gaps. I find clear evidence that one aspect of segregation in particular—the disparity in average school poverty rates between white and black students’ schools—is consistently the single most powerful correlate of achievement gaps, a pattern that holds in both bivariate and multivariate analyses. This implies that high-poverty schools are, on average, much less effective than lower-poverty schools and suggests that strategies that reduce the differential exposure of black, Hispanic, and white students to poor schoolmates may lead to meaningful reductions in academic achievement gaps.

  20. Silicon germanium mask for deep silicon etching

    KAUST Repository

    Serry, Mohamed

    2014-07-29

    Polycrystalline silicon germanium (SiGe) can offer excellent etch selectivity to silicon during cryogenic deep reactive ion etching in an SF₆/O₂ plasma. Etch selectivity of over 800:1 (Si:SiGe) may be achieved at etch temperatures from -80 degrees Celsius to -140 degrees Celsius. High aspect ratio structures with high resolution may be patterned into Si substrates using SiGe as a hard mask layer for construction of microelectromechanical systems (MEMS) devices and semiconductor devices.

  1. Deep web query interface understanding and integration

    CERN Document Server

    Dragut, Eduard C; Yu, Clement T

    2012-01-01

    There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech

  2. Silicon germanium mask for deep silicon etching

    KAUST Repository

    Serry, Mohamed; Rubin, Andrew; Refaat, Mohamed; Sedky, Sherif; Abdo, Mohammad

    2014-01-01

    Polycrystalline silicon germanium (SiGe) can offer excellent etch selectivity to silicon during cryogenic deep reactive ion etching in an SF₆/O₂ plasma. Etch selectivity of over 800:1 (Si:SiGe) may be achieved at etch temperatures from -80 degrees Celsius to -140 degrees Celsius. High aspect ratio structures with high resolution may be patterned into Si substrates using SiGe as a hard mask layer for construction of microelectromechanical systems (MEMS) devices and semiconductor devices.

  3. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces the several subfields of deep learning including a specific tutorial of convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results, our basic implementation of a convolutional restricted Boltzmann machine on the Mixed National Institute of Standards and Technology database (MNIST), and we explain how to use deep learning networks to assist in our development of a robust gender classification system.

  4. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2014-11-05

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.
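    The metric reported above, the geometric mean (GM) of specificity and sensitivity, and the ensemble's combine-many-classifiers step are easy to make concrete. A generic Python sketch (not the DEEP codebase; function names are illustrative):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    # binary confusion-matrix rates for labels in {0, 1}
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

def geometric_mean_score(y_true, y_pred):
    # GM = sqrt(sensitivity * specificity); unlike accuracy, it stays low
    # if either class is poorly predicted, which matters under class imbalance
    sens, spec = sensitivity_specificity(y_true, y_pred)
    return np.sqrt(sens * spec)

def majority_vote(predictions):
    # combine several classifiers' 0/1 predictions by simple majority
    P = np.asarray(predictions)
    return (P.mean(axis=0) >= 0.5).astype(int)
```

    GM is a natural headline number for enhancer prediction precisely because enhancers are the rare class in the learning sets.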

  5. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.; Kalnis, Panos; Bajic, Vladimir B.

    2014-01-01

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.

  6. Deep learning? What deep learning? | Fourie | South African ...

    African Journals Online (AJOL)

    In teaching generally over the past twenty years, there has been a move towards teaching methods that encourage deep, rather than surface approaches to learning. The reason for this being that students, who adopt a deep approach to learning are considered to have learning outcomes of a better quality and desirability ...

  7. Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.

    Science.gov (United States)

    Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi

    2018-04-12

    Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new approach based on an image-processing or deep-learning-based method for super-resolution of 3D images with such asymmetric resolution, so as to restore the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, which are subjected to the deep-learning-based method as well as conventional methods. We find that the former yields superior restoration, particularly as the asymmetric resolution is increased. Our super-resolution approach for images having asymmetric resolution enables observation time reduction.
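    The asymmetric-resolution problem can be made concrete via the conventional baseline such a learned method is compared against: simple linear interpolation along the milling (depth) axis. A numpy sketch (illustrative only; the paper's deep-learning recovery replaces this step):

```python
import numpy as np

def upsample_depth(vol, factor):
    # linear interpolation along axis 0 (the FIB milling direction);
    # the lateral (SEM) axes are left untouched
    n = vol.shape[0]
    z_new = np.linspace(0, n - 1, (n - 1) * factor + 1)
    lo = np.floor(z_new).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = (z_new - lo)[:, None, None]
    return (1 - frac) * vol[lo] + frac * vol[hi]
```

    Interpolation can only blur between the stripped layers; the deep-learning approach instead learns plausible high-frequency content from the high-resolution lateral sub-images.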

  8. Deep sea radionuclides

    International Nuclear Information System (INIS)

    Kanisch, G.; Vobach, M.

    1993-01-01

    Every year since 1979, either in spring or in summer, the fishing research vessel 'Walther Herwig' goes to the North Atlantic disposal areas of solid radioactive wastes, and, for comparative purposes, to other areas, in order to collect water samples, plankton and nekton, and, from the deep sea bed, sediment samples and benthos organisms. In addition to data on the radionuclide contents of various media, information about the plankton, nekton and benthos organisms living in those areas and about their biomasses could be gathered. The investigations are aimed at acquiring scientifically founded knowledge of the uptake of radioactive substances by microorganisms, and their migration from the sea bottom to the areas used by man. (orig.) [de

  9. Deep inelastic phenomena

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    The experimental situation of the deep inelastic scattering for electrons (muons) is reviewed. A brief history of experimentation highlights Mohr and Nicoll's 1932 experiment on electron-atom scattering and Hofstadter's 1950 experiment on electron-nucleus scattering. The phenomenology of electron-nucleon scattering carried out between 1960 and 1970 is described, with emphasis on the parton model and scaling. Experiments at SLAC and FNAL since 1974 exhibit scaling violations. Three muon-nucleon scattering experiments at BFP, BCDMA, and EMA, currently producing new results in the high Q² domain, suggest a rather flat behaviour of the structure function at fixed x as a function of Q². It is seen that the structure measured in DIS can then be projected into a pure hadronic process to predict a cross section. Proton-neutron difference, moment analysis, and Drell-Yan pairs are also considered

  10. Finding optimal exact reducts

    KAUST Repository

    AbouEisha, Hassan M.

    2014-01-01

    The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts

  11. Deep neural networks to enable real-time multimessenger astrophysics

    Science.gov (United States)

    George, Daniel; Huerta, E. A.

    2018-02-01

    Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering—a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, which are designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and also estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering using whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves performance similar to matched filtering, while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches of gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.
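    The baseline invoked above, matched filtering, the optimal linear filter for a known template in Gaussian noise, amounts to sliding a unit-norm template across the data stream and locating the correlation peak. A toy numpy illustration (not a real detection pipeline, which matched-filters in the frequency domain against large template banks):

```python
import numpy as np

def matched_filter(data, template):
    # cross-correlate the data with a zero-mean, unit-norm template;
    # the location of the peak is the best-fit arrival time of the signal
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    return np.correlate(data, t, mode='valid')

# inject a chirp-like template into white Gaussian noise and recover it
rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * np.linspace(0.0, 4.0, 64) ** 2)
data = rng.normal(0.0, 0.5, 1024)
data[300:364] += template
peak = int(np.argmax(matched_filter(data, template)))
```

    The cost of this optimality is that every template in the bank must be correlated against the stream; the paper's point is that a trained network reaches comparable sensitivity at a fraction of that cost.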

  12. Outstanding engineering achievement

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    The annual award of the South African Institution of Civil Engineers for 'The Most Outstanding Civil Engineering Achievement of 1982' was made to Escom for the Koeberg Nuclear Power Station. In the site selection a compromise had to be made between an area remote from habitation, and an area relatively close to the need for power, sources of construction materials, transportation, operational staff and large quantities of cooling water. In the construction of Koeberg the safety of the workers and the public was regarded with the utmost concern

  13. Geological evidence for deep exploration in Xiazhuang uranium orefield and its periphery

    International Nuclear Information System (INIS)

    Feng Zhijun; Huang Hongkun; Zeng Wenwei; Wu Jiguang

    2011-01-01

    This paper first discussed the ore-controlling role of deep structures, the origin of metallogenic matter and fluid, and the relation of diabase to the silicification zone; it then summarized the achievements of geophysical surveys and drilling, and finally analysed the potential for deep exploration in the Xiazhuang uranium orefield. (authors)

  14. Sulphate reduction in the Aespoe HRL tunnel

    International Nuclear Information System (INIS)

    Gustafson, G.; Pedersen, K.; Tullborg, E.L.; Wallin, B.; Wikberg, P.

    1995-12-01

    Evidence and indications of sulphate reduction based on geological, hydrogeological, groundwater, isotope and microbial data gathered in and around the Aespoe Hard Rock Laboratory tunnel have been evaluated. This integrated investigation showed that sulphate reduction had taken place in the past but is most likely also an ongoing process. Anaerobic sulphate-reducing bacteria can live in marine sediments, in the tunnel sections under the sea and in deep groundwaters, since there is no access to oxygen. The sulphate-reducing bacteria seem to thrive when the Cl⁻ concentration of the groundwater is 4000-6000 mg/l. Sulphate reduction is an in situ process but the resulting hydrogen-sulphide rich water can be transported to other locations. A more vigorous sulphate reduction takes place when the organic content in the groundwater is high (>10 mg/l DOC) which is the case in the sediments and in the groundwaters under the sea. Some bacteria use hydrogen as an electron donor instead of organic carbon and can therefore live in deep environments where access to organic material is limited. The sulphate-reducing bacteria seem to adapt to changing flow situations caused by the tunnel construction relatively fast. Sulphate reduction seems to have occurred and will probably occur where conditions are favourable for the sulphate-reducing bacteria such as anaerobic brackish groundwater with dissolved sulphate and organic carbon or hydrogen. 59 refs, 37 figs, 6 tabs

  15. Exposure reduction in panoramic radiography

    International Nuclear Information System (INIS)

    Kapa, S.F.; Platin, E.

    1990-01-01

    Increased receptor speed in panoramic radiography is useful in reducing patient exposure if it doesn't substantially decrease the diagnostic quality of the resultant image. In a laboratory investigation four rare earth screen/film combinations were evaluated ranging in relative speed from 400 to 1200. The results indicated that an exposure reduction of approximately 15 percent can be achieved by substituting a 1200 speed system for a 400 speed system without significantly affecting the diagnostic quality of the image

  16. Breast Reduction Surgery

    Science.gov (United States)

    ... considering breast reduction surgery, consult a board-certified plastic surgeon. It's important to understand what breast reduction surgery entails — including possible risks and complications — as ...

  17. Context and Deep Learning Design

    Science.gov (United States)

    Boyle, Tom; Ravenscroft, Andrew

    2012-01-01

    Conceptual clarification is essential if we are to establish a stable and deep discipline of technology enhanced learning. The technology is alluring; this can distract from deep design in a surface rush to exploit the affordances of the new technology. We need a basis for design, and a conceptual unit of organization, that are applicable across…

  18. Achievement in Physics

    Science.gov (United States)

    1999-03-01

    Naomi Moran, a student at the Arnewood School, New Milton, Hampshire, was the first recipient of the `Achievement in Physics' prize awarded by the South Central Branch of The Institute of Physics. Naomi received an award certificate and cheque for £100 from Dr Ruth Fenn, Chairman of the Branch, at the annual Christmas lecture held at the University of Surrey in December. She is pictured with Dr Fenn and Steve Beith, physics teacher at the Arnewood School. Figure 1. Naomi Moran receiving her award (photograph courtesy of Peter Milford). The award is intended to celebrate personal achievement in physics at any level at age 16-17 and is not restricted to those who gain the highest academic results. Schools across the county were invited to nominate suitable candidates; Naomi's nomination by the school's deputy head of science impressed the judges because of her ability to grasp the most difficult parts of the subject quickly, in addition to the fact that she took her AS-level science in year 11 when she was only 16. She is currently studying A-level physics, chemistry and mathematics and hopes to continue her studies at university later this year.

  19. DANN: a deep learning approach for annotating the pathogenicity of genetic variants.

    Science.gov (United States)

    Quang, Daniel; Chen, Yifei; Xie, Xiaohui

    2015-03-01

    Annotating genetic variants, especially non-coding variants, for the purpose of identifying pathogenic variants remains a challenge. Combined annotation-dependent depletion (CADD) is an algorithm designed to annotate both coding and non-coding variants, and has been shown to outperform other annotation algorithms. CADD trains a linear kernel support vector machine (SVM) to differentiate evolutionarily derived, likely benign, alleles from simulated, likely deleterious, variants. However, SVMs cannot capture non-linear relationships among the features, which can limit performance. To address this issue, we have developed DANN. DANN uses the same feature set and training data as CADD to train a deep neural network (DNN). DNNs can capture non-linear relationships among features and are better suited than SVMs for problems with a large number of samples and features. We exploit Compute Unified Device Architecture-compatible graphics processing units and deep learning techniques such as dropout and momentum training to accelerate the DNN training. DANN achieves about a 19% relative reduction in the error rate and about a 14% relative increase in the area under the curve (AUC) metric over CADD's SVM methodology. All data and source code are available at https://cbcl.ics.uci.edu/public_data/DANN/. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
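    Two of the training techniques named above, dropout and momentum, are easy to sketch in isolation. This is illustrative numpy under assumed hyperparameters, not the DANN code:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(h, p, training=True):
    # inverted dropout: zero each unit with probability p during training
    # and rescale survivors so the expected activation is unchanged
    if not training or p == 0.0:
        return h
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

def momentum_step(w, v, grad, lr=0.1, mu=0.9):
    # classical momentum: the velocity v accumulates past gradients,
    # smoothing the trajectory and speeding up convergence
    v = mu * v - lr * grad
    return w + v, v

# toy usage: minimize f(w) = w**2 with momentum updates
w, v = 1.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, v, grad=2.0 * w)
```

    In the full model these would be applied layer-wise inside the DNN's training loop; both are regularization/optimization aids that the SVM baseline has no analogue for.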

  20. 30 CFR 203.60 - Who may apply for royalty relief on a case-by-case basis in deep water in the Gulf of Mexico or...

    Science.gov (United States)

    2010-07-01

    ...-case basis in deep water in the Gulf of Mexico or offshore of Alaska? 203.60 Section 203.60 Mineral... basis in deep water in the Gulf of Mexico or offshore of Alaska? You may apply for royalty relief under... REDUCTION IN ROYALTY RATES OCS Oil, Gas, and Sulfur General Royalty Relief for Pre-Act Deep Water Leases and...

  1. Achieving diagnosis by consensus

    LENUS (Irish Health Repository)

    Kane, Bridget

    2009-08-01

    This paper provides an analysis of the collaborative work conducted at a multidisciplinary medical team meeting, where a patient’s definitive diagnosis is agreed, by consensus. The features that distinguish this process of diagnostic work by consensus are examined in depth. The current use of technology to support this collaborative activity is described, and experienced deficiencies are identified. Emphasis is placed on the visual and perceptual difficulty for individual specialities in making interpretations, and on how, through collaboration in discussion, definitive diagnosis is actually achieved. The challenge for providing adequate support for the multidisciplinary team at their meeting is outlined, given the multifaceted nature of the setting, i.e. patient management, educational, organizational and social functions, that need to be satisfied.

  2. MATIC achievement report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    This paper reports the achievements of the MAnufacturing Technology supported by advanced and integrated Information system through international Cooperation (MATIC) ended in March 1999. The MATIC project is intended to develop international information systems to support manufacturing process from design to production through an international network in order to upgrade the manufacturing and supporting industries in Asian countries. The project has been completed by support provided by a large number of Japanese corporations and research institutes, and the counterparts in China, Indonesia, Malaysia, Singapore and Thailand. The developed prototype systems cover the three areas of automobile, electronics, textile and apparel industries. Demonstration tests have verified the functions thereof. In the automobile industry field, development was made on a system to link Japanese research and development corporations with Indonesian parts making corporations, and a system to exchange technological data between Indonesia and Thailand. In the electronics industry field, development was performed on an electronic catalog system to link Indonesia, Malaysia, Singapore and Thailand. (NEDO)

  3. Achieving Kaiser Permanente quality.

    Science.gov (United States)

    McHugh, Matthew D; Aiken, Linda H; Eckenhoff, Myra E; Burns, Lawton R

    2016-01-01

    The Kaiser Permanente model of integrated health delivery is highly regarded for high-quality and efficient health care. Efforts to reproduce Kaiser's success have mostly failed. One factor that has received little attention and that could explain Kaiser's advantage is its commitment to and investment in nursing as a key component of organizational culture and patient-centered care. The aim of this study was to investigate the role of Kaiser's nursing organization in promoting quality of care. This was a cross-sectional analysis of linked secondary data from multiple sources, including a detailed survey of nurses, for 564 adult, general acute care hospitals from California, Florida, Pennsylvania, and New Jersey in 2006-2007. We used logistic regression models to examine whether patient (mortality and failure-to-rescue) and nurse (burnout, job satisfaction, and intent-to-leave) outcomes in Kaiser hospitals were better than in non-Kaiser hospitals. We then assessed whether differences in nursing explained outcomes differences between Kaiser and other hospitals. Finally, we examined whether Kaiser hospitals compared favorably with hospitals known for having excellent nurse work environments-Magnet hospitals. Patient and nurse outcomes in Kaiser hospitals were significantly better compared with non-Magnet hospitals. Kaiser hospitals had significantly better nurse work environments, staffing levels, and more nurses with bachelor's degrees. Differences in nursing explained a significant proportion of the Kaiser outcomes advantage. Kaiser hospital outcomes were comparable with Magnet hospitals, where better outcomes have been largely explained by differences in nursing. An important element in Kaiser's success is its investment in professional nursing, which may not be evident to systems seeking to achieve Kaiser's advantage. Our results suggest that a possible strategy for achieving outcomes like Kaiser may be for hospitals to consider Magnet designation, a proven and

  4. Anticipating Deep Mapping: Tracing the Spatial Practice of Tim Robinson

    Directory of Open Access Journals (Sweden)

    Jos Smith

    2015-07-01

Full Text Available There has been little academic research published on the work of Tim Robinson despite an illustrious career, first as an artist of the London avant-garde, then as a map-maker in the west of Ireland, and finally as an author of place. In part, this dearth is due to the difficulty of approaching these three diverse strands collectively. However, recent developments in the field of deep mapping encourage us to look back at the continuity of Robinson’s achievements in full and offer a suitable framework for doing so. Deep mapping is socially engaged with living communities and with a depth of historical knowledge about place, while remaining keen to contribute artistically to the ongoing contemporary culture of place; its parameters are broad enough to encompass the range of Robinson’s whole practice and suggest unique ways to illuminate his very unusual career. But Robinson’s achievements also encourage a reflection on the historical context of deep mapping itself, as well as on the nature of its spatial practice (especially where space comes to connote a medium to be worked rather than an area/volume). With this in mind the following article both explores Robinson’s work through deep mapping and deep mapping through the work of this unusual artist.

  5. Achievement Goals and Achievement Emotions: A Meta-Analysis

    Science.gov (United States)

    Huang, Chiungjung

    2011-01-01

    This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…

  6. Deep Learning Fluid Mechanics

    Science.gov (United States)

    Barati Farimani, Amir; Gomes, Joseph; Pande, Vijay

    2017-11-01

We have developed a new data-driven modeling paradigm for the rapid inference and solution of the constitutive equations of fluid mechanics by deep learning models. Using generative adversarial networks (GANs), we train models for the direct generation of solutions to steady-state heat conduction and incompressible fluid flow without knowledge of the underlying governing equations. Rather than using artificial neural networks to approximate the solution of the constitutive equations, GANs can directly generate the solutions to these equations conditioned on an arbitrary set of boundary conditions. Both models predict temperature, velocity and pressure fields with high test accuracy (>99.5%). Our framework for inferring and generating the solutions of partial differential equations is applicable to any physical phenomenon and can be used to learn directly from experiments where the underlying physical model is complex or unknown. We have also shown that our framework can be used to couple multiple physics simultaneously, making it amenable to multi-physics problems.
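As a point of reference for what such a generative model learns to emulate, the steady-state heat conduction problem described above can be solved classically by relaxation. The sketch below is an illustration, not the authors' code; the grid size, iteration count, and boundary temperatures are arbitrary assumptions. It solves the Laplace equation for temperature under fixed (Dirichlet) boundary conditions by Jacobi iteration:

```python
import numpy as np

def solve_steady_heat(top, bottom, left, right, n=32, iters=5000):
    """Jacobi relaxation for the steady-state heat (Laplace) equation
    on an n x n grid with fixed boundary temperatures."""
    T = np.zeros((n, n))
    T[0, :], T[-1, :], T[:, 0], T[:, -1] = top, bottom, left, right
    for _ in range(iters):
        # each interior point relaxes toward the mean of its 4 neighbours
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])
    return T

# one hot edge at 100, the rest held at 0
field = solve_steady_heat(top=100.0, bottom=0.0, left=0.0, right=0.0)
```

A trained conditional generator would map the boundary values (plus a noise vector) directly to such a field in a single forward pass, skipping the iteration entirely.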

  7. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
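The aggregation such a network learns starts from a simple input layout: a stack of neighbouring frames fed to the CNN as channels. A minimal sketch of assembling that input, with a naive temporal-average baseline for comparison (the frame shapes and temporal radius are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def make_cnn_input(frames, center, radius=2):
    """Stack 2*radius+1 neighbouring grayscale frames around `center`
    into a (channels, H, W) tensor, the layout consumed by
    frame-aggregating deblurring CNNs."""
    stack = [frames[i] for i in range(center - radius, center + radius + 1)]
    return np.stack(stack, axis=0)

# toy video: 10 frames of 48x64 noise standing in for blurry frames
rng = np.random.default_rng(0)
video = [rng.random((48, 64)) for _ in range(10)]

x = make_cnn_input(video, center=5)   # (5, 48, 64) input tensor
baseline = x.mean(axis=0)             # naive average over the window
```

The learned network replaces the plain average with weights that decide, per region, which neighbouring frames are well enough aligned to contribute.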

  8. Deep space telescopes

    CERN Multimedia

    CERN. Geneva

    2006-01-01

The short series of seminars will address results and aims of current and future space astrophysics as the cultural framework for the development of deep space telescopes. It will then present such new tools, as they are currently available to, or imagined by, the scientific community, in the context of the science plans of ESA and of all major world space agencies. Ground-based astronomy, in the 400 years since Galileo’s telescope, has given us a profound phenomenological comprehension of our Universe, but has traditionally been limited to the narrow band(s) to which our terrestrial atmosphere is transparent. Celestial objects, however, do not care about our limitations, and distribute most of the information about their physics throughout the complete electromagnetic spectrum. Such information is there for the taking, from millimetre wavelengths to gamma rays. Forty years of astronomy from space, now covering most of the e.m. spectrum, have thus given us a better understanding of our physical Universe than t...

  9. Deep inelastic final states

    International Nuclear Information System (INIS)

    Girardi, G.

    1980-11-01

In these lectures we attempt to describe the final states of deep inelastic scattering as given by QCD. In the first section we briefly comment on the parton model and give the main properties of decay functions which are of interest for the study of semi-inclusive leptoproduction. The second section is devoted to the QCD approach to single hadron leptoproduction. First we recall basic facts on QCD logs and then derive the evolution equations for the fragmentation functions. For this purpose we make a short detour into e+e- annihilation. The rest of the section is a study of the factorization of long-distance effects associated with the initial and final states. We then show how, when one includes next-to-leading QCD corrections, one induces factorization breaking, and describe the double moments useful for testing such effects. The next section contains a review of the QCD jets in the hadronic final state. We begin by introducing the notion of an infrared-safe variable and defining a few useful examples. Distributions in these variables are studied to first order in QCD, with some comments on the resummation of logs encountered in higher orders. Finally, the last section is a 'gallimaufry' of jet studies.

  10. A Meta-Analysis of Single-Family Deep Energy Retrofit Performance in the U.S.

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-01

The current state of Deep Energy Retrofit (DER) performance in the U.S. has been assessed in 116 homes in the United States, using actual and simulated data gathered from the available domestic literature. Substantial airtightness reductions averaging 63% (n=48) were reported (two- to three-times more than in conventional retrofits), with average post-retrofit airtightness of 4.7 air changes per hour at 50 Pascals (ACH50) (n=94). Yet, mechanical ventilation was not installed consistently. In order to avoid indoor air quality (IAQ) issues, all future DERs should comply with ASHRAE 62.2-2013 requirements or equivalent. Projects generally achieved good energy results, with average annual net-site and net-source energy savings of 47%±20% and 45%±24% (n=57 and n=35), respectively, and carbon emission reductions of 47%±22% (n=23). Net-energy reductions did not vary reliably with house age, airtightness, or reported project costs, but pre-retrofit energy usage was correlated with total reductions (MMBtu).

  11. Detecting atrial fibrillation by deep convolutional neural networks.

    Science.gov (United States)

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we propose a novel method with high reliability and accuracy for AF detection via deep learning. The short-time Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to the STFT output and the SWT output were developed. Our new method requires neither detection of P or R peaks nor hand-designed features for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT achieved a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and is therefore a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
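The 2-D input the STFT-based model consumes is simply a time-frequency magnitude matrix. A minimal NumPy short-time Fourier transform over a synthetic 5 s segment illustrates the layout; the frame length, hop size, and sampling rate here are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def stft_magnitude(x, frame=128, hop=64):
    """Minimal STFT: Hann-windowed overlapping frames -> |rfft| per frame."""
    window = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i*hop : i*hop + frame] * window
                       for i in range(n_frames)])
    # transpose so rows are frequency bins, columns are time frames
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 250                                   # assumed ECG sampling rate (Hz)
t = np.arange(5 * fs) / fs                 # a 5 s segment, as in the paper
ecg_like = (np.sin(2 * np.pi * 1.2 * t)    # crude periodic stand-in for ECG
            + 0.1 * np.random.default_rng(1).standard_normal(t.size))

spec = stft_magnitude(ecg_like)            # 2-D matrix fed to the CNN
```

The resulting matrix (frequency bins by time frames) is what a 2-D convolutional network then treats like an image.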

  12. Distributed deep learning networks among institutions for medical imaging.

    Science.gov (United States)

    Chang, Ken; Balachandar, Niranjan; Lam, Carson; Yi, Darvin; Brown, James; Beers, Andrew; Rosen, Bruce; Rubin, Daniel L; Kalpathy-Cramer, Jayashree

    2018-03-29

    Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in a performance that was comparable to that of centrally hosted patient data. We also found that there is an improvement in the performance of cyclical weight transfer heuristic with a high frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
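The cyclical weight transfer heuristic can be illustrated with a toy stand-in for the deep model: a logistic regression whose weights are handed from one simulated institution to the next, so that only model parameters, never raw data, leave a shard. This is a sketch under those assumptions; the data, shard layout, and hyperparameters are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic linearly separable data, split across 4 "institutions"
w_true = np.array([1.5, -2.0, 0.5])
X = rng.standard_normal((400, 3))
y = (X @ w_true > 0).astype(float)
shards = [(X[i::4], y[i::4]) for i in range(4)]   # one shard per institution

def train_steps(w, Xs, ys, steps=50, lr=0.5):
    """Gradient descent on logistic loss, starting from the incoming weights."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xs @ w)))
        w = w - lr * Xs.T @ (p - ys) / len(ys)
    return w

# cyclical weight transfer: the model visits each institution in turn
w = np.zeros(3)
for cycle in range(5):
    for Xs, ys in shards:
        w = train_steps(w, Xs, ys)

acc = ((X @ w > 0) == (y > 0.5)).mean()   # accuracy on the pooled data
```

The outer loop corresponds to the transfer frequency the study varies: more frequent hand-offs mean each institution contributes smaller, more interleaved updates.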

  13. Deep Mapping and Spatial Anthropology

    Directory of Open Access Journals (Sweden)

    Les Roberts

    2016-01-01

    Full Text Available This paper provides an introduction to the Humanities Special Issue on “Deep Mapping”. It sets out the rationale for the collection and explores the broad-ranging nature of perspectives and practices that fall within the “undisciplined” interdisciplinary domain of spatial humanities. Sketching a cross-current of ideas that have begun to coalesce around the concept of “deep mapping”, the paper argues that rather than attempting to outline a set of defining characteristics and “deep” cartographic features, a more instructive approach is to pay closer attention to the multivalent ways deep mapping is performatively put to work. Casting a critical and reflexive gaze over the developing discourse of deep mapping, it is argued that what deep mapping “is” cannot be reduced to the otherwise a-spatial and a-temporal fixity of the “deep map”. In this respect, as an undisciplined survey of this increasing expansive field of study and practice, the paper explores the ways in which deep mapping can engage broader discussion around questions of spatial anthropology.

  14. Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies

    NARCIS (Netherlands)

    Ketema, J.; Simonsen, Jakob Grue

    2010-01-01

    We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in

  15. Deep learning for computational chemistry.

    Science.gov (United States)

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationships, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  16. Deep learning for computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Garrett B. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Hodas, Nathan O. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Vishnu, Abhinav [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354

    2017-03-08

The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  17. Achievement Motivation: A Rational Approach to Psychological Education

    Science.gov (United States)

    Smith, Robert L.; Troth, William A.

    1975-01-01

    Investigated the achievement motivation training component of psychological education. The subjects were 54 late-adolescent pupils. The experimental training program had as its objectives an increase in academic achievement motivation, internal feelings of control, and school performance, and a reduction of test anxiety. Results indicated…

  18. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu; Han, Renmin; Bi, Chongwei; Li, Mo; Wang, Sheng; Gao, Xin

    2017-01-01

    or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments

  19. Structural damage detection using deep learning of ultrasonic guided waves

    Science.gov (United States)

    Melville, Joseph; Alguri, K. Supreet; Deemer, Chris; Harley, Joel B.

    2018-04-01

Structural health monitoring using ultrasonic guided waves relies on accurate interpretation of guided wave propagation to distinguish damage state indicators. However, traditional physics-based models do not provide an accurate representation, and classic data-driven techniques, such as a support vector machine, are too simplistic to capture the complex nature of ultrasonic guided waves. To address this challenge, this paper uses a deep learning interpretation of ultrasonic guided waves to achieve fast, accurate, and automated structural damage detection. To achieve this, full wavefield scans of thin metal plates are used, half from the undamaged state and half from the damaged state. This data is used to train our deep network to predict the damage state of a plate with 99.98% accuracy given signals from just 10 spatial locations on the plate, compared to the 62% accuracy achieved by a support vector machine (SVM).

  20. Recent achievements of SIRGAS

    Science.gov (United States)

    Brunini, C.; Sánchez, L.

    2008-05-01

SIRGAS is the geocentric reference system for the Americas. Its definition corresponds to the IERS International Terrestrial Reference System (ITRS) and it is realized by a regional densification of the IERS International Terrestrial Reference Frame (ITRF). The SIRGAS activities are coordinated by three working groups: SIRGAS-WGI (Reference System) is committed to establishing and maintaining a continental-wide geocentric reference frame within the ITRF. This objective was initially accomplished through two continental GPS campaigns in 1995 and 2000, including 58 and 184 stations, respectively. Today, it is realized by around 130 continuously operating GNSS sites, which are processed weekly by the IGS Regional Network Associate Analysis Centre for SIRGAS (IGS-RNAAC-SIR). SIRGAS-WGII (Geocentric Datum) is primarily in charge of defining the SIRGAS geodetic datum in the individual countries, which is given by the origin, orientation and scale of the SIRGAS system, and the parameters of the GRS80 ellipsoid. It concentrates on promoting and supporting the adoption of SIRGAS in the Latin American and Caribbean countries through national densifications of the continental network. SIRGAS-WGIII (Vertical Datum) is dedicated to the definition and realization of a unified vertical reference system within a global frame. Its central purpose is to refer the geopotential numbers (or physical heights) in all countries to one and the same equipotential surface (W0), which must be globally defined. This also includes the transformation of the existing height datums into the new system. This study shows the SIRGAS achievements of the last two years.

  1. Entrepreneur achievement. Liaoning province.

    Science.gov (United States)

    Zhao, R

    1994-03-01

This paper reports the successful entrepreneurial endeavors of members of a 20-person women's group in Liaoning Province, China. Jing Yuhong, a member of the Family Planning Association at Shileizi Village, Dalian City, provided the basis for their achievements by first building an entertainment/study room in her home to encourage married women to learn family planning. Once stocked with books, magazines, pamphlets, and other materials on family planning and agricultural technology, dozens of married women in the neighborhood flocked voluntarily to the room. Yuhong also set out to give these women a way to earn their own income as a means of helping them gain greater equality with their husbands and exert greater control over their personal reproductive and social lives. She gave a section of her farming land to the women's group, loaned approximately US$5200 to group members to help them generate income from small business initiatives, built a livestock shed in her garden for the group to raise marmots, and erected an awning behind her house under which mushrooms could be grown. The investment yielded $12,000 in the first year, allowing each woman to keep more than $520 in dividends. Members then soon began going to fairs in the capital and other places to learn about the outside world, and have successfully ventured out on their own to generate individual incomes. Ten out of twenty women engaged in these income-generating activities asked for and got the one-child certificate.

  2. Outcomes of the DeepWind Conceptual Design

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Borg, Michael; Aagaard Madsen, Helge

    2015-01-01

DeepWind has been presented as a novel floating offshore wind turbine concept with cost reduction potential. Twelve international partners developed a Darrieus-type floating turbine with new materials and technologies for the deep-sea offshore environment. This paper summarizes results of the 5 MW DeepWind conceptual design. The concept was evaluated at the Hywind test site and described in terms of its few components, in particular the modified Troposkien blade shape and airfoil design. The feasibility of upscaling from 5 MW to 20 MW is discussed, taking into account the results from testing the DeepWind floating 1 kW demonstrator. The 5 MW simulation results, loading and performance are compared to the OC3-NREL 5 MW wind turbine. Finally, the paper elaborates the conceptual design with cost modelling.

  3. An introduction to deep submicron CMOS for vertex applications

    CERN Document Server

    Campbell, M; Cantatore, E; Faccio, F; Heijne, Erik H M; Jarron, P; Santiard, Jean-Claude; Snoeys, W; Wyllie, K

    2001-01-01

    Microelectronics has become a key enabling technology in the development of tracking detectors for High Energy Physics. Deep submicron CMOS is likely to be extensively used in all future tracking systems. Radiation tolerance in the Mrad region has been achieved and complete readout chips comprising many millions of transistors now exist. The choice of technology is dictated by market forces but the adoption of deep submicron CMOS for tracking applications still poses some challenges. The techniques used are reviewed and some of the future challenges are discussed.

  4. An adaptive deep learning approach for PPG-based identification.

    Science.gov (United States)

    Jindal, V; Birjandtalab, J; Pouyan, M Baran; Nourani, M

    2016-08-01

Wearable biosensors have become increasingly popular in healthcare due to their capabilities for low-cost, long-term biosignal monitoring. This paper presents a novel two-stage technique for biometric identification using these biosensors through Deep Belief Networks and Restricted Boltzmann Machines. Our identification approach improves robustness in current monitoring procedures within clinical, e-health and fitness environments using photoplethysmography (PPG) signals through deep learning classification models. The approach was tested on the TROIKA dataset using 10-fold cross-validation and achieved an accuracy of 96.1%.

  5. Deep Learning for Plant Identification in Natural Environment.

    Science.gov (United States)

    Sun, Yu; Liu, Yuan; Wang, Guan; Zhang, Haiyan

    2017-01-01

    Plant image identification has become an interdisciplinary focus in both botanical taxonomy and computer vision. The first plant image dataset collected by mobile phone in natural scene is presented, which contains 10,000 images of 100 ornamental plant species in Beijing Forestry University campus. A 26-layer deep learning model consisting of 8 residual building blocks is designed for large-scale plant classification in natural environment. The proposed model achieves a recognition rate of 91.78% on the BJFU100 dataset, demonstrating that deep learning is a promising technology for smart forestry.

  6. Deep Learning for Plant Identification in Natural Environment

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2017-01-01

    Full Text Available Plant image identification has become an interdisciplinary focus in both botanical taxonomy and computer vision. The first plant image dataset collected by mobile phone in natural scene is presented, which contains 10,000 images of 100 ornamental plant species in Beijing Forestry University campus. A 26-layer deep learning model consisting of 8 residual building blocks is designed for large-scale plant classification in natural environment. The proposed model achieves a recognition rate of 91.78% on the BJFU100 dataset, demonstrating that deep learning is a promising technology for smart forestry.
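The residual building blocks the 26-layer model is composed of can be sketched in a few lines; the key point is the identity shortcut, which lets a freshly initialised block behave as a near-identity map and is what makes networks of this depth trainable. A minimal NumPy forward pass (the dimensions and zero initialisation are illustrative, not the paper's configuration):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, b1, W2, b2):
    """y = relu(x + F(x)): the skip connection adds the block's input
    back to its two-layer transformation F before the final activation."""
    return relu(x + (relu(x @ W1 + b1) @ W2 + b2))

d = 8
x = np.random.default_rng(3).standard_normal((4, d))   # a batch of 4 features

# with a zero-initialised residual branch, the block reduces to relu(x)
zeros = (np.zeros((d, d)), np.zeros(d), np.zeros((d, d)), np.zeros(d))
y = residual_block(x, *zeros)
```

In the full model, convolutions replace the matrix products, but the shortcut structure is the same in every one of the 8 blocks.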

  7. Deep UV LEDs

    Science.gov (United States)

    Han, Jung; Amano, Hiroshi; Schowalter, Leo

    2014-06-01

Deep ultraviolet (DUV) photons interact strongly with a broad range of chemical and biological molecules; compact DUV light sources could enable a wide range of applications in chemi/bio-sensing, sterilization, agriculture, and industrial curing. The much shorter wavelength also results in useful characteristics related to optical diffraction (for lithography) and scattering (non-line-of-sight communication). The family of III-N (AlGaInN) compound semiconductors offers a tunable energy gap from the infrared to the DUV. While InGaN-based blue light emitters have been the primary focus for the obvious application of solid state lighting, there is growing interest in the development of efficient UV and DUV light-emitting devices. In the past few years we have witnessed increasing investment from both government and industry sectors to further the state of DUV light-emitting devices. The contributions in Semiconductor Science and Technology's special issue on DUV devices provide an up-to-date snapshot covering many relevant topics in this field. Given the expected importance of bulk AlN substrates in DUV technology, we are pleased to include a review article by Hartmann et al on the growth of AlN bulk crystal by physical vapour transport. The issue of the polarization field within deep ultraviolet LEDs is examined in the article by Braut et al. Several commercial companies provide useful updates on their development of DUV emitters, including Nichia (Fujioka et al), Nitride Semiconductors (Muramoto et al) and Sensor Electronic Technology (Shatalov et al). We believe these articles will provide an excellent overview of the state of the technology. The growth of AlGaN heterostructures by molecular beam epitaxy, in contrast to the common organo-metallic vapour phase epitaxy, is discussed by Ivanov et al. Since hexagonal boron nitride (BN) has received much attention as both a UV and a two-dimensional electronic material, we believe it serves readers well to include the

  8. BIBLIOGRAPHY ON ACHIEVEMENT. SUPPLEMENT I.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Graduate School of Education.

This bibliography supplement lists materials on various aspects of achievement. Approximately 60 references are provided to documents dating from 1961 to 1966. Journals, books, and report materials are listed. Subject areas included are achievement level, academic achievement, achievement motivation, underachievers, probability estimates, and…

  9. DEEP INFILTRATING ENDOMETRIOSIS

    Directory of Open Access Journals (Sweden)

    Martina Ribič-Pucelj

    2018-02-01

Full Text Available Background: Endometriosis is not considered a unified disease, but a disease encompassing three different forms differentiated by aetiology and pathogenesis: peritoneal endometriosis, ovarian endometriosis and deep infiltrating endometriosis (DIE). The disease is classified as DIE when the lesions penetrate 5 mm or more into the retroperitoneal space. The estimated incidence of endometriosis in women of reproductive age ranges from 10–15 % and that of DIE from 3–10 %, the highest being in infertile women and in those with chronic pelvic pain. The leading symptoms of DIE are chronic pelvic pain, which increases with age and correlates with the depth of infiltration, and infertility. The most important diagnostic procedures are the patient’s history and a proper gynaecological examination. The diagnosis is confirmed with laparoscopy. Besides the reproductive organs, DIE can also affect the bowel, bladder and ureters; additional diagnostic procedures must therefore be performed preoperatively to confirm or exclude the involvement of these organs. Endometriosis is a hormone-dependent disease; several hormonal treatment regimens are used to suppress estrogen production, but the symptoms recur soon after cessation of the treatment. At the moment, surgical treatment with excision of all lesions, including those of the bowel, bladder and ureters, is the method of choice, but it frequently requires an interdisciplinary approach. Surgical treatment significantly reduces pain and improves fertility in infertile patients. Conclusions: DIE is not a rare form of endometriosis and is characterized by chronic pelvic pain and infertility. Medical treatment is not efficient. The method of choice is surgical treatment with excision of all lesions, which significantly reduces pelvic pain and enables high spontaneous and IVF pregnancy rates. Such patients should therefore be treated at centres with experience in the treatment of DIE and with the possibility of an interdisciplinary approach.

  10. HEPEX - achievements and challenges!

    Science.gov (United States)

    Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan

    2014-05-01

    HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.

  11. What factors determine academic achievement in high achieving undergraduate medical students? A qualitative study.

    Science.gov (United States)

    Abdulghani, Hamza M; Al-Drees, Abdulmajeed A; Khalil, Mahmood S; Ahmad, Farah; Ponnamperuma, Gominda G; Amin, Zubair

    2014-04-01

    Medical students' academic achievement is affected by many factors, such as motivational beliefs and emotions. Although students with high intellectual capacity are selected to study medicine, their academic performance varies widely. The aim of this study is to explore high achieving students' perceptions of the factors contributing to academic achievement. Focus group discussions (FGD) were carried out with 10 male and 9 female high achieving students (scores of more than 85% in all tests) from the second, third, fourth and fifth academic years. During the FGDs, the students were encouraged to reflect on their learning strategies and activities. The discussions were audio-recorded, transcribed and analysed qualitatively. Factors influencing high academic achievement include: attendance at lectures, early revision, prioritization of learning needs, deep learning, learning in small groups, mind mapping, learning in the skills lab, learning with patients, learning from mistakes, time management, and family support. Internal motivation and expected examination results are important drivers of high academic performance. Management of non-academic issues like sleep deprivation, homesickness, language barriers, and stress is also important for academic success. Addressing these factors, which might be unique for a given student community, in a systematic manner would be helpful in improving students' performance.

  12. Telepresence for Deep Space Missions

    Data.gov (United States)

    National Aeronautics and Space Administration — Incorporating telepresence technologies into deep space mission operations can give the crew and ground personnel the impression that they are in a location at time...

  13. Hybrid mask for deep etching

    KAUST Repository

    Ghoneim, Mohamed T.

    2017-01-01

    Deep reactive ion etching is essential for creating high aspect ratio micro-structures for microelectromechanical systems, sensors and actuators, and emerging flexible electronics. A novel hybrid dual soft/hard mask bilayer may be deposited during

  14. Density functionals from deep learning

    OpenAIRE

    McMahon, Jeffrey M.

    2016-01-01

    Density-functional theory is a formally exact description of a many-body quantum system in terms of its density; in practice, however, approximations to the universal density functional are required. In this work, a model based on deep learning is developed to approximate this functional. Deep learning allows computational models that are capable of naturally discovering intricate structure in large and/or high-dimensional data sets, with multiple levels of abstraction. As no assumptions are ...

  15. Reduction in language testing

    DEFF Research Database (Denmark)

    Dimova, Slobodanka; Jensen, Christian

    2013-01-01

    This study represents an initial exploration of raters' comments and actual realisations of form reductions in L2 test speech performances. Performances of three L2 speakers were selected as case studies and illustrations of how reductions are evaluated by the raters. The analysis is based on audio/video recorded speech samples and written reports produced by two experienced raters after testing. Our findings suggest that reduction or reduction-like pronunciation features are found in tested L2 speech, but whenever raters identify and comment on such reductions, they tend to assess reductions negatively.

  16. A trial of scheduled deep brain stimulation for Tourette syndrome: moving away from continuous deep brain stimulation paradigms.

    Science.gov (United States)

    Okun, Michael S; Foote, Kelly D; Wu, Samuel S; Ward, Herbert E; Bowers, Dawn; Rodriguez, Ramon L; Malaty, Irene A; Goodman, Wayne K; Gilbert, Donald M; Walker, Harrison C; Mink, Jonathan W; Merritt, Stacy; Morishita, Takashi; Sanchez, Justin C

    2013-01-01

    To collect the information necessary to design the methods and outcome variables for a larger trial of scheduled deep brain stimulation (DBS) for Tourette syndrome, we performed a small National Institutes of Health-sponsored clinical trials planning study of the safety and preliminary efficacy of DBS implanted in the bilateral centromedian thalamic region. The study used a cranially contained constant-current device and a scheduled, rather than the classic continuous, DBS paradigm. Baseline vs 6-month outcomes were collected and analyzed. In addition, we compared acute scheduled vs acute continuous vs off DBS. Setting: a university movement disorders center. Patients: five patients with implanted DBS. Main outcome measure: a 50% improvement in the Yale Global Tic Severity Scale (YGTSS) total score. Participating subjects had a mean age of 34.4 (range, 28-39) years and a mean disease duration of 28.8 years. No significant adverse events or hardware-related issues occurred. Baseline vs 6-month data revealed that reductions in the YGTSS total score did not achieve the prestudy criterion of a 50% improvement on scheduled stimulation settings. However, statistically significant improvements were observed in the YGTSS total score (mean [SD] change, -17.8 [9.4]; P=.01), impairment score (-11.3 [5.0]; P=.007), and motor score (-2.8 [2.2]; P=.045); the Modified Rush Tic Rating Scale total score (-5.8 [2.9]; P=.01); and the phonic tic severity score (-2.2 [2.6]; P=.04). Continuous, off, and scheduled stimulation conditions were assessed blindly in an acute experiment 6 months after implantation. The scores in all 3 conditions showed a trend for improvement, with the continuous and scheduled conditions performing better than the off condition. Tic suppression was commonly seen at ventral (deep) contacts, and programming settings resulting in tic suppression were commonly associated with a subjective feeling of calmness. 
This study provides

  17. Deep Unfolding for Topic Models.

    Science.gov (United States)

    Chien, Jen-Tzung; Lee, Chao-Hsi

    2018-02-01

    Deep unfolding provides an approach to integrating probabilistic generative models with deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops the unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, unsupervised and supervised topic models are inferred via the variational inference algorithm, where the model parameters are estimated by maximizing the lower bound of the logarithm of the marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and the model parameters tied across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to the exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with an increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.
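
As a rough illustration of the unfolding idea only (not the authors' DUI algorithm), fixed-point inference updates for document-topic proportions can be unrolled for a fixed number of "layers". The sketch below uses a hypothetical topic-word matrix and runs multiplicative (EM-style) updates; a deep-unfolded model would untie and train a separate parameterization per layer:

```python
import numpy as np

def unfolded_topic_inference(x, beta, n_layers=5):
    """Infer document-topic proportions theta by unrolling n_layers
    fixed-point (multiplicative EM) updates. Deep unfolding would untie
    these layers and learn per-layer parameters by back-propagation."""
    k = beta.shape[0]
    theta = np.full(k, 1.0 / k)          # uniform initialization on the simplex
    for _ in range(n_layers):            # one unrolled "layer" per update
        recon = theta @ beta             # predicted word distribution (V,)
        theta = theta * (beta @ (x / (recon + 1e-12)))
        theta = theta / theta.sum()      # renormalize to the simplex
    return theta
```

With two well-separated topics, a document drawn mostly from topic 0 concentrates theta on that topic after only a few layers.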

  18. Hot, deep origin of petroleum: deep basin evidence and application

    Science.gov (United States)

    Price, Leigh C.

    1978-01-01

    Use of the model of a hot, deep origin of oil places rigid constraints on the migration and entrapment of crude oil. Specifically, oil originating from depth migrates vertically up faults and is emplaced in traps at shallower depths. Review of petroleum-producing basins worldwide shows that oil occurrence in these basins conforms to these constraints and therefore supports the hypothesis. Most of the world's oil is found in the very deepest sedimentary basins, and production over or adjacent to the deep basin is cut by or directly updip from faults dipping into the basin deep. Generally, the greater the fault throw, the greater the reserves. Fault-block highs next to deep sedimentary troughs are the best target areas by the present concept. Traps along major basin-forming faults are quite prospective. The structural style of a basin governs the distribution, types, and amounts of hydrocarbons expected and hence the exploration strategy. Production in delta depocenters (Niger) is in structures cut by or updip from major growth faults, and structures not associated with such faults are barren. Production in block fault basins is on horsts next to deep sedimentary troughs (Sirte, North Sea). In basins whose sediment thickness, structure and geologic history are known to a moderate degree, the main oil occurrences can be specifically predicted by analysis of fault systems and possible hydrocarbon migration routes. Use of the concept permits the identification of significant targets which have either been downgraded or ignored in the past, such as production in or just updip from thrust belts, stratigraphic traps over the deep basin associated with major faulting, production over the basin deep, and regional stratigraphic trapping updip from established production along major fault zones.

  19. Technetium behaviour under deep geological conditions

    International Nuclear Information System (INIS)

    Kumata, M.; Vandergraaf, T.T.

    1993-01-01

    The migration behaviour of technetium under deep geological conditions was investigated by performing column tests using groundwater and altered granitic rock sampled from a fracture zone in a granitic pluton at a depth of about 250 m. The experiment was performed under a pressure of about 0.7 MPa in a controlled atmosphere glove box at the 240 m level of the Underground Research Laboratory (URL) near Pinawa, Manitoba, Canada. The technetium was strongly sorbed on the dark mafic minerals in the column. With the exception of a very small unretarded fraction that was eluted with the tritiated water, no further breakthrough of technetium was observed. This strong sorption of technetium on the mineral surface was caused by reduction of Tc(VII), probably to Tc(IV) even though the groundwater was only mildly reducing. (author) 5 figs., 4 tabs., 15 refs

  20. 40Ar/39Ar studies of deep sea igneous rocks

    International Nuclear Information System (INIS)

    Seidemann, D.

    1978-01-01

    An attempt to date deep-sea igneous rocks reliably was made using the 40Ar/39Ar dating technique. It was determined that the 40Ar/39Ar incremental release technique could not be used to eliminate the effects of excess radiogenic 40Ar in deep-sea basalts. Excess 40Ar is released throughout the extraction temperature range and cannot be distinguished from 40Ar generated by in situ 40K decay. The problem of the reduction of K-Ar dates associated with seawater alteration of deep-sea igneous rocks could not be resolved using the 40Ar/39Ar technique. Irradiation-induced 39Ar loss and/or redistribution in fine-grained and altered igneous rocks results in age spectra that are artifacts of the experimental procedure and only partly reflect the geologic history of the sample. Therefore, caution must be used in attributing significance to the age spectra of fine-grained and altered deep-sea igneous rocks. Effects of 39Ar recoil are not important for either medium-grained (or coarser) deep-sea rocks or glasses, because only a small fraction of the 39Ar recoils to channels of easy diffusion, such as intergranular boundaries or cracks, during the irradiation. (author)
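
For context, the standard age equation behind the 40Ar/39Ar technique (textbook form, not taken from this record) is:

```latex
t = \frac{1}{\lambda}\,\ln\!\left(1 + J\,\frac{{}^{40}\mathrm{Ar}^{*}}{{}^{39}\mathrm{Ar}_{\mathrm{K}}}\right)
```

where lambda is the total decay constant of 40K, J is the irradiation parameter determined from a co-irradiated flux monitor, 40Ar* is radiogenic argon, and 39Ar_K is argon produced from 39K during irradiation. Excess (non-in-situ) 40Ar inflates the measured ratio and hence the apparent age, which is precisely the problem the abstract describes.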

  1. An Assessment of Envelope Measures in Mild Climate Deep Energy Retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-06-01

    Energy end-uses and interior comfort conditions have been monitored in 11 Deep Energy Retrofits (DERs) in a mild marine climate. Two broad categories of DER envelope were identified: first, bringing homes up to current code levels of insulation and airtightness, and second, enhanced retrofits that go beyond these code requirements. The efficacy of envelope measures in DERs was difficult to determine, due to the intermingled effects of enclosure improvements, HVAC system upgrades and changes in interior comfort conditions. While energy reductions in these project homes could not be assigned to specific improvements, the combined effects of changes in enclosure, HVAC system and comfort led to average heating energy reductions of 76% (12,937 kWh) in the five DERs with pre-retrofit data, or 80% (5.9 kWh/ft2) when normalized by floor area. Overall, net-site energy reductions averaged 58% (15,966 kWh; n=5), and DERs with code-style envelopes achieved average net-site energy reductions of 65% (18,923 kWh; n=4). In some homes, the heating energy reductions were actually larger than the whole-house reductions that were achieved, which suggests that substantial additional energy uses were added to the home during the retrofit that offset some heating savings. Heating system operation and energy use were shown to vary inconsistently with outdoor conditions, suggesting that most DERs were not thermostatically controlled and that occupants were engaged in managing the indoor environmental conditions. Indoor temperatures maintained in these DERs were highly variable, and no project home consistently provided conditions within the ASHRAE Standard 55-2010 heating season comfort zone. Thermal comfort and heating system operation had a large impact on performance and were found to depend upon occupant activities, so DERs should be designed with the occupants' needs and patterns of consumption in mind. Beyond-code building envelopes were not found to be
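
The normalizations quoted above are simple to reproduce. The sketch below (illustrative Python with hypothetical pre/post readings, not the project's data) shows how percent reductions and floor-area-normalized savings of the kind reported are computed:

```python
def percent_reduction(pre_kwh, post_kwh):
    """Percent energy reduction relative to the pre-retrofit baseline."""
    return 100.0 * (pre_kwh - post_kwh) / pre_kwh

def per_area(delta_kwh, floor_area_ft2):
    """Savings normalized by conditioned floor area (kWh/ft2)."""
    return delta_kwh / floor_area_ft2

# Hypothetical example: 17,000 kWh before and 4,080 kWh after a retrofit
# of a 2,200 ft2 home.
saving = 17000 - 4080                    # 12,920 kWh saved
pct = percent_reduction(17000, 4080)     # 76% reduction
intensity = per_area(saving, 2200)       # savings per square foot
```

Reporting both forms, as the study does, lets homes of different sizes be compared on a common footing.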

  2. Attitude Towards Physics and Additional Mathematics Achievement Towards Physics Achievement

    Science.gov (United States)

    Veloo, Arsaythamby; Nor, Rahimah; Khalid, Rozalina

    2015-01-01

    The purpose of this research is to identify the difference in students' attitude towards Physics and Additional Mathematics achievement based on gender and relationship between attitudinal variables towards Physics and Additional Mathematics achievement with achievement in Physics. This research focused on six variables, which is attitude towards…

  3. Guidance levels, achievable doses and expectation levels

    International Nuclear Information System (INIS)

    Li, Lianbo; Meng, Bing

    2002-01-01

    The National Radiological Protection Board (NRPB), the International Atomic Energy Agency (IAEA) and the Commission of the European Communities (CEC) published their guidance levels and reference doses for typical X-ray examinations and nuclear medicine in documents issued in 1993, 1994 and 1996, respectively. Since then, the concept of guidance levels or reference doses has been applied to different examinations in the field of radiology and has proved effective for the reduction of patient doses. But guidance levels or reference doses have some shortcomings and can do little to drive further dose reduction in radiology departments where patient doses are already below them. For this reason, the NRPB proposed the concept of achievable doses, which are based on the mean dose observed for a selected sample of radiology departments. This paper reviews and discusses the concepts of guidance levels and achievable doses, and proposes a new concept, referred to as Expectation Levels, that would encourage radiology departments where patient doses are already below the guidance levels to keep patient doses as low as reasonably achievable. Some examples of expectation levels based on data published by a few countries are also illustrated in this paper.

  4. Climate Leadership Award for Excellence in GHG Management (Goal Achievement Award)

    Science.gov (United States)

    Apply to the Climate Leadership Award for Excellence in GHG Management (Goal Achievement Award), which publicly recognizes organizations that achieve publicly-set aggressive greenhouse gas emissions reduction goals.

  5. Towards testing quantum physics in deep space

    Science.gov (United States)

    Kaltenbaek, Rainer

    2016-07-01

    MAQRO is a proposal for a medium-sized space mission that uses the unique environment of deep space, in combination with novel developments in space technology and quantum technology, to test the foundations of physics. The goal is to perform matter-wave interferometry with dielectric particles of up to 10^{11} atomic mass units and to test for deviations from the predictions of quantum theory. Novel techniques from quantum optomechanics with optically trapped particles are to be used to prepare the test particles for these experiments. The core elements of the instrument are placed outside the spacecraft and insulated from the hot spacecraft by multiple thermal shields, allowing cryogenic temperatures to be achieved via passive cooling and ultra-high vacuum levels by venting to deep space. In combination with low-force-noise microthrusters and inertial sensors, this allows realizing an environment well suited for long coherence times of macroscopic quantum superpositions and long integration times. Since the original proposal in 2010, significant progress has been made in terms of technology development and in refining the instrument design. Based on these new developments, we submitted/will submit updated versions of the MAQRO proposal in 2015 and 2016 in response to Cosmic Vision calls of ESA for a medium-sized mission. A central goal has been to address and overcome potentially critical issues regarding the readiness of core technologies and to provide realistic concepts for further technology development. We present the progress on the road towards realizing this ground-breaking mission, harnessing deep space in novel ways to test the foundations of physics: a technology pathfinder for macroscopic quantum technology and quantum optomechanics in space.

  6. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code
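
The abstract does not detail the individual methods, but the core idea behind Monte Carlo variance reduction can be shown with a minimal importance-sampling sketch (illustrative Python, not MCNP code; the rare-event probability is hypothetical). Both estimators are unbiased for the same quantity, but the biased one reaches a given precision with far fewer histories:

```python
import random

P_RARE = 1e-3  # hypothetical probability of the rare event being tallied

def analog_estimate(n, seed=0):
    """Analog Monte Carlo: each history scores 1 only if the rare event occurs."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.random() < P_RARE)
    return hits / n

def importance_sampled_estimate(n, p_biased=0.5, seed=0):
    """Biased sampling: make the rare event common, then multiply each
    score by the weight P_RARE / p_biased so the estimator stays unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if rng.random() < p_biased:
            total += P_RARE / p_biased  # statistical weight preserves the mean
    return total / n
```

Here the analog estimator's per-history variance is about P_RARE, while the weighted estimator's is about P_RARE**2, which is the kind of gain that makes deep-penetration tallies tractable.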

  7. Modern Reduction Methods

    CERN Document Server

    Andersson, Pher G

    2008-01-01

    With its comprehensive overview of modern reduction methods, this book features high-quality contributions allowing readers to find reliable solutions quickly and easily. The monograph treats the reduction of carbonyls, alkenes, imines and alkynes, as well as reductive aminations and cross- and Heck couplings, before finishing with sections on kinetic resolutions and hydrogenolysis. An indispensable lab companion for every chemist.

  8. To Master or Perform? Exploring Relations between Achievement Goals and Conceptual Change Learning

    Science.gov (United States)

    Ranellucci, John; Muis, Krista R.; Duffy, Melissa; Wang, Xihui; Sampasivam, Lavanya; Franco, Gina M.

    2013-01-01

    Background: Research is needed to explore conceptual change in relation to achievement goal orientations and depth of processing. Aims: To address this need, we examined relations between achievement goals, use of deep versus shallow processing strategies, and conceptual change learning using a think-aloud protocol. Sample and Method:…

  9. Sex-work harm reduction.

    Science.gov (United States)

    Rekart, Michael L

    2005-12-17

    Sex work is an extremely dangerous profession. The use of harm-reduction principles can help to safeguard sex workers' lives in the same way that drug users have benefited from drug-use harm reduction. Sex workers are exposed to serious harms: drug use, disease, violence, discrimination, debt, criminalisation, and exploitation (child prostitution, trafficking for sex work, and exploitation of migrants). Successful and promising harm-reduction strategies are available: education, empowerment, prevention, care, occupational health and safety, decriminalisation of sex workers, and human-rights-based approaches. Successful interventions include peer education, training in condom-negotiating skills, safety tips for street-based sex workers, male and female condoms, the prevention-care synergy, occupational health and safety guidelines for brothels, self-help organisations, and community-based child protection networks. Straightforward and achievable steps are available to improve the day-to-day lives of sex workers while they continue to work. Conceptualising and debating sex-work harm reduction as a new paradigm can hasten this process.

  10. How Stressful Is "Deep Bubbling"?

    Science.gov (United States)

    Tyrmi, Jaana; Laukkanen, Anne-Maria

    2017-03-01

    Water resistance therapy by phonating through a tube into the water is used to treat dysphonia. Deep submersion (≥10 cm in water, "deep bubbling") is used for hypofunctional voice disorders. Using it with caution is recommended to avoid vocal overloading. This experimental study aimed to investigate how strenuous "deep bubbling" is. Fourteen subjects, half of them with voice training, repeated the syllable [pa:] in comfortable speaking pitch and loudness, loudly, and in strained voice. Thereafter, they phonated a vowel-like sound both in comfortable loudness and loudly into a glass resonance tube immersed 10 cm into the water. Oral pressure, contact quotient (CQ, calculated from electroglottographic signal), and sound pressure level were studied. The peak oral pressure P(oral) during [p] and shuttering of the outer end of the tube was measured to estimate the subglottic pressure P(sub) and the mean P(oral) during vowel portions to enable calculation of transglottic pressure P(trans). Sensations during phonation were reported with an open-ended interview. P(sub) and P(oral) were higher in "deep bubbling" and P(trans) lower than in loud syllable phonation, but the CQ did not differ significantly. Similar results were obtained for the comparison between loud "deep bubbling" and strained phonation, although P(sub) did not differ significantly. Most of the subjects reported "deep bubbling" to be stressful only for respiratory and lip muscles. No big differences were found between trained and untrained subjects. The CQ values suggest that "deep bubbling" may increase vocal fold loading. Further studies should address impact stress during water resistance exercises. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  11. Reduction of LNG FOB cost

    International Nuclear Information System (INIS)

    Aoki, Ichizo; Kikkawa, Yoshitsugi

    1997-01-01

    To achieve a competitive LNG price for consumers against other energy sources, reduction of the LNG FOB (Free on Board) cost, i.e. the LNG cost at the LNG ship flange, is the key item. It is necessary to perform many optimization (value engineering) studies at each stage of the LNG project. These stages are: feasibility study; conceptual design - FEED (Front End Engineering and Design); EPC (Engineering, Procurement and Construction); operation and maintenance. Since the LNG plant forms one part of the LNG chain, starting from gas production and ending at LNG receiving, and requires several billion US dollars of investment, the consequences of a plant shutdown on the LNG chain are clear; it is therefore important to achieve high availability, which will also contribute to the reduction of the LNG FOB cost. (au) 25 refs

  12. Physical Activity and Academic Achievement

    Centers for Disease Control (CDC) Podcasts

    This podcast highlights the evidence that supports the link between physical activity and improved academic achievement. It also identifies a few actions to support a comprehensive school physical activity program to improve academic achievement.

  13. Healthy Eating and Academic Achievement

    Centers for Disease Control (CDC) Podcasts

    This podcast highlights the evidence that supports the link between healthy eating and improved academic achievement. It also identifies a few actions to support a healthy school nutrition environment to improve academic achievement.

  14. Welfare Effects of Tariff Reduction Formulas

    DEFF Research Database (Denmark)

    Guldager, Jan G.; Schröder, Philipp J.H.

    WTO negotiations rely on tariff reduction formulas. It has been argued that formula approaches are of increasing importance in trade talks because of the large number of countries involved, the wider dispersion in initial tariffs (e.g. tariff peaks) and the gaps between bound and applied tariff rates. No single formula dominates under all conditions. The ranking of the three tools depends on the degree of product differentiation in the industry and the achieved reduction in the average tariff.
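
For illustration (not from this record), two well-known formula types can be contrasted in a few lines: a linear formula cuts every tariff by the same proportion, while the Swiss formula compresses tariff peaks toward the coefficient a:

```python
def linear_cut(t0, rate=0.36):
    """Proportional cut: every tariff is reduced by the same fraction."""
    return t0 * (1.0 - rate)

def swiss_formula(t0, a=25.0):
    """Swiss formula t1 = a*t0 / (a + t0): the higher the initial tariff,
    the deeper the cut, and no resulting tariff exceeds the coefficient a."""
    return a * t0 / (a + t0)

# A 36% linear cut takes a 100% tariff peak to 64%, while the Swiss
# formula with a = 25 takes it to 20%; a low 10% tariff falls only to ~7.1%.
```

This is why peak-heavy tariff schedules fare very differently under the two tools, which is the kind of condition dependence the abstract refers to.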

  15. The deep space 1 extended mission

    Science.gov (United States)

    Rayman, Marc D.; Varghese, Philip

    2001-03-01

    The primary mission of Deep Space 1 (DS1), the first flight of the New Millennium program, completed successfully in September 1999, having exceeded its objectives of testing new, high-risk technologies important for future space and Earth science missions. DS1 is now in its extended mission, with plans to take advantage of the advanced technologies, including solar electric propulsion, to conduct an encounter with comet 19P/Borrelly in September 2001. During the extended mission, the spacecraft's commercial star tracker failed; this critical loss prevented the spacecraft from achieving three-axis attitude control or knowledge. A two-phase approach to recovering the mission was undertaken. The first involved devising a new method of pointing the high-gain antenna to Earth using the radio signal received at the Deep Space Network as an indicator of spacecraft attitude. The second was the development of new flight software that allowed the spacecraft to return to three-axis operation without substantial ground assistance. The principal new feature of this software is the use of the science camera as an attitude sensor. The differences between the science camera and the star tracker have important implications not only for the design of the new software but also for the methods of operating the spacecraft and conducting the mission. The ambitious rescue was fully successful, and the extended mission is back on track.

  16. Auditory processing during deep propofol sedation and recovery from unconsciousness.

    Science.gov (United States)

    Koelsch, Stefan; Heinke, Wolfgang; Sammler, Daniela; Olthoff, Derk

    2006-08-01

    Using evoked potentials, this study investigated effects of deep propofol sedation, and effects of recovery from unconsciousness, on the processing of auditory information with stimuli suited to elicit a physical MMN, and a (music-syntactic) ERAN. Levels of sedation were assessed using the Bispectral Index (BIS) and the Modified Observer's Assessment of Alertness and Sedation Scale (MOAAS). EEG-measurements were performed during wakefulness, deep propofol sedation (MOAAS 2-3, mean BIS=68), and a recovery period. Between deep sedation and recovery period, the infusion rate of propofol was increased to achieve unconsciousness (MOAAS 0-1, mean BIS=35); EEG measurements of recovery period were performed after subjects regained consciousness. During deep sedation, the physical MMN was markedly reduced, but still significant. No ERAN was observed in this level. A clear P3a was elicited during deep sedation by those deviants, which were task-relevant during the awake state. As soon as subjects regained consciousness during the recovery period, a normal MMN was elicited. By contrast, the P3a was absent in the recovery period, and the P3b was markedly reduced. Results indicate that the auditory sensory memory (as indexed by the physical MMN) is still active, although strongly reduced, during deep sedation (MOAAS 2-3). The presence of the P3a indicates that attention-related processes are still operating during this level. Processes of syntactic analysis appear to be abolished during deep sedation. After propofol-induced anesthesia, the auditory sensory memory appears to operate normal as soon as subjects regain consciousness, whereas the attention-related processes indexed by P3a and P3b are markedly impaired. Results inform about effects of sedative drugs on auditory and attention-related mechanisms. 
The findings are important because these mechanisms are prerequisites for auditory awareness, auditory learning and memory, as well as language perception during anesthesia.

  17. Formability of dual-phase steels in deep drawing of rectangular parts: Influence of blank thickness and die radius

    Science.gov (United States)

    López, Ana María Camacho; Regueras, José María Gutiérrez

    2017-10-01

    The new goals of the automotive industry related to environmental concerns, the reduction of fuel emissions and security requirements have driven new designs whose main objective is reducing weight. This can be achieved through new materials such as nano-structured materials, fibre-reinforced composites or steels with higher strength, among others. Within the last group, Advanced High Strength Steels (AHSS) and particularly dual-phase steels are in a predominant position. However, despite their special characteristics, they present manufacturability issues such as springback, splits and cracks, among others. This work is focused on the deep drawing process of rectangular shapes, a very common forming operation that allows manufacturing several automotive parts such as oil pans, cases, etc. Two of the main parameters in this process that directly affect the characteristics of the final product are blank thickness (t) and die radius (Rd). The influence of t and Rd on the formability of dual-phase steels has been analysed, considering values typically used in industrial manufacturing for a wide range of dual-phase steels, using finite element modelling and simulation; specifically, the influence of these parameters on the percentage of thickness reduction pt(%), an important quantity for parts manufactured by deep drawing operations, which affects their integrity and service behaviour. The Modified Mohr-Coulomb (MMC) criterion has been used to obtain Fracture Forming Limit Diagrams (FFLD) that take into account an important failure mode in dual-phase steels: shear fracture. Finally, a relation between the thickness reduction percentage and the studied parameters has been established for dual-phase steels, yielding a collection of equations based on the Design of Experiments (DOE) technique, which can be useful for predicting approximate results.
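
The response variable studied here, the percentage of thickness reduction pt(%), follows from the initial blank thickness t0 and the final (minimum) wall thickness t of the drawn part. A minimal sketch under that standard definition (the function name is ours):

```python
def thickness_reduction_percent(t0, t):
    """pt(%) = (t0 - t) / t0 * 100, with t0 the blank thickness and
    t the final (minimum) wall thickness of the drawn part."""
    if t0 <= 0:
        raise ValueError("blank thickness must be positive")
    return (t0 - t) / t0 * 100.0

# A 1.2 mm dual-phase blank thinned locally to 0.9 mm:
pt = thickness_reduction_percent(1.2, 0.9)  # 25% thickness reduction
```

The paper's DOE-derived equations predict this quantity as a function of t and Rd; the formula above is only the definition of the quantity being predicted.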

  18. CANDELS: THE COSMIC ASSEMBLY NEAR-INFRARED DEEP EXTRAGALACTIC LEGACY SURVEY—THE HUBBLE SPACE TELESCOPE OBSERVATIONS, IMAGING DATA PRODUCTS, AND MOSAICS

    International Nuclear Information System (INIS)

    Koekemoer, Anton M.; Ferguson, Henry C.; Grogin, Norman A.; Lotz, Jennifer M.; Lucas, Ray A.; Ogaz, Sara; Rajan, Abhijith; Casertano, Stefano; Dahlen, Tomas; Faber, S. M.; Kocevski, Dale D.; Koo, David C.; Lai, Kamson; McGrath, Elizabeth J.; Riess, Adam G.; Rodney, Steve A.; Dolch, Timothy; Strolger, Louis; Castellano, Marco; Dickinson, Mark

    2011-01-01

    This paper describes the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). This survey is designed to document the evolution of galaxies and black holes at z ≈ 1.5-8, and to study Type Ia supernovae at z > 1.5. Five premier multi-wavelength sky regions are selected, each with extensive multi-wavelength observations. The primary CANDELS data consist of imaging obtained in the Wide Field Camera 3 infrared channel (WFC3/IR) and the WFC3 ultraviolet/optical channel, along with the Advanced Camera for Surveys (ACS). The CANDELS/Deep survey covers ∼125 arcmin² within GOODS-N and GOODS-S, while the remainder consists of the CANDELS/Wide survey, achieving a total of ∼800 arcmin² across GOODS and three additional fields (Extended Groth Strip, COSMOS, and Ultra-Deep Survey). We summarize the observational aspects of the survey as motivated by the scientific goals and present a detailed description of the data reduction procedures and products from the survey. Our data reduction methods utilize the most up-to-date calibration files and image combination procedures. We have paid special attention to correcting a range of instrumental effects, including charge transfer efficiency degradation for ACS, removal of electronic bias-striping present in ACS data after Servicing Mission 4, and persistence effects and other artifacts in WFC3/IR. For each field, we release mosaics for individual epochs and eventual mosaics containing data from all epochs combined, to facilitate photometric variability studies and the deepest possible photometry. A more detailed overview of the science goals and observational design of the survey is presented in a companion paper.
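
As a conceptual illustration of one artifact-rejection idea behind multi-epoch image combination, a pixelwise median of registered exposures suppresses transients (cosmic rays, persistence) that a plain mean would smear in. The actual CANDELS pipeline uses far more sophisticated drizzle-based combination, so this is only a toy sketch with made-up pixel values:

```python
def median_combine(exposures):
    """Pixelwise median of registered exposures (equal-shape 2-D lists)."""
    def median(vals):
        s = sorted(vals)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2.0
    return [[median(pix) for pix in zip(*rows)] for rows in zip(*exposures)]

# Three registered 2x2 exposures; one pixel in the second exposure is
# hit by a cosmic ray (999), which the median rejects.
stack = [[[10, 12], [11, 13]],
         [[10, 999], [11, 13]],
         [[11, 12], [12, 14]]]
combined = median_combine(stack)
```

A weighted mean recovers more depth once transients are masked, which is why production pipelines combine rejection and weighting rather than using a bare median.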

  19. Climate Change and Poverty Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Simon

    2011-08-15

    Climate change will make it increasingly difficult to achieve and sustain development goals. This is largely because climate effects on poverty remain poorly understood, and poverty reduction strategies do not adequately support climate resilience. Ensuring effective development in the face of climate change requires action on six fronts: investing in a stronger climate and poverty evidence base; applying the learning about development effectiveness to how we address adaptation needs; supporting nationally derived, integrated policies and programmes; including the climate-vulnerable poor in developing strategies; and identifying how mitigation strategies can also reduce poverty and enable adaptation.

  20. Accelerating Deep Learning with Shrinkage and Recall

    OpenAIRE

    Zheng, Shuai; Vishnu, Abhinav; Ding, Chris

    2016-01-01

    Deep Learning is a very powerful machine learning model. Deep Learning trains a large number of parameters for multiple layers and is very slow when data is in large scale and the architecture size is large. Inspired from the shrinking technique used in accelerating computation of Support Vector Machines (SVM) algorithm and screening technique used in LASSO, we propose a shrinking Deep Learning with recall (sDLr) approach to speed up deep learning computation. We experiment shrinking Deep Lea...
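
The shrinking idea borrowed from SVM and LASSO solvers amounts to temporarily screening out parameters that currently appear inactive while keeping them available for later recall. A minimal sketch of that bookkeeping (our illustration of the general technique, not the paper's sDLr algorithm):

```python
def shrink(params, threshold):
    """Screen parameters by magnitude: keep the active set, shelve the rest.
    The shelved dict makes later 'recall' possible if a parameter matters again."""
    kept, shelved = {}, {}
    for name, value in params.items():
        (kept if abs(value) >= threshold else shelved)[name] = value
    return kept, shelved

kept, shelved = shrink({"w1": 0.9, "w2": 0.01, "w3": -0.5, "w4": 0.002}, 0.1)
```

Training then updates only the kept set, periodically re-checking the shelved parameters; the speedup comes from the smaller active set per iteration.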

  1. Boosting compound-protein interaction prediction by deep learning.

    Science.gov (United States)

    Tian, Kai; Shao, Mingyu; Wang, Yang; Guan, Jihong; Zhou, Shuigeng

    2016-11-01

    The identification of interactions between compounds and proteins plays an important role in network pharmacology and drug discovery. However, experimentally identifying compound-protein interactions (CPIs) is generally expensive and time-consuming, so computational approaches have been introduced. Among these, machine-learning-based methods have achieved considerable success. However, due to the nonlinear and imbalanced nature of biological data, many machine learning approaches have their own limitations. Recently, deep learning techniques have shown advantages over many state-of-the-art machine learning methods in some applications. In this study, we aim at improving the performance of CPI prediction based on deep learning, and propose a method called DL-CPI (the abbreviation of Deep Learning for Compound-Protein Interactions prediction), which employs a deep neural network (DNN) to effectively learn the representations of compound-protein pairs. Extensive experiments show that DL-CPI can learn useful features of compound-protein pairs by layerwise abstraction, and thus achieves better prediction performance than existing methods on both balanced and imbalanced datasets. Copyright © 2016 Elsevier Inc. All rights reserved.
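
The core of a DNN-based CPI predictor is a pair representation, typically the concatenation of compound and protein feature vectors, passed through fully connected layers to an interaction score. A minimal forward-pass sketch with fixed illustrative weights (a trained DL-CPI model would learn these from data; all names and values here are ours):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(vec, weights, biases):
    """Fully connected layer: one weight row (and one bias) per output unit."""
    return [sum(w * v for w, v in zip(row, vec)) + b
            for row, b in zip(weights, biases)]

def predict_interaction(compound_vec, protein_vec, layer1, layer2):
    """Score a compound-protein pair with a tiny two-layer feedforward net."""
    x = compound_vec + protein_vec                     # pair = concatenation
    hidden = [max(0.0, z) for z in dense(x, *layer1)]  # ReLU hidden layer
    (logit,) = dense(hidden, *layer2)
    return sigmoid(logit)                              # interaction probability

# Fixed illustrative weights (weights, biases) per layer.
layer1 = ([[0.5, -0.2, 0.1, 0.3], [0.0, 0.4, -0.1, 0.2]], [0.1, -0.1])
layer2 = ([[1.0, -1.0]], [0.0])
score = predict_interaction([1.0, 0.0], [0.5, 1.0], layer1, layer2)
```

Real CPI networks are much deeper and are trained with class-imbalance-aware losses, which is where the reported gains over shallow baselines come from.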

  2. Deep Neural Network-Based Chinese Semantic Role Labeling

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiaoqing; CHEN Jun; SHANG Guoqiang

    2017-01-01

    A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with its application to message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smart phone applications. Experiment results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance with minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.

  3. Opportunities and Challenges in Deep Mining: A Brief Review

    Directory of Open Access Journals (Sweden)

    Pathegama G. Ranjith

    2017-08-01

    Mineral consumption is increasing rapidly as more consumers enter the market for minerals and as the global standard of living increases. As a result, underground mining continues to progress to deeper levels in order to tackle the mineral supply crisis in the 21st century. However, deep mining occurs in a very technical and challenging environment, in which significant innovative solutions and best practice are required and additional safety standards must be implemented in order to overcome the challenges and reap huge economic gains. These challenges include the catastrophic events that are often met in deep mining engineering: rockbursts, gas outbursts, high in situ and redistributed stresses, large deformation, squeezing and creeping rocks, and high temperature. This review paper presents the current global status of deep mining and highlights some of the newest technological achievements and opportunities associated with rock mechanics and geotechnical engineering in deep mining. Of the various technical achievements, unmanned working-faces and unmanned mines based on fully automated mining and mineral extraction processes have become important fields in the 21st century.

  4. Current fragmentation in deep inelastic scattering

    International Nuclear Information System (INIS)

    Hamer, C.J.

    1975-04-01

    It is argued that the current fragmentation products in deep inelastic electron scattering will not be distributed in a 'one-dimensional' rapidity plateau as in the parton-model picture of Feynman and Bjorken. A reaction mechanism with a multiperipheral topology, by which the above configuration might have been achieved, does not in fact populate the current fragmentation plateau; and unless partons are actually observed in the final state, it cannot lead to Bjorken scaling. The basic reason for this failure is shown to be the fact that when a particle is produced in the current fragmentation plateau, the adjacent momentum transfer in the multiperipheral chain becomes large and negative: such processes are inevitably suppressed. Instead, the current fragmentation products are likely to be generated by a fragmentation, or sequential decay, process. (author)

  5. A Flat World with Deep Fractures

    Directory of Open Access Journals (Sweden)

    Emil Constantinescu

    2016-10-01

    The Internet manages to connect different parts of the world, defies geographical distances and gives the impression that our planet is flat, but the Internet is there only for those who have the means and the ability to use it. Our contemporary flat world has deep transversal fractures which, as in many geological structures, make a direct connection between layers with different characteristics. The elites are moving across information avenues with targets set in the future; at the same time, in many parts of our planet, there are people organizing their lives in pre-modern agrarian cycles. Diversity in ways of living and in social organization is a sign of human freedom, not a sign of error, so having different paths to achieving prosperity and happiness should be good news. Holding a society's lifestyle dear should not drive the destruction of societies with different sets of values.

  6. Accurate identification of RNA editing sites from primitive sequence with deep neural networks.

    Science.gov (United States)

    Ouyang, Zhangyi; Liu, Feng; Zhao, Chenghui; Ren, Chao; An, Gaole; Mei, Chuan; Bo, Xiaochen; Shu, Wenjie

    2018-04-16

    RNA editing is a post-transcriptional RNA sequence alteration. Current methods have identified editing sites and facilitated research but require sufficient genomic annotations and prior-knowledge-based filtering steps, resulting in a cumbersome, time-consuming identification process. Moreover, these methods have limited generalizability and applicability in species with insufficient genomic annotations or in conditions of limited prior knowledge. We developed DeepRed, a deep learning-based method that identifies RNA editing from primitive RNA sequences without prior-knowledge-based filtering steps or genomic annotations. DeepRed achieved 98.1% and 97.9% area under the curve (AUC) in training and test sets, respectively. We further validated DeepRed using experimentally verified U87 cell RNA-seq data, achieving 97.9% positive predictive value (PPV). We demonstrated that DeepRed offers better prediction accuracy and computational efficiency than current methods with large-scale, mass RNA-seq data. We used DeepRed to assess the impact of multiple factors on editing identification with RNA-seq data from the Association of Biomolecular Resource Facilities and Sequencing Quality Control projects. We explored developmental RNA editing pattern changes during human early embryogenesis and evolutionary patterns in Drosophila species and the primate lineage using DeepRed. Our work illustrates DeepRed's state-of-the-art performance; it may decipher the hidden principles behind RNA editing, making editing detection convenient and effective.
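
Methods that work from the primitive sequence must first encode it numerically; one common choice is one-hot encoding of a sequence window centered on the candidate editing site. A minimal sketch (our illustration, not necessarily DeepRed's exact input layer):

```python
def one_hot_rna(seq):
    """Flatten a sequence window into a one-hot vector over the RNA alphabet."""
    alphabet = "ACGU"
    vec = []
    for base in seq.upper():
        row = [0.0] * len(alphabet)
        if base in alphabet:
            row[alphabet.index(base)] = 1.0  # unknown bases (e.g. N) stay all-zero
        vec.extend(row)
    return vec

window = one_hot_rna("acgu")
```

The network then learns directly from these vectors, which is what removes the need for genomic annotations or prior-knowledge filtering steps.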

  7. Students' Achievement Goals, Learning-Related Emotions and Academic Achievement

    Directory of Open Access Journals (Sweden)

    Marko eLüftenegger

    2016-05-01

    In the present research, the recently proposed 3x2 model of achievement goals is tested, and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3x2 Achievement Goal Questionnaire, including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance), and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3x2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach goals predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed.

  8. Evaluating the Factors that Facilitate a Deep Understanding of Data Analysis

    Directory of Open Access Journals (Sweden)

    Oliver Burmeister

    1995-11-01

    Ideally the product of tertiary informatics study is more than a qualification; it is a rewarding experience of learning in a discipline area. It should build a desire for a deeper understanding and lead to fruitful research both personally and for the benefit of the wider community. This paper asks: 'What are the factors that lead to this type of quality (deep) learning in data analysis?' In the study reported in this paper, students whose general approach to learning was achieving or surface oriented adopted a deep approach when the context encouraged it. An overseas study found a decline in deep learning at this stage of a tertiary program; the contention of this paper is that the opposite of this expected outcome was achieved due to the enhanced learning environment. Though only 15.1% of students involved in this study were deep learners, the data analysis instructional context resulted in 38.8% of students achieving deep learning outcomes. Other factors discovered that contributed to deep learning outcomes were an increase in the intrinsic motivation of students to study the domain area; their prior knowledge of informatics; assessment that sought an integrated, developed yet comprehensive understanding of analytical concepts and processes; and their learning preferences. The preferences of deep learning students are analyzed in comparison to another such study of professionals in informatics, examining commonalities and differences between this and the wider professional study.

  9. LMFBR technical development: achievements and prospects

    International Nuclear Information System (INIS)

    Hennies, H.H.; Nicholson, R.L.R.; Rapin, M.

    1986-10-01

    The recent commissioning of the SUPERPHENIX prototype (1200 MWe), which is the outcome of tight cooperation between several European partners, demonstrates the technical feasibility of industrial-size Fast Breeder Reactors (FBR) and gives Europe the leading role in FBR development. This achievement relies on studies which started more than 30 years ago and which have been marked by various realizations in European countries. Taking into account the slowing down of major nuclear programmes throughout the world and the resulting reduction of natural uranium needs, the commercial deployment of LMFBRs does not presently appear necessary before the beginning of the next century: this delay has to be used to work out a reactor model which will be economically attractive. The importance of the efforts which remain to be carried out to achieve this goal, notably as concerns R and D, justifies the strengthening of European cooperation and the extension of its scope to FBR fuel cycle activities. (author)

  10. Realization of Chinese word segmentation based on deep learning method

    Science.gov (United States)

    Wang, Xuefei; Wang, Mingjiang; Zhang, Qiquan

    2017-08-01

    In recent years, with the rapid development of deep learning, it has been widely used in the field of natural language processing. In this paper, I use the method of deep learning to achieve Chinese word segmentation with a large-scale corpus, eliminating the need to construct additional manual features. In the process of Chinese word segmentation, the first step is to preprocess the corpus and use word2vec to obtain an embedding for each character, with a dimension of 50. The character embeddings are then fed to a bidirectional LSTM, a linear layer is added on the hidden-layer output, and a CRF layer on top yields the model implemented in this paper. Experimental results show that the method achieves satisfactory accuracy on the 2014 People's Daily corpus.
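
Neural segmenters of this kind are usually trained as per-character sequence labelers, with each character tagged B/M/E/S (begin/middle/end of a multi-character word, or single-character word) before the BiLSTM-CRF learns to predict those tags. A minimal sketch of the label conversion (the tagging scheme is standard; the code is ours):

```python
def words_to_tags(words):
    """Per-character B/M/E/S labels for a segmented sentence."""
    tags = []
    for word in words:
        if len(word) == 1:
            tags.append("S")  # single-character word
        else:
            tags.extend(["B"] + ["M"] * (len(word) - 2) + ["E"])
    return tags

# Gold segmentation of a toy sentence: 深度 / 学习 / 很 / 有趣
tags = words_to_tags(["深度", "学习", "很", "有趣"])
```

At inference time the inverse mapping runs: the CRF's predicted tag sequence is decoded back into word boundaries.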

  11. Deep Learning in Drug Discovery.

    Science.gov (United States)

    Gawehn, Erik; Hiss, Jan A; Schneider, Gisbert

    2016-01-01

    Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Eric Davidson and deep time.

    Science.gov (United States)

    Erwin, Douglas H

    2017-10-13

    Eric Davidson had a deep and abiding interest in the role developmental mechanisms played in generating evolutionary patterns documented in deep time, from the origin of the euechinoids to the processes responsible for the morphological architectures of major animal clades. Although not an evolutionary biologist, Davidson's interests long preceded the current excitement over comparative evolutionary developmental biology. Here I discuss three aspects at the intersection between his research and evolutionary patterns in deep time: First, understanding the mechanisms of body plan formation, particularly those associated with the early diversification of major metazoan clades. Second, a critique of early claims about ancestral metazoans based on the discoveries of highly conserved genes across bilaterian animals. Third, Davidson's own involvement in paleontology through a collaborative study of the fossil embryos from the Ediacaran Doushantuo Formation in south China.

  13. Aligning Seminars with Bologna Requirements: Reciprocal Peer Tutoring, the Solo Taxonomy and Deep Learning

    Science.gov (United States)

    Lueg, Rainer; Lueg, Klarissa; Lauridsen, Ole

    2016-01-01

    Changes in public policy, such as the Bologna Process, require students to be equipped with multifunctional competencies to master relevant tasks in unfamiliar situations. Achieving this goal might imply a change in many curricula toward deeper learning. As a didactical means to achieve deep learning results, the authors suggest reciprocal peer…

  14. Deep Learning in Gastrointestinal Endoscopy.

    Science.gov (United States)

    Patel, Vivek; Armstrong, David; Ganguli, Malika; Roopra, Sandeep; Kantipudi, Neha; Albashir, Siwar; Kamath, Markad V

    2016-01-01

    Gastrointestinal (GI) endoscopy is used to inspect the lumen or interior of the GI tract for several purposes, including, (1) making a clinical diagnosis, in real time, based on the visual appearances; (2) taking targeted tissue samples for subsequent histopathological examination; and (3) in some cases, performing therapeutic interventions targeted at specific lesions. GI endoscopy is therefore predicated on the assumption that the operator-the endoscopist-is able to identify and characterize abnormalities or lesions accurately and reproducibly. However, as in other areas of clinical medicine, such as histopathology and radiology, many studies have documented marked interobserver and intraobserver variability in lesion recognition. Thus, there is a clear need and opportunity for techniques or methodologies that will enhance the quality of lesion recognition and diagnosis and improve the outcomes of GI endoscopy. Deep learning models provide a basis to make better clinical decisions in medical image analysis. Biomedical image segmentation, classification, and registration can be improved with deep learning. Recent evidence suggests that the application of deep learning methods to medical image analysis can contribute significantly to computer-aided diagnosis. Deep learning models are usually considered to be more flexible and provide reliable solutions for image analysis problems compared to conventional computer vision models. The use of fast computers offers the possibility of real-time support that is important for endoscopic diagnosis, which has to be made in real time. Advanced graphics processing units and cloud computing have also favored the use of machine learning, and more particularly, deep learning for patient care. This paper reviews the rapidly evolving literature on the feasibility of applying deep learning algorithms to endoscopic imaging.

  15. Deep mycoses in Amazon region.

    Science.gov (United States)

    Talhari, S; Cunha, M G; Schettini, A P; Talhari, A C

    1988-09-01

    Patients with deep mycoses diagnosed in dermatologic clinics of Manaus (state of Amazonas, Brazil) were studied from November 1973 to December 1983. They came from the Brazilian states of Amazonas, Pará, Acre, and Rondônia and the Federal Territory of Roraima. All of these regions, with the exception of Pará, are situated in the western part of the Amazon Basin. The climatic conditions in this region are almost the same: tropical forest, high rainfall, and mean annual temperature of 26 °C. The deep mycoses diagnosed, in order of frequency, were Jorge Lobo's disease, paracoccidioidomycosis, chromomycosis, sporotrichosis, mycetoma, cryptococcosis, zygomycosis, and histoplasmosis.

  16. Producing deep-water hydrocarbons

    International Nuclear Information System (INIS)

    Pilenko, Thierry

    2011-01-01

    Several studies relate the history of and progress made in offshore production from oil and gas fields in relation to reserves and the techniques for producing oil offshore. The intention herein is not to review these studies but rather to argue that the activities of prospecting for and producing deep-water oil and gas call for a combination of technology and project management and, above all, of devotion and innovation. Without this sense of commitment motivating the men and women in this industry, the human adventure of deep-water production would never have taken place.

  17. GMSK Modulation for Deep Space Applications

    Science.gov (United States)

    Shambayati, Shervin; Lee, Dennis K.

    2012-01-01

    Due to the scarcity of spectrum in the 8.42 GHz deep space X-band allocation, many deep space missions are now considering the use of higher-order modulation schemes instead of the traditional binary phase shift keying (BPSK). One such scheme is pre-coded Gaussian minimum shift keying (GMSK). GMSK is an excellent candidate for deep space missions: it is a constant-envelope, bandwidth-efficient modulation whose frame error rate (FER) performance with perfect carrier tracking and a proper receiver structure is nearly identical to that of BPSK. There are several issues that need to be addressed with GMSK, however. Specifically, we are interested in the combined effects of spectrum limitations and receiver structure on the coded performance of the X-band link using GMSK. The receivers typically used for GMSK demodulation are variations on offset quadrature phase shift keying (OQPSK) receivers. In this paper we consider three receivers: the standard DSN OQPSK receiver, the DSN OQPSK receiver with filtered input, and an optimum OQPSK receiver with filtered input. For the DSN OQPSK receiver we show experimental results with (8920, 1/2), (8920, 1/3) and (8920, 1/6) turbo codes in terms of their error rate performance. We also consider the tracking performance of this receiver as a function of data rate, channel code and the carrier loop signal-to-noise ratio (SNR). For the other two receivers we derive theoretical results showing that for a given loop bandwidth, receiver structure, and channel code, there is a lower data-rate limit for GMSK below which a higher SNR than that required to achieve the target FER on the link is needed. These limits stem from the minimum loop signal-to-noise ratio requirements for the receivers to achieve lock. As a result, for a given channel code and a given FER, there can be a gap between the maximum data rate that BPSK can support without violating the spectrum limits and the minimum data rate that GMSK can support.
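
The constant-envelope property mentioned above follows from the modulator structure: the Gaussian-filtered bit stream drives only the instantaneous phase, so every output sample lies on the unit circle. A minimal baseband sketch (the BT product, filter span, and samples-per-bit values below are illustrative, not DSN parameters):

```python
import math

def gmsk_baseband(bits, bt=0.3, sps=8, span=3):
    """Complex GMSK baseband samples for a +/-1 bit stream.

    bt: bandwidth-time product of the Gaussian premodulation filter;
    sps: samples per bit; span: filter half-length in bit periods.
    """
    sigma = math.sqrt(math.log(2)) / (2 * math.pi * bt)  # in bit periods
    taps = [math.exp(-((n / sps) ** 2) / (2 * sigma ** 2))
            for n in range(-span * sps, span * sps + 1)]
    total = sum(taps)
    taps = [t / total for t in taps]                 # unit-area pulse
    nrz = [float(b) for b in bits for _ in range(sps)]
    out, phase = [], 0.0
    for i in range(len(nrz)):
        # Gaussian-filter the NRZ stream (zero-padded at the edges)...
        f = sum(tap * nrz[i + k - span * sps]
                for k, tap in enumerate(taps)
                if 0 <= i + k - span * sps < len(nrz))
        # ...then integrate frequency into phase: pi/2 net shift per bit.
        phase += (math.pi / 2) * f / sps
        out.append(complex(math.cos(phase), math.sin(phase)))
    return out

samples = gmsk_baseband([1, -1, 1, 1, -1, -1, 1])
```

Because the information rides entirely in the phase, a nonlinear power amplifier can run at saturation without spectral regrowth, which is the practical appeal for deep space links.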

  18. Speech reconstruction using a deep partially supervised neural network.

    Science.gov (United States)

    McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R

    2017-08-01

    Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.

  19. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu

    2017-12-23

    Motivation: Oxford Nanopore sequencing is a sequencing technology that has developed rapidly in recent years. To keep pace with the explosion of downstream data analysis tools, a versatile Nanopore sequencing simulator is needed to complement the experimental data as well as to benchmark newly developed tools. However, all the currently available simulators are based on simple statistics of the produced reads, which have difficulty capturing the complex nature of the Nanopore sequencing procedure, whose main task is the generation of raw electrical current signals. Results: Here we propose a deep-learning-based simulator, DeepSimulator, to mimic the entire pipeline of Nanopore sequencing. Starting from a given reference genome or assembled contigs, we simulate the electrical current signals with a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. Thorough experiments performed across four species show that the signals generated by our context-dependent model are more similar to the experimentally obtained signals than the ones generated by the official context-independent pore model. In terms of the simulated reads, we provide a parameter interface to users so that they can obtain reads with different accuracies ranging from 83% to 97%. The reads generated with the default parameters have almost the same properties as the real data. Two case studies demonstrate the application of DeepSimulator to benefit the development of tools in de novo assembly and in low-coverage SNP detection. Availability: The software can be accessed freely at: https://github.com/lykaust15/DeepSimulator.
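
For contrast with DeepSimulator's context-dependent deep model, the context-independent pore-model approach it improves on can be sketched as a simple k-mer-to-current lookup with added noise (the 2-mer table and all parameter values below are toy values of ours):

```python
import random

def simulate_signal(seq, pore_model, samples_per_kmer=5, noise_sd=1.0, seed=0):
    """Simulate a raw current trace: slide a k-mer window over the sequence,
    look up its mean current level, and emit noisy samples around that level."""
    k = len(next(iter(pore_model)))
    rng = random.Random(seed)
    trace = []
    for i in range(len(seq) - k + 1):
        level = pore_model[seq[i:i + k]]
        trace.extend(level + rng.gauss(0.0, noise_sd)
                     for _ in range(samples_per_kmer))
    return trace

# Toy 2-mer pore model (mean currents in pA, made up for illustration).
model = {"AC": 80.0, "CG": 95.0, "GT": 70.0}
trace = simulate_signal("ACGT", model)
```

A context-dependent model instead conditions each emitted level and dwell time on the surrounding sequence, which is what makes the simulated signals track real traces more closely.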

  20. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-09-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a study to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. An assessment of historical deep gas well drilling activity and forecast of future trends was completed during the first six months of the project; this segment of the project was covered in Technical Project Report No. 1. The second progress report covers the next six months of the project during which efforts were primarily split between summarizing rock mechanics and fracture growth in deep reservoirs and contacting operators about case studies of deep gas well stimulation.

  1. STIMULATION TECHNOLOGIES FOR DEEP WELL COMPLETIONS

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2003-06-01

    The Department of Energy (DOE) is sponsoring a Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a project to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. Phase 1 was recently completed and consisted of assessing deep gas well drilling activity (1995-2007) and an industry survey on deep gas well stimulation practices by region. Of the 29,000 oil, gas and dry holes drilled in 2002, about 300 were deep wells; 25% were dry, 50% were high temperature/high pressure completions and 25% were simply deep completions. South Texas has about 30% of these wells, Oklahoma 20%, Gulf of Mexico Shelf 15% and the Gulf Coast about 15%. The Rockies represent only 2% of deep drilling. Of the 60 operators who drill deep and HTHP wells, the top 20 drill almost 80% of the wells. Six operators drill half the U.S. deep wells. Deep drilling peaked at 425 wells in 1998 and fell to 250 in 1999. Drilling is expected to rise through 2004, after which drilling should cycle down as overall drilling declines.

  2. Enhanced deep ocean ventilation and oxygenation with global warming

    Science.gov (United States)

    Froelicher, T. L.; Jaccard, S.; Dunne, J. P.; Paynter, D.; Gruber, N.

    2014-12-01

    Twenty-first century coupled climate model simulations, observations from the recent past, and theoretical arguments suggest a consistent trend towards warmer ocean temperatures and fresher polar surface oceans in response to increased radiative forcing, resulting in increased upper ocean stratification and reduced ventilation and oxygenation of the deep ocean. Paleo-proxy records of the warming at the end of the last ice age, however, suggest a different outcome, namely a better ventilated and oxygenated deep ocean with global warming. Here we use a four-thousand-year global warming simulation from a comprehensive Earth System Model (GFDL ESM2M) to show that this conundrum is a consequence of different rates of warming, and that the deep ocean is actually better ventilated and oxygenated in a future warmer equilibrated climate, consistent with paleo-proxy records. The enhanced deep ocean ventilation in the Southern Ocean occurs in spite of increased positive surface buoyancy fluxes and a constancy of the Southern Hemisphere westerly winds - circumstances that would otherwise be expected to lead to a reduction in deep ocean ventilation. This ventilation recovery occurs through a global-scale interaction in which the Atlantic Meridional Overturning Circulation, after an initial century of transient decrease, undergoes a multi-centennial recovery and transports salinity-rich waters from the subtropical surface ocean to the Southern Ocean interior on multi-century timescales. The subsequent upwelling of salinity-rich waters in the Southern Ocean strips away the freshwater cap that maintains vertical stability and increases open ocean convection and the formation of Antarctic Bottom Waters. As a result, the global ocean oxygen content and the nutrient supply from the deep ocean to the surface are higher in a warmer ocean. The implications for past and future changes in ocean heat and carbon storage will be discussed.

  3. Visual Vehicle Tracking Based on Deep Representation and Semisupervised Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2017-01-01

    Discriminative tracking methods use binary classification to discriminate between the foreground and background and have achieved some useful results. However, the labeled training samples available to them are insufficient for accurate tracking. Hence, discriminative classifiers must use their own classification results to update themselves, which may lead to feedback-induced tracking drift. To overcome these problems, we propose a semisupervised tracking algorithm that uses deep representation and transfer learning. Firstly, a 2D multilayer deep belief network is trained with a large amount of unlabeled samples. The nonlinear mapping at the top of this network is extracted as the feature dictionary. Then, this feature dictionary is used to transfer-train and update a deep tracker. The positive samples for training are the tracked vehicles, and the negative samples are the background images. Finally, a particle filter is used to estimate vehicle position. We demonstrate experimentally that our proposed vehicle tracking algorithm can effectively restrain drift while adapting to changes in vehicle appearance. Compared with similar algorithms, our method achieves a higher tracking success rate and a lower average center-pixel error.
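
    The final stage described above, particle-filter position estimation, can be sketched in a few lines. This is a minimal 1-D illustration in NumPy under assumed noise levels and function names, not the authors' implementation:

    ```python
    import numpy as np

    def particle_filter_step(particles, weights, measurement,
                             motion_std=1.0, meas_std=2.0, rng=None):
        """One predict-reweight-resample cycle of a basic particle filter.

        particles: (N,) array of position hypotheses
        weights:   (N,) normalized importance weights
        measurement: scalar observation of the true position
        """
        if rng is None:
            rng = np.random.default_rng(0)
        # Predict: diffuse particles with the motion model (random walk here).
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # Update: reweight by the Gaussian measurement likelihood.
        weights = weights * np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
        weights /= weights.sum()
        # Resample: draw particles in proportion to their weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights

    # Track a target moving from 0 toward 10; the estimate is the weighted mean.
    rng = np.random.default_rng(42)
    particles = rng.uniform(-5, 5, size=500)
    weights = np.full(500, 1 / 500)
    for true_pos in np.linspace(0, 10, 20):
        measurement = true_pos + rng.normal(0, 0.5)
        particles, weights = particle_filter_step(particles, weights, measurement, rng=rng)
    estimate = float(np.sum(particles * weights))
    print(round(estimate, 1))
    ```

    Resampling after every update keeps the particle cloud concentrated where the likelihood is high, which is what lets the tracker follow the moving target.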

  4. Reduction - competitive tomorrow

    International Nuclear Information System (INIS)

    Worley, L.; Bargerstock, S.

    1995-01-01

    Inventory reduction is one of the few initiatives that represent significant cost-reduction potential that does not result in personnel reduction. Centerior Energy's Perry nuclear power plant has embarked on an aggressive program to reduce inventory while maintaining plant material availability. Material availability to the plant was above 98%, but at an unacceptable 1994 inventory book value of $47 million with inventory carrying costs calculated at 30% annually

  5. Achievements

    Digital Repository Service at National Institute of Oceanography (India)

    Banakar, V.K.

    A historic decision was taken by the Preparatory Commission of the International Seabed Authority (PRE-PCOM) on 17th August 1987. It was decided to allocate to India exclusive rights for the exploration of polymetallic nodules in an area of about...

  6. The Predictiveness of Achievement Goals

    Directory of Open Access Journals (Sweden)

    Huy P. Phan

    2013-11-01

    Using the Revised Achievement Goal Questionnaire (AGQ-R; Elliot & Murayama, 2008), we explored first-year university students’ achievement goal orientations on the premise of the 2 × 2 model. Similar to recent studies (Elliot & Murayama, 2008; Elliot & Thrash, 2010), we conceptualized a model that included both an antecedent (i.e., enactive learning experience) and consequences (i.e., intrinsic motivation and academic achievement) of achievement goals. Two hundred seventy-seven university students (151 women, 126 men) participated in the study. Structural equation modeling procedures yielded evidence for the predictive effects of enactive learning experience and mastery goals on intrinsic motivation. Academic achievement was influenced by intrinsic motivation, performance-approach goals, and enactive learning experience. Enactive learning experience also served as an antecedent of the four achievement goal types. On the whole, the evidence obtained supports the AGQ-R and contributes, theoretically, to the 2 × 2 model.

  7. The Mechanics of Human Achievement.

    Science.gov (United States)

    Duckworth, Angela L; Eichstaedt, Johannes C; Ungar, Lyle H

    2015-07-01

    Countless studies have addressed why some individuals achieve more than others. Nevertheless, the psychology of achievement lacks a unifying conceptual framework for synthesizing these empirical insights. We propose organizing achievement-related traits by two possible mechanisms of action: Traits that determine the rate at which an individual learns a skill are talent variables and can be distinguished conceptually from traits that determine the effort an individual puts forth. This approach takes inspiration from Newtonian mechanics: achievement is akin to distance traveled, effort to time, skill to speed, and talent to acceleration. A novel prediction from this model is that individual differences in effort (but not talent) influence achievement (but not skill) more substantially over longer (rather than shorter) time intervals. Conceptualizing skill as the multiplicative product of talent and effort, and achievement as the multiplicative product of skill and effort, advances similar, but less formal, propositions by several important earlier thinkers.
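
    The multiplicative propositions above (skill grows at a rate set by talent and effort; achievement accumulates skill times effort) can be rendered as a toy discrete-time simulation. The function below is my own illustration of the stated model, not the authors' formalism:

    ```python
    def achievement(talent, effort, steps):
        """Toy rendering of the kinematic analogy: talent ~ acceleration,
        skill ~ speed, achievement ~ distance, effort ~ time-on-task per step."""
        skill, total = 0.0, 0.0
        for _ in range(steps):
            skill += talent * effort   # learning: skill grows at rate talent * effort
            total += skill * effort    # production: effort converts skill into output
        return total

    # Effort enters the product twice (once in learning, once in production),
    # so doubling effort yields twice the gain of doubling talent:
    print(achievement(talent=1.0, effort=2.0, steps=10))  # 220.0
    print(achievement(talent=2.0, effort=1.0, steps=10))  # 110.0
    ```

    The asymmetry in the two printed values is the model's central claim in miniature: effort is counted in both the acquisition and the deployment of skill, while talent is counted only once.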

  8. Process energy reduction

    International Nuclear Information System (INIS)

    Lowthian, W.E.

    1993-01-01

    Process Energy Reduction (PER) is a demand-side energy reduction approach which complements and often supplants other traditional energy reduction methods such as conservation and heat recovery. Because the application of PER is less obvious than the traditional methods, it takes some time to learn the steps as well as practice to become proficient in its use. However, the benefit is significant, often far outweighing the traditional energy reduction approaches. Furthermore, the method usually results in a better process having less waste and pollution along with improved yields, increased capacity, and lower operating costs

  9. Finding optimal exact reducts

    KAUST Repository

    AbouEisha, Hassan M.

    2014-01-01

    The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts with minimum cardinality. This algorithm transforms the initial table to a decision table of a special kind, applies a set of simplification steps to this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. I present results of computer experiments for a collection of decision tables from the UCI ML Repository. For many of the tables tested, the simplification steps alone solved the problem.
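
    For orientation, the exact problem the paper addresses can be stated as a brute-force baseline: find a smallest attribute subset under which rows that agree on the subset also agree on the decision. This exhaustive search is exponential, which is precisely why the NP-hardness result motivates the paper's dynamic-programming approach; all names below are illustrative:

    ```python
    from itertools import combinations

    def minimal_reduct(table, decision):
        """Exhaustively find a smallest attribute subset that preserves the
        decision. `table` is a list of attribute tuples, `decision` the
        decision column (illustrative brute force only)."""
        n_attrs = len(table[0])
        for size in range(1, n_attrs + 1):
            for subset in combinations(range(n_attrs), size):
                projected = {}
                consistent = True
                for row, d in zip(table, decision):
                    key = tuple(row[i] for i in subset)
                    if projected.setdefault(key, d) != d:
                        consistent = False
                        break
                if consistent:
                    return subset  # first consistent subset of this size is minimal
        return tuple(range(n_attrs))

    table = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
    decision = [0, 1, 0, 1]
    print(minimal_reduct(table, decision))  # → (1,): attribute 1 alone decides
    ```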

  10. Metallothermic reduction of molybdate

    International Nuclear Information System (INIS)

    Mukherjee, T.K.; Bose, D.K.

    1987-01-01

    This paper gives a brief account of the investigations conducted so far on metallothermic reduction of high grade molybdenite, with particular emphasis on the work carried out in Bhabha Atomic Research Centre. Based on thermochemical considerations, the paper first introduces a number of metallic reductants suitable for use in metallothermic reduction of molybdenite. Aluminium, sodium and tin are found to be suitable reducing agents, and accordingly they have found most application in research and development efforts on metallothermic reduction of molybdenite. The reduction with tin was conducted on a fairly large scale, both in vacuum and in a hydrogen atmosphere. The reaction was reported to be invariant, depending mainly on the reduction temperature, and a temperature of the order of 1250° to 1300°C was required for good metal recovery. In comparison to tin, aluminothermic reduction of molybdenite was studied more extensively; it was conducted in a closed bomb, in vacuum and also in the open atmosphere. In aluminothermic reduction, the influence of the amount of reducing agent, the amount of heat booster, the preheating temperature and the charging procedure on the metal yield was studied in detail. The reduction generally yielded massive molybdenum metal contaminated with aluminium as the major impurity element. Efforts were made to purify the reduced metal by arc melting, electron beam melting and molten salt electrorefining. 9 refs. (author)

  11. The Mechanics of Human Achievement

    OpenAIRE

    Duckworth, Angela L.; Eichstaedt, Johannes C.; Ungar, Lyle H.

    2015-01-01

    Countless studies have addressed why some individuals achieve more than others. Nevertheless, the psychology of achievement lacks a unifying conceptual framework for synthesizing these empirical insights. We propose organizing achievement-related traits by two possible mechanisms of action: Traits that determine the rate at which an individual learns a skill are talent variables and can be distinguished conceptually from traits that determine the effort an individual puts forth. This approach...

  12. The modulatory effect of adaptive deep brain stimulation on beta bursts in Parkinson's disease.

    Science.gov (United States)

    Tinkhauser, Gerd; Pogosyan, Alek; Little, Simon; Beudel, Martijn; Herz, Damian M; Tan, Huiling; Brown, Peter

    2017-04-01

    Adaptive deep brain stimulation uses feedback about the state of neural circuits to control stimulation rather than delivering fixed stimulation all the time, as currently performed. In patients with Parkinson's disease, elevations in beta activity (13-35 Hz) in the subthalamic nucleus have been demonstrated to correlate with clinical impairment and have provided the basis for feedback control in trials of adaptive deep brain stimulation. These pilot studies have suggested that adaptive deep brain stimulation may potentially be more effective, efficient and selective than conventional deep brain stimulation, implying mechanistic differences between the two approaches. Here we test the hypothesis that such differences arise through differential effects on the temporal dynamics of beta activity. The latter is not constantly increased in Parkinson's disease, but comes in bursts of different durations and amplitudes. We demonstrate that the amplitude of beta activity in the subthalamic nucleus increases in proportion to burst duration, consistent with progressively increasing synchronization. Effective adaptive deep brain stimulation truncated long beta bursts shifting the distribution of burst duration away from long duration with large amplitude towards short duration, lower amplitude bursts. Critically, bursts with shorter duration are negatively and bursts with longer duration positively correlated with the motor impairment off stimulation. Conventional deep brain stimulation did not change the distribution of burst durations. Although both adaptive and conventional deep brain stimulation suppressed mean beta activity amplitude compared to the unstimulated state, this was achieved by a selective effect on burst duration during adaptive deep brain stimulation, whereas conventional deep brain stimulation globally suppressed beta activity. 
We posit that the relatively selective effect of adaptive deep brain stimulation provides a rationale for why this approach could
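
    The burst metric at the center of this study can be illustrated with a simple threshold-crossing sketch on a synthetic amplitude envelope; the percentile threshold and the envelope itself are assumptions for illustration, not the study's actual pipeline:

    ```python
    import numpy as np

    def burst_durations(envelope, fs, threshold):
        """Return the duration (in seconds) of each run of supra-threshold
        samples in a beta-band amplitude envelope sampled at fs Hz."""
        durations = []
        run = 0
        for above in envelope > threshold:
            if above:
                run += 1
            elif run:
                durations.append(run / fs)
                run = 0
        if run:
            durations.append(run / fs)
        return durations

    fs = 1000  # Hz
    t = np.arange(0, 2.0, 1 / fs)
    # Synthetic envelope: low baseline plus one long and one short elevation.
    env = 0.2 + 0.05 * np.random.default_rng(1).random(t.size)
    env[200:600] += 1.0    # 400 ms burst
    env[1500:1600] += 1.0  # 100 ms burst
    threshold = np.percentile(env, 75)
    print(burst_durations(env, fs, threshold))  # two bursts: ~0.4 s and ~0.1 s
    ```

    Shifting the distribution of these durations toward shorter values, as adaptive stimulation is reported to do, would show up directly in the list this function returns.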

  13. Deep Space Climate Observatory (DSCOVR)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Deep Space Climate ObserVatoRy (DSCOVR) satellite is a NOAA operated asset at the first Lagrange (L1) point. The primary space weather instrument is the PlasMag...

  14. Ploughing the deep sea floor.

    Science.gov (United States)

    Puig, Pere; Canals, Miquel; Company, Joan B; Martín, Jacobo; Amblas, David; Lastras, Galderic; Palanques, Albert

    2012-09-13

    Bottom trawling is a non-selective commercial fishing technique whereby heavy nets and gear are pulled along the sea floor. The direct impact of this technique on fish populations and benthic communities has received much attention, but trawling can also modify the physical properties of seafloor sediments, water–sediment chemical exchanges and sediment fluxes. Most of the studies addressing the physical disturbances of trawl gear on the seabed have been undertaken in coastal and shelf environments, however, where the capacity of trawling to modify the seafloor morphology coexists with high-energy natural processes driving sediment erosion, transport and deposition. Here we show that on upper continental slopes, the reworking of the deep sea floor by trawling gradually modifies the shape of the submarine landscape over large spatial scales. We found that trawling-induced sediment displacement and removal from fishing grounds causes the morphology of the deep sea floor to become smoother over time, reducing its original complexity as shown by high-resolution seafloor relief maps. Our results suggest that in recent decades, following the industrialization of fishing fleets, bottom trawling has become an important driver of deep seascape evolution. Given the global dimension of this type of fishery, we anticipate that the morphology of the upper continental slope in many parts of the world’s oceans could be altered by intensive bottom trawling, producing comparable effects on the deep sea floor to those generated by agricultural ploughing on land.

  15. Deep Space Gateway "Recycler" Mission

    Science.gov (United States)

    Graham, L.; Fries, M.; Hamilton, J.; Landis, R.; John, K.; O'Hara, W.

    2018-02-01

    Use of the Deep Space Gateway provides a hub for a reusable planetary sample return vehicle for missions to gather star dust as well as samples from various parts of the solar system including main belt asteroids, near-Earth asteroids, and Mars moon.

  16. Deep freezers with heat recovery

    Energy Technology Data Exchange (ETDEWEB)

    Kistler, J.

    1981-09-02

    Together with space and water heating systems, deep freezers are the biggest energy consumers in households. The article investigates the possibility of using the waste heat for water heating. The design principle of such a system is presented in a wiring diagram.

  17. A Deep-Sea Simulation.

    Science.gov (United States)

    Montes, Georgia E.

    1997-01-01

    Describes an activity that simulates exploration techniques used in deep-sea explorations and teaches students how this technology can be used to take a closer look inside volcanoes, inspect hazardous waste sites such as nuclear reactors, and explore other environments dangerous to humans. (DDR)

  18. Barbados Deep-Water Sponges

    NARCIS (Netherlands)

    Soest, van R.W.M.; Stentoft, N.

    1988-01-01

    Deep-water sponges dredged up in two locations off the west coast of Barbados are systematically described. A total of 69 species is recorded, among which 16 are new to science, viz. Pachymatisma geodiformis, Asteropus syringiferus, Cinachyra arenosa, Theonella atlantica. Corallistes paratypus,

  19. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

    With the dramatic growth of the image data on the web, there is an increasing demand of the algorithms capable of understanding the visual information automatically. Deep learning, served as one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,

  20. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  1. DM Considerations for Deep Drilling

    OpenAIRE

    Dubois-Felsmann, Gregory

    2016-01-01

    An outline of the current situation regarding the DM plans for the Deep Drilling surveys and an invitation to the community to provide feedback on what they would like to see included in the data processing and visualization of these surveys.

  2. Lessons from Earth's Deep Time

    Science.gov (United States)

    Soreghan, G. S.

    2005-01-01

    Earth is a repository of data on climatic changes from its deep-time history. Article discusses the collection and study of these data to predict future climatic changes, the need to create national study centers for the purpose, and the necessary cooperation between different branches of science in climatic research.

  3. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  4. Deep Learning and Music Adversaries

    DEFF Research Database (Denmark)

    Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan

    2015-01-01

    the minimal perturbation of the input image such that the system misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of deep learning systems applied to music content analysis. In our case, however, the system inputs are magnitude spectral frames, which require...

  5. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2005-06-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies conducted a study to evaluate the stimulation of deep wells. The objective of the project was to review U.S. deep well drilling and stimulation activity, review rock mechanics and fracture growth in deep, high-pressure/temperature wells and evaluate stimulation technology in several key deep plays. This report documents results from this project.

  6. L-shaped fiber-chip grating couplers with high directionality and low reflectivity fabricated with deep-UV lithography.

    Science.gov (United States)

    Benedikovic, Daniel; Alonso-Ramos, Carlos; Pérez-Galacho, Diego; Guerber, Sylvain; Vakarin, Vladyslav; Marcaud, Guillaume; Le Roux, Xavier; Cassan, Eric; Marris-Morini, Delphine; Cheben, Pavel; Boeuf, Frédéric; Baudot, Charles; Vivien, Laurent

    2017-09-01

    Grating couplers enable position-friendly interfacing of silicon chips by optical fibers. The conventional coupler designs call upon comparatively complex architectures to afford efficient light coupling to sub-micron silicon-on-insulator (SOI) waveguides. Conversely, the blazing effect in double-etched gratings provides high coupling efficiency with reduced fabrication intricacy. In this Letter, we demonstrate for the first time, to the best of our knowledge, the realization of an ultra-directional L-shaped grating coupler, seamlessly fabricated by using 193 nm deep-ultraviolet (deep-UV) lithography. We also include a subwavelength index-engineered waveguide-to-grating transition that provides an eight-fold reduction of the grating reflectivity, down to 1% (-20 dB). A measured coupling efficiency of -2.7 dB (54%) is achieved, with a bandwidth of 62 nm. These results open promising prospects for the implementation of efficient, robust, and cost-effective coupling interfaces for sub-micrometric SOI waveguides, as desired for large-volume applications in silicon photonics.
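
    The decibel figures quoted above follow from the standard power-ratio conversion, 10·log10(ratio), and can be checked directly:

    ```python
    import math

    # 1% reflectivity corresponds to -20 dB, and a -2.7 dB coupling loss
    # corresponds to about 54% efficiency.
    refl_db = 10 * math.log10(0.01)      # reflectivity in dB
    coupling = 10 ** (-2.7 / 10)         # coupling efficiency as a fraction
    print(refl_db, round(coupling, 2))   # → -20.0 0.54
    ```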

  7. Fiscal 1999 achievement report. Development of technology for reducing power consumption during standby (Research and development of technologies for application of standby power reduction to domestic and office-automation appliances); 1999 nendo taikiji shohi denryoku sakugen gijutsu kaihatsu seika hokokusho. Kaden oyobi OA kiki no taiki denryoku sakugen jitsuyoka gijutsu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-05-01

    Efforts are exerted to develop power-efficient modules to be built into electrical products for reduction in power consumption in the standby state for domestic and office-automation appliances. In this study, television sets, audio sets, and air conditioners were selected out of domestic appliances and, out of office-automation appliances, notebook-size and desktop personal computers were selected. The standby power consumption is to be reduced to 3mW for domestic appliances, to 0.2W for notebook-size personal computers, and to 1/10-1/200 of the level currently consumed in the case of desktop personal computers. For domestic appliances, a power-efficient module not insulated from the AC power line was developed, to be built into a CPU-aided appliance to be turned on and off by remote control for the reduction of its standby power to 3mW. For notebook-size personal computers, a power-efficient power source insulated from the AC power line was developed, which consumes only 0.2W of standby power. It was built into a marketed notebook-size personal computer and tested for performance. For desktop personal computers, a 25mW power source insulated from the AC power line was fabricated and tested for performance. (NEDO)

  8. Accounting for Natural Reduction of Nitrogen

    DEFF Research Database (Denmark)

    Højberg, A L; Refsgaard, J. C.; Hansen, A.L.

    the same restriction for all areas independent of drainage schemes, hydrogeochemical conditions in the subsurface and retention in surface waters. Although significant reductions have been achieved this way, general measures are not cost-effective, as nitrogen retention (primarily as denitrification

  9. Reduction operator for wide-SIMDs reconsidered

    NARCIS (Netherlands)

    Waeijen, L.J.W.; She, D.; Corporaal, H.; He, Y.

    2014-01-01

    It has been shown that wide Single Instruction Multiple Data architectures (wide-SIMDs) can achieve high energy efficiency, especially in domains such as image and vision processing. In these and various other application domains, reduction is a frequently encountered operation, where multiple input
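
    Although the abstract is truncated, the reduction operation it refers to (combining many inputs into a single result) is conventionally mapped onto wide-SIMD hardware as a logarithmic pairwise tree. A plain-Python sketch of that access pattern, illustrative only and not the paper's architecture:

    ```python
    def tree_reduce(values, op):
        """Reduce in ceil(log2(n)) combining rounds, the pattern SIMD lanes
        use: each round combines adjacent pairs, halving the element count."""
        data = list(values)
        while len(data) > 1:
            paired = [op(data[i], data[i + 1]) for i in range(0, len(data) - 1, 2)]
            if len(data) % 2:          # odd count: carry the last element forward
                paired.append(data[-1])
            data = paired
        return data[0]

    print(tree_reduce(range(1, 9), lambda a, b: a + b))  # → 36
    ```

    Eight inputs take three rounds instead of seven sequential additions, which is the source of the energy and latency advantage on wide-SIMD machines (assuming `op` is associative).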

  10. Deep shaft high rate aerobic digestion: laboratory and pilot plant performance

    Energy Technology Data Exchange (ETDEWEB)

    Tran, F; Gannon, D

    1981-01-01

    The Deep Shaft is essentially an air-lift reactor, sunk deep in the ground (100-160 m); the resulting high hydrostatic pressure, together with very efficient mixing in the shaft, provides extremely high oxygen transfer efficiencies (O.T.E.) of up to 90%, vs. 4-20% in other aerators. This high O.T.E. suggests real potential for Deep-Shaft technology in the aerobic digestion of sludges and animal wastes: with conventional aerobic digesters, an O.T.E. over 8% is extremely difficult to achieve. Laboratory and pilot plant Deep-Shaft aerobic digester studies carried out at Eco-Research's Pointe Claire, Quebec laboratories, and at the Paris, Ontario pilot Deep-Shaft digester are described.

  11. Bacteriological examination and biological characteristics of deep frozen bone preserved by gamma sterilization

    International Nuclear Information System (INIS)

    Pham Quang Ngoc; Le The Trung; Vo Van Thuan; Ho Minh Duc

    1999-01-01

    To promote surgical success in Vietnam, we should supply bone allografts of different sizes. For this reason we have developed a standard procedure for the procurement, deep freezing, packaging and radiation sterilization of massive bone. The achievements of this effort are briefly reported. A dose of 10-15 kGy is proved to be suitable for radiation sterilization of massive bone allografts treated in clean conditions and preserved deep-frozen. Neither deep freezing nor radiation sterilization causes any significant loss of biochemical stability of massive bone allografts, especially when deep freezing is combined with radiation. Neither cross-infection nor changes of biological characteristics were found after 6 months of storage since radiation treatment. In addition to the results of previous research and development of tissue grafts for medical care, deep-freezing radiation sterilization has been established for the preservation of massive bone, which is in high demand for surgery in Vietnam.

  12. Microbial reductive dehalogenation.

    Science.gov (United States)

    Mohn, W W; Tiedje, J M

    1992-01-01

    A wide variety of compounds can be biodegraded via reductive removal of halogen substituents. This process can degrade toxic pollutants, some of which are not known to be biodegraded by any other means. Reductive dehalogenation of aromatic compounds has been found primarily in undefined, syntrophic anaerobic communities. We discuss ecological and physiological principles which appear to be important in these communities and evaluate how widely applicable these principles are. Anaerobic communities that catalyze reductive dehalogenation appear to differ in many respects. A large number of pure cultures which catalyze reductive dehalogenation of aliphatic compounds are known, in contrast to only a few organisms which catalyze reductive dehalogenation of aromatic compounds. Desulfomonile tiedjei DCB-1 is an anaerobe which dehalogenates aromatic compounds and is physiologically and morphologically unusual in a number of respects, including the ability to exploit reductive dehalogenation for energy metabolism. When possible, we use D. tiedjei as a model to understand dehalogenating organisms in the above-mentioned undefined systems. Aerobes use reductive dehalogenation for substrates which are resistant to known mechanisms of oxidative attack. Reductive dehalogenation, especially of aliphatic compounds, has recently been found in cell-free systems. These systems give us an insight into how and why microorganisms catalyze this activity. In some cases transition metal complexes serve as catalysts, whereas in other cases, particularly with aromatic substrates, the catalysts appear to be enzymes. PMID:1406492

  13. Volume fracturing of deep shale gas horizontal wells

    Directory of Open Access Journals (Sweden)

    Tingxue Jiang

    2017-03-01

    Deep shale gas reservoirs buried at depths of more than 3500 m are characterized by high in-situ stress, large horizontal stress difference, complex distribution of bedding and natural cracks, and strong rock plasticity. Thus, during hydraulic fracturing, these reservoirs often exhibit difficult fracture extension, low fracture complexity, low stimulated reservoir volume (SRV), low conductivity and fast decline, which greatly hinder the economic and effective development of deep shale gas. In this paper, a specific and feasible technique for volume fracturing of deep shale gas horizontal wells is presented. In addition to planar perforation, multi-scale fracturing, full-scale fracture filling, and control over the extension of high-angle natural fractures, some supporting techniques are proposed, including multi-stage alternate injection (of acid fluid, slick water and gel) and mixed- and small-grained proppant injected with variable viscosity and displacement. These techniques help to increase the effective stimulated reservoir volume (ESRV) for deep gas production. Some of the techniques have been successfully used in the fracturing of deep shale gas horizontal wells in the Yongchuan, Weiyuan and southern Jiaoshiba blocks in the Sichuan Basin. As a result, Wells YY1HF and WY1HF initially yielded 14.1 × 10⁴ m³/d and 17.5 × 10⁴ m³/d after fracturing. Volume fracturing of deep shale gas horizontal wells is meaningful for achieving the productivity of 50 × 10⁸ m³ of gas from the interval of 3500–4000 m in Phase II development of Fuling, and also for commercial production of huge shale gas resources at a vertical depth of less than 6000 m.

  14. Experimental demonstration of deep frequency modulation interferometry.

    Science.gov (United States)

    Isleif, Katharina-Sophie; Gerberding, Oliver; Schwarze, Thomas S; Mehmet, Moritz; Heinzel, Gerhard; Cervantes, Felipe Guzmán

    2016-01-25

    Experiments for space and ground-based gravitational wave detectors often require a large-dynamic-range interferometric position readout of test masses with 1 pm/√Hz precision over long time scales. Heterodyne interferometer schemes that achieve such precision are available, but they require complex optical set-ups, limiting their scalability to multiple channels. This article presents the first experimental results on deep frequency modulation interferometry, a new technique that combines sinusoidal laser frequency modulation in unequal-arm-length interferometers with a non-linear fit algorithm. We have tested the technique in a Michelson and a Mach-Zehnder interferometer topology, demonstrated continuous phase tracking of a moving mirror, and achieved a performance equivalent to a displacement sensitivity of 250 pm/√Hz at 1 mHz between the phase measurements of two photodetectors monitoring the same optical signal. By performing time series fitting of the extracted interference signals, we measured that the linearity of the laser frequency modulation is on the order of 2% for the laser source used.
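
    The core idea, recovering the interferometric phase from a deeply frequency-modulated fringe signal with a non-linear fit, can be sketched numerically. The signal model, modulation depth and grid-search fit below are my simplifications of the technique, not the authors' algorithm:

    ```python
    import numpy as np

    # Simplified DFM readout model: photodetector signal
    #   v(t) = cos(m * sin(2*pi*fm*t) + phi)
    # where m is the effective modulation depth and phi the phase to recover.
    fs, fm = 100_000, 1_000            # sample rate and modulation frequency (Hz)
    t = np.arange(0, 0.01, 1 / fs)     # ten modulation cycles
    m_true, phi_true = 6.0, 0.8
    v = np.cos(m_true * np.sin(2 * np.pi * fm * t) + phi_true)

    def residual(m, phi):
        """Sum-of-squares misfit of a trial (m, phi) against the measured signal."""
        return float(np.sum((np.cos(m * np.sin(2 * np.pi * fm * t) + phi) - v) ** 2))

    # Coarse grid search as a stand-in for the paper's non-linear fit algorithm.
    ms = np.linspace(5.0, 7.0, 41)
    phis = np.linspace(0.0, np.pi, 91)
    best = min(((mm, pp) for mm in ms for pp in phis), key=lambda p: residual(*p))
    print(best)  # close to (6.0, 0.8)
    ```

    In the actual technique the fit runs continuously, so phi can be tracked over many fringes while m and the other signal parameters are estimated alongside it.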

  15. Poor Results for High Achievers

    Science.gov (United States)

    Bui, Sa; Imberman, Scott; Craig, Steven

    2012-01-01

    Three million students in the United States are classified as gifted, yet little is known about the effectiveness of traditional gifted and talented (G&T) programs. In theory, G&T programs might help high-achieving students because they group them with other high achievers and typically offer specially trained teachers and a more advanced…

  16. Physical Activity and Academic Achievement

    Centers for Disease Control (CDC) Podcasts

    2014-12-09

    This podcast highlights the evidence that supports the link between physical activity and improved academic achievement. It also identifies a few actions to support a comprehensive school physical activity program to improve academic achievement.  Created: 12/9/2014 by National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP).   Date Released: 12/9/2014.

  17. Healthy Eating and Academic Achievement

    Centers for Disease Control (CDC) Podcasts

    2014-12-09

    This podcast highlights the evidence that supports the link between healthy eating and improved academic achievement. It also identifies a few actions to support a healthy school nutrition environment to improve academic achievement.  Created: 12/9/2014 by National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP).   Date Released: 12/9/2014.

  18. Parental Involvement and Academic Achievement

    Science.gov (United States)

    Goodwin, Sarah Christine

    2015-01-01

    This research study examined the correlation between student achievement and parents' perceptions of their involvement in their child's schooling. Parent participants completed the Parent Involvement Project Parent Questionnaire. Results slightly indicated that parents of students with a higher level of achievement perceived less demand or invitations…

  19. Peer relationships and academic achievement

    Directory of Open Access Journals (Sweden)

    Krnjajić Stevan B.

    2002-01-01

    Full Text Available After their childhood, when children begin to establish more intensive social contacts outside the family, first of all in the school setting, their behavior, i.e. their social, intellectual, moral and emotional development, is more strongly affected by their peers. Consequently, the quality of peer relationships considerably affects the process of adaptation and academic achievement, and their motivational and emotional attitude towards school respectively. Empirical findings showed that there is a bi-directional influence between peer relationships and academic achievement. In other words, the quality of peer relationships affects academic achievement, and conversely, academic achievement affects the quality of peer relationships. For example, socially accepted children exhibiting prosocial, cooperative and responsible forms of behavior in school most frequently have high academic achievement. On the other hand, children rejected by their peers often have lower academic achievement and are a risk group tending to delinquency, absenteeism and dropping out of school. Those behavioral and interpersonal forms of competence are frequently more reliable predictors of academic achievement than intellectual abilities are. Considering the fact that various patterns of peer interaction differently exert influence on students' academic behavior, the paper analyzed the effects of (a) social competence, (b) social acceptance/rejection, (c) a child's friendships and (d) prosocial behavior on academic achievement.

  20. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
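The coarse-to-fine inference down a visual tree can be sketched generically. The snippet below is a caricature, not the authors' code (the group assignment and weights are invented and untrained): a coarse classifier picks a group of visually-similar classes, a group-specific fine classifier resolves the atomic class, and probabilities are combined by the chain rule.

```python
import numpy as np

# Caricature of HD-MTL-style coarse-to-fine inference (illustrative,
# untrained random weights): p(class) = p(group) * p(class | group).
rng = np.random.default_rng(1)
n_feat = 8
groups = {0: [0, 1], 1: [2, 3, 4]}  # 2 groups covering 5 atomic classes

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W_coarse = rng.standard_normal((len(groups), n_feat))
W_fine = {g: rng.standard_normal((len(c), n_feat)) for g, c in groups.items()}

def predict(feature):
    p_group = softmax(W_coarse @ feature)
    scores = {}
    for g, classes in groups.items():
        p_fine = softmax(W_fine[g] @ feature)
        for cls, p in zip(classes, p_fine):
            scores[cls] = p_group[g] * p  # chain rule down the visual tree
    return scores

probs = predict(rng.standard_normal(n_feat))  # dict: atomic class -> probability
```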

  1. Current Status of Deep Geological Repository Development

    International Nuclear Information System (INIS)

    Budnitz, R J

    2005-01-01

    This talk provided an overview of the current status of deep-geological-repository development worldwide. Its principal observation is that a broad consensus exists internationally that deep-geological disposal is the only long-term solution for disposition of highly radioactive nuclear waste. Also, it is now clear that the institutional and political aspects are as important as the technical aspects in achieving overall progress. Different nations have taken different approaches to the overall management of their highly radioactive wastes. Some have begun active programs to develop a deep repository for permanent disposal: the most active such programs are in the United States, Sweden, and Finland. Other countries (including France and Russia) are still deciding whether to proceed quickly to develop such a repository, while still others (including the UK, China, and Japan) have affirmatively decided to delay repository development for a long time, typically for a generation or two. In recent years, a major conclusion has been reached around the world that there is very high confidence that deep repositories can be built, operated, and closed safely and can meet whatever safety requirements are imposed by the regulatory agencies. This confidence, which has emerged in the last few years, is based on extensive work around the world in understanding how repositories behave, including both the engineering aspects and the natural-setting aspects, and how they interact together. The construction of repositories is now understood to be technically feasible, and no major barriers have been identified that would stand in the way of a successful project. Another major conclusion around the world is that the overall cost of a deep repository is not as high as some had predicted or feared. While the actual cost will not be known in detail until the costs are incurred, the general consensus is that the total life-cycle cost will not exceed a few percent of the value of the

  2. Efficacy of two types of palliative sedation therapy defined using intervention protocols: proportional vs. deep sedation.

    Science.gov (United States)

    Imai, Kengo; Morita, Tatsuya; Yokomichi, Naosuke; Mori, Masanori; Naito, Akemi Shirado; Tsukuura, Hiroaki; Yamauchi, Toshihiro; Kawaguchi, Takashi; Fukuta, Kaori; Inoue, Satoshi

    2018-06-01

    This study investigated the effect of two types of palliative sedation defined using intervention protocols: proportional and deep sedation. We retrospectively analyzed prospectively recorded data of consecutive cancer patients who received the continuous infusion of midazolam in a palliative care unit. Attending physicians chose the sedation protocol based on each patient's wish, symptom severity, prognosis, and refractoriness of suffering. The primary endpoint was treatment goal achievement at 4 h: in proportional sedation, the achievement of symptom relief (Support Team Assessment Schedule (STAS) ≤ 1) and absence of agitation (modified Richmond Agitation-Sedation Scale (RASS) ≤ 0), and in deep sedation, the achievement of deep sedation (RASS ≤ -4). Secondary endpoints included mean scores of STAS and RASS, deep sedation as a result, and adverse events. Among 398 patients who died during the period, 32 received proportional and 18 received deep sedation. The treatment goal achievement rate was 68.8% (22/32, 95% confidence interval 52.7-84.9) in the proportional sedation group vs. 83.3% (15/18, 66.1-100) in the deep sedation group. STAS decreased from 3.8 to 0.8 with proportional sedation at 4 h vs. 3.7 to 0.3 with deep sedation; RASS decreased from +1.2 to -1.7 vs. +1.4 to -3.7, respectively. Deep sedation was needed as a result in 31.3% (10/32) of the proportional sedation group. No fatal events that were considered as probably or definitely related to the intervention occurred. The two types of intervention protocol well reflected the treatment intention and expected outcomes. Further large-scale cohort studies are promising.

  3. Self-compression of femtosecond deep-ultraviolet pulses by filamentation in krypton.

    Science.gov (United States)

    Adachi, Shunsuke; Suzuki, Toshinori

    2017-05-15

    We demonstrate self-compression of deep-ultraviolet (DUV) pulses by filamentation in krypton. In contrast to self-compression in the near-infrared, that in the DUV is associated with a red-shifted sub-pulse appearing in the pulse temporal profile. The achieved pulse width of 15 fs is the shortest among demonstrated sub-mJ deep-ultraviolet pulses.

  4. Deep Web and Dark Web: Deep World of the Internet

    OpenAIRE

    Çelik, Emine

    2018-01-01

    The Internet is undoubtedly still a revolutionary breakthrough in the history of humanity. Many people use the internet for communication, social media, shopping, political and social agendas, and more. Deep Web and Dark Web concepts are handled not only by computer and software engineers but also by social scientists, because of the role of the internet for states in international arenas, public institutions and human life. Given the very important role of the internet for social s...

  5. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.

    Science.gov (United States)

    Wachinger, Christian; Reuter, Martin; Klein, Tassilo

    2018-04-15

    We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we not only predict the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
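The patch-plus-coordinates input construction described in this record can be sketched as follows. This is illustrative only, not the authors' code: plain normalized voxel coordinates stand in for DeepNAT's Laplace-Beltrami eigenfunction parameterization, and the toy volume and patch radius are invented.

```python
import numpy as np

# Sketch of DeepNAT-style input construction (illustrative): a 3D patch
# around each voxel, augmented with spatial coordinates because patches
# alone lack spatial context. Normalized (i, j, k) coordinates are a crude
# stand-in for the Laplace-Beltrami eigenfunctions used in the paper.
rng = np.random.default_rng(6)
vol = rng.standard_normal((32, 32, 32))  # toy T1-weighted volume
r = 3                                    # patch radius -> 7x7x7 patches

def patch_with_coords(i, j, k):
    patch = vol[i - r:i + r + 1, j - r:j + r + 1, k - r:k + r + 1]
    coords = np.array([i, j, k]) / np.array(vol.shape)  # crude spatial context
    return np.concatenate([patch.ravel(), coords])      # network input vector

x = patch_with_coords(16, 16, 16)  # input for classifying the center voxel
```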

  6. Deep Phenotyping: Deep Learning For Temporal Phenotype/Genotype Classification

    OpenAIRE

    Najafi, Mohammad; Namin, Sarah; Esmaeilzadeh, Mohammad; Brown, Tim; Borevitz, Justin

    2017-01-01

    High resolution and high throughput, genotype to phenotype studies in plants are underway to accelerate breeding of climate ready crops. Complex developmental phenotypes are observed by imaging a variety of accessions in different environment conditions, however extracting the genetically heritable traits is challenging. In the recent years, deep learning techniques and in particular Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Long-Short Term Memories (LSTMs), h...

  7. Deep Neuromuscular Blockade Improves Laparoscopic Surgical Conditions

    DEFF Research Database (Denmark)

    Rosenberg, Jacob; Herring, W Joseph; Blobner, Manfred

    2017-01-01

    INTRODUCTION: Sustained deep neuromuscular blockade (NMB) during laparoscopic surgery may facilitate optimal surgical conditions. This exploratory study assessed whether deep NMB improves surgical conditions and, in doing so, allows use of lower insufflation pressures during laparoscopic cholecys...

  8. Joint Training of Deep Boltzmann Machines

    OpenAIRE

    Goodfellow, Ian; Courville, Aaron; Bengio, Yoshua

    2012-01-01

    We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks.

  9. Social media audits achieving deep impact without sacrificing the bottom line

    CERN Document Server

    Gattiker, Urs E

    2014-01-01

    Social media is quickly becoming important to most businesses, but many managers, professionals, and marketing experts are unsure about the practicalities of social media marketing and how to measure success. Social Media Audits gives people dealing with social business in their working life a guide to social media marketing, measurement, and how to evaluate and improve the use of social media in an organizational context. This book consists of three parts, the first of which introduces the reader to concepts and ideas emerging in social media. The second part considers the need to shift from

  10. A Comprehensive Overview of CO2 Flow Behaviour in Deep Coal Seams

    Directory of Open Access Journals (Sweden)

    Mandadige Samintha Anne Perera

    2018-04-01

    Full Text Available Although enhanced coal bed methane recovery (ECBM) and CO2 sequestration are effective approaches for achieving lower and safer CO2 levels in the atmosphere, the effectiveness of CO2 storage is greatly influenced by the flow ability of the injected CO2 through the coal seam. A precise understanding of CO2 flow behaviour is necessary due to the various complexities generated in coal seams upon CO2 injection. This paper aims to provide a comprehensive overview of CO2 flow behaviour in deep coal seams, specifically addressing the permeability alterations associated with different in situ conditions. The low permeability nature of natural coal seams has a significant impact on the CO2 sequestration process. One of the major causes of this low permeability is the high effective stress applied to them, which reduces the pore space available for fluid movement, negatively impacting flow capability. Further, deep coal seams are often water saturated, where moisture behaves as a barrier to fluid movement and thus reduces seam permeability. Although the high temperatures existing at deep seams cause thermal expansion in the coal matrix, reducing their permeability, extremely high temperatures may create thermal cracks, resulting in permeability enhancements. Deep coal seams preferable for CO2 sequestration are generally high-rank coal, as they have been subjected to greater pressure and temperature variations over a long period of time, which confirms the low permeability nature of such seams. The resulting extremely low CO2 permeability creates serious issues in large-scale CO2 sequestration/ECBM projects, as critically high injection pressures are required to achieve sufficient CO2 injection into the coal seam. The situation becomes worse when CO2 is injected into such coal seams, because CO2 movement in the coal seam creates a significant influence on the natural permeability of the seams through CO2
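The stress-dependence of coal permeability discussed above is commonly modelled with an exponential law of the form k = k0·exp(-3·c_f·Δσ), as in the Shi-Durucan family of models. The sketch below uses this generic textbook form with invented parameter values (it is not taken from this paper) to illustrate why highly-stressed deep seams have low CO2 injectivity.

```python
import math

# Generic stress-permeability sketch (assumed exponential model, invented
# parameters): permeability drops exponentially with effective stress,
# so deep, highly-stressed seams have much lower CO2 injectivity.
k0 = 1.0  # reference permeability [mD] at zero effective stress (invented)
cf = 0.05 # cleat compressibility [1/MPa] (typical order of magnitude)

def permeability(delta_sigma_mpa):
    """Permeability [mD] at a given effective stress increase [MPa]."""
    return k0 * math.exp(-3.0 * cf * delta_sigma_mpa)

shallow = permeability(5.0)   # ~5 MPa effective stress
deep = permeability(20.0)     # ~20 MPa: roughly an order of magnitude lower
```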

  11. Impact of Deepwater Horizon Spill on food supply to deep-sea benthos communities

    Science.gov (United States)

    Prouty, Nancy G.; Swarzenski, Pamela; Mienis, Furu; Duineveld, Gerald; Demopoulos, Amanda W.J.; Ross, Steve W.; Brooke, Sandra

    2016-01-01

    Deep-sea ecosystems encompass unique and often fragile communities that are sensitive to a variety of anthropogenic and natural impacts. After the 2010 Deepwater Horizon (DWH) oil spill, sampling efforts documented the acute impact of the spill on some deep-sea coral colonies. To investigate the impact of the DWH spill on quality and quantity of biomass delivered to the deep-sea, a suite of geochemical tracers (e.g., stable and radio-isotopes, lipid biomarkers, and compound specific isotopes) was measured from monthly sediment trap samples deployed near a high-density deep-coral site in the Viosca Knoll area of the north-central Gulf of Mexico prior to (Oct-2008 to Sept-2009) and after the spill (Oct-2010 to Sept-2011). Marine (e.g., autochthonous) sources of organic matter dominated the sediment traps in both years; however, after the spill, there was a pronounced reduction in marine-sourced OM, including a reduction in marine-sourced sterols and n-alkanes and a concomitant decrease in sediment trap organic carbon and pigment flux. Results from this study indicate a reduction in primary production and carbon export to the deep-sea in 2010-2011, at least 6-18 months after the spill started. Whereas satellite observations indicate an initial increase in phytoplankton biomass, results from this sediment trap study define a reduction in primary production and carbon export to the deep-sea community. In addition, a dilution from a low-14C carbon source (e.g., petrocarbon) was detected in the sediment trap samples after the spill, in conjunction with a change in the petrogenic composition. The data presented here fill a critical gap in our knowledge of biogeochemical processes and sub-acute impacts to the deep-sea that ensued after the 2010 DWH spill.

  12. Low-complexity object detection with deep convolutional neural network for embedded systems

    Science.gov (United States)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well known that consolidation of an embedded system for CNN-based object detection is more challenging due to computation and memory requirements compared with problems like image classification. To achieve these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully-convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO.1 The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect any objects without any limitations on the size of the objects. However, unlike YOLO, all the layers in the proposed network are fully-convolutional. Thus, it is able to take input images of any size. We pick face detection as a use case. We evaluate the proposed model for face detection on the FDDB dataset and Widerface dataset. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves comparable accuracy to the state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory-BW by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF-model provides an additional 4× memory reduction while keeping the accuracy nearly as good as the floating-point model. Moreover, the fixed-point model is capable of achieving 20× faster inference speed compared with the floating-point model. Thus, the proposed method is promising for embedded implementations.
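The 4× memory reduction from the 8-bit fixed-point model comes from storing int8 values plus a scale instead of float32 weights. The sketch below shows the generic symmetric-quantization idea (it is not the paper's implementation; tensor size and scheme are invented for illustration).

```python
import numpy as np

# Generic symmetric 8-bit quantization sketch (illustrative, not the
# paper's code): float32 weights are mapped to int8 with a per-tensor
# scale, giving 4x memory reduction, and dequantized for computation.
rng = np.random.default_rng(2)
w = rng.standard_normal(1000).astype(np.float32)  # toy weight tensor

scale = float(np.abs(w).max()) / 127.0            # symmetric per-tensor scale
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = w_q.astype(np.float32) * scale            # dequantized approximation

mem_ratio = w.nbytes / w_q.nbytes                 # float32 -> int8: 4x smaller
max_err = float(np.abs(w - w_hat).max())          # bounded by scale / 2
```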

  13. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  14. Endovascular treatment of iliofemoral deep vein thrombosis in pregnancy using US-guided percutaneous aspiration thrombectomy.

    Science.gov (United States)

    Gedikoglu, Murat; Oguzkurt, Levent

    2017-01-01

    We aimed to describe ultrasonography (US)-guided percutaneous aspiration thrombectomy in pregnant women with iliofemoral deep vein thrombosis. This study included nine pregnant women with acute and subacute iliofemoral deep vein thrombosis, who were severely symptomatic with massive swelling and pain of the leg. Patients were excluded from the study if they had only femoropopliteal deep vein thrombosis or mild symptoms of deep vein thrombosis. US-guided percutaneous aspiration thrombectomy was applied to achieve thrombus removal and uninterrupted venous flow. The treatment was considered successful if there was adequate venous patency and symptomatic relief. Complete or significant thrombus removal and uninterrupted venous flow from the puncture site up to the iliac veins were achieved in all patients at the first intervention. Complete relief of leg pain was achieved immediately in seven patients (77.8%). Two patients (22.2%) had a recurrence of thrombosis in the first week postintervention. One of them underwent a second intervention, where percutaneous aspiration thrombectomy was performed again with successful removal of thrombus and establishment of in-line flow. Two patients were lost to follow-up after birth. None of the remaining seven patients had rethrombosis throughout the postpartum period. Symptomatic relief was detected clinically in these patients. Endovascular treatment with US-guided percutaneous aspiration thrombectomy can be considered a safe and effective way to remove thrombus from the deep veins in pregnant women with acute and subacute iliofemoral deep vein thrombosis.

  15. Is Multitask Deep Learning Practical for Pharma?

    Science.gov (United States)

    Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay

    2017-08-28

    Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.

  16. Evaluation of the DeepWind concept

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Borg, Michael; Gonzales Seabra, Luis Alberto

    The report describes the DeepWind 5 MW conceptual design as a baseline for results obtained in the scientific and technical work packages of the DeepWind project. A comparison of DeepWind with existing VAWTs and paper projects is carried out, along with the evaluation of the concept in terms of cost...

  17. Consolidated Deep Actor Critic Networks (DRAFT)

    NARCIS (Netherlands)

    Van der Laan, T.A.

    2015-01-01

    The works [Volodymyr et al. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.] and [Volodymyr et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.] have demonstrated the power of combining deep neural networks with

  18. Simulator Studies of the Deep Stall

    Science.gov (United States)

    White, Maurice D.; Cooper, George E.

    1965-01-01

    Simulator studies of the deep-stall problem encountered with modern airplanes are discussed. The results indicate that the basic deep-stall tendencies produced by aerodynamic characteristics are augmented by operational considerations. Because of control difficulties to be anticipated in the deep stall, it is desirable that adequate safeguards be provided against inadvertent penetrations.

  19. TOPIC MODELING: CLUSTERING OF DEEP WEBPAGES

    OpenAIRE

    Muhunthaadithya C; Rohit J.V; Sadhana Kesavan; E. Sivasankar

    2015-01-01

    The internet is comprised of a massive amount of information in the form of zillions of web pages. This information can be categorized into the surface web and the deep web. The existing search engines can effectively make use of surface web information, but the deep web remains unexploited. Machine learning techniques have been commonly employed to access deep web content.

  20. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10⁴ supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
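The core idea of feeding (time, flux) pairs through a recurrent network can be sketched with a minimal vanilla RNN forward pass. This is illustrative only (random untrained weights and invented sizes, not the authors' LSTM-based model): the hidden state summarises the sequence, and a final softmax layer yields type probabilities.

```python
import numpy as np

# Minimal sketch of recurrent light-curve classification (illustrative,
# untrained random weights): (time, flux) pairs are consumed step by step,
# the hidden state accumulates sequence information, and a final linear
# layer with softmax produces supernova-type probabilities.
rng = np.random.default_rng(3)
n_in, n_hid, n_cls = 2, 16, 3  # (time, flux) input; 3 supernova types
Wx = rng.standard_normal((n_hid, n_in)) * 0.1
Wh = rng.standard_normal((n_hid, n_hid)) * 0.1
Wo = rng.standard_normal((n_cls, n_hid)) * 0.1

def classify(seq):
    """seq: (T, 2) array of (time, flux) observations."""
    h = np.zeros(n_hid)
    for obs in seq:
        h = np.tanh(Wx @ obs + Wh @ h)  # vanilla RNN state update
    z = Wo @ h
    e = np.exp(z - z.max())
    return e / e.sum()                   # softmax over supernova types

p = classify(rng.standard_normal((50, 2)))  # probabilities for one light curve
```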

  1. Deep bottleneck features for spoken language identification.

    Directory of Open Access Journals (Sweden)

    Bing Jiang

    Full Text Available A key problem in spoken language identification (LID is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF for spoken LID, motivated by the success of Deep Neural Networks (DNN in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV, using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09 show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system proposed.
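The notion of a deep bottleneck feature can be sketched generically: the activation of a narrow hidden layer in a trained DNN is taken as a compact utterance representation, and the layers above it are discarded at extraction time. The snippet below is illustrative only (random untrained weights; the layer sizes are invented, not those of the DBF-TV system).

```python
import numpy as np

# Generic deep bottleneck feature (DBF) sketch (illustrative, untrained):
# a DNN with a narrow middle layer; at extraction time the forward pass
# stops at that layer and its activation is the compact feature vector.
rng = np.random.default_rng(5)
sizes = [40, 256, 39, 256, 10]  # bottleneck of width 39 in the middle
Ws = [rng.standard_normal((o, i)) * 0.05 for i, o in zip(sizes[:-1], sizes[1:])]

def bottleneck_features(x, bn_layer=2):
    h = x
    for idx, W in enumerate(Ws, start=1):
        h = np.tanh(W @ h)
        if idx == bn_layer:
            return h  # discard the upper layers at extraction time
    return h

dbf = bottleneck_features(rng.standard_normal(40))  # one frame's DBF vector
```

In the LID systems described above, such frame-level bottleneck vectors are then pooled into an i-vector per utterance.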

  2. The Achievement Ideology and Whiteness: "Achieving Whiteness" or "Achieving Middle Class?"

    Science.gov (United States)

    Allen, Ricky Lee

    Over the past few decades, social reproduction theorists have criticized achievement ideology as a dominant and dominating myth that hides the true nature of class immobility. Social reproductionists' primary criticism of achievement ideology is that it blinds the working class, regardless of race or gender, to the possibilities of collective…

  3. Potential for waste reduction

    International Nuclear Information System (INIS)

    Warren, J.L.

    1990-01-01

    The author focuses on wastes considered hazardous under the Resource Conservation and Recovery Act. This chapter discusses wastes that are of interest as well as the factors affecting the quantity of waste considered available for waste reduction. Estimates are provided of the quantities of wastes generated. Estimates of the potential for waste reduction are meaningful only to the extent that one can understand the amount of waste actually being generated. Estimates of waste reduction potential are summarized from a variety of government and nongovernment sources

  4. DeepFlavour in CMS

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Flavour-tagging of jets is an important task in collider-based high energy physics and a field where machine learning tools are applied by all major experiments. A new tagger (DeepFlavour) was developed and commissioned in CMS that is based on an advanced machine learning procedure. A deep neural network is used to do multi-classification of jets that originate from a b-quark, two b-quarks, a c-quark, two c-quarks or light colored particles (u, d, s-quark or gluon). The performance was measured in both data and simulation. The talk will also include the measured performance of all taggers in CMS. The different taggers and results will be discussed and compared, with some focus on details of the newest tagger.

  5. Deep Learning for ECG Classification

    Science.gov (United States)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications where this problem arises. Many machine learning (ML) solutions can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML approaches is their use of heuristic hand-crafted or engineered features with shallow feature-learning architectures; the features found this way may not be the ones that give the highest classification accuracy on ECG data. One proposed solution is to use deep learning architectures, in which the first layers of convolutional neurons act as feature extractors and some fully connected (FCN) layers at the end make the final decision about ECG classes. In this work, a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented, together with some classification results.
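    The Conv1D-then-fully-connected pipeline described above can be sketched in a few lines of plain Python. This is a toy forward pass only; the filter, weights, signal, and sizes are illustrative assumptions, not taken from the paper:

```python
import math
import random

def conv1d(signal, kernel, stride=1):
    """Valid 1D convolution (cross-correlation), as in a Conv1D layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def dense(xs, weights, bias):
    """Fully connected layer: one output per weight row."""
    return [sum(w * x for w, x in zip(row, xs)) + b
            for row, b in zip(weights, bias)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy forward pass: Conv1D -> ReLU (feature extractor) -> Dense -> softmax
# over 2 hypothetical ECG classes.
random.seed(0)
ecg = [math.sin(0.3 * t) for t in range(32)]   # stand-in ECG trace
kernel = [0.25, 0.5, 0.25]                     # one "learned" smoothing filter
features = relu(conv1d(ecg, kernel))           # 30 feature values
W = [[random.uniform(-0.1, 0.1) for _ in features] for _ in range(2)]
b = [0.0, 0.0]
probs = softmax(dense(features, W, b))         # class probabilities, sum to 1
```

    In a real model the convolutional filters and dense weights are learned by backpropagation and there are many filters per layer; the point here is only the division of labor between the convolutional feature extractor and the fully connected decision layers.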

  6. Deep Space Habitat Concept Demonstrator

    Science.gov (United States)

    Bookout, Paul S.; Smitherman, David

    2015-01-01

    This project will develop, integrate, test, and evaluate Habitation Systems that will be utilized as technology testbeds and will advance NASA's understanding of alternative deep space mission architectures, requirements, and operations concepts. Rapid prototyping and existing hardware will be utilized to develop full-scale habitat demonstrators. FY 2014 focused on the development of a large volume Space Launch System (SLS) class habitat (Skylab Gen 2) based on the SLS hydrogen tank components. Similar to the original Skylab, a tank section of the SLS rocket can be outfitted with a deep space habitat configuration and launched as a payload on an SLS rocket. This concept can be used to support extended stay at the Lunar Distant Retrograde Orbit to support the Asteroid Retrieval Mission and provide a habitat suitable for human missions to Mars.

  7. Hybrid mask for deep etching

    KAUST Repository

    Ghoneim, Mohamed T.

    2017-08-10

    Deep reactive ion etching is essential for creating high aspect ratio micro-structures for microelectromechanical systems, sensors and actuators, and emerging flexible electronics. A novel hybrid dual soft/hard mask bilayer may be deposited during semiconductor manufacturing for deep reactive etches. Such a manufacturing process may include depositing a first mask material on a substrate; depositing a second mask material on the first mask material; depositing a third mask material on the second mask material; patterning the third mask material with a pattern corresponding to one or more trenches for transfer to the substrate; transferring the pattern from the third mask material to the second mask material; transferring the pattern from the second mask material to the first mask material; and/or transferring the pattern from the first mask material to the substrate.

  8. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    Science.gov (United States)

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
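    The atrous (dilated) convolution highlighted in this abstract can be illustrated with a minimal pure-Python sketch, reduced to 1D for brevity. The signal and kernel here are illustrative assumptions; the paper's actual operation is 2D over feature maps:

```python
def atrous_conv1d(signal, kernel, rate):
    """'Valid' 1D atrous convolution: the kernel taps are spaced `rate`
    samples apart, enlarging the receptive field without adding weights."""
    span = (len(kernel) - 1) * rate + 1   # effective receptive field
    return [sum(signal[i + j * rate] * w for j, w in enumerate(kernel))
            for i in range(len(signal) - span + 1)]

signal = list(range(10))
kernel = [1.0, 1.0, 1.0]                  # three weights in both cases

dense_out  = atrous_conv1d(signal, kernel, rate=1)   # ordinary convolution
atrous_out = atrous_conv1d(signal, kernel, rate=2)   # same 3 weights,
                                                     # receptive field of 5
```

    With rate=2 the same three parameters cover a window of five samples, which is exactly the trade-off the abstract describes: larger context at no extra parameter or computation cost per output.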

  9. Soft-Deep Boltzmann Machines

    OpenAIRE

    Kiwaki, Taichi

    2015-01-01

    We present a layered Boltzmann machine (BM) that can better exploit the advantages of a distributed representation. It is widely believed that deep BMs (DBMs) have far greater representational power than their shallow counterparts, restricted Boltzmann machines (RBMs). However, this expectation of the supremacy of DBMs over RBMs has never been validated in a theoretical fashion. In this paper, we provide both theoretical and empirical evidence that the representational power of DBMs can be a...

  10. Evolving Deep Networks Using HPC

    Energy Technology Data Exchange (ETDEWEB)

    Young, Steven R. [ORNL, Oak Ridge; Rose, Derek C. [ORNL, Oak Ridge; Johnston, Travis [ORNL, Oak Ridge; Heller, William T. [ORNL, Oak Ridge; Karnowski, thomas P. [ORNL, Oak Ridge; Potok, Thomas E. [ORNL, Oak Ridge; Patton, Robert M. [ORNL, Oak Ridge; Perdue, Gabriel [Fermilab; Miller, Jonathan [Santa Maria U., Valparaiso

    2017-01-01

    While a large number of deep learning networks that produce outstanding results on natural image datasets have been studied and published, these datasets make up only a fraction of those to which deep learning can be applied. Such datasets include text data, audio data, and arrays of sensors, which have very different characteristics from natural images. As the “best” networks for natural images have largely been discovered through experimentation and cannot be proven optimal on any theoretical basis, there is no reason to believe that they are the optimal networks for these drastically different datasets. Hyperparameter search is thus often a very important step when applying deep learning to a new problem. In this work we present an evolutionary approach to searching the possible space of network hyperparameters and construction that can scale to 18,000 nodes. This approach is applied to datasets of varying types and characteristics, where we demonstrate the ability to rapidly find the best hyperparameters, enabling practitioners to iterate quickly between idea and result.
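    The evolutionary hyperparameter search described above can be sketched in miniature. This is a hypothetical toy, not the authors' 18,000-node implementation: the search space, mutation rate, and stand-in fitness function are all illustrative assumptions (a real fitness function would train a network and return its validation accuracy):

```python
import random

def evolve(fitness, space, pop_size=12, generations=15, seed=1):
    """Minimal evolutionary hyperparameter search: each generation keeps
    the fittest half of the population and refills it with mutated copies
    of the survivors."""
    rng = random.Random(seed)
    sample = lambda: {k: rng.choice(v) for k, v in space.items()}
    mutate = lambda ind: {k: (rng.choice(space[k]) if rng.random() < 0.3 else v)
                          for k, v in ind.items()}
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # elitist selection
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

# Stand-in "validation accuracy": counts how many hyperparameters match a
# hypothetical sweet spot (lr=0.01, layers=4, width=64).
space = {"lr": [0.1, 0.01, 0.001], "layers": [2, 4, 8], "width": [16, 64, 256]}
target = {"lr": 0.01, "layers": 4, "width": 64}
score = lambda ind: sum(ind[k] == target[k] for k in target)
best = evolve(score, space)
```

    The scaled-up version in the paper distributes the expensive fitness evaluations (network trainings) across HPC nodes; the selection/mutation loop itself stays cheap.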

  11. Deep Space Gateway Science Opportunities

    Science.gov (United States)

    Quincy, C. D.; Charles, J. B.; Hamill, Doris; Sidney, S. C.

    2018-01-01

    The NASA Life Sciences Research Capabilities Team (LSRCT) has been discussing deep space research needs for the last two years. NASA's programs conducting life sciences studies - the Human Research Program, Space Biology, Astrobiology, and Planetary Protection - see the Deep Space Gateway (DSG) as affording enormous opportunities to investigate biological organisms in a unique environment that cannot be replicated in Earth-based laboratories or on Low Earth Orbit science platforms. These investigations may in many cases provide definitive answers to risks associated with exploration and living outside Earth's protective magnetic field. Unlike Low Earth Orbit or terrestrial locations, the Gateway location will be subjected to the true deep space spectrum and influence of both galactic cosmic and solar particle radiation, and thus presents an opportunity to investigate their long-term exposure effects. The question of how a community of biological organisms changes over time within the harsh environment of space flight outside the magnetic field's protection can be investigated. The biological response to the absence of Earth's geomagnetic field can be studied for the first time. Will organisms change in new and unique ways under these new conditions? This may be especially true for investigations of microbial communities. The Gateway provides a platform for microbiology experiments both inside, to improve understanding of interactions between microbes and human habitats, and outside, to improve understanding of microbe-hardware interactions under exposure to the space environment.

  12. Reviewing nuclear power station achievement

    International Nuclear Information System (INIS)

    Howles, L.R.

    1976-01-01

    For measuring nuclear power station achievement against the original purchase, the usual gross output figures are of little value, since the term loosely covers many different definitions. An authentic design output figure has been established, which relates to net design output plus house load at full load. Based on these figures, both cumulative and moving annual load factors are measured, the latter measuring achievement over the last year and thus showing trends with time. Calculations have been carried out for all nuclear stations in the Western World with a gross design output of 150 MW(e) and above. From these are shown: moving annual load factors, indicating relative station achievement for all the plants; cumulative load factors, from which return on investment can be calculated; average moving annual load factors for the four system types (Magnox, PWR, HWR, and BWR); and, in a few cases, a relative comparison of achievement by country. (U.K.)

  13. [Theme: Achieving Quality Laboratory Projects.]

    Science.gov (United States)

    Shinn, Glen C.; And Others

    1983-01-01

    The theme articles present strategies for achieving quality laboratory projects in vocational agriculture. They describe fundamentals of the construction of quality projects and stress the importance of quality instruction. (JOW)

  14. Fiscal 1983 Sunshine Program achievement report. Development for practical application of photovoltaic system (Verification of experimental low cost silicon refining - Development of technology for chlorosilane hydrogen-reduction process); 1983 nendo taiyoko hatsuden system jitsuyoka gijutsu kaihatsu seika hokokusho. Tei cost silicon jikken seisei kensho (Chlorosilane no suiso kangen kotei no gijutsu kaihatsu)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1984-03-01

    The effort aims to develop a reactor, its peripheral devices, and the process management technology for them, and to develop chlorosilane hydrogen-reduction process technology, as part of the endeavor to develop a low-cost production process for photovoltaic-cell silicon, with the goal of building a model plant capable of approximately 10 tons/year of SOG (solar-grade) silicon. Ten operations (2,264 hours of total reaction time) are studied, and the electric power consumption efficiency turns out to be near the initially planned value. The yield of Si, however, is only 16.5%, lower than the initially planned value of 20%; it is raised to 18% by raising the reactor temperature. To prevent overcleaning, a reactor with its internal walls experimentally coated with SiC is tested. The problems with devices other than the reactor tube are extraction rendered difficult by anomalously grown granules, and processing devices choked by Si powder contained in the waste gas after reaction. The first problem is settled by modifying the extraction tube, but the other still needs a remedy. The produced granules are found to be of high quality. The seed-producing roll crusher is operated for a total of 92 hours, yielding 857 kg in total. (NEDO)

  15. Fiscal 1980-1987 Sunshine Program achievement reports. Development for practical application of photovoltaic system (Verification of experimental low cost silicon refining - Overview: Development of technology for chlorosilane hydrogen-reduction process); 1980-1987 nendo taiyoko hatsuden system jitsuyoka gijutsu kaihatsu seika hokokusho. Tei cost silicon jikken seisei kensho sokatsuban (chlorosilane no suiso kangen kotei no gijutsu kaihatsu)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1988-11-01

    The chlorosilane hydrogen-reduction technology development period may be divided into a first phase (fiscal 1980-1985) and a second phase (fiscal 1986-1988). During the former phase, efforts were exerted to develop a small experimental device (10 tons/year class) and the technologies to operate it. Important challenges were to cause reaction to occur only on the seed grains in the reactor and to create a proper material for the reactor tube. In the latter phase, element technologies indispensable for the development of a practical reactor were developed. Endeavors exerted to solve these challenges were the development of a high-strength large-diameter reactor tube, the development of an SiC-CVD (chemical vapor deposition) coating technology, and the development of a technology to join parts to the ceramic reactor tube. After eight years of effort, a fluidized bed reactor capable of continuously reducing trichlorosilane by hydrogen has been successfully constructed. The success promises stable production and supply of polycrystalline silicon. The SOG (solar-grade) Si produced by the reactor is pure enough to serve the purpose of photovoltaics, and its unit cost has been lowered as initially intended. (NEDO)

  16. Achievement report for fiscal 1998 on the preceding research related to global environment industry technologies. Survey and research on reduction of nitrous oxide; 1998 nendo chikyu sangyo gijutsu ni kakawaru sendo kenkyu asanka chisso no haishutsu teigen ni kansuru chosa kenkyu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    Nitrous oxide is a strong greenhouse gas, with a warming index per molecule about 300 times that of CO2, and it was designated for reduction at the Kyoto Conference. The present preceding research discusses whether research and development on reducing nitrous oxide emissions is necessary and, if so, aims to propose what research should be planned. Fiscal 1997, the first fiscal year of the preceding research, surveyed emission amounts from different sources and enumerated the research and development assignments. Fiscal 1998, the final fiscal year, summarizes the emission amounts including future trends, surveys the feasibility of the promising technological measures through experiments, and finally proposes a research and development plan desired for future implementation. The proposal contains a research plan with the development of nitrous oxide decomposition catalysts and automobile catalysts as its main objectives. Among the domestic nitrous oxide sources, about two thirds are man-made; hence catalysts, if developed, may be applied to facilities such as combustion furnaces. (NEDO)

  17. Breast reduction (mammoplasty) - slideshow

    Science.gov (United States)

    Breast reduction (mammoplasty) - series: Indications. //medlineplus.gov/ency/presentations/100189.htm. Review provided by ... Lickstein, MD, FACS, specializing in cosmetic and reconstructive plastic surgery, Palm Beach Gardens, FL.

  18. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  19. STRATEGIES FOR ACHIEVING COMPETITIVE ADVANTAGE

    OpenAIRE

    Jusuf ZEKIRI; Alexandru NEDELEA

    2011-01-01

    This paper is organized in three parts. A brief overview of the importance of strategies within companies and a literature review are presented, covering both traditional approaches to strategies for achieving competitive advantage and new approaches for gaining one. The main objective of the paper is to outline and discuss, from a theoretical viewpoint, the relevant issues and challenges related to how companies may formulate strategy in order to achieve a com...

  20. Classification of Exacerbation Frequency in the COPDGene Cohort Using Deep Learning with Deep Belief Networks.

    Science.gov (United States)

    Ying, Jun; Dutta, Joyita; Guo, Ning; Hu, Chenhui; Zhou, Dan; Sitek, Arkadiusz; Li, Quanzheng

    2016-12-21

    This study aims to develop an automatic deep-learning-based classifier for exacerbation frequency in patients with chronic obstructive pulmonary disease (COPD). A three-layer deep belief network (DBN) with two hidden layers and one visible layer was employed to develop classification models, and the models' robustness to exacerbation was analyzed. Subjects from the COPDGene cohort were labeled with exacerbation frequency, defined as the number of exacerbation events per year. 10,300 subjects, with 361 features each, were included in the analysis. After feature selection and parameter optimization, the proposed classification method achieved an accuracy of 91.99% in a 10-fold cross-validation experiment. The analysis of DBN weights showed a good visual spatial relationship between the underlying critical features of different layers. Our findings show that the most sensitive features obtained from the DBN weights are consistent with the consensus reflected in clinical rules and standards for COPD diagnostics. We thus demonstrate that the DBN is a competitive tool for exacerbation risk assessment in patients suffering from COPD.