WorldWideScience

Sample records for deep throttling pintle

  1. Northrop Grumman TR202 LOX/LH2 Deep Throttling Engine Technology Project Status

    Science.gov (United States)

    Gromski, Jason; Majamaki, Annik; Chianese, Silvio; Weinstock, Vladimir; Kim, Tony S.

    2010-01-01

    NASA's Propulsion and Cryogenic Advanced Development (PCAD) project is currently developing enabling propulsion technologies in support of future lander missions. To meet lander requirements, several technical challenges need to be overcome, one of which is the ability for the descent engine(s) to operate over a deep throttle range with cryogenic propellants. To address this need, PCAD has enlisted Northrop Grumman Aerospace Systems (NGAS) in a technology development effort associated with the TR202 engine. The TR202 is a LOX/LH2 expander cycle engine driven by independent turbopump assemblies and featuring a variable area pintle injector similar to the injector used on the TR200 Apollo Lunar Module Descent Engine (LMDE). Since the Apollo missions, NGAS has continued to mature deep throttling pintle injector technology. The TR202 program has completed two series of pintle injector testing. The first series of testing used ablative thrust chambers and demonstrated igniter operation as well as stable performance at discrete points throughout the designed 10:1 throttle range. The second series was conducted with calorimeter chambers and demonstrated injector performance at discrete points throughout the throttle range as well as chamber heat flow adequate to power an expander cycle design across the throttle range. This paper provides an overview of the TR202 program, describing the different phases and key milestones. It describes how test data was correlated to the engine conceptual design. The test data obtained has created a valuable database for deep throttling cryogenic pintle technology, a technology that is readily scalable in thrust level.

  2. Northrop Grumman TR202 LOX/LH2 Deep Throttling Engine Project Status

    Science.gov (United States)

    Gromski, J.; Majamaki, A. N.; Chianese, S. G.; Weinstock, V. D.; Kim, T.

    2010-01-01

    NASA's Propulsion and Cryogenic Advanced Development (PCAD) project is currently developing enabling propulsion technologies in support of the Exploration Initiative, with a particular focus on the needs of the Altair Project. To meet Altair requirements, several technical challenges need to be overcome, one of which is the ability for the lunar descent engine(s) to operate over a deep throttle range with cryogenic propellants. To address this need, PCAD has enlisted Northrop Grumman Aerospace Systems (NGAS) in a technology development effort associated with the TR202, a LOX/LH2 expander cycle engine driven by independent turbopump assemblies and featuring a variable area pintle injector similar to the injector used on the TR200 Apollo Lunar Module Descent Engine (LMDE). Since the Apollo missions, NGAS has continued to mature deep throttling pintle injector technology. The TR202 program has completed two phases of pintle injector testing. The first phase of testing used ablative thrust chambers and demonstrated igniter operation as well as stable performance at several power levels across the designed 10:1 throttle range. The second phase of testing was performed on a calorimeter chamber and demonstrated injector performance at various power levels (75%, 50%, 25%, 10%, and 7.5%) across the throttle range as well as chamber heat flux to show that the engine can close an expander cycle design across the throttle range. This paper provides an overview of the TR202 program. It describes the different phases of the program with the key milestones of each phase. It then shows when those milestones were met. Next, it describes how the test data was used to update the conceptual design and how the test data has created a database for deep throttling cryogenic pintle technology that is readily scaleable and can be used to again update the design once the Altair program's requirements are firm. The final section of the paper describes the path forward, which includes

  3. On the prediction of spray angle of liquid-liquid pintle injectors

    Science.gov (United States)

    Cheng, Peng; Li, Qinglian; Xu, Shun; Kang, Zhongtao

    2017-09-01

    The pintle injector is famous for its capability of deep throttling and low cost. However, the pintle injector has seldom been investigated. To get a good prediction of the spray angle of liquid-liquid pintle injectors, theoretical analysis, numerical simulations and experiments were conducted. Under the hypothesis of incompressible and inviscid flow, a spray angle formula was deduced from the continuity and momentum equations based on a control volume analysis. The formula was then validated by numerical and experimental data. The results indicate that both geometric and injection parameters affect the total momentum ratio (TMR) and thus influence the spray angle formed by liquid-liquid pintle injectors. TMR is the pivotal non-dimensional number that dominates the spray angle. Compared with gas-gas pintle injectors, the spray angle formed by liquid-liquid injectors is larger, which benefits from the local high pressure zone near the pintle wall caused by the impingement of the radial and axial sheets.
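
    A minimal control-volume sketch of the kind of relation described above, under the added assumptions that the radial (pintle) and axial (annular) streams merge into a single conical sheet and that pressure and viscous forces on the control volume are negligible; the paper's own formula accounts for the pressure effect noted in the abstract, so this is only the leading-order momentum balance.

```latex
% Leading-order momentum balance for a pintle spray (sketch only; it neglects
% the pressure force that the abstract says enlarges liquid-liquid spray angles).
\begin{align}
  \mathrm{TMR} &= \frac{\dot{m}_r U_r}{\dot{m}_a U_a}
    && \text{(radial-to-axial total momentum ratio)} \\
  \tan\theta &\approx \frac{\dot{m}_r U_r}{\dot{m}_a U_a} = \mathrm{TMR}
    && \text{(spray half-angle of the merged sheet)}
\end{align}
```

    Under this simplification the spray angle grows monotonically with TMR; measured liquid-liquid angles exceed this estimate because of the local high pressure zone at the pintle wall.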

  4. CECE: Expanding the Envelope of Deep Throttling Technology in Liquid Oxygen/Liquid Hydrogen Rocket Engines for NASA Exploration Missions

    Science.gov (United States)

    Giuliano, Victor J.; Leonard, Timothy G.; Lyda, Randy T.; Kim, Tony S.

    2010-01-01

    As one of the first technology development programs awarded by NASA under the Vision for Space Exploration, the Pratt & Whitney Rocketdyne (PWR) Deep Throttling, Common Extensible Cryogenic Engine (CECE) program was selected by NASA in November 2004 to begin technology development and demonstration toward a deep throttling, cryogenic engine supporting ongoing trade studies for NASA's Lunar Lander descent stage. The CECE program leverages the maturity and previous investment of a flight-proven hydrogen/oxygen expander cycle engine, the PWR RL10, to develop and demonstrate an unprecedented combination of reliability, safety, durability, throttlability, and restart capabilities in high-energy, cryogenic, in-space propulsion. The testbed selected for the deep throttling demonstration phases of this program was a minimally modified RL10 engine, allowing for maximum current production engine commonality and extensibility with minimum program cost. Four series of demonstrator engine tests have been successfully completed between April 2006 and April 2010, accumulating 7,436 seconds of hot fire time over 47 separate tests. While the first two test series explored low power combustion (chug) and system instabilities, the third test series investigated and was ultimately successful in demonstrating several mitigating technologies for these instabilities and achieved a stable throttling ratio of 13:1. The fourth test series significantly expanded the engine's operability envelope by successfully demonstrating a closed-loop control system and extensive transient modeling to enable lower power engine starting, faster throttle ramp rates, and mission-specific ignition testing. The final hot fire test demonstrated a chug-free minimum power level of 5.9%, corresponding to an overall throttling ratio of 17.6:1. In total, these tests have provided an early technology demonstration of an enabling cryogenic propulsion concept with invaluable system-level technology data

  5. Development of the Pintle Release Fork Mechanism

    International Nuclear Information System (INIS)

    BOGER, R.M.; DALE, R.

    1999-01-01

    An improved method of attachment of the pintle to the piston in the universal sampler is being developed. The mechanism utilizes a forked release disk which captures two balls in a cavity formed by a hole in the piston and a groove in the pintle rod.

  6. Verification on spray simulation of a pintle injector for liquid rocket engine

    Science.gov (United States)

    Son, Min; Yu, Kijeong; Radhakrishnan, Kanmaniraja; Shin, Bongchul; Koo, Jaye

    2016-02-01

    The pintle injector used for a liquid rocket engine is an injection system that has recently attracted renewed interest, known for its wide throttling capability at high efficiency. The pintle injector has many variations with complex inner structures due to its moving parts. In order to study the rotating flow near the injector tip, which was observed from the cold flow experiment using water and air, a numerical simulation was adopted and a verification of the numerical model was later conducted. For the verification process, three types of experimental data, including velocity distributions of gas flows, spray angles and liquid distribution, were compared with the simulated results. The numerical simulation was performed using a commercial simulation program with the Eulerian multiphase model and axisymmetric two-dimensional grids. The maximum and minimum velocities of gas were within the acceptable range of agreement; however, the spray angles experienced up to 25% error when the momentum ratios were increased. The spray density distributions were quantitatively measured and showed good agreement. As a result of this study, it was concluded that the simulation method was properly constructed to study specific flow characteristics of the pintle injector despite the limitations of two-dimensional and coarse grids.

  7. A Historical Systems Study of Liquid Rocket Engine Throttling Capabilities

    Science.gov (United States)

    Betts, Erin M.; Frederick, Robert A., Jr.

    2010-01-01

    This is a comprehensive systems study to examine and evaluate throttling capabilities of liquid rocket engines. The focus of this study is on engine components, and how the interactions of these components are considered for throttling applications. First, an assessment of space mission requirements is performed to determine what applications require engine throttling. A background on liquid rocket engine throttling is provided, along with the basic equations that are used to predict performance. Three engines are discussed that have successfully demonstrated throttling. Next, the engine system is broken down into components to discuss special considerations that need to be made for engine throttling. This study focuses on liquid rocket engines that have demonstrated operational capability on American space launch vehicles, starting with the Apollo vehicle engines and ending with current technology demonstrations. Both deep throttling and shallow throttling engines are discussed. Boost and sustainer engines have demonstrated throttling from 17% to 100% thrust, while upper stage and lunar lander engines have demonstrated throttling in excess of 10% to 100% thrust. The key difficulty in throttling liquid rocket engines is maintaining an adequate pressure drop across the injector, which is necessary to provide propellant atomization and mixing. For the combustion chamber, cooling can be an issue at low thrust levels. For turbomachinery, the primary considerations are to avoid cavitation, stall, surge, and to consider bearing leakage flows, rotordynamics, and structural dynamics. For valves, it is necessary to design valves and actuators that can achieve accurate flow control at all thrust levels. It is also important to assess the amount of nozzle flow separation that can be tolerated at low thrust levels for ground testing.
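
    The injector pressure-drop difficulty mentioned above follows from the roughly quadratic dependence of injector pressure drop on flow rate; the sketch below (hypothetical numbers, assuming a fixed-area incompressible injector with dp proportional to flow squared) shows why a 10:1 thrust turndown collapses the stabilizing pressure drop by roughly 100:1.

```python
# Hypothetical illustration: fixed-area liquid injector, dp ~ mdot**2.
# Numbers are made up for illustration, not taken from the paper.

def injector_dp(mdot_frac, dp_full=0.2):
    """Injector pressure drop (as a fraction of chamber pressure) at a given
    throttle fraction, assuming dp scales with flow rate squared and equals
    dp_full (here 20% of chamber pressure) at full thrust."""
    return dp_full * mdot_frac ** 2

for thrust in (1.0, 0.5, 0.25, 0.10):
    print(f"{thrust:>4.0%} thrust -> dp/Pc = {injector_dp(thrust):.4f}")
# At 10% thrust the fixed-area injector retains only 1% of its full-thrust
# pressure drop, which is why variable-area (pintle) injectors are attractive
# for deep throttling.
```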

  8. Study of new prototype pintle injectors for diesel engine application

    International Nuclear Information System (INIS)

    Payri, Raul; Gimeno, Jaime; De la Morena, Joaquin; Battiston, Paul A.; Wadhwa, Amrita; Straub, Robert

    2016-01-01

    Highlights:
    • Pintle nozzles are proposed as a way to perform injection rate shaping strategies.
    • Rate shaping is achieved by controlling the relative shape between needle and hole.
    • Compared to other rate shaping strategies, the spray velocity is less impacted.
    • Pintle nozzle design features determine initial spray penetration and angle.
    • The stabilized liquid length and spray angle depend mostly on the hole outlet area.

    Abstract: A new prototype common rail injector featuring a completely new nozzle design concept was exhaustively characterized both from the hydraulic and the spray formation points of view. A commercial injection rate meter together with a spray momentum test rig were used to determine the flow characteristics at the nozzle exit. A novel high pressure and high temperature chamber (up to 15 MPa and 1000 K) was used to determine liquid length and vapor penetration. Using these tools, three different pintle nozzle designs, with specific features in the outlet section, were studied. The test matrix included a sweep of injection pressure up to 2000 bar and a sweep of ambient temperature up to 950 K. The results obtained show that pintle nozzles offer great potential in terms of fuel mass flux controlled by variable nozzle geometry. Effects in the hydraulic measurements and spray images due to the variable geometry were observed and characterized.

  9. Effects of Hardness on Pintle Rod Performance in the Universal and Retained Gas Samplers

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    Interaction between hardness of the pintle rods and the retainer rings used in the core samplers is investigated. It is found that ordinary Rockwell C measurements are not sufficient, and superficial hardness instruments are recommended to verify hardness, since in-production hardness of pintle rods is found to vary widely and probably leads to some premature release of pistons in samplers.

  10. Critical stresses in pintle, weldment and top head of nuclear waste container

    International Nuclear Information System (INIS)

    Ladkany, S.G.; Kniss, B.R.

    1992-01-01

    Critical stresses in the pintle, the weldment, and the top heads (flat and curved) of a high level nuclear waste container are evaluated under an annular loading. This loading is three times larger than the expected normal operating load. Results show that the shape and the thickness of the pintle and the top head, along with the thickness of the weldment, substantially affect the magnitude of the critical stresses and distortions in the various components (i.e. pintle, shell, and heads) when they are supporting a load. Stiffer top heads and pintles and larger weldment sizes reduce the critical stresses in all welded joints. Various shapes of curved top heads were investigated. In this paper an ASME flanged and dished top head, which has the same thickness as the canister, is analyzed.

  11. Flow-throttling orifice nozzle

    International Nuclear Information System (INIS)

    Sletten, H.L.

    1975-01-01

    A series-parallel-flow type throttling apparatus to restrict coolant flow to certain fuel assemblies of a nuclear reactor is comprised of an axial extension nozzle of the fuel assembly. The nozzle has a series of concentric tubes with parallel-flow orifice holes in each tube. Flow passes from a high pressure plenum chamber outside the nozzle through the holes in each tube in series to the inside of the innermost tube where the coolant, having dissipated most of its pressure, flows axially to the fuel element. (U.S.)
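
    A rough sketch of the series-orifice idea described above: the coolant crosses several concentric orifice stages, and each stage takes a share of the total pressure drop. The stage model below uses an incompressible orifice law with made-up areas and discharge coefficient; it only illustrates how the per-stage drops add up and is not the actual fuel-assembly geometry.

```python
# Sketch: pressure drop across N orifice stages in series, using the
# incompressible orifice law dp = (mdot / (Cd * A))**2 / (2 * rho).
# Geometry, density, and flow values are hypothetical.

RHO = 800.0   # kg/m^3, notional coolant density
CD = 0.62     # notional discharge coefficient

def stage_dp(mdot, area):
    return (mdot / (CD * area)) ** 2 / (2.0 * RHO)

mdot = 2.0                               # kg/s, notional assembly flow
areas = [4.0e-4, 3.5e-4, 3.0e-4]         # m^2, one entry per concentric tube
drops = [stage_dp(mdot, a) for a in areas]

for i, dp in enumerate(drops, 1):
    print(f"stage {i}: dp = {dp/1e3:.1f} kPa")
print(f"total throttling dp = {sum(drops)/1e3:.1f} kPa")
```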

  12. Model based design of electronic throttle control

    Science.gov (United States)

    Cherian, Fenin; Ranjan, Ashish; Bhowmick, Pathikrit; Rammohan, A.

    2017-11-01

    With the advent of torque-based Engine Management Systems, the precise control and robust performance of the throttle body become key factors in the overall performance of the vehicle. Electronic Throttle Control provides benefits such as an improved air-fuel ratio for improving vehicle performance and lower exhaust emissions to meet the stringent emission norms. Modern vehicles facilitate various features such as Cruise Control, Traction Control, Electronic Stability Program and Pre-crash systems. These systems require control over engine power without driver intervention, which is not possible with a conventional mechanical throttle system. Thus these systems are integrated to function with the electronic throttle control. However, due to inherent non-linearities in the throttle body, the control becomes a difficult task. In order to eliminate the influence of hysteresis at the initial operation of the butterfly valve, a compensating control term must be added to the duty required to start throttle operation when initial operation is detected. Therefore, a lot of work is being done in this field to incorporate the various nonlinearities and achieve robust control. In our present work, the ETB was tested to verify the working of the system. Calibration of the TPS sensors was carried out in order to acquire an accurate throttle opening angle. The response of the calibrated system was then plotted against a step input signal. A linear model of the ETB was prepared using Simulink and its response was compared with the experimental data to find the initial deviation of the model from the actual system. To reduce this deviation, non-linearities from the existing literature were introduced to the system and a response analysis was performed to check the deviation from the actual system. Based on this investigation, a new nonlinearity parameter can be introduced in the future to reduce the deviation further, making the control of the ETB more
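
    As a companion to the modeling discussion above, here is a minimal lumped-parameter sketch of an electronic throttle body: a DC motor driving the plate against a return spring, with a Coulomb-friction term standing in for one of the nonlinearities mentioned. All parameter values are invented for illustration and are not taken from the paper or from any specific throttle body.

```python
import math

# Minimal electronic-throttle-body sketch: DC motor + return spring + Coulomb
# friction, integrated with forward Euler. All parameters are hypothetical.

J, B_DAMP, K_SPRING = 2e-4, 1e-3, 0.05   # inertia, viscous damping, spring rate
K_T, THETA0 = 0.02, math.radians(8)      # motor torque constant, limp-home angle
T_FRICTION = 0.003                       # Coulomb friction torque (nonlinearity)

def step(theta, omega, duty, dt=1e-3):
    """Advance the throttle plate state one time step for a given PWM duty."""
    torque = (K_T * duty
              - K_SPRING * (theta - THETA0)
              - B_DAMP * omega
              - T_FRICTION * (1 if omega > 0 else -1 if omega < 0 else 0))
    omega += dt * torque / J
    theta += dt * omega
    return theta, omega

theta, omega = THETA0, 0.0
for _ in range(500):                     # 0.5 s of simulated step response
    theta, omega = step(theta, omega, duty=0.6)
print(f"plate angle after 0.5 s: {math.degrees(theta):.1f} deg")
```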

  13. ThrottleBot - Performance without Insight

    OpenAIRE

    Chang, Michael Alan; Panda, Aurojit; Tsai, Yuan-Cheng; Wang, Hantao; Shenker, Scott

    2017-01-01

    Large scale applications are increasingly built by composing sets of microservices. In this model the functionality for a single application might be split across 100s or 1000s of microservices. Resource provisioning for these applications is complex, requiring administrators to understand both the functioning of each microservice, and dependencies between microservices in an application. In this paper we present ThrottleBot, a system that automates the process of determining what resource wh...

  14. Closed-loop thrust and pressure profile throttling of a nitrous oxide/hydroxyl-terminated polybutadiene hybrid rocket motor

    Science.gov (United States)

    Peterson, Zachary W.

    Hybrid motors that employ non-toxic, non-explosive components with a liquid oxidizer and a solid hydrocarbon fuel grain have inherently safe operating characteristics. The inherent safety of hybrid rocket motors offers the potential to greatly reduce overall operating costs. Another key advantage of hybrid rocket motors is the potential for in-flight shutdown, restart, and throttle by controlling the pressure drop between the oxidizer tank and the injector. This research designed, developed, and ground tested a closed-loop throttle controller for a hybrid rocket motor using nitrous oxide and hydroxyl-terminated polybutadiene as propellants. The research simultaneously developed closed-loop throttle algorithms and lab scale motor hardware to evaluate the fidelity of the throttle simulations and algorithms. Initial open-loop motor tests were performed to better classify system parameters and to validate motor performance values. Deep-throttle open-loop tests evaluated limits of stable thrust that can be achieved on the test hardware. Open-loop tests demonstrated the ability to throttle the motor to less than 10% of maximum thrust with little reduction in effective specific impulse and acoustical stability. Following the open-loop development, closed-loop, hardware-in-the-loop tests were performed. The closed-loop controller successfully tracked prescribed step and ramp command profiles with a high degree of fidelity. Steady-state accuracy was greatly improved over uncontrolled thrust.
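
    To make the closed-loop throttling idea concrete, the sketch below closes a simple PI loop around a first-order thrust response, adjusting an oxidizer valve command to track a thrust setpoint. It is a generic illustration with invented dynamics and gains, not the controller or motor model developed in this work.

```python
# Generic closed-loop throttle sketch: a PI controller commanding an oxidizer
# valve, with thrust modeled as a first-order lag on valve position.
# Dynamics and gains are hypothetical, not from the referenced work.

KP, KI = 0.8, 2.0          # PI gains (per unit thrust error)
TAU = 0.3                  # s, notional thrust response time constant
DT = 0.01                  # s, control period

def run(setpoint_profile):
    thrust, integ = 0.0, 0.0
    history = []
    for target in setpoint_profile:
        err = target - thrust
        integ += err * DT
        valve = min(max(KP * err + KI * integ, 0.0), 1.0)   # saturate 0..1
        thrust += DT / TAU * (valve - thrust)               # first-order lag
        history.append(thrust)
    return history

# Step from 100% down to 25% thrust after 2 s, held for 3 s.
profile = [1.0] * 200 + [0.25] * 300
out = run(profile)
print(f"thrust at t = 5 s: {out[-1]:.3f} (command 0.250)")
```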

  15. Raising the efficiency of open-throttle liquefiers

    International Nuclear Information System (INIS)

    Zakharov, N.D.; Merkel', N.D.

    1986-01-01

    This paper makes a comparative thermodynamic analysis of certain open-throttle liquefier schemes that operate with multicomponent cryogenic agents. The most promising routes for implementing their advantages are determined. It is found that the correct choice of flow diagram and complex parameter optimization can raise the relative available energy (mass profile characteristics) of open-throttle liquefiers with mixtures to at least four times that of a nitrogen installation. The most economical scheme is one that involves mixing the components in feedback, followed by double throttling of nitrogen.

  16. DETERMINATION OF CHARACTERISTICS OF THROTTLING DEVICE FOR PNEUMATIC SPRING

    Directory of Open Access Journals (Sweden)

    O. H. Reidemeister

    2018-02-01

    Purpose. This paper focuses on determination of the dependence of the working medium flow on the capacity of the throttling device, its geometric features and the pressure difference in the pneumatic spring cylinder and in the auxiliary reservoir. Methodology. Calculation of the dependence of the working medium flow on the pressure drop is performed in two ways: (1) by numerical simulation of a stationary gas flow through a throttling element; (2) by an analytical calculation using empirical relationships (a control calculation to evaluate the reliability of the numerical simulation results). For the calculation, three models of throttling devices were chosen. The dependence of the flow rate of the working medium on the capacity of the throttling device and its geometric features was determined based on the approximation of the graphs of pressure drop against mass flow rate of the working medium. Findings. We obtained graphical dependencies between the pressure drop and the mass flow rate of the working medium from the two calculation options. Based on the results of calculations performed with the help of a software package with visualization of the results, we calculated a proportionality coefficient that describes the dependence of the working medium flow on the throttling device capacity and its geometric features for each of the throttling elements considered, with three degrees of closure. The air flow values obtained by numerical simulation are greater than the flow rates obtained from semi-empirical formulas. At the same time, they are in good qualitative agreement, and the quantitative difference averages 25%, which can be regarded as confirmation of the reliability of the numerical model. Based on the calculation results, we plotted the proportionality coefficient against the degree of closure of the throttling device. Originality. The work allows determining the degree of influence of the frictional component on the
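
    The proportionality coefficient described above can be illustrated with a simple quadratic fit of pressure drop against mass flow. The sketch below uses synthetic data in place of the paper's CFD results, so the fitted value is purely illustrative.

```python
import numpy as np

# Illustrative fit of dp = k * mdot**2 for a throttling element.
# The (mdot, dp) samples below are synthetic, not the paper's CFD data.
mdot = np.array([0.01, 0.02, 0.03, 0.04, 0.05])        # kg/s
dp = np.array([0.9e3, 3.7e3, 8.2e3, 14.5e3, 23.0e3])   # Pa

# Least-squares estimate of k for the no-intercept model dp = k * mdot**2.
k = float(np.sum(dp * mdot**2) / np.sum(mdot**4))
print(f"fitted proportionality coefficient k = {k:.3e} Pa/(kg/s)^2")
print("model check:", [f"{k*m*m:.0f} Pa" for m in mdot])
```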

  17. Theoretical Acoustic Absorber Design Approach for LOX/LCH4 Pintle Injector Rocket Engines

    Science.gov (United States)

    Candelaria, Jonathan

    dampening system for a 500 lbf and a 2000 lbf throttleable liquid oxygen/liquid methane pintle injector rocket engine.

  18. Spray characterization of a piezo pintle-type injector for gasoline direct injection engines

    Science.gov (United States)

    Nouri, J. M.; Hamid, M. A.; Yan, Y.; Arcoumanis, C.

    2007-10-01

    The sprays from a pintle-type nozzle injected into a constant volume chamber have been visualised by a high resolution CCD camera and quantified in terms of droplet velocity and diameter with a 2-D phase Doppler anemometry (PDA) system at an injection pressure of 200 bar and back-pressures varying from atmospheric to 12 bar. Spray visualization illustrated that the spray was string-structured, that the location of the strings remained constant from one injection to the next and that the spray structure was unaffected by back pressure. The overall spray cone angle was also stable and independent of back pressure whose effect was to reduce the spray tip penetration so that the averaged vertical spray tip velocity was reduced by 37% when the back-pressure increased from 1 to 12 bar. Detailed PDA measurements were carried out under atmospheric conditions at 2.5 and 10 mm from the injector exit with the results providing both the temporal and the spatial velocity and size distributions of the spray droplets. The maximum axial mean droplet velocity was 155 m/s at 2.5 mm from the injector which was reduced to 140 m/s at z = 10 mm. The string spacing determined from PDA measurements was around 0.375 mm and 0.6 mm at z=2.5 and 10 mm, respectively. The maximum mean droplet diameter was found to be in the core of the strings with values up to 40 μm at z=2.5 mm reducing to 20 μm at z=10 mm.

  19. Spray characterization of a piezo pintle-type injector for gasoline direct injection engines

    International Nuclear Information System (INIS)

    Nouri, J M; Hamid, M A; Yan, Y; Arcoumanis, C

    2007-01-01

    The sprays from a pintle-type nozzle injected into a constant volume chamber have been visualised by a high resolution CCD camera and quantified in terms of droplet velocity and diameter with a 2-D phase Doppler anemometry (PDA) system at an injection pressure of 200 bar and back-pressures varying from atmospheric to 12 bar. Spray visualization illustrated that the spray was string-structured, that the location of the strings remained constant from one injection to the next and that the spray structure was unaffected by back pressure. The overall spray cone angle was also stable and independent of back pressure whose effect was to reduce the spray tip penetration so that the averaged vertical spray tip velocity was reduced by 37% when the back-pressure increased from 1 to 12 bar. Detailed PDA measurements were carried out under atmospheric conditions at 2.5 and 10 mm from the injector exit with the results providing both the temporal and the spatial velocity and size distributions of the spray droplets. The maximum axial mean droplet velocity was 155 m/s at 2.5 mm from the injector which was reduced to 140 m/s at z = 10 mm. The string spacing determined from PDA measurements was around 0.375 mm and 0.6 mm at z=2.5 and 10 mm, respectively. The maximum mean droplet diameter was found to be in the core of the strings with values up to 40 μm at z=2.5 mm reducing to 20 μm at z=10 mm

  20. Manual Manipulation of Engine Throttles for Emergency Flight Control

    Science.gov (United States)

    Burcham, Frank W., Jr.; Fullerton, C. Gordon; Maine, Trindel A.

    2004-01-01

    If normal aircraft flight controls are lost, emergency flight control may be attempted using only engine thrust. Collective thrust is used to control flightpath, and differential thrust is used to control bank angle. Flight test and simulation results on many airplanes have shown that pilot manipulation of throttles is usually adequate to maintain up-and-away flight, but is most often not capable of providing safe landings. There are techniques that will improve control and increase the chances of a survivable landing. This paper reviews the principles of throttles-only control (TOC), a history of accidents or incidents in which some or all flight controls were lost, manual TOC results for a wide range of airplanes from simulation and flight, and suggested techniques for flying with throttles only and making a survivable landing.

  1. Background and principles of throttles-only flight control

    Science.gov (United States)

    Burcham, Frank W., Jr.

    1995-01-01

    There have been many cases in which the crew of a multi-engine airplane had to use engine thrust for emergency flight control. Such a procedure is very difficult, because the propulsive control forces are small, the engine response is slow, and airplane dynamics such as the phugoid and Dutch roll are difficult to damp with thrust. In general, thrust increases are used to climb, thrust decreases to descend, and differential thrust is used to turn. Average speed is not significantly affected by changes in throttle setting. Pitch control is achieved because of pitching moments due to speed changes, from thrust offset, and from the vertical component of thrust. Roll control is achieved by using differential thrust to develop yaw, which, through the normal dihedral effect, causes a roll. Control power in pitch and roll tends to increase as speed decreases. Although speed is not controlled by the throttles, configuration changes are often available (lowering gear, flaps, moving center-of-gravity) to change the speed. The airplane's basic stability is also a significant factor. Fuel slosh and gyroscopic moments are small influences on throttles-only control. The background and principles of throttles-only flight control are described.
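
    A toy illustration of the throttles-only principle described above: flightpath commands map to a collective throttle change on all engines, and bank commands map to a differential change between left and right engines. The mapping gains, trim setting, and limits are invented for illustration; as the paper notes, real throttles-only flying also has to contend with slow engine response and the phugoid and Dutch-roll dynamics, which this static mixer ignores.

```python
# Toy throttles-only-control mixer: collective thrust for flightpath,
# differential thrust for bank. Gains, trim, and limits are hypothetical.

def toc_mixer(flightpath_err_deg, bank_err_deg, trim=0.7,
              k_collective=0.05, k_differential=0.02):
    """Return (left, right) throttle settings in [0, 1].

    A positive bank error (need to roll right) raises the left engine, which
    yaws the nose right and rolls the aircraft right via the dihedral effect.
    """
    collective = k_collective * flightpath_err_deg
    differential = k_differential * bank_err_deg

    def clamp(x):
        return min(max(x, 0.0), 1.0)

    return clamp(trim + collective + differential), clamp(trim + collective - differential)

# Example: 1 deg low on flightpath and needing 5 deg of roll to the right.
print(toc_mixer(flightpath_err_deg=1.0, bank_err_deg=5.0))
```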

  2. Minimum throttling feedwater control in VVER-1000 and PWR NPPs

    International Nuclear Information System (INIS)

    Symkin, B.E.; Thaulez, F.

    2004-01-01

    This paper presents an approach for the design and implementation of advanced digital control systems that use a minimum-throttling algorithm for the feedwater control. The minimum-throttling algorithm for the feedwater control, i.e. for the control of steam generator level and of feedwater pump speed, is applicable for NPPs with variable speed feedwater pumps. It operates in such a way that the feedwater control valve in the most loaded loop is wide open, the steam generator level in this loop being controlled by the feedwater pump speed, while the feedwater control valves in the other loops are slightly throttling under the action of their control system, to accommodate the slight loop imbalances. This has the advantage of minimizing the valve pressure losses, hence minimizing the feedwater pump power consumption and increasing the net MWe. The benefit has been evaluated for specific plants as being roughly 0.7 and 2.4 MW. The minimum throttling mode has the further advantages of lowering the actuator efforts, with a potential positive impact on actuator life, and of minimizing feedwater pipeline vibrations. The minimum throttling mode of operation has been developed by the Ukrainian company LvivORGRES. It has been applied with a great deal of success on several VVER-1000 NPPs: six units of Zaporizhzha in Ukraine plus, with participation of Westinghouse, Kozloduy 5 and 6 in Bulgaria and South Ukraine 1 to 3 in Ukraine. The concept operates with both ON-OFF valves and true control valves. A study, jointly conducted by Westinghouse and LvivORGRES, is ongoing to demonstrate the applicability of the concept to PWRs having variable speed feedwater pumps and having, or installing, digital feedwater control, standalone or as part of a global digital control system. The implementation of the algorithm at VVER-1000 plants provided both safety improvement and direct commercial benefits. The minimum-throttling algorithm will similarly increase the performance of PWRs. The
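
    A schematic sketch of the selection logic described above, assuming per-loop valve demands from the steam generator level controllers are available: the most-loaded loop's valve is driven wide open and its demand is handed to the pump-speed controller, while the remaining valves trim only the loop-to-loop imbalance. This is an illustration of the concept, not the actual LvivORGRES or Westinghouse control algorithm.

```python
# Conceptual minimum-throttling feedwater sketch (not the actual plant logic).
# Inputs: per-loop valve demand from each steam generator level controller,
# expressed as a fraction of full flow demand.

def minimum_throttling(valve_demands):
    """Return (valve_positions, pump_speed_demand).

    The loop with the highest demand gets a wide-open valve and its demand is
    passed to the variable-speed feedwater pumps; the other valves throttle
    only by the margin between their own demand and the leading loop's.
    """
    lead = max(range(len(valve_demands)), key=lambda i: valve_demands[i])
    pump_speed_demand = valve_demands[lead]
    positions = []
    for i, demand in enumerate(valve_demands):
        if i == lead:
            positions.append(1.0)                                   # wide open
        else:
            positions.append(min(1.0, demand / max(pump_speed_demand, 1e-6)))
    return positions, pump_speed_demand

print(minimum_throttling([0.82, 0.80, 0.79, 0.81]))                 # four-loop example
```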

  3. Diesel engine exhaust particulate filter with intake throttling incineration control

    Energy Technology Data Exchange (ETDEWEB)

    Ludecke, O.; Rosebrock, T.

    1980-07-08

    A description is given of a diesel engine exhaust filter and particulate incineration system in combination with a diesel engine having a normally unthrottled air induction system for admitting combustion air to the engine and an exhaust system for carrying off spent combustion products exhausted from the engine, said filter and incineration system comprising: a combustion resistant filter disposed in the exhaust system and operative to collect and retain portions of the largely carbonaceous particulate matter contained in the engine exhaust products, said filter being capable of withstanding without substantial damage internal temperatures sufficient to burn the collected particulate matter, a throttle in the induction system and operable to restrict air flow into the engine to reduce the admittance of excess combustion air and thereby increase engine exhaust gas temperature, and means to actuate said throttle periodically during engine operation to an air flow restricting burn mode capable of raising the particulates in said filter to their combustion temperature under certain engine operating conditions and to maintain said throttle mode for an interval adequate to burn retained particulates in the filter.
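
    The periodic burn-mode actuation described in the patent can be sketched as a simple state machine: once an estimated soot load crosses a threshold and engine conditions permit, the intake throttle is held in its restricting burn position for a fixed interval. The thresholds, timing, and soot estimate below are invented placeholders, not values from the patent.

```python
# Sketch of the intake-throttling regeneration logic (hypothetical thresholds).

SOOT_TRIGGER_G = 8.0      # estimated soot load that starts a burn, grams
BURN_DURATION_S = 300.0   # how long the throttle stays in burn mode, seconds
MIN_EXHAUST_TEMP_C = 250  # engine condition gate for starting a burn

class RegenController:
    def __init__(self):
        self.burn_time_left = 0.0

    def update(self, soot_g, exhaust_temp_c, dt):
        """Return the throttle command: 'open' (normal) or 'burn' (restricted)."""
        if self.burn_time_left > 0.0:
            self.burn_time_left -= dt
            return "burn"
        if soot_g >= SOOT_TRIGGER_G and exhaust_temp_c >= MIN_EXHAUST_TEMP_C:
            self.burn_time_left = BURN_DURATION_S
            return "burn"
        return "open"

ctrl = RegenController()
print(ctrl.update(soot_g=8.5, exhaust_temp_c=310, dt=1.0))   # -> 'burn'
print(ctrl.update(soot_g=2.0, exhaust_temp_c=310, dt=1.0))   # -> 'burn' (timer running)
```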

  4. DEPENDENCE OF AIR SPRING PARAMETERS ON THROTTLE RESISTANCE

    Directory of Open Access Journals (Sweden)

    O. H. Reidemeister

    2016-04-01

    Purpose. In this paper it is necessary: (1) to research and analyse the influence of throttle element pneumatic resistance on the elastic and damping parameters of an air spring; (2) to obtain the dependence of air spring parameters on the throttle element pneumatic resistance value. Methodology. The work presents the elaborated model of the air spring as a dynamic system with three phase coordinates (cylinder pressure, auxiliary reservoir pressure, cylinder air mass). Stiffness and viscosity coefficients were determined on the basis of the system response to a harmonic kinematic disturbance. The data for the analysis are obtained by changing the capacity of the connecting element and the law of pressure variation between the reservoir and the cylinder. The viscosity coefficient is regarded as the viscosity ratio of a hydraulic damper which for one oscillation cycle consumes the same energy as the air spring. The process of air condition change inside the cylinder (reservoir) is considered to be adiabatic; the mass air flow through the connecting element depends on the pressure difference. Findings. We obtained the curves for spring viscosity and stiffness coefficients as functions of the throttle resistance for three different laws linking airflow through the cylinder with the pressure difference between cylinder and reservoir. At both the maximum and minimum limiting resistance values the spring viscosity tends to zero, reaching its peak at mean resistance values. Stiffness increases monotonically with increasing resistance and tends to the limit corresponding to the absence of an auxiliary reservoir (at high resistance) and to the increase in cylinder volume by the reservoir volume (at low resistance). Originality. The designed scheme allows determining the optimal parameters of the elastic and damping properties of the pneumatic system as a function of the throttle element air resistance. Practical value. The ability to predict the parameters of elastic and damping properties
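
    The three-state model described above (cylinder pressure, reservoir pressure, cylinder air mass) can be sketched as a small ODE system: the standard adiabatic filling/emptying approximation for each volume, coupled by an orifice flow taken here as proportional to the pressure difference. The linear orifice law, the constant flow temperature, and every numerical value below are simplifying assumptions for illustration; this is not the authors' calculation.

```python
import math

# Simplified three-state air-spring sketch: cylinder pressure, reservoir
# pressure, cylinder air mass, coupled by a linear orifice law w = C*(pc - pr).
# Adiabatic volumes, constant flow temperature; all numbers are hypothetical.

K_GAS, R_AIR, T = 1.4, 287.0, 300.0
V_R = 0.030                      # m^3, auxiliary reservoir volume
V_C0, A_PISTON = 0.010, 0.05     # m^3 neutral cylinder volume, m^2 piston area
C_ORIFICE = 2.0e-7               # kg/(s*Pa), throttle capacity (the varied parameter)
AMP, FREQ = 0.01, 1.5            # m, Hz kinematic excitation

def simulate(cycles=5, dt=1e-4):
    pc = pr = 5.0e5                      # Pa, initial charge pressure
    mc = pc * V_C0 / (R_AIR * T)         # kg of air in the cylinder (third state)
    t, pmin, pmax = 0.0, pc, pc
    while t < cycles / FREQ:
        z = AMP * math.sin(2 * math.pi * FREQ * t)
        dz = AMP * 2 * math.pi * FREQ * math.cos(2 * math.pi * FREQ * t)
        vc, dvc = V_C0 - A_PISTON * z, -A_PISTON * dz
        w = C_ORIFICE * (pc - pr)                        # cylinder -> reservoir flow
        pc += dt * (K_GAS / vc) * (-R_AIR * T * w - pc * dvc)
        pr += dt * (K_GAS / V_R) * (R_AIR * T * w)
        mc -= dt * w
        t += dt
        pmin, pmax = min(pmin, pc), max(pmax, pc)
    return pmin, pmax

print("cylinder pressure swing: %.1f .. %.1f kPa" % tuple(p / 1e3 for p in simulate()))
```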

  5. ASIL determination for motorbike's Electronics Throttle Control System (ETCS) malfunction

    Science.gov (United States)

    Zaman Rokhani, Fakhrul; Rahman, Muhammad Taqiuddin Abdul; Ain Kamsani, Noor; Sidek, Roslina Mohd; Saripan, M. Iqbal; Samsudin, Khairulmizam; Khair Hassan, Mohd

    2017-11-01

    Electronics Throttle Control System (ETCS) is the principal electronic unit in all fuel-injection engine motorbikes, augmenting engine performance efficiency in comparison to the conventional carburetor based engine. ETCS is regarded as a safety-critical component, whereby an ETCS malfunction can cause an unintended acceleration or deceleration event, which can be hazardous to riders. In this study, Hazard Analysis and Risk Assessment, an ISO 26262 functional safety standard analysis, has been applied to a motorbike's ETCS to determine the required automotive safety integrity level. Based on the analysis, the established automotive safety integrity level can help to derive technical and functional safety measures for ETCS development.
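
    As background on how the HARA output maps to an ASIL: ISO 26262-3 tabulates the ASIL as a function of severity (S1-S3), exposure (E1-E4), and controllability (C1-C3). A commonly cited arithmetic shortcut to that table is sketched below; it is a paraphrase of the standard's table, and the normative table should be consulted for any real classification. The example hazard classification is hypothetical, not the paper's result.

```python
# Commonly cited arithmetic shortcut for the ISO 26262 ASIL table: add the
# class indices (S1..S3 -> 1..3, E1..E4 -> 1..4, C1..C3 -> 1..3) and map the
# sum to an ASIL. This paraphrases the standard; consult ISO 26262-3 for
# normative use.

ASIL_BY_SUM = {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}

def asil(severity, exposure, controllability):
    total = severity + exposure + controllability
    return ASIL_BY_SUM.get(total, "QM")

# Hypothetical example: unintended acceleration that is life-threatening (S3),
# occurs in a common riding situation (E4), and is hard to control (C3).
print(asil(3, 4, 3))    # -> ASIL D
```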

  6. Lyapunov-based constrained engine torque control using electronic throttle and variable cam timing

    NARCIS (Netherlands)

    Feru, E.; Lazar, M.; Gielen, R.H.; Kolmanovsky, I.V.; Di Cairano, S.

    2012-01-01

    In this paper, predictive control of a spark ignition engine equipped with an electronic throttle and a variable cam timing actuator is considered. The objective is to adjust the throttle angle and the engine cam timing in order to reduce the exhaust gas emissions while maintaining fast and

  7. Convex modeling and optimization of a vehicle powertrain equipped with a generator-turbine throttle unit

    NARCIS (Netherlands)

    Marinkov, S.; Murgovski, N.; de Jager, A.G.

    2017-01-01

    This paper investigates an internal combustion (gasoline) engine throttled by a generator-turbine unit. Apart from throttling, the purpose of this device is to complement the operation of a conventional car alternator and support its downsizing by introducing an additional source of energy for the

  8. Design and analysis of throttle orifice applying to small space with large pressure drop

    International Nuclear Information System (INIS)

    Li Yan; Lu Daogang; Zeng Xiaokang

    2013-01-01

    Throttle orifices are widely used in various pipe systems of nuclear power plants. Improper placement of orifices would aggravate the vibration of the pipe with strong noise, damaging the structure of the pipe and the integrity of the system. In this paper, the effects of orifice diameter, thickness, eccentric distance and chamfering on the throttling are analyzed using CFD software. Based on these results, multiple eccentric orifices are proposed as throttle orifices suited to small spaces with large pressure drops. The results show that the multiple eccentric orifices can effectively suppress cavitation and flashing while generating a large pressure drop. (authors)
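
    A rough illustration of why the pressure drop is split across multiple orifices: staging the drop keeps the local minimum pressure downstream of each plate above the liquid's vapor pressure, which is what suppresses cavitation and flashing. The sketch below divides a large drop across equal stages and checks each stage against a crude vena-contracta recovery model; the recovery factor and all pressures are hypothetical, and this is not the paper's CFD analysis.

```python
# Staged-orifice illustration: splitting one large pressure drop over several
# orifices keeps the local minimum (vena contracta) pressure above the vapor
# pressure. Crude recovery model; all numbers are hypothetical.

P_IN, P_OUT = 10.0e6, 3.0e6     # Pa, upstream / downstream of the whole device
P_VAPOR = 1.0e6                 # Pa, vapor pressure at operating temperature
K_RECOVERY = 0.5                # crude dip of vena-contracta pressure per stage

def min_local_pressure(n_stages):
    dp_stage = (P_IN - P_OUT) / n_stages
    worst = float("inf")
    for i in range(n_stages):
        p_down = P_IN - dp_stage * (i + 1)          # pressure after stage i
        worst = min(worst, p_down - K_RECOVERY * dp_stage)
    return worst

for n in (1, 2, 3, 4):
    p_min = min_local_pressure(n)
    verdict = "cavitation risk" if p_min < P_VAPOR else "ok"
    print(f"{n} stage(s): min local pressure {p_min/1e6:5.2f} MPa -> {verdict}")
```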

  9. Utilization of the heat of mixing in open-circuit throttle refrigerators

    International Nuclear Information System (INIS)

    Zhakharov, N.D.; Anikeev, G.N.; Grezin, A.K.

    1986-01-01

    Open-circuit throttle refrigerators based on gas mixtures operate, as a rule, according to a single-stream scheme. The refrigerating effect is determined by the isothermal throttling effect of the mixture in the cylinder under the conditions at the inlet to the cryogenic unit. The authors use the heat of mixing of the cryogenic mixtures to increase the available refrigerating effect. Data are presented on mixtures of nitrogen and Freon-13; the thermodynamic properties of these compounds have been investigated experimentally over a wide range of parameters. It was found that, in the case of correct selection of the scheme and complex optimization of the parameters, two-stream throttle refrigerators exceed the single-stream throttle refrigerators by at least a factor of 1.5 with respect to relative useful energy. Taking account of the design, technological, and operational parameters, the most promising scheme is the one with mixing of the components in reverse flow.

  10. Influence of throttling of the heavy fraction on the uranium isotope separation in the separation nozzle

    International Nuclear Information System (INIS)

    Bley, P.; Ehrfeld, W.; Heiden, U.

    1978-04-01

    In a separation nozzle cascade for enrichment of U-235 the cut of the separation elements is adjusted by throttling the heavy fraction. This control process influences directly the flow properties in the nozzle and may noticeably change its separation characteristics. This paper deals with an experimental investigation of the throttling effect on the separation and control characteristics of the separation nozzle operated with a H2/UF6 mixture. In consideration of the extremely small characteristic dimensions of commercial separation nozzle elements, the influence of manufacturing tolerances on the characteristics of the throttled nozzle was analysed in detail. It appears that the elementary effect of isotope separation increases by throttling of the heavy fraction up to 5% without changing the optimum operating conditions. This increase of the elementary effect is not only obtained for separation nozzles with zero tolerances but also for separation nozzles having finite tolerances of the skimmer position. Tolerances of the nozzle width, however, become increasingly detrimental when the heavy fraction is throttled. Regarding the control characteristics of the separation nozzle, it was found that the UF6 cut of the throttled nozzle reacts more sensitively to alterations of the operating pressures and less sensitively to alterations of the UF6 concentration of the process gas mixture.

  11. Adaptive Backstepping Sliding-Mode Control of the Electronic Throttle System in Modern Automobiles

    Directory of Open Access Journals (Sweden)

    Rui Bai

    2014-01-01

    In modern automobiles, the electronic throttle is a DC-motor-driven valve that regulates air inflow into the vehicle's combustion system. The electronic throttle is increasingly being used in order to improve vehicle drivability, fuel economy, and emissions. The electronic throttle system has nonlinear dynamical characteristics with unknown disturbances and parameters. First, the nonlinear dynamical model of the electronic throttle is built in this paper. Based on the model and using the backstepping design technique, a new adaptive backstepping sliding-mode controller of the electronic throttle is developed. During the backstepping design process, a parameter adaptive law is designed to estimate the unknown parameter, and a sliding-mode control term is applied to compensate for the unknown disturbance. The proposed controller can make the actual angle of the electronic throttle track its set point with satisfactory performance. Finally, a computer simulation is performed, and the simulation results verify that the proposed control method can achieve favorable tracking performance.
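
    To illustrate the flavor of such a controller (not the paper's full adaptive backstepping design), here is a simplified sliding-mode tracking sketch for a second-order throttle model with a bounded unknown disturbance. The plant parameters, gains, and disturbance are invented, and the adaptive parameter-estimation law of the paper is omitted.

```python
import math

# Simplified sliding-mode tracking sketch for an electronic throttle modeled as
# theta'' = -a1*theta' - a2*theta + b*u + d(t), with d an unknown but bounded
# disturbance. Parameters and gains are hypothetical; the paper's adaptive
# backstepping parameter estimator is not reproduced here.

A1, A2, B = 12.0, 80.0, 40.0        # notional plant parameters
LAM, K_SW, PHI = 20.0, 6.0, 0.05    # surface slope, switching gain, boundary layer
DT = 1e-3

def sat(x):                         # smooth stand-in for sign() to limit chatter
    return max(-1.0, min(1.0, x))

theta, omega = 0.0, 0.0
theta_ref = math.radians(30)        # constant set point, so its derivatives are zero
for k in range(2000):               # simulate 2 s
    e, de = theta - theta_ref, omega
    s = de + LAM * e                                    # sliding surface
    u = (A1 * omega + A2 * theta - LAM * de - K_SW * sat(s / PHI)) / B
    d = 0.5 * math.sin(10.0 * k * DT)                   # unknown disturbance
    omega += DT * (-A1 * omega - A2 * theta + B * u + d)
    theta += DT * omega
print(f"tracking error after 2 s: {math.degrees(theta - theta_ref):+.3f} deg")
```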

  12. Logic Model Checking of Unintended Acceleration Claims in the 2005 Toyota Camry Electronic Throttle Control System

    Science.gov (United States)

    Gamble, Ed; Holzmann, Gerard

    2011-01-01

    Part of the US DOT investigation of Toyota SUA involved analysis of the throttle control software. JPL LaRS applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built. Some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses

  13. Performance of a Throttle Cycle Refrigerator with Nitrogen-Hydrocarbon and Argon-Hydrocarbon Mixtures

    Science.gov (United States)

    Venkatarathnam, G.; Senthil Kumar, P.; Srinivasa Murthy, S.

    2004-06-01

    Throttle cycle refrigerators are a class of vapor compression refrigerators that can provide refrigeration at cryogenic temperatures and operate with refrigerant mixtures. The performance of our prototype refrigerators with nitrogen-hydrocarbon, nitrogen-hydrocarbon-helium and argon-hydrocarbon refrigerant mixtures is presented in this paper.

  14. Exergy analysis on throttle reduction efficiency based on real gas equations

    International Nuclear Information System (INIS)

    Luo, Yuxi; Wang, Xuanyin

    2010-01-01

    This paper proposes an approach to calculate the efficiency of throttling in which exergy (available energy) is used to evaluate the energy conversion processes. In the exergy calculation for real gases, a difficult part of the integration can be removed by judiciously chosen thermodynamic paths; the compressibility factor is calculated by using the Peng-Robinson (P-R) equation. It is found that the largest deviation between the exergies calculated by the real gas equation and the ideal gas assumption is about 1%. Because the exergy is a function of pressure and temperature, the Joule-Thomson coefficients are used to calculate the temperature changes of throttling, based on the compressibility factors of the Soave-Redlich-Kwong (S-R-K) and P-R equations, and the temperature decreases are compared with those calculated by an empirical formula. The result shows that the heat exergy contributes very little in throttling. The simple ideal gas equation is suggested for calculating the efficiency of throttling for air at atmospheric temperatures.
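
    For orientation, the sketch below computes the specific flow exergy of air across a throttle under the ideal-gas assumption, psi = (h - h0) - T0*(s - s0) with constant cp, which the paper reports differs from the real-gas (P-R) result by only about 1% near atmospheric temperature. The real-gas compressibility correction and the Joule-Thomson temperature change are not implemented here, and the throttle conditions are hypothetical.

```python
import math

# Ideal-gas flow exergy of air: psi = (h - h0) - T0*(s - s0), constant cp.
# The real-gas (Peng-Robinson) correction discussed in the paper is omitted.

R_AIR, CP = 287.0, 1005.0        # J/(kg*K)
T0, P0 = 298.15, 101_325.0       # dead-state temperature and pressure

def flow_exergy(T, p):
    dh = CP * (T - T0)
    ds = CP * math.log(T / T0) - R_AIR * math.log(p / P0)
    return dh - T0 * ds           # J/kg

p_in, p_out, T_in = 8.0e5, 1.5e5, 300.0   # hypothetical throttle conditions
# Ideal-gas enthalpy is unchanged by throttling, so T_out ~= T_in here.
psi_in, psi_out = flow_exergy(T_in, p_in), flow_exergy(T_in, p_out)
print(f"exergy destroyed in throttling: {psi_in - psi_out:.0f} J/kg "
      f"({(psi_in - psi_out)/psi_in:.1%} of inlet exergy)")
```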

  15. Investigation of the Flow Rate Effect Upstream of the Constant-Geometry Throttle on the Gas Mass Flow

    Directory of Open Access Journals (Sweden)

    Yu. M. Timofeev

    2016-01-01

    The turbulent-flow throttles are used in pneumatic systems and gas-supply ones to restrict or measure gas mass flow. It is customary to install the throttles in joints of pipelines (in tee joints and cross tees) or in joints of pipelines with pneumatic automation devices. Presently, in designing pneumatic systems and gas-supply ones, the gas mass flow through a throttle is calculated by a known equation derived from the Saint-Venant-Wantzel formula for the adiabatic flow of an ideal gas through a nozzle from a tank of unlimited capacity. Neglect of the gas velocity at the throttle inlet is one of the assumptions taken in the development of the above equation. As may be seen in practice, in actual systems the diameters of the throttle and the pipe wherein it is mounted can be commensurable. Neglect of the inlet velocity therewith can result in an error when determining the required throttle diameter in a design calculation and the flow rate in a checking calculation, as well as when measuring a flow rate in the course of a test. The theoretical study has revealed that the flow velocity at the throttle inlet is responsible for two parameter values: the outlet flow velocity and the critical pressure ratio, which in turn determine the gas mass flow value. To calculate the gas mass flow, dependencies are given in the paper which allow taking into account the flow rate at the throttle inlet. The analysis of the obtained dependencies has revealed that the degree of influence of the inlet flow rate upon the mass flow is defined by two parameters: the pressure ratio at the throttle and the open area ratio of the throttle and the pipe wherein it is mounted. An analytical investigation has been pursued to evaluate the extent to which the gas mass flow through the throttle is affected by the inlet flow rate. The findings of the investigation and the indications for using the present dependencies are given in this paper. By and large the investigation allowed the
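
    The baseline equation the paper starts from can be sketched as follows: mass flow of an ideal gas through a throttle from stagnation conditions, with the subcritical expression switching to the choked value below the critical pressure ratio. This is the textbook form that neglects inlet velocity; the paper's contribution (the inlet-velocity correction and the modified critical ratio) is not reproduced here, and the bore size and conditions are hypothetical.

```python
import math

# Textbook mass flow of an ideal gas through a throttle of area A from
# stagnation conditions (p0, T0) to back pressure p, neglecting inlet velocity.
# The inlet-velocity correction studied in the paper is not included.

K, R = 1.4, 287.0                                     # air
P_CRIT_RATIO = (2.0 / (K + 1.0)) ** (K / (K - 1.0))   # ~0.528 for air

def mass_flow(A, p0, T0, p, cd=0.8):
    r = max(p / p0, P_CRIT_RATIO)   # flow is choked below the critical ratio
    term = (2.0 * K / (R * T0 * (K - 1.0))) * (r ** (2.0 / K) - r ** ((K + 1.0) / K))
    return cd * A * p0 * math.sqrt(term)

A = math.pi * (2.0e-3 / 2) ** 2                       # 2 mm throttle bore
for p_back in (6.0e5, 4.0e5, 2.0e5, 1.0e5):
    print(f"p_back = {p_back/1e5:3.1f} bar -> "
          f"mdot = {mass_flow(A, 7.0e5, 293.0, p_back)*1e3:.2f} g/s")
```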

  16. Tandem mirror experiment upgrade (TMX-U) throttle, mechanical design, construction, installation, and alignment

    International Nuclear Information System (INIS)

    Pedrotti, L.R.; Wong, R.L.

    1983-01-01

    We will soon add a high-field axisymmetric throttle region to the central cell of the TMX-U. Field amplitude will be adjusted between 2.25 and 6.0 T. This field is produced by adding a high-field solenoid and a cee coil to each end of the central cell. We describe these coils as well as the additions to the restraint structure. We analyzed the stresses within the solenoid using the STANSOL code. In addition, we performed a finite-element structural analysis of the complete magnet set with the SAP4 code. Particular attention was paid to the transition section where the new magnets were added and where the currents in the existing magnets were increased. The peak temperature rise in the throttle coil was calculated to be 41 °C above ambient.

  17. The patent-technical investigation of throttling control devices for nuclear reactors

    International Nuclear Information System (INIS)

    Ionajtis, R.R.; Kolganova, L.I.

    1979-01-01

    Presented are the results of an analysis of the statistical distribution and dynamics of patents on throttling control devices (TCD) used in nuclear power plants to regulate the coolant flow in the core and technological channels. 197 foreign patents, granted in 1950-75, are studied. To analyze the patents, a TCD classification is proposed according to the degree of change of the flowing-part geometry (passing cross section), the throttling method, the drive type (the way the movable part is moved), the used medium (coolant), and the location. The investigation has shown that TCDs with a smoothly or stepwise changing flowing part (mainly due to narrowing of the passing cross section, for gaseous or other, not specified, coolant) are of great interest for designers. Most of such devices are intended to be provided with a drive from an external source and to be placed in the technological channel.

  18. ASIL determination for motorbike’s Electronics Throttle Control System (ETCS) malfunction

    OpenAIRE

    Rokhani Fakhrul Zaman; Abdul Rahman Muhammad Taqiuddin; Kamsani Noor Ain; Mohd Sidek Roslina; Saripan M Iqbal; Samsudin Khairulmizam; Hassan Mohd Khair

    2017-01-01

    Electronics Throttle Control System (ETCS) is the principal electronic unit in all fuel injection engine motorbike, augmenting the engine performance efficiency in comparison to the conventional carburetor based engine. ETCS is regarded as a safety-critical component, whereby ETCS malfunction can cause unintended acceleration or deceleration event, which can be hazardous to riders. In this study, Hazard Analysis and Risk Assessment, an ISO26262 functional safety standard analysis has been app...

  19. The Evolution of Utilizing Manual Throttles to Avoid Low LH2 NPSP at the SSME Inlet

    Science.gov (United States)

    Henfling, Rick

    2011-01-01

    Even before the first flight of the Space Shuttle, it was understood that low liquid hydrogen (LH2) Net Positive Suction Pressure (NPSP) at the inlet to the Space Shuttle Main Engine (SSME) can have adverse effects on engine operation. A number of failures within both the External Tank (ET) and the Orbiter Main Propulsion System could result in a low LH2 NPSP condition. Operational workarounds were developed to take advantage of the onboard crew's ability to manually throttle down the SSMEs, which alleviated the low LH2 NPSP condition. A throttling down of the SSME resulted in an increase in NPSP, mainly due to the reduction in frictional flow losses while at a lower throttle setting. As engineers refined their understanding of the NPSP requirements for the SSME (through a robust testing program), the operational techniques evolved to take advantage of these additional capabilities. Currently the procedure, which for early Space Shuttle missions required a Return-to-Launch-Site abort, now would result in a nominal Main Engine Cut Off (MECO) and no loss of mission objectives.
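
    The mechanism described (throttling down raises NPSP because frictional losses in the feed system scale roughly with flow squared) can be illustrated with a toy feedline budget. Every number below is an illustrative placeholder, not Shuttle MPS data; only the quadratic friction scaling and the listed power levels are taken as given.

```python
# Toy NPSP budget for a cryogenic feedline: NPSP = ullage + head - friction - vapor.
# Friction losses are assumed to scale with flow (throttle setting) squared.
# All pressure values are illustrative placeholders, not Shuttle MPS data.

P_ULLAGE = 220.0e3        # Pa, tank ullage pressure
P_HEAD = 30.0e3           # Pa, hydrostatic head at the engine inlet
P_VAPOR = 200.0e3         # Pa, propellant vapor pressure at tank conditions
DP_FRICTION_FULL = 40.0e3 # Pa, feed-system friction loss at 104% throttle

def npsp(throttle_frac):
    friction = DP_FRICTION_FULL * (throttle_frac / 1.04) ** 2
    return P_ULLAGE + P_HEAD - friction - P_VAPOR

for setting in (1.04, 0.91, 0.67):     # representative SSME power levels
    print(f"throttle {setting:.0%}: NPSP = {npsp(setting)/1e3:5.1f} kPa")
```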

  20. Manual Throttles-Only Control Effectivity for Emergency Flight Control of Transport Aircraft

    Science.gov (United States)

    Stevens, Richard; Burcham, Frank W., Jr.

    2009-01-01

    If normal aircraft flight controls are lost, emergency flight control may be attempted using only the thrust of engines. Collective thrust is used to control flightpath, and differential thrust is used to control bank angle. One issue is whether a total loss of hydraulics (TLOH) leaves an airplane in a recoverable condition. Recoverability is a function of airspeed, altitude, flight phase, and configuration. If the airplane can be recovered, flight test and simulation results on several transport-class airplanes have shown that throttles-only control (TOC) is usually adequate to maintain up-and-away flight, but executing a safe landing is very difficult. There are favorable aircraft configurations, and also techniques that will improve recoverability and control and increase the chances of a survivable landing. The DHS and NASA have recently conducted a flight and simulator study to determine the effectivity of manual throttles-only control as a way to recover and safely land a range of transport airplanes. This paper discusses TLOH recoverability as a function of conditions, and TOC landability results for a range of transport airplanes, and some key techniques for flying with throttles and making a survivable landing. Airplanes evaluated include the B-747, B-767, B-777, B-757, A320, and B-737 airplanes.

  1. Selected Aircraft Throttle Controller With Support Of Fuzzy Expert Inference System

    Directory of Open Access Journals (Sweden)

    Żurek Józef

    2014-12-01

    The paper describes a control support method for the Zlin 143Lsi aircraft engine work parameters, with hourly fuel flow as the main factor under consideration. The method concerns a design of an aircraft throttle control support system using fuzzy logic (fuzzy inference). The primary purpose of the system is aircraft performance optimization, reducing flight cost at the same time and supporting proper aircraft engine maintenance. Matlab software and the Fuzzy Logic Toolbox were used in the project. Operation of the system is presented using twenty test samples, five of which are presented graphically. In addition, the system control surface, included in the paper, supports analysis over the system's full working range.
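
    A minimal fuzzy-inference sketch in the spirit of the system described (the referenced work used Matlab's Fuzzy Logic Toolbox): triangular membership functions on hourly fuel flow, two rules, and a weighted-mean defuzzification over output singletons to a suggested throttle change. The membership ranges, rules, and output values are invented for illustration, not taken from the paper.

```python
# Minimal fuzzy inference sketch (illustrative memberships and rules; the
# referenced system was built with Matlab's Fuzzy Logic Toolbox).

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def throttle_adjustment(fuel_flow_lph):
    # Input memberships on hourly fuel flow (liters per hour, hypothetical).
    low = tri(fuel_flow_lph, 10, 18, 26)
    high = tri(fuel_flow_lph, 24, 32, 40)
    # Rules: IF flow is high THEN reduce throttle; IF flow is low THEN increase.
    # Output singletons (percent throttle change), weighted-mean defuzzification.
    rules = [(high, -5.0), (low, +5.0)]
    num = sum(weight * out for weight, out in rules)
    den = sum(weight for weight, _ in rules)
    return num / den if den > 0 else 0.0

print(f"suggested throttle change at 30 L/h: {throttle_adjustment(30.0):+.1f} %")
```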

  2. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Virginia Polytechnic Institute and State University; Savara, Aditya

    2017-01-01

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate number of KMC steps / cpu-time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
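
    A schematic sketch of rank-based rate throttling in the spirit of the description above (not the published SQERTSS algorithm or its staggered quasi-equilibrium safeguards): processes are binned into speed ranks by observed event frequency, and the rate constants of fast ranks are scaled down toward the slowest rank so that rare, rate-limiting events are sampled more often. The rank width, example rates, and event counts are hypothetical.

```python
import math

# Schematic rank-based throttling sketch: bin processes into speed ranks by
# observed event frequency and scale fast ranks down toward the slowest rank.
# Not the published SQERTSS algorithm; its quasi-equilibrium checks are omitted.

def throttle_rates(rates, event_counts, decades_per_rank=1.0, max_scale=None):
    """Return throttled rate constants for the next simulation interval."""
    active = [c for c in event_counts if c > 0]
    slowest = min(active)
    throttled = []
    for rate, count in zip(rates, event_counts):
        if count == 0:
            throttled.append(rate)          # never observed: leave untouched
            continue
        rank = math.floor(math.log10(count / slowest) / decades_per_rank)
        scale = 10.0 ** (rank * decades_per_rank)
        if max_scale is not None:
            scale = min(scale, max_scale)
        throttled.append(rate / scale)
    return throttled

rates = [1e9, 1e6, 1e3, 1.0]                # s^-1, fast ... slow processes
counts = [10_000_000, 10_000, 10, 1]        # events observed in the last interval
print(throttle_rates(rates, counts))        # fast processes compressed toward slow
```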

  3. Performance research on modified KCS (Kalina cycle system) 11 without throttle valve

    International Nuclear Information System (INIS)

    He, Jiacheng; Liu, Chao; Xu, Xiaoxiao; Li, Yourong; Wu, Shuangying; Xu, Jinliang

    2014-01-01

    Two modified systems based on a KCS (Kalina cycle system) 11 with a two-phase expander substituting for the throttle valve are proposed. The two-phase expander is located between the regenerator and the absorber in the B-modified cycle and between the separator and the regenerator in the C-modified cycle. A thermodynamic performance analysis of both the original KCS 11 and the modified systems is carried out. The optimization of two key parameters (the concentration of working fluid and the temperature of cooling water) is also conducted. It is shown that the two modified cycles have different performance under the investigated conditions. Results also indicate that the C-modified cycle can obtain a better thermodynamic effect than the B-modified cycle. The temperature of cooling water plays an important role in improving the system performance. When the cooling water temperature drops from 303 K to 278 K, the C-modified cycle thermal efficiency can be improved by 27%.

    Highlights:
    • The throttling valve is replaced by a two-phase expander to recover the expansion work.
    • The thermodynamic performance of the two modified cycle systems is very different.
    • The maximum increase of work output by the C-modified cycle compared with KCS (Kalina cycle system) 11 is 9.4%.
    • The ranges of ammonia content of the B-modified cycle are rather larger.

  4. The Evolution of Utilizing Manual Throttles to Avoid Excessively Low LH2 NPSP at the SSME Inlet

    Science.gov (United States)

    Henfling, Rick

    2011-01-01

    In the late 1970s, years before the Space Shuttle flew its maiden voyage, it was understood that low liquid hydrogen (LH2) Net Positive Suction Pressure (NPSP) at the inlet to the Space Shuttle Main Engine (SSME) could have adverse effects on engine operation. A number of failures within both the External Tank (ET) and the Orbiter Main Propulsion System (MPS) could result in a low LH2 NPSP condition, which at extremely low levels can result in cavitation of SSME turbomachinery. Operational workarounds were developed to take advantage of the onboard crew's ability to manually throttle down the SSMEs (via the Pilot's Speedbrake/Throttle Controller), which alleviated the low LH2 NPSP condition. Manually throttling the SSME to a lower power level resulted in an increase in NPSP, mainly due to the reduction in frictional flow losses while at the lower throttle setting. Early in the Space Shuttle Program's history, the relevant Flight Rule for the Booster flight controllers in Mission Control did not distinguish between ET and Orbiter MPS failures and the same crew action was taken for both. However, after a review of all Booster operational techniques following the Challenger disaster in the late 1980s, it was determined that manually throttling the SSME to a lower power was only effective for Orbiter MPS failures and the Flight Rule was updated to reflect this change. The Flight Rule and associated crew actions initially called for a single throttle step to minimum power level when a low threshold for NPSP was met. As engineers refined their understanding of the NPSP requirements for the SSME (through a robust testing program), the operational techniques evolved to take advantage of the additional capabilities. This paper will examine the evolution of the Flight Rule and associated procedure and how increases in knowledge about the SSME and the Space Shuttle vehicle as a whole have helped shape their development. What once was a single throttle step when NPSP decreased to a

  5. The Evolution of Utilizing Manual Throttling to Avoid Excessively Low LH2 NPSP at the SSME Inlet

    Science.gov (United States)

    Henfling, Rick

    2010-01-01

    In the late 1970s, years before the Space Shuttle flew its maiden voyage, it was understood that low liquid hydrogen (LH2) Net Positive Suction Pressure (NPSP) at the inlet to the Space Shuttle Main Engine (SSME) could have adverse effects on engine operation. A number of failures within both the External Tank (ET) and the Orbiter Main Propulsion System (MPS) could result in a low LH2 NPSP condition, which at extremely low levels can result in cavitation of SSME turbomachinery. Operational workarounds were developed to take advantage of the onboard crew's ability to manually throttle down the SSMEs (via the Pilot's Speedbrake/Throttle Controller), which alleviated the low LH2 NPSP condition. Manually throttling the SSME to a lower power level resulted in an increase in NPSP, mainly due to the reduction in frictional flow losses at the lower throttle setting. Early in the Space Shuttle Program's history, the relevant Flight Rule for the Booster flight controller in Mission Control did not distinguish between ET and Orbiter MPS failures, and the same crew action was taken for both. However, after a review of all Booster operational techniques following the Challenger disaster in the late 1980s, it was determined that manually throttling the SSME to a lower power level was only effective for Orbiter MPS failures, and the Flight Rule was updated to reflect this change. The Flight Rule and associated crew actions initially called for a single throttle step to minimum power level when a low threshold for NPSP was met. As engineers refined their understanding of the NPSP requirements for the SSME (through a robust testing program), the operational techniques evolved to take advantage of the additional capabilities. This paper will examine the evolution of the Flight Rule and associated procedure and how increases in knowledge about the SSME and the Space Shuttle vehicle as a whole have helped shape their development. What once was a single throttle step when NPSP decreased to a

  6. Preliminary flight test results of a fly-by-throttle emergency flight control system on an F-15 airplane

    Science.gov (United States)

    Burcham, Frank W., Jr.; Maine, Trindel A.; Fullerton, C. G.; Wells, Edward A.

    1993-01-01

    A multi-engine aircraft, with some or all of the flight control system inoperative, may use engine thrust for control. NASA Dryden has conducted a study of the capability and techniques for this emergency flight control method for the F-15 airplane. With an augmented control system, engine thrust, along with appropriate feedback parameters, is used to control flightpath and bank angle. Extensive simulation studies have been followed by flight tests. This paper discusses the principles of throttles-only control, the F-15 airplane, the augmented system, and the flight results including landing approaches with throttles-only control to within 10 ft of the ground.
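
    The following toy sketch illustrates the throttles-only control principle the abstract describes: collective thrust is used to command flightpath angle and differential thrust to command bank angle. The one-state response models and the gains are assumptions for illustration only, not the F-15 augmented-control design.

    ```python
    # Toy illustration of throttles-only control: collective thrust changes
    # flightpath angle, differential thrust rolls the aircraft through the
    # yaw/roll coupling. The simplistic response models and gains below are
    # assumptions, not the F-15 augmented-system design.

    def toc_step(gamma, bank, gamma_cmd, bank_cmd, dt=0.1):
        k_gamma, k_bank = 0.4, 0.3                   # feedback gains (assumed)
        collective = k_gamma * (gamma_cmd - gamma)   # both throttles together
        differential = k_bank * (bank_cmd - bank)    # left/right throttle split
        gamma += 0.5 * collective * dt               # crude flightpath response
        bank += 0.8 * differential * dt              # crude roll response
        return gamma, bank

    gamma, bank = 0.0, 0.0
    for _ in range(100):
        gamma, bank = toc_step(gamma, bank, gamma_cmd=-3.0, bank_cmd=5.0)
    print(f"flightpath {gamma:.2f} deg, bank {bank:.2f} deg after 10 s")
    ```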

  7. Preliminary Flight Results of a Fly-by-throttle Emergency Flight Control System on an F-15 Airplane

    Science.gov (United States)

    Burcham, Frank W., Jr.; Maine, Trindel A.; Fullerton, C. Gordon; Wells, Edward A.

    1993-01-01

    A multi-engine aircraft, with some or all of the flight control system inoperative, may use engine thrust for control. NASA Dryden has conducted a study of the capability and techniques for this emergency flight control method for the F-15 airplane. With an augmented control system, engine thrust, along with appropriate feedback parameters, is used to control flightpath and bank angle. Extensive simulation studies were followed by flight tests. The principles of throttles-only control, the F-15 airplane, the augmented system, and the flight results, including actual landings with throttles-only control, are discussed.

  8. Dynamics of vibration isolation system with rubber-cord-pneumatic spring with damping throttle

    Science.gov (United States)

    Burian, Yu A.; Silkov, M. V.

    2017-06-01

    The study addresses an important area of applied mechanics: the theory of vibration isolation of vibration-generating equipment. It considers the design, and the issues of mathematical modeling, of a promising pneumatic spring built around a rubber-cord shell with an additional volume connected to its primary volume by a throttle passage. Damping from the overflow of air through the orifice limits the amplitude of oscillation at resonance, but in contrast to conventional systems with viscous damping it does not increase the transmission ratio at high frequencies. A mathematical model of the suspension is obtained that allows parameters to be selected to reduce the power transmission ratio to the foundation, especially in the high-frequency range
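
    The claimed property (resonance damping without a high-frequency transmissibility penalty) can be seen from the transmissibility of a simple linearized three-element model: main stiffness in parallel with an auxiliary stiffness connected through a damper that stands in for the throttle passage. The model structure and the parameters below are illustrative assumptions, not the rubber-cord spring data from the paper.

    ```python
    # Transmissibility of a linearized air-spring model: main stiffness k1 in
    # parallel with an auxiliary stiffness k2 connected through a damper c
    # (the damper standing in for the throttle passage between the two volumes).
    # Parameters are illustrative assumptions, not the rubber-cord spring data.
    import numpy as np

    m, k1, k2, c = 100.0, 2.0e4, 4.0e4, 1.5e3   # kg, N/m, N/m, N*s/m (assumed)

    def transmissibility(omega):
        k_branch = (1j * omega * c * k2) / (k2 + 1j * omega * c)  # spring-damper in series
        k_dyn = k1 + k_branch                                     # total dynamic stiffness
        return np.abs(k_dyn / (k_dyn - m * omega**2))

    for f in [1.0, 2.0, 5.0, 20.0, 50.0]:        # Hz
        print(f"{f:5.1f} Hz  T = {transmissibility(2 * np.pi * f):.3f}")
    ```

    At high frequencies the damper branch locks up and the system behaves like an undamped spring of stiffness k1 + k2, so the transmissibility keeps falling instead of flattening out as it would with a plain viscous damper.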

  9. Performance Analysis of a Fluidic Axial Oscillation Tool for Friction Reduction with the Absence of a Throttling Plate

    Directory of Open Access Journals (Sweden)

    Xinxin Zhang

    2017-04-01

    Full Text Available An axial oscillation tool is proved to be effective in solving problems associated with high friction and torque in the sliding drilling of a complex well. The fluidic axial oscillation tool, based on an output-fed bistable fluidic oscillator, is a type of axial oscillation tool which has become increasingly popular in recent years. The aim of this paper is to analyze the dynamic flow behavior of a fluidic axial oscillation tool without a throttling plate in order to evaluate its overall performance. In particular, the differences between the original design with a throttling plate and the current default design are analyzed in depth, and an improvement is expected for the latter. A commercial computational fluid dynamics code, Fluent, was used to predict the pressure drop and oscillation frequency of the fluidic axial oscillation tool. The results of the numerical simulations agree well with the corresponding experimental results. A sufficient pressure pulse amplitude with a low pressure drop is desired in this study; therefore, relative pulse amplitudes of pressure drop and displacement are introduced. A comparison between the two designs, with and without a throttling plate, indicates that when the supply flow rate is relatively low or higher than a certain value, the fluidic axial oscillation tool with a throttling plate exhibits better performance; otherwise, the fluidic axial oscillation tool without a throttling plate is the preferred alternative. In most of the operating circumstances considered, in terms of supply flow rate and pressure drop, the tool without a throttling plate performs better than the original design.

  10. Effect of throttling on burnout heat flux and hydrodynamic instability in natural circulation

    International Nuclear Information System (INIS)

    Mahmoud, S.I.

    1980-01-01

    Twenty-four experiments were carried out to study the effect of restricting the flow before the inlet of the test section on burnout heat flux and flow boiling instability. The experiments were carried out on 10, 12, 14 and 16 mm outer-diameter stainless steel heated elements, 50 and 75 cm long, centered inside a 26 mm inner-diameter stainless steel channel forming an annulus through which water flowed upwards, giving diameter ratios of 2.6, 2.17, 1.86 and 1.63 respectively. The parameters were chosen to address the lack of literature on burnout conditions at low pressures (1.5, 3, 6 and 10 atm absolute). These data are of great benefit to designers of high heat flux devices such as boilers and nuclear reactors. A detailed description of the experimental loop is given. The test section, steam separator, condenser, precooler, preheater, throttle valve, flow measurement and safety devices were designed and constructed for an operating pressure up to 10 atm absolute and a temperature up to 220 °C. The results show that: (1) the burnout heat flux first increases and then decreases as the restriction of the flow increases; (2) the hydrodynamic instability increases as the restriction of the flow before the test section increases. (author)

  11. A theoretical model for measuring mass flowrate and quality of two phase flow by the noise of throttling set

    International Nuclear Information System (INIS)

    Tong Yunxian; Wang Wenran

    1992-03-01

    Measuring the mass flowrate and steam quality of two-phase flow is an essential issue in tests of loss-of-coolant accidents (LOCA). The spatial stochastic distribution of phase concentration causes differential-pressure noise when two-phase flow crosses a throttling device. Under the assumption that the variance of the disperse-phase concentration is proportional to its mean phase concentration, and by using the separated flow model of two-phase flow, it is demonstrated that the variance of the noise of the differential-pressure square root is approximately proportional to the flowrate of the disperse phase. Thus, a theoretical model for measuring the mass flowrate and quality of two-phase flow by noise measurement is developed. It indicates that it is possible to measure two-phase flowrate and steam quality using this simple theoretical model and a single throttling device
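
    A short sketch of the measurement idea: sample the differential pressure across the throttling device, take its square root, and relate the variance of the fluctuation to the disperse-phase flowrate through a calibration constant. The synthetic signal and the calibration constant below are assumptions for illustration.

    ```python
    # Sketch of the measurement idea: the fluctuation of the square root of the
    # differential pressure across the throttling device is a noise signal whose
    # variance is treated as (approximately) proportional to the disperse-phase
    # flowrate. The calibration constant and synthetic signal are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def disperse_flowrate(dp_samples, k_cal):
        """Estimate disperse-phase flowrate from differential-pressure noise."""
        noise = np.sqrt(dp_samples) - np.mean(np.sqrt(dp_samples))
        return k_cal * np.var(noise)

    # Synthetic differential-pressure signal (kPa): mean drop plus concentration noise.
    dp = 50.0 + rng.normal(0.0, 2.0, size=10000)
    print(f"estimated disperse-phase flowrate: {disperse_flowrate(dp, k_cal=80.0):.2f} kg/s (illustrative)")
    ```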

  12. A prediction method of temperature distribution and thermal stress for the throttle turbine rotor and its application

    Directory of Open Access Journals (Sweden)

    Yang Yu

    2017-01-01

    Full Text Available In this paper, a method for predicting the temperature distribution and thermal stress of the throttle-regulated steam turbine rotor is proposed. The rotor thermal stress curve can be calculated according to the preset power requirement, the operation mode and the predicted critical parameters. The results for the 660 MW throttle-regulated turbine rotor show that operators are able to predict the operation results and to adjust the operation parameters in advance with the help of the inertial element method. Meanwhile, it can also raise the operation level, thus providing a technical guarantee for thermal stress optimization control and for the safety of the steam turbine rotor under variable load operation.
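
    A hedged sketch of one common reading of an "inertial element" estimate: the rotor volume-average temperature is modeled as a first-order lag of the steam temperature, and thermal stress is taken proportional to the surface-to-average temperature difference. The time constant, stress coefficient, and ramp profile are illustrative assumptions, not the 660 MW rotor values or the paper's exact method.

    ```python
    # First-order-lag ("inertial element") estimate of rotor thermal stress:
    # rotor mean temperature lags the steam temperature, and stress is taken
    # proportional to the surface-to-mean temperature difference.
    # Time constant and stress coefficient are illustrative assumptions.

    def thermal_stress_history(steam_temp, dt=60.0, tau=1800.0, k_sigma=2.5):
        t_mean = steam_temp[0]
        stresses = []
        for t_surf in steam_temp:
            t_mean += dt / tau * (t_surf - t_mean)        # first-order (inertial) lag
            stresses.append(k_sigma * (t_surf - t_mean))  # MPa per K difference (assumed)
        return stresses

    # Ramp the steam temperature from 400 degC to 500 degC over 50 minutes.
    ramp = [400.0 + min(i * 2.0, 100.0) for i in range(90)]
    sigma = thermal_stress_history(ramp)
    print(f"peak predicted thermal stress: {max(sigma):.0f} MPa (illustrative)")
    ```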

  13. Fuel/Oxidizer Injector Modeling in Sub- and Super-Critical Regimes for Deep Throttling Cryogenic Engines, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Accurate CFD modeling of fuel/oxidizer injection and combustion is needed to design and analyze liquid rocket engines. Currently, however, there is no mature...

  14. Numerical study of saturation steam/water mixture flow and flashing initial sub-cooled water flow inside throttling devices

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    In this work, a Computational Fluid-Dynamics (CFD) approach to model this phenomenon inside throttling devices is proposed. To validate the CFD results, different nozzle geometries are analyzed, comparing numerical results with experimental data. Two cases are studied. Case 1: saturation steam/water mixture flow inside a 2D convergent-divergent nozzle (the inlet, outlet and throat diameters of the nozzle are 0.1213 m, 0.0452 m and 0.0191 m respectively). In this benchmark, a range of total inle...

  15. Dynamic Characteristics of Communication Lines with Distributed Parameters to Control the Throttle-controlled Hydraulic Actuators

    Directory of Open Access Journals (Sweden)

    D. N. Popov

    2015-01-01

    Full Text Available The article considers a mathematical model of a hydraulic line for the remote control of an electro-hydraulic servo drive (EHSD) with throttle control. Hydraulic lines of this type are designed as a backup to replace the electrical connections that are normally used to control an EHSD located at a distance from the devices that form the control signals. A disadvantage of electrical connections is that they are sensitive to magnetic fields and therefore do not provide the required reliability of remote control. Hydraulic lines do not have this disadvantage and are therefore used in aircraft and other industrial systems. However, the dynamic characteristics of hydraulic lines are still insufficiently investigated for the case of transmitting control signals over distances at which the signal may be distorted by wave processes. The results of the mathematical simulation presented in the article, verified by physical experiment, largely fill this gap. The mathematical model described in the paper is based on the theory of unsteady flow of a compressible fluid. The model includes formulas for calculating the frequency characteristics of the hydraulic line under oscillations of the laminar flow parameters of a viscous fluid. A mock-up of the system under consideration and a purpose-built experimental unit are used to verify the results of the mathematical simulation. The calculated logarithmic amplitude and phase frequency characteristics, compared with those obtained experimentally, confirm the proposed theoretical method of calculation under certain conditions; these conditions require consistency with the initial fluid parameters defined under stationary conditions. The applied theory takes into account the non-stationary hydraulic resistance of the line when calculating the frequency characteristics. The scientific novelty of the article is presented in
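
    A hedged sketch of the kind of distributed-parameter calculation involved: the pressure transfer function of a fluid line with a blocked downstream end, from the standard four-pole (transmission-line) relations with a constant laminar resistance. The paper's model uses frequency-dependent (non-stationary) friction, which this sketch omits; the fluid and line data are assumptions.

    ```python
    # Frequency response of a fluid transmission line with a blocked downstream
    # end, using standard four-pole relations with a constant laminar resistance.
    # Line geometry and fluid properties below are illustrative assumptions.
    import numpy as np

    L, d = 5.0, 0.01                     # line length (m) and bore (m), assumed
    rho, beta, mu = 850.0, 1.4e9, 0.03   # oil density, bulk modulus, viscosity (assumed)
    A = np.pi * d**2 / 4.0
    R = 128.0 * mu / (np.pi * d**4)      # laminar resistance per unit length

    def blocked_end_response(f):
        w = 2.0 * np.pi * f
        Z = R + 1j * w * rho / A          # series impedance per unit length
        Y = 1j * w * A / beta             # shunt admittance per unit length
        gamma = np.sqrt(Z * Y)            # propagation operator
        return 1.0 / np.cosh(gamma * L)   # P_out / P_in for a blocked end

    for f in [5.0, 20.0, 50.0, 100.0]:
        print(f"{f:6.1f} Hz: |P_out/P_in| = {abs(blocked_end_response(f)):6.2f}")
    ```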

  16. Pilot-in-the-Loop Evaluation of a Yaw Rate to Throttle Feedback Control with Enhanced Engine Response

    Science.gov (United States)

    Litt, Jonathan S.; Guo, Ten-Huei; Sowers, T. Shane; Chicatelli, Amy K.; Fulton, Christopher E.; May, Ryan D.; Owen, A. Karl

    2012-01-01

    This paper describes the implementation and evaluation of a yaw rate to throttle feedback system designed to replace a damaged rudder. It can act as a Dutch roll damper and as a means to facilitate pilot input for crosswind landings. Enhanced propulsion control modes were implemented to increase responsiveness and thrust level of the engine, which impact flight dynamics and performance. Piloted evaluations were performed to determine the capability of the engines to substitute for the rudder function under emergency conditions. The results showed that this type of implementation is beneficial, but the engines' capability to replace the rudder is limited.

  17. Emergency Flight Control of a Twin-Jet Commercial Aircraft using Manual Throttle Manipulation

    Science.gov (United States)

    Cole, Jennifer H.; Cogan, Bruce R.; Fullerton, C. Gordon; Burken, John J.; Venti, Michael W.; Burcham, Frank W.

    2007-01-01

    The Department of Homeland Security (DHS) created the PCAR (Propulsion-Controlled Aircraft Recovery) project in 2005 to mitigate the ManPADS (man-portable air defense systems) threat to the commercial aircraft fleet with near-term, low-cost, proven technology. Such an attack could potentially cause a major FCS (flight control system) malfunction or other critical system failure onboard the aircraft, despite the extreme reliability of current systems. For the situations in which nominal flight controls are lost or degraded, engine thrust may be the only remaining means for emergency flight control [ref 1]. A computer-controlled thrust system, known as propulsion-controlled aircraft (PCA), was developed in the mid 1990s by NASA, McDonnell Douglas and Honeywell. PCA's major accomplishment was a demonstration of an automatic landing capability using only engine thrust [ref 1]. Despite these promising results, no production aircraft have been equipped with a PCA system, due primarily to the modifications required for implementation. A minimally invasive option is TOC (throttles-only control), which uses the same control principles as PCA but requires absolutely no hardware, software or other aircraft modifications. TOC is pure piloting technique, and has historically been utilized several times by flight crews, both military and civilian, in emergency situations stemming from a loss of conventional control. Since the 1990s, engineers at NASA Dryden Flight Research Center (DFRC) have studied TOC, in both simulation and flight, for emergency flight control with test pilots in numerous configurations. In general, it was shown that TOC was effective on certain aircraft for making a survivable landing. DHS sponsored both NASA Dryden Flight Research Center (Edwards, CA) and United Airlines (Denver, Colorado) to conduct a flight and simulation study of the TOC characteristics of a twin-jet commercial transport, and assess the ability of a crew to control an aircraft down to

  18. Extended State Observer Based Adaptive Back-Stepping Sliding Mode Control of Electronic Throttle in Transportation Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Yongfu Li

    2015-01-01

    Full Text Available Considering the high accuracy requirement of information exchange via vehicle-to-vehicle (V2V communications, an extended state observer (ESO is designed to estimate the opening angle change of an electronic throttle (ET, wherein the emphasis is placed on the nonlinear uncertainties of stick-slip friction and spring in the system as well as the existence of external disturbance. In addition, a back-stepping sliding mode controller incorporating an adaptive control law is presented, and the stability and robustness of the system are analyzed using Lyapunov technique. Finally, numerical experiments are conducted using simulation. The results show that, compared with back-stepping control (BSC, the proposed controller achieves superior performance in terms of the steady-state error and rising time.
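
    The extended state observer idea can be illustrated with a minimal linear ESO for a second-order throttle-angle model: the dynamics are treated as a double integrator with control gain b0, and everything else (stick-slip friction, return spring, external disturbance) is lumped into an extended state that the observer estimates. The plant, gains, and b0 are illustrative assumptions; the paper combines the ESO with an adaptive back-stepping sliding-mode controller, which is omitted here.

    ```python
    # Minimal linear extended state observer (ESO) for an electronic-throttle
    # model. Friction, return-spring torque, and disturbance are lumped into a
    # third observer state. Plant and gains are illustrative assumptions.
    import numpy as np

    dt, b0 = 0.001, 40.0
    beta = np.array([300.0, 3.0e4, 1.0e6])   # observer gains (triple pole at -100, assumed)

    def plant(x, u):
        theta, omega = x
        disturbance = -5.0 * theta - 2.0 * np.sign(omega) + 1.0   # spring + friction + bias
        return np.array([omega, b0 * u + disturbance])

    x = np.array([0.1, 0.0])   # true state: angle (rad), rate
    z = np.zeros(3)            # ESO state: angle, rate, lumped disturbance
    u = 0.0
    for _ in range(2000):
        x = x + dt * plant(x, u)
        e = z[0] - x[0]        # observer error on the measured angle
        z = z + dt * np.array([z[1] - beta[0] * e,
                               z[2] + b0 * u - beta[1] * e,
                               -beta[2] * e])
    print(f"estimated lumped disturbance: {z[2]:+.2f} (true ~ {-5.0 * x[0] + 1.0:+.2f} plus friction)")
    ```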

  19. Analytical study on water hammer pressure in pressurized conduits with a throttled surge chamber for slow closure

    Directory of Open Access Journals (Sweden)

    Yong-liang Zhang

    2010-06-01

    Full Text Available This paper presents an analytical investigation of water hammer in a hydraulic pressurized pipe system with a throttled surge chamber located at the junction between a tunnel and a penstock, and a valve positioned at the downstream end of the penstock. Analytical formulas of maximum water hammer pressures at the downstream end of the tunnel and the valve were derived for a system subjected to linear and slow valve closure. The analytical results were then compared with numerical ones obtained using the method of characteristics. There is agreement between them. The formulas can be applied to estimating water hammer pressure at the valve and transmission of water hammer pressure through the surge chamber at the junction for a hydraulic pipe system with a surge chamber.
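
    For context only, the classical slow-closure (Michaud) water-hammer estimate for a simple pipe without a surge chamber is delta_p ~ 2*rho*L*V0/Tc. The short calculation below shows the kind of quantity the paper's formulas refine for a system with a throttled surge chamber; it is not the paper's result, and the numbers are assumptions.

    ```python
    # Background estimate only: classical slow-closure (Michaud) pressure rise
    # for a simple pipe, not the paper's surge-chamber formulas. Numbers assumed.
    rho = 1000.0   # water density, kg/m^3
    L = 800.0      # penstock length, m (assumed)
    V0 = 3.0       # initial velocity, m/s (assumed)
    Tc = 10.0      # linear closure time, s (assumed, slow closure)

    delta_p = 2.0 * rho * L * V0 / Tc   # Pa
    print(f"slow-closure pressure rise ~ {delta_p / 1e3:.0f} kPa ({delta_p / 1e5:.1f} bar)")
    ```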

  20. The Terabit/s Super-Fragment Builder and Trigger Throttling System for the Compact Muon Solenoid Experiment at CERN

    CERN Document Server

    Bauer, Gerry; Boyer, Vincent; Branson, James; Brett, Angela; Cano, Eric; Carboni, Andrea; Ciganek, Marek; Cittolin, Sergio; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gómez-Reino, Robert; Gulmini, Michele; Gutíerrez-Mlot, Esteban; Gutleber, Johannes; Jacobs, Claude; Kim, Jin Cheol; Klute, Markus; Lipeles, Elliot; Lopez-Perez, Juan Antonio; Maron, Gaetano; Meijers, Frans; Meschi, Emilio; Moser, Roland; Murray, Steven; Oh, Alexander; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Pollet, Lucien; Rácz, Attila; Sakulin, Hannes; Sani, Matteo; Schieferdecker, Philipp; Schwick, Christoph; Sumorok, Konstanty; Suzuki, Ichiro; Tsirigkas, Dimitrios

    2007-01-01

    The Data Acquisition System of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kilobytes from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event-building is performed by the Super-Fragment Builder employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. By providing fast feedback from any of the front-ends to the trigger, the Trigger Throttling System prevents buffer overflows in the front-end electronics due to variations in the size and rate of events or due to back-pressure from the down-stream event-building and processing. This paper reports on new performance measurements and on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major sub-detectors. The on-going commissioning of...
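
    The back-pressure idea behind such a throttling system can be sketched as buffer-occupancy thresholds mapped to throttle states that gate the trigger. The state names, thresholds, and logic below are assumptions for illustration, not the actual CMS Trigger Throttling System protocol.

    ```python
    # Illustrative back-pressure sketch: each front-end reports a state derived
    # from its buffer occupancy, and the trigger is gated by the worst state.
    # States, thresholds, and logic are assumptions, not the real TTS protocol.

    STATES = ["READY", "WARNING", "BUSY"]   # ordered from best to worst

    def frontend_state(occupancy):
        if occupancy < 0.60:
            return "READY"       # accept triggers at full rate
        if occupancy < 0.85:
            return "WARNING"     # ask the trigger to reduce the rate
        return "BUSY"            # stop triggers until buffers drain

    def trigger_allowed(occupancies):
        worst = max((frontend_state(o) for o in occupancies), key=STATES.index)
        return worst != "BUSY", worst

    ok, worst = trigger_allowed([0.35, 0.72, 0.50])
    print(f"worst front-end state: {worst}, trigger enabled: {ok}")
    ```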

  1. Engineering strategies and implications of using higher plants for throttling gas and water exchange in a controlled ecological life support system

    Science.gov (United States)

    Chamberland, Dennis; Wheeler, Raymond M.; Corey, Kenneth A.

    1993-01-01

    Engineering strategies for advanced life support systems to be used on Lunar and Mars bases involve a wide spectrum of approaches, ranging from purely physical-chemical life support strategies to purely biological approaches. Within the context of biologically based systems, a bioengineered system can be devised that utilizes the metabolic mechanisms of plants to control the rates of CO2 uptake and O2 evolution (photosynthesis) and water production (transpiration). Such a mechanism of external engineering control has become known as throttling. Research conducted at the John F. Kennedy Space Center's Controlled Ecological Life Support System Breadboard Project has demonstrated the potential of throttling these fluxes by changing environmental parameters affecting the plant processes. Among the more effective environmental throttles are light and CO2 concentration for controlling the rate of photosynthesis, and humidity and CO2 concentration for controlling transpiration. Such a bioengineered strategy implies control mechanisms that in the past have not been widely attributed to life support systems involving biological components and suggests a broad range of applications in advanced life support system design.
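
    A generic illustration of how such a "throttle" might be modeled: canopy CO2 uptake expressed as a saturating function of light level, scaled by a CO2 factor. The functional form and coefficients are common modeling assumptions, not data from the KSC Breadboard Project.

    ```python
    # Illustrative model of throttling canopy gas exchange with environmental
    # setpoints: CO2 uptake as a rectangular hyperbola in light level, scaled by
    # a CO2 factor. Form and coefficients are generic assumptions.

    def co2_uptake(ppfd, co2_ppm, p_max=30.0, k_light=250.0, k_co2=400.0):
        """Canopy CO2 uptake (umol m-2 s-1) vs light (umol m-2 s-1 PPFD) and CO2 (ppm)."""
        light_term = ppfd / (ppfd + k_light)
        co2_term = co2_ppm / (co2_ppm + k_co2)
        return p_max * light_term * co2_term

    for ppfd, co2 in [(300, 400), (600, 400), (600, 1200)]:
        print(f"PPFD {ppfd:4d}, CO2 {co2:4d} ppm -> uptake {co2_uptake(ppfd, co2):.1f} umol/m2/s")
    ```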

  2. Voith Maxima: simulation-based throttle parameterisation. In the electronics development of the Voith Maxima locomotive, hardware-in-the-loop simulation methods were used for controller parameterisation

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Thorsten; Hanke, Bjoern [IAV GmbH, Gifhorn (Germany). Powertrain Mechatronik; Jung, Eggert [Voith Turbo Lokomotiv Technik, Kiel (Germany)

    2008-10-15

    For fine-tuning of the diesel engine throttle control on the new Voith Maxima, IAV GmbH developed a physical hardware-in-the-loop (HiL) simulation whose parameterisation is based chiefly on easily obtainable design data. With the aid of the HiL simulation, the software functions of the engine's ECU could be parameterised and verified in the lab before a running prototype was available. As a result, a throttle application was available at an early stage, needing only to be further optimised during subsequent test runs. This reduced the number of driving trials and the cost and effort involved. Looking to the future, an extended HiL simulation with an optimisation algorithm for automatic throttle parameterisation will allow a further reduction in application development costs. (orig.)

  3. Deep frying

    NARCIS (Netherlands)

    Koerten, van K.N.

    2016-01-01

    Deep frying is one of the most used methods in the food processing industry. Though practically any food can be fried, French fries are probably the most well-known deep fried products. The popularity of French fries stems from their unique taste and texture, a crispy outside with a mealy soft

  4. Deep learning

    CERN Document Server

    Goodfellow, Ian; Courville, Aaron

    2016-01-01

    Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language proces...

  5. Performance and emission characteristics of a turbocharged spark-ignition hydrogen-enriched compressed natural gas engine under wide open throttle operating conditions

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Fanhua; Wang, Mingyue; Jiang, Long; Deng, Jiao; Chen, Renzhe; Naeve, Nashay; Zhao, Shuli [State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100084 (China)

    2010-11-15

    This paper investigates the effect of various hydrogen ratios in HCNG (hydrogen-enriched compressed natural gas) fuels on performance and emission characteristics at wide open throttle operating conditions using a turbocharged spark-ignition natural gas engine. The experimental data were taken at hydrogen fractions of 0%, 30% and 55% by volume under different excess air ratios ({lambda}) at MBT operating conditions. It is found that, at various {lambda}, the addition of hydrogen can significantly reduce CO and CH{sub 4} emissions, while NO{sub x} emissions remain at an acceptable level when ignition timing is optimized. At the same excess air ratio, as more hydrogen is added the power, exhaust temperature and maximum cylinder pressure decrease slowly until the mixture's lower heating value remains unchanged with the hydrogen enrichment, and then they rise gradually. In addition, the early flame development period and the flame propagation duration are both shorter, and the indicated thermal efficiency and maximum heat release rate both increase with more hydrogen addition. (author)

  6. Deep Learning

    DEFF Research Database (Denmark)

    Jensen, Morten Bornø; Bahnsen, Chris Holmberg; Nasrollahi, Kamal

    2018-01-01

    Over the last 10 years, artificial neural networks have gone from being a dusty, cast-off technology to playing a leading role in the development of artificial intelligence. This phenomenon is called deep learning and is inspired by the structure of the brain.

  7. Deep geothermics

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The hot-dry-rocks located at 3-4 km of depth correspond to low permeable rocks carrying a large amount of heat. The extraction of this heat usually requires artificial hydraulic fracturing of the rock to increase its permeability before water injection. Hot-dry-rocks geothermics or deep geothermics is not today a commercial channel but only a scientific and technological research field. The Soultz-sous-Forets site (Northern Alsace, France) is characterized by a 6 degrees per meter geothermal gradient and is used as a natural laboratory for deep geothermal and geological studies in the framework of a European research program. Two boreholes have been drilled up to 3600 m of depth in the highly-fractured granite massif beneath the site. The aim is to create a deep heat exchanger using only the natural fracturing for water transfer. A consortium of german, french and italian industrial companies (Pfalzwerke, Badenwerk, EdF and Enel) has been created for a more active participation to the pilot phase. (J.S.). 1 fig., 2 photos

  8. Deep smarts.

    Science.gov (United States)

    Leonard, Dorothy; Swap, Walter

    2004-09-01

    When a person sizes up a complex situation and rapidly comes to a decision that proves to be not just good but brilliant, you think, "That was smart." After you watch him do this a few times, you realize you're in the presence of something special. It's not raw brainpower, though that helps. It's not emotional intelligence, either, though that, too, is often involved. It's deep smarts. Deep smarts are not philosophical--they're not "wisdom" in that sense, but they're as close to wisdom as business gets. You see them in the manager who understands when and how to move into a new international market, in the executive who knows just what kind of talk to give when her organization is in crisis, in the technician who can track a product failure back to an interaction between independently produced elements. These are people whose knowledge would be hard to purchase on the open market. Their insight is based on know-how more than on know-what; it comprises a system view as well as expertise in individual areas. Because deep smarts are experience based and often context specific, they can't be produced overnight or readily imported into an organization. It takes years for an individual to develop them--and no time at all for an organization to lose them when a valued veteran walks out the door. They can be taught, however, with the right techniques. Drawing on their forthcoming book Deep Smarts, Dorothy Leonard and Walter Swap say the best way to transfer such expertise to novices--and, on a larger scale, to make individual knowledge institutional--isn't through PowerPoint slides, a Web site of best practices, online training, project reports, or lectures. Rather, the sage needs to teach the neophyte individually how to draw wisdom from experience. Companies have to be willing to dedicate time and effort to such extensive training, but the investment more than pays for itself.

  9. DeepPy: Pythonic deep learning

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This technical report introduces DeepPy – a deep learning framework built on top of NumPy with GPU acceleration. DeepPy bridges the gap between high-performance neural networks and the ease of development of Python/NumPy. Users with a background in scientific computing in Python will quickly...... be able to understand and change the DeepPy codebase as it is mainly implemented using high-level NumPy primitives. Moreover, DeepPy supports complex network architectures by letting the user compose mathematical expressions as directed graphs. The latest version is available at http...

  10. Greedy Deep Dictionary Learning

    OpenAIRE

    Tariyal, Snigdha; Majumdar, Angshul; Singh, Richa; Vatsa, Mayank

    2016-01-01

    In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time. This requires solving a simple (shallow) dictionary learning problem, the solution to this is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state of the art supervised dictionary learning t...
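
    The greedy layer-wise idea can be sketched with scikit-learn: learn a dictionary on the data, use the resulting sparse codes as the input to the next level, and repeat. The layer sizes, sparsity settings, and stand-in dataset below are assumptions, not the configuration used by the authors.

    ```python
    # Minimal sketch of greedy layer-wise dictionary learning: each level learns
    # a dictionary on the codes produced by the previous level. Layer sizes and
    # hyperparameters are assumptions, not the authors' setup.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.RandomState(0)
    X = rng.randn(200, 64)                      # stand-in dataset (assumed)

    codes, layer_sizes = X, [32, 16]
    for n_atoms in layer_sizes:
        dl = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                max_iter=10, random_state=0)
        codes = dl.fit_transform(codes)         # greedy: one level at a time
        print(f"level with {n_atoms} atoms -> code shape {codes.shape}")
    ```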

  11. Model-Based Throttle Control using Static Compensators and Pole Placement

    Directory of Open Access Journals (Sweden)

    Thomasson A.

    2011-10-01

    Full Text Available In modern spark-ignited engines, the throttle is controlled by the Electronic Control Unit (ECU), which gives the ECU direct control of the air flow and thereby the engine torque. This puts high demands on the speed and accuracy of the controller that positions the throttle plate. The throttle control problem is complicated by two strong nonlinear effects, friction and limp-home torque. This paper proposes the use of two simultaneously active static compensators to counter these effects and approximately linearize the system. A PID controller is designed for the linearized system, where pole placement is applied to design the PD part and a gain-scheduled I part is added for robustness against model errors. A systematic procedure for generating compensator and controller parameters from open-loop experiments is also developed. The controller performance is evaluated both in simulation, on a throttle control benchmark problem, and experimentally. A robustness investigation pointed out that the limp-home position is an important parameter for controller performance; this is emphasized by the deviations found in experiments. The proposed method for parameter identification achieves the desired accuracy.
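
    A hedged sketch of the control structure described above: a PID position loop plus two static feedforward compensators, one for friction (based on the sign of the position error) and one for the limp-home return-spring torque. The plant model, gains, and compensator shapes are illustrative assumptions, not the benchmark or paper parameters.

    ```python
    # PID throttle positioning with two static compensators (friction and
    # limp-home torque), as a structural sketch only. All gains, thresholds,
    # and the limp-home position are illustrative assumptions.

    def friction_comp(error, u_fric=0.06):
        return u_fric if error > 0 else -u_fric if error < 0 else 0.0

    def limp_home_comp(angle, limp_home=0.12, u_lh=0.10):
        # extra effort needed to hold the plate on either side of the limp-home position
        return u_lh if angle >= limp_home else -u_lh

    def throttle_pid(ref, angle, state, kp=4.0, ki=6.0, kd=0.05, dt=0.001):
        integ, prev_angle = state
        err = ref - angle
        integ += ki * err * dt
        u = kp * err + integ - kd * (angle - prev_angle) / dt   # derivative on measurement
        u += friction_comp(err) + limp_home_comp(angle)         # static compensators
        return u, (integ, angle)

    angle = 0.05                      # current plate position (normalized, assumed)
    state = (0.0, angle)
    u, state = throttle_pid(ref=0.40, angle=angle, state=state)
    print(f"control signal for first step: {u:.3f} (normalized)")
    ```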

  12. Taoism and Deep Ecology.

    Science.gov (United States)

    Sylvan, Richard; Bennett, David

    1988-01-01

    Contrasted are the philosophies of Deep Ecology and ancient Chinese Taoism. Discusses the cosmology, morality, lifestyle, views of power, politics, and environmental philosophies of each. Concludes that Deep Ecology could gain much from Taoism. (CW)

  13. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  14. Deep Space Telecommunications

    Science.gov (United States)

    Kuiper, T. B. H.; Resch, G. M.

    2000-01-01

    The increasing load on NASA's Deep Space Network, the new capabilities for deep space missions inherent in a next-generation radio telescope, and the potential of new telescope technology for reducing construction and operation costs suggest a natural marriage between radio astronomy and deep space telecommunications in developing advanced radio telescope concepts.

  15. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...

  16. Deep learning evaluation using deep linguistic processing

    OpenAIRE

    Kuhnle, Alexander; Copestake, Ann

    2017-01-01

    We discuss problems with the standard approaches to evaluation for tasks like visual question answering, and argue that artificial data can be used to address these as a complement to current practice. We demonstrate that with the help of existing 'deep' linguistic processing technology we are able to create challenging abstract datasets, which enable us to investigate the language understanding abilities of multimodal deep learning models in detail, as compared to a single performance value ...

  17. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared...... to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all....

  18. Deep Vein Thrombosis

    African Journals Online (AJOL)

    OWNER

    Deep Vein Thrombosis: Risk Factors and Prevention in Surgical Patients. Deep Vein ... preventable morbidity and mortality in hospitalized surgical patients. ... the elderly.3,4 It is very rare before the age ... depends on the risk level; therefore an .... but also in the post-operative period. ... is continuing uncertainty regarding.

  19. Deep Echo State Network (DeepESN): A Brief Survey

    OpenAIRE

    Gallicchio, Claudio; Micheli, Alessio

    2017-01-01

    The study of deep recurrent neural networks (RNNs) and, in particular, of deep Reservoir Computing (RC) is gaining an increasing research attention in the neural networks community. The recently introduced deep Echo State Network (deepESN) model opened the way to an extremely efficient approach for designing deep neural networks for temporal data. At the same time, the study of deepESNs allowed to shed light on the intrinsic properties of state dynamics developed by hierarchical compositions ...

  20. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Deep subsurface microbial processes

    Science.gov (United States)

    Lovley, D.R.; Chapelle, F.H.

    1995-01-01

    Information on the microbiology of the deep subsurface is necessary in order to understand the factors controlling the rate and extent of the microbially catalyzed redox reactions that influence the geophysical properties of these environments. Furthermore, there is an increasing threat that deep aquifers, an important drinking water resource, may be contaminated by man's activities, and there is a need to predict the extent to which microbial activity may remediate such contamination. Metabolically active microorganisms can be recovered from a diversity of deep subsurface environments. The available evidence suggests that these microorganisms are responsible for catalyzing the oxidation of organic matter coupled to a variety of electron acceptors just as microorganisms do in surface sediments, but at much slower rates. The technical difficulties in aseptically sampling deep subsurface sediments and the fact that microbial processes in laboratory incubations of deep subsurface material often do not mimic in situ processes frequently necessitate that microbial activity in the deep subsurface be inferred through nonmicrobiological analyses of ground water. These approaches include measurements of dissolved H2, which can predict the predominant microbially catalyzed redox reactions in aquifers, as well as geochemical and groundwater flow modeling, which can be used to estimate the rates of microbial processes. Microorganisms recovered from the deep subsurface have the potential to affect the fate of toxic organics and inorganic contaminants in groundwater. Microbial activity also greatly influences the chemistry of many pristine groundwaters and contributes to such phenomena as porosity development in carbonate aquifers, accumulation of undesirably high concentrations of dissolved iron, and production of methane and hydrogen sulfide. Although the last decade has seen a dramatic increase in interest in deep subsurface microbiology, in comparison with the study of

  2. Deep Water Survey Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The deep water biodiversity surveys explore and describe the biodiversity of the bathy- and bentho-pelagic nekton using Midwater and bottom trawls centered in the...

  3. Deep Space Habitat Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Deep Space Habitat was closed out at the end of Fiscal Year 2013 (September 30, 2013). Results and select content have been incorporated into the new Exploration...

  4. Deep Learning in Neuroradiology.

    Science.gov (United States)

    Zaharchuk, G; Gong, E; Wintermark, M; Rubin, D; Langlotz, C P

    2018-02-01

    Deep learning is a form of machine learning using a convolutional neural network architecture that shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstration in computer vision applications to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method. © 2018 by American Journal of Neuroradiology.

  5. Deep inelastic lepton scattering

    International Nuclear Information System (INIS)

    Nachtmann, O.

    1977-01-01

    Deep inelastic electron (muon) nucleon and neutrino nucleon scattering as well as electron positron annihilation into hadrons are reviewed from a theoretical point of view. The emphasis is placed on comparisons of quantum chromodynamics with the data. (orig.) [de

  6. Neuromorphic Deep Learning Machines

    OpenAIRE

    Neftci, E; Augustine, C; Paul, S; Detorakis, G

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide...

  7. Pathogenesis of deep endometriosis.

    Science.gov (United States)

    Gordts, Stephan; Koninckx, Philippe; Brosens, Ivo

    2017-12-01

    The pathophysiology of (deep) endometriosis is still unclear. As originally suggested by Cullen, change the definition "deeper than 5 mm" to "adenomyosis externa." With the discovery of the old European literature on uterine bleeding in 5%-10% of the neonates and histologic evidence that the bleeding represents decidual shedding, it is postulated/hypothesized that endometrial stem/progenitor cells, implanted in the pelvic cavity after birth, may be at the origin of adolescent and even the occasionally premenarcheal pelvic endometriosis. Endometriosis in the adolescent is characterized by angiogenic and hemorrhagic peritoneal and ovarian lesions. The development of deep endometriosis at a later age suggests that deep infiltrating endometriosis is a delayed stage of endometriosis. Another hypothesis is that the endometriotic cell has undergone genetic or epigenetic changes and those specific changes determine the development into deep endometriosis. This is compatible with the hereditary aspects, and with the clonality of deep and cystic ovarian endometriosis. It explains the predisposition and an eventual causal effect by dioxin or radiation. Specific genetic/epigenetic changes could explain the various expressions and thus typical, cystic, and deep endometriosis become three different diseases. Subtle lesions are not a disease until epi(genetic) changes occur. A classification should reflect that deep endometriosis is a specific disease. In conclusion the pathophysiology of deep endometriosis remains debated and the mechanisms of disease progression, as well as the role of genetics and epigenetics in the process, still needs to be unraveled. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  8. Why & When Deep Learning Works: Looking Inside Deep Learnings

    OpenAIRE

    Ronen, Ronny

    2017-01-01

    The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research from its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The outp...

  9. Power throttling of collections of computing elements

    Science.gov (United States)

    Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
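
    The scheme described can be illustrated with a small control loop: a local control device polls per-node power sensors and lowers a per-node throttle setting whenever the aggregate draw exceeds the supply budget. The sensor readings, throttle knob, thresholds, and policy below are assumptions for illustration, not the patented implementation.

    ```python
    # Illustrative power-throttling loop for a group of compute nodes: poll the
    # power sensors, compare the total to the budget, and throttle all nodes a
    # notch when the budget is exceeded. Everything here is an assumption.
    import random

    BUDGET_W = 2000.0
    throttle = {node: 1.0 for node in range(8)}             # 1.0 = full speed

    def read_power(node):
        return random.uniform(200.0, 320.0) * throttle[node]   # stand-in sensor reading

    def control_step():
        total = sum(read_power(n) for n in throttle)
        if total > BUDGET_W:
            for n in throttle:                               # throttle everyone a notch
                throttle[n] = max(0.5, throttle[n] - 0.05)
        return total

    for step in range(5):
        total = control_step()
        print(f"step {step}: total {total:6.1f} W, throttle {throttle[0]:.2f}")
    ```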

  10. ‘Accelerating Science’ - full throttle physics

    CERN Multimedia

    2009-01-01

    CERN’s new travelling exhibition has been inaugurated as part of the celebrations to mark the 450th anniversary of the University of Geneva. It will then take up temporary residence in the Globe of Science and Innovation before setting off on a tour of Europe.

  11. Auxiliary Deep Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    2016-01-01

    Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improves the variational approximation. The auxiliary variables leave...... the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge...... faster with better results. We show state-of-the-art performance within semi-supervised learning on MNIST (0.96%), SVHN (16.61%) and NORB (9.40%) datasets....

  12. Deep Learning from Crowds

    DEFF Research Database (Denmark)

    Rodrigues, Filipe; Pereira, Francisco Camara

    Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels...

  13. Deep boreholes; Tiefe Bohrloecher

    Energy Technology Data Exchange (ETDEWEB)

    Bracke, Guido [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH Koeln (Germany); Charlier, Frank [NSE international nuclear safety engineering gmbh, Aachen (Germany); Geckeis, Horst [Karlsruher Institut fuer Technologie (Germany). Inst. fuer Nukleare Entsorgung; and others

    2016-02-15

    The report on deep boreholes covers the following subject areas: methods for safe enclosure of radioactive wastes, requirements concerning the geological conditions of possible boreholes, reversibility of decisions and retrievability, and the status of drilling technology. The introduction covers national and international activities. Further chapters deal with the following issues: the basic concept of storage in deep boreholes, the status of drilling technology, safe enclosure, geomechanics and stability, reversibility of decisions, risk scenarios, compliance with safety requirements and site selection criteria, and research and development needs.

  14. Deep Water Acoustics

    Science.gov (United States)

    2016-06-28

    the Deep Water project and participate in the NPAL Workshops, including Art Baggeroer (MIT), J. Beron-Vera (UMiami), M. Brown (UMiami), T... Kathleen E. Wage. The North Pacific Acoustic Laboratory deep-water acoustic propagation experiments in the Philippine Sea. J. Acoust. Soc. Am., 134(4)... [Table fragment: estimate of the angle α during PhilSea09, made from ADCP measurements at the site of the DVLA; simulations A, B1, B2, B3, C, D, E, F with profile numbers 0, 4, 4, 4, 5, 10, 16, 20.]

  15. Deep diode atomic battery

    International Nuclear Information System (INIS)

    Anthony, T.R.; Cline, H.E.

    1977-01-01

    A deep diode atomic battery is made from a bulk semiconductor crystal containing three-dimensional arrays of columnar and lamellar P-N junctions. The battery is powered by gamma rays and x-ray emission from a radioactive source embedded in the interior of the semiconductor crystal

  16. Deep Learning Policy Quantization

    NARCIS (Netherlands)

    van de Wolfshaar, Jos; Wiering, Marco; Schomaker, Lambertus

    2018-01-01

    We introduce a novel type of actor-critic approach for deep reinforcement learning which is based on learning vector quantization. We replace the softmax operator of the policy with a more general and more flexible operator that is similar to the robust soft learning vector quantization algorithm.

  17. Deep-sea fungi

    Digital Repository Service at National Institute of Oceanography (India)

    Raghukumar, C; Damare, S.R.

    significant in terms of carbon sequestration (5, 8). In light of this, the diversity, abundance, and role of fungi in deep-sea sediments may form an important link in the global C biogeochemistry. This review focuses on issues related to collection...

  18. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    Deep inelastic lepton-nucleon interaction experiments are renewed. Singlet and non-singlet structure functions are measured and the consistency of the different results is checked. A detailed analysis of the scaling violation is performed in terms of the quantum chromodynamics predictions [fr

  19. Deep Vein Thrombosis

    Centers for Disease Control (CDC) Podcasts

    2012-04-05

    This podcast discusses the risk for deep vein thrombosis in long-distance travelers and ways to minimize that risk.  Created: 4/5/2012 by National Center for Emerging and Zoonotic Infectious Diseases (NCEZID).   Date Released: 4/5/2012.

  20. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair; Gorocs, Zoltan; Gunaydin, Harun; Zhang, Yibo; Wang, Hongda; Ozcan, Aydogan

    2017-01-01

    regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably

  1. The deep universe

    CERN Document Server

    Sandage, AR; Longair, MS

    1995-01-01

    Discusses the concept of the deep universe from two conflicting theoretical viewpoints: firstly as a theory embracing the evolution of the universe from the Big Bang to the present; and secondly through observations gleaned over the years on stars, galaxies and clusters.

  2. Teaching for Deep Learning

    Science.gov (United States)

    Smith, Tracy Wilson; Colby, Susan A.

    2007-01-01

    The authors have been engaged in research focused on students' depth of learning as well as teachers' efforts to foster deep learning. Findings from a study examining the teaching practices and student learning outcomes of sixty-four teachers in seventeen different states (Smith et al. 2005) indicated that most of the learning in these classrooms…

  3. Deep Trawl Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Otter trawl (36' Yankee and 4-seam net deepwater gear) catches from mid-Atlantic slope and canyons at 200 - 800 m depth. Deep-sea (200-800 m depth) flat otter trawls...

  4. [Deep vein thrombosis prophylaxis].

    Science.gov (United States)

    Sandoval-Chagoya, Gloria Alejandra; Laniado-Laborín, Rafael

    2013-01-01

    Background: despite the proven effectiveness of preventive therapy for deep vein thrombosis, a significant proportion of patients at risk for thromboembolism do not receive prophylaxis during hospitalization. Our objective was to determine the adherence to thrombosis prophylaxis guidelines in a general hospital as a quality control strategy. Methods: a random audit of clinical charts was conducted at the Tijuana General Hospital, Baja California, Mexico, to determine the degree of adherence to deep vein thrombosis prophylaxis guidelines. The instrument used was the Caprini's checklist for thrombosis risk assessment in adult patients. Results: the sample included 300 patient charts; 182 (60.7 %) were surgical patients and 118 were medical patients. Forty six patients (15.3 %) received deep vein thrombosis pharmacologic prophylaxis; 27.1 % of medical patients received deep vein thrombosis prophylaxis versus 8.3 % of surgical patients (p < 0.0001). Conclusions: our results show that adherence to DVT prophylaxis at our hospital is extremely low. Only 15.3 % of our patients at risk received treatment, and even patients with very high risk received treatment in less than 25 % of the cases. We have implemented strategies to increase compliance with clinical guidelines.

  5. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Zakharov, V.I.

    1977-01-01

    The present status of the quark-parton-gluon picture of deep inelastic scattering is reviewed. The general framework is mostly theoretical and covers investigations since 1970. Predictions of the parton model and of the asymptotically free field theories are compared with experimental data available. The valence quark approximation is concluded to be valid in most cases, but fails to account for the data on the total momentum transfer. On the basis of gluon corrections introduced to the parton model certain predictions concerning both the deep inelastic structure functions and form factors are made. The contributions of gluon exchanges and gluon bremsstrahlung are highlighted. Asymptotic freedom is concluded to be very attractive and provide qualitative explanation to some experimental observations (scaling violations, breaking of the Drell-Yan-West type relations). Lepton-nuclear scattering is pointed out to be helpful in probing the nature of nuclear forces and studying the space-time picture of the parton model

  6. Deep Energy Retrofit

    DEFF Research Database (Denmark)

    Zhivov, Alexander; Lohse, Rüdiger; Rose, Jørgen

    Deep Energy Retrofit – A Guide to Achieving Significant Energy User Reduction with Major Renovation Projects contains recommendations for characteristics of some of core technologies and measures that are based on studies conducted by national teams associated with the International Energy Agency...... Energy Conservation in Buildings and Communities Program (IEA-EBC) Annex 61 (Lohse et al. 2016, Case, et al. 2016, Rose et al. 2016, Yao, et al. 2016, Dake 2014, Stankevica et al. 2016, Kiatreungwattana 2014). Results of these studies provided a base for setting minimum requirements to the building...... envelope-related technologies to make Deep Energy Retrofit feasible and, in many situations, cost effective. Use of energy efficiency measures (EEMs) in addition to core technologies bundle and high-efficiency appliances will foster further energy use reduction. This Guide also provides best practice...

  7. Deep groundwater chemistry

    International Nuclear Information System (INIS)

    Wikberg, P.; Axelsen, K.; Fredlund, F.

    1987-06-01

    Starting in 1977 and up till now a number of places in Sweden have been investigated in order to collect the necessary geological, hydrogeological and chemical data needed for safety analyses of repositories in deep bedrock systems. Only crystalline rock is considered and in many cases this has been gneisses of sedimentary origin but granites and gabbros are also represented. Core drilled holes have been made at nine sites. Up to 15 holes may be core drilled at one site, the deepest down to 1000 m. In addition to this a number of boreholes are percussion drilled at each site to depths of about 100 m. When possible drilling water is taken from percussion drilled holes. The first objective is to survey the hydraulic conditions. Core drilled boreholes and sections selected for sampling of deep groundwater are summarized. (orig./HP)

  8. Deep Reinforcement Fuzzing

    OpenAIRE

    Böttinger, Konstantin; Godefroid, Patrice; Singh, Rishabh

    2018-01-01

    Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions...
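
    To make the Markov-decision-process framing concrete, here is a toy sketch of fuzzing as reinforcement learning. It uses tabular Q-learning instead of the deep Q-learning of the paper, and the target program, state abstraction, mutation set and coverage-based reward are invented placeholders rather than anything taken from the work itself.

```python
# Toy sketch: fuzzing as an MDP with (tabular) Q-learning. Everything concrete
# here (target program, state window, mutations, reward) is hypothetical.
import random
from collections import defaultdict

MUTATIONS = [
    lambda b, i: b[:i] + bytes([random.randrange(256)]) + b[i + 1:],   # overwrite a byte
    lambda b, i: b[:i] + b[i:i + 1] * 2 + b[i + 1:],                   # duplicate a byte
    lambda b, i: (b[:i] + b[i + 1:]) if len(b) > 1 else b,             # delete a byte
]

def target_program(data: bytes) -> set:
    """Stand-in for the program under test; returns the branches it 'covered'."""
    covered = {0}
    if data.startswith(b"FU"):
        covered.add(1)
        if len(data) > 4 and data[4] == 0x7F:
            covered.add(2)
    return covered

def fuzz(episodes=500, eps=0.2, alpha=0.5, gamma=0.9):
    Q = defaultdict(float)
    seen, data = set(), b"AAAAAA"
    for _ in range(episodes):
        state = data[:4]                                   # state = a window of the current input
        if random.random() < eps:
            action = random.randrange(len(MUTATIONS))
        else:
            action = max(range(len(MUTATIONS)), key=lambda a: Q[(state, a)])
        data = MUTATIONS[action](data, random.randrange(len(data)))
        new_cov = target_program(data) - seen              # reward = newly covered branches
        seen |= new_cov
        nxt = data[:4]
        best_next = max(Q[(nxt, a)] for a in range(len(MUTATIONS)))
        Q[(state, action)] += alpha * (len(new_cov) + gamma * best_next - Q[(state, action)])
    return seen

if __name__ == "__main__":
    random.seed(1)
    print("branches covered:", fuzz())
```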

  9. Deep Visual Attention Prediction

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements in human attention prediction, CNN-based attention models still need to leverage multi-scale features more efficiently. Our visual attention network is proposed to capture hierarchical saliency information from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is trained with deep supervision, where supervision is directly fed into multi-level layers, instead of previous approaches of providing supervision only at the output layer and propagating this supervision back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
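
    A compact PyTorch-flavoured sketch of the skip-layer idea is given below: saliency read-outs are taken from several convolutional stages, upsampled to the input size, supervised individually (deep supervision), and fused into the final prediction. The layer sizes and fusion rule are illustrative assumptions, not the network from the paper.

```python
# Illustrative skip-layer saliency network with deep supervision (assumes PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipLayerSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        # one 1x1 read-out per stage, each producing its own saliency map
        self.readouts = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])

    def forward(self, x):
        h, w = x.shape[-2:]
        maps = []
        for stage, readout in zip(self.stages, self.readouts):
            x = stage(x)
            s = F.interpolate(readout(x), size=(h, w), mode="bilinear", align_corners=False)
            maps.append(torch.sigmoid(s))                 # per-level prediction (deeply supervised)
        fused = torch.stack(maps).mean(0)                 # cooperation of global and local predictions
        return fused, maps

# training loss (sketch): BCE(fused, gt) + sum(BCE(m, gt) for m in maps)
```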

  10. Deep Red (Profondo Rosso)

    CERN Multimedia

    Cine Club

    2015-01-01

    Wednesday 29 April 2015 at 20:00 CERN Council Chamber    Deep Red (Profondo Rosso) Directed by Dario Argento (Italy, 1975) 126 minutes A psychic who can read minds picks up the thoughts of a murderer in the audience and soon becomes a victim. An English pianist gets involved in solving the murders, but finds many of his avenues of inquiry cut off by new murders, and he begins to wonder how the murderer can track his movements so closely. Original version Italian; English subtitles

  11. Reversible deep disposal

    International Nuclear Information System (INIS)

    2009-10-01

    This presentation, given by the national agency for radioactive waste management (ANDRA) at the October 8, 2009 meeting of the high committee for nuclear safety transparency and information (HCTISN), describes the concept of reversible deep disposal for high-level/long-lived radioactive wastes, as considered by ANDRA in the framework of the program law of June 28, 2006 on the sustainable management of radioactive materials and wastes. The document presents the social and political reasons for reversibility, the technical means considered (containers, disposal cavities, monitoring system, test facilities and industrial prototypes), and the decision process (progressive development and closure of the facility, public information and debate). (J.S.)

  12. Deep inelastic neutron scattering

    International Nuclear Information System (INIS)

    Mayers, J.

    1989-03-01

    The report is based on an invited talk given at a conference on ''Neutron Scattering at ISIS: Recent Highlights in Condensed Matter Research'', which was held in Rome, 1988, and is intended as an introduction to the techniques of Deep Inelastic Neutron Scattering. The subject is discussed under the following topic headings:- the impulse approximation I.A., scaling behaviour, kinematical consequences of energy and momentum conservation, examples of measurements, derivation of the I.A., the I.A. in a harmonic system, and validity of the I.A. in neutron scattering. (U.K.)

  13. [Deep mycoses rarely described].

    Science.gov (United States)

    Charles, D

    1986-01-01

    Besides the well-known deep mycoses (histoplasmosis, candidosis, cryptococcosis), there are other mycoses that are less frequently described. Some of them are endemic in certain countries: South American blastomycosis in Brazil, coccidioidomycosis in California; others are cosmopolitan and may affect everyone (sporotrichosis) or only immunodeficient persons (mucormycosis). They do not spare Africa, where we may encounter basidiobolomycosis, rhinophycomycosis, dermatophytosis, sporotrichosis and, more recently reported, rhinosporidiosis. Important therapeutic progress has been accomplished with amphotericin B and with antifungal imidazole compounds (miconazole and ketoconazole). Surgical intervention is sometimes recommended in chromomycosis and rhinosporidiosis.

  14. Deep penetration calculations

    International Nuclear Information System (INIS)

    Thompson, W.L.; Deutsch, O.L.; Booth, T.E.

    1980-04-01

    Several Monte Carlo techniques are compared in the transport of neutrons of different source energies through two different deep-penetration problems, each with two parts. The first problem involves transmission through a 200-cm concrete slab. The second problem is a 90° bent pipe jacketed by concrete. In one case the pipe is void, and in the other it is filled with liquid sodium. Calculations are made with two different Los Alamos Monte Carlo codes: the continuous-energy code MCNP and the multigroup code MCMG.
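
    The following toy analog Monte Carlo estimate of transmission through a thick slab is included only to illustrate why such deep-penetration problems are demanding: with many mean free paths of shielding, almost no analog histories score, which is what motivates comparing variance-reduction techniques. The one-speed cross sections and isotropic scattering below are invented and have nothing to do with the MCNP/MCMG models of the report.

```python
# Toy analog Monte Carlo slab-transmission estimate (illustrative physics only).
import math
import random

def slab_transmission(thickness_cm=200.0, sigma_t=0.05, scatter_prob=0.6, histories=200_000):
    transmitted = 0
    for _ in range(histories):
        x, mu = 0.0, 1.0                                            # start on the surface, heading inward
        while True:
            x += mu * (-math.log(1.0 - random.random()) / sigma_t)  # sample a free flight
            if x >= thickness_cm:
                transmitted += 1                                    # neutron escaped through the far face
                break
            if x < 0.0 or random.random() > scatter_prob:           # leaked back out, or absorbed
                break
            mu = 2.0 * random.random() - 1.0                        # isotropic scattering angle
    return transmitted / histories

if __name__ == "__main__":
    random.seed(0)
    print("estimated transmission probability:", slab_transmission())
```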

  15. Deep Super Learner: A Deep Ensemble for Classification Problems

    OpenAIRE

    Young, Steven; Abdou, Tamer; Bener, Ayse

    2018-01-01

    Deep learning has become very popular for tasks such as predictive modeling and pattern recognition in handling big data. Deep learning is a powerful machine learning method that extracts lower level features and feeds them forward for the next layer to identify higher level features that improve performance. However, deep neural networks have drawbacks, which include many hyper-parameters and infinite architectures, opaqueness into results, and relatively slower convergence on smaller datase...

  16. Deep sea biophysics

    International Nuclear Information System (INIS)

    Yayanos, A.A.

    1982-01-01

    A collection of deep-sea bacterial cultures was completed. Procedures were instituted to shelter the culture collection from accidental warming. A substantial database on the rates of reproduction of more than 100 strains of bacteria from that collection was obtained from experiments and the analysis of that data was begun. The data on the rates of reproduction were obtained under conditions of temperature and pressure found in the deep sea. The experiments were facilitated by inexpensively fabricated pressure vessels, by the streamlining of the methods for the study of kinetics at high pressures, and by computer-assisted methods. A polybarothermostat was used to study the growth of bacteria along temperature gradients at eight distinct pressures. This device should allow for the study of microbial processes in the temperature field simulating the environment around buried HLW. It is small enough to allow placement in a radiation field in future studies. A flow fluorocytometer was fabricated. This device will be used to determine the DNA content per cell in bacteria grown in laboratory culture and in microorganisms in samples from the ocean. The technique will be tested for its rapidity in determining the concentration of cells (standing stock of microorganisms) in samples from the ocean.

  17. Deep Learning in Radiology.

    Science.gov (United States)

    McBee, Morgan P; Awan, Omer A; Colucci, Andrew T; Ghobadi, Comeron W; Kadom, Nadja; Kansagra, Akash P; Tridandapani, Srini; Auffermann, William F

    2018-03-29

    As radiology is inherently a data-driven specialty, it is especially conducive to utilizing data processing techniques. One such technique, deep learning (DL), has become a remarkably powerful tool for image processing in recent years. In this work, the Association of University Radiologists Radiology Research Alliance Task Force on Deep Learning provides an overview of DL for the radiologist. This article aims to present an overview of DL in a manner that is understandable to radiologists; to examine past, present, and future applications; as well as to evaluate how radiologists may benefit from this remarkable new tool. We describe several areas within radiology in which DL techniques are having the most significant impact: lesion or disease detection, classification, quantification, and segmentation. The legal and ethical hurdles to implementation are also discussed. By taking advantage of this powerful tool, radiologists can become increasingly more accurate in their interpretations with fewer errors and spend more time to focus on patient care. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  18. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair

    2017-05-12

    We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field-of-view and depth-of-field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably better resolution, matching the performance of higher numerical aperture lenses, also significantly surpassing their limited field-of-view and depth-of-field. These results are transformative for various fields that use microscopy tools, including e.g., life sciences, where optical microscopy is considered as one of the most widely used and deployed techniques. Beyond such applications, our presented approach is broadly applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better and better as they continue to image specimen and establish new transformations among different modes of imaging.

  19. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
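
    The loss structure described above (compact same-class pairs, separated different-class pairs, and a penalty on the source/target distribution gap at the top layer) can be sketched roughly as follows. The margin, weighting and the simple linear-kernel MMD are assumptions for the illustration, not the exact objective of the paper.

```python
# Rough sketch of a DTML-like objective on a batch of embeddings (numpy).
import numpy as np

def pairwise_sq_dists(Z):
    g = Z @ Z.T
    d = np.diag(g)
    return d[:, None] + d[None, :] - 2.0 * g

def dtml_like_loss(Z_src, y_src, Z_tgt, margin=1.0, beta=0.1):
    D = pairwise_sq_dists(Z_src)
    same = (y_src[:, None] == y_src[None, :]) & ~np.eye(len(y_src), dtype=bool)
    diff = y_src[:, None] != y_src[None, :]
    intra = D[same].mean() if same.any() else 0.0                            # minimize intra-class variation
    inter = np.maximum(0.0, margin - D[diff]).mean() if diff.any() else 0.0  # hinge pushes classes apart
    mmd = np.sum((Z_src.mean(0) - Z_tgt.mean(0)) ** 2)                       # source/target divergence (top layer)
    return intra + inter + beta * mmd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Zs, ys, Zt = rng.normal(size=(8, 4)), rng.integers(0, 2, size=8), rng.normal(size=(6, 4))
    print(dtml_like_loss(Zs, ys, Zt))
```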

  20. Deep Reinforcement Learning: An Overview

    OpenAIRE

    Li, Yuxi

    2017-01-01

    We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsuperv...

  1. Deep Feature Consistent Variational Autoencoder

    OpenAIRE

    Hou, Xianxu; Shen, Linlin; Sun, Ke; Qiu, Guoping

    2016-01-01

    We present a novel method for constructing Variational Autoencoder (VAE). Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE, which ensures the VAE's output to preserve the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning works such as style transfer, we employ a pre-trained deep convolutional neural net...
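
    The core idea (replace the pixel-wise reconstruction term of a VAE with a feature-consistency term computed by a fixed, pre-trained network) can be sketched as below. The `feature_net` iterable, the chosen layers and the unweighted sum are placeholders; the paper builds on a pre-trained deep convolutional network whose exact layers and weights are not reproduced here.

```python
# Sketch of a deep-feature-consistent VAE loss (assumes PyTorch; feature_net is a
# frozen, pre-trained feature extractor such as an nn.Sequential of conv layers).
import torch
import torch.nn.functional as F

def feature_consistent_vae_loss(x, x_recon, mu, logvar, feature_net, layers=(1, 3, 5)):
    # standard VAE KL term
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    # feature consistency: match hidden activations of the input and the reconstruction
    feat_loss = x.new_zeros(())
    h_x, h_r = x, x_recon
    for i, layer in enumerate(feature_net):
        h_x, h_r = layer(h_x), layer(h_r)
        if i in layers:
            feat_loss = feat_loss + F.mse_loss(h_r, h_x)
    return kl + feat_loss
```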

  2. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several subfields of deep learning, including a specific tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results, our basic implementation of a convolutional restricted Boltzmann machine on the Modified National Institute of Standards and Technology database (MNIST), and we explain how to use deep learning networks to assist in our development of a robust gender classification system.

  3. Deep learning? What deep learning? | Fourie | South African ...

    African Journals Online (AJOL)

    In teaching generally over the past twenty years, there has been a move towards teaching methods that encourage deep, rather than surface approaches to learning. The reason for this being that students, who adopt a deep approach to learning are considered to have learning outcomes of a better quality and desirability ...

  4. Deep sea radionuclides

    International Nuclear Information System (INIS)

    Kanisch, G.; Vobach, M.

    1993-01-01

    Every year since 1979, either in spring or in summer, the fishing research vessel 'Walther Herwig' goes to the North Atlantic disposal areas of solid radioactive wastes, and, for comparative purposes, to other areas, in order to collect water samples, plankton and nekton, and, from the deep sea bed, sediment samples and benthos organisms. In addition to data on the radionuclide contents of various media, information about the plankton, nekton and benthos organisms living in those areas and about their biomasses could be gathered. The investigations are aimed at acquiring scientifically founded knowledge of the uptake of radioactive substances by microorganisms, and their migration from the sea bottom to the areas used by man. (orig.)

  5. Deep inelastic phenomena

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    The experimental situation of deep inelastic scattering of electrons (muons) is reviewed. A brief history of experimentation highlights Mohr and Nicoll's 1932 experiment on electron-atom scattering and Hofstadter's 1950 experiment on electron-nucleus scattering. The phenomenology of electron-nucleon scattering carried out between 1960 and 1970 is described, with emphasis on the parton model and scaling. Experiments at SLAC and FNAL since 1974 exhibit scaling violations. Three muon-nucleon scattering experiments at BFP, BCDMA, and EMA, currently producing new results in the high Q² domain, suggest a rather flat behaviour of the structure function at fixed x as a function of Q². It is seen that the structure measured in DIS can then be projected into a pure hadronic process to predict a cross section. Proton-neutron difference, moment analysis, and Drell-Yan pairs are also considered.

  6. Context and Deep Learning Design

    Science.gov (United States)

    Boyle, Tom; Ravenscroft, Andrew

    2012-01-01

    Conceptual clarification is essential if we are to establish a stable and deep discipline of technology enhanced learning. The technology is alluring; this can distract from deep design in a surface rush to exploit the affordances of the new technology. We need a basis for design, and a conceptual unit of organization, that are applicable across…

  7. Deep Learning Fluid Mechanics

    Science.gov (United States)

    Barati Farimani, Amir; Gomes, Joseph; Pande, Vijay

    2017-11-01

    We have developed a new data-driven modeling paradigm for the rapid inference and solution of the constitutive equations of fluid mechanics by deep learning models. Using generative adversarial networks (GANs), we train models for the direct generation of solutions to steady-state heat conduction and incompressible fluid flow without knowledge of the underlying governing equations. Rather than using artificial neural networks to approximate the solution of the constitutive equations, GANs can directly generate the solutions to these equations conditional upon an arbitrary set of boundary conditions. Both models predict temperature, velocity and pressure fields with great test accuracy (>99.5%). Our framework for inferring and generating the solutions of partial differential equations can be applied to any physical phenomenon and can be used to learn directly from experiments where the underlying physical model is complex or unknown. We have also shown that our framework can be used to couple multiple physics simultaneously, making it amenable to tackling multi-physics problems.
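
    Schematically, the setup amounts to a conditional GAN in which the generator maps a boundary-condition field directly to a solution field and the discriminator judges (boundary-condition, field) pairs; a minimal PyTorch-style sketch is given below. The channel counts and layer sizes are arbitrary placeholders, not the models used in this work.

```python
# Placeholder conditional-GAN modules for field prediction (assumes PyTorch).
import torch
import torch.nn as nn

class FieldGenerator(nn.Module):
    def __init__(self, in_ch=2, out_ch=1):                # e.g. boundary mask + boundary values
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1))

    def forward(self, bc):                                 # bc: (B, in_ch, H, W)
        return self.net(bc)                                # predicted steady-state field

class FieldDiscriminator(nn.Module):
    def __init__(self, in_ch=3):                           # boundary channels + candidate field
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, bc, field):
        return self.net(torch.cat([bc, field], dim=1))     # real/fake score for the pair
```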

  8. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
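
    The input/output convention described above can be sketched as follows: a window of neighbouring frames is stacked along the channel axis and a CNN is trained end-to-end to output the deblurred centre frame. The tiny network below is a placeholder for illustration, not the architecture or alignment strategy of the paper.

```python
# Placeholder frame-stacking deblurring CNN (assumes PyTorch).
import torch
import torch.nn as nn

class StackDeblurNet(nn.Module):
    def __init__(self, num_frames=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * num_frames, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 3, 5, padding=2))

    def forward(self, frames):                 # frames: (B, num_frames, 3, H, W)
        b, n, c, h, w = frames.shape
        x = frames.reshape(b, n * c, h, w)     # stack neighbouring frames along channels
        return self.net(x)                     # predicted sharp centre frame
```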

  9. Deep space telescopes

    CERN Multimedia

    CERN. Geneva

    2006-01-01

    The short series of seminars will address results and aims of current and future space astrophysics as the cultural framework for the development of deep space telescopes. It will then present such new tools as they are currently available to, or imagined by, the scientific community, in the context of the science plans of ESA and of all major world space agencies. Ground-based astronomy, in the 400 years since Galileo’s telescope, has given us a profound phenomenological comprehension of our Universe, but has traditionally been limited to the narrow band(s) to which our terrestrial atmosphere is transparent. Celestial objects, however, do not care about our limitations, and distribute most of the information about their physics throughout the complete electromagnetic spectrum. Such information is there for the taking, from millimeter wavelengths to gamma rays. Forty years of astronomy from space, now covering most of the e.m. spectrum, have thus given us a better understanding of our physical Universe than t...

  10. Deep inelastic final states

    International Nuclear Information System (INIS)

    Girardi, G.

    1980-11-01

    In these lectures we attempt to describe the final states of deep inelastic scattering as given by QCD. In the first section we shall briefly comment on the parton model and give the main properties of decay functions which are of interest for the study of semi-inclusive leptoproduction. The second section is devoted to the QCD approach to single hadron leptoproduction. First we recall basic facts on QCD logs and then derive the evolution equations for the fragmentation functions. For this purpose we make a short detour into e⁺e⁻ annihilation. The rest of the section is a study of the factorization of long-distance effects associated with the initial and final states. We then show how, when one includes next-to-leading QCD corrections, one induces factorization breaking, and describe the double moments useful for testing such effects. The next section contains a review of the QCD jets in the hadronic final state. We begin by introducing the notion of an infrared-safe variable and defining a few useful examples. Distributions in these variables are studied to first order in QCD, with some comments on the resummation of logs encountered in higher orders. Finally, the last section is a 'gallimaufry' of jet studies.

  11. Deep Mapping and Spatial Anthropology

    Directory of Open Access Journals (Sweden)

    Les Roberts

    2016-01-01

    This paper provides an introduction to the Humanities Special Issue on “Deep Mapping”. It sets out the rationale for the collection and explores the broad-ranging nature of perspectives and practices that fall within the “undisciplined” interdisciplinary domain of spatial humanities. Sketching a cross-current of ideas that have begun to coalesce around the concept of “deep mapping”, the paper argues that rather than attempting to outline a set of defining characteristics and “deep” cartographic features, a more instructive approach is to pay closer attention to the multivalent ways deep mapping is performatively put to work. Casting a critical and reflexive gaze over the developing discourse of deep mapping, it is argued that what deep mapping “is” cannot be reduced to the otherwise a-spatial and a-temporal fixity of the “deep map”. In this respect, as an undisciplined survey of this increasingly expansive field of study and practice, the paper explores the ways in which deep mapping can engage broader discussion around questions of spatial anthropology.

  12. Deep learning for computational chemistry.

    Science.gov (United States)

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationships, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  13. Deep learning for computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Garrett B. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Hodas, Nathan O. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Vishnu, Abhinav [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354

    2017-03-08

    The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  14. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu; Han, Renmin; Bi, Chongwei; Li, Mo; Wang, Sheng; Gao, Xin

    2017-01-01

    or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments

  15. Deep UV LEDs

    Science.gov (United States)

    Han, Jung; Amano, Hiroshi; Schowalter, Leo

    2014-06-01

    Deep ultraviolet (DUV) photons interact strongly with a broad range of chemical and biological molecules; compact DUV light sources could enable a wide range of applications in chemi/bio-sensing, sterilization, agriculture, and industrial curing. The much shorter wavelength also results in useful characteristics related to optical diffraction (for lithography) and scattering (non-line-of-sight communication). The family of III-N (AlGaInN) compound semiconductors offers a tunable energy gap from infrared to DUV. While InGaN-based blue light emitters have been the primary focus for the obvious application of solid state lighting, there is a growing interest in the development of efficient UV and DUV light-emitting devices. In the past few years we have witnessed an increasing investment from both government and industry sectors to further the state of DUV light-emitting devices. The contributions in Semiconductor Science and Technology 's special issue on DUV devices provide an up-to-date snapshot covering many relevant topics in this field. Given the expected importance of bulk AlN substrate in DUV technology, we are pleased to include a review article by Hartmann et al on the growth of AlN bulk crystal by physical vapour transport. The issue of polarization field within the deep ultraviolet LEDs is examined in the article by Braut et al. Several commercial companies provide useful updates in their development of DUV emitters, including Nichia (Fujioka et al ), Nitride Semiconductors (Muramoto et al ) and Sensor Electronic Technology (Shatalov et al ). We believe these articles will provide an excellent overview of the state of technology. The growth of AlGaN heterostructures by molecular beam epitaxy, in contrast to the common organo-metallic vapour phase epitaxy, is discussed by Ivanov et al. Since hexagonal boron nitride (BN) has received much attention as both a UV and a two-dimensional electronic material, we believe it serves readers well to include the

  16. DEEP INFILTRATING ENDOMETRIOSIS

    Directory of Open Access Journals (Sweden)

    Martina Ribič-Pucelj

    2018-02-01

    Background: Endometriosis is not considered a unified disease, but a disease encompassing three different forms differentiated by aetiology and pathogenesis: peritoneal endometriosis, ovarian endometriosis and deep infiltrating endometriosis (DIE). The disease is classified as DIE when the lesions penetrate 5 mm or more into the retroperitoneal space. The estimated incidence of endometriosis in women of reproductive age ranges from 10–15 % and that of DIE from 3–10 %, the highest being in infertile women and in those with chronic pelvic pain. The leading symptoms of DIE are chronic pelvic pain, which increases with age and correlates with the depth of infiltration, and infertility. The most important diagnostic procedures are the patient's history and a proper gynecological examination. The diagnosis is confirmed with laparoscopy. DIE can affect, besides the reproductive organs, also the bowel, bladder and ureters; therefore, additional diagnostic procedures must be performed preoperatively to confirm or exclude the involvement of these organs. Endometriosis is a hormone-dependent disease, therefore several hormonal treatment regimens are used to suppress estrogen production, but the symptoms recur soon after cessation of the treatment. At the moment, surgical treatment with excision of all lesions, including those of the bowel, bladder and ureters, is the method of choice but frequently requires an interdisciplinary approach. Surgical treatment significantly reduces pain and improves fertility in infertile patients. Conclusions: DIE is not a rare form of endometriosis, and it is characterized by chronic pelvic pain and infertility. Medical treatment is not efficient. The method of choice is surgical treatment with excision of all lesions. It significantly reduces pelvic pain and enables high spontaneous and IVF pregnancy rates. Therefore such patients should be treated at centres with experience in the treatment of DIE and with the possibility of an interdisciplinary approach.

  17. Telepresence for Deep Space Missions

    Data.gov (United States)

    National Aeronautics and Space Administration — Incorporating telepresence technologies into deep space mission operations can give the crew and ground personnel the impression that they are in a location at time...

  18. Hybrid mask for deep etching

    KAUST Repository

    Ghoneim, Mohamed T.

    2017-01-01

    Deep reactive ion etching is essential for creating high aspect ratio micro-structures for microelectromechanical systems, sensors and actuators, and emerging flexible electronics. A novel hybrid dual soft/hard mask bilayer may be deposited during

  19. Deep Learning and Bayesian Methods

    OpenAIRE

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such meth...

  20. Density functionals from deep learning

    OpenAIRE

    McMahon, Jeffrey M.

    2016-01-01

    Density-functional theory is a formally exact description of a many-body quantum system in terms of its density; in practice, however, approximations to the universal density functional are required. In this work, a model based on deep learning is developed to approximate this functional. Deep learning allows computational models that are capable of naturally discovering intricate structure in large and/or high-dimensional data sets, with multiple levels of abstraction. As no assumptions are ...

  1. Deep Unfolding for Topic Models.

    Science.gov (United States)

    Chien, Jen-Tzung; Lee, Chao-Hsi

    2018-02-01

    Deep unfolding provides an approach to integrate the probabilistic generative models and the deterministic neural networks. Such an approach is benefited by deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops the unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, the unsupervised and supervised topic models are inferred via the variational inference algorithm where the model parameters are estimated by maximizing the lower bound of logarithm of marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and the tied model parameters across inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in learning process via deep unfolding inference (DUI). The inference procedure is treated as the layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to the exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.
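
    To make the unfolding idea concrete, the sketch below treats each inference iteration as a layer with its own (untied) topic matrix and applies one exponentiated update of the document-topic proportions per layer; trained end-to-end, such untied layers could be optimized by back-propagation as the abstract describes. The specific update form and initialization here are assumptions, not the paper's derivation.

```python
# Toy "unfolded" topic inference: one multiplicative/exponentiated update per layer (numpy).
import numpy as np

def unfolded_topic_inference(x, topic_layers, eta=1.0):
    """x: (V,) word counts; topic_layers: list of (K, V) per-layer topic matrices."""
    K = topic_layers[0].shape[0]
    theta = np.full(K, 1.0 / K)                     # start from uniform topic proportions
    for beta in topic_layers:                       # each unfolded iteration = one layer
        recon = theta @ beta + 1e-10                # predicted word distribution
        grad = beta @ (x / recon)                   # multiplicative signal from the data
        theta = theta * grad ** eta                 # exponentiated update
        theta /= theta.sum()
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = [rng.dirichlet(np.ones(50), size=5) for _ in range(3)]   # 3 untied layers, 5 topics
    doc = rng.integers(0, 3, size=50).astype(float)
    print(unfolded_topic_inference(doc, layers))
```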

  2. Hot, deep origin of petroleum: deep basin evidence and application

    Science.gov (United States)

    Price, Leigh C.

    1978-01-01

    Use of the model of a hot deep origin of oil places rigid constraints on the migration and entrapment of crude oil. Specifically, oil originating from depth migrates vertically up faults and is emplaced in traps at shallower depths. Review of petroleum-producing basins worldwide shows oil occurrence in these basins conforms to the restraints of and therefore supports the hypothesis. Most of the world's oil is found in the very deepest sedimentary basins, and production over or adjacent to the deep basin is cut by or directly updip from faults dipping into the basin deep. Generally the greater the fault throw the greater the reserves. Fault-block highs next to deep sedimentary troughs are the best target areas by the present concept. Traps along major basin-forming faults are quite prospective. The structural style of a basin governs the distribution, types, and amounts of hydrocarbons expected and hence the exploration strategy. Production in delta depocenters (Niger) is in structures cut by or updip from major growth faults, and structures not associated with such faults are barren. Production in block fault basins is on horsts next to deep sedimentary troughs (Sirte, North Sea). In basins whose sediment thickness, structure and geologic history are known to a moderate degree, the main oil occurrences can be specifically predicted by analysis of fault systems and possible hydrocarbon migration routes. Use of the concept permits the identification of significant targets which have either been downgraded or ignored in the past, such as production in or just updip from thrust belts, stratigraphic traps over the deep basin associated with major faulting, production over the basin deep, and regional stratigraphic trapping updip from established production along major fault zones.

  3. How Stressful Is "Deep Bubbling"?

    Science.gov (United States)

    Tyrmi, Jaana; Laukkanen, Anne-Maria

    2017-03-01

    Water resistance therapy by phonating through a tube into the water is used to treat dysphonia. Deep submersion (≥10 cm in water, "deep bubbling") is used for hypofunctional voice disorders. Using it with caution is recommended to avoid vocal overloading. This experimental study aimed to investigate how strenuous "deep bubbling" is. Fourteen subjects, half of them with voice training, repeated the syllable [pa:] in comfortable speaking pitch and loudness, loudly, and in strained voice. Thereafter, they phonated a vowel-like sound both in comfortable loudness and loudly into a glass resonance tube immersed 10 cm into the water. Oral pressure, contact quotient (CQ, calculated from electroglottographic signal), and sound pressure level were studied. The peak oral pressure P(oral) during [p] and shuttering of the outer end of the tube was measured to estimate the subglottic pressure P(sub) and the mean P(oral) during vowel portions to enable calculation of transglottic pressure P(trans). Sensations during phonation were reported with an open-ended interview. P(sub) and P(oral) were higher in "deep bubbling" and P(trans) lower than in loud syllable phonation, but the CQ did not differ significantly. Similar results were obtained for the comparison between loud "deep bubbling" and strained phonation, although P(sub) did not differ significantly. Most of the subjects reported "deep bubbling" to be stressful only for respiratory and lip muscles. No big differences were found between trained and untrained subjects. The CQ values suggest that "deep bubbling" may increase vocal fold loading. Further studies should address impact stress during water resistance exercises. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  4. Accelerating Deep Learning with Shrinkage and Recall

    OpenAIRE

    Zheng, Shuai; Vishnu, Abhinav; Ding, Chris

    2016-01-01

    Deep Learning is a very powerful machine learning model. Deep Learning trains a large number of parameters for multiple layers and is very slow when data is in large scale and the architecture size is large. Inspired from the shrinking technique used in accelerating computation of Support Vector Machines (SVM) algorithm and screening technique used in LASSO, we propose a shrinking Deep Learning with recall (sDLr) approach to speed up deep learning computation. We experiment shrinking Deep Lea...

  5. What Really is Deep Learning Doing?

    OpenAIRE

    Xiong, Chuyu

    2017-01-01

    Deep learning has achieved a great success in many areas, from computer vision to natural language processing, to game playing, and much more. Yet, what deep learning is really doing is still an open question. There are a lot of works in this direction. For example, [5] tried to explain deep learning by group renormalization, and [6] tried to explain deep learning from the view of functional approximation. In order to address this very crucial question, here we see deep learning from perspect...

  6. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  7. Deep Learning in Drug Discovery.

    Science.gov (United States)

    Gawehn, Erik; Hiss, Jan A; Schneider, Gisbert

    2016-01-01

    Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Eric Davidson and deep time.

    Science.gov (United States)

    Erwin, Douglas H

    2017-10-13

    Eric Davidson had a deep and abiding interest in the role developmental mechanisms played in generating evolutionary patterns documented in deep time, from the origin of the euechinoids to the processes responsible for the morphological architectures of major animal clades. Although not an evolutionary biologist, Davidson's interests long preceded the current excitement over comparative evolutionary developmental biology. Here I discuss three aspects at the intersection between his research and evolutionary patterns in deep time: First, understanding the mechanisms of body plan formation, particularly those associated with the early diversification of major metazoan clades. Second, a critique of early claims about ancestral metazoans based on the discoveries of highly conserved genes across bilaterian animals. Third, Davidson's own involvement in paleontology through a collaborative study of the fossil embryos from the Ediacaran Doushantuo Formation in south China.

  9. Deep Learning in Gastrointestinal Endoscopy.

    Science.gov (United States)

    Patel, Vivek; Armstrong, David; Ganguli, Malika; Roopra, Sandeep; Kantipudi, Neha; Albashir, Siwar; Kamath, Markad V

    2016-01-01

    Gastrointestinal (GI) endoscopy is used to inspect the lumen or interior of the GI tract for several purposes, including, (1) making a clinical diagnosis, in real time, based on the visual appearances; (2) taking targeted tissue samples for subsequent histopathological examination; and (3) in some cases, performing therapeutic interventions targeted at specific lesions. GI endoscopy is therefore predicated on the assumption that the operator-the endoscopist-is able to identify and characterize abnormalities or lesions accurately and reproducibly. However, as in other areas of clinical medicine, such as histopathology and radiology, many studies have documented marked interobserver and intraobserver variability in lesion recognition. Thus, there is a clear need and opportunity for techniques or methodologies that will enhance the quality of lesion recognition and diagnosis and improve the outcomes of GI endoscopy. Deep learning models provide a basis to make better clinical decisions in medical image analysis. Biomedical image segmentation, classification, and registration can be improved with deep learning. Recent evidence suggests that the application of deep learning methods to medical image analysis can contribute significantly to computer-aided diagnosis. Deep learning models are usually considered to be more flexible and provide reliable solutions for image analysis problems compared to conventional computer vision models. The use of fast computers offers the possibility of real-time support that is important for endoscopic diagnosis, which has to be made in real time. Advanced graphics processing units and cloud computing have also favored the use of machine learning, and more particularly, deep learning for patient care. This paper reviews the rapidly evolving literature on the feasibility of applying deep learning algorithms to endoscopic imaging.

  10. Deep mycoses in Amazon region.

    Science.gov (United States)

    Talhari, S; Cunha, M G; Schettini, A P; Talhari, A C

    1988-09-01

    Patients with deep mycoses diagnosed in dermatologic clinics of Manaus (state of Amazonas, Brazil) were studied from November 1973 to December 1983. They came from the Brazilian states of Amazonas, Pará, Acre, and Rondônia and the Federal Territory of Roraima. All of these regions, with the exception of Pará, are situated in the western part of the Amazon Basin. The climatic conditions in this region are almost the same: tropical forest, high rainfall, and a mean annual temperature of 26 °C. The deep mycoses diagnosed, in order of frequency, were Jorge Lobo's disease, paracoccidioidomycosis, chromomycosis, sporotrichosis, mycetoma, cryptococcosis, zygomycosis, and histoplasmosis.

  11. Producing deep-water hydrocarbons

    International Nuclear Information System (INIS)

    Pilenko, Thierry

    2011-01-01

    Several studies have related the history of and progress in offshore oil and gas production, in relation to reserves and to the techniques for producing oil offshore. The intention herein is not to review these studies but rather to argue that the activities of prospecting for and producing deep-water oil and gas call for a combination of technology and project management and, above all, of devotion and innovation. Without this sense of commitment motivating the men and women in this industry, the human adventure of deep-water production would never have taken place.

  12. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu

    2017-12-23

    Motivation: Oxford Nanopore sequencing is a rapidly developed sequencing technology in recent years. To keep pace with the explosion of the downstream data analytical tools, a versatile Nanopore sequencing simulator is needed to complement the experimental data as well as to benchmark those newly developed tools. However, all the currently available simulators are based on simple statistics of the produced reads, which have difficulty in capturing the complex nature of the Nanopore sequencing procedure, the main task of which is the generation of raw electrical current signals. Results: Here we propose a deep learning based simulator, DeepSimulator, to mimic the entire pipeline of Nanopore sequencing. Starting from a given reference genome or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments performed across four species show that the signals generated by our context-dependent model are more similar to the experimentally obtained signals than the ones generated by the official context-independent pore model. In terms of the simulated reads, we provide a parameter interface to users so that they can obtain the reads with different accuracies ranging from 83% to 97%. The reads generated by the default parameter have almost the same properties as the real data. Two case studies demonstrate the application of DeepSimulator to benefit the development of tools in de novo assembly and in low coverage SNP detection. Availability: The software can be accessed freely at: https://github.com/lykaust15/DeepSimulator.
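
    The overall shape of the pipeline (sequence in, raw current signal out, then base-calling back to a read) can be mocked up as below. The k-mer level table, dwell-time model and noise level are invented stand-ins: the actual DeepSimulator uses a context-dependent deep learning model for the signal and an external base-caller, neither of which is reproduced here.

```python
# Toy mock of a Nanopore signal-simulation pipeline (illustrative only).
import numpy as np

def simulate_signal(seq, k=3, samples_per_base=8, rng=None):
    rng = rng or np.random.default_rng(0)
    kmer_level = {}                                          # invented k-mer -> mean current table
    signal = []
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer not in kmer_level:
            kmer_level[kmer] = rng.uniform(60.0, 120.0)      # pA-scale mean level
        n = rng.poisson(samples_per_base) + 1                # variable dwell time per position
        signal.extend(kmer_level[kmer] + rng.normal(0.0, 2.0, size=n))  # additive noise
    return np.asarray(signal)

if __name__ == "__main__":
    raw = simulate_signal("ACGTTGCAACGT")
    print(len(raw), "raw samples; a base-caller would turn these back into a simulated read")
```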

  13. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-09-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a study to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. An assessment of historical deep gas well drilling activity and forecast of future trends was completed during the first six months of the project; this segment of the project was covered in Technical Project Report No. 1. The second progress report covers the next six months of the project during which efforts were primarily split between summarizing rock mechanics and fracture growth in deep reservoirs and contacting operators about case studies of deep gas well stimulation.

  14. STIMULATION TECHNOLOGIES FOR DEEP WELL COMPLETIONS

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2003-06-01

    The Department of Energy (DOE) is sponsoring a Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a project to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. Phase 1 was recently completed and consisted of assessing deep gas well drilling activity (1995-2007) and an industry survey on deep gas well stimulation practices by region. Of the 29,000 oil, gas and dry holes drilled in 2002, about 300 were deep wells; 25% were dry, 50% were high temperature/high pressure completions and 25% were simply deep completions. South Texas has about 30% of these wells, Oklahoma 20%, Gulf of Mexico Shelf 15% and the Gulf Coast about 15%. The Rockies represent only 2% of deep drilling. Of the 60 operators who drill deep and HTHP wells, the top 20 drill almost 80% of the wells. Six operators drill half the U.S. deep wells. Deep drilling peaked at 425 wells in 1998 and fell to 250 in 1999. Drilling is expected to rise through 2004, after which drilling should cycle down as overall drilling declines.

  15. Deep Space Climate Observatory (DSCOVR)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Deep Space Climate ObserVatoRy (DSCOVR) satellite is a NOAA operated asset at the first Lagrange (L1) point. The primary space weather instrument is the PlasMag...

  16. Ploughing the deep sea floor.

    Science.gov (United States)

    Puig, Pere; Canals, Miquel; Company, Joan B; Martín, Jacobo; Amblas, David; Lastras, Galderic; Palanques, Albert

    2012-09-13

    Bottom trawling is a non-selective commercial fishing technique whereby heavy nets and gear are pulled along the sea floor. The direct impact of this technique on fish populations and benthic communities has received much attention, but trawling can also modify the physical properties of seafloor sediments, water–sediment chemical exchanges and sediment fluxes. Most of the studies addressing the physical disturbances of trawl gear on the seabed have been undertaken in coastal and shelf environments, however, where the capacity of trawling to modify the seafloor morphology coexists with high-energy natural processes driving sediment erosion, transport and deposition. Here we show that on upper continental slopes, the reworking of the deep sea floor by trawling gradually modifies the shape of the submarine landscape over large spatial scales. We found that trawling-induced sediment displacement and removal from fishing grounds causes the morphology of the deep sea floor to become smoother over time, reducing its original complexity as shown by high-resolution seafloor relief maps. Our results suggest that in recent decades, following the industrialization of fishing fleets, bottom trawling has become an important driver of deep seascape evolution. Given the global dimension of this type of fishery, we anticipate that the morphology of the upper continental slope in many parts of the world’s oceans could be altered by intensive bottom trawling, producing comparable effects on the deep sea floor to those generated by agricultural ploughing on land.

  17. FOSTERING DEEP LEARNING AMONGST ENTREPRENEURSHIP ...

    African Journals Online (AJOL)

    An important prerequisite for achieving this objective is that lecturers ensure that students adopt a deep learning approach towards the entrepreneurship courses being taught, as this will enable them to truly understand key entrepreneurial concepts and strategies and how they can be implemented in the real ...

  18. Deep Space Gateway "Recycler" Mission

    Science.gov (United States)

    Graham, L.; Fries, M.; Hamilton, J.; Landis, R.; John, K.; O'Hara, W.

    2018-02-01

    Use of the Deep Space Gateway provides a hub for a reusable planetary sample return vehicle for missions to gather star dust as well as samples from various parts of the solar system, including main-belt asteroids, near-Earth asteroids, and a Mars moon.

  19. Deep freezers with heat recovery

    Energy Technology Data Exchange (ETDEWEB)

    Kistler, J.

    1981-09-02

    Together with space and water heating systems, deep freezers are the biggest energy consumers in households. The article investigates the possibility of using the waste heat for water heating. The design principle of such a system is presented in a wiring diagram.

  20. A Deep-Sea Simulation.

    Science.gov (United States)

    Montes, Georgia E.

    1997-01-01

    Describes an activity that simulates exploration techniques used in deep-sea explorations and teaches students how this technology can be used to take a closer look inside volcanoes, inspect hazardous waste sites such as nuclear reactors, and explore other environments dangerous to humans. (DDR)

  1. Barbados Deep-Water Sponges

    NARCIS (Netherlands)

    Soest, van R.W.M.; Stentoft, N.

    1988-01-01

    Deep-water sponges dredged up in two locations off the west coast of Barbados are systematically described. A total of 69 species is recorded, among which 16 are new to science, viz. Pachymatisma geodiformis, Asteropus syringiferus, Cinachyra arenosa, Theonella atlantica. Corallistes paratypus,

  2. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

    With the dramatic growth of the image data on the web, there is an increasing demand of the algorithms capable of understanding the visual information automatically. Deep learning, served as one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,

  3. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  4. DM Considerations for Deep Drilling

    OpenAIRE

    Dubois-Felsmann, Gregory

    2016-01-01

    An outline of the current situation regarding the DM plans for the Deep Drilling surveys and an invitation to the community to provide feedback on what they would like to see included in the data processing and visualization of these surveys.

  5. Lessons from Earth's Deep Time

    Science.gov (United States)

    Soreghan, G. S.

    2005-01-01

    Earth is a repository of data on climatic changes from its deep-time history. Article discusses the collection and study of these data to predict future climatic changes, the need to create national study centers for the purpose, and the necessary cooperation between different branches of science in climatic research.

  6. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  7. Deep Learning and Music Adversaries

    DEFF Research Database (Denmark)

    Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan

    2015-01-01

    the minimal perturbation of the input image such that the system misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of deep learning systems applied to music content analysis. In our case, however, the system inputs are magnitude spectral frames, which require...

  8. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2005-06-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies conducted a study to evaluate the stimulation of deep wells. The objective of the project was to review U.S. deep well drilling and stimulation activity, review rock mechanics and fracture growth in deep, high-pressure/temperature wells and evaluate stimulation technology in several key deep plays. This report documents results from this project.

  9. Deep Web and Dark Web: Deep World of the Internet

    OpenAIRE

    Çelik, Emine

    2018-01-01

    The Internet is undoubtedly still a revolutionary breakthrough in the history of humanity. Many people use the internet for communication, social media, shopping, and political and social agendas, among other things. Deep Web and Dark Web concepts are handled not only by computer and software engineers but also by social scientists, because of the role of the internet for states in international arenas, for public institutions, and in human life. Starting from the very important role of the internet for social s...

  10. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.

    Science.gov (United States)

    Wachinger, Christian; Reuter, Martin; Klein, Tassilo

    2018-04-15

    We introduce DeepNAT, a 3D Deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
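
    The abstract names the main architectural ingredients (3D convolutions with pooling, batch normalization, non-linearities, and fully connected layers with dropout). The PyTorch sketch below is a minimal 3D patch classifier in that spirit; the layer widths, the 24-voxel patch size and the 26-class output are assumptions for illustration, not DeepNAT's exact configuration.

      # Minimal sketch of a 3D patch classifier in the spirit described above
      # (three conv layers with pooling, batch norm, ReLU, then FC layers with dropout).
      # Layer sizes, patch size and the 26-class output are assumptions, not DeepNAT's exact design.
      import torch
      import torch.nn as nn

      class PatchNet3D(nn.Module):
          def __init__(self, n_classes=26):          # e.g. 25 structures + background, assumed
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
                  nn.MaxPool3d(2),
                  nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
                  nn.MaxPool3d(2),
                  nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(64 * 6 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
                  nn.Linear(256, n_classes),
              )

          def forward(self, x):                       # x: (batch, 1, 24, 24, 24) patches
              return self.classifier(self.features(x))

      logits = PatchNet3D()(torch.randn(2, 1, 24, 24, 24))   # (2, 26) class scores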

  11. Deep Phenotyping: Deep Learning For Temporal Phenotype/Genotype Classification

    OpenAIRE

    Najafi, Mohammad; Namin, Sarah; Esmaeilzadeh, Mohammad; Brown, Tim; Borevitz, Justin

    2017-01-01

    High resolution and high throughput genotype-to-phenotype studies in plants are underway to accelerate breeding of climate-ready crops. Complex developmental phenotypes are observed by imaging a variety of accessions in different environment conditions; however, extracting the genetically heritable traits is challenging. In recent years, deep learning techniques and in particular Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Long-Short Term Memories (LSTMs), h...

  12. Deep Neuromuscular Blockade Improves Laparoscopic Surgical Conditions

    DEFF Research Database (Denmark)

    Rosenberg, Jacob; Herring, W Joseph; Blobner, Manfred

    2017-01-01

    INTRODUCTION: Sustained deep neuromuscular blockade (NMB) during laparoscopic surgery may facilitate optimal surgical conditions. This exploratory study assessed whether deep NMB improves surgical conditions and, in doing so, allows use of lower insufflation pressures during laparoscopic cholecys...

  13. Joint Training of Deep Boltzmann Machines

    OpenAIRE

    Goodfellow, Ian; Courville, Aaron; Bengio, Yoshua

    2012-01-01

    We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks.

  14. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  15. Is Multitask Deep Learning Practical for Pharma?

    Science.gov (United States)

    Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay

    2017-08-28

    Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
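
    The core idea behind multitask deep networks such as those described above is a shared representation feeding one output head per assay or task. The sketch below is a generic, hedged illustration of that structure in PyTorch; it is not the DeepChem implementation, and the feature size, hidden width and task count are arbitrary assumptions.

      # Generic multitask network sketch: a shared trunk with one output head per assay/task.
      # This illustrates the multitask idea only; it is not the DeepChem implementation.
      import torch
      import torch.nn as nn

      class MultitaskNet(nn.Module):
          def __init__(self, n_features=1024, n_tasks=4, hidden=500):   # sizes are assumptions
              super().__init__()
              self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(0.25))
              self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

          def forward(self, x):
              h = self.trunk(x)
              # one logit per task; tasks share the learned representation
              return torch.cat([head(h) for head in self.heads], dim=1)

      preds = MultitaskNet()(torch.randn(8, 1024))   # (8 molecules, 4 task logits)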

  16. Evaluation of the DeepWind concept

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Borg, Michael; Gonzales Seabra, Luis Alberto

    The report describes the DeepWind 5 MW conceptual design as a baseline for results obtained in the scientific and technical work packages of the DeepWind project. A comparison of DeepWind with existing VAWTs and paper projects is carried out, and the concept is evaluated in terms of cost...

  17. Consolidated Deep Actor Critic Networks (DRAFT)

    NARCIS (Netherlands)

    Van der Laan, T.A.

    2015-01-01

    The works [Mnih et al. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.] and [Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.] have demonstrated the power of combining deep neural networks with

  18. Simulator Studies of the Deep Stall

    Science.gov (United States)

    White, Maurice D.; Cooper, George E.

    1965-01-01

    Simulator studies of the deep-stall problem encountered with modern airplanes are discussed. The results indicate that the basic deep-stall tendencies produced by aerodynamic characteristics are augmented by operational considerations. Because of control difficulties to be anticipated in the deep stall, it is desirable that adequate safeguards be provided against inadvertent penetrations.

  19. TOPIC MODELING: CLUSTERING OF DEEP WEBPAGES

    OpenAIRE

    Muhunthaadithya C; Rohit J.V; Sadhana Kesavan; E. Sivasankar

    2015-01-01

    The internet comprises a massive amount of information in the form of zillions of web pages. This information can be categorized into the surface web and the deep web. The existing search engines can effectively make use of surface web information, but the deep web remains unexploited. Machine learning techniques have been commonly employed to access deep web content.

  20. DeepFlavour in CMS

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Flavour-tagging of jets is an important task in collider-based high energy physics and a field where machine learning tools are applied by all major experiments. A new tagger (DeepFlavour), based on an advanced machine learning procedure, was developed and commissioned in CMS. A deep neural network is used to perform multi-class classification of jets that originate from a b-quark, two b-quarks, a c-quark, two c-quarks or light colored particles (u, d, s-quark or gluon). The performance was measured in both data and simulation. The talk will also include the measured performance of all taggers in CMS. The different taggers and results will be discussed and compared, with some focus on details of the newest tagger.

  1. Deep Learning for ECG Classification

    Science.gov (United States)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications where this problem arises. Currently, there are many machine learning (ML) solutions which can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML approaches is the use of heuristic, hand-crafted or engineered features with shallow feature-learning architectures. The problem lies in the risk of not finding the most appropriate features, which would give high classification accuracy for this ECG problem. One proposed solution is to use deep learning architectures, where the first layers of convolutional neurons behave as feature extractors and, at the end, some fully-connected (FCN) layers are used for making the final decision about the ECG classes. In this work the deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented and some classification results are shown.
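
    A minimal sketch of the kind of architecture described above (1D convolutional feature extractors followed by fully connected layers) might look as follows; the filter counts, kernel sizes, input length and the four-class output are illustrative assumptions, not the authors' configuration.

      # Minimal 1D-CNN sketch: convolutional layers as feature extractors,
      # fully connected layers for the final decision. All sizes are assumptions.
      import torch
      import torch.nn as nn

      ecg_net = nn.Sequential(
          nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
          nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
          nn.Flatten(),
          nn.Linear(32 * (1000 // 16), 64), nn.ReLU(),
          nn.Linear(64, 4),                      # e.g. 4 rhythm classes, assumed
      )
      logits = ecg_net(torch.randn(2, 1, 1000))  # batch of 2 single-lead traces of 1000 samples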

  2. Deep Space Habitat Concept Demonstrator

    Science.gov (United States)

    Bookout, Paul S.; Smitherman, David

    2015-01-01

    This project will develop, integrate, test, and evaluate Habitation Systems that will be utilized as technology testbeds and will advance NASA's understanding of alternative deep space mission architectures, requirements, and operations concepts. Rapid prototyping and existing hardware will be utilized to develop full-scale habitat demonstrators. FY 2014 focused on the development of a large volume Space Launch System (SLS) class habitat (Skylab Gen 2) based on the SLS hydrogen tank components. Similar to the original Skylab, a tank section of the SLS rocket can be outfitted with a deep space habitat configuration and launched as a payload on an SLS rocket. This concept can be used to support extended stay at the Lunar Distant Retrograde Orbit to support the Asteroid Retrieval Mission and provide a habitat suitable for human missions to Mars.

  3. Hybrid mask for deep etching

    KAUST Repository

    Ghoneim, Mohamed T.

    2017-08-10

    Deep reactive ion etching is essential for creating high aspect ratio micro-structures for microelectromechanical systems, sensors and actuators, and emerging flexible electronics. A novel hybrid dual soft/hard mask bilayer may be deposited during semiconductor manufacturing for deep reactive etches. Such a manufacturing process may include depositing a first mask material on a substrate; depositing a second mask material on the first mask material; depositing a third mask material on the second mask material; patterning the third mask material with a pattern corresponding to one or more trenches for transfer to the substrate; transferring the pattern from the third mask material to the second mask material; transferring the pattern from the second mask material to the first mask material; and/or transferring the pattern from the first mask material to the substrate.

  4. Soft-Deep Boltzmann Machines

    OpenAIRE

    Kiwaki, Taichi

    2015-01-01

    We present a layered Boltzmann machine (BM) that can better exploit the advantages of a distributed representation. It is widely believed that deep BMs (DBMs) have far greater representational power than their shallow counterparts, restricted Boltzmann machines (RBMs). However, this expectation of the supremacy of DBMs over RBMs has never been validated in a theoretical fashion. In this paper, we provide both theoretical and empirical evidence that the representational power of DBMs can be a...

  5. Evolving Deep Networks Using HPC

    Energy Technology Data Exchange (ETDEWEB)

    Young, Steven R. [ORNL, Oak Ridge; Rose, Derek C. [ORNL, Oak Ridge; Johnston, Travis [ORNL, Oak Ridge; Heller, William T. [ORNL, Oak Ridge; Karnowski, Thomas P. [ORNL, Oak Ridge; Potok, Thomas E. [ORNL, Oak Ridge; Patton, Robert M. [ORNL, Oak Ridge; Perdue, Gabriel [Fermilab; Miller, Jonathan [Santa Maria U., Valparaiso

    2017-01-01

    While a large number of deep learning networks have been studied and published that produce outstanding results on natural image datasets, these datasets only make up a fraction of those to which deep learning can be applied. These datasets include text data, audio data, and arrays of sensors that have very different characteristics than natural images. As these “best” networks for natural images have been largely discovered through experimentation and cannot be proven optimal on some theoretical basis, there is no reason to believe that they are the optimal network for these drastically different datasets. Hyperparameter search is thus often a very important process when applying deep learning to a new problem. In this work we present an evolutionary approach to searching the possible space of network hyperparameters and construction that can scale to 18,000 nodes. This approach is applied to datasets of varying types and characteristics where we demonstrate the ability to rapidly find best hyperparameters in order to enable practitioners to quickly iterate between idea and result.
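
    The evolutionary search described above can be pictured, at its simplest, as a mutate/evaluate/select loop over hyperparameter sets. The toy sketch below illustrates only that loop; the search space, mutation rule and stand-in fitness function are placeholders, not the ORNL system.

      # Toy evolutionary hyperparameter search loop (mutate -> evaluate -> select).
      # The fitness function is a stand-in for "train a network and return validation accuracy".
      import random

      def fitness(params):
          # placeholder objective: prefers lr near 0.01 and 4 layers
          return -((params["lr"] - 0.01) ** 2) - abs(params["layers"] - 4) * 0.01

      def mutate(params):
          child = dict(params)
          child["lr"] = max(1e-5, child["lr"] * random.choice([0.5, 1.0, 2.0]))
          child["layers"] = max(1, child["layers"] + random.choice([-1, 0, 1]))
          return child

      population = [{"lr": 10 ** random.uniform(-4, -1), "layers": random.randint(1, 8)}
                    for _ in range(16)]
      for generation in range(10):
          scored = sorted(population, key=fitness, reverse=True)
          parents = scored[:4]                                   # keep the fittest
          population = parents + [mutate(random.choice(parents)) for _ in range(12)]

      best = max(population, key=fitness)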

  6. Deep Space Gateway Science Opportunities

    Science.gov (United States)

    Quincy, C. D.; Charles, J. B.; Hamill, Doris; Sidney, S. C.

    2018-01-01

    The NASA Life Sciences Research Capabilities Team (LSRCT) has been discussing deep space research needs for the last two years. NASA's programs conducting life sciences studies - the Human Research Program, Space Biology, Astrobiology, and Planetary Protection - see the Deep Space Gateway (DSG) as affording enormous opportunities to investigate biological organisms in a unique environment that cannot be replicated in Earth-based laboratories or on Low Earth Orbit science platforms. These investigations may provide in many cases the definitive answers to risks associated with exploration and living outside Earth's protective magnetic field. Unlike Low Earth Orbit or terrestrial locations, the Gateway location will be subjected to the true deep space spectrum and influence of both galactic cosmic and solar particle radiation and thus presents an opportunity to investigate their long-term exposure effects. The question of how a community of biological organisms change over time within the harsh environment of space flight outside of the magnetic field protection can be investigated. The biological response to the absence of Earth's geomagnetic field can be studied for the first time. Will organisms change in new and unique ways under these new conditions? This may be specifically true on investigations of microbial communities. The Gateway provides a platform for microbiology experiments both inside, to improve understanding of interactions between microbes and human habitats, and outside, to improve understanding of microbe-hardware interactions exposed to the space environment.

  7. Deep water recycling through time.

    Science.gov (United States)

    Magni, Valentina; Bouilhol, Pierre; van Hunen, Jeroen

    2014-11-01

    We investigate the dehydration processes in subduction zones and their implications for the water cycle throughout Earth's history. We use a numerical tool that combines thermo-mechanical models with a thermodynamic database to examine slab dehydration for present-day and early Earth settings and its consequences for deep water recycling. We investigate the reactions responsible for releasing water from the crust and the hydrated lithospheric mantle and how they change with subduction velocity (v_s), slab age (a) and mantle temperature (T_m). Our results show that faster slabs dehydrate over a wide area: they start dehydrating shallower and they carry water deeper into the mantle. We parameterize the amount of water that can be carried deep into the mantle, W (×10⁵ kg/m²), as a function of v_s (cm/yr), a (Myrs), and T_m (°C): [Formula: see text]. We generally observe that a 1) 100°C increase in the mantle temperature, or 2) ∼15 Myr decrease of plate age, or 3) decrease in subduction velocity of ∼2 cm/yr all have the same effect on the amount of water retained in the slab at depth, corresponding to a decrease of ∼2.2×10⁵ kg/m² of H₂O. We estimate that for present-day conditions ∼26% of the global influx water, or 7×10⁸ Tg/Myr of H₂O, is recycled into the mantle. Using a realistic distribution of subduction parameters, we illustrate that deep water recycling might still be possible in early Earth conditions, although its efficiency would generally decrease. Indeed, 0.5-3.7×10⁸ Tg/Myr of H₂O could still be recycled into the mantle at 2.8 Ga. Deep water recycling might be possible even in early Earth conditions. We provide a scaling law to estimate the amount of H₂O flux deep into the mantle. Subduction velocity has a major control on the crustal dehydration pattern.

  8. Vision in the deep sea.

    Science.gov (United States)

    Warrant, Eric J; Locket, N Adam

    2004-08-01

    The deep sea is the largest habitat on earth. Its three great faunal environments--the twilight mesopelagic zone, the dark bathypelagic zone and the vast flat expanses of the benthic habitat--are home to a rich fauna of vertebrates and invertebrates. In the mesopelagic zone (150-1000 m), the down-welling daylight creates an extended scene that becomes increasingly dimmer and bluer with depth. The available daylight also originates increasingly from vertically above, and bioluminescent point-source flashes, well contrasted against the dim background daylight, become increasingly visible. In the bathypelagic zone below 1000 m no daylight remains, and the scene becomes entirely dominated by point-like bioluminescence. This changing nature of visual scenes with depth--from extended source to point source--has had a profound effect on the designs of deep-sea eyes, both optically and neurally, a fact that until recently was not fully appreciated. Recent measurements of the sensitivity and spatial resolution of deep-sea eyes--particularly from the camera eyes of fishes and cephalopods and the compound eyes of crustaceans--reveal that ocular designs are well matched to the nature of the visual scene at any given depth. This match between eye design and visual scene is the subject of this review. The greatest variation in eye design is found in the mesopelagic zone, where dim down-welling daylight and bio-luminescent point sources may be visible simultaneously. Some mesopelagic eyes rely on spatial and temporal summation to increase sensitivity to a dim extended scene, while others sacrifice this sensitivity to localise pinpoints of bright bioluminescence. Yet other eyes have retinal regions separately specialised for each type of light. In the bathypelagic zone, eyes generally get smaller and therefore less sensitive to point sources with increasing depth. In fishes, this insensitivity, combined with surprisingly high spatial resolution, is very well adapted to the

  9. The deep Canary poleward undercurrent

    Science.gov (United States)

    Velez-Belchi, P. J.; Hernandez-Guerra, A.; González-Pola, C.; Fraile, E.; Collins, C. A.; Machín, F.

    2012-12-01

    Poleward undercurrents are well known features in Eastern Boundary systems. In the California upwelling system (CalCEBS), the deep poleward flow has been observed along the entire outer continental shelf and upper-slope, using indirect methods based on geostrophic estimates and also using direct current measurements. The importance of the poleward undercurrents in the CalCEBS, among others, is to maintain its high productivity by means of the transport of equatorial Pacific waters all the way northward to Vancouver Island and the subpolar gyre but there is also concern about the low oxygen concentration of these waters. However, in the case of the Canary Current Eastern Boundary upwelling system (CanCEBS), there are very few observations of the poleward undercurrent. Most of these observations are short-term mooring records, or drifter trajectories of the upper-slope flow. Hence, the importance of the subsurface poleward flow in the CanCEBS has been only hypothesized. Moreover, due to the large differences between the shape of the coastline and topography between the California and the Canary Current system, the results obtained for the CalCEBS are not completely applicable to the CanCEBS. In this study we report the first direct observations of the continuity of the deep poleward flow of the Canary Deep Poleward undercurrent (CdPU) in the North-Africa sector of the CanCEBS, and one of the few direct observations in the North-Africa sector of the Canary Current eastern boundary. The results indicate that the Canary Island archipelago disrupts the deep poleward undercurrent even at depths where the flow is not blocked by the bathymetry. The deep poleward undercurrent flows west around the eastern-most islands and north east of the Conception Bank to rejoin the intermittent branch that follows the African slope in the Lanzarote Passage. This hypothesis is consistent with the AAIW found west of Lanzarote, as far as 17 W. But also, this hypothesis would be coherent

  10. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  11. Deep Corals, Deep Learning: Moving the Deep Net Towards Real-Time Image Annotation

    OpenAIRE

    Lea-Anne Henry; Sankha S. Mukherjee; Neil M. Roberston; Laurence De Clippele; J. Murray Roberts

    2016-01-01

    The mismatch between human capacity and the acquisition of Big Data such as Earth imagery undermines commitments to Convention on Biological Diversity (CBD) and Aichi targets. Artificial intelligence (AI) solutions to Big Data issues are urgently needed as these could prove to be faster, more accurate, and cheaper. Reducing costs of managing protected areas in remote deep waters and in the High Seas is of great importance, and this is a realm where autonomous technology will be transformative.

  12. Invited talk: Deep Learning Meets Physics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence with overwhelming success in industrial speech, text and vision benchmarks. Consequently it evolved into the central field of research for IT giants like Google, facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. In its core, Deep Learning discovers multiple levels of abstract representations of the input. The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence, therefore limits model selection. Major advances in Deep Learning can be related to avoiding the vanishing gradient like stacking, ReLUs, residual networks, highway networks, and LSTM. For Deep Learning, we suggested self-normalizing neural networks (SNNs) which automatica...

  13. Deep remission: a new concept?

    Science.gov (United States)

    Colombel, Jean-Frédéric; Louis, Edouard; Peyrin-Biroulet, Laurent; Sandborn, William J; Panaccione, Remo

    2012-01-01

    Crohn's disease (CD) is a chronic inflammatory disorder characterized by periods of clinical remission alternating with periods of relapse defined by recurrent clinical symptoms. Persistent inflammation is believed to lead to progressive bowel damage over time, which manifests with the development of strictures, fistulae and abscesses. These disease complications frequently lead to a need for surgical resection, which in turn leads to disability. So CD can be characterized as a chronic, progressive, destructive and disabling disease. In rheumatoid arthritis, treatment paradigms have evolved beyond partial symptom control alone toward the induction and maintenance of sustained biological remission, also known as a 'treat to target' strategy, with the goal of improving long-term disease outcomes. In CD, there is currently no accepted, well-defined, comprehensive treatment goal that entails the treatment of both clinical symptoms and biologic inflammation. It is important that such a treatment concept begins to evolve for CD. A treatment strategy that delays or halts the progression of CD to increasing damage and disability is a priority. As a starting point, a working definition of sustained deep remission (that includes long-term biological remission and symptom control) with defined patient outcomes (including no disease progression) has been proposed. The concept of sustained deep remission represents a goal for CD management that may still evolve. It is not clear if the concept also applies to ulcerative colitis. Clinical trials are needed to evaluate whether treatment algorithms that tailor therapy to achieve deep remission in patients with CD can prevent disease progression and disability. Copyright © 2012 S. Karger AG, Basel.

  14. Topics in deep inelastic scattering

    International Nuclear Information System (INIS)

    Wandzura, S.M.

    1977-01-01

    Several topics in deep inelastic lepton-nucleon scattering are discussed, with emphasis on the structure functions appearing in polarized experiments. The major results are: an infinite set of new sum rules reducing the number of independent spin-dependent structure functions (for electroproduction) from two to one; the application of the techniques of Nachtmann to extract the coefficients appearing in the Wilson operator product expansion; and radiative corrections to the Wilson coefficients of free field theory. Also discussed is the use of dimensional regularization to simplify the calculation of these radiative corrections

  15. Deep groundwater flow at Palmottu

    International Nuclear Information System (INIS)

    Niini, H.; Vesterinen, M.; Tuokko, T.

    1993-01-01

    Further observations, measurements, and calculations aimed at determining the groundwater flow regimes and periodical variations in flow at deeper levels were carried out in the Lake Palmottu (a natural analogue study site for radioactive waste disposal in southwestern Finland) drainage basin. These water movements affect the migration of radionuclides from the Palmottu U-Th deposit. The deep water flow is essentially restricted to the bedrock fractures which developed under, and are still affected by, the stress state of the bedrock. Determination of the detailed variations was based on fracture-tectonic modelling of the 12 most significant underground water-flow channels that cross the surficial water of the Palmottu area. According to the direction of the hydraulic gradient the deep water flow is mostly outwards from the Palmottu catchment but in the westernmost section it is partly towards the centre. Estimation of the water flow through the U-Th deposit by the water-balance method is still only approximate and needs continued observation series and improved field measurements

  16. Deep ocean model penetrator experiments

    International Nuclear Information System (INIS)

    Freeman, T.J.; Burdett, J.R.F.

    1986-01-01

    Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting with a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site

  17. Academic Training: Deep Space Telescopes

    CERN Multimedia

    Françoise Benz

    2006-01-01

    2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 20, 21, 22, 23, 24 February from 11:00 to 12:00 - Council Chamber on 20, 21, 23, 24 February, TH Auditorium, bldg 4 - 3-006, on 22 February Deep Space Telescopes G. BIGNAMI / CNRS, Toulouse, F & Univ. di Pavia, I The short series of seminars will address results and aims of current and future space astrophysics as the cultural framework for the development of deep space telescopes. It will then present such new tools, as they are currently available to, or imagined by, the scientific community, in the context of the science plans of ESA and of all major world space agencies. Ground-based astronomy, in the 400 years since Galileo's telescope, has given us a profound phenomenological comprehension of our Universe, but has traditionally been limited to the narrow band(s) to which our terrestrial atmosphere is transparent. Celestial objects, however, do not care about our limitations, and distribute most of the information about their physics thro...

  18. Magnetohydrodynamic throttling and control of liquid-metal flows

    International Nuclear Information System (INIS)

    Gel'fgat, Yu.M.; Gorbunov, L.A.; Vitkovskij, I.V.

    1989-01-01

    For the first time, the monograph gives a systematic description of a complex of physical and technical investigations in a new branch of applied magnetohydrodynamics. Its main purpose is to investigate the physical regularities of the behaviour of conducting melts under conditions specially arranged to maximize the effect of electromagnetic fields on the liquid, to develop specialized MHD equipment that makes use of the detected effects, and to investigate the possibilities of applying that equipment in different practical uses. 299 refs.; 245 figs.; 15 tabs

  19. Image Captioning with Deep Bidirectional LSTMs

    OpenAIRE

    Wang, Cheng; Yang, Haojin; Bartz, Christian; Meinel, Christoph

    2016-01-01

    This work presents an end-to-end trainable deep bidirectional LSTM (Long Short-Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long-term visual-language interactions by making use of history and future context information in a high-level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of the nonlinearity transition in different ways, are propose...
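
    The ingredients named above (a CNN image representation and bidirectional LSTMs over the caption) can be combined in many ways; the sketch below shows one hedged arrangement in PyTorch, in which a projected image feature is prepended to the embedded caption tokens before a bidirectional LSTM. The dimensions and the fusion scheme are assumptions, not the authors' exact model.

      # Sketch of a CNN-feature + bidirectional-LSTM captioner; sizes and fusion are assumptions.
      import torch
      import torch.nn as nn

      class BiLSTMCaptioner(nn.Module):
          def __init__(self, vocab=10000, embed=256, hidden=256, img_feat=2048):
              super().__init__()
              self.img_proj = nn.Linear(img_feat, embed)          # project CNN feature into word space
              self.embed = nn.Embedding(vocab, embed)
              self.lstm = nn.LSTM(embed, hidden, batch_first=True, bidirectional=True)
              self.out = nn.Linear(2 * hidden, vocab)

          def forward(self, img_feat, tokens):
              # prepend the projected image feature as a pseudo-token, then decode the sequence
              img = self.img_proj(img_feat).unsqueeze(1)           # (B, 1, embed)
              seq = torch.cat([img, self.embed(tokens)], dim=1)    # (B, 1+T, embed)
              h, _ = self.lstm(seq)
              return self.out(h)                                   # per-position vocabulary scores

      scores = BiLSTMCaptioner()(torch.randn(2, 2048), torch.randint(0, 10000, (2, 12)))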

  20. Deep inelastic processes and the parton model

    International Nuclear Information System (INIS)

    Altarelli, G.

    The lecture was intended as an elementary introduction to the physics of deep inelastic phenomena from the point of view of theory. General formulae and facts concerning inclusive deep inelastic processes in the form: l+N→l'+hadrons (electroproduction, neutrino scattering) are first recalled. The deep inelastic annihilation e⁺e⁻→hadrons is then envisaged. The light cone approach, the parton model and their relation are mainly emphasized

  1. Deep inelastic electron and muon scattering

    International Nuclear Information System (INIS)

    Taylor, R.E.

    1975-07-01

    From the review of deep inelastic electron and muon scattering it is concluded that the puzzle of deep inelastic scattering versus annihilation has been replaced with the challenge of the new particles, and that the evidence for the simplest quark-algebra models of deep inelastic processes is weaker than a year ago. Definite evidence of scale breaking was found, but the specific form of that scale breaking is difficult to extract from the data. 59 references

  2. Fast, Distributed Algorithms in Deep Networks

    Science.gov (United States)

    2016-05-11

    shallow networks, additional work will need to be done in order to allow for the application of ADMM to deep nets. The ADMM method allows for quick... [From Trident Scholar Project Report No. 446, "Fast, Distributed Algorithms in Deep Networks," by Midshipman 1/C Ryan J. Burmeister, USN; cited references include Quoc V. Le et al., "Large scale distributed deep networks," Advances in Neural Information Processing Systems, pages 1223–1231, 2012.]

  3. Learning Transferable Features with Deep Adaptation Networks

    OpenAIRE

    Long, Mingsheng; Cao, Yue; Wang, Jianmin; Jordan, Michael I.

    2015-01-01

    Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation...

  4. An overview of latest deep water technologies

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The 8th Deep Offshore Technology Conference (DOT VIII, Rio de Janeiro, October 30 - November 3, 1995) has brought together renowned specialists in deep water development projects, as well as managers from oil companies and engineering/service companies, to discuss state-of-the-art technologies and ongoing projects in the deep offshore. This paper is a compilation of the session summaries about subsea technologies, mooring and dynamic positioning, floaters (Tension Leg Platforms (TLP) and Floating Production Storage and Offloading (FPSO) units), pipelines and risers, exploration and drilling, and other deep water techniques. (J.S.)

  5. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  6. The deep ocean under climate change

    Science.gov (United States)

    Levin, Lisa A.; Le Bris, Nadine

    2015-11-01

    The deep ocean absorbs vast amounts of heat and carbon dioxide, providing a critical buffer to climate change but exposing vulnerable ecosystems to combined stresses of warming, ocean acidification, deoxygenation, and altered food inputs. Resulting changes may threaten biodiversity and compromise key ocean services that maintain a healthy planet and human livelihoods. There exist large gaps in understanding of the physical and ecological feedbacks that will occur. Explicit recognition of deep-ocean climate mitigation and inclusion in adaptation planning by the United Nations Framework Convention on Climate Change (UNFCCC) could help to expand deep-ocean research and observation and to protect the integrity and functions of deep-ocean ecosystems.

  7. Docker Containers for Deep Learning Experiments

    OpenAIRE

    Gerke, Paul K.

    2017-01-01

    Deep learning is a powerful tool to solve problems in the area of image analysis. The dominant compute platform for deep learning is Nvidia's proprietary CUDA, which can only be used together with Nvidia graphics cards. The nvidia-docker project allows exposing Nvidia graphics cards to docker containers and thus makes it possible to run deep learning experiments in docker containers. In our department, we use deep learning to solve problems in the area of medical image analysis and use docker ...

  8. Deep Brain Stimulation for Parkinson's Disease

    Science.gov (United States)

    ... about the BRAIN initiative, see www.nih.gov/science/brain. Deep ...

  9. Cultivating the Deep Subsurface Microbiome

    Science.gov (United States)

    Casar, C. P.; Osburn, M. R.; Flynn, T. M.; Masterson, A.; Kruger, B.

    2017-12-01

    Subterranean ecosystems are poorly understood because many microbes detected in metagenomic surveys are only distantly related to characterized isolates. Cultivating microorganisms from the deep subsurface is challenging due to its inaccessibility and potential for contamination. The Deep Mine Microbial Observatory (DeMMO) in Lead, SD however, offers access to deep microbial life via pristine fracture fluids in bedrock to a depth of 1478 m. The metabolic landscape of DeMMO was previously characterized via thermodynamic modeling coupled with genomic data, illustrating the potential for microbial inhabitants of DeMMO to utilize mineral substrates as energy sources. Here, we employ field and lab based cultivation approaches with pure minerals to link phylogeny to metabolism at DeMMO. Fracture fluids were directed through reactors filled with Fe3O4, Fe2O3, FeS2, MnO2, and FeCO3 at two sites (610 m and 1478 m) for 2 months prior to harvesting for subsequent analyses. We examined mineralogical, geochemical, and microbiological composition of the reactors via DNA sequencing, microscopy, lipid biomarker characterization, and bulk C and N isotope ratios to determine the influence of mineralogy on biofilm community development. Pre-characterized mineral chips were imaged via SEM to assay microbial growth; preliminary results suggest MnO2, Fe3O4, and Fe2O3 were most conducive to colonization. Solid materials from reactors were used as inoculum for batch cultivation experiments. Media designed to mimic fracture fluid chemistry was supplemented with mineral substrates targeting metal reducers. DNA sequences and microscopy of iron oxide-rich biofilms and fracture fluids suggest iron oxidation is a major energy source at redox transition zones where anaerobic fluids meet more oxidizing conditions. We utilized these biofilms and fluids as inoculum in gradient cultivation experiments targeting microaerophilic iron oxidizers. Cultivation of microbes endemic to DeMMO, a system

  10. Deep inelastic scattering and diquarks

    International Nuclear Information System (INIS)

    Anselmino, M.

    1993-01-01

    The most comprehensive and detailed analyses of the existing data on the structure function F₂(x, Q²) of free nucleons, from the deep inelastic scattering (DIS) of charged leptons on hydrogen and deuterium targets, have proved beyond any doubt that higher twist, 1/Q² corrections are needed in order to obtain a perfect agreement between perturbative QCD predictions and the data. These higher twist corrections take into account two-quark correlations inside the nucleon; it is then natural to try to model them in the quark-diquark model of the proton. In so doing all interactions between the two quarks inside the diquark, both perturbative and non perturbative, are supposed to be taken into account. (orig./HSI)

  11. Detector for deep well logging

    International Nuclear Information System (INIS)

    1976-01-01

    A substantial improvement in the useful life and efficiency of a deep-well scintillation detector is achieved by a unique construction wherein the steel cylinder enclosing the sodium iodide scintillation crystal is provided with a tapered recess to receive a glass window which has a high transmittance at the critical wavelength and, for glass, a high coefficient of thermal expansion. A special high-temperature epoxy adhesive composition is employed to form a relatively thick sealing annulus which keeps the glass window in the tapered recess and compensates for the differences in coefficients of expansion between the container and glass so as to maintain a hermetic seal as the unit is subjected to a wide range of temperature

  12. Deep borehole disposal of plutonium

    International Nuclear Information System (INIS)

    Gibb, F. G. F.; Taylor, K. J.; Burakov, B. E.

    2008-01-01

    Excess plutonium not destined for burning as MOX or in Generation IV reactors is both a long-term waste management problem and a security threat. Immobilisation in mineral and ceramic-based waste forms for interim safe storage and eventual disposal is a widely proposed first step. The safest and most secure form of geological disposal for Pu yet suggested is in very deep boreholes and we propose here that the key to successful combination of these immobilisation and disposal concepts is the encapsulation of the waste form in small cylinders of recrystallized granite. The underlying science is discussed and the results of high pressure and temperature experiments on zircon, depleted UO₂ and Ce-doped cubic zirconia enclosed in granitic melts are presented. The outcomes of these experiments demonstrate the viability of the proposed solution and that Pu could be successfully isolated from its environment for many millions of years. (authors)

  13. Automatic Differentiation and Deep Learning

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Statistical learning has been getting more and more interest from the particle-physics community in recent times, with neural networks and gradient-based optimization being a focus. In this talk we shall discuss three things: (1) automatic differentiation tools, i.e. tools to quickly build computation DAGs that are fully differentiable; we shall focus on one such tool, PyTorch; (2) easy deployment of trained neural networks into large systems with many constraints, for example deploying a model at the reconstruction phase, where the neural network has to be integrated into CERN's bulk data-processing, C++-only environment; and (3) some recent models in deep learning for segmentation and generation that might be useful for particle physics problems.
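
    A minimal example of the kind of differentiable computation DAG mentioned above, using PyTorch's autograd: the graph of `loss` is recorded as the expression is built, and backward() performs reverse-mode differentiation through it.

      # Tiny autograd example: PyTorch records the computation graph of `loss`
      # and backward() fills in gradients with respect to the leaf tensors.
      import torch

      w = torch.tensor([2.0, -1.0], requires_grad=True)
      x = torch.tensor([3.0, 4.0])

      loss = ((w * x).sum() - 1.0) ** 2      # scalar built from differentiable ops
      loss.backward()                        # reverse-mode differentiation over the recorded DAG

      print(loss.item())   # 1.0, since 2*3 + (-1)*4 - 1 = 1
      print(w.grad)        # d(loss)/dw = 2*(w·x - 1)*x = tensor([6., 8.])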

  14. Jets in deep inelastic scattering

    International Nuclear Information System (INIS)

    Joensson, L.

    1995-01-01

    Jet production in deep inelastic scattering provides a basis for the investigation of various phenomena related to QCD. Two-jet production at large Q² has been studied and the distributions with respect to the partonic scaling variables have been compared to models and to next-to-leading-order calculations. The first observations of azimuthal asymmetries of jets produced in first-order α_s processes have been obtained. The gluon-initiated boson-gluon fusion process permits a direct determination of the gluon density of the proton from an analysis of the jets produced in the hard scattering process. A comparison of these results with those from indirect extractions of the gluon density provides an important test of QCD. (author)

  15. NESTOR Deep Sea Neutrino Telescope

    International Nuclear Information System (INIS)

    Aggouras, G.; Anassontzis, E.G.; Ball, A.E.; Bourlis, G.; Chinowsky, W.; Fahrun, E.; Grammatikakis, G.; Green, C.; Grieder, P.; Katrivanos, P.; Koske, P.; Leisos, A.; Markopoulos, E.; Minkowsky, P.; Nygren, D.; Papageorgiou, K.; Przybylski, G.; Resvanis, L.K.; Siotis, I.; Sopher, J.; Staveris-Polikalas, A.; Tsagli, V.; Tsirigotis, A.; Tzamarias, S.; Zhukov, V.A.

    2006-01-01

    One module of NESTOR, the Mediterranean deep-sea neutrino telescope, was deployed at a depth of 4000 m, 14 km off Sapienza Island, off the south-west coast of Greece. The deployment site provides excellent environmental characteristics. The deployed NESTOR module is constructed as a hexagonal, star-like latticed titanium structure with 12 Optical Modules and a one-meter diameter titanium sphere which houses the electronics. Power and data were transferred through a 30 km electro-optical cable to the shore laboratory. In this report we describe briefly the detector and the detector electronics, discuss the first physics data acquired, and give the zenith angular distribution of the reconstructed muons

  16. Deep Borehole Disposal Safety Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Freeze, Geoffrey A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stein, Emily [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Price, Laura L. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); MacKinnon, Robert J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Tillman, Jack Bruce [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    This report presents a preliminary safety analysis for the deep borehole disposal (DBD) concept, using a safety case framework. A safety case is an integrated collection of qualitative and quantitative arguments, evidence, and analyses that substantiate the safety, and the level of confidence in the safety, of a geologic repository. This safety case framework for DBD follows the outline of the elements of a safety case, and identifies the types of information that will be required to satisfy these elements. At this very preliminary phase of development, the DBD safety case focuses on the generic feasibility of the DBD concept. It is based on potential system designs, waste forms, engineering, and geologic conditions; however, no specific site or regulatory framework exists. It will progress to a site-specific safety case as the DBD concept advances into a site-specific phase, progressing through consent-based site selection and site investigation and characterization.

  17. DeepRT: deep learning for peptide retention time prediction in proteomics

    OpenAIRE

    Ma, Chunwei; Zhu, Zhiyong; Ye, Jun; Yang, Jiarui; Pei, Jianguo; Xu, Shaohang; Zhou, Ruo; Yu, Chang; Mo, Fan; Wen, Bo; Liu, Siqi

    2017-01-01

    Accurate predictions of peptide retention times (RT) in liquid chromatography have many applications in mass spectrometry-based proteomics. Herein, we present DeepRT, a deep learning based software for peptide retention time prediction. DeepRT automatically learns features directly from the peptide sequences using the deep convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) model, which eliminates the need to use hand-crafted features or rules. After the feature learning, pr...
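
    As a hedged illustration of learning features directly from peptide sequences with convolutional and recurrent layers, the sketch below combines a residue embedding, a 1D convolution, an LSTM and a scalar retention-time head. All sizes are assumptions; this is not the published DeepRT architecture.

      # Illustrative peptide retention-time regressor using the ingredients named above
      # (learned residue embeddings, a 1D convolution, an LSTM, a scalar output).
      import torch
      import torch.nn as nn

      class RTRegressor(nn.Module):
          def __init__(self, n_residues=21, embed=32, conv=64, hidden=64):   # sizes assumed
              super().__init__()
              self.embed = nn.Embedding(n_residues, embed, padding_idx=0)
              self.conv = nn.Conv1d(embed, conv, kernel_size=3, padding=1)
              self.lstm = nn.LSTM(conv, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, tokens):                       # tokens: (B, L) integer-encoded residues
              x = self.embed(tokens).transpose(1, 2)       # (B, embed, L) for Conv1d
              x = torch.relu(self.conv(x)).transpose(1, 2) # back to (B, L, conv)
              _, (h, _) = self.lstm(x)
              return self.head(h[-1]).squeeze(-1)          # predicted retention time per peptide

      rt = RTRegressor()(torch.randint(1, 21, (4, 20)))    # 4 peptides of length 20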

  18. Equivalent drawbead performance in deep drawing simulations

    NARCIS (Netherlands)

    Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han

    1999-01-01

    Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the

  19. Is deep dreaming the new collage?

    Science.gov (United States)

    Boden, Margaret A.

    2017-10-01

    Deep dreaming (DD) can combine and transform images in surprising ways. But, being based in deep learning (DL), it is not analytically understood. Collage is an art form that is constrained along various dimensions. DD will not be able to generate collages until DL can be guided in a disciplined fashion.

  20. Deep web search: an overview and roadmap

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2011-01-01

    We review the state-of-the-art in deep web search and propose a novel classification scheme to better compare deep web search systems. The current binary classification (surfacing versus virtual integration) hides a number of implicit decisions that must be made by a developer. We make these

  1. Research Proposal for Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2010-01-01

    This proposal identifies two main problems related to deep web search, and proposes a step by step solution for each of them. The first problem is about searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input

  2. Development of Hydro-Mechanical Deep Drawing

    DEFF Research Database (Denmark)

    Zhang, Shi-Hong; Danckert, Joachim

    1998-01-01

    The hydro-mechanical deep-drawing process is reviewed in this article. The process principles and features are introduced and the developments of the hydro-mechanical deep-drawing process in process performance, in theory and in numerical simulation are described. The applications are summarized... Some other related hydraulic forming processes are also dealt with as a comparison...

  3. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
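
    As a concrete illustration of the ODE reading described above (a generic textbook-style sketch, not the specific parameterization proposed in the paper), a residual layer can be viewed as one forward-Euler step of a nonlinear dynamical system:

    ```latex
    % Forward propagation as a forward-Euler discretization of an ODE (illustrative form)
    \begin{aligned}
    \dot{x}(t) &= \sigma\bigl(K(t)\,x(t) + b(t)\bigr), \quad x(0) = x_0
        && \text{(continuous dynamics)} \\
    x_{j+1}    &= x_j + h\,\sigma\bigl(K_j x_j + b_j\bigr), \quad j = 0,\dots,N-1
        && \text{(one layer = one Euler step)}
    \end{aligned}
    ```

    In this reading, keeping the discrete dynamics stable (for example, controlling the eigenvalues of the Jacobian of the right-hand side together with the step size h) is what prevents gradients from exploding or vanishing as the depth N grows.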

  4. Temperature impacts on deep-sea biodiversity.

    Science.gov (United States)

    Yasuhara, Moriaki; Danovaro, Roberto

    2016-05-01

    Temperature is considered to be a fundamental factor controlling biodiversity in marine ecosystems, but precisely what role temperature plays in modulating diversity is still not clear. The deep ocean, lacking light and in situ photosynthetic primary production, is an ideal model system to test the effects of temperature changes on biodiversity. Here we synthesize current knowledge on temperature-diversity relationships in the deep sea. Our results from both present and past deep-sea assemblages suggest that, when a wide range of deep-sea bottom-water temperatures is considered, a unimodal relationship exists between temperature and diversity (that may be right skewed). It is possible that temperature is important only when at relatively high and low levels but does not play a major role in the intermediate temperature range. Possible mechanisms explaining the temperature-biodiversity relationship include the physiological-tolerance hypothesis, the metabolic hypothesis, island biogeography theory, or some combination of these. The possible unimodal relationship discussed here may allow us to identify tipping points at which on-going global change and deep-water warming may increase or decrease deep-sea biodiversity. Predicted changes in deep-sea temperatures due to human-induced climate change may have more adverse consequences than expected considering the sensitivity of deep-sea ecosystems to temperature changes. © 2014 Cambridge Philosophical Society.

  5. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

  6. Analyses of the deep borehole drilling status for a deep borehole disposal system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Youl; Choi, Heui Joo; Lee, Min Soo; Kim, Geon Young; Kim, Kyung Su [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The purpose of disposal of radioactive wastes is not only to isolate them from humans, but also to inhibit leakage of any radioactive materials into the accessible environment. Because of the extremely high activity and long time scale of the radioactivity of HLW (high-level radioactive waste), a mined deep geological disposal concept, with a disposal depth of about 500 m below ground, is considered the safest method to isolate spent fuel or high-level radioactive waste from the human environment with the best technology available at present. Deep borehole disposal is therefore under consideration as an alternative disposal concept in a number of countries because of its outstanding safety and cost effectiveness. In this paper, as one of the key technologies of a deep borehole disposal system, the general status of deep drilling technologies in the oil industry, the geothermal industry and the geoscientific field was reviewed for deep borehole disposal of high-level radioactive wastes. Based on the results of this review, the very preliminary applicability of deep drilling technology to deep borehole disposal, including the relation between depth and diameter, drilling time and feasibility classification, was analyzed.

  7. Overview of deep learning in medical imaging.

    Science.gov (United States)

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a

  8. WFIRST: Science from Deep Field Surveys

    Science.gov (United States)

    Koekemoer, Anton; Foley, Ryan; WFIRST Deep Field Working Group

    2018-01-01

    WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would for example yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed on locations in the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and present here a summary of the various properties of different locations in the sky that may be considered for future deep fields with WFIRST.

  9. Deep Learning and Its Applications in Biomedicine.

    Science.gov (United States)

    Cao, Chensi; Liu, Feng; Tan, Hai; Song, Deshou; Shu, Wenjie; Li, Weizhong; Zhou, Yiming; Bo, Xiaochen; Xie, Zhi

    2018-02-01

    Advances in biological and medical technologies have been providing us explosive volumes of biological and physiological data, such as medical images, electroencephalography, genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural network and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives for the future directions in the field of deep learning. Copyright © 2018. Production and hosting by Elsevier B.V.

  10. The deep lymphatic anatomy of the hand.

    Science.gov (United States)

    Ma, Chuan-Xiang; Pan, Wei-Ren; Liu, Zhi-An; Zeng, Fan-Qiang; Qiu, Zhi-Qiang

    2018-04-03

    The deep lymphatic anatomy of the hand still remains the least described in medical literature. Eight hands were harvested from four nonembalmed human cadavers amputated above the wrist. A small amount of 6% hydrogen peroxide was employed to detect the lymphatic vessels around the superficial and deep palmar vascular arches, in webs from the index to little fingers, the thenar and hypothenar areas. A 30-gauge needle was inserted into the vessels and injected with a barium sulphate compound. Each specimen was dissected, photographed and radiographed to demonstrate deep lymphatic distribution of the hand. Five groups of deep collecting lymph vessels were found in the hand: superficial palmar arch lymph vessel (SPALV); deep palmar arch lymph vessel (DPALV); thenar lymph vessel (TLV); hypothenar lymph vessel (HTLV); deep finger web lymph vessel (DFWLV). Each group of vessels drained in different directions first, then all turned and ran towards the wrist in different layers. The deep lymphatic drainage of the hand has been presented. The results will provide an anatomical basis for clinical management, educational reference and scientific research. Copyright © 2018 Elsevier GmbH. All rights reserved.

  11. Hello World Deep Learning in Medical Imaging.

    Science.gov (United States)

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
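
    The tutorial above distributes its own code; purely as an indication of what a minimal "hello world" classifier of this kind can look like (a generic sketch under assumed image size, directory layout, and hyperparameters, not the authors' code), a two-class image classifier in Keras might be:

    ```python
    # Minimal "hello world" medical-image classifier sketch (illustrative only; not the
    # tutorial's code). Assumes images arranged as train/<class_name>/*.png for two classes.
    import tensorflow as tf

    IMG_SIZE = (128, 128)

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "train", image_size=IMG_SIZE, batch_size=16)

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output, e.g. normal vs abnormal
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)
    ```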

  12. Deep Generative Models for Molecular Science

    DEFF Research Database (Denmark)

    Jørgensen, Peter Bjørn; Schmidt, Mikkel Nørgaard; Winther, Ole

    2018-01-01

    Generative deep machine learning models now rival traditional quantum-mechanical computations in predicting properties of new structures, and they come with a significantly lower computational cost, opening new avenues in computational molecular science. In the last few years, a variety of deep... generative models have been proposed for modeling molecules, which differ in both their model structure and choice of input features. We review these recent advances within deep generative models for predicting molecular properties, with particular focus on models based on the probabilistic autoencoder (or...

  13. Harnessing the Deep Web: Present and Future

    OpenAIRE

    Madhavan, Jayant; Afanasiev, Loredana; Antova, Lyublena; Halevy, Alon

    2009-01-01

    Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per-second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...

  14. Desalination Economic Evaluation Program (DEEP). User's manual

    International Nuclear Information System (INIS)

    2000-01-01

    DEEP (formerly named the "Co-generation and Desalination Economic Evaluation" spreadsheet, CDEE) was originally developed by General Atomics under contract and has been used in the IAEA's feasibility studies. For further confidence in the software, it was validated in March 1998; a user-friendly version was then issued under the name DEEP at the end of 1998. DEEP output includes the levelised cost of water and power, a breakdown of cost components, energy consumption and net saleable power for each selected option. Specific power plants can be modelled by adjustment of input data including design power, power cycle parameters and costs
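
    For orientation on the term "levelised cost" used above (a generic engineering-economics relation, not an equation quoted from the DEEP manual), the levelised cost of water is typically the annualized capital charge plus annual operating and energy costs divided by annual water production:

    ```latex
    % Generic levelised-cost-of-water relation (illustrative; not taken from the DEEP manual)
    \mathrm{LCOW} = \frac{\mathrm{CRF}\cdot C_{\mathrm{cap}} + C_{\mathrm{O\&M}} + C_{\mathrm{energy}}}{W_{\mathrm{annual}}},
    \qquad
    \mathrm{CRF} = \frac{i\,(1+i)^{n}}{(1+i)^{n}-1}
    ```

    where CRF is the capital recovery factor for discount rate i and economic lifetime n, C_cap the capital cost, C_O&M and C_energy the annual operation-and-maintenance and energy costs, and W_annual the annual water production.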

  15. Zooplankton at deep Red Sea brine pools

    KAUST Repository

    Kaartvedt, Stein

    2016-03-02

    The deep-sea anoxic brines of the Red Sea comprise unique, complex and extreme habitats. These environments are too harsh for metazoans, while the brine–seawater interface harbors dense microbial populations. We investigated the adjacent pelagic fauna at two brine pools using net tows, video records from a remotely operated vehicle and submerged echosounders. Waters just above the brine pool of Atlantis II Deep (2000 m depth) appeared depleted of macrofauna. In contrast, the fauna appeared to be enriched at the Kebrit Deep brine–seawater interface (1466 m).

  16. NATURAL GAS RESOURCES IN DEEP SEDIMENTARY BASINS

    Energy Technology Data Exchange (ETDEWEB)

    Thaddeus S. Dyman; Troy Cook; Robert A. Crovelli; Allison A. Henry; Timothy C. Hester; Ronald C. Johnson; Michael D. Lewan; Vito F. Nuccio; James W. Schmoker; Dennis B. Riggin; Christopher J. Schenk

    2002-02-05

    From a geological perspective, deep natural gas resources are generally defined as resources occurring in reservoirs at or below 15,000 feet, whereas ultra-deep gas occurs below 25,000 feet. From an operational point of view, "deep" is often thought of in a relative sense based on the geologic and engineering knowledge of gas (and oil) resources in a particular area. Deep gas can be found in either conventionally-trapped or unconventional basin-center accumulations that are essentially large single fields having spatial dimensions often exceeding those of conventional fields. Exploration for deep conventional and unconventional basin-center natural gas resources deserves special attention because these resources are widespread and occur in diverse geologic environments. In 1995, the U.S. Geological Survey estimated that 939 Tcf of technically recoverable natural gas remained to be discovered or was part of reserve appreciation from known fields in the onshore areas and State waters of the United States. Of this USGS resource, nearly 114 trillion cubic feet (Tcf) of technically recoverable gas remains to be discovered from deep sedimentary basins. Worldwide estimates of deep gas are also high. The U.S. Geological Survey World Petroleum Assessment 2000 Project recently estimated a world mean undiscovered conventional gas resource outside the U.S. of 844 Tcf below 4.5 km (about 15,000 feet). Less is known about the origins of deep gas than about the origins of gas at shallower depths because fewer wells have been drilled into the deeper portions of many basins. Some of the many factors contributing to the origin of deep gas include the thermal stability of methane, the role of water and non-hydrocarbon gases in natural gas generation, porosity loss with increasing thermal maturity, the kinetics of deep gas generation, thermal cracking of oil to gas, and source rock potential based on thermal maturity and kerogen type. Recent experimental simulations

  17. Comet Dust After Deep Impact

    Science.gov (United States)

    Wooden, Diane H.; Harker, David E.; Woodward, Charles E.

    2006-01-01

    When the Deep Impact Mission hit Jupiter Family comet 9P/Tempel 1, an ejecta crater was formed and a pocket of volatile gases and ices from 10-30 m below the surface was exposed (A'Hearn et al. 2005). This resulted in a gas geyser that persisted for a few hours (Sugita et al. 2005). The gas geyser pushed dust grains into the coma (Sugita et al. 2005), as well as ice grains (Schulz et al. 2006). The smaller of the dust grains were submicron in radii (0.2-0.3 micron), and were primarily composed of highly refractory minerals including amorphous (non-graphitic) carbon, and silicate minerals including amorphous (disordered) olivine (Fe,Mg)2SiO4 and pyroxene (Fe,Mg)SiO3 and crystalline Mg-rich olivine. The smaller grains moved faster, as expected from the size-dependent velocity law produced by gas-drag on grains. The mineralogy evolved with time: progressively larger grains persisted in the near nuclear region, having been imparted with slower velocities, and the mineralogies of these larger grains appeared simpler and without crystals. The smaller 0.2-0.3 micron grains reached the coma in about 1.5 hours (1 arc sec = 740 km), were more diverse in mineralogy than the larger grains and contained crystals, and appeared to travel through the coma together. No smaller grains appeared at larger coma distances later (with slower velocities), implying that if grain fragmentation occurred, it happened within the gas acceleration zone. These results of the high spatial resolution spectroscopy (GEMINI+Michelle: Harker et al. 2005, 2006; Subaru+COMICS: Sugita et al. 2005) revealed that the grains released from the interior were different from the nominally active areas of this comet by their: (a) crystalline content, (b) smaller size, (c) more diverse mineralogy. The temporal changes in the spectra, recorded by GEMINI+Michelle every 7 minutes, indicated that the dust mineralogy is inhomogeneous and, unexpectedly, the portion of the size distribution dominated by smaller grains has

  18. Anisotropy in the deep Earth

    Science.gov (United States)

    Romanowicz, Barbara; Wenk, Hans-Rudolf

    2017-08-01

    Seismic anisotropy has been found in many regions of the Earth's interior. Its presence in the Earth's crust has been known since the 19th century, and is due in part to the alignment of anisotropic crystals in rocks, and in part to patterns in the distribution of fractures and pores. In the upper mantle, seismic anisotropy was discovered 50 years ago, and can be attributed for the most part, to the alignment of intrinsically anisotropic olivine crystals during large scale deformation associated with convection. There is some indication for anisotropy in the transition zone, particularly in the vicinity of subducted slabs. Here we focus on the deep Earth - the lower mantle and core, where anisotropy is not yet mapped in detail, nor is there consensus on its origin. Most of the lower mantle appears largely isotropic, except in the last 200-300 km, in the D″ region, where evidence for seismic anisotropy has been accumulating since the late 1980s, mostly from shear wave splitting measurements. Recently, a picture has been emerging, where strong anisotropy is associated with high shear velocities at the edges of the large low shear velocity provinces (LLSVPs) in the central Pacific and under Africa. These observations are consistent with being due to the presence of highly anisotropic MgSiO3 post-perovskite crystals, aligned during the deformation of slabs impinging on the core-mantle boundary, and upwelling flow within the LLSVPs. We also discuss mineral physics aspects such as ultrahigh pressure deformation experiments, first principles calculations to obtain information about elastic properties, and derivation of dislocation activity based on bonding characteristics. Polycrystal plasticity simulations can predict anisotropy but models are still highly idealized and neglect the complex microstructure of polyphase aggregates with strong and weak components. A promising direction for future progress in understanding the origin of seismic anisotropy in the deep mantle

  19. DeepDive: Declarative Knowledge Base Construction.

    Science.gov (United States)

    De Sa, Christopher; Ratner, Alex; Ré, Christopher; Shin, Jaeho; Wang, Feiran; Wu, Sen; Zhang, Ce

    2016-03-01

    The dark data extraction or knowledge base construction (KBC) problem is to populate a SQL database with information from unstructured data sources including emails, webpages, and pdf reports. KBC is a long-standing problem in industry and research that encompasses problems of data extraction, cleaning, and integration. We describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems. The key idea in DeepDive is that statistical inference and machine learning are key tools to attack classical data problems in extraction, cleaning, and integration in a unified and more effective manner. DeepDive programs are declarative in that one cannot write probabilistic inference algorithms; instead, one interacts by defining features or rules about the domain. A key reason for this design choice is to enable domain experts to build their own KBC systems. We present the applications, abstractions, and techniques of DeepDive employed to accelerate construction of KBC systems.

  20. Variational inference & deep learning : A new synthesis

    NARCIS (Netherlands)

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  1. Pathways to deep decarbonization - 2015 report

    International Nuclear Information System (INIS)

    Ribera, Teresa; Colombier, Michel; Waisman, Henri; Bataille, Chris; Pierfederici, Roberta; Sachs, Jeffrey; Schmidt-Traub, Guido; Williams, Jim; Segafredo, Laura; Hamburg Coplan, Jill; Pharabod, Ivan; Oury, Christian

    2015-12-01

    In September 2015, the Deep Decarbonization Pathways Project published the Executive Summary of the Pathways to Deep Decarbonization: 2015 Synthesis Report. The full 2015 Synthesis Report was launched in Paris on December 3, 2015, at a technical workshop with the Mitigation Action Plans and Scenarios (MAPS) program. The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (°C). Achieving the 2 °C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. In turn, this will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization'

  2. Variational inference & deep learning: A new synthesis

    OpenAIRE

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  3. DNA Replication Profiling Using Deep Sequencing.

    Science.gov (United States)

    Saayman, Xanita; Ramos-Pérez, Cristina; Brown, Grant W

    2018-01-01

    Profiling of DNA replication during progression through S phase allows a quantitative snap-shot of replication origin usage and DNA replication fork progression. We present a method for using deep sequencing data to profile DNA replication in S. cerevisiae.

  4. DAPs: Deep Action Proposals for Action Understanding

    KAUST Repository

    Escorcia, Victor; Caba Heilbron, Fabian; Niebles, Juan Carlos; Ghanem, Bernard

    2016-01-01

    action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve from untrimmed videos temporal segments, which are likely to contain actions. A comprehensive evaluation indicates

  5. Evaluation of static resistance of deep foundations.

    Science.gov (United States)

    2017-05-01

    The focus of this research was to evaluate and improve the Florida Department of Transportation (FDOT) FB-Deep software's prediction of the nominal resistance of H-piles, prestressed concrete piles in limestone, and large-diameter (> 36 in.) open steel and concrete...

  6. The deep ocean under climate change.

    Science.gov (United States)

    Levin, Lisa A; Le Bris, Nadine

    2015-11-13

    The deep ocean absorbs vast amounts of heat and carbon dioxide, providing a critical buffer to climate change but exposing vulnerable ecosystems to combined stresses of warming, ocean acidification, deoxygenation, and altered food inputs. Resulting changes may threaten biodiversity and compromise key ocean services that maintain a healthy planet and human livelihoods. There exist large gaps in understanding of the physical and ecological feedbacks that will occur. Explicit recognition of deep-ocean climate mitigation and inclusion in adaptation planning by the United Nations Framework Convention on Climate Change (UNFCCC) could help to expand deep-ocean research and observation and to protect the integrity and functions of deep-ocean ecosystems. Copyright © 2015, American Association for the Advancement of Science.

  7. Deep gold mine fracture zone behaviour

    CSIR Research Space (South Africa)

    Napier, JAL

    1998-12-01

    Full Text Available The investigation of the behaviour of the fracture zone surrounding deep-level gold mine stopes is detailed in three main sections of this report. Section 2 outlines the ongoing study of fundamental fracture processes and their numerical...

  8. Deep Ultraviolet Macroporous Silicon Filters, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR Phase I proposal describes a novel method to make deep and far UV optical filters from macroporous silicon. This type of filter consists of an array of...

  9. Toolkits and Libraries for Deep Learning.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth

    2017-08-01

    Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.

  10. Deep-Sea Soft Coral Habitat Suitability

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Deep-sea corals, also known as cold water corals, create complex communities that provide habitat for a variety of invertebrate and fish species, such as grouper,...

  11. Photon diffractive dissociation in deep inelastic scattering

    International Nuclear Information System (INIS)

    Ryskin, M.G.

    1990-01-01

    The new ep collider HERA gives us the possibility to study the diffractive dissociation of a virtual photon in deep inelastic ep collisions. The process of photon dissociation in deep inelastic scattering is the most direct way to measure the value of the triple-pomeron vertex G_3P. It was shown that the value of the correct bare vertex G_3P may exceed its effective value, measured in the triple-reggeon region, by more than a factor of 4, and reaches about 40-50% of the elastic pp-pomeron vertex. In contrast, in deep inelastic processes the perpendicular momenta q_t of the secondary particles are large enough. Thus in deep inelastic reactions one can measure the absolute value of the G_3P vertex in the most direct way and compare its value and q_t dependence with the leading-log QCD predictions

  12. Applications of Deep Learning in Biomedicine.

    Science.gov (United States)

    Mamoshina, Polina; Vieira, Armando; Putin, Evgeny; Zhavoronkov, Alex

    2016-05-02

    Increases in throughput and installed base of biomedical research equipment led to a massive accumulation of -omics data known to be highly variable, high-dimensional, and sourced from multiple often incompatible data platforms. While this data may be useful for biomarker identification and drug discovery, the bulk of it remains underutilized. Deep neural networks (DNNs) are efficient algorithms based on the use of compositional layers of neurons, with advantages well matched to the challenges -omics data presents. While achieving state-of-the-art results and even surpassing human accuracy in many challenging tasks, the adoption of deep learning in biomedicine has been comparatively slow. Here, we discuss key features of deep learning that may give this approach an edge over other machine learning methods. We then consider limitations and review a number of applications of deep learning in biomedical studies demonstrating proof of concept and practical utility.

  13. Mean associative multiplicities in deep inelastic processes

    International Nuclear Information System (INIS)

    Dzhaparidze, G.Sh.; Kiselev, A.V.; Petrov, V.A.

    1981-01-01

    The associative hadron multiplicities in deep inelastic and Drell-Yan processes are studied. In particular, the mean multiplicities in different hard processes in QCD are found to be determined by the mean multiplicity in a parton jet

  14. Deep-Sea Stony Coral Habitat Suitability

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Deep-sea corals, also known as cold water corals, create complex communities that provide habitat for a variety of invertebrate and fish species, such as grouper,...

  15. Deep Learning and Applications in Computational Biology

    KAUST Repository

    Zeng, Jianyang

    2016-01-01

    In this work, we develop a general and flexible deep learning framework for modeling structural binding preferences and predicting binding sites of RBPs, which takes (predicted) RNA tertiary structural information

  16. Leading particle in deep inelastic scattering

    International Nuclear Information System (INIS)

    Petrov, V.A.

    1984-01-01

    The leading particle effect in deep inelastic scattering is considered. The change of the characteristic shape of the leading particle inclusive spectrum with Q^2 is estimated to be rather significant at very high Q^2

  17. Progress in deep-UV photoresists

    Indian Academy of Sciences (India)

    Unknown

    This paper reviews the recent development and challenges of deep-UV photoresists and their ... small amount of acid, when exposed to light by photochemical ... anomalous insoluble skin and linewidth shift when the PEB was delayed.

  18. Methods in mooring deep sea sediment traps

    Digital Repository Service at National Institute of Oceanography (India)

    Venkatesan, R.; Fernando, V.; Rajaraman, V.S.; Janakiraman, G.

    The experience gained during the deployment and retrieval of nearly 39 sets of deep sea sediment trap moorings on various ships like FS Sonne, ORV Sagarkanya and DSV Nand Rachit is outlined. The various problems encountered...

  19. Deep Water Horizon (HB1006, EK60)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Monitor and measure the biological, chemical, and physical environment in the area of the oil spill from the deep water horizon oil rig in the Gulf of Mexico. A wide...

  20. Biodiversity loss from deep-sea mining

    OpenAIRE

    C. L. Van Dover; J. A. Ardron; E. Escobar; M. Gianni; K. M. Gjerde; A. Jaeckel; D. O. B. Jones; L. A. Levin; H. Niner; L. Pendleton; C. R. Smith; T. Thiele; P. J. Turner; L. Watling; P. P. E. Weaver

    2017-01-01

    The emerging deep-sea mining industry is seen by some to be an engine for economic development in the maritime sector. The International Seabed Authority (ISA) – the body that regulates mining activities on the seabed beyond national jurisdiction – must also protect the marine environment from harmful effects that arise from mining. The ISA is currently drafting a regulatory framework for deep-sea mining that includes measures for environmental protection. Responsible mining increasingly stri...

  1. DEEP VADOSE ZONE TREATABILITY TEST PLAN

    International Nuclear Information System (INIS)

    Chronister, G.B.; Truex, M.J.

    2009-01-01

    • Treatability test plan published in 2008 • Outlines technology treatability activities for evaluating application of in situ technologies and surface barriers to deep vadose zone contamination (technetium and uranium) • Key elements - Desiccation testing - Testing of gas-delivered reactants for in situ treatment of uranium - Evaluating surface barrier application to deep vadose zone - Evaluating in situ grouting and soil flushing

  2. Deep inelastic inclusive weak and electromagnetic interactions

    International Nuclear Information System (INIS)

    Adler, S.L.

    1976-01-01

    The theory of deep inelastic inclusive interactions is reviewed, emphasizing applications to electromagnetic and weak charged-current processes. The following reactions are considered: e + N → e + X, ν + N → μ⁻ + X, anti-ν + N → μ⁺ + X, where X denotes a summation over all final-state hadrons and the ν's are muon neutrinos. After a discussion of scaling, the quark-parton model is invoked to explain the principal experimental features of deep inelastic inclusive reactions

  3. Short-term Memory of Deep RNN

    OpenAIRE

    Gallicchio, Claudio

    2018-01-01

    The extension of deep learning towards temporal data processing is gaining increasing research interest. In this paper we investigate the properties of state dynamics developed in successive levels of deep recurrent neural networks (RNNs) in terms of short-term memory abilities. Our results reveal interesting insights that shed light on the nature of layering as a factor of RNN design. Noticeably, higher layers in a hierarchically organized RNN architecture turn out to be inherently biased ...

  4. Deep Learning for Video Game Playing

    OpenAIRE

    Justesen, Niels; Bontrager, Philip; Togelius, Julian; Risi, Sebastian

    2017-01-01

    In this article, we review recent Deep Learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces...

  5. Life Support for Deep Space and Mars

    Science.gov (United States)

    Jones, Harry W.; Hodgson, Edward W.; Kliss, Mark H.

    2014-01-01

    How should life support for deep space be developed? The International Space Station (ISS) life support system is the operational result of many decades of research and development. Long duration deep space missions such as Mars have been expected to use matured and upgraded versions of ISS life support. Deep space life support must use the knowledge base incorporated in ISS but it must also meet much more difficult requirements. The primary new requirement is that life support in deep space must be considerably more reliable than on ISS or anywhere in the Earth-Moon system, where emergency resupply and a quick return are possible. Due to the great distance from Earth and the long duration of deep space missions, if life support systems fail, the traditional approaches for emergency supply of oxygen and water, emergency supply of parts, and crew return to Earth or escape to a safe haven are likely infeasible. The Orbital Replacement Unit (ORU) maintenance approach used by ISS is unsuitable for deep space with ORUs as large and complex as those originally provided in ISS designs because it minimizes opportunities for commonality of spares, requires replacement of many functional parts with each failure, and results in substantial launch mass and volume penalties. It has become impractical even for ISS after the shuttle era, resulting in the need for ad hoc repair activity at lower assembly levels with consequent crew time penalties and extended repair timelines. Less complex, more robust technical approaches may be needed to meet the difficult deep space requirements for reliability, maintainability, and reparability. Developing an entirely new life support system would neglect what has been achieved. The suggested approach is to use the ISS life support technologies as a platform to build on and to continue to improve ISS subsystems while also developing new subsystems where needed to meet deep space requirements.

  6. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  7. Predicting Process Behaviour using Deep Learning

    OpenAIRE

    Evermann, Joerg; Rehse, Jana-Rebecca; Fettke, Peter

    2016-01-01

    Predicting business process behaviour is an important aspect of business process management. Motivated by research in natural language processing, this paper describes an application of deep learning with recurrent neural networks to the problem of predicting the next event in a business process. This is both a novel method in process prediction, which has largely relied on explicit process models, and also a novel application of deep learning methods. The approach is evaluated on two real da...

  8. A Deep Learning Approach to Drone Monitoring

    OpenAIRE

    Chen, Yueru; Aggarwal, Pranav; Choi, Jongmoo; Kuo, C. -C. Jay

    2017-01-01

    A drone monitoring system that integrates deep-learning-based detection and tracking modules is proposed in this work. The biggest challenge in adopting deep learning methods for drone detection is the limited amount of training drone images. To address this issue, we develop a model-based drone augmentation technique that automatically generates drone images with a bounding box label on drone's location. To track a small flying drone, we utilize the residual information between consecutive i...

  9. Bank of Weight Filters for Deep CNNs

    Science.gov (United States)

    2016-11-22

    very large even on the best available hardware. In some studies in transfer learning it has been observed that the network learnt on one task can be... CNNs. Keywords: CNN, deep learning, neural networks, transfer learning, bank of weight filters, BWF. 1. Introduction: Object recognition is an important... of CNNs (or, in general, of deep neural networks) is that the feature generation part is fused with the classifier part and both parts are learned together

  10. Leveraging multiple datasets for deep leaf counting

    OpenAIRE

    Dobrescu, Andrei; Giuffrida, Mario Valerio; Tsaftaris, Sotirios A

    2017-01-01

    The number of leaves a plant has is one of the key traits (phenotypes) describing its development and growth. Here, we propose an automated, deep learning based approach for counting leaves in model rosette plants. While state-of-the-art results on leaf counting with deep learning methods have recently been reported, they obtain the count as a result of leaf segmentation and thus require per-leaf (instance) segmentation to train the models (a rather strong annotation). Instead, our method tre...

  11. DeepSpark: A Spark-Based Distributed Deep Learning Framework for Commodity Clusters

    OpenAIRE

    Kim, Hanjoo; Park, Jaehong; Jang, Jaehee; Yoon, Sungroh

    2016-01-01

    The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that exploits Apache Spark on commodity clusters. To support parallel operation...

  12. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can get hefty and computationally intensive with increasing complexity of the task. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN)-based hierarchical neural network for object detection. CSRN has been shown to be more effective at solving complex tasks such as maze traversal and image processing than generic feed-forward networks. While deep neural networks (DNNs) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent in neural networks with recurrency. Further, our work introduces a deep hierarchy in the SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper presents experiments on face, facial expression and character recognition tasks using the novel deep recurrent model and compares recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN-based recognition framework in a humanoid robotic platform called NAO.

  13. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
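
    The figures above (AUC, sensitivity, specificity) are standard binary-classification metrics. The following generic sketch (not the study's code; the scores and labels are fabricated placeholders) shows how such numbers are typically computed from network outputs with scikit-learn:

    ```python
    # Generic evaluation sketch for referable-DR screening metrics (illustrative only).
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                      # 1 = referable retinopathy
    y_score = np.array([0.1, 0.4, 0.8, 0.65, 0.3, 0.9, 0.2, 0.55])   # network output probabilities

    auc = roc_auc_score(y_true, y_score)          # area under the ROC curve

    # Choose an operating threshold, then report sensitivity and specificity.
    y_pred = (y_score >= 0.5).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"AUC={auc:.3f}  sensitivity={sensitivity:.1%}  specificity={specificity:.1%}")
    ```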

  14. Some Challenges of Deep Mining†

    Directory of Open Access Journals (Sweden)

    Charles Fairhurst

    2017-08-01

    Full Text Available An increased global supply of minerals is essential to meet the needs and expectations of a rapidly rising world population. This implies extraction from greater depths. Autonomous mining systems, developed through sustained R&D by equipment suppliers, reduce miner exposure to hostile work environments and increase safety. This places increased focus on “ground control” and on rock mechanics to define the depth to which minerals may be extracted economically. Although significant efforts have been made since the end of World War II to apply mechanics to mine design, there have been both technological and organizational obstacles. Rock in situ is a more complex engineering material than is typically encountered in most other engineering disciplines. Mining engineering has relied heavily on empirical procedures in design for thousands of years. These are no longer adequate to address the challenges of the 21st century, as mines venture to increasingly greater depths. The development of the synthetic rock mass (SRM) in 2008 provides researchers with the ability to analyze the deformational behavior of rock masses that are anisotropic and discontinuous—attributes that were described as the defining characteristics of in situ rock by Leopold Müller, the president and founder of the International Society for Rock Mechanics (ISRM), in 1966. Recent developments in the numerical modeling of large-scale mining operations (e.g., caving) using the SRM reveal unanticipated deformational behavior of the rock. The application of massive parallelization and cloud computational techniques offers major opportunities: for example, to assess uncertainties in numerical predictions; to establish the mechanics basis for the empirical rules now used in rock engineering and their validity for the prediction of rock mass behavior beyond current experience; and to use the discrete element method (DEM) in the optimization of deep mine design. For the first time, mining

  15. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    Science.gov (United States)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  16. Ultra Deep Wave Equation Imaging and Illumination

    Energy Technology Data Exchange (ETDEWEB)

    Alexander M. Popovici; Sergey Fomel; Paul Sava; Sean Crawley; Yining Li; Cristian Lupascu

    2006-09-30

    In this project we developed and tested a novel technology, designed to enhance seismic resolution and imaging of ultra-deep complex geologic structures by using state-of-the-art wave-equation depth migration and wave-equation velocity model building technology for deeper data penetration and recovery, steeper dip and ultra-deep structure imaging, accurate velocity estimation for imaging and pore pressure prediction and accurate illumination and amplitude processing for extending the AVO prediction window. Ultra-deep wave-equation imaging provides greater resolution and accuracy under complex geologic structures where energy multipathing occurs, than what can be accomplished today with standard imaging technology. The objective of the research effort was to examine the feasibility of imaging ultra-deep structures onshore and offshore, by using (1) wave-equation migration, (2) angle-gathers velocity model building, and (3) wave-equation illumination and amplitude compensation. The effort consisted of answering critical technical questions that determine the feasibility of the proposed methodology, testing the theory on synthetic data, and finally applying the technology for imaging ultra-deep real data. Some of the questions answered by this research addressed: (1) the handling of true amplitudes in the downward continuation and imaging algorithm and the preservation of the amplitude with offset or amplitude with angle information required for AVO studies, (2) the effect of several imaging conditions on amplitudes, (3) non-elastic attenuation and approaches for recovering the amplitude and frequency, (4) the effect of aperture and illumination on imaging steep dips and on discriminating the velocities in the ultra-deep structures. All these effects were incorporated in the final imaging step of a real data set acquired specifically to address ultra-deep imaging issues, with large offsets (12,500 m) and long recording time (20 s).
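
    As background for the wave-equation depth migration discussed above (a textbook relation given for orientation, not the project's specific algorithm), downward continuation of a recorded wavefield can be written in the frequency-wavenumber domain as a phase shift per depth step, assuming the velocity is laterally homogeneous within that step:

    ```latex
    % Phase-shift downward continuation (textbook form, for orientation only)
    P(k_x, z+\Delta z, \omega) = P(k_x, z, \omega)\, e^{\,i k_z \Delta z},
    \qquad
    k_z = \sqrt{\frac{\omega^{2}}{v^{2}(z)} - k_x^{2}}
    ```

    where P is the Fourier-transformed wavefield and v(z) the migration velocity; an image is then formed at each depth by applying an imaging condition, for example cross-correlating the downward-continued source and receiver wavefields at zero time lag.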

  17. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene; Kulmanov, Maxat; Schofield, Paul N; Gkoutos, Georgios V; Hoehndorf, Robert

    2018-01-01

    phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well

  18. Assessment of deep geological environment condition

    International Nuclear Information System (INIS)

    Bae, Dae Seok; Han, Kyung Won; Joen, Kwan Sik

    2003-05-01

    The main tasks of the geoscientific study in the 2nd stage focused mainly on the near-field condition of the deep geologic environment, and aimed to generate the geologic input data for a Korean reference disposal system for high-level radioactive wastes and to establish a site characterization methodology, covering neotectonic features, fracture systems and mechanical properties of plutonic rocks, and hydrogeochemical characteristics. A preliminary assessment of neotectonics in the Korean peninsula was performed on the basis of recorded seismicity, investigated Quaternary faults, uplift characteristics studied in limited areas, and the distribution and characteristics of the major regional faults. The local fracture system was studied in detail from data obtained from deep boreholes in granitic terrain. Through this deep drilling project, the geometrical and hydraulic properties of different fracture sets were statistically analysed on a block scale. The mechanical properties of intact rocks were evaluated from core samples by laboratory testing, and the in-situ stress conditions were estimated by hydraulic fracturing tests in the boreholes. The hydrogeochemical conditions in the deep boreholes were characterized based on hydrochemical composition and isotopic signatures, and an attempt was made to assess their interrelation with the major fracture system. The residence time of deep groundwater was estimated by C-14 dating. For the travel time of groundwater between the boreholes, the methodology and equipment for tracer tests were established

  19. Molecular analysis of deep subsurface bacteria

    International Nuclear Information System (INIS)

    Jimenez Baez, L.E.

    1989-09-01

    Deep sediment samples from site C10a in Appleton, and sites P24, P28, and P29 at the Savannah River Site (SRS) near Aiken, South Carolina, were studied to determine their microbial community composition, DNA homology and mol %G+C. Different geological formations with great variability in hydrogeological parameters were found across the depth profile. Phenotypic identification of deep subsurface bacteria underestimated the bacterial diversity at the three SRS sites, since bacteria with the same phenotype had different DNA composition and less than 70% DNA homology. Total DNA hybridization and mol %G+C analysis of deep sediment bacterial isolates suggested that each formation comprises a different microbial community. Depositional environment was more important than site or geological formation for the DNA relatedness between deep subsurface bacteria, since more than 70% of bacteria with 20% or more DNA homology came from the same depositional environments. Based on phenotypic and genotypic tests, Pseudomonas spp. and Acinetobacter spp.-like bacteria were identified in 85-million-year-old sediments. This suggests that these microbial communities might have adapted over a long period of time to the environmental conditions of the deep subsurface

  20. Preface: Deep Slab and Mantle Dynamics

    Science.gov (United States)

    Suetsugu, Daisuke; Bina, Craig R.; Inoue, Toru; Wiens, Douglas A.

    2010-11-01

    We are pleased to publish this special issue of the journal Physics of the Earth and Planetary Interiors entitled "Deep Slab and Mantle Dynamics". This issue is an outgrowth of the international symposium "Deep Slab and Mantle Dynamics", which was held on February 25-27, 2009, in Kyoto, Japan. This symposium was organized by the "Stagnant Slab Project" (SSP) research group to present the results of the 5-year project and to facilitate intensive discussion with well-known international researchers in related fields. The SSP and the symposium were supported by a Grant-in-Aid for Scientific Research (16075101) from the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government. In the symposium, key issues discussed by participants included: transportation of water into the deep mantle and its role in slab-related dynamics; observational and experimental constraints on deep slab properties and the slab environment; modeling of slab stagnation to constrain its mechanisms in comparison with observational and experimental data; observational, experimental and modeling constraints on the fate of stagnant slabs; eventual accumulation of stagnant slabs on the core-mantle boundary and its geodynamic implications. This special issue is a collection of papers presented in the symposium and other papers related to the subject of the symposium. The collected papers provide an overview of the wide range of multidisciplinary studies of mantle dynamics, particularly in the context of subduction, stagnation, and the fate of deep slabs.

  1. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are treated as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
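
    The following is a minimal PyTorch-style sketch of the general idea of backpropagating through spike events by substituting a smooth surrogate derivative for the non-differentiable threshold. It illustrates the concept only, not the authors' exact formulation (which treats the membrane potentials themselves as the differentiable signals and spike-time discontinuities as noise); the decay constant, threshold, surrogate shape and toy objective are all assumptions.

      import torch

      class SpikeFn(torch.autograd.Function):
          """Heaviside spike in the forward pass, smooth surrogate gradient in the backward pass."""

          @staticmethod
          def forward(ctx, v_minus_thresh):
              ctx.save_for_backward(v_minus_thresh)
              return (v_minus_thresh > 0).float()

          @staticmethod
          def backward(ctx, grad_output):
              (x,) = ctx.saved_tensors
              # derivative of a "fast sigmoid" used as a stand-in for the spike discontinuity
              surrogate = 1.0 / (1.0 + 10.0 * x.abs()) ** 2
              return grad_output * surrogate

      def lif_step(v, x, w, decay=0.9, thresh=1.0):
          """One leaky integrate-and-fire step: integrate input, emit spikes, soft reset."""
          v = decay * v + x @ w                  # membrane potential update
          s = SpikeFn.apply(v - thresh)          # non-differentiable spike with surrogate gradient
          v = v - s * thresh                     # subtract threshold where a spike occurred
          return v, s

      # usage sketch: 100 input neurons driving 10 LIF neurons for 20 time steps
      w = torch.nn.Parameter(0.1 * torch.randn(100, 10))
      v = torch.zeros(10)
      spike_count = torch.zeros(10)
      for _ in range(20):
          x = (torch.rand(100) < 0.3).float()    # Bernoulli input spike pattern
          v, s = lif_step(v, x, w)
          spike_count = spike_count + s
      loss = spike_count.sum()                   # toy objective; gradients flow through the surrogates
      loss.backward()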

  2. Deep Ocean Contribution to Sea Level Rise

    Science.gov (United States)

    Chang, L.; Sun, W.; Tang, H.; Wang, Q.

    2017-12-01

    The ocean temperature and salinity changes in the upper 2000 m can be detected by Argo floats, so the steric height change of that layer is known. But the ocean layers above 2000 m represent only 50% of the total ocean volume. Although the temperature and salinity changes are small compared to the upper ocean, the deep ocean contribution to sea level might be significant because of its large volume. Previous research on the deep ocean has relied on very sparse in situ observations and is limited to decadal and longer-term rates of change. The available observational data in the deep ocean are too sparse to determine the temporal variability, and the long-term changes may have a bias. We will use the Argo data and combine in situ and topographic data to estimate the temperature and salinity of the sea water below 2000 m, so that we can obtain monthly data. We will analyze the seasonal and annual changes of the steric height due to the deep ocean between 2005 and 2016, and we will evaluate the result by combining the present-day satellite and in situ observing systems. The deep ocean contribution can be inferred indirectly as the difference between the altimetry minus GRACE and the Argo-based steric sea level.
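
    The budget residual described in the last sentence can be written out directly. Below is a tiny illustrative Python computation; the rates are placeholder numbers, not results from this study.

      # Illustrative budget residual; all rates are placeholder values in mm/yr.
      total_sea_level_altimetry = 3.4   # total sea-level change seen by satellite altimetry
      ocean_mass_grace          = 2.0   # ocean-mass (barystatic) change from GRACE
      steric_argo_upper_2000m   = 1.1   # upper-ocean steric change from Argo (0-2000 m)

      # deep-ocean (below 2000 m) steric contribution inferred as the residual
      deep_ocean_steric = total_sea_level_altimetry - ocean_mass_grace - steric_argo_upper_2000m
      print(f"inferred deep-ocean steric contribution: {deep_ocean_steric:.2f} mm/yr")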

  3. Deep Learning: A Primer for Radiologists.

    Science.gov (United States)

    Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An

    2017-01-01

    Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
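
    As a companion to the description of weighted connections adjusted by back-propagating a corrective error signal, here is a minimal NumPy sketch of a two-layer network trained that way on toy data. It is a didactic illustration only, not a medical-imaging model; the layer sizes, learning rate and toy task are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(64, 4))                    # toy inputs
      y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy binary targets

      W1, b1 = 0.5 * rng.normal(size=(4, 8)), np.zeros(8)
      W2, b2 = 0.5 * rng.normal(size=(8, 1)), np.zeros(1)
      lr = 0.5

      for epoch in range(500):
          # forward pass through two layers of weighted connections
          h = np.tanh(X @ W1 + b1)
          p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output probability
          # backward pass: propagate the corrective error signal layer by layer
          d_logit = (p - y) / len(X)                  # gradient of cross-entropy w.r.t. the logit
          dW2, db2 = h.T @ d_logit, d_logit.sum(axis=0)
          d_h = (d_logit @ W2.T) * (1.0 - h ** 2)     # chain rule through the tanh layer
          dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
          # iterative weight adjustment (gradient descent)
          W1 -= lr * dW1; b1 -= lr * db1
          W2 -= lr * dW2; b2 -= lr * db2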

  4. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene

    2018-05-02

    Background: Prioritization of variants in personal genomic data is a major challenge. Recently, computational methods that rely on comparing phenotype similarity have been shown to be useful for identifying causative variants. In these methods, pathogenicity prediction is combined with a semantic similarity measure to prioritize not only variants that are likely to be dysfunctional but also those that are likely involved in the pathogenesis of a patient's phenotype. Results: We have developed DeepPVP, a variant prioritization method that combines automated inference with deep neural networks to identify the likely causative variants in whole exome or whole genome sequence data. We demonstrate that DeepPVP performs significantly better than existing methods, including phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well as accuracy.
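
    To illustrate the general recipe (per-variant pathogenicity features combined with a phenotype-similarity score, fed to a neural classifier whose output ranks the candidates), here is a hedged Python sketch. The feature set, the tiny training table and the scikit-learn MLP are hypothetical stand-ins; DeepPVP's actual features and network are those described in the paper and repository.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # Hypothetical per-variant features: [pathogenicity score, allele frequency,
      # phenotype-similarity score]; labels mark variants known to be causative.
      X_train = np.array([[25.1, 0.0001, 0.82],
                          [ 3.2, 0.2100, 0.10],
                          [18.4, 0.0005, 0.65],
                          [ 1.1, 0.3500, 0.05],
                          [22.7, 0.0003, 0.71],
                          [ 2.5, 0.1200, 0.08]])
      y_train = np.array([1, 0, 1, 0, 1, 0])

      clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
      clf.fit(X_train, y_train)

      # Rank a patient's candidate variants by the predicted probability of being causative.
      candidates = np.array([[22.0, 0.0002, 0.90],
                             [ 9.5, 0.0100, 0.30]])
      scores = clf.predict_proba(candidates)[:, 1]
      ranking = np.argsort(-scores)            # candidate indices, most likely causative first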

  5. Deep learning in TMVA Benchmarking Benchmarking TMVA DNN Integration of a Deep Autoencoder

    CERN Document Server

    Huwiler, Marc

    2017-01-01

    The TMVA library in ROOT is dedicated to multivariate analysis, and in particular offers numerous machine learning algorithms in a standardized framework. It is widely used in High Energy Physics for data analysis, mainly to perform regression and classification. To keep up to date with the state of the art in deep learning, a new deep learning module was being developed this summer, offering a deep neural network, a convolutional neural network, and an autoencoder. TMVA did not yet have any autoencoder method, and the present project consists in implementing the TMVA autoencoder class based on the deep learning module. It also includes some benchmarking performed on the actual deep neural network implementation, in comparison to the Keras framework with Tensorflow and Theano backend.

  6. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network.

    Science.gov (United States)

    Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval

    2018-02-26

    Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and the effectiveness of different treatment options, and how it can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
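
    The core training objective of a Cox proportional hazards network is the negative log partial likelihood applied to a scalar risk score produced by the network. Below is a minimal PyTorch sketch of that standard loss (without handling of tied event times) and a small network; the layer sizes and data are placeholders, and this is a sketch of the generic Cox objective rather than the authors' released code.

      import torch
      import torch.nn as nn

      class RiskNet(nn.Module):
          """Small MLP mapping covariates to a scalar log-risk (the Cox linear predictor)."""
          def __init__(self, n_features):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                       nn.Linear(32, 1))

          def forward(self, x):
              return self.net(x).squeeze(-1)

      def cox_partial_likelihood_loss(log_risk, time, event):
          """Negative log Cox partial likelihood; no handling of tied event times."""
          order = torch.argsort(time, descending=True)                # longest follow-up first
          log_risk, event = log_risk[order], event[order]
          log_cumulative_hazard = torch.logcumsumexp(log_risk, dim=0) # log-sum over the risk set
          return -torch.sum((log_risk - log_cumulative_hazard) * event) / event.sum()

      # usage sketch with random placeholder data
      x = torch.randn(128, 10)
      time = torch.rand(128)
      event = (torch.rand(128) < 0.7).float()
      model = RiskNet(10)
      loss = cox_partial_likelihood_loss(model(x), time, event)
      loss.backward()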

  7. Deep Learning in Open Source Learning Streams

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas

    2016-01-01

    This chapter presents research on deep learning in a digital learning environment and raises the question if digital instructional designs can catalyze deeper learning than traditional classroom teaching. As a theoretical point of departure the notion of ‘situated learning’ is utilized...... and contrasted to the notion of functionalistic learning in a digital context. The mechanism that enables deep learning in this context is ‘The Open Source Learning Stream’. ‘The Open Source Learning Stream’ is the notion of sharing ‘learning instances’ in a digital space (discussion board, Facebook group......, unistructural, multistructural or relational learning. The research concludes that ‘The Open Source Learning Stream’ can catalyze deep learning and that there are four types of ‘Open Source Learning streams’; individual/ asynchronous, individual/synchronous, shared/asynchronous and shared...

  8. Deep learning in medical imaging: General overview

    Energy Technology Data Exchange (ETDEWEB)

    Lee, June Goo; Jun, Sang Hoon; Cho, Young Won; Lee, Hyun Na; KIm, Guk Bae; Seo, Joon Beom; Kim, Nam Kug [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2017-08-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and health care, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  9. Deep-seated sarcomas of the penis

    Directory of Open Access Journals (Sweden)

    Alberto A. Antunes

    2005-06-01

    Mesenchymal neoplasias represent 5% of tumors affecting the penis. Due to the rarity of such tumors, there is no agreement concerning the best method for staging and managing these patients. Sarcomas of the penis can be classified as deep-seated if they derive from the structures forming the spongy body and the cavernous bodies. Superficial lesions are usually low-grade and show a small tendency towards distant metastasis. In contrast, deep-seated lesions usually show behavior that is more aggressive and have poorer prognosis. The authors report 3 cases of deep-seated primary sarcomas of the penis and review the literature on this rare and aggressive neoplasia.

  10. Strategic Technologies for Deep Space Transport

    Science.gov (United States)

    Litchford, Ronald J.

    2016-01-01

    Deep space transportation capability for science and exploration is fundamentally limited by available propulsion technologies. Traditional chemical systems are performance plateaued and require enormous Initial Mass in Low Earth Orbit (IMLEO), whereas solar electric propulsion systems are power limited and unable to execute rapid transits. Nuclear based propulsion and alternative energetic methods, on the other hand, represent potential avenues, perhaps the only viable avenues, to high specific power space transport evincing reduced trip time, reduced IMLEO, and expanded deep space reach. Here, key deep space transport mission capability objectives are reviewed in relation to STMD technology portfolio needs, and the advanced propulsion technology solution landscape is examined including open questions, technical challenges, and developmental prospects. Options for potential future investment across the full complement of STMD programs are presented based on an informed awareness of complementary activities in industry, academia, OGAs, and NASA mission directorates.

  11. Deep learning in medical imaging: General overview

    International Nuclear Information System (INIS)

    Lee, June Goo; Jun, Sang Hoon; Cho, Young Won; Lee, Hyun Na; KIm, Guk Bae; Seo, Joon Beom; Kim, Nam Kug

    2017-01-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and health care, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging

  12. Deep learning for SAR image formation

    Science.gov (United States)

    Mason, Eric; Yonel, Bariscan; Yazici, Birsen

    2017-04-01

    The recent success of deep learning has led to growing interest in applying these methods to signal processing problems. This paper explores the applications of deep learning to synthetic aperture radar (SAR) image formation. We review deep learning from a perspective relevant to SAR image formation. Our objective is to address SAR image formation in the presence of uncertainties in the SAR forward model. We present a recurrent auto-encoder network architecture based on the iterative shrinkage thresholding algorithm (ISTA) that incorporates SAR modeling. We then present an off-line training method using stochastic gradient descent and discuss the challenges and key steps of learning. Lastly, we show experimentally that our method can be used to form focused images in the presence of phase uncertainties. We demonstrate that the resulting algorithm has faster convergence and decreased reconstruction error than that of ISTA.
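
    Since the proposed network unrolls the iterative shrinkage thresholding algorithm, it may help to see plain (non-learned) ISTA spelled out. The NumPy sketch below solves a toy sparse-recovery problem; the random operator, regularization weight and iteration count are arbitrary illustrative choices, and the learned, SAR-specific recurrent auto-encoder of the paper is not reproduced here.

      import numpy as np

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def ista(A, y, lam=0.1, n_iter=200):
          """Plain ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)           # gradient of the quadratic data-fit term
              x = soft_threshold(x - grad / L, lam / L)
          return x

      # toy sparse-recovery example with a random operator standing in for the SAR model
      rng = np.random.default_rng(1)
      A = rng.normal(size=(40, 100))
      x_true = np.zeros(100)
      x_true[[3, 57, 91]] = [1.0, -2.0, 0.5]
      y = A @ x_true + 0.01 * rng.normal(size=40)
      x_hat = ista(A, y)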

  13. Oceanography related to deep sea waste disposal

    International Nuclear Information System (INIS)

    1978-09-01

    In connection with studies on the feasibility of the safe disposal of radioactive waste, from a large scale nuclear power programme, either on the bed of the deep ocean or within the deep ocean bed, preparation of the present document was commissioned by the (United Kingdom) Department of the Environment. It attempts (a) to summarize the present state of knowledge of the deep ocean environment relevant to the disposal options and assess the processes which could aid or hinder dispersal of material released from its container; (b) to identify areas of research in which more work is needed before the safety of disposal on, or beneath, the ocean bed can be assessed; and (c) to indicate which areas of research can or should be undertaken by British scientists. The programmes of international cooperation in this field are discussed. The report is divided into four chapters dealing respectively with geology and geophysics, geochemistry, physical oceanography and marine biology. (U.K.)

  14. In Brief: Deep-sea observatory

    Science.gov (United States)

    Showstack, Randy

    2008-11-01

    The first deep-sea ocean observatory offshore of the continental United States has begun operating in the waters off central California. The remotely operated Monterey Accelerated Research System (MARS) will allow scientists to monitor the deep sea continuously. Among the first devices to be hooked up to the observatory are instruments to monitor earthquakes, videotape deep-sea animals, and study the effects of acidification on seafloor animals. ``Some day we may look back at the first packets of data streaming in from the MARS observatory as the equivalent of those first words spoken by Alexander Graham Bell: `Watson, come here, I need you!','' commented Marcia McNutt, president and CEO of the Monterey Bay Aquarium Research Institute, which coordinated construction of the observatory. For more information, see http://www.mbari.org/news/news_releases/2008/mars-live/mars-live.html.

  15. Deep learning in jet reconstruction at CMS

    CERN Document Server

    Stoye, Markus

    2017-01-01

    Deep learning has led to several breakthroughs outside the field of high energy physics, yet in jet reconstruction for the CMS experiment at the CERN LHC it has not been used so far. This report shows results of applying deep learning strategies to jet reconstruction at the stage of identifying the original parton association of the jet (jet tagging), which is crucial for physics analyses at the LHC experiments. We introduce a custom deep neural network architecture for jet tagging. We compare the performance of this novel method with the other established approaches at CMS and show that the proposed strategy provides a significant improvement. The strategy provides the first multi-class classifier, instead of the few binary classifiers that were previously used, and thus yields more information, in a more convenient way. The performance results obtained with simulation imply a significant improvement for a large number of important physics analyses at the CMS experiment.
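
    To illustrate the shift from several binary taggers to a single multi-class classifier mentioned above, here is a minimal Keras sketch of a jet-flavour network with a softmax output over several classes. The input features, class count and layer sizes are placeholders, not the custom CMS architecture.

      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      # Placeholder jet features and five flavour classes; both the feature set
      # and the class list are illustrative, not the CMS tagger's actual inputs.
      x = np.random.rand(1024, 40).astype("float32")
      y = np.random.randint(0, 5, size=1024)

      model = keras.Sequential([
          layers.Input(shape=(40,)),
          layers.Dense(128, activation="relu"),
          layers.Dense(64, activation="relu"),
          layers.Dense(5, activation="softmax"),   # one multi-class output, not several binary taggers
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
      model.fit(x, y, epochs=2, batch_size=64, verbose=0)
      jet_class_probabilities = model.predict(x[:5], verbose=0)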

  16. Deep Learning in Medical Imaging: General Overview

    Science.gov (United States)

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae

    2017-01-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging. PMID:28670152

  17. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  18. Pathways to deep decarbonization - Interim 2014 Report

    International Nuclear Information System (INIS)

    2014-01-01

    The interim 2014 report by the Deep Decarbonization Pathways Project (DDPP), coordinated and published by IDDRI and the Sustainable Development Solutions Network (SDSN), presents preliminary findings of the pathways developed by the DDPP Country Research Teams with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C. The DDPP is a knowledge network comprising 15 Country Research Teams and several Partner Organizations who develop and share methods, assumptions, and findings related to deep decarbonization. Each DDPP Country Research Team has developed an illustrative road-map for the transition to a low-carbon economy, with the intent of taking into account national socio-economic conditions, development aspirations, infrastructure stocks, resource endowments, and other relevant factors. The interim 2014 report focuses on technically feasible pathways to deep decarbonization

  19. Excess plutonium disposition: The deep borehole option

    International Nuclear Information System (INIS)

    Ferguson, K.L.

    1994-01-01

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It is in response to a recent National Academy of Sciences (NAS) report which addressed the management of excess weapons plutonium and recommended three approaches to the ultimate disposition of excess plutonium: (1) fabrication and use as a fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, (3) burial in deep boreholes. As indicated in the NAS report, substantial effort would be required to address the broad range of issues related to deep borehole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can impact the viability of the concept, and applicable international programs. Key technical areas that would require attention should decisions be made to further develop the borehole emplacement option are identified

  20. Deep Learning in Medical Imaging: General Overview.

    Science.gov (United States)

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae; Seo, Joon Beom; Kim, Namkug

    2017-01-01

    The artificial neural network (ANN)-a machine learning technique inspired by the human neuronal synapse system-was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  1. Stable isotope geochemistry of deep sea cherts

    Energy Technology Data Exchange (ETDEWEB)

    Kolodny, Y; Epstein, S [California Inst. of Tech., Pasadena (USA). Div. of Geological Sciences

    1976-10-01

    Seventy-four samples of DSDP (Deep Sea Drilling Project) recovered cherts of Jurassic to Miocene age from varying locations, and 27 samples of on-land exposed cherts, were analyzed for the isotopic composition of their oxygen and hydrogen. These studies were accompanied by mineralogical analyses and some isotopic analyses of the coexisting carbonates. δ18O of chert ranges between 27 and 39 parts per thousand relative to SMOW; δ18O of porcellanite between 30 and 42 parts per thousand. The consistent enrichment of opal-CT in porcellanites in 18O with respect to coexisting microcrystalline quartz in chert is probably a reflection of a different temperature (depth) of diagenesis of the two phases. δ18O of deep sea cherts generally decreases with increasing age, indicating an overall cooling of the ocean bottom during the last 150 m.y. A comparison of this trend with that recorded by benthonic foraminifera (Douglas et al., Initial Reports of the Deep Sea Drilling Project, 32:509 (1975)) indicates the possibility of δ18O in deep sea cherts not being frozen in until several tens of millions of years after deposition. Cherts of any age show a spread of δ18O values, with increasing diagenesis reflected in a lowering of δ18O. Drusy quartz has the lowest δ18O values. On-land exposed cherts are consistently depleted in 18O in comparison to their deep sea time-equivalent cherts. Water extracted from deep sea cherts ranges between 0.5 and 1.4 wt%. δD of this water ranges between -78 and -95 parts per thousand and is not a function of δ18O of the cherts (or the temperature of their formation).

  2. Deep Space Detection of Oriented Ice Crystals

    Science.gov (United States)

    Marshak, A.; Varnai, T.; Kostinski, A. B.

    2017-12-01

    The deep space climate observatory (DSCOVR) spacecraft resides at the first Lagrangian point about one million miles from Earth. A polychromatic imaging camera onboard delivers nearly hourly observations of the entire sun-lit face of the Earth. Many images contain unexpected bright flashes of light over both ocean and land. We constructed a yearlong time series of flash latitudes, scattering angles and oxygen absorption to demonstrate conclusively that the flashes over land are specular reflections off tiny ice crystals floating in the air nearly horizontally. Such deep space detection of tropospheric ice can be used to constrain the likelihood of oriented crystals and their contribution to Earth albedo.

  3. A clinical study on deep neck abscess

    International Nuclear Information System (INIS)

    Ota, Yumi; Ogawa, Yoshiko; Takemura, Teiji; Sawada, Toru

    2007-01-01

    Although various effective antibiotics have been synthesized, deep neck abscess is still a serious and life-threatening infection. It is important to diagnose promptly and treat adequately, and contrast-enhanced CT is useful and indispensable for diagnosis. We reviewed our patients with deep neck abscess, analyzed the abscess locations on CT images, and discussed the treatment. Surgical drainage is a fundamental treatment for abscess, but if it exists in only one area, such as the parotid gland space, it can be cured with needle aspiration and suitable antibiotics. (author)

  4. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.

  5. Deep Belief Nets for Topic Modeling

    DEFF Research Database (Denmark)

    Maaløe, Lars; Arngren, Morten; Winther, Ole

    2015-01-01

    -formative. In this paper we describe large-scale content based collaborative filtering for digital publishing. To solve the digital publishing recommender problem we compare two approaches: latent Dirichlet allocation (LDA) and deep belief nets (DBN) that both find low-dimensional latent representations for documents....... Efficient retrieval can be carried out in the latent representation. We work both on public benchmarks and digital media content provided by Issuu, an on-line publishing platform. This article also comes with a newly developed deep belief nets toolbox for topic modeling tailored towards performance

  6. Un paseo por la Deep Web

    OpenAIRE

    Ortega Castillo, Carlos

    2018-01-01

    This document seeks to offer a technical and inclusive look at some of the interconnection technologies developed in the DeepWeb, first from a theoretical point of view and then with a brief practical introduction. Demystifying the processes that take place on the DeepWeb gives users tools to clarify and build new paradigms of society, knowledge and technology that support the responsible development of this kind of network and contribute to the growth...

  7. Deep fracturation of granitic rock mass

    International Nuclear Information System (INIS)

    Bles, J.L.; Blanchin, R.; Bonijoly, D.; Dutartre, P.; Feybesse, J.L.; Gros, Y.; Landry, J.; Martin, P.

    1986-01-01

    This documentary study, realized with the financial support of the European Communities and the CEA, aims at using available data to understand the evolution of natural fractures in granitic rocks from the surface to deep underground, for various feasibility studies dealing with radioactive waste disposal. The Mont Blanc road tunnel, the EDF Arc-Isere gallery, the Auriat deep borehole and the Pyrenean rock mass of Bassies are studied. The study analyzes in particular the relationship between small fractures and large faults, the evolution with depth of fracture density and direction, the consequences of rock decompression, and the relationship between fracturation and groundwater [fr]

  8. Gamma-rays from deep inelastic collisions

    International Nuclear Information System (INIS)

    Stephens, F.S.

    1979-01-01

    The γ-rays associated with deep inelastic collisions can give information about the magnitude and orientation of the angular momentum transferred in these events. In this review, special emphasis is placed on understanding the origin and nature of these γ-rays in order to avoid some of the ambiguities that can arise. The experimental information coming from these γ-ray studies is reviewed, and compared briefly with that obtained by other methods and also with the expectations from current models for deep inelastic collisions. 15 figures

  9. Fractal measures in a deep penetration problem

    International Nuclear Information System (INIS)

    Murthy, K.P.N.; Indira, R.; John, T.M.

    1993-01-01

    In the Monte Carlo simulation of a deep penetration problem, the parameter, say b, in the importance function must be assigned a value b' such that the variance is minimum. If b > b', the sample mean is still not reliable; the sample fluctuations would be small and misleading, though the actual fluctuations are quite large. This is because the distribution of transmission has a tail which becomes prominent when b > b'. Considering a model deep penetration problem, and employing exact enumeration techniques, it is shown that in the limit of large biasing the long-tailed distribution of the transmission is multifractal. (author). 5 refs., 3 figs
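
    As a toy companion to the biased-sampling discussion, the Python sketch below estimates an uncollided transmission probability through a slab using exponentially biased path lengths and likelihood-ratio weights. It only illustrates how the biased estimator and its sample error are built for different biasing parameters; it does not reproduce the paper's model or its multifractal analysis, and all parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      sigma, thickness, n = 1.0, 10.0, 100_000        # attenuation coefficient, slab depth, histories
      exact = np.exp(-sigma * thickness)              # uncollided transmission probability

      for b in [1.0, 0.3, 0.1, 0.02]:                 # biased (stretched) attenuation coefficients
          s = rng.exponential(scale=1.0 / b, size=n)              # biased free-path lengths
          weight = (sigma / b) * np.exp(-(sigma - b) * s)         # likelihood-ratio weights
          score = np.where(s > thickness, weight, 0.0)            # transmitted histories score their weight
          estimate = score.mean()
          sample_error = score.std(ddof=1) / np.sqrt(n)
          print(f"b={b:5.2f}  estimate={estimate:.3e}  sample error={sample_error:.1e}  exact={exact:.3e}")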

  10. La deep web : el mercado negro global

    OpenAIRE

    Gay Fernández, José

    2015-01-01

    The deep web is a hidden space of the internet where the first guarantee is anonymity. In general terms, the deep web contains everything that conventional search engines cannot locate. This guarantee serves to host a vast network of illegal services, such as drug trafficking, human trafficking, the hiring of hitmen, the buying and selling of passports and bank accounts, or child pornography, among many others. But anonymity also makes it possible for activ...

  11. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using the data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors still existed, which indicated that our approach still needs to be improved.

  12. Nuclear structure in deep-inelastic reactions

    International Nuclear Information System (INIS)

    Rehm, K.E.

    1986-01-01

    The paper concentrates on recent deep inelastic experiments conducted at Argonne National Laboratory and the nuclear structure effects evident in reactions between super heavy nuclei. Experiments indicate that these reactions evolve gradually from simple transfer processes which have been studied extensively for lighter nuclei such as 16O, suggesting a theoretical approach connecting the one-step DWBA theory to the multistep statistical models of nuclear reactions. This transition between quasi-elastic and deep inelastic reactions is achieved by a simple random walk model. Some typical examples of nuclear structure effects are shown. 24 refs., 9 figs

  13. Deep Learning For Sequential Pattern Recognition

    OpenAIRE

    Safari, Pooyan

    2013-01-01

    Project carried out within the framework of a mobility programme with the Technische Universität München (TUM). In recent years, deep learning has opened a new research line in pattern recognition tasks. It has been hypothesized that this kind of learning would capture more abstract patterns concealed in data. It is motivated by new findings both in the biological aspects of the brain and in hardware developments, which have made parallel processing possible. Deep learning methods come along with ...

  14. Environmental challenges of deep water activities

    International Nuclear Information System (INIS)

    Sande, Arvid

    1998-01-01

    This presentation discusses the experiences of the petroleum industry and the projects that have been conducted in connection with the planning and drilling of the first deep water wells in Norway. Views are also presented on where to put more effort in the years to come, so as to increase the knowledge of deep water areas. Attention is focused on exploration drilling, as this is the only activity with environmental potential that will take place during the next five years or so. The challenges for future field developments in these water depths are briefly discussed. 7 refs.

  15. DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks.

    Science.gov (United States)

    Li, Chao; Wang, Xinggang; Liu, Wenyu; Latecki, Longin Jan

    2018-04-01

    Mitotic count is a critical predictor of tumor aggressiveness in the breast cancer diagnosis. Nowadays mitosis counting is mainly performed by pathologists manually, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting the mitotic cells from histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating mitosis region when only a weak label is given (i.e., only the centroid pixel of mitosis is annotated), an elaborately designed deep detection network for localizing mitosis by using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from ICPR 2012 grand challenge merely using the deep detection network. For the ICPR 2014 MITOSIS dataset that only provides the centroid location of mitosis, we employ the segmentation model to estimate the bounding box annotation for training the deep detection network. We also apply the verification model to eliminate some false positives produced from the detection model. By fusing scores of the detection and verification models, we achieve the state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Deep Seawater Intrusion Enhanced by Geothermal Through Deep Faults in Xinzhou Geothermal Field in Guangdong, China

    Science.gov (United States)

    Lu, G.; Ou, H.; Hu, B. X.; Wang, X.

    2017-12-01

    This study investigates abnormal seawater intrusion from great depth, riding an inland-ward deep groundwater flow that is enhanced by deep faults and geothermal processes. The study site, the Xinzhou geothermal field, is 20 km from the coastline. It lies on southern China's Guangdong coast, a part of China's long coastal geothermal belt. The geothermal water is salty, which had fueled speculation that it was ancient sea water retained in place. However, the perpetual "pumping" of the self-flowing outflow of geothermal waters might alter the deep underground flow enough to favor large-scale or long-distance seawater intrusion. We studied the geochemical characteristics of the geothermal water and found it to be a mixture of seawater with rain water or pore water, with no indication of dilution involved. We also conducted numerical studies of the buoyancy-driven geothermal flow in the deep subsurface and found that, thousands of meters down, there is a hydraulic gradient favoring inland-ward groundwater flow, allowing seawater to intrude inland for an unusually long distance of tens of kilometers in a granitic groundwater flow system. This work is a first step in understanding the geo-environment of deep groundwater flow.

  17. DeepBipolar: Identifying genomic mutations for bipolar disorder via deep learning.

    Science.gov (United States)

    Laksshman, Sundaram; Bhat, Rajendra Rana; Viswanath, Vivek; Li, Xiaolin

    2017-09-01

    Bipolar disorder, also known as manic depression, is a brain disorder that affects the brain structure of a patient. It results in extreme mood swings, severe states of depression, and overexcitement simultaneously. It is estimated that roughly 3% of the population of the United States (about 5.3 million adults) suffers from bipolar disorder. Recent research efforts like the Twin studies have demonstrated a high heritability factor for the disorder, making genomics a viable alternative for detecting and treating bipolar disorder, in addition to the conventional lengthy and costly postsymptom clinical diagnosis. Motivated by this study, leveraging several emerging deep learning algorithms, we design an end-to-end deep learning architecture (called DeepBipolar) to predict bipolar disorder based on limited genomic data. DeepBipolar adopts the Deep Convolutional Neural Network (DCNN) architecture that automatically extracts features from genotype information to predict the bipolar phenotype. We participated in the Critical Assessment of Genome Interpretation (CAGI) bipolar disorder challenge and DeepBipolar was considered the most successful by the independent assessor. In this work, we thoroughly evaluate the performance of DeepBipolar and analyze the type of signals we believe could have affected the classifier in distinguishing the case samples from the control set. © 2017 Wiley Periodicals, Inc.

  18. DeepPicker: A deep learning approach for fully automated particle picking in cryo-EM.

    Science.gov (United States)

    Wang, Feng; Gong, Huichao; Liu, Gaochao; Li, Meijing; Yan, Chuangye; Xia, Tian; Li, Xueming; Zeng, Jianyang

    2016-09-01

    Particle picking is a time-consuming step in single-particle analysis and often requires significant interventions from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously-analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on the recently-published cryo-EM data of three complexes have demonstrated that our deep learning based scheme can successfully accomplish the human-level particle picking process and identify a sufficient number of particles that are comparable to those picked manually by human experts. These results indicate that DeepPicker can provide a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination. DeepPicker is released as an open-source program, which can be downloaded from https://github.com/nejyeah/DeepPicker-python. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. DeepQA: improving the estimation of single protein model quality with deep belief networks.

    Science.gov (United States)

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-12-05

    Protein quality assessment (QA) useful for ranking and selecting protein models has long been viewed as one of the major challenges for protein tertiary structure prediction. Especially, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method DeepQA based on deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves the state-of-the-art performance on CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single model quality assessment and protein structure prediction. The source code, executable, document and training/test datasets of DeepQA for Linux is freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .

  20. Deep sedation during pneumatic reduction of intussusception.

    Science.gov (United States)

    Ilivitzki, Anat; Shtark, Luda Glozman; Arish, Karin; Engel, Ahuva

    2012-05-01

    Pneumatic reduction of intussusception under fluoroscopic guidance is a routine procedure. The unsedated child may resist the procedure, which may lengthen its duration and increase the radiation dose. We use deep sedation during the procedure to overcome these difficulties. The purpose of this study was to summarize our experience with deep sedation during fluoroscopic reduction of intussusception and assess the added value and complication rate of deep sedation. All children with intussusception who underwent pneumatic reduction in our hospital between January 2004 and June 2011 were included in this retrospective study. Anesthetists sedated the children using propofol. The fluoroscopic studies, ultrasound (US) studies and the children's charts were reviewed. One hundred thirty-one attempted reductions were performed in 119 children, of which 121 (92%) were successful and 10 (8%) failed. Two perforations (1.5%) occurred during attempted reduction. Average fluoroscopic time was 1.5 minutes. No complications from sedation were recorded. Deep sedation with propofol did not add any complication to the pneumatic reduction. The fluoroscopic time was short. The success rate of reduction was high, raising the possibility that sedation is beneficial, possibly by smooth muscle relaxation.

  1. Evaluation of Deep Discount Fare Strategies

    Science.gov (United States)

    1995-08-01

    This report evaluates the success of a fare pricing strategy known as deep discounting, which entails the bulk sale of transit tickets or tokens to customers at a significant discount compared to the full fare single ticket price. This market-driven s...

  2. Parity violation in deep inelastic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Souder, P. [Syracuse Univ., NY (United States)

    1994-04-01

    A beam of polarized electrons at CEBAF with an energy of 8 GeV or more will be useful for performing precision measurements of parity violation in deep inelastic scattering. Possible applications include precision tests of the Standard Model, model-independent measurements of parton distribution functions, and studies of quark correlations.

  3. Into the depths of deep eutectic solvents

    NARCIS (Netherlands)

    Rodriguez, N.; Alves da Rocha, M.A.; Kroon, M.C.

    2015-01-01

    Ionic liquids (ILs) have been successfully tested in a wide range of applications; however, their high price and complicated synthesis make them infeasible for large-scale implementation. A decade ago, a new generation of solvents, so-called deep eutectic solvents (DESs), was reported for the first time

  4. Modern problems of deep processing of coal

    International Nuclear Information System (INIS)

    Ismagilov, Z.R.

    2013-01-01

    The present article is devoted to modern problems of deep processing of coal. The history and development of the new Institute of Coal Chemistry and Material Sciences of the Siberian Branch of the Russian Academy of Sciences were described. The aims and purposes of the new institute were discussed.

  5. Case Studies and Monitoring of Deep Excavations

    NARCIS (Netherlands)

    Korff, M.

    2017-01-01

    Several case histories from Dutch underground deep excavation projects are presented in this paper, including the lessons learned and the learning processes involved. The focus of the paper is on how the learning takes place and how it is documented. It is necessary to learn in a systematic and

  6. Performance of deep geothermal energy systems

    Science.gov (United States)

    Manikonda, Nikhil

    Geothermal energy is an important source of clean and renewable energy. This project deals with the study of deep geothermal power plants for the generation of electricity. The design involves the extraction of heat from the Earth and its conversion into electricity. This is performed by sending fluid deep into the Earth, where it is heated by the surrounding rock. The fluid vaporizes and returns to the surface in a heat pipe. Finally, the energy of the fluid is converted into electricity using a turbine or an organic Rankine cycle (ORC). The main feature of the system is the employment of side channels to increase the amount of thermal energy extracted. A finite difference computer model was developed to solve the heat transport equation. The numerical model was employed to evaluate the performance of the design. The major goal was to optimize the output power as a function of parameters such as the thermal diffusivity of the rock, the depth of the main well, and the number and length of lateral channels. The sustainable lifetime of the system for a target output power of 2 MW has been calculated for deep geothermal systems with drilling depths of 8000 and 10000 meters, and a financial analysis has been performed to evaluate the economic feasibility of the system for a practical range of geothermal parameters. Results show a promising outlook for deep geothermal systems for practical applications.
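
    To give a flavour of the finite-difference heat-transport modelling mentioned above, here is a hedged one-dimensional sketch in Python/NumPy: explicit time stepping of the conduction equation in the rock next to a well held at the fluid temperature. The geometry, material properties and boundary conditions are simplified placeholders, not the project's actual model.

      import numpy as np

      # Explicit finite-difference solution of dT/dt = alpha * d2T/dx2 in the rock
      # next to a well; a simplified stand-in for the project's heat-transport model.
      alpha = 1.0e-6                 # thermal diffusivity of rock, m^2/s (typical order of magnitude)
      k_rock = 2.5                   # thermal conductivity, W/(m K), illustrative
      length, nx = 100.0, 201        # 100 m of rock discretized with 201 nodes
      dx = length / (nx - 1)
      dt = 0.4 * dx**2 / alpha       # satisfies the explicit stability limit dt <= dx^2 / (2*alpha)

      T = np.full(nx, 200.0)         # initial rock temperature, deg C (placeholder)
      T_well = 80.0                  # circulating-fluid temperature imposed at the well wall

      for _ in range(5000):          # roughly 16 years of simulated time with these parameters
          T[0] = T_well                          # fixed-temperature boundary at the well
          T[-1] = T[-2]                          # zero-flux far-field boundary
          T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

      heat_flux_to_well = k_rock * (T[1] - T[0]) / dx     # W/m^2 conducted toward the well
      print(f"heat flux toward the well after 5000 steps: {heat_flux_to_well:.1f} W/m^2")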

  7. Fingerprint Minutiae Extraction using Deep Learning

    CSIR Research Space (South Africa)

    Darlow, Luke Nicholas

    2017-10-01

    components, such as image enhancement. We pose minutiae extraction as a machine learning problem and propose a deep neural network – MENet, for Minutiae Extraction Network – to learn a data-driven representation of minutiae points. By using the existing...

  8. Evolutionary Scheduler for the Deep Space Network

    Science.gov (United States)

    Guillaume, Alexandre; Lee, Seungwon; Wang, Yeou-Fang; Zheng, Hua; Chau, Savio; Tung, Yu-Wen; Terrile, Richard J.; Hovden, Robert

    2010-01-01

    A computer program assists human schedulers in satisfying, to the maximum extent possible, competing demands from multiple spacecraft missions for utilization of the transmitting/receiving Earth stations of NASA's Deep Space Network. The program embodies a concept of optimal scheduling to attain multiple objectives in the presence of multiple constraints.

  9. Deep Water Coral (HB1402, EK60)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The cruise will survey and collect samples of deep-sea corals and related marine life in the canyons in the northern Gulf of Maine in U.S. and Canadian waters. The...

  10. Particle Production in Deep Inelastic Muon Scattering

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, John James [MIT

    1991-01-01

    The E665 spectrometer at Fermilab measured Deep-Inelastic Scattering of 490 GeV/c muons off several targets: Hydrogen, Deuterium, and Xenon. Events were selected from the Xenon and Deuterium targets, with a range of energy exchange ...

  11. Influence functionals in deep inelastic reactions

    International Nuclear Information System (INIS)

    Avishai, Y.

    1978-01-01

    It is suggested that the concept of influence functionals introduced by Feynman and Vernon could be applied to the study of deep inelastic reactions among heavy ions if the coupling between the relative motion and the internal degrees of freedom has a separable form as suggested by Hofmann and Siemens. (Auth.)

  12. Optimizing interplanetary trajectories with deep space maneuvers

    Science.gov (United States)

    Navagh, John

    1993-09-01

    Analysis of interplanetary trajectories is a crucial area for both manned and unmanned missions of the Space Exploration Initiative. A deep space maneuver (DSM) can improve a trajectory in much the same way as a planetary swingby. However, instead of using a gravitational field to alter the trajectory, the on-board propulsion system of the spacecraft is used when the vehicle is not near a planet. The purpose is to develop an algorithm to determine where and when to use deep space maneuvers to reduce the cost of a trajectory. The approach taken to solve this problem uses primer vector theory in combination with a non-linear optimizing program to minimize Delta(V). A set of necessary conditions on the primer vector is shown to indicate whether a deep space maneuver will be beneficial. Deep space maneuvers are applied to a round trip mission to Mars to determine their effect on the launch opportunities. Other studies which were performed include cycler trajectories and Mars mission abort scenarios. It was found that the software developed was able to quickly locate DSMs which lower the total Delta(V) on these trajectories.
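
    One of the classical necessary conditions referred to above (Lawden's primer vector condition) indicates that a coast arc can be improved by an interior impulse when the primer vector magnitude exceeds one somewhere along the arc. The Python sketch below applies only that magnitude test to an already computed primer history; obtaining the primer history itself (by integrating the adjoint equations along the trajectory) and the remaining necessary conditions are not shown, and the sample history is made up.

      import numpy as np

      def dsm_indicated(primer_history, tol=1e-9):
          # primer_history: (n, 3) samples of the primer vector along a coast arc.
          # Necessary condition: an interior impulse can lower the cost only if the
          # primer magnitude exceeds 1 somewhere strictly inside the arc.
          p_mag = np.linalg.norm(primer_history, axis=1)
          return bool(np.any(p_mag[1:-1] > 1.0 + tol)), float(p_mag.max())

      # usage with a made-up primer history (a real one comes from the adjoint equations)
      t = np.linspace(0.0, 1.0, 200)
      primer = np.stack([1.05 * np.sin(np.pi * t),
                         0.20 * np.cos(np.pi * t),
                         np.zeros_like(t)], axis=1)
      needs_dsm, peak_magnitude = dsm_indicated(primer)
      print(needs_dsm, peak_magnitude)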

  13. Deep learning for studies of galaxy morphology

    Science.gov (United States)

    Tuccillo, D.; Huertas-Company, M.; Decencière, E.; Velasco-Forero, S.

    2017-06-01

    Establishing accurate morphological measurements of galaxies in a reasonable amount of time for future big-data surveys such as EUCLID, the Large Synoptic Survey Telescope or the Wide Field Infrared Survey Telescope is a challenge. Because of its high level of abstraction with little human intervention, deep learning appears to be a promising approach. Deep learning is a rapidly growing discipline that models high-level patterns in data as complex multilayered networks. In this work we test the ability of deep convolutional networks to provide parametric properties of Hubble Space Telescope-like galaxies (half-light radii, Sérsic indices, total flux, etc.). We simulate a set of galaxies, including the point spread function and realistic noise from the CANDELS survey, and try to recover the main galaxy parameters using deep learning. We compare the results with the ones obtained with the commonly used profile-fitting software GALFIT, showing that with our method we obtain results at least as good as those obtained with GALFIT but, once trained, about five hundred times faster.
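
    As a companion to the description above, here is a minimal Keras sketch of a convolutional regression network mapping galaxy image stamps to structural parameters (half-light radius, Sérsic index, total flux). The stamp size, layer sizes and synthetic training data are placeholders, not the authors' architecture or simulations.

      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      # Placeholder data: 64x64 single-band galaxy stamps and three regression targets
      # (half-light radius, Sersic index, total flux), all synthetic random numbers here.
      x = np.random.rand(256, 64, 64, 1).astype("float32")
      y = np.random.rand(256, 3).astype("float32")

      model = keras.Sequential([
          layers.Input(shape=(64, 64, 1)),
          layers.Conv2D(16, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Conv2D(32, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Flatten(),
          layers.Dense(64, activation="relu"),
          layers.Dense(3),                      # regression head for the three structural parameters
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(x, y, epochs=2, batch_size=32, verbose=0)
      predicted_parameters = model.predict(x[:5], verbose=0)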

  14. Pre-cementation of deep shaft

    Science.gov (United States)

    Heinz, W. F.

    1988-12-01

    Pre-cementation or pre-grouting of deep shafts in South Africa is an established technique to improve safety and reduce water ingress during shaft sinking. The recent completion of several pre-cementation projects for shafts deeper than 1000m has once again highlighted the effectiveness of pre-grouting of shafts utilizing deep slimline boreholes and incorporating wireline technique for drilling and conventional deep borehole grouting techniques for pre-cementation. Pre-cementation of a deep shaft will: (i) Increase the safety of the shaft sinking operation (ii) Minimize water and gas inflow during shaft sinking (iii) Minimize the time lost due to additional grouting operations during sinking of the shaft and hence minimize costly delays and standing time of shaft sinking crews and equipment. (iv) Provide detailed information on the geology of the proposed shaft site. Information on anomalies, dykes, faults as well as reef (gold bearing conglomerates) intersections can be obtained from the evaluation of cores of the pre-cementation boreholes. (v) Provide improved rock strength for excavations in the immediate vicinity of the shaft area. The paper describes pre-cementation techniques recently applied successfully from the surface and some conclusions drawn for further consideration.

  15. Should deep seabed mining be allowed?

    NARCIS (Netherlands)

    Kim, Rak

    2017-01-01

    Commercial interest in deep sea minerals in the area beyond the limits of national jurisdiction has rapidly increased in recent years. The International Seabed Authority has already given out 26 exploration contracts and it is currently in the process of developing the Mining Code for

  16. Dust Measurements Onboard the Deep Space Gateway

    Science.gov (United States)

    Horanyi, M.; Kempf, S.; Malaspina, D.; Poppe, A.; Srama, R.; Sternovsky, Z.; Szalay, J.

    2018-02-01

    A dust instrument onboard the Deep Space Gateway will revolutionize our understanding of the dust environment at 1 AU, help our understanding of the evolution of the solar system, and improve dust hazard models for the safety of crewed and robotic missions.

  17. A quantitative lubricant test for deep drawing

    DEFF Research Database (Denmark)

    Olsson, David Dam; Bay, Niels; Andreasen, Jan L.

    2010-01-01

    A tribological test for deep drawing has been developed by which the performance of lubricants may be evaluated quantitatively measuring the maximum backstroke force on the punch owing to friction between tool and workpiece surface. The forming force is found not to give useful information...

  18. Deep underground disposal facility and the public

    International Nuclear Information System (INIS)

    Sumberova, V.

    1997-01-01

    Factors arousing public anxiety in relation to the deep burial of radioactive wastes are highlighted based on Czech and foreign analyses, and guidelines are presented to minimize public opposition when planning a geologic disposal site in the Czech Republic. (P.A.)

  19. Emotional arousal and memory after deep encoding.

    Science.gov (United States)

    Leventon, Jacqueline S; Camacho, Gabriela L; Ramos Rojas, Maria D; Ruedas, Angelica

    2018-05-22

    Emotion often enhances long-term memory. One mechanism for this enhancement is heightened arousal during encoding. However, reducing arousal, via emotion regulation (ER) instructions, has not been associated with reduced memory. In fact, the opposite pattern has been observed: stronger memory for emotional stimuli encoded with an ER instruction to reduce arousal. This pattern may be due to deeper encoding required by ER instructions. In the current research, we examine the effects of emotional arousal and deep-encoding on memory across three studies. In Study 1, adult participants completed a writing task (deep-encoding) for encoding negative, neutral, and positive picture stimuli, whereby half the emotion stimuli had the ER instruction to reduce the emotion. Memory was strong across conditions, and no memory enhancement was observed for any condition. In Study 2, adult participants completed the same writing task as Study 1, as well as a shallow-encoding task for one-third of negative, neutral, and positive trials. Memory was strongest for deep vs. shallow encoding trials, with no effects of emotion or ER instruction. In Study 3, adult participants completed a shallow-encoding task for negative, neutral, and positive stimuli, with findings indicating enhanced memory for negative emotional stimuli. Findings suggest that deep encoding must be acknowledged as a source of memory enhancement when examining manipulations of emotion-related arousal. Copyright © 2018. Published by Elsevier B.V.

  20. Coherence effects in deep inelastic scattering

    International Nuclear Information System (INIS)

    Andersson, B.; Gustafson, G.; Loennblad, L.; Pettersson, U.

    1988-09-01

    We present a framework for deep inelastic scattering, with bound state properties in accordance with a QCD force field acting like a vortex line in a colour superconducting vacuum, which implies some simple coherence effects. Within this scheme one may describe the results at present energies very well, but one obtains an appreciable depletion of gluon radiation in the HERA energy regime. (authors)

  1. Regulatory issues for deep borehole plutonium disposition

    International Nuclear Information System (INIS)

    Halsey, W.G.

    1995-03-01

    As a result of recent changes throughout the world, a substantial inventory of excess separated plutonium is expected to result from dismantlement of US nuclear weapons. The safe and secure management and eventual disposition of this plutonium, and of a similar inventory in Russia, is a high priority. A variety of options (both interim and permanent) are under consideration to manage this material. The permanent solutions can be categorized into two broad groups: direct disposal and utilization. The deep borehole disposition concept involves placing excess plutonium deep into old stable rock formations with little free water present. Issues of concern include the regulatory, statutory and policy status of such a facility, the availability of sites with desirable characteristics and the technologies required for drilling deep holes, characterizing them, emplacing excess plutonium and sealing the holes. This white paper discusses the regulatory issues. Regulatory issues concerning construction, operation and decommissioning of the surface facility do not appear to be controversial, with existing regulations providing adequate coverage. It is in the areas of siting, licensing and long term environmental protection that current regulations may be inappropriate. This is because many current regulations are by intent or by default specific to waste forms, facilities or missions significantly different from deep borehole disposition of excess weapons usable fissile material. It is expected that custom regulations can be evolved in the context of this mission

  2. Deep Reflection on My Pedagogical Transformations

    Science.gov (United States)

    Suzawa, Gilbert S.

    2014-01-01

    This retrospective essay contains my reflection on the deep concept of ambiguity (uncertainty) and a concomitant epistemological theory that all of our human knowledge is ultimately self-referential in nature. This new epistemological perspective is subsequently utilized as a platform for gaining insights into my experiences in conjunction with…

  3. North Jamaican Deep Fore-Reef Sponges

    NARCIS (Netherlands)

    Lehnert, Helmut; Soest, van R.W.M.

    1996-01-01

    An unexpectedly high number of new species, revealed within only one hour of summarized bottom time, leads to the conclusion that the sponge fauna of the steep slopes of the deep fore-reef is still largely unknown. Four mixed-gas dives at depths between 70 and 90 m, performed in May and June, 1993,

  4. Top Tagging by Deep Learning Algorithm

    CERN Document Server

    Akil, Ali

    2015-01-01

    In this report I show the application of a deep learning algorithm to a Monte Carlo simulation sample to test its performance in tagging hadronic decays of boosted top quarks, and compare the results with those obtained with other algorithms.

  5. Semantic Tagging with Deep Residual Networks

    NARCIS (Netherlands)

    Bjerva, Johannes; Plank, Barbara; Bos, Johan

    2016-01-01

    We propose a novel semantic tagging task, semtagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations and includes a novel residual bypass architecture. We evaluate

  6. Priapulus from the deep sea (Vermes, Priapulida)

    NARCIS (Netherlands)

    Land, van der J.

    1972-01-01

    INTRODUCTION The species of the genus Priapulus occur in rather cold water. Hence, their shallow-water distribution is restricted to northern and southern waters (fig. 1); there are only a few isolated records from sub-tropical localities. However, in deep water the genus apparently has a world-wide

  7. Gamma-rays from deep inelastic collisions

    International Nuclear Information System (INIS)

    Stephens, F.S.

    1981-01-01

    My objective in this talk is to consider the question: 'What can be learned about deep inelastic collisions (DIC) from studying the associated gamma-rays'. First, I discuss the origin and nature of the gamma-rays from DIC, then the kinds of information gamma-ray spectra contain, and finally come to the combination of these two subjects. (orig./HSI)

  8. Deep Support Vector Machines for Regression Problems

    NARCIS (Netherlands)

    Wiering, Marco; Schutten, Marten; Millea, Adrian; Meijster, Arnold; Schomaker, Lambertus

    2013-01-01

    In this paper we describe a novel extension of the support vector machine, called the deep support vector machine (DSVM). The original SVM has a single layer with kernel functions and is therefore a shallow model. The DSVM can use an arbitrary number of layers, in which lower-level layers contain

  9. Using Cooperative Structures to Promote Deep Learning

    Science.gov (United States)

    Millis, Barbara J.

    2014-01-01

    The author explores concrete ways to help students learn more and have fun doing it while they support each other's learning. The article specifically shows the relationships between cooperative learning and deep learning. Readers will become familiar with the tenets of cooperative learning and its power to enhance learning--even more so when…

  10. Stimulating Deep Learning Using Active Learning Techniques

    Science.gov (United States)

    Yew, Tee Meng; Dawood, Fauziah K. P.; a/p S. Narayansany, Kannaki; a/p Palaniappa Manickam, M. Kamala; Jen, Leong Siok; Hoay, Kuan Chin

    2016-01-01

    When students and teachers behave in ways that reinforce learning as a spectator sport, the result can often be a classroom and overall learning environment that is mostly limited to transmission of information and rote learning rather than deep approaches towards meaningful construction and application of knowledge. A group of college instructors…

  11. Survey on deep learning for radiotherapy.

    Science.gov (United States)

    Meyer, Philippe; Noblet, Vincent; Mazzara, Christophe; Lallement, Alex

    2018-05-17

    More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published works on deep learning methods that can be applied to radiotherapy, which are classified into seven categories related to the patient workflow, and can provide some insights into potential future applications. We have attempted to make this paper accessible to both radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Pathways to deep decarbonization in India

    DEFF Research Database (Denmark)

    Shukla, P.; Dhar, Subash; Pathak, Minal

    This report is a part of the global Deep Decarbonisation Pathways (DDP) Project. The analysis considers two development scenarios for India and assesses alternative roadmaps for transitioning to a low-carbon economy consistent with the globally agreed 2°C stabilization target. The report does not conside...

  13. Pressure induced deep tissue injury explained

    NARCIS (Netherlands)

    Oomens, C.W.J.; Bader, D.L.; Loerakker, S.; Baaijens, F.P.T.

    The paper describes the current views on the cause of a sub-class of pressure ulcers known as pressure induced deep tissue injury (DTI). A multi-scale approach was adopted using model systems ranging from single cells in culture, tissue engineered muscle to animal studies with small animals. This

  14. Deep inelastic scattering near the Coulomb barrier

    International Nuclear Information System (INIS)

    Gehring, J.; Back, B.; Chan, K.

    1995-01-01

    Deep inelastic scattering was recently observed in heavy ion reactions at incident energies near and below the Coulomb barrier. Traditional models of this process are based on frictional forces and are designed to predict the features of deep inelastic processes at energies above the barrier. They cannot be applied at energies below the barrier where the nuclear overlap is small and friction is negligible. The presence of deep inelastic scattering at these energies requires a different explanation. The first observation of deep inelastic scattering near the barrier was in the systems ¹²⁴,¹¹²Sn + ⁵⁸,⁶⁴Ni by Wolfs et al. We previously extended these measurements to the system ¹³⁶Xe + ⁶⁴Ni and currently measured the system ¹²⁴Xe + ⁵⁸Ni. We obtained better statistics, better mass and energy resolution, and more complete angular coverage in the Xe + Ni measurements. The cross sections and angular distributions are similar in all of the Sn + Ni and Xe + Ni systems. The data are currently being analyzed and compared with new theoretical calculations. They will be part of the thesis of J. Gehring

  15. Mean associated multiplicities in deep inelastic processes

    International Nuclear Information System (INIS)

    Dzhaparidze, G.Sh.; Kiselev, A.V.; Petrov, V.A.

    1982-01-01

    A formula is derived for the mean hadron multiplicity in the target fragmentation range of deep inelastic scattering processes. It is shown that in the high-x region the ratio of the mean multiplicities in the current fragmentation region and in the target fragmentation region tends to unity at high energies. The mean multiplicity for the Drell-Yan process is considered

  16. Deep inelastic scattering near the Coulomb barrier

    Energy Technology Data Exchange (ETDEWEB)

    Gehring, J.; Back, B.; Chan, K. [and others

    1995-08-01

    Deep inelastic scattering was recently observed in heavy ion reactions at incident energies near and below the Coulomb barrier. Traditional models of this process are based on frictional forces and are designed to predict the features of deep inelastic processes at energies above the barrier. They cannot be applied at energies below the barrier where the nuclear overlap is small and friction is negligible. The presence of deep inelastic scattering at these energies requires a different explanation. The first observation of deep inelastic scattering near the barrier was in the systems {sup 124,112}Sn + {sup 58,64}Ni by Wolfs et al. We previously extended these measurements to the system {sup 136}Xe + {sup 64}Ni and currently measured the system {sup 124}Xe + {sup 58}Ni. We obtained better statistics, better mass and energy resolution, and more complete angular coverage in the Xe + Ni measurements. The cross sections and angular distributions are similar in all of the Sn + Ni and Xe + Ni systems. The data are currently being analyzed and compared with new theoretical calculations. They will be part of the thesis of J. Gehring.

  17. Predicting galling behaviour in deep drawing processes

    NARCIS (Netherlands)

    van der Linde, G.

    2011-01-01

    Deep drawing is a sheet metal forming process which is widely used in, for example, the automotive industry. With this process it is possible to form complex shaped parts of sheet metal and it is suitable for products that have to be produced in large numbers. The tools for this process are required

  18. Deep Belief Networks for dimensionality reduction

    NARCIS (Netherlands)

    Noulas, A.K.; Kröse, B.J.A.

    2008-01-01

    Deep Belief Networks are probabilistic generative models which are composed by multiple layers of latent stochastic variables. The top two layers have symmetric undirected connections, while the lower layers receive directed top-down connections from the layer above. The current state-of-the-art

  19. DRREP: deep ridge regressed epitope predictor.

    Science.gov (United States)

    Sher, Gene; Zhi, Degui; Zhang, Shaojie

    2017-10-03

    The ability to predict epitopes plays an enormous role in vaccine development in terms of our ability to zero in on where to do a more thorough in-vivo analysis of the protein in question. Though for the past decade there have been numerous advancements and improvements in epitope prediction, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domain of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained, string-kernel-based deep neural network tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state-of-the-art epitope predictors, including the most recently published predictors called LBtope and DMNLBE. Using area under ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state-of-the-art predictors.
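    A hedged illustration of the "analytically trained" idea mentioned above: fix a random feature layer and solve the output weights in closed form with ridge regression rather than by iterative backpropagation. The generic random projection below stands in for DRREP's string-kernel and convolutional features; all sizes and data are toy assumptions.

```python
# Closed-form (ridge-regressed) readout on top of a fixed random feature layer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                 # encoded sequence windows (toy)
y = (rng.random(500) > 0.7).astype(float)      # 1 = epitope residue (toy labels)

W_hidden = rng.normal(size=(40, 300))          # fixed random hidden layer
H = np.tanh(X @ W_hidden)                      # hidden activations

lam = 1.0                                      # ridge penalty
beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)
scores = H @ beta                              # per-residue epitope scores
```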

  20. DeepARG: a deep learning approach for predicting antibiotic resistance genes from metagenomic data.

    Science.gov (United States)

    Arango-Argoty, Gustavo; Garner, Emily; Pruden, Amy; Heath, Lenwood S; Vikesland, Peter; Zhang, Liqing

    2018-02-01

    Growing concerns about increasing rates of antibiotic resistance call for expanded and comprehensive global monitoring. Advancing methods for monitoring of environmental media (e.g., wastewater, agricultural waste, food, and water) is especially needed for identifying potential resources of novel antibiotic resistance genes (ARGs), hot spots for gene exchange, and as pathways for the spread of ARGs and human exposure. Next-generation sequencing now enables direct access and profiling of the total metagenomic DNA pool, where ARGs are typically identified or predicted based on the "best hits" of sequence searches against existing databases. Unfortunately, this approach produces a high rate of false negatives. To address such limitations, we propose here a deep learning approach, taking into account a dissimilarity matrix created using all known categories of ARGs. Two deep learning models, DeepARG-SS and DeepARG-LS, were constructed for short read sequences and full gene length sequences, respectively. Evaluation of the deep learning models over 30 antibiotic resistance categories demonstrates that the DeepARG models can predict ARGs with both high precision (> 0.97) and recall (> 0.90). The models displayed an advantage over the typical best hit approach, yielding consistently lower false negative rates and thus higher overall recall (> 0.9). As more data become available for under-represented ARG categories, the DeepARG models' performance can be expected to be further enhanced due to the nature of the underlying neural networks. Our newly developed ARG database, DeepARG-DB, encompasses ARGs predicted with a high degree of confidence and extensive manual inspection, greatly expanding current ARG repositories. The deep learning models developed here offer more accurate antimicrobial resistance annotation relative to current bioinformatics practice. DeepARG does not require strict cutoffs, which enables identification of a much broader diversity of ARGs. The
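    A minimal sketch of the general setup described above, under stated assumptions: each sequence is represented by a vector of similarity features against a set of known ARG reference sequences, and a deep classifier maps that vector to one of the 30 resistance categories. The feature dimension, layer widths, and use of Keras are illustrative; this is not the published DeepARG model.

```python
# Sketch: dense classifier over alignment-similarity features against known ARGs.
from tensorflow.keras import layers, models

n_reference_args = 4000        # assumed size of the reference feature vector
n_categories = 30              # resistance categories reported in the paper

model = models.Sequential([
    layers.Dense(512, activation="relu", input_shape=(n_reference_args,)),
    layers.Dropout(0.5),
    layers.Dense(256, activation="relu"),
    layers.Dense(n_categories, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```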

  1. DeepQA: Improving the estimation of single protein model quality with deep belief networks

    OpenAIRE

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-01-01

    Background: Protein quality assessment (QA), useful for ranking and selecting protein models, has long been viewed as one of the major challenges for protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. Results: We introduce a novel single-model quality assessment method DeepQA based on deep belie...

  2. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed deep galaxy architecture consists of 8 layers: one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...
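    A hedged sketch in the spirit of the architecture summarized above: one main convolutional layer with 96 filters followed by two fully connected layers ending in a three-way softmax over Elliptical, Spiral, and Irregular. The input size, kernel size, and dense-layer width are assumptions, not the published configuration.

```python
# Sketch of a 96-filter convolutional front end with two fully connected
# layers for 3-class galaxy morphology classification.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(96, 5, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(3, activation="softmax"),   # Elliptical, Spiral, Irregular
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```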

  3. Combining shallow and deep processing for a robust, fast, deep-linguistic dependency parser

    OpenAIRE

    Schneider, G

    2004-01-01

    This paper describes Pro3Gres, a fast, robust, broad-coverage parser that delivers deep-linguistic grammatical relation structures as output, which are closer to predicate-argument structures and more informative than pure constituency structures. The parser stays as shallow as is possible for each task, combining shallow and deep-linguistic methods by integrating chunking and by expressing the majority of long-distance dependencies in a context-free way. It combines statistical and rule-base...

  4. New optimized drill pipe size for deep-water, extended reach and ultra-deep drilling

    Energy Technology Data Exchange (ETDEWEB)

    Jellison, Michael J.; Delgado, Ivanni [Grant Prideco, Inc., Houston, TX (United States); Falcao, Jose Luiz; Sato, Ademar Takashi [PETROBRAS, Rio de Janeiro, RJ (Brazil); Moura, Carlos Amsler [Comercial Perfuradora Delba Baiana Ltda., Rio de Janeiro, RJ (Brazil)

    2004-07-01

    A new drill pipe size, 5-7/8 in. OD, represents enabling technology for Extended Reach Drilling (ERD), deep water and other deep well applications. Most world-class ERD and deep water wells have traditionally been drilled with 5-1/2 in. drill pipe or a combination of 6-5/8 in. and 5-1/2 in. drill pipe. The hydraulic performance of 5-1/2 in. drill pipe can be a major limitation in substantial ERD and deep water wells resulting in poor cuttings removal, slower penetration rates, diminished control over well trajectory and more tendency for drill pipe sticking. The 5-7/8 in. drill pipe provides a significant improvement in hydraulic efficiency compared to 5-1/2 in. drill pipe and does not suffer from the disadvantages associated with use of 6-5/8 in. drill pipe. It represents a drill pipe assembly that is optimized dimensionally and on a performance basis for casing and bit programs that are commonly used for ERD, deep water and ultra-deep wells. The paper discusses the engineering philosophy behind 5-7/8 in. drill pipe, the design challenges associated with development of the product and reviews the features and capabilities of the second-generation double-shoulder connection. The paper provides drilling case history information on significant projects where the pipe has been used and details results achieved with the pipe. (author)

  5. Deep-Sea Corals: A New Oceanic Archive

    National Research Council Canada - National Science Library

    Adkins, Jess

    1998-01-01

    Deep-sea corals are an extraordinary new archive of deep ocean behavior. The species Desmophyllum cristagalli is a solitary coral composed of uranium rich, density banded aragonite that I have calibrated for several paleoclimate tracers...

  6. The development of deep learning in synthetic aperture radar imagery

    CSIR Research Space (South Africa)

    Schwegmann, Colin P

    2017-05-01

    Full Text Available sensing techniques but comes at the price of additional complexities. To adequately cope with these, researchers have begun to employ advanced machine learning techniques known as deep learning to Synthetic Aperture Radar data. Deep learning represents...

  7. Minimally invasive trans-portal resection of deep intracranial lesions.

    NARCIS (Netherlands)

    Raza, S.M.; Recinos, P.F.; Avendano, J.; Adams, H.; Jallo, G.I.; Quinones-Hinojosa, A.

    2011-01-01

    BACKGROUND: The surgical management of deep intra-axial lesions still requires microsurgical approaches that utilize retraction of deep white matter to obtain adequate visualization. We report our experience with a new tubular retractor system, designed specifically for intracranial applications,

  8. Extreme Longevity in Proteinaceous Deep-Sea Corals

    Energy Technology Data Exchange (ETDEWEB)

    Roark, E B; Guilderson, T P; Dunbar, R B; Fallon, S J; Mucciarone, D A

    2009-02-09

    Deep-sea corals are found on hard substrates on seamounts and continental margins world-wide at depths of 300 to {approx}3000 meters. Deep-sea coral communities are hotspots of deep ocean biomass and biodiversity, providing critical habitat for fish and invertebrates. Newly applied radiocarbon age data from the deep water proteinaceous corals Gerardia sp. and Leiopathes glaberrima show that radial growth rates are as low as 4 to 35 {micro}m yr{sup -1} and that individual colony longevities are on the order of thousands of years. The management and conservation of deep sea coral communities is challenged by their commercial harvest for the jewelry trade and damage caused by deep water fishing practices. In light of their unusual longevity, a better understanding of deep sea coral ecology and their interrelationships with associated benthic communities is needed to inform coherent international conservation strategies for these important deep-sea ecosystems.

  9. Assessing Deep Sea Communities Through Seabed Imagery

    Science.gov (United States)

    Matkin, A. G.; Cross, K.; Milititsky, M.

    2016-02-01

    The deep sea still remains virtually unexplored. Human activity, such as oil and gas exploration and deep sea mining, is expanding further into the deep sea, increasing the need to survey and map extensive areas of this habitat in order to assess ecosystem health and value. The technology needed to explore this remote environment has been advancing. Seabed imagery can cover extensive areas of the seafloor and investigate areas where sampling with traditional coring methodologies is just not possible (e.g. cold water coral reefs). Remotely operated vehicles (ROVs) are an expensive option, so drop or towed camera systems can provide a more viable and affordable alternative, while still allowing for real-time control. Assessment of seabed imagery in terms of presence, abundance and density of particular species can be conducted by bringing together a variety of analytical tools for a holistic approach. Sixteen deep sea transects located offshore West Africa were investigated with a towed digital video telemetry system (DTS). Both digital stills and video footage were acquired. An extensive data set was obtained from over 13,000 usable photographs, allowing for characterisation of the different habitats present in terms of community composition and abundance. All observed fauna were identified to the lowest taxonomic level and enumerated when possible, with densities derived after the seabed area was calculated for each suitable photograph. This methodology allowed for consistent assessment of the different habitat types present, overcoming constraints such as taxa that cannot be individually enumerated (e.g. sponges, corals or bryozoans), the presence of mobile and sessile species, or the level of taxonomic detail. Although this methodology will not enable a full characterisation of a deep sea community, in terms of species composition for instance, it will allow a robust assessment of large areas of the deep sea in terms of sensitive habitats present and community

  10. Microbiological characterization of deep geological compartments

    International Nuclear Information System (INIS)

    Barsotti, V.; Sergeant, C.; Vesvres, M.H.; Coulon, S.; Joulian, C.; Garrido, F.; Ollivier, B.

    2012-01-01

    Document available in extended abstract form only. Microbial life in deep sediments and Earth's crust is now acknowledged by the scientific world. The deep subsurface biosphere contributes significantly to fundamental biogeochemical processes. However, despite great advances in geo-microbiological studies, deep terrestrial ecosystems are microbiologically poorly understood, mainly due to their inaccessibility. The drilling down to the base of the Triassic (1980 meters deep) in the geological formations of the eastern Paris Basin, performed by ANDRA (EST433) in 2008, provides a good opportunity to explore the deep biosphere. We conditioned the samples on the coring site, under conditions as aseptic as possible. In addition to storage at atmospheric pressure, a portion of the four Triassic samples was placed in a 190 bars pressurized chamber to investigate the influence of the conservation pressure factor on the recovered microflora. In parallel, in order to evaluate a potential bacterial contamination of the cores by the drilling fluids, samples of mud taken just before each sample drilling were analyzed. The microbial exploration can be divided into two parts: - A cultural approach in different culture media for metabolic groups such as methanogens, fermenters and sulphate-reducing bacteria, to stimulate their growth and to isolate microbial cells that are still viable. - A molecular approach by direct extraction of genomic DNA from the geological samples to explore a larger biodiversity. The main limitation here is the difficulty of extracting DNA from these low-biomass rocks. After comparison and optimization of several DNA extraction methods, the bacterial diversity present in rock cores was analyzed using DGGE (Denaturing Gradient Gel Electrophoresis) and cloning. The detailed results of all these investigations will be presented: - Despite the 400 culture conditions tested (various media, salinities, temperatures, conservation pressures, agitation), no viable and

  11. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    Deep learning approaches have been widely applied to sequence modeling problems. In automatic speech recognition (ASR), performance has been significantly improved by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied successfully in ASR. Given the resulting problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  12. How to study deep roots - and why it matters

    OpenAIRE

    Maeght, Jean-Luc; Rewald, B.; Pierret, Alain

    2013-01-01

    The drivers underlying the development of deep root systems, whether genetic or environmental, are poorly understood but evidence has accumulated that deep rooting could be a more widespread and important trait among plants than commonly anticipated from their share of root biomass. Even though a distinct classification of "deep roots" is missing to date, deep roots provide important functions for individual plants such as nutrient and water uptake but can also shape plant communities by hydr...

  13. Benchmarking State-of-the-Art Deep Learning Software Tools

    OpenAIRE

    Shi, Shaohuai; Wang, Qiang; Xu, Pengfei; Chu, Xiaowen

    2016-01-01

    Deep learning has been shown as a successful machine learning method for a variety of tasks, and its popularity results in numerous open-source deep learning software tools. Training a deep network is usually a very time-consuming process. To address the computational challenge in deep learning, many tools exploit hardware features such as multi-core CPUs and many-core GPUs to shorten the training time. However, different tools exhibit different features and running performance when training ...

  14. Deep Complementary Bottleneck Features for Visual Speech Recognition

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Deep bottleneck features (DBNFs) have been used successfully in the past for acoustic speech recognition from audio. However, research on extracting DBNFs for visual speech recognition is very limited. In this work, we present an approach to extract deep bottleneck visual features based on deep

  15. Automatic Segmentation and Deep Learning of Bird Sounds

    NARCIS (Netherlands)

    Koops, Hendrik Vincent; Van Balen, J.M.H.; Wiering, F.

    2015-01-01

    We present a study on automatic birdsong recognition with deep neural networks using the BIRDCLEF2014 dataset. Through deep learning, feature hierarchies are learned that represent the data on several levels of abstraction. Deep learning has been applied with success to problems in fields such as

  16. An Ensemble of Deep Support Vector Machines for Image Categorization

    NARCIS (Netherlands)

    Abdullah, Azizi; Veltkamp, Remco C.; Wiering, Marco

    2009-01-01

    This paper presents the deep support vector machine (D-SVM) inspired by the increasing popularity of deep belief networks for image recognition. Our deep SVM trains an SVM in the standard way and then uses the kernel activations of support vectors as inputs for training another SVM at the next
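    An illustrative sketch (not the authors' code) of the basic D-SVM idea: train one SVM, take the RBF kernel activations between each sample and that SVM's support vectors as a new representation, and train a second SVM on top of it. The dataset and hyperparameters are toy assumptions.

```python
# Two-layer "deep SVM" sketch: kernel activations of the first SVM's support
# vectors become the input features of a second SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
gamma = 1.0 / (X.shape[1] * X.var())           # same default as gamma="scale"

svm1 = SVC(kernel="rbf", gamma=gamma).fit(X, y)
phi = rbf_kernel(X, svm1.support_vectors_, gamma=gamma)   # learned features

svm2 = SVC(kernel="rbf", gamma=gamma).fit(phi, y)
print("second-layer training accuracy:", svm2.score(phi, y))
```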

  17. Deep brain stimulation as a functional scalpel.

    Science.gov (United States)

    Broggi, G; Franzini, A; Tringali, G; Ferroli, P; Marras, C; Romito, L; Maccagnano, E

    2006-01-01

    Since 1995, at the Istituto Nazionale Neurologico "Carlo Besta" in Milan (INNCB), 401 deep brain electrodes were implanted to treat several drug-resistant neurological syndromes. More than 200 patients are still available for follow-up and therapeutic considerations. In this paper our experience is reviewed and pioneered fields are highlighted. The reported series of patients extends the use of deep brain stimulation beyond the field of Parkinson's disease to new fields such as cluster headache, disruptive behaviour, SUNCT, epilepsy and tardive dystonia. The low complication rate, the reversibility of the procedure and the available image guided surgery tools will further increase the therapeutic applications of DBS. New therapeutic applications are expected for this functional scalpel.

  18. Preliminary results from NOAMP deep drifting floats

    International Nuclear Information System (INIS)

    Ollitrault, M.

    1989-01-01

    This paper is a very brief and preliminary outline of first results obtained with deep SOFAR floats in the NOAMP area. The work is now going toward more precise statistical estimations of mean and variable currents, together with better tracking to resolve submesoscales and estimate diffusivities due to mesoscale and smaller scale motions. However, the preliminary results confirm that the NOAMP region (and surroundings) has a deep mesoscale eddy field that is considerably more energetic than the mean field (r.m.s. velocities are of order 5 cm s⁻¹), although both values are diminished compared to the western basin. A data report containing trajectories and statistics is scheduled to be published by IFREMER in the near future. The project's main task is in particular to study the dispersion of radioactive substances

  19. Ion transport in deep-sea sediments

    International Nuclear Information System (INIS)

    Heath, G.R.

    1979-01-01

    The initial assessment of the ability of deep-sea clays to contain nuclear waste is optimistic. Yet the investigators have no delusions about the complexity of the natural geochemical system and the perturbations that may result from emplacement of thermally hot waste canisters. Even though the exact nature of all these perturbations may never be predicted, containment of the nuclides by the waste form/canister system until most of the heat has decayed, together with burial of the waste deep enough that the altered zone can be treated as a black-box source of dissolved nuclides to the enclosing unperturbed sediment, encourages the investigators to believe that ion migration in the deep seabed can be modeled accurately and that their preliminary estimates of migration rates are likely to be reasonably realistic.

  20. Deep learning for automated drivetrain fault detection

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2018-01-01

    A novel data-driven deep-learning system for large-scale wind turbine drivetrain monitoring applications is presented. It uses convolutional neural network processing on complex vibration signal inputs. The system is demonstrated to learn successfully from the actions of human diagnostic experts...... the fleet-wide diagnostic model performance. The analysis also explores the time dependence of the diagnostic performance, providing a detailed view of the timeliness and accuracy of the diagnostic outputs across the different architectures. Deep architectures are shown to outperform the human analyst...... as well as shallow-learning architectures, and the results demonstrate that when applied in a large-scale monitoring system, machine intelligence is now able to handle some of the most challenging diagnostic tasks related to wind turbines....

  1. Jet-images — deep learning edition

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Luke de [Institute for Computational and Mathematical Engineering, Stanford University,Huang Building 475 Via Ortega, Stanford, CA 94305 (United States); Kagan, Michael [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States); Mackey, Lester [Department of Statistics, Stanford University,390 Serra Mall, Stanford, CA 94305 (United States); Nachman, Benjamin; Schwartzman, Ariel [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2016-07-13

    Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.
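    As a rough, hedged sketch of the approach described above, the snippet below defines a small convolutional classifier over jet images (2D grids of calorimeter energy deposits) that outputs the probability that a jet originates from a boosted W boson. The 25x25 image size and layer widths are illustrative assumptions, not the network used in the paper.

```python
# Sketch: binary CNN jet tagger on calorimeter-image inputs.
import tensorflow as tf
from tensorflow.keras import layers, models

jet_tagger = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(25, 25, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(jet is a boosted W)
])
jet_tagger.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=[tf.keras.metrics.AUC()])
```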

  2. JGR special issue on Deep Earthquakes

    Science.gov (United States)

    The editor and associate editors of the Journal of Geophysical Research—Solid Earth and Planets invite the submission of manuscripts for a special issue on the topic “Deep- and Intermediate-Focus Earthquakes, Phase Transitions, and the Mechanics of Deep Subduction.”Manuscripts should be submitted to JGR Editor Gerald Schubert (Department of Earth and Space Sciences, University of California, Los Angeles, Los Angeles, CA 90024) before July 1, 1986, in accordance with the usual rules for manuscript submission. Submitted papers will undergo the normal JGR review procedure. For more information, contact either Schubert or the special guest associate editor, Cliff Frohlich (Institute for Geophysics, University of Texas at Austin, 4920 North IH-35, Austin, TX 78751; telephone: 512-451-6223).

  3. Ensemble Network Architecture for Deep Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Xi-liang Chen

    2018-01-01

    Full Text Available The popular deep Q-learning algorithm is known to be unstable because of Q-value oscillation and overestimation of action values under certain conditions. These issues tend to adversely affect performance. In this paper, we develop an ensemble network architecture for deep reinforcement learning based on value function approximation. The temporal ensemble stabilizes the training process by reducing the variance of the target approximation error, and the ensemble of target values reduces overestimation and improves performance by estimating more accurate Q-values. Our results show that this architecture leads to statistically significantly better value estimates and more stable and better performance on several classical control tasks in the OpenAI Gym environment.
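    A minimal sketch of the target-value ensemble idea, under stated assumptions: the bootstrapped target is averaged over several target networks (for example, periodic snapshots of the online network) to damp overestimation. The toy "networks" below are random linear maps standing in for real Q-networks.

```python
# Ensemble-averaged TD target for deep Q-learning (illustrative only).
import numpy as np

def ensemble_td_target(reward, next_state, done, target_nets, gamma=0.99):
    """Average the max-Q bootstrap over an ensemble of target networks.

    target_nets: callables mapping a state to a vector of action values.
    """
    q_next = np.stack([net(next_state) for net in target_nets])  # (K, n_actions)
    bootstrap = q_next.max(axis=1).mean()        # ensemble-averaged max Q
    return reward + gamma * (1.0 - float(done)) * bootstrap

# Toy usage with random linear "Q-networks".
rng = np.random.default_rng(0)
fake_nets = [lambda s, w=rng.normal(size=(4, 2)): s @ w for _ in range(5)]
state = rng.normal(size=4)
print(ensemble_td_target(reward=1.0, next_state=state, done=False,
                         target_nets=fake_nets))
```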

  4. DCMDN: Deep Convolutional Mixture Density Network

    Science.gov (United States)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
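    A minimal illustration (not the DCMDN code) of the mixture-density output described above: the network predicts, per object, the weights, means, and widths of a one-dimensional Gaussian mixture over redshift and is trained by minimizing the negative log-likelihood of the true redshift under that mixture. The snippet below just evaluates that loss for toy values.

```python
# Negative log-likelihood of redshifts under per-object 1-D Gaussian mixtures.
import numpy as np

def gmm_negative_log_likelihood(z_true, weights, means, sigmas):
    """z_true: (N,); weights, means, sigmas: (N, K); weights sum to 1 per row."""
    z = z_true[:, None]
    comp = (weights * np.exp(-0.5 * ((z - means) / sigmas) ** 2)
            / (sigmas * np.sqrt(2.0 * np.pi)))
    return -np.mean(np.log(comp.sum(axis=1) + 1e-12))

# Toy check: two objects, two mixture components each.
z = np.array([0.3, 0.7])
w = np.array([[0.6, 0.4], [0.5, 0.5]])
mu = np.array([[0.25, 0.8], [0.6, 1.1]])
sig = np.array([[0.05, 0.1], [0.08, 0.2]])
print(gmm_negative_log_likelihood(z, w, mu, sig))
```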

  5. An Introduction to Deep Learning with Keras

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Keras is a modular, powerful and intuitive open-source Deep Learning library built on Theano and TensorFlow. Thanks to its minimalist, user-friendly interface, it has become one of the most popular packages to build, train and test neural networks, for both beginners and experts. In this tutorial, we will start with an introduction of the basics of neural networks and will work through fully functioning examples, with an eye towards deployment strategies within the context of CERN. Fundamental steps and parameters in Deep Learning will be presented both from a conceptual and a practical standpoint, by looking at the way Keras implements them and exposes them to the users. Please visit the [Keras website][1] prior to the workshop for installation instructions and feel free to reach out for any issue. [1]: http://keras.io/#installation
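    A self-contained example of the kind of workflow such a tutorial typically walks through: build, compile, train, and evaluate a small dense network on toy data with Keras. The tensorflow.keras import, layer sizes, and data are assumptions for illustration, not material from the workshop itself.

```python
# Minimal Keras workflow: define, compile, fit, evaluate.
import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")       # toy binary labels

model = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("training accuracy:", model.evaluate(X, y, verbose=0)[1])
```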

  6. Deep space test bed for radiation studies

    International Nuclear Information System (INIS)

    Adams, James H.; Adcock, Leonard; Apple, Jeffery; Christl, Mark; Cleveland, William; Cox, Mark; Dietz, Kurt; Ferguson, Cynthia; Fountain, Walt; Ghita, Bogdan; Kuznetsov, Evgeny; Milton, Martha; Myers, Jeremy; O'Brien, Sue; Seaquist, Jim; Smith, Edward A.; Smith, Guy; Warden, Lance; Watts, John

    2007-01-01

    The Deep Space Test-Bed (DSTB) Facility is designed to investigate the effects of galactic cosmic rays on crews and systems during missions to the Moon or Mars. To gain access to the interplanetary ionizing radiation environment the DSTB uses high-altitude polar balloon flights. The DSTB provides a platform for measurements to validate the radiation transport codes that are used by NASA to calculate the radiation environment within crewed space systems. It is also designed to support other exploration related investigations such as measuring the shielding effectiveness of candidate spacecraft and habitat materials, testing new radiation monitoring instrumentation, flight avionics and investigating the biological effects of deep space radiation. We describe the work completed thus far in the development of the DSTB and its current status

  7. Nanostructured Deep Carbon: A Wealth of Possibilities

    Science.gov (United States)

    Navrotsky, A.

    2012-12-01

    The materials science community has been investigating novel forms of carbon including C60 buckyballs, nanodiamond, graphene, carbon "onion" structures with a mixture of sp2 and sp3 bonding , and multicomponent nanostructured Si-O-C-N polymer derived ceramics. Though such materials are generally viewed as metastable, recently measured energetics of several materials suggest that this may not always be the case in multicomponent systems. Finely disseminated carbon phases, including nanodiamonds, have been found in rocks from a variety of deep earth settings. The question then is whether some of the more exotic forms of carbon can also exist in the deep earth or other planetary interiors. This presentation discusses thermodynamic constraints related to surface and interface energies, nanodomain structures, and compositional effects on the possible existence of complex carbon, carbide and oxycarbide nanomaterials at high pressure.

  8. Randomized Clinical Trials on Deep Carious Lesions

    DEFF Research Database (Denmark)

    Bjørndal, Lars; Fransson, Helena; Bruun, Gitte

    2017-01-01

    nonselective carious removal to hard dentin with or without pulp exposure. The aim of this article was to report the 5-y outcome on these previously treated patients having radiographically well-defined carious lesions extending into the pulpal quarter of the dentin but with a well-defined radiodense zone...... pulp exposures per se were included as failures. Pulp exposure rate was significantly lower in the stepwise carious removal group (21.2% vs. 35.5%; P = 0.014). Irrespective of pulp exposure status, the difference (13.3%) was still significant when sustained pulp vitality without apical radiolucency......) in deep carious lesions in adults. In conclusion, the stepwise carious removal group had a significantly higher proportion of pulps with sustained vitality without apical radiolucency versus nonselective carious removal of deep carious lesions in adult teeth at 5-y follow-up (ClinicalTrials.gov NCT...

  9. Gas Classification Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNN in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks (each block consisting of six layers), a pooling layer, and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP). PMID:29316723
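    A hedged sketch loosely following the description above: a stack of convolutional blocks followed by pooling and a fully connected softmax over gas classes, applied to multi-channel electronic-nose time series. The block internals, input shape, and number of classes are simplified assumptions, not the published GasNet configuration.

```python
# Simplified GasNet-style model: six convolutional blocks, global pooling,
# and a softmax classifier over gas classes (all sizes are assumptions).
from tensorflow.keras import layers, models

def conv_block(x, filters):
    for _ in range(2):                           # simplified block internals
        x = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling1D()(x)

inputs = layers.Input(shape=(128, 8))            # 128 samples x 8 sensor channels
x = inputs
for f in (16, 32, 64, 64, 128, 128):             # six blocks
    x = conv_block(x, f)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(6, activation="softmax")(x)   # assumed number of gases

gasnet_like = models.Model(inputs, outputs)
gasnet_like.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
```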

  10. Recanalization after acute deep vein thrombosis

    Directory of Open Access Journals (Sweden)

    Gustavo Mucoucah Sampaio Brandao

    2013-12-01

    Full Text Available The process of recanalization of the veins of the lower limbs after an episode of acute deep venous thrombosis is part of the natural evolution of the remodeling of the venous thrombus in patients on anticoagulation with heparin and vitamin K inhibitors. This remodeling involves the complex process of adhesion of thrombus to the wall of the vein, the inflammatory response of the vessel wall leading to organization and subsequent contraction of the thrombus, neovascularization and spontaneous lysis of areas within the thrombus. The occurrence of spontaneous arterial flow in recanalized thrombosed veins has been described as secondary to neovascularization and is characterized by the development of flow patterns characteristic of arteriovenous fistulae that can be identified by color duplex scanning. In this review, we discuss some controversial aspects of the natural history of deep vein thrombosis to provide a better understanding of its course and its impact on venous disease.

  11. Learning with hierarchical-deep models.

    Science.gov (United States)

    Salakhutdinov, Ruslan; Tenenbaum, Joshua B; Torralba, Antonio

    2013-08-01

    We introduce HD (or “Hierarchical-Deep”) models, a new compositional learning architecture that integrates deep learning models with structured hierarchical Bayesian (HB) models. Specifically, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a deep Boltzmann machine (DBM). This compound HDP-DBM model learns to learn novel concepts from very few training examples by learning low-level generic features, high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of different kinds of concepts. We present efficient learning and inference algorithms for the HDP-DBM model and show that it is able to learn new concepts from very few examples on CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets.

  12. Gas Classification Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNN in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks (each block consisting of six layers), a pooling layer, and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP).

  13. Computational ghost imaging using deep learning

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi

    2018-04-01

    Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
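    For context, the snippet below sketches the conventional correlation-based CGI reconstruction whose noisy output a denoising network would then be trained to clean up. The object, pattern count, and image size are toy assumptions.

```python
# Correlation-based computational ghost imaging reconstruction (toy example).
import numpy as np

rng = np.random.default_rng(1)
H = W = 32
n_patterns = 2000

obj = np.zeros((H, W))
obj[8:24, 12:20] = 1.0                           # simple rectangular object

patterns = rng.random((n_patterns, H, W))        # known random illumination
bucket = (patterns * obj).sum(axis=(1, 2))       # single-pixel measurements

# Reconstruction: correlate fluctuations of the bucket signal with the patterns.
recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
recon = (recon - recon.min()) / (recon.max() - recon.min())
# 'recon' is the noisy CGI image a denoising network would take as input.
```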

  14. Jet-images — deep learning edition

    International Nuclear Information System (INIS)

    Oliveira, Luke de; Kagan, Michael; Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel

    2016-01-01

    Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.

  15. On deep inelastic lepton-nuclear interactions

    International Nuclear Information System (INIS)

    Garsevanishvili, V.R.; Darbaidze, Ya.Z.; Menteshashvili, Z.R.; Ehsakiya, Sh.M.

    1981-01-01

    The problem of building a relativistic theory of nuclear reactions by employing relativistic methods developed in elementary particle theory has become quite topical. The paper presents some results of investigations into deep inelastic lepton-nuclear processes lA → l'(A-1)x, with the spectator nucleus-fragment in the final state. To describe the reactions lA → l'(A-1)x (where l is an electron, muon, neutrino, or antineutrino), use is made of the self-similarity principle and the multiparticle quasipotential formalism in ''light front'' variables. Expressions are obtained for the differential cross-sections of lepton-nuclear processes and for the structure functions of deep inelastic scattering of neutrinos (antineutrinos) and charged leptons by nuclei

  16. DAPs: Deep Action Proposals for Action Understanding

    KAUST Repository

    Escorcia, Victor

    2016-09-17

    Object proposals have contributed significantly to recent advances in object understanding in images. Inspired by the success of this approach, we introduce Deep Action Proposals (DAPs), an effective and efficient algorithm for generating temporal action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve from untrimmed videos temporal segments, which are likely to contain actions. A comprehensive evaluation indicates that our approach outperforms previous work on a large scale action benchmark, runs at 134 FPS making it practical for large-scale scenarios, and exhibits an appealing ability to generalize, i.e. to retrieve good quality temporal proposals of actions unseen in training.

  17. pathways to deep decarbonization - 2014 report

    International Nuclear Information System (INIS)

    Sachs, Jeffrey; Guerin, Emmanuel; Mas, Carl; Schmidt-Traub, Guido; Tubiana, Laurence; Waisman, Henri; Colombier, Michel; Bulger, Claire; Sulakshana, Elana; Zhang, Kathy; Barthelemy, Pierre; Spinazze, Lena; Pharabod, Ivan

    2014-09-01

    The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (deg. C). Achieving the 2 deg. C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. This will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization.' Successfully transitioning to a low-carbon economy will require unprecedented global cooperation, including a global cooperative effort to accelerate the development and diffusion of some key low carbon technologies. As underscored throughout this report, the results of the DDPP analyses remain preliminary and incomplete. The DDPP proceeds in two phases. This 2014 report describes the DDPP's approach to deep decarbonization at the country level and presents preliminary findings on technically feasible pathways to deep decarbonization, utilizing technology assumptions and timelines provided by the DDPP Secretariat. At this stage we have not yet considered the economic and social costs and benefits of deep decarbonization, which will be the topic for the next report. The DDPP is issuing this 2014 report to the UN Secretary-General Ban Ki-moon in support of the Climate Leaders' Summit at the United Nations on September 23, 2014. This 2014 report by the Deep Decarbonization Pathways Project (DDPP) summarizes preliminary findings of the technical pathways developed by the DDPP Country Research Partners with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C., without, at this stage, consideration of economic and social costs and benefits. The DDPP is a knowledge

  18. Deep soft tissue leiomyoma of the thigh

    International Nuclear Information System (INIS)

    Watson, G.M.T.; Saifuddin, A.; Sandison, A.

    1999-01-01

    A case of ossified leiomyoma of the deep soft tissues of the left thigh is presented. The radiographic appearance suggested a low-grade chondrosarcoma. MRI of the lesion showed signal characteristics similar to muscle on both T1- and T2-weighted spin echo sequences with linear areas of high signal intensity on T1-weighted images consistent with medullary fat in metaplastic bone. Histopathological examination of the resected specimen revealed a benign ossified soft tissue leiomyoma. (orig.)

  19. Blind source deconvolution for deep Earth seismology

    Science.gov (United States)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses and permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component, with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications and waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
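
    The deconvolution step described above can be illustrated with a minimal Total Variation (TV) regularized deconvolution in Python: gradient descent on ||s * x - d||² plus a smoothed TV penalty, assuming the source wavelet s is known. The wavelet, spike positions, step size and regularization weight are illustrative choices, not values from the study.

    # Minimal sketch (assumptions: known source wavelet `s`, smoothed TV penalty,
    # plain gradient descent), illustrating TV-regularized deconvolution.
    import numpy as np

    def tv_deconvolve(d, s, lam=0.05, step=1e-3, n_iter=2000, eps=1e-6):
        """Recover an impulsive series x such that s * x ~ d (full convolution)."""
        n = len(d) - len(s) + 1
        x = np.zeros(n)
        for _ in range(n_iter):
            r = np.convolve(x, s, mode="full") - d          # data residual
            g_data = np.correlate(r, s, mode="valid")       # adjoint of convolution
            diffs = np.diff(x)
            w = diffs / np.sqrt(diffs**2 + eps)              # smoothed |.| derivative
            g_tv = np.zeros_like(x)
            g_tv[:-1] -= w
            g_tv[1:] += w
            x -= step * (g_data + lam * g_tv)
        return x

    # Synthetic test: two impulsive arrivals blurred by a Gaussian-like source.
    s = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)
    x_true = np.zeros(200); x_true[60] = 1.0; x_true[120] = -0.6
    d = np.convolve(x_true, s, mode="full") + 0.01 * np.random.randn(220)
    x_est = tv_deconvolve(d, s)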

  20. Tephrostratigraphy of the DEEP site record, Lake Ohrid

    Science.gov (United States)

    Leicher, N.; Zanchetta, G.; Sulpizio, R.; Giaccio, B.; Wagner, B.; Francke, A.

    2016-12-01

    In the central Mediterranean region, tephrostratigraphy has proven to be a suitable and powerful tool for dating and correlating marine and terrestrial records. However, for the period older than 200 ka, the tephrostratigraphy is incomplete and restricted to some Italian continental basins (e.g. Sulmona, Acerno, Mercure), and continuous records downwind of the Italian volcanoes are rare. Lake Ohrid (Macedonia/Albania) in the eastern Mediterranean region meets this requirement and is assumed to be the oldest continuously existing lake of Europe. A continuous record (DEEP) was recovered within the scope of the ICDP deep-drilling campaign SCOPSCO (Scientific Collaboration on Past Speciation Conditions in Lake Ohrid). In the uppermost 450 meters of the record, covering more than 1.2 Myrs of Italian volcanism, 54 tephra layers were identified during core opening and description. A first tephrostratigraphic record was established for the uppermost 248 m (~637 ka). Major element analyses (EDS/WDS) were carried out on juvenile glass fragments, and 15 out of 35 tephra layers have been identified and correlated with known and dated eruptions of Italian volcanoes. Existing 40Ar/39Ar ages were re-calculated using the same flux standard and used as first-order tie points to develop a robust chronology for the DEEP site succession. Between 248 and 450 m of the DEEP site record, another 19 tephra horizons were identified and are the subject of ongoing work. These deposits, once correlated with known and dated tephra, will hopefully enable dating this part of the succession, likely supported by major paleomagnetic events, such as the Brunhes-Matuyama boundary, or the Cobb-Mountain or Jaramillo excursions. This makes the Lake Ohrid record a unique continuous, distal record of Italian volcanic activity, which is a candidate to become the template for central Mediterranean tephrostratigraphy, especially for the hitherto poorly known and explored lower Middle Pleistocene period.

  1. Uncertain Photometric Redshifts with Deep Learning Methods

    Science.gov (United States)

    D'Isanto, A.

    2017-06-01

    The need for accurate photometric redshift estimation is of fundamental importance in astronomy, since it allows redshift information to be obtained efficiently without the need for spectroscopic analysis. We propose a method for determining accurate multi-modal photo-z probability density functions (PDFs) using Mixture Density Networks (MDN) and Deep Convolutional Networks (DCN). A comparison with a Random Forest (RF) is performed.
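
    A minimal sketch of the mixture-density-network idea follows: a small network maps photometric features to the weights, means and widths of a Gaussian mixture over redshift, trained by minimizing the negative log-likelihood of known spectroscopic redshifts. The feature count, component count and layer sizes are assumptions for illustration only.

    # Sketch of a mixture density network (MDN) for photometric redshift PDFs;
    # not the architecture used in the cited work.
    import torch
    import torch.nn as nn

    class PhotoZMDN(nn.Module):
        def __init__(self, n_features=5, n_components=3, hidden=64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
            self.pi = nn.Linear(hidden, n_components)       # mixture weights (logits)
            self.mu = nn.Linear(hidden, n_components)       # component means
            self.log_sigma = nn.Linear(hidden, n_components)

        def forward(self, x):
            h = self.body(x)
            return torch.softmax(self.pi(h), dim=-1), self.mu(h), self.log_sigma(h).exp()

    def mdn_nll(pi, mu, sigma, z):
        """Negative log-likelihood of the true redshift z under the mixture PDF."""
        comp = torch.distributions.Normal(mu, sigma)
        log_prob = comp.log_prob(z.unsqueeze(-1)) + torch.log(pi + 1e-12)
        return -torch.logsumexp(log_prob, dim=-1).mean()

    mags = torch.rand(32, 5)            # stand-in photometric features
    z_spec = torch.rand(32)             # stand-in spectroscopic redshifts
    model = PhotoZMDN()
    loss = mdn_nll(*model(mags), z_spec)
    loss.backward()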

  2. Brucellosis associated with deep vein thrombosis.

    Science.gov (United States)

    Tolaj, Ilir; Mehmeti, Murat; Ramadani, Hamdi; Tolaj, Jasmina; Dedushi, Kreshnike; Fejza, Hajrullah

    2014-11-19

    Over the past 10 years more than 700 cases of brucellosis have been reported in Kosovo, which is heavily oriented towards agriculture and animal husbandry. Here, brucellosis is still endemic and represents an uncontrolled public health problem. Human brucellosis may present with a broad spectrum of clinical manifestations; among them, vascular complications are uncommon. Hereby we describe the case of a 37-year-old male patient with brucellosis complicated by deep vein thrombosis on his left leg.

  3. Gene expression inference with deep learning.

    Science.gov (United States)

    Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui

    2016-06-15

    Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of the remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture the complex nonlinear relationships between expressions of genes. We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those of other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: xhx@ics.uci.edu. Supplementary data are available at Bioinformatics online.
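
    The landmark-to-target inference setting can be sketched as follows: a fully connected network maps ~1000 landmark-gene expressions to many target genes, next to an ordinary least-squares linear-regression baseline of the kind the network is compared against. All dimensions and layer sizes here are illustrative, not those of D-GEX, and the data are random stand-ins.

    # Hedged sketch of landmark-to-target expression inference: deep model vs LR.
    import numpy as np
    import torch
    import torch.nn as nn

    N_LANDMARK, N_TARGET = 1000, 5000          # illustrative dimensions

    mlp = nn.Sequential(
        nn.Linear(N_LANDMARK, 3000), nn.Tanh(),
        nn.Linear(3000, 3000), nn.Tanh(),
        nn.Linear(3000, N_TARGET))

    # Linear-regression baseline: a single least-squares fit for all targets.
    X = np.random.randn(256, N_LANDMARK)       # stand-in landmark profiles
    Y = np.random.randn(256, N_TARGET)         # stand-in target profiles
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # (N_LANDMARK, N_TARGET) coefficients
    mae_lr = np.abs(X @ W - Y).mean()

    # One training step of the deep model on the same stand-in data.
    opt = torch.optim.Adam(mlp.parameters(), lr=1e-4)
    xb = torch.tensor(X, dtype=torch.float32)
    yb = torch.tensor(Y, dtype=torch.float32)
    loss = nn.functional.l1_loss(mlp(xb), yb)  # MAE, as reported in the abstract
    loss.backward()
    opt.step()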

  4. A Moessbauer study of deep sea sediments

    International Nuclear Information System (INIS)

    Minai, Y.; Tominaga, T.; Furuta, T.; Kobayashi, K.

    1981-01-01

    In order to determine the chemical states of iron in deep sea sediments, Moessbauer spectra of the sediments collected from various areas of the Pacific have been measured. The Moessbauer spectra were composed of paramagnetic ferric, high-spin ferrous, and magnetic components. The correlation of their relative abundance to the sampling location and the kind of sediments may afford clues to infer the origin of each iron-bearing phase. (author)

  5. Deep inelastic collisions viewed as Brownian motion

    International Nuclear Information System (INIS)

    Gross, D.H.E.; Freie Univ. Berlin

    1980-01-01

    Non-equilibrium transport processes like Brownian motion have been studied for perhaps 100 years, and one should ask why these theories are not used to explain deep inelastic collision data. These theories have reached a high standard of sophistication, experience, and precision that I believe makes them very useful for our problem. I will try to sketch a possible form of an advanced theory of Brownian motion that seems to be suitable for low energy heavy ion collisions. (orig./FKS)

  6. Deep electroproduction of exotic hybrid mesons

    International Nuclear Information System (INIS)

    Anikin, I.V.; Pire, B.; Szymanowski, L.; Teryaev, O.V.; Wallon, S.

    2004-01-01

    We evaluate the leading order amplitude for the deep exclusive electroproduction of an exotic hybrid meson in the Bjorken regime. We show that, contrary to naive expectation, this amplitude factorizes at the twist-2 level and thus scales like usual meson electroproduction when the virtual photon and the hybrid meson are longitudinally polarized. Exotic hybrid mesons may thus be studied in electroproduction experiments at JLAB, HERA (HERMES) or CERN (COMPASS)

  7. Deep primary production in coastal pelagic systems

    DEFF Research Database (Denmark)

    Lyngsgaard, Maren Moltke; Richardson, Katherine; Markager, Stiig

    2014-01-01

    produced. The primary production (PP) occurring below the surface layer, i.e. in the pycnocline-bottom layer (PBL), is shown to contribute significantly to total PP. Oxygen concentrations in the PBL are shown to correlate significantly with the deep primary production (DPP) as well as with salinity...... that eutrophication effects may include changes in the structure of planktonic food webs and element cycling in the water column, both brought about through an altered vertical distribution of PP....

  8. Environmental studies for deep seabed mining

    Digital Repository Service at National Institute of Oceanography (India)

    Sharma, R.

    tests of prototype mining systems and components in the Pacific Ocean in the seventies (DOMES, 1976). The subsequent progress in the development of mining technology for deep-sea minerals has been slow since 1980 due to unfavorable metal market... assume instead, because commercial system size and type are likely different depending upon the use of nodule elements and can vary with metal market situation. 2. How do we generalize the test parameters from the tow-sled collector and miner...

  9. Deep vein thrombosis: diagnosis, treatment, and prevention

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, W.P.; Youngswick, F.D.

    Deep vein thrombosis (DVT) is a dangerous complication that may present after elective foot surgery. Because of the frequency with which DVT occurs in the elderly patient, as well as in the podiatric surgical population, the podiatrist should be acquainted with this entity. The diagnosis, treatment, and prevention of DVT, and the role of podiatry in its management, are reviewed in this paper.

  10. Deep Learning with Dynamic Computation Graphs

    OpenAIRE

    Looks, Moshe; Herreshoff, Marcello; Hutchins, DeLesley; Norvig, Peter

    2017-01-01

    Neural networks that compute over graph structures are a natural fit for problems in a variety of domains, including natural language (parse trees) and cheminformatics (molecular graphs). However, since the computation graph has a different shape and size for every input, such networks do not directly support batched training or inference. They are also difficult to implement in popular deep learning libraries, which are based on static data-flow graphs. We introduce a technique called dynami...
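
    The core difficulty the abstract points to, namely that the computation graph changes shape with every input, can be illustrated with a toy recursive encoder over nested tuples (standing in for parse trees). This generic sketch shows the per-input graph structure; it does not reproduce the batching technique introduced in the cited work.

    # Illustrative sketch of computing over an input-dependent graph structure:
    # the module is applied recursively, so the graph differs for every input.
    import torch
    import torch.nn as nn

    class TreeEncoder(nn.Module):
        def __init__(self, dim=16):
            super().__init__()
            self.leaf = nn.Linear(1, dim)
            self.merge = nn.Linear(2 * dim, dim)
        def forward(self, tree):
            if isinstance(tree, (int, float)):                  # leaf node
                return torch.tanh(self.leaf(torch.tensor([[float(tree)]])))
            left, right = tree                                  # internal node
            return torch.tanh(self.merge(torch.cat([self(left), self(right)], dim=-1)))

    enc = TreeEncoder()
    vec_a = enc((1.0, (2.0, 3.0)))                 # different nesting,
    vec_b = enc(((1.0, 2.0), (3.0, (4.0, 5.0))))   # different computation graph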

  11. Critical study of high efficiency deep grinding

    OpenAIRE

    Johnstone, Iain

    2002-01-01

    In recent years, the aerospace industry in particular has embraced and actively pursued the development of stronger high performance materials, namely nickel based superalloys and hardwearing steels. This has resulted in a need for a more efficient method of machining, and this need was answered with the advent of High Efficiency Deep Grinding (HEDG). This relatively new process using Cubic Boron Nitride (CBN) electroplated grinding wheels has been investigated through experim...

  12. Young Children in Deep Poverty. Fact Sheet

    Science.gov (United States)

    Ekono, Mercedes; Jiang, Yang; Smith, Sheila

    2016-01-01

    A U.S. family of three living in deep poverty survives on an annual income below $9,276, or less than $9.00 a day per family member. The struggle to raise children on such a meager income is not a rare circumstance among U.S. families, especially those with young children. Currently, 11 percent of young children (0-9 years) live in households with…

  13. Deuteron structure in the deep inelastic regime

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Canal, C.A.; Tarutina, T. [Universidad Nacional de La Plata, IFLP/CONICET y Departamento de Fisica, La Plata (Argentina); Vento, V. [Universidad de Valencia-CSIC, Departamento de Fisica Teorica-IFIC, Burjassot (Valencia) (Spain)

    2017-06-15

    We study nuclear effects in the deuteron in the deep inelastic regime using the newest available data. We put special emphasis on their Q² dependence. The study is carried out using a scheme which parameterizes, in a simple manner, these effects by changing the proton and neutron structure functions in medium. The result of our analysis is compared with other recent proposals. We conclude that precise EMC ratios cannot be obtained without considering the nuclear effects in the deuteron. (orig.)

  14. Deep Inelastic Scattering at the Amplitude Level

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    2005-01-01

    The deep inelastic lepton scattering and deeply virtual Compton scattering cross sections can be interpreted in terms of the fundamental wavefunctions defined by the light-front Fock expansion, thus allowing tests of QCD at the amplitude level. The AdS/CFT correspondence between gauge theory and string theory provides remarkable new insights into QCD, including a model for hadronic wavefunctions which display conformal scaling at short distances and color confinement at large distances

  15. Mean associated multiplicities in deep inelastic processes

    International Nuclear Information System (INIS)

    Dzhaparidze, G.S.; Kiselev, A.V.; Petrov, V.A.

    1982-01-01

    A formula is derived for the mean multiplicity of hadrons in the target-fragmentation region in the process of deep inelastic scattering. It is shown that in the region of large x the ratio of the mean multiplicities in the current- and target-fragmentation regions tends to unity at high energies. The mean multiplicity in the Drell-Yan process is also discussed

  16. Microplastic pollution in deep-sea sediments

    International Nuclear Information System (INIS)

    Van Cauwenberghe, Lisbeth; Vanreusel, Ann; Mees, Jan; Janssen, Colin R.

    2013-01-01

    Microplastics are small plastic particles. •Microplastics were observed in the deep-sea sediments sampled. •The depths from where these microplastics were recovered range from 1176 to 4843 m. •The sizes of the particles range from 75 to 161 μm at their largest cross-section. -- Here, we demonstrate that microplastics have invaded the marine environment to an extent that they appear to even be present in the remote deep sea

  17. Colour coherence in deep inelastic Compton scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, A.I.; Vazdik, J.A. (Lebedev Physical Inst., Academy of Sciences, Moscow (USSR))

    1992-01-01

    MC simulation of Deep Inelastic Compton on proton - both QED and QCD - was performed on the basis of LUCIFER program for HERA energies. Charged hadron flow was calculated for string and independent fragmentation with different cuts on p_t and x. It is shown that interjet colour coherence leads in the case of QCD Compton to the drag effects diminishing the hadron flow in the direction between quark jet and proton remnant jet. (orig.).

  18. Colour coherence in deep inelastic Compton scattering

    International Nuclear Information System (INIS)

    Lebedev, A.I.; Vazdik, J.A.

    1992-01-01

    MC simulation of Deep Inelastic Compton on proton - both QED and QCD - was performed on the basis of LUCIFER program for HERA energies. Charged hadron flow was calculated for string and independent fragmentation with different cuts on p_t and x. It is shown that interjet colour coherence leads in the case of QCD Compton to the drag effects diminishing the hadron flow in the direction between quark jet and proton remnant jet. (orig.)

  19. Brucellosis associated with deep vein thrombosis

    Directory of Open Access Journals (Sweden)

    Ilir Tolaj

    2014-11-01

    Full Text Available Over the past 10 years more than 700 cases of brucellosis have been reported in Kosovo, which is heavily oriented towards agriculture and animal husbandry. Here, brucellosis is still endemic and represents an uncontrolled public health problem. Human brucellosis may present with a broad spectrum of clinical manifestations; among them, vascular complications are uncommon. Hereby we describe the case of a 37-year-old male patient with brucellosis complicated by deep vein thrombosis on his left leg.

  20. Brucellosis Associated with Deep Vein Thrombosis

    Science.gov (United States)

    Tolaj, Ilir; Mehmeti, Murat; Ramadani, Hamdi; Tolaj, Jasmina; Dedushi, Kreshnike; Fejza, Hajrullah

    2014-01-01

    Over the past 10 years more than 700 cases of brucellosis have been reported in Kosovo, which is heavily oriented towards agriculture and animal husbandry. Here, brucellosis is still endemic and represents an uncontrolled public health problem. Human brucellosis may present with a broad spectrum of clinical manifestations; among them, vascular complications are uncommon. Hereby we describe the case of a 37-year-old male patient with brucellosis complicated by deep vein thrombosis on his left leg. PMID:25568754

  1. Brucellosis Associated with Deep Vein Thrombosis

    OpenAIRE

    Tolaj, Ilir; Mehmeti, Murat; Ramadani, Hamdi; Tolaj, Jasmina; Dedushi, Kreshnike; Fejza, Hajrullah

    2014-01-01

    Over the past 10 years more than 700 cases of brucellosis have been reported in Kosovo, which is heavily oriented towards agriculture and animal husbandry. Here, brucellosis is still endemic and represents an uncontrolled public health problem. Human brucellosis may present with a broad spectrum of clinical manifestations; among them, vascular complications are uncommon. Hereby we describe the case of a 37-year-old male patient with brucellosis complicated by deep vein thrombosis on his left ...

  2. Study deep geothermal energy; Studie dypgeotermisk energi

    Energy Technology Data Exchange (ETDEWEB)

    Havellen, Vidar; Eri, Lars Sigurd; Andersen, Andreas; Tuttle, Kevin J.; Ruden, Dorottya Bartucz; Ruden, Fridtjof; Rigler, Balazs; Pascal, Christophe; Larsen, Bjoern Tore

    2012-07-01

    The study aims to analyze the energy potential with current technology, and the challenges, issues and opportunities for deep geothermal energy, using quantitative analysis. Particular emphasis is placed on identifying and investigating critical connections between the geothermal potential, the size of the heating requirements and the technical solutions. Examples of critical relationships may be the acceptable cost of the technology in relation to heating, the local geothermal gradient / drilling depth / temperature levels, and profitability. (eb)

  3. Clinical analysis of deep neck space infections

    International Nuclear Information System (INIS)

    Hatano, Atsushi; Ui, Naoya; Shigeta, Yasushi; Iimura, Jiro; Rikitake, Masahiro; Endo, Tomonori; Kimura, Akihiro

    2009-01-01

    Deep neck space infections, which affect soft tissues and fascial compartments of the head and neck, can lead to lethal complications unless treated carefully and quickly, even with the advanced antibiotics available. We reviewed our seventeen patients with deep neck abscesses, analyzed their location by reviewing CT images, and discussed the treatment. Deep neck space infections were classified according to the degree of diffusion of infection diagnosed by CT images. Neck space infection in two cases was localized to the upper neck space above the hyoid bone (Stage I). Neck space infection in 12 cases extended to the lower neck space (Stage II), and further extended to the mediastinum in one case (Stage III). The two cases of Stage I and the four cases of Stage II were managed with incision and drainage through a submental approach. The seven cases of Stage II were managed with incision and drainage parallel to the anterior border of the sternocleidomastoid muscle, the "Dean" approach. The one case of Stage III received treatment through transcervicotomy and anterior mediastinal drainage through a subxiphodal incision. The parapharyngeal space played an important role in that the inflammatory change can spread to the neck space inferiorly. The anterior cervical space in the infrahyoid neck was important for mediastinal extension of parapharyngeal abscesses. It is important to diagnose deep neck space infections promptly and treat them adequately, and contrast-enhanced CT is useful and indispensable for diagnosis. The key question is which kind of drainage should be performed. If the surgical method of drainage is chosen according to the level of involvement in the neck space and mediastinum, excellent results may be obtained in terms of survival and morbidity. (author)

  4. Deep sky observing an astronomical tour

    CERN Document Server

    Coe, Steven R

    2016-01-01

    This updated second edition has all of the information needed for your successful forays into deep sky observing. Coe uses his years of experience to give detailed practical advice about how to find the best observing site, how to make the most of the time spent there, and what equipment and instruments to take along. There are comprehensive lists of deep sky objects of all kinds, along with Steve's own observations describing how they look through telescopes with apertures ranging from 4 inches to 36 inches (0.1 - 0.9 meters). Binocular observing also gets its due, while the lists of objects have been amended to highlight only the best targets. A new index makes finding targets easier than ever before, while the selection of viewing targets has been revised from the first edition. Most of all, this book is all about how to enjoy astronomy. The author's enthusiasm and sense of wonder shine through every page as he invites you along on a tour of some of the most beautiful and fascinating sites in the deep ...

  5. Deep-water subsea lifting operations

    Energy Technology Data Exchange (ETDEWEB)

    Nestegaard, Arne; Boee, Tormod

    2010-07-01

    Significant costs are related to marine operations in the installation phase of deep water subsea field developments. In order to establish safe operational criteria and procedures for the installation, detailed planning is necessary, including numerical modelling and analysis of the environmental conditions and hydrodynamic loads on the installed object as well as the installation equipment. This paper presents recommendations for modelling and analysis of deep water subsea lifting operations developed for the new DNV RP-H103 [1]. During installation of subsea structures, the highest dynamic forces are most often encountered in the splash zone. Recommendations for estimation of maximum forces will be presented. For small structures and tools, installation through the moon pool of a small installation vessel is often preferred. Calculation methods for loading on structures installed through a moon pool will be presented. During intervention or installation in deep water a significant amplification of amplitude and forces can be experienced when the frequency range of vertical crane tip motion coincides with the natural vertical oscillation of the lift wire and load. Vertical resonance may reduce the operability of the operation. Simplified calculation methods for such operations are presented. (Author)
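
    The vertical-resonance concern described above can be illustrated with a back-of-envelope calculation: a payload of mass m (plus hydrodynamic added mass) hanging on a wire of axial stiffness k = EA/L has natural period T = 2π·sqrt((m + a)/k), which grows with deployed wire length and can enter the wave-period range in deep water. All numerical values in the sketch are illustrative assumptions, not design-recommendation figures.

    # Hedged back-of-envelope sketch of lift-wire vertical resonance in deep water.
    import math

    E = 1.0e11        # effective wire modulus, Pa (illustrative)
    A = 2.0e-3        # wire cross-section area, m^2 (illustrative)
    m = 5.0e4         # payload mass, kg (illustrative)
    a = 2.0e4         # hydrodynamic added mass, kg (illustrative)

    for depth in (500.0, 1500.0, 3000.0):          # deployed wire length, m
        k = E * A / depth                          # axial stiffness of the wire
        period = 2.0 * math.pi * math.sqrt((m + a) / k)
        print(f"depth {depth:6.0f} m -> natural period {period:5.1f} s")

    As the deployed length grows, the computed natural period approaches typical wave periods, which is the resonance regime the recommended practice addresses.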

  6. Quantitative phenotyping via deep barcode sequencing.

    Science.gov (United States)

    Smith, Andrew M; Heisler, Lawrence E; Mellor, Joseph; Kaper, Fiona; Thompson, Michael J; Chee, Mark; Roth, Frederick P; Giaever, Guri; Nislow, Corey

    2009-10-01

    Next-generation DNA sequencing technologies have revolutionized diverse genomics applications, including de novo genome sequencing, SNP detection, chromatin immunoprecipitation, and transcriptome analysis. Here we apply deep sequencing to genome-scale fitness profiling to evaluate yeast strain collections in parallel. This method, Barcode analysis by Sequencing, or "Bar-seq," outperforms the current benchmark barcode microarray assay in terms of both dynamic range and throughput. When applied to a complex chemogenomic assay, Bar-seq quantitatively identifies drug targets, with performance superior to the benchmark microarray assay. We also show that Bar-seq is well-suited for a multiplex format. We completely re-sequenced and re-annotated the yeast deletion collection using deep sequencing, found that approximately 20% of the barcodes and common priming sequences varied from expectation, and used this revised list of barcode sequences to improve data quality. Together, this new assay and analysis routine provide a deep-sequencing-based toolkit for identifying gene-environment interactions on a genome-wide scale.
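
    The counting step at the heart of a barcode-sequencing fitness assay can be sketched in a few lines: tally reads containing each strain's barcode and convert the tallies to relative abundances. The barcode sequences, reads and exact-match strategy below are hypothetical simplifications of a real pipeline.

    # Minimal sketch of Bar-seq style barcode counting (hypothetical barcodes/reads).
    from collections import Counter

    barcode_to_strain = {                      # hypothetical 20-mer barcodes
        "ACGTACGTACGTACGTACGT": "strain_A",
        "TTGGCCAATTGGCCAATTGG": "strain_B",
    }

    def count_barcodes(reads, barcode_to_strain):
        counts = Counter()
        for read in reads:
            for barcode, strain in barcode_to_strain.items():
                if barcode in read:            # exact substring match for simplicity
                    counts[strain] += 1
                    break
        return counts

    reads = [
        "NNNACGTACGTACGTACGTACGTNNN",
        "NNNTTGGCCAATTGGCCAATTGGNNN",
        "NNNACGTACGTACGTACGTACGTNNN",
    ]
    counts = count_barcodes(reads, barcode_to_strain)
    total = sum(counts.values())
    abundances = {strain: c / total for strain, c in counts.items()}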

  7. Deep Learning for Population Genetic Inference.

    Science.gov (United States)

    Sheehan, Sara; Song, Yun S

    2016-03-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.
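
    A schematic version of this likelihood-free setup is a feed-forward network that maps a vector of correlated summary statistics to joint outputs for demographic and selection classes. The statistic count, class counts and layer sizes in the sketch are illustrative assumptions, not the configuration used in the cited study.

    # Illustrative sketch: summary statistics in, joint demography/selection outputs.
    import torch
    import torch.nn as nn

    N_STATS, N_DEMO, N_SEL = 300, 3, 3            # illustrative sizes

    class PopGenNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(N_STATS, 128), nn.ReLU(),
                                        nn.Linear(128, 64), nn.ReLU())
            self.demography = nn.Linear(64, N_DEMO)   # e.g. size-change classes
            self.selection = nn.Linear(64, N_SEL)     # e.g. neutral / hard / soft sweep
        def forward(self, stats):
            h = self.shared(stats)
            return self.demography(h), self.selection(h)

    stats = torch.rand(16, N_STATS)               # stand-in summary statistics
    demo_logits, sel_logits = PopGenNet()(stats)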

  8. Opportunities and constraints of deep water projects

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    While oil output from deep water areas is still scarce, it has nevertheless become a reality in water depths over 300 m. Specific constraints linked to these developments lead to the selection of appropriate concepts for production supports. The first deep water developments occurred off Brazil (see other articles in this issue) and in the Gulf of Mexico, and are now expanding to other areas worldwide, such as the West of Shetland discoveries, the northern part of the Norwegian waters and potentially West Africa, the Barents Sea and South-East Asia. Fixed platforms and compliant towers have shown their limits (in terms of water depth capacity) and new deep water projects mainly rely on tension leg platforms (TLP) and floaters, either FPSOs or semi-sub based. Research is under way on alternative materials for lighter flexible risers and mooring systems. Operators and manufacturers are eager to develop, for the 300 m range, systems and equipment that could be used with little modification for oil fields located in deeper waters. (author). 1 fig., 1 tab

  9. Deep levels in silicon–oxygen superlattices

    International Nuclear Information System (INIS)

    Simoen, E; Jayachandran, S; Delabie, A; Caymax, M; Heyns, M

    2016-01-01

    This work reports on the deep levels observed in Pt/Al₂O₃/p-type Si metal-oxide-semiconductor capacitors containing a silicon–oxygen superlattice (SL) by deep-level transient spectroscopy. It is shown that the presence of the SL gives rise to a broad band of hole traps occurring around the silicon mid gap, which is absent in reference samples with a silicon epitaxial layer. In addition, the density of states of the deep levels roughly scales with the number of SL periods for the as-deposited samples. Annealing in a forming gas atmosphere reduces the maximum concentration significantly, while the peak energy position shifts from close-to mid-gap towards the valence band edge. Based on the flat-band voltage shift of the Capacitance–Voltage characteristics it is inferred that positive charge is introduced by the oxygen atomic layers in the SL, indicating the donor nature of the underlying hole traps. In some cases, a minor peak associated with P_b dangling bond centers at the Si/SiO₂ interface has been observed as well. (paper)

  10. Gene expression in the deep biosphere.

    Science.gov (United States)

    Orsi, William D; Edgcomb, Virginia P; Christman, Glenn D; Biddle, Jennifer F

    2013-07-11

    Scientific ocean drilling has revealed a deep biosphere of widespread microbial life in sub-seafloor sediment. Microbial metabolism in the marine subsurface probably has an important role in global biogeochemical cycles, but deep biosphere activities are not well understood. Here we describe and analyse the first sub-seafloor metatranscriptomes from anaerobic Peru Margin sediment up to 159 metres below the sea floor, represented by over 1 billion complementary DNA (cDNA) sequence reads. Anaerobic metabolism of amino acids, carbohydrates and lipids seem to be the dominant metabolic processes, and profiles of dissimilatory sulfite reductase (dsr) transcripts are consistent with pore-water sulphate concentration profiles. Moreover, transcripts involved in cell division increase as a function of microbial cell concentration, indicating that increases in sub-seafloor microbial abundance are a function of cell division across all three domains of life. These data support calculations and models of sub-seafloor microbial metabolism and represent the first holistic picture of deep biosphere activities.

  11. Stellar Atmospheric Parameterization Based on Deep Learning

    Science.gov (United States)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

    Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, with 3821-500-100-50-1 nodes in its successive layers. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretical spectra computed with Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation of three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metallic abundance ([Fe/H]). The results show that the stacked autoencoder deep neural network achieves good accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg (Teff/K), 0.1706 for lg (g/(cm·s⁻²)), and 0.1294 dex for [Fe/H]; on the theoretical spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg (Teff/K), 0.0214 for lg (g/(cm·s⁻²)), and 0.0121 dex for [Fe/H].
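
    The 3821-500-100-50-1 architecture quoted above can be written down directly as a feed-forward regressor for a single parameter; the layer-wise autoencoder pretraining used in the cited work is omitted in this sketch, and the input spectra are random stand-ins.

    # Sketch of the 3821-500-100-50-1 network as a plain feed-forward regressor.
    import torch
    import torch.nn as nn

    stellar_net = nn.Sequential(
        nn.Linear(3821, 500), nn.Sigmoid(),
        nn.Linear(500, 100), nn.Sigmoid(),
        nn.Linear(100, 50), nn.Sigmoid(),
        nn.Linear(50, 1))                       # one output per physical parameter

    flux = torch.rand(8, 3821)                  # stand-in spectra (3821 flux bins)
    teff_pred = stellar_net(flux)               # predicted parameter, one per spectrum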

  12. Deep Learning for Population Genetic Inference.

    Directory of Open Access Journals (Sweden)

    Sara Sheehan

    2016-03-01

    Full Text Available Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.

  13. Deep Learning for Population Genetic Inference

    Science.gov (United States)

    Sheehan, Sara; Song, Yun S.

    2016-01-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908

  14. Deep-inelastic electron-proton diffraction

    International Nuclear Information System (INIS)

    Dainton, J.B.

    1995-11-01

    Recent measurements by the H1 collaboration at HERA of the cross section for deep-inelastic electron-proton scattering in which the proton interacts with minimal energy transfer and limited 4-momentum transfer squared are presented in the form of the contribution F₂^D(3) to the proton structure function F₂. By parametrising the cross section phenomenologically in terms of a leading effective Regge pole exchange and comparing the result with a similar parametrisation of hadronic pp physics, the proton interaction is demonstrated to be dominantly of a diffractive nature. The quantitative interpretation of the parametrisation in terms of the properties of an effective leading Regge pole exchange, the pomeron (IP), shows that there is no evidence for a 'harder' BFKL-motivated IP in such deep-inelastic proton diffraction. The total contribution of proton diffraction to deep-inelastic electron-proton scattering is measured to be ≈10% and to be rather insensitive to Bjorken-x and Q². A first measurement of the partonic structure of diffractive exchange is presented. It is shown to be readily interpreted in terms of the exchange of gluons, and to suggest that the bulk of diffractive momentum transfer is carried by a leading gluon. (orig.)

  15. Optimization of lining design in deep clays

    International Nuclear Information System (INIS)

    Rousset, G.; Bublitz, D.

    1989-01-01

    The main features of the mechanical behaviour of deep clay are time dependent effects and also the existence of a long term cohesion which may be taken into account for dimensioning galleries. In this text, a lining optimization test is presented. It concerns a gallery driven in deep clay, 230 m deep, at Mol (Belgium). We show that sliding rib lining gives both: - an optimal tunnel face advance speed, a minimal closure of the gallery wall before setting the lining, and therefore less likelihood of failure developing inside the rock mass; - limitation of the length of the non-lined part of the gallery. The chosen process allows, on one hand, the preservation of the rock mass integrity and, on the other, use of the confinement effect to allow closure under high average stress conditions; this process can be considered as an optimal application of the convergence-confinement method. An important set of measurement devices is then presented along with results obtained from one year's operation. We show in particular that the stress distribution in the lining is homogeneous and that the sliding limit can be measured with high precision

  16. Deep Space Habitat Wireless Smart Plug

    Science.gov (United States)

    Morgan, Joseph A.; Porter, Jay; Rojdev, Kristina; Carrejo, Daniel B.; Colozza, Anthony J.

    2014-01-01

    NASA has been interested in technology development for deep space exploration, and one avenue of developing these technologies is via the eXploration Habitat (X-Hab) Academic Innovation Challenge. In 2013, NASA's Deep Space Habitat (DSH) project was in need of sensors that could monitor the power consumption of various devices in the habitat with added capability to control the power to these devices for load shedding in emergency situations. Texas A&M University's Electronic Systems Engineering Technology Program (ESET) in conjunction with their Mobile Integrated Solutions Laboratory (MISL) accepted this challenge, and over the course of 2013, several undergraduate students in a Capstone design course developed five wireless DC Smart Plugs for NASA. The wireless DC Smart Plugs developed by Texas A&M in conjunction with NASA's Deep Space Habitat team is a first step in developing wireless instrumentation for future flight hardware. This paper will further discuss the X-Hab challenge and requirements set out by NASA, the detailed design and testing performed by Texas A&M, challenges faced by the team and lessons learned, and potential future work on this design.

  17. Deep vein thrombosis: a clinical review

    Directory of Open Access Journals (Sweden)

    Kesieme EB

    2011-04-01

    Full Text Available Emeka Kesieme (1), Chinenye Kesieme (2), Nze Jebbin (3), Eshiobo Irekpita (1), Andrew Dongo (1). 1 Department of Surgery, Irrua Specialist Teaching Hospital, Irrua, Nigeria; 2 Department of Paediatrics, Irrua Specialist Teaching Hospital, Irrua, Nigeria; 3 Department of Surgery, University of Port Harcourt Teaching Hospital, Port-Harcourt, Nigeria. Background: Deep vein thrombosis (DVT) is the formation of blood clots (thrombi) in the deep veins. It commonly affects the deep leg veins (such as the calf veins, femoral vein, or popliteal vein) or the deep veins of the pelvis. It is a potentially dangerous condition that can lead to preventable morbidity and mortality. Aim: To present an update on the causes and management of DVT. Methods: A review of publications obtained from Medline search, medical libraries, and Google. Results: DVT affects 0.1% of persons per year. It is predominantly a disease of the elderly and has a slight male preponderance. The approach to making a diagnosis currently involves an algorithm combining pretest probability, D-dimer testing, and compression ultrasonography. This will guide further investigations if necessary. Prophylaxis is both mechanical and pharmacological. The goals of treatment are to prevent extension of thrombi, pulmonary embolism, recurrence of thrombi, and the development of complications such as pulmonary hypertension and post-thrombotic syndrome. Conclusion: DVT is a potentially dangerous condition with a myriad of risk factors. Prophylaxis is very important and can be mechanical and pharmacological. The mainstay of treatment is anticoagulant therapy. Low-molecular-weight heparin, unfractionated heparin, and vitamin K antagonists have been the treatment of choice. Currently anticoagulants specifically targeting components of the common pathway have been recommended for prophylaxis. These include fondaparinux, a selective indirect factor Xa inhibitor and the new oral selective direct thrombin inhibitors (dabigatran and selective

  18. Exploring the Earth Using Deep Learning Techniques

    Science.gov (United States)

    Larraondo, P. R.; Evans, B. J. K.; Antony, J.

    2016-12-01

    Research using deep neural networks has matured significantly in recent times, and there is now a surge of interest in applying such methods to Earth systems science and the geosciences. When combined with Big Data, we believe there are opportunities for significantly transforming a number of areas relevant to researchers and policy makers. In particular, by using a combination of data from a range of satellite Earth observations as well as computer simulations from climate models and reanalysis, we can gain new insights into the information that is locked within the data. Global geospatial datasets describe a wide range of physical and chemical parameters, which are mostly available using regular grids covering large spatial and temporal extents. This makes them perfect candidates for applying deep learning methods. So far, these techniques have been successfully applied to image analysis through the use of convolutional neural networks. However, this is only one field of interest, and there is potential for many more use cases to be explored. Deep learning algorithms require fast access to large amounts of data in the form of tensors and make intensive use of CPUs in order to train their models. The Australian National Computational Infrastructure (NCI) has recently augmented its Raijin 1.2 PFlop supercomputer with hardware accelerators. Together with NCI's 3000 core high performance OpenStack cloud, these computational systems have direct access to NCI's 10+ PBytes of datasets and associated Big Data software technologies (see http://geonetwork.nci.org.au/ and http://nci.org.au/systems-services/national-facility/nerdip/). To effectively use these computing infrastructures requires that both the data and software are organised in a way that readily supports the deep learning software ecosystem. Deep learning software, such as the open source TensorFlow library, has allowed us to demonstrate the possibility of generating geospatial models by combining information from

  19. Iris Transponder-Communications and Navigation for Deep Space

    Science.gov (United States)

    Duncan, Courtney B.; Smith, Amy E.; Aguirre, Fernando H.

    2014-01-01

    The Jet Propulsion Laboratory has developed the Iris CubeSat-compatible deep space transponder for INSPIRE, the first CubeSat to deep space. Iris is 0.4 U, 0.4 kg, consumes 12.8 W, and interoperates with NASA's Deep Space Network (DSN) on X-Band frequencies (7.2 GHz uplink, 8.4 GHz downlink) for command, telemetry, and navigation. This talk discusses the Iris for INSPIRE, its features and requirements; future developments and improvements underway; deep space and proximity operations applications for Iris; high rate earth orbit variants; and ground requirements, such as are implemented in the DSN, for deep space operations.

  20. DeepMirTar: a deep-learning approach for predicting human miRNA targets.

    Science.gov (United States)

    Wen, Ming; Cong, Peisheng; Zhang, Zhimin; Lu, Hongmei; Li, Tonghua

    2018-06-01

    MicroRNAs (miRNAs) are small noncoding RNAs that function in RNA silencing and post-transcriptional regulation of gene expression by targeting messenger RNAs (mRNAs). Because the underlying mechanisms associated with miRNA binding to mRNA are not fully understood, a major challenge of miRNA studies involves the identification of miRNA-target sites on mRNA. In silico prediction of miRNA-target sites can expedite costly and time-consuming experimental work by providing the most promising miRNA-target-site candidates. In this study, we report the design and implementation of DeepMirTar, a deep-learning-based approach for accurately predicting human miRNA targets at the site level. The predicted miRNA-target sites are those having canonical or non-canonical seeds, and three classes of features (high-level expert-designed, low-level expert-designed, and raw-data-level) were used to represent each miRNA-target site. Comparison with other state-of-the-art machine-learning methods and existing miRNA-target-prediction tools indicated that DeepMirTar improved overall predictive performance. DeepMirTar is freely available at https://github.com/Bjoux2/DeepMirTar_SdA. Contact: lith@tongji.edu.cn, hongmeilu@csu.edu.cn. Supplementary data are available at Bioinformatics online.
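
    A stacked-denoising-autoencoder style classifier of the kind referenced by the repository name can be sketched as below: one autoencoder layer is trained to reconstruct feature vectors from corrupted copies, and its encoder is then reused under a logistic output for site classification. Feature length, layer width and noise level are illustrative assumptions, not the released DeepMirTar model.

    # Hedged sketch of a denoising-autoencoder-based classifier for site features.
    import torch
    import torch.nn as nn

    N_FEATURES = 750                            # illustrative site-feature length

    class DenoisingAE(nn.Module):
        """One autoencoder layer trained to reconstruct inputs from corrupted inputs."""
        def __init__(self, n_in, n_hidden, noise=0.2):
            super().__init__()
            self.noise = noise
            self.encode = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.decode = nn.Linear(n_hidden, n_in)
        def forward(self, x):
            corrupted = x * (torch.rand_like(x) > self.noise).float()  # masking noise
            return self.decode(self.encode(corrupted))

    # Pretrain one layer (sketch), then reuse its encoder inside the classifier.
    ae = DenoisingAE(N_FEATURES, 256)
    x = torch.rand(64, N_FEATURES)              # stand-in site feature vectors
    recon_loss = nn.functional.mse_loss(ae(x), x)
    recon_loss.backward()

    classifier = nn.Sequential(ae.encode, nn.Linear(256, 1))   # logit: target vs not
    site_logit = classifier(x)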

  1. deepTools2: a next generation web server for deep-sequencing data analysis.

    Science.gov (United States)

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continue to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available.

  2. Deep Sludge Gas Release Event Analytical Evaluation

    International Nuclear Information System (INIS)

    Sams, Terry L.

    2013-01-01

    The purpose of the Deep Sludge Gas Release Event Analytical Evaluation (DSGRE-AE) is to evaluate the postulated hypothesis that a hydrogen GRE may occur in Hanford tanks containing waste sludges at levels greater than previously experienced. There is a need to understand gas retention and release hazards in sludge beds which are 200-300 inches deep. These sludge beds are deeper than historical Hanford sludge waste beds, and are created when waste is retrieved from older single-shell tanks (SST) and transferred to newer double-shell tanks (DST). Retrieval of waste from SSTs reduces the risk to the environment from leakage or potential leakage of waste into the ground from these tanks. However, the possibility of an energetic event (flammable gas accident) in the retrieval receiver DST is worse than slow leakage. Lines of inquiry, therefore, are (1) can sludge waste be stored safely in deep beds; (2) can gas release events (GRE) be prevented by periodically degassing the sludge (e.g., with a mixer pump); or (3) does the retrieval strategy need to be altered to limit sludge bed height by retrieving into additional DSTs? The scope of this effort is to provide expert advice on whether or not to move forward with the generation of deep beds of sludge through retrieval of C-Farm tanks. Possible mitigation methods (e.g., using mixer pumps to release gas, retrieving into an additional DST) are being evaluated by a second team and are not discussed in this report. While available data and engineering judgment indicate that increased gas retention (retained gas fraction) in DST sludge at depths resulting from the completion of SST 241-C Tank Farm retrievals is not expected and, even if gas releases were to occur, they would be small and local, a positive USQ was declared (Occurrence Report EM-RP--WRPS-TANKFARM-2012-0014, 'Potential Exists for a Large Spontaneous Gas Release Event in Deep Settled Waste Sludge'). The purpose of this technical

  3. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. Hand-designing an effective feature is a lengthy process, but, aiming at new applications, deep learning is able to acquire new effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data, instead of adopting handcrafted features, which mainly depend on prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, including millions of parameters. This paper first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.
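
    The contrast with handcrafted features can be made concrete with a minimal learned text-feature extractor: token embeddings followed by a 1-D convolution and max-pooling produce a fixed-length feature vector without any manually designed feature items. Vocabulary size and dimensions are illustrative.

    # Minimal sketch of learned (rather than handcrafted) text features.
    import torch
    import torch.nn as nn

    VOCAB, EMB, N_FILTERS = 10000, 64, 128

    class TextFeatureExtractor(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, EMB)
            self.conv = nn.Conv1d(EMB, N_FILTERS, kernel_size=3, padding=1)
        def forward(self, token_ids):
            e = self.embed(token_ids).transpose(1, 2)   # (batch, EMB, seq_len)
            h = torch.relu(self.conv(e))
            return h.max(dim=-1).values                 # (batch, N_FILTERS) features

    tokens = torch.randint(0, VOCAB, (4, 50))            # stand-in token id sequences
    features = TextFeatureExtractor()(tokens)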

  4. Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks.

    Science.gov (United States)

    Burt, Jeremy R; Torosdagli, Neslisah; Khosravan, Naji; RaviPrakash, Harish; Mortazi, Aliasghar; Tissavirasingham, Fiona; Hussein, Sarfaraz; Bagci, Ulas

    2018-04-10

    Deep learning has demonstrated tremendous revolutionary changes in the computing industry and its effects in radiology and imaging sciences have begun to dramatically change screening paradigms. Specifically, these advances have influenced the development of computer-aided detection and diagnosis (CAD) systems. These technologies have long been thought of as "second-opinion" tools for radiologists and clinicians. However, with significant improvements in deep neural networks, the diagnostic capabilities of learning algorithms are approaching levels of human expertise (radiologists, clinicians etc.), shifting the CAD paradigm from a "second opinion" tool to a more collaborative utility. This paper reviews recently developed CAD systems based on deep learning technologies for breast cancer diagnosis, explains their superiorities with respect to previously established systems, defines the methodologies behind the improved achievements including algorithmic developments, and describes remaining challenges in breast cancer screening and diagnosis. We also discuss possible future directions for new CAD models that continue to change as artificial intelligence algorithms evolve.

  5. Deep Borehole Disposal as an Alternative Concept to Deep Geological Disposal

    International Nuclear Information System (INIS)

    Lee, Jongyoul; Lee, Minsoo; Choi, Heuijoo; Kim, Kyungsu

    2016-01-01

    In this paper, the general concept and key technologies for deep borehole disposal of spent fuels or HLW, as an alternative to the mined geological disposal method, were reviewed. An analysis of the distance between boreholes for the disposal of HLW was then carried out. Based on the results, a disposal area was calculated approximately and compared with that of mined geological disposal. These results will be used as an input for the analyses of the applicability of DBD in Korea. The disposal safety of the mined geological disposal system has been demonstrated with underground research laboratories, and some advanced countries such as Finland and Sweden are implementing their disposal projects at a commercial stage. However, if the spent fuels or the high-level radioactive wastes can be disposed of at a depth of 3-5 km in a more stable rock formation, there are several advantages. Therefore, as an alternative to the mined deep geological disposal concept (DGD), very deep borehole disposal (DBD) technology is under consideration in a number of countries in terms of its outstanding safety and cost effectiveness. In this paper, the general concept of deep borehole disposal for spent fuels or high level radioactive wastes was reviewed, and the key technologies, such as drilling technology for large diameter boreholes, packaging and emplacement technology, sealing technology and performance/safety analysis technologies, and their challenges in the development of a deep borehole disposal system, were analyzed. Also, a very preliminary deep borehole disposal concept, including a disposal canister concept, was developed according to the nuclear environment in Korea
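
    The footprint comparison mentioned above amounts to simple arithmetic once a borehole spacing and a canister-per-borehole capacity are assumed: area ≈ (number of boreholes) × (spacing)². The sketch below uses purely illustrative numbers, not values from the Korean programme.

    # Hedged sketch of a borehole-array footprint estimate (illustrative numbers only).
    n_canisters = 20000          # assumed total number of disposal canisters
    canisters_per_borehole = 200 # assumed stack per 3-5 km borehole
    spacing_m = 200.0            # assumed center-to-center borehole spacing, m

    n_boreholes = -(-n_canisters // canisters_per_borehole)   # ceiling division
    area_km2 = n_boreholes * (spacing_m ** 2) / 1.0e6          # square-grid estimate
    print(f"{n_boreholes} boreholes -> ~{area_km2:.1f} km^2 disposal footprint")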

  6. Deep Borehole Disposal as an Alternative Concept to Deep Geological Disposal

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jongyoul; Lee, Minsoo; Choi, Heuijoo; Kim, Kyungsu [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this paper, the general concept and key technologies for deep borehole disposal of spent fuels or HLW, as an alternative to the mined geological disposal method, were reviewed. An analysis of the distance between boreholes for the disposal of HLW was then carried out. Based on the results, the disposal area was calculated approximately and compared with that of mined geological disposal. These results will be used as input for analyses of the applicability of DBD in Korea. The disposal safety of this system has been demonstrated in underground research laboratories, and some advanced countries such as Finland and Sweden are implementing their disposal projects at the commercial stage. However, if the spent fuels or the high-level radioactive wastes can be disposed of at a depth of 3-5 km in more stable rock formations, this offers several advantages. Therefore, as an alternative to the mined deep geological disposal concept (DGD), very deep borehole disposal (DBD) technology is under consideration in a number of countries in terms of its outstanding safety and cost effectiveness. In this paper, the general concept of deep borehole disposal for spent fuels or high-level radioactive wastes was reviewed. The key technologies, such as drilling technology for large-diameter boreholes, packaging and emplacement technology, sealing technology and performance/safety analysis technologies, and their challenges in the development of a deep borehole disposal system, were analyzed. Also, a very preliminary deep borehole disposal concept, including a disposal canister concept, was developed according to the nuclear environment in Korea.

  7. SEDS: THE SPITZER EXTENDED DEEP SURVEY. SURVEY DESIGN, PHOTOMETRY, AND DEEP IRAC SOURCE COUNTS

    International Nuclear Information System (INIS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Hernquist, L.; Hora, J. L.; Arendt, R.; Barmby, P.; Barro, G.; Faber, S.; Guhathakurta, P.; Bell, E. F.; Bouwens, R.; Cattaneo, A.; Croton, D.; Davé, R.; Dunlop, J. S.; Egami, E.; Finlator, K.; Grogin, N. A.

    2013-01-01

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg² to a depth of 26 AB mag (3σ) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 μm. Because of its uniform depth of coverage in so many widely-separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 ± 1.0 and 4.4 ± 0.8 nW m⁻² sr⁻¹ at 3.6 and 4.5 μm to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  8. Diverless pipeline repair system for deep water

    Energy Technology Data Exchange (ETDEWEB)

    Spinelli, Carlo M. [Eni Gas and Power, Milan (Italy); Fabbri, Sergio; Bachetta, Giuseppe [Saipem/SES, Venice (Italy)

    2009-07-01

    SiRCoS (Sistema Riparazione Condotte Sottomarine) is a diverless pipeline repair system composed of a suite of tools to perform a reliable subsea pipeline repair intervention in deep and ultra deep water, built on the long-lasting experience of Eni and Saipem in designing, laying and operating deep water pipelines. The key element of SiRCoS is a Connection System comprising two end connectors and a repair spool piece to replace a damaged pipeline section. A Repair Clamp with elastomeric seals is also available for local pipe damage. The Connection System is based on a pipe cold-forging process, consisting in swaging the pipe inside connectors with a suitable profile, by using high-pressure seawater. Three swaging operations have to be performed to replace the damaged pipe length. This technology has been developed through extensive theoretical work and laboratory testing, ending in a Type Approval by DNV over pipe sizes ranging from 20 inches to 48 inches OD. A complete SiRCoS system has been realised for the Green Stream pipeline, thoroughly tested in the workshop as well as in shallow water, and is now ready in the event of an emergency situation. The key functional requirements for the system are: diverless repair intervention and full piggability after repair. Eni owns this technology, which is now available to other operators under a Repair Club arrangement providing stand-by repair services carried out by Saipem Energy Services. The paper gives a description of the main features of the Repair System as well as an insight into the technological developments on pipe cold-forging reliability and long-term duration evaluation. (author)

  9. Deep Packet/Flow Analysis using GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Qian [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Wu, Wenji [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); DeMar, Phil [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2017-11-12

    Deep packet inspection (DPI) faces severe performance challenges in high-speed networks (40/100 GE) as it requires a large amount of raw computing power and high I/O throughputs. Recently, researchers have tentatively used GPUs to address the above issues and boost the performance of DPI. Typically, DPI applications involve highly complex operations at both the per-packet and per-flow data level, often in real time. The parallel architecture of GPUs fits exceptionally well for per-packet network traffic processing. However, for stateful network protocols such as TCP, their data streams need to be reconstructed at the per-flow level to deliver a consistent content analysis. Since the flow-centric operations are naturally antiparallel and often require large memory space for buffering out-of-sequence packets, they can be problematic for GPUs, whose memory is normally limited to several gigabytes. In this work, we present a highly efficient GPU-based deep packet/flow analysis framework. The proposed design includes purely GPU-implemented flow tracking and TCP stream reassembly. Instead of buffering and waiting for TCP packets to become in sequence, our framework processes the packets in batches and uses a deterministic finite automaton (DFA) with a prefix-/suffix-tree method to detect patterns across out-of-sequence packets that happen to be located in different batches. Evaluation shows that our code can reassemble and forward tens of millions of packets per second and conduct a stateful signature-based deep packet inspection at 55 Gbit/s using an NVIDIA K40 GPU.
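
    The core idea of carrying automaton state across packet boundaries can be illustrated on the CPU. The sketch below is a plain-Python stand-in: it builds a KMP-style DFA for a single signature and keeps the per-flow state between packets, so a pattern split across two payloads is still found. The GPU batching and the prefix-/suffix-tree handling of out-of-sequence packets described in the record are deliberately omitted; packet contents and the signature are made up for the example.

        # Sketch: a deterministic finite automaton (DFA) whose state is carried across
        # packet boundaries, so a signature split over two packets is still detected.
        def build_dfa(pattern: bytes):
            """KMP-style DFA: step(state, byte) -> next state; accepting state == len(pattern)."""
            m = len(pattern)
            fail = [0] * m
            k = 0
            for i in range(1, m):
                while k and pattern[i] != pattern[k]:
                    k = fail[k - 1]
                if pattern[i] == pattern[k]:
                    k += 1
                fail[i] = k

            def step(state: int, byte: int) -> int:
                while state and (state == m or pattern[state] != byte):
                    state = fail[state - 1]
                if state < m and pattern[state] == byte:
                    state += 1
                return state

            return step, m

        def scan_flow(packets, pattern: bytes):
            """Scan an in-order list of packet payloads, keeping per-flow DFA state."""
            step, accept = build_dfa(pattern)
            state = 0
            for pkt_no, payload in enumerate(packets):
                for offset, byte in enumerate(payload):
                    state = step(state, byte)
                    if state == accept:
                        return pkt_no, offset  # pattern ends here, may have started earlier
            return None

        # The signature "ATTACK" straddles the two packets below and is still found.
        hit = scan_flow([b"...ATT", b"ACK..."], b"ATTACK")
        print("match ending at (packet, offset):", hit)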

  10. Generating Seismograms with Deep Neural Networks

    Science.gov (United States)

    Krischer, L.; Fichtner, A.

    2017-12-01

    The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so-called generative models. Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms. The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three dimensional Earth structures result in comparatively small waveform differences from similar events or at close receivers and the networks learn to interpolate between training data samples. Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics. Last but not least we attempt to shed some light on the black-box nature of
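
    A compact sketch of the generator/discriminator competition described above is given below, using synthetic 1-D damped-sinusoid "traces" in place of seismograms and a 3-parameter condition vector in place of source/receiver parameters. Everything here (network sizes, training schedule, the synthetic data) is an illustrative assumption; it is not the architecture or data of the record, only a minimal conditional GAN training loop.

        # Compact conditional-GAN sketch on synthetic 1-D traces (illustrative only).
        import math
        import torch
        import torch.nn as nn

        n_samples, trace_len, cond_dim, z_dim = 256, 64, 3, 16
        t = torch.linspace(0, 1, trace_len)

        # Synthetic "training set": damped sinusoids whose frequency/amplitude/decay
        # play the role of the conditioning (source/receiver-like) parameters.
        cond = torch.rand(n_samples, cond_dim)
        freq, amp, decay = 2 + 8 * cond[:, :1], 0.5 + cond[:, 1:2], 1 + 4 * cond[:, 2:3]
        real = amp * torch.sin(2 * math.pi * freq * t) * torch.exp(-decay * t)

        G = nn.Sequential(nn.Linear(z_dim + cond_dim, 128), nn.ReLU(),
                          nn.Linear(128, trace_len))                       # generator
        D = nn.Sequential(nn.Linear(trace_len + cond_dim, 128), nn.LeakyReLU(0.2),
                          nn.Linear(128, 1))                                # discriminator
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        for step in range(500):
            # --- discriminator: push real traces toward 1, generated traces toward 0 ---
            z = torch.randn(n_samples, z_dim)
            fake = G(torch.cat([z, cond], dim=1)).detach()
            d_loss = bce(D(torch.cat([real, cond], dim=1)), torch.ones(n_samples, 1)) + \
                     bce(D(torch.cat([fake, cond], dim=1)), torch.zeros(n_samples, 1))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # --- generator: try to make the discriminator output 1 on generated traces ---
            z = torch.randn(n_samples, z_dim)
            fake = G(torch.cat([z, cond], dim=1))
            g_loss = bce(D(torch.cat([fake, cond], dim=1)), torch.ones(n_samples, 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

        print(f"d_loss={float(d_loss):.3f}  g_loss={float(g_loss):.3f}")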

  11. Deep vein thrombosis of the leg

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Hee; Rhee, Kwang Woo; Jeon, Suk Chul; Joo, Kyung Bin; Lee, Seung Ro; Seo, Heung Suk; Hahm, Chang Kok [College of Medicine, Hanyang University, Seoul (Korea, Republic of)

    1987-04-15

    Ascending contrast venography is the definitive standard method for the diagnosis of deep vein thrombosis (DVT) of the lower extremities. The authors analysed 22 cases of DVT clinically and radiographically. 1. The patients ranged in age from 15 to 70 yrs and the most prevalent age group was the 7th decade (31%). There was an equal distribution of males and females. 2. In 11 of 22 cases, variable etiologic factors were recognized, such as abdominal surgery, chronic bedridden state, local trauma on the leg, pregnancy, postpartum, Behcet's syndrome, iliac artery aneurysm, and chronic medication of estrogen. 3. Nineteen of 22 cases showed primary venographic signs of DVT, such as a well-defined filling defect in opacified veins and a narrowed, irregularly filled venous lumen. In only 3 cases, the diagnosis of DVT was based upon the segmental nonvisualization of deep veins with good opacification of proximal and distal veins and the presence of collaterals. 4. Extent of thrombosis: 3 cases were confined to the calf veins, 4 cases extended to the femoral vein, and 15 cases had involvement above the iliac vein. 5. In 17 cases involving a relatively long segment of deep veins, the propagation pattern of the thrombus was evaluated by its radiologic morphology according to the age of the thrombus: 9 cases suggested a central or antegrade propagation pattern and 8 cases a peripheral or retrograde pattern. 6. None of the 22 cases showed clinical evidence of pulmonary embolism. The cause of the rarity of pulmonary embolism in Koreans is presumed to be related to the difference in the major sites involved and the propagation pattern of DVT in the leg.

  12. Deep vein thrombosis of the leg

    International Nuclear Information System (INIS)

    Lee, Eun Hee; Rhee, Kwang Woo; Jeon, Suk Chul; Joo, Kyung Bin; Lee, Seung Ro; Seo, Heung Suk; Hahm, Chang Kok

    1987-01-01

    Ascending contrast venography is the definitive standard method for the diagnosis of deep vein thrombosis (DVT) of the lower extremities. The authors analysed 22 cases of DVT clinically and radiographically. 1. The patients ranged in age from 15 to 70 yrs and the most prevalent age group was the 7th decade (31%). There was an equal distribution of males and females. 2. In 11 of 22 cases, variable etiologic factors were recognized, such as abdominal surgery, chronic bedridden state, local trauma on the leg, pregnancy, postpartum, Behcet's syndrome, iliac artery aneurysm, and chronic medication of estrogen. 3. Nineteen of 22 cases showed primary venographic signs of DVT, such as a well-defined filling defect in opacified veins and a narrowed, irregularly filled venous lumen. In only 3 cases, the diagnosis of DVT was based upon the segmental nonvisualization of deep veins with good opacification of proximal and distal veins and the presence of collaterals. 4. Extent of thrombosis: 3 cases were confined to the calf veins, 4 cases extended to the femoral vein, and 15 cases had involvement above the iliac vein. 5. In 17 cases involving a relatively long segment of deep veins, the propagation pattern of the thrombus was evaluated by its radiologic morphology according to the age of the thrombus: 9 cases suggested a central or antegrade propagation pattern and 8 cases a peripheral or retrograde pattern. 6. None of the 22 cases showed clinical evidence of pulmonary embolism. The cause of the rarity of pulmonary embolism in Koreans is presumed to be related to the difference in the major sites involved and the propagation pattern of DVT in the leg

  13. Mammalian niche conservation through deep time.

    Directory of Open Access Journals (Sweden)

    Larisa R G DeSantis

    Climate change alters species distributions, causing plants and animals to move north or to higher elevations with current warming. Bioclimatic models predict species distributions based on extant realized niches and assume niche conservation. Here, we evaluate if proxies for niches (i.e., range areas) are conserved at the family level through deep time, from the Eocene to the Pleistocene. We analyze the occurrence of all mammalian families in the continental USA, calculating range area, percent range area occupied, range area rank, and range polygon centroids during each epoch. Percent range area occupied significantly increases from the Oligocene to the Miocene and again from the Pliocene to the Pleistocene; however, mammalian families maintain statistical concordance between rank orders across time. Families with greater taxonomic diversity occupy a greater percent of available range area during each epoch and net changes in taxonomic diversity are significantly positively related to changes in percent range area occupied from the Eocene to the Pleistocene. Furthermore, gains and losses in generic and species diversity are remarkably consistent with ~2.3 species gained per generic increase. Centroids demonstrate southeastern shifts from the Eocene through the Pleistocene that may correspond to major environmental events and/or climate changes during the Cenozoic. These results demonstrate range conservation at the family level and support the idea that niche conservation at higher taxonomic levels operates over deep time and may be controlled by life history traits. Furthermore, families containing megafauna and/or terminal Pleistocene extinction victims do not incur significantly greater declines in range area rank than families containing only smaller taxa and/or only survivors, from the Pliocene to Pleistocene. Collectively, these data evince the resilience of families to climate and/or environmental change in deep time, the absence of

  14. The Nature of Thinking, Shallow and Deep

    Directory of Open Access Journals (Sweden)

    Gary L. Brase

    2014-05-01

    Because the criteria for success differ across various domains of life, no single normative standard will ever work for all types of thinking. One method for dealing with this apparent dilemma is to propose that the mind is made up of a large number of specialized modules. This review describes how this multi-modular framework for the mind overcomes several critical conceptual and theoretical challenges to our understanding of human thinking, and hopefully clarifies what are (and are not) some of the implications based on this framework. In particular, an evolutionarily informed deep rationality conception of human thinking can guide psychological research out of clusters of ad hoc models which currently occupy some fields. First, the idea of deep rationality helps theoretical frameworks in terms of orienting themselves with regard to time scale references, which can alter the nature of rationality assessments. Second, the functional domains of deep rationality can be hypothesized (non-exhaustively) to include the areas of self-protection, status, affiliation, mate acquisition, mate retention, kin care, and disease avoidance. Thus, although there is no single normative standard of rationality across all of human cognition, there are sensible and objective standards by which we can evaluate multiple, fundamental, domain-specific motives underlying human cognition and behavior. This review concludes with two examples to illustrate the implications of this framework. The first example, decisions about having a child, illustrates how competing models can be understood by realizing that different fundamental motives guiding people’s thinking can sometimes be in conflict. The second example is that of personifications within modern financial markets (e.g., in the form of corporations), which are entities specifically constructed to have just one fundamental motive. This single focus is the source of both the strengths and flaws in how such entities

  15. Workshop on ROVs and deep submergence

    Science.gov (United States)

    The deep-submergence community has an opportunity on March 6 to participate in a unique teleconferencing demonstration of a state-of-the-art, remotely operated underwater research vehicle known as the Jason-Medea System. Jason-Medea has been developed over the past decade by scientists, engineers, and technicians at the Deep Submergence Laboratory at Woods Hole Oceanographic Institution. The U.S. Navy, the Office of the Chief of Naval Research, and the National Science Foundation are sponsoring the workshop to explore the roles that modern computational, communications, and robotics technologies can play in deep-sea oceanographic research.Through the cooperation of Electronic Data Systems, Inc., the Jason Foundation, and Turner Broadcasting System, Inc., 2-1/2 hours of air time will be available from 3:00 to 5:30 PM EST on March 6. Twenty-seven satellite downlink sites will link one operating research vessel and the land-based operation with workshop participants in the United States, Canada, the United Kingdom, and Bermuda. The research ship Laney Chouest will be in the midst of a 3-week educational/research program in the Sea of Cortez, between Baja California and mainland Mexico. This effort is focused on active hydrothermal vents driven by heat flow from the volcanically active East Pacific Rise, which underlies the sediment-covered Guaymas Basin. The project combines into a single-operation, newly-developed robotic systems, state-of-the-art mapping and sampling tools, fiber-optic data transmission from the seafloor, instantaneous satellite communication from ship to shore, and a sophisticated array of computational and telecommunications networks. During the workshop, land-based scientists will observe and participate directly with their seagoing colleagues as they conduct seafloor research.

  16. Cultivation Of Deep Subsurface Microbial Communities

    Science.gov (United States)

    Obrzut, Natalia; Casar, Caitlin; Osburn, Magdalena R.

    2018-01-01

    The potential habitability of surface environments on other planets in our solar system is limited by exposure to extreme radiation and desiccation. In contrast, subsurface environments may offer protection from these stressors and are potential reservoirs for liquid water and energy that support microbial life (Michalski et al., 2013) and are thus of interest to the astrobiology community. The samples used in this project were extracted from the Deep Mine Microbial Observatory (DeMMO) in the former Homestake Mine at depths of 800 to 2000 feet underground (Osburn et al., 2014). Phylogenetic data from these sites indicates the lack of cultured representatives within the community. We used geochemical data to guide media design to cultivate and isolate organisms from the DeMMO communities. Media used for cultivation varied from heterotrophic with oxygen, nitrate or sulfate to autotrophic media with ammonia or ferrous iron. Environmental fluid was used as inoculum in batch cultivation and strains were isolated via serial transfers or dilution to extinction. These methods resulted in isolating aerobic heterotrophs, nitrate reducers, sulfate reducers, ammonia oxidizers, and ferric iron reducers. DNA sequencing of these strains is underway to confirm which species they belong to. This project is part of the NASA Astrobiology Institute Life Underground initiative to detect and characterize subsurface microbial life; by characterizing the intraterrestrials, the life living deep within Earth’s crust, we aim to understand the controls on how and where life survives in subsurface settings. Cultivation of terrestrial deep subsurface microbes will provide insight into the survival mechanisms of intraterrestrials guiding the search for these life forms on other planets.

  17. The Newberry Deep Drilling Project (NDDP)

    Science.gov (United States)

    Bonneville, A.; Cladouhos, T. T.; Petty, S.; Schultz, A.; Sorle, C.; Asanuma, H.; Friðleifsson, G. Ó.; Jaupart, C. P.; Moran, S. C.; de Natale, G.

    2017-12-01

    We present the arguments to drill a deep well to the ductile/brittle transition zone (T>400°C) at Newberry Volcano, central Oregon state, U.S.A. The main research goals are related to heat and mass transfer in the crust from the point of view of natural hazards and geothermal energy: enhanced geothermal system (EGS supercritical and beyond-brittle), volcanic hazards, mechanisms of magmatic intrusions, geomechanics close to a magmatic system, calibration of geophysical imaging techniques and drilling in a high temperature environment. Drilling at Newberry will bring additional information to a very promising field of research initiated by ICDP in the Deep Drilling project in Iceland with IDDP-1 on Krafla in 2009, followed by IDDP-2 on the Reykjanes ridge in 2016, and the future Japan Beyond-Brittle project and Krafla Magma Testbed. Newberry Volcano contains one of the largest geothermal heat reservoirs in the western United States, extensively studied for the last 40 years. All the knowledge and experience collected make this an excellent choice for drilling a well that will reach high temperatures at relatively shallow depths (< 5000 m). The large conductive thermal anomaly (320°C at 3000 m depth), has already been well-characterized by extensive drilling and geophysical surveys. This will extend current knowledge from the existing 3000 m deep boreholes at the sites into and through the brittle-ductile transition approaching regions of partial melt like lateral dykes. The important scientific questions that will form the basis of a full drilling proposal, have been addressed during an International Continental Drilling Program (ICDP) workshop held in Bend, Oregon in September 2017. They will be presented and discussed as well as the strategic plan to address them.

  18. Towards testing quantum physics in deep space

    Science.gov (United States)

    Kaltenbaek, Rainer

    2016-07-01

    MAQRO is a proposal for a medium-sized space mission to use the unique environment of deep space in combination with novel developments in space technology and quantum technology to test the foundations of physics. The goal is to perform matter-wave interferometry with dielectric particles of up to 10¹¹ atomic mass units and to test for deviations from the predictions of quantum theory. Novel techniques from quantum optomechanics with optically trapped particles are to be used for preparing the test particles for these experiments. The core elements of the instrument are placed outside the spacecraft and insulated from the hot spacecraft via multiple thermal shields, allowing cryogenic temperatures to be achieved via passive cooling and ultra-high vacuum levels by venting to deep space. In combination with low force-noise microthrusters and inertial sensors, this makes it possible to realize an environment well suited for long coherence times of macroscopic quantum superpositions and long integration times. Since the original proposal in 2010, significant progress has been made in terms of technology development and in refining the instrument design. Based on these new developments, we submitted/will submit updated versions of the MAQRO proposal in 2015 and 2016 in response to Cosmic-Vision calls of ESA for a medium-sized mission. A central goal has been to address and overcome potentially critical issues regarding the readiness of core technologies and to provide realistic concepts for further technology development. We present the progress on the road towards realizing this ground-breaking mission harnessing deep space in novel ways for testing the foundations of physics, a technology pathfinder for macroscopic quantum technology and quantum optomechanics in space.

  19. The nature of thinking, shallow and deep.

    Science.gov (United States)

    Brase, Gary L

    2014-01-01

    Because the criteria for success differ across various domains of life, no single normative standard will ever work for all types of thinking. One method for dealing with this apparent dilemma is to propose that the mind is made up of a large number of specialized modules. This review describes how this multi-modular framework for the mind overcomes several critical conceptual and theoretical challenges to our understanding of human thinking, and hopefully clarifies what are (and are not) some of the implications based on this framework. In particular, an evolutionarily informed "deep rationality" conception of human thinking can guide psychological research out of clusters of ad hoc models which currently occupy some fields. First, the idea of deep rationality helps theoretical frameworks in terms of orienting themselves with regard to time scale references, which can alter the nature of rationality assessments. Second, the functional domains of deep rationality can be hypothesized (non-exhaustively) to include the areas of self-protection, status, affiliation, mate acquisition, mate retention, kin care, and disease avoidance. Thus, although there is no single normative standard of rationality across all of human cognition, there are sensible and objective standards by which we can evaluate multiple, fundamental, domain-specific motives underlying human cognition and behavior. This review concludes with two examples to illustrate the implications of this framework. The first example, decisions about having a child, illustrates how competing models can be understood by realizing that different fundamental motives guiding people's thinking can sometimes be in conflict. The second example is that of personifications within modern financial markets (e.g., in the form of corporations), which are entities specifically constructed to have just one fundamental motive. This single focus is the source of both the strengths and flaws in how such entities behave.

  20. Deep soft tissue leiomyoma of the thigh

    Energy Technology Data Exchange (ETDEWEB)

    Watson, G.M.T.; Saifuddin, A. [Department of Radiology, The Royal National Orthopaedic Hospital Trust, Brockley Hill (United Kingdom); Sandison, A. [Department of Pathology, The Royal National Orthopaedic Hospital Trust, Stanmore, Middlesex (United Kingdom)

    1999-07-01

    A case of ossified leiomyoma of the deep soft tissues of the left thigh is presented. The radiographic appearance suggested a low-grade chondrosarcoma. MRI of the lesion showed signal characteristics similar to muscle on both T1- and T2-weighted spin echo sequences with linear areas of high signal intensity on T1-weighted images consistent with medullary fat in metaplastic bone. Histopathological examination of the resected specimen revealed a benign ossified soft tissue leiomyoma. (orig.) With 3 figs., 13 refs.

  1. Repository and deep borehole disposition of plutonium

    International Nuclear Information System (INIS)

    Halsey, W.G.

    1996-02-01

    Control and disposition of excess weapons plutonium is a growing issue as both the US and Russia retire a large number of nuclear weapons. A variety of options are under consideration to ultimately dispose of this material. Permanent disposition includes two broad categories: direct Pu disposal, where the material is considered waste and disposed of, and Pu utilization, where the potential energy content of the material is exploited via fissioning. The primary alternative to a high-level radioactive waste repository for the ultimate disposal of plutonium is development of a custom geologic facility. A variety of geologic facility types have been considered, but the concept currently being assessed is the deep borehole

  2. Top tagging with deep neural networks [Vidyo]

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image based techniques or multivariate approaches using high level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.
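
    A minimal sketch of a sequence-based jet classifier is shown below: each time step is one energy deposit, and a recurrent network summarizes the ordered sequence before a linear head produces top-vs-QCD logits. The feature set, ordering, network size, and toy data are placeholder assumptions rather than the architecture used in the record.

        # Sketch: classify a jet from an ordered sequence of constituent four-vectors.
        import torch
        import torch.nn as nn

        class SequenceTagger(nn.Module):
            def __init__(self, feat_dim=4, hidden=64):
                super().__init__()
                # each time step is one energy deposit, e.g. (pT, eta, phi, E), ordered by pT
                self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)   # top jet vs. QCD jet

            def forward(self, constituents):       # (batch, n_constituents, feat_dim)
                _, (h_n, _) = self.rnn(constituents)
                return self.head(h_n[-1])          # logits from the final hidden state

        model = SequenceTagger()
        jets = torch.randn(8, 30, 4)               # 8 toy jets, 30 ordered deposits each
        print(model(jets).shape)                   # torch.Size([8, 2])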

  3. A medium energy neutron deep penetration experiment

    International Nuclear Information System (INIS)

    Amian, W.; Cloth, P.; Druecke, V.; Filges, D.; Paul, N.; Schaal, H.

    1986-11-01

    A deep penetration experiment conducted at the Los Alamos WNR facility's Spallation Neutron Target is compared with calculations using intra-nuclear-cascade and S_N-transport codes installed at KFA-IRE. In the experiment medium energy reactions induced by neutrons between 15 MeV and about 150 MeV inside a quasi infinite slab of iron have been measured using copper foil monitors. Details of the experimental procedure and the theoretical methods are described. A comparison of absolute reaction rates for both experimentally and theoretically derived reactions is given. The present knowledge of the corresponding monitor reaction cross sections is discussed. (orig.)

  4. Intelligent Shimming for Deep Drawing Processes

    DEFF Research Database (Denmark)

    Tommerup, Søren; Endelt, Benny Ørtoft; Danckert, Joachim

    2011-01-01

    cavities the blank-holder force distribution can be controlled during the punch stroke. By means of a sequence of numerical simulations abrasive wear is imposed on the deep drawing of a rectangular cup. The abrasive wear is modelled by changing the tool surface geometry using an algorithm based...... on the sliding energy density. As the tool surfaces are changed the material draw-in is significantly altered when using conventional open-loop control of the blank-holder force. A feed-back controller is presented which is capable of reducing the draw-in difference to a certain degree. Further a learning
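
    The closed-loop idea contrasted with open-loop control in the record can be illustrated with a toy proportional feedback loop: the blank-holder force is adjusted stroke by stroke to drive a measured draw-in toward a reference value while tool wear drifts the process. The "plant" model, gain, and units below are invented for the sketch and have no connection to the actual controller in the record.

        # Toy closed-loop control of blank-holder force (illustrative plant and gain only).
        def draw_in(force, wear):
            """Fictitious process model: more force -> less draw-in; wear increases draw-in."""
            return 30.0 - 0.04 * force + 2.0 * wear

        target = 22.0          # desired draw-in (mm)
        force = 150.0          # initial blank-holder force (kN)
        gain = 5.0             # proportional gain (kN per mm of error)

        for stroke in range(10):
            wear = 0.3 * stroke                     # tool wear grows stroke by stroke
            measured = draw_in(force, wear)
            error = measured - target
            force += gain * error                   # raise the force if draw-in is too large
            print(f"stroke {stroke}: draw-in={measured:5.2f} mm, force={force:6.1f} kN")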

  5. Deep inelastic scattering and asymptotic freedom

    International Nuclear Information System (INIS)

    Nachtmann, O.

    1985-01-01

    I recall some facets of the history of the field of deep inelastic scattering. I show how there was a very fruitful interplay between phenomenology on the one side and more abstract field theoretical considerations on the other side, where Kurt Symanzik, whose memory we honour today, made important contributions. Finally I make some remarks on the most recent developments in this field which have to do with the so-called EMC-effect, where EMC stands for European Muon Collaboration. (orig./HSI)

  6. Deep inelastic scattering of heavy ions

    International Nuclear Information System (INIS)

    Brink, D.M.

    1980-01-01

    These lectures developed path integral methods for use in the theory of heavy ion reactions. The effects of internal degrees of freedom on the relative motion were contained in an influence functional which was calculated for several simple models of the internal structure. In each model the influence functional had a simple Gaussian structure suggesting that the relative motion of the nuclei in a deep inelastic collision could be described by a Langevin equation. The form of the influence functional determines the average damping force and the correlation function of the fluctuating Langevin force. (author)

  7. Deep inelastic scattering of heavy ions

    International Nuclear Information System (INIS)

    Brink, D.M.

    1980-01-01

    These lecture notes show how path integral methods can be used in the theory of heavy ion reactions. The effects of internal degrees of freedom on the relative motion are contained in an influence functional which is calculated for several simple models of the internal structure. In each model the influence functional has a simple Gaussian structure which suggests that the relative motion of the nuclei in a deep inelastic collision can be described by a Langevin equation. The form of the influence functional determines the average damping force and the correlation function of the fluctuating Langevin force. (author)
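
    For reference, the generic Langevin form implied by these two abstracts (a conservative force, a damping term, and a fluctuating force whose correlation function is delta-correlated) can be written as below; the symbols and the fluctuation-dissipation normalization are the standard textbook choices, not the specific coefficients derived in the lectures.

        \mu\,\ddot{\mathbf r} \;=\; -\,\nabla V(\mathbf r)\;-\;\gamma(\mathbf r)\,\dot{\mathbf r}\;+\;\mathbf F(t),
        \qquad
        \langle \mathbf F(t) \rangle = 0,
        \qquad
        \langle F_i(t)\,F_j(t') \rangle = 2\,\gamma(\mathbf r)\,T\,\delta_{ij}\,\delta(t-t')

    Here μ is the reduced mass of the colliding ions, V the conservative interaction potential, γ the friction coefficient (the average damping force per unit velocity), and T an effective temperature characterizing the strength of the fluctuating force.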

  8. Colored condensates deep inside neutron stars

    Directory of Open Access Journals (Sweden)

    Blaschke David

    2014-01-01

    It is demonstrated how in the absence of solutions for QCD under conditions deep inside compact stars an equation of state can be obtained within a model that is built on the basic symmetries of the QCD Lagrangian, in particular chiral symmetry and color symmetry. While in the vacuum the chiral symmetry is spontaneously broken, it gets restored at high densities. Color symmetry, however, gets broken simultaneously by the formation of colorful diquark condensates. It is shown that a strong diquark condensate in cold dense quark matter is essential for supporting the possibility that such states could exist in the recently observed pulsars with masses of 2 M⊙.

  9. A 6-cm deep sky survey

    International Nuclear Information System (INIS)

    Fomalont, E.B.; Kellermann, K.I.; Wall, J.V.

    1983-01-01

    In order to extend radio source counts to lower flux density, the authors have used the VLA to survey a small region of sky at 4.885 GHz (6 cm) to a limiting flux density of 50 μJy. Details of this deep survey are given in the paper by Kellermann et al. (these proceedings). In addition, they have observed 10 other nearby fields to a limiting flux density of 350 μJy in order to provide better statistics on sources of intermediate flux density. (Auth.)

  10. Hadron final states in deep inelastic processes

    International Nuclear Information System (INIS)

    Bjorken, J.D.

    1976-05-01

    Lectures are presented dealing mainly with the description and discussion of hadron final states in electroproduction, colliding beams, and neutrino reactions from the point of view of the simple parton model. Also the space-time evolution of final states in the parton model is considered. It is found that the picture of space-time evolution of hadron final states in deep inelastic processes isn't totally trivial and that it can be made consistent with the hypotheses of the parton model. 39 references

  11. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  12. Deep brain stimulation for Tourette syndrome.

    Science.gov (United States)

    Kim, Won; Pouratian, Nader

    2014-01-01

    Gilles de la Tourette syndrome is a movement disorder characterized by repetitive stereotyped motor and phonic movements with varying degrees of psychiatric comorbidity. Deep brain stimulation (DBS) has emerged as a novel therapeutic intervention for patients with refractory Tourette syndrome. Since 1999, more than 100 patients have undergone DBS at various targets within the corticostriatothalamocortical network thought to be implicated in the underlying pathophysiology of Tourette syndrome. Future multicenter clinical trials and the use of a centralized online database to compare the results are necessary to determine the efficacy of DBS for Tourette syndrome. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. National Grid Deep Energy Retrofit Pilot

    Energy Technology Data Exchange (ETDEWEB)

    Neuhauser, K. [Building Science Corporation (BSC), Somerville, MA (United States)

    2012-03-01

    Through discussion of five case studies (test homes), this project evaluates strategies to elevate the performance of existing homes to a level commensurate with best-in-class implementation of high-performance new construction homes. The test homes featured in this research activity participated in Deep Energy Retrofit (DER) Pilot Program sponsored by the electric and gas utility National Grid in Massachusetts and Rhode Island. Building enclosure retrofit strategies are evaluated for impact on durability and indoor air quality in addition to energy performance.

  14. Deep Secrets of the Neutrino: Physics Underground

    Energy Technology Data Exchange (ETDEWEB)

    Rowson, P.C.

    2010-03-23

    Among the many beautiful, unexpected and sometimes revolutionary discoveries to emerge from subatomic physics, probably none is more bizarre than an elementary particle known as the 'neutrino'. More than a trillion of these microscopic phantoms pass unnoticed through our bodies every second, and indeed, through the entire Earth - but their properties remain poorly understood. In recent years, exquisitely sensitive experiments, often conducted deep below ground, have brought neutrino physics to the forefront. In this talk, we will explore the neutrino - what we know, what we want to know, and how one experiment in a New Mexico mine is trying to get there.

  15. Silicon germanium mask for deep silicon etching

    KAUST Repository

    Serry, Mohamed

    2014-07-29

    Polycrystalline silicon germanium (SiGe) can offer excellent etch selectivity to silicon during cryogenic deep reactive ion etching in an SF₆/O₂ plasma. Etch selectivity of over 800:1 (Si:SiGe) may be achieved at etch temperatures from -80 degrees Celsius to -140 degrees Celsius. High aspect ratio structures with high resolution may be patterned into Si substrates using SiGe as a hard mask layer for construction of microelectromechanical systems (MEMS) devices and semiconductor devices.

  16. Deep seismic sounding in northern Eurasia

    Science.gov (United States)

    Benz, H.M.; Unger, J.D.; Leith, W.S.; Mooney, W.D.; Solodilov, L.; Egorkin, A.V.; Ryaboy, V.Z.

    1992-01-01

    For nearly 40 years, the former Soviet Union has carried out an extensive program of seismic studies of the Earth's crust and upper mantle, known as “Deep Seismic Sounding” or DSS [Piwinskii, 1979; Zverev and Kosminskaya, 1980; Egorkin and Pavlenkova, 1981; Egorkin and Chernyshov, 1983; Scheimer and Borg, 1985]. Beginning in 1939–1940 with a series of small-scale seismic experiments near Moscow, DSS profiling has broadened into a national multiinstitutional exploration effort that has completed almost 150,000 km of profiles covering all major geological provinces of northern Eurasia [Ryaboy, 1989].

  17. Deep Learning for Distribution Channels' Management

    Directory of Open Access Journals (Sweden)

    Sabina-Cristiana NECULA

    2017-01-01

    This paper presents an experiment in using deep learning models for distribution channel management. We present an approach that combines self-organizing maps with an artificial neural network with multiple hidden layers in order to identify the potential sales that might be addressed for channel distribution change/management. Our study aims to highlight the evolution of techniques from simple features/learners to more complex learners and feature engineering or sampling techniques. This paper will allow researchers to choose the best-suited techniques and features to prepare their churn prediction models.
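
    A small sketch of the combination described above is given below: a self-organizing map (implemented in a few lines of NumPy) segments customers, and an MLP with multiple hidden layers from scikit-learn is then trained on the raw features plus the segment label. The data, grid size, and labels are synthetic placeholders, not the features or model of the record.

        # Sketch: SOM for customer segmentation + MLP for prediction (synthetic data).
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))                     # 500 customers, 6 sales features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # toy "channel switch" label

        # --- tiny self-organizing map: a 4x4 grid of prototype vectors ---
        coords = np.array([[i, j] for i in range(4) for j in range(4)])  # grid positions
        grid = rng.normal(size=(16, 6))
        for epoch in range(20):
            lr = 0.5 * (1 - epoch / 20)
            sigma = 2.0 * (1 - epoch / 20) + 0.5
            for x in X:
                bmu = np.argmin(((grid - x) ** 2).sum(axis=1))           # best-matching unit
                dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)        # grid distances to BMU
                h = np.exp(-dist2 / (2 * sigma ** 2))[:, None]           # Gaussian neighbourhood
                grid += lr * h * (x - grid)                              # pull prototypes toward sample
        segment = np.array([np.argmin(((grid - x) ** 2).sum(axis=1)) for x in X])

        # --- MLP with multiple hidden layers, fed the raw features plus the SOM segment ---
        features = np.hstack([X, segment[:, None]])
        clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
        clf.fit(features, y)
        print("training accuracy:", clf.score(features, y))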

  18. Radiative corrections to deep inelastic muon scattering

    International Nuclear Information System (INIS)

    Akhundov, A.A.; Bardin, D.Yu.; Lohman, W.

    1986-01-01

    A summary is given of the most recent results for the calculation of radiative corrections to deep inelastic muon-nucleon scattering. Contributions from leptonic electromagnetic processes up to order α⁴, vacuum polarization by leptons and hadrons, hadronic electromagnetic processes of approximately order α³, and γZ interference have been taken into account. The dependence of the individual contributions on kinematical variables is studied. Contributions not considered in earlier calculations of radiative corrections reach in certain kinematical regions several per cent at energies above 100 GeV

  19. Visual Pretraining for Deep Q-Learning

    OpenAIRE

    Sandven, Torstein

    2016-01-01

    Recent advances in reinforcement learning enable computers to learn human-level policies for Atari 2600 games. This is done by training a convolutional neural network to play based on screenshots and in-game rewards. The network is referred to as a deep Q-network (DQN). The main disadvantage to this approach is a long training time. A computer will typically learn for approximately one week. In this time it processes 38 days of game play. This thesis explores the possibility of using visual pr...
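
    The basic structure of a DQN with a visually pretrained front end can be sketched as follows: a convolutional encoder (which could be pretrained separately, for example as an autoencoder on raw frames) is reused as the feature extractor in front of the Q-value head. Layer sizes, the frame stack, and the action count below are illustrative assumptions, not the thesis's setup, and the training loop (replay buffer, target network) is omitted.

        # Sketch: reuse a visually pretrained conv encoder as the front of a Q-network.
        import torch
        import torch.nn as nn

        encoder = nn.Sequential(                      # could be pretrained, e.g. as an autoencoder
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )

        with torch.no_grad():                         # infer the flattened feature size
            feat_dim = encoder(torch.zeros(1, 4, 84, 84)).shape[1]

        n_actions = 6
        q_network = nn.Sequential(encoder, nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, n_actions))

        frames = torch.rand(2, 4, 84, 84)             # stacks of 4 grayscale 84x84 frames
        q_values = q_network(frames)                  # one Q-value per action
        print(q_values.shape, q_values.argmax(dim=1)) # greedy action per input stack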

  20. Radionuclide Geomicrobiology of the Deep Biosphere

    DEFF Research Database (Denmark)

    Anderson, Craig; Johnsson, Anna; Moll, Henry

    2011-01-01

    This review summarizes research into interactions between microorganisms and radionuclides under conditions typical of a repository for high-level radioactive waste in deep hard rock environments at a depth of approximately 500 m. The cell-radionuclide interactions of strains of two bacterial...... aerobic bacterial metabolism, microbial proliferation, biofilm development, and iron oxide formation. In these environments, the stalk-forming bacterium Gallionella may act as a scaffold for iron oxide precipitation on biological material. In situ work in the Aspo Hard Rock Laboratory tunnel indicated......

  1. Deep web query interface understanding and integration

    CERN Document Server

    Dragut, Eduard C; Yu, Clement T

    2012-01-01

    There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech

  2. The consumption of electricity in Deep River

    International Nuclear Information System (INIS)

    Phillips, G.J.

    1981-08-01

    This report records the development of the use of electricity in the Town of Deep River, Ontario since its incorporation in 1958. Electricity use is expressed as peak power demand, and as energy use by various classes of customers. Relevant data on population, weather and fuel prices are included. The accelerating displacement of fuel oil by electricity as an energy source for space heating is clearly demonstrated. The study was undertaken to document the effects of a large and rapidly growing seasonal load on the operation of a small electrical utility, and to provide background information for an experimental study of hybrid oil/electric space heating

  3. Current Status of Deep Geological Repository Development

    International Nuclear Information System (INIS)

    Budnitz, R J

    2005-01-01

    This talk provided an overview of the current status of deep-geological-repository development worldwide. Its principal observation is that a broad consensus exists internationally that deep-geological disposal is the only long-term solution for disposition of highly radioactive nuclear waste. Also, it is now clear that the institutional and political aspects are as important as the technical aspects in achieving overall progress. Different nations have taken different approaches to overall management of their highly radioactive wastes. Some have begun active programs to develop a deep repository for permanent disposal: the most active such programs are in the United States, Sweden, and Finland. Other countries (including France and Russia) are still deciding on whether to proceed quickly to develop such a repository, while still others (including the UK, China, Japan) have affirmatively decided to delay repository development for a long time, typically for a generation or two. In recent years, a major conclusion has been reached around the world that there is very high confidence that deep repositories can be built, operated, and closed safely and can meet whatever safety requirements are imposed by the regulatory agencies. This confidence, which has emerged in the last few years, is based on extensive work around the world in understanding how repositories behave, including both the engineering aspects and the natural-setting aspects, and how they interact together. The construction of repositories is now understood to be technically feasible, and no major barriers have been identified that would stand in the way of a successful project. Another major conclusion around the world is that the overall cost of a deep repository is not as high as some had predicted or feared. While the actual cost will not be known in detail until the costs are incurred, the general consensus is that the total life-cycle cost will not exceed a few percent of the value of the

  4. Identifying QCD Transition Using Deep Learning

    Science.gov (United States)

    Zhou, Kai; Pang, Long-gang; Su, Nan; Petersen, Hannah; Stoecker, Horst; Wang, Xin-Nian

    2018-02-01

    In this proceeding we review our recent work using supervised learning with a deep convolutional neural network (CNN) to identify the QCD equation of state (EoS) employed in hydrodynamic modeling of heavy-ion collisions given only final-state particle spectra ρ(pT, V). We showed that there is a traceable encoder of the dynamical information from phase structure (EoS) that survives the evolution and exists in the final snapshot, which enables the trained CNN to act as an effective "EoS-meter" in detecting the nature of the QCD transition.
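
    The "EoS-meter" idea, a convolutional classifier reading a discretized final-state particle spectrum, can be sketched compactly as below. The input grid size, layer structure, and two-class output (crossover vs. first-order transition) are placeholder assumptions for illustration, not the network of the record.

        # Sketch: CNN classifier over a 2-D particle spectrum (e.g. binned in pT and azimuth).
        import torch
        import torch.nn as nn

        class EoSMeter(nn.Module):
            def __init__(self, n_classes=2):            # e.g. crossover vs. first-order EoS
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, spectrum):                 # (batch, 1, n_pT_bins, n_phi_bins)
                return self.classifier(self.features(spectrum).flatten(1))

        spectra = torch.rand(4, 1, 24, 24)               # four toy binned spectra
        print(EoSMeter()(spectra).shape)                 # torch.Size([4, 2])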

  5. Deep space optical communication via relay satellite

    Science.gov (United States)

    Dolinar, S.; Vilnrotter, V.; Gagliardi, R.

    1981-01-01

    The application of optical communications for a deep space link via an earth-orbiting relay satellite is discussed. The system uses optical frequencies for the free-space channel and RF links for atmospheric transmission. The relay satellite is in geostationary orbit and contains the optics necessary for data processing and formatting. It returns the data to earth through the RF terrestrial link and also transmits an optical beacon to the satellite for spacecraft return pointing and for the alignment of the transmitting optics. Future work will turn to modulation and coding, pointing and tracking, and optical-RF interfacing.

  6. Silicon germanium mask for deep silicon etching

    KAUST Repository

    Serry, Mohamed; Rubin, Andrew; Refaat, Mohamed; Sedky, Sherif; Abdo, Mohammad

    2014-01-01

    Polycrystalline silicon germanium (SiGe) can offer excellent etch selectivity to silicon during cryogenic deep reactive ion etching in an SF.sub.6/O.sub.2 plasma. Etch selectivity of over 800:1 (Si:SiGe) may be achieved at etch temperatures from -80 degrees Celsius to -140 degrees Celsius. High aspect ratio structures with high resolution may be patterned into Si substrates using SiGe as a hard mask layer for construction of microelectromechanical systems (MEMS) devices and semiconductor devices.

  7. pDeep: Predicting MS/MS Spectra of Peptides with Deep Learning.

    Science.gov (United States)

    Zhou, Xie-Xuan; Zeng, Wen-Feng; Chi, Hao; Luo, Chunjie; Liu, Chao; Zhan, Jianfeng; He, Si-Min; Zhang, Zhifei

    2017-12-05

    In tandem mass spectrometry (MS/MS)-based proteomics, search engines rely on comparison between an experimental MS/MS spectrum and the theoretical spectra of the candidate peptides. Hence, accurate prediction of the theoretical spectra of peptides appears to be particularly important. Here, we present pDeep, a deep neural network-based model for the spectrum prediction of peptides. Using the bidirectional long short-term memory (BiLSTM), pDeep can predict higher-energy collisional dissociation, electron-transfer dissociation, and electron-transfer and higher-energy collision dissociation MS/MS spectra of peptides with >0.9 median Pearson correlation coefficients. Further, we showed that the intermediate layer of the neural network could reveal physicochemical properties of amino acids, for example the similarities of fragmentation behaviors between amino acids. We also showed the potential of pDeep to distinguish extremely similar peptides (peptides that contain isobaric amino acids, for example, GG = N, AG = Q, or even I = L), which were very difficult to distinguish using traditional search engines.
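
    A minimal BiLSTM regressor that maps a peptide, encoded as a sequence of amino-acid indices, to per-position fragment-ion intensities is sketched below. The encoding, the four output ion channels, and the layer sizes are illustrative assumptions that only convey the BiLSTM idea; they are not pDeep's actual input features, outputs, or architecture.

        # Sketch: BiLSTM mapping an amino-acid sequence to per-position fragment intensities.
        import torch
        import torch.nn as nn

        class SpectrumPredictor(nn.Module):
            def __init__(self, n_amino_acids=20, embed=32, hidden=64, n_ion_types=4):
                super().__init__()
                self.embed = nn.Embedding(n_amino_acids, embed)
                self.bilstm = nn.LSTM(embed, hidden, batch_first=True, bidirectional=True)
                self.out = nn.Linear(2 * hidden, n_ion_types)  # e.g. b/y ions at two charges

            def forward(self, peptide_ids):                    # (batch, peptide_length)
                h, _ = self.bilstm(self.embed(peptide_ids))
                return self.out(h)                             # (batch, length, n_ion_types)

        peptides = torch.randint(0, 20, (3, 12))               # three toy 12-residue peptides
        intensities = SpectrumPredictor()(peptides)
        print(intensities.shape)                               # torch.Size([3, 12, 4])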

  8. DeepVel: Deep learning for the estimation of horizontal velocities at the solar surface

    Science.gov (United States)

    Asensio Ramos, A.; Requerey, I. S.; Vitas, N.

    2017-07-01

    Many phenomena taking place in the solar photosphere are controlled by plasma motions. Although the line-of-sight component of the velocity can be estimated using the Doppler effect, we do not have direct spectroscopic access to the components that are perpendicular to the line of sight. These components are typically estimated using methods based on local correlation tracking. We have designed DeepVel, an end-to-end deep neural network that produces an estimation of the velocity at every single pixel, every time step, and at three different heights in the atmosphere from just two consecutive continuum images. We confront DeepVel with local correlation tracking, pointing out that they give very similar results in the time and spatially averaged cases. We use the network to study the evolution in height of the horizontal velocity field in fragmenting granules, supporting the buoyancy-braking mechanism for the formation of integranular lanes in these granules. We also show that DeepVel can capture very small vortices, so that we can potentially expand the scaling cascade of vortices to very small sizes and durations. The movie attached to Fig. 3 is available at http://www.aanda.org
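
    The input/output structure described for DeepVel (two consecutive continuum images in, a per-pixel velocity estimate at three heights out) can be sketched with a small fully-convolutional model, shown below. The layer count, channel widths, and image size are placeholders; the real DeepVel architecture is not reproduced here.

        # Sketch: per-pixel velocities from two consecutive continuum images.
        import torch
        import torch.nn as nn

        # Input:  2 channels = continuum image at t and t+dt
        # Output: 6 channels = (vx, vy) at three atmospheric heights
        velocity_net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 6, 3, padding=1),
        )

        frames = torch.rand(1, 2, 128, 128)           # one pair of 128x128 continuum frames
        velocities = velocity_net(frames)
        print(velocities.shape)                       # torch.Size([1, 6, 128, 128])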

  9. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network

    Science.gov (United States)

    He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang

    2017-03-01

    Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. The traditional empirical Gabor filters and shallow iris encoding ways are incapable of dealing with such complex variations in iris imaging including illumination, aging, deformation, and device variations. Thereby, an adaptive Gabor filter selection strategy and deep learning architecture are presented. We first employ a particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels for fitting the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients by a trained deep belief network. A succession of comparative experiments validate that our optimal Gabor filters may produce more distinctive Gabor coefficients and that our deep iris representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
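
    The kind of filtering whose parameters the record tunes with particle swarm optimization is sketched below: a small Gabor kernel is built in NumPy and convolved with an iris-like patch, and the filter-response signs form a coarse binary code. The kernel parameters, patch, and four orientations are illustrative defaults; the PSO search and the deep belief network are not included.

        # Sketch: build a Gabor kernel bank and filter an image patch with it.
        import numpy as np
        from scipy.signal import convolve2d

        def gabor_kernel(size=21, sigma=4.0, theta=0.0, wavelength=8.0, gamma=0.5):
            """Real part of a 2-D Gabor filter with orientation theta (radians)."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            x_t = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
            y_t = -x * np.sin(theta) + y * np.cos(theta)
            envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
            carrier = np.cos(2 * np.pi * x_t / wavelength)
            return envelope * carrier

        patch = np.random.rand(64, 64)                          # stand-in for an iris patch
        bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
        responses = [convolve2d(patch, k, mode="same") for k in bank]
        codes = np.sign(np.stack(responses))                    # coarse binary iris-like code
        print(codes.shape)                                       # (4, 64, 64)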

  10. Deep lake water cooling a renewable technology

    Energy Technology Data Exchange (ETDEWEB)

    Eliadis, C.

    2003-06-01

    In the face of increasing electrical demand for air conditioning, the damage to the ozone layer by CFCs used in conventional chillers, and efforts to reduce the greenhouse gases emitted into the atmosphere by coal-fired power generating stations more and more attention is focused on developing alternative strategies for sustainable energy. This article describes one such strategy, namely deep lake water cooling, of which the Enwave project recently completed on the north shore of Lake Ontario is a prime example. The Enwave Deep Lake Water Cooling (DLWC) project is a joint undertaking by Enwave and the City of Toronto. The $180 million project is unique in design and concept, using the coldness of the lake water from the depths of Lake Ontario (not the water itself) to provide environmentally friendly air conditioning to office towers. Concurrently, the system also provides improved quality raw cold water to the city's potable water supply. The plant has a rated capacity of 52,200 tons of refrigeration. The DLWC project is estimated to save 75-90 per cent of the electricity that would have been generated by a coal-fired power station. Enwave, established over 20 years ago, is North America's largest district energy system, delivering steam, hot water and chilled water to buildings from a central plant via an underground piping distribution network. 2 figs.

  11. Matrix completion by deep matrix factorization.

    Science.gov (United States)

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion but there still exist considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
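
    The central mechanism, trainable latent inputs and a multilayer decoder optimized jointly against only the observed entries, can be written in a few lines; the sketch below uses a synthetic matrix, an invented mask fraction, and arbitrary layer sizes, so it shows the idea rather than the DMF configuration reported in the paper.

        # Sketch: nonlinear matrix completion with trainable latent inputs + an MLP decoder.
        import torch
        import torch.nn as nn

        n_rows, n_cols, latent_dim = 100, 40, 5
        torch.manual_seed(0)

        # Synthetic nonlinear low-rank-ish matrix with 60% of entries observed.
        truth = torch.sin(torch.randn(n_rows, latent_dim) @ torch.randn(latent_dim, n_cols))
        mask = torch.rand(n_rows, n_cols) < 0.6

        Z = nn.Parameter(torch.randn(n_rows, latent_dim) * 0.1)   # latent variables, one per row
        decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                nn.Linear(64, n_cols))             # maps latents to a full row
        opt = torch.optim.Adam([Z] + list(decoder.parameters()), lr=1e-2)

        for step in range(2000):
            opt.zero_grad()
            recon = decoder(Z)
            loss = ((recon - truth)[mask] ** 2).mean()             # fit observed entries only
            loss.backward()
            opt.step()

        missing_err = ((decoder(Z) - truth)[~mask] ** 2).mean()    # error on unobserved entries
        print(f"observed MSE={float(loss):.4f}  missing MSE={float(missing_err):.4f}")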

  12. Deep geological disposal research in Argentina

    International Nuclear Information System (INIS)

    Ninci Martinez, Carlos A.; Ferreyra, Raul E.; Vullien, Alicia R.; Elena, Oscar; Lopez, Luis E.; Maloberti, Alejandro; Nievas, Humberto O.; Reyes, Nancy C.; Zarco, Juan J.; Bevilacqua, Arturo M.; Maset, Elvira R.; Jolivet, Luis A.

    2001-01-01

    Argentina will require a deep geological repository for the final disposal of radioactive wastes, mainly high-level waste (HLW) and spent nuclear fuel produced at two nuclear power plants and two research reactors. In the period 1980-1990, the first part of the feasibility studies and a basic engineering project for a high-level radioactive waste repository were carried out. From the geological point of view, the work was based on the study of granitic rocks, and the area of Sierra del Medio, Province of Chubut, was selected for detailed geological, geophysical and hydrogeological studies. Nevertheless, by the end of the eighties the project had been socially rejected, and CNEA decided to stop it at the beginning of the nineties. That decision was strongly linked to the little attention that had been paid to social communication: government authorities came under strong pressure from social groups demanding that the project be halted, a reaction driven by the lack of information and the fear it generated. The lesson learned is that social communication activities must be carried out very carefully in order to make progress towards the final disposal of HLW in deep geological repositories. (author)

  13. Cost reduction in deep water production systems

    International Nuclear Information System (INIS)

    Beltrao, R.L.C.

    1995-01-01

    This paper describes a cost reduction program that Petrobras has conceived for its deep water fields. Beginning with the floating production unit, a new FPSO concept was established in which a simple system designed for long-term testing can be upgraded on location to become the definitive production unit. Regarding the subsea system, the following projects will be considered. (1) Subsea manifolds: two 8-well diverless manifolds designed for 1,000 meters are presently under construction, and after a value analysis a new design was developed for the next generation. Both projects will be discussed and a cost evaluation will also be provided. (2) Subsea pipelines: Petrobras has just started a large program aimed at reducing the cost of this important item, covering several projects such as hybrid (flexible and rigid) large-diameter pipes for deep water, alternative laying methods, rigid risers on FPSs, new materials, etc. The authors intend to provide an overview of each project.

  14. Phenomenology of deep-inelastic processes

    International Nuclear Information System (INIS)

    Moretto, L.G.

    1983-03-01

    The field of heavy-ion deep-inelastic reactions is reviewed with particular attention to the experimental picture. The most important degrees of freedom involved in the process are identified and illustrated with relevant experiments. Energy dissipation and mass transfer are discussed in terms of the particles and/or phonons exchanged in the process. The equilibration of the fragment neutron-to-proton ratios is examined for evidence of giant isovector resonances. Angular momentum effects are observed in the fragment angular distributions, and the angular momentum transfer is inferred from the magnitude and alignment of the fragment spins. The possible sources of light particles accompanying deep-inelastic reactions are discussed. The use of sequentially emitted particles as angular momentum probes is illustrated. The significance and uses of a thermalized component emitted by the dinucleus are reviewed. The possible presence of Fermi jets in the prompt component is shown to be critical to the justification of the one-body theories.

  15. The deep sea Acoustic Detection system AMADEUS

    International Nuclear Information System (INIS)

    Naumann, Christopher Lindsay

    2008-01-01

    As part of the ANTARES neutrino telescope, the AMADEUS (ANTARES Modules for Acoustic Detection Under the Sea) system is an array of acoustic sensors designed to investigate the possibilities of acoustic detection of ultra-high-energy neutrinos in the deep sea. The complete system will comprise a total of 36 acoustic sensors in six clusters on two of the ANTARES detector lines. With an inter-sensor spacing of about one metre inside the clusters and between 15 and 340 metres between the different clusters, it will cover a wide range of distances as well as provide a considerable lever arm for point-source triangulation. Three of these clusters were deployed in 2007 and have been in operation since, currently yielding around 2 GB of acoustic data per day. The remaining three clusters are scheduled to be deployed in May 2008 together with the final ANTARES detector line. Apart from proving the feasibility of operating an acoustic detection system in the deep sea, the main aim of this project is an in-depth survey of both the acoustic properties of the sea water and the acoustic background present at the detector site. It will also serve as a platform for the development and refinement of triggering, filtering and reconstruction algorithms for acoustic particle detection. In this presentation, a description of the acoustic sensor and read-out system is given, together with examples of the reconstruction and evaluation of the acoustic data.
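
    As a rough illustration of the point-source triangulation that such an array enables, the sketch below (written for this summary; the sensor layout, sound speed and noise-free timings are made-up example values, and the generic least-squares approach shown is not necessarily the reconstruction algorithm used by the collaboration) estimates a source position from acoustic arrival times.

      import numpy as np
      from scipy.optimize import least_squares

      C = 1500.0  # assumed speed of sound in sea water, m/s

      def locate(sensors, arrival_times):
          """Least-squares estimate of source position and emission time."""
          def residuals(p):
              pos, t0 = p[:3], p[3]
              return t0 + np.linalg.norm(sensors - pos, axis=1) / C - arrival_times
          p0 = np.concatenate([sensors.mean(axis=0), [arrival_times.min() - 0.1]])
          return least_squares(residuals, p0).x

      # Example: five closely spaced sensors plus one distant sensor (made-up layout).
      sensors = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                          [0., 0., 1.], [1., 1., 0.], [15., 0., 0.]])
      source = np.array([200.0, 50.0, -30.0])
      times = np.linalg.norm(sensors - source, axis=1) / C   # noise-free, emission at t = 0
      est = locate(sensors, times)
      print("estimated position:", est[:3], "emission time:", est[3])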

  16. Position paper: the science of deep specification.

    Science.gov (United States)

    Appel, Andrew W; Beringer, Lennart; Chlipala, Adam; Pierce, Benjamin C; Shao, Zhong; Weirich, Stephanie; Zdancewic, Steve

    2017-10-13

    We introduce our efforts within the project 'The science of deep specification' to work out the key formal underpinnings of industrial-scale formal specifications of software and hardware components, anticipating a world where large verified systems are routinely built out of smaller verified components that are also used by many other projects. We identify an important class of specification that has already been used in a few experiments that connect strong component-correctness theorems across the work of different teams. To help popularize the unique advantages of that style, we dub it deep specification, and we say that it encompasses specifications that are rich, two-sided, formal and live (terms that we define in the article). Our core team is developing a proof-of-concept system (based on the Coq proof assistant) whose specification and verification work is divided across largely decoupled subteams at our four institutions, encompassing hardware microarchitecture, compilers, operating systems and applications, along with cross-cutting principles and tools for effective specification. We also aim to catalyse interest in the approach, not just by basic researchers but also by users in industry. This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Author(s).

  17. National Grid Deep Energy Retrofit Pilot

    Energy Technology Data Exchange (ETDEWEB)

    Neuhauser, K.

    2012-03-01

    Through discussion of five case studies (test homes), this project evaluates strategies to elevate the performance of existing homes to a level commensurate with best-in-class implementation of high-performance new construction homes. The test homes featured in this research activity participated in the Deep Energy Retrofit (DER) Pilot Program sponsored by the electric and gas utility National Grid in Massachusetts and Rhode Island. Building enclosure retrofit strategies are evaluated for their impact on durability and indoor air quality in addition to energy performance. Evaluation of strategies is structured around the critical control functions of water, airflow, vapor flow, and thermal control. The aim of the research project is to develop guidance that could serve as a foundation for wider adoption of high-performance 'deep' retrofit work. The project will identify risk factors endemic to advanced retrofit in the context of the general building type, configuration and vintage encountered in the National Grid DER Pilot. Results for the test homes are based on observation and performance testing of recently completed projects. Additional observation would be needed to fully gauge long-term energy performance, durability, and occupant comfort.

  18. The deep space 1 extended mission

    Science.gov (United States)

    Rayman, Marc D.; Varghese, Philip

    2001-03-01

    The primary mission of Deep Space 1 (DS1), the first flight of the New Millennium program, was completed successfully in September 1999, having exceeded its objectives of testing new, high-risk technologies important for future space and Earth science missions. DS1 is now in its extended mission, with plans to take advantage of the advanced technologies, including solar electric propulsion, to conduct an encounter with comet 19P/Borrelly in September 2001. During the extended mission, the spacecraft's commercial star tracker failed; this critical loss prevented the spacecraft from achieving three-axis attitude control or knowledge. A two-phase approach to recovering the mission was undertaken. The first phase involved devising a new method of pointing the high-gain antenna at Earth, using the radio signal received at the Deep Space Network as an indicator of spacecraft attitude. The second was the development of new flight software that allowed the spacecraft to return to three-axis operation without substantial ground assistance. The principal new feature of this software is the use of the science camera as an attitude sensor. The differences between the science camera and the star tracker have important implications not only for the design of the new software but also for the methods of operating the spacecraft and conducting the mission. The ambitious rescue was fully successful, and the extended mission is back on track.

  19. Deep water challenges for drilling rig design

    Energy Technology Data Exchange (ETDEWEB)

    Roth, M [Transocean Sedco Forex, Houston, TX (United States)

    2001-07-01

    Drilling rigs designed for deep water must meet specific design considerations for harsh environments. The early lessons for rig design came from experience in the North Sea. Rig efficiency and safety considerations must include structural integrity, isolated/redundant ballast controls, triple-redundant DP systems, enclosed heated work spaces, and automated equipment such as bridge cranes, pipe handling gear, offline capabilities, subsea tree handling, and computerized drill floors. All components must be designed to harmonize man and machine. Some challenges unique to Eastern Canada include frequent storms and fog, cold temperatures, icebergs, rig ice, and difficult logistics. This PowerPoint presentation described station-keeping and mooring issues in terms of dynamic positioning. The environmental influence on riser management during forced disconnects was also described. Design of connected deep water risers must ensure elastic stability, control the deflected shape, and keep stresses within acceptable limits. Codes and standards for stress limits, flex joints and tension were also presented. tabs., figs.

  20. Geotechnical deep ocean research apparatus (DORA)

    International Nuclear Information System (INIS)

    1986-01-01

    As part of the research programme on radioactive waste disposal in seabed geological formations, a Deep Ocean Research Apparatus (DORA) seabed machine has been conceptually designed and prototypes of principal subsystems built and tested by four DORA Project partners. The DORA is designed to operate in 6000 m of water and drive a string of test rods and a piezocone about 50 m into soft soil. Partner responsibility was Fugro for project management and the penetration apparatus; ISMES for data acquisition and control; Laboratorium voor Grondmechanica for the piezocone probe and its sensors; and Marine Structure Consultants for the mission profile and DORA handling requirements. The DORA will have a maximum thrust of 50 kN. The probe will measure cone resistance, sleeve friction, pore pressure and inclination. Stability on the seabed will be assisted by using a combination of polyester and polypropylene-nylon (double) braided rope. A continuous wheel-drive subsystem will drive the test rods. Gelled or lead-acid batteries can power a hydraulic powerpack. Acoustic data transmission will be used. Software for data processing automation has been tested with simulation of all input channels. Successful operation of subsystem prototypes indicates that a DORA can be constructed at any future time for use on fundamental or applied deep ocean science and seafloor engineering investigations by industry, government and universities