WorldWideScience

Sample records for deep astrometric standards

  1. An astrometric standard field in omega Cen

    Science.gov (United States)

    Wyse, Rosemary

    2003-07-01

    We propose to obtain a high-precision astrometric standard in a two-step procedure. First, we will create a ground-based astrometric standard field around omega Cen down to V=22, with 3 mas accuracy in positions and better than 0.5 mas/yr in proper motions. This standard will be used to obtain precise absolute plate solutions for selected WFPC2 CCD frames and refine the self-calibrated mean distortion solution for the WFPC2 CCD chips. This will eliminate systematic errors inherent in the self-calibration techniques down to the rms = 0.3 mas level, thus opening new opportunities to perform precision astrometry with WFPC2 alone or in combination with the other HST imaging instruments. We will also address the issue of the distortion's variation, which is of paramount significance for space astrometry missions such as HST and those under development (SIM, GAIA). Second, all reduced WFPC2 CCD frames will be combined into two field catalogs (astrometric flat fields) of positions in omega Cen of unprecedented precision (s.e. = 0.1 mas) down to V=22, which will be available to the GO community and readily applicable to calibrating the ACS.

  2. Accuracy of the HST Standard Astrometric Catalogs w.r.t. Gaia

    Science.gov (United States)

    Kozhurina-Platais, V.; Grogin, N.; Sabbi, E.

    2018-02-01

    The goal of astrometric calibration of the HST ACS/WFC and WFC3/UVIS imaging instruments is to provide a coordinate system free of distortion at the precision level of 0.1 pixel (4-5 mas) or better. This astrometric calibration is based on two HST astrometric standard fields in the vicinity of the globular clusters 47 Tuc and omega Cen. The derived calibration of the geometric distortion is assumed to be accurate down to 2-3 mas. Is this accuracy in agreement with the true value? Now, with access to globally accurate positions from the first Gaia data release (DR1), we found that there are measurable offsets, rotation, scale, and other deviations of the distortion parameters in the two HST standard astrometric catalogs. These deviations from a distortion-free and properly aligned coordinate system should be accounted for and corrected, so that the high-precision HST positions are free of any systematic errors. We also found that the precision of the HST pixel coordinates is substantially better than the accuracy listed in Gaia DR1. Therefore, in order to finalize the components of distortion in the HST standard catalogs, the next release of Gaia data is needed.
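
    In the simplest case, the offsets, rotation, and scale terms described above form a four-parameter similarity transform between matched catalog positions. Below is a minimal illustrative sketch (not the authors' pipeline), assuming hypothetical arrays gaia_xy and hst_xy of matched (N, 2) tangent-plane positions:

    ```python
    import numpy as np

    def fit_similarity(xy_ref, xy_cat):
        """Least-squares 4-parameter fit (offsets, rotation, scale) mapping
        catalog positions onto reference positions, e.g. HST onto Gaia."""
        x, y = xy_cat[:, 0], xy_cat[:, 1]
        n = len(x)
        # Model: x_ref = a*x - b*y + tx ;  y_ref = b*x + a*y + ty
        A = np.zeros((2 * n, 4))
        A[:n, 0], A[:n, 1], A[:n, 2] = x, -y, 1.0
        A[n:, 0], A[n:, 1], A[n:, 3] = y, x, 1.0
        rhs = np.concatenate([xy_ref[:, 0], xy_ref[:, 1]])
        (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return tx, ty, np.hypot(a, b), np.arctan2(b, a)
    ```

    The residuals left after removing such a global transform are what expose the remaining distortion terms.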

  3. Using Gaia as an Astrometric Tool for Deep Ground-based Surveys

    Science.gov (United States)

    Casetti-Dinescu, Dana I.; Girard, Terrence M.; Schriefer, Michael

    2018-04-01

    Gaia DR1 positions are used to astrometrically calibrate three epochs' worth of Subaru SuprimeCam images in the fields of the globular cluster NGC 2419 and the Sextans dwarf spheroidal galaxy. Distortion-correction "maps" are constructed from a combination of offset dithers and reference to Gaia DR1. These are used to derive absolute proper motions in the field of NGC 2419. Notably, we identify the photometrically-detected Monoceros structure in the foreground of NGC 2419 as a kinematically cold population of stars, distinct from Galactic-field stars. This project demonstrates the feasibility of combining Gaia with deep, ground-based surveys, thus extending high-quality astrometry to magnitudes beyond the limits of Gaia.

  4. ASTROMETRIC REVERBERATION MAPPING

    International Nuclear Information System (INIS)

    Shen Yue

    2012-01-01

    Spatially extended emission regions of active galactic nuclei respond to continuum variations, if such emission regions are powered by energy reprocessing of the continuum. The response from different parts of the reverberating region arrives at different times, lagging behind the continuum variation. The lags can be used to map the geometry and kinematics of the emission region (i.e., reverberation mapping, RM). If the extended emission region is not spherically symmetric in configuration and velocity space, reverberation may produce astrometric offsets in the emission region photocenter as a function of time delay and velocity, detectable with future μas to tens-of-μas astrometry. Such astrometric responses provide independent constraints on the geometric and kinematic structure of the extended emission region, complementary to traditional RM. In addition, astrometric RM is more sensitive for inferring the inclination of a flattened geometry and the rotation angle of the extended emission region.
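
    A toy illustration with made-up numbers (not from the paper): the flux-weighted photocenter of a two-cloud emission region swings from side to side as each cloud echoes a continuum pulse at its own delay.

    ```python
    import numpy as np

    def photocenter(flux, x):
        """Flux-weighted photocenter of emission components at sky offsets x."""
        return np.sum(flux * x) / np.sum(flux)

    x = np.array([+0.1, -0.1])      # projected cloud offsets [mas] (illustrative)
    lag = np.array([10.0, 30.0])    # light-travel delays [days] (illustrative)
    for t in (0.0, 10.0, 30.0):
        flux = 1.0 + 0.5 * np.isclose(lag, t)  # cloud brightens as the echo passes
        print(t, photocenter(flux, x))         # photocenter swings toward the echo
    ```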

  5. Astrometric Observation of MACHO Gravitational Microlensing

    Science.gov (United States)

    Boden, A. F.; Shao, M.; Van Buren, D.

    1997-01-01

    This paper discusses the prospects for astrometric observation of MACHO gravitational microlensing events. We derive the expected astrometric observables for a simple microlensing event assuming a dark MACHO, and demonstrate that accurate astrometry can determine the lens mass, distance, and proper motion in a very general fashion.
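
    For a dark point lens, the standard result is that the centroid of the two micro-images is displaced from the true source position by δ(u) = u/(u² + 2) · θ_E, where u is the source-lens separation in Einstein radii. A short sketch of this formula (illustrative code, not the authors'):

    ```python
    import numpy as np

    def centroid_shift(u, theta_E_mas):
        """Astrometric centroid shift for a dark point lens:
        delta = u / (u**2 + 2) * theta_E, with u in Einstein radii."""
        u = np.asarray(u, dtype=float)
        return u / (u**2 + 2.0) * theta_E_mas

    # The shift peaks at u = sqrt(2), where it equals theta_E / (2*sqrt(2)).
    u = np.linspace(0.0, 10.0, 1001)
    shift = centroid_shift(u, theta_E_mas=1.0)
    print(u[shift.argmax()], shift.max())   # ~1.414, ~0.354
    ```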

  6. The Tycho-Gaia Astrometric Solution

    Science.gov (United States)

    Lindegren, Lennart

    2018-04-01

    Gaia DR1 is based on the first 14 months of Gaia's observations. This is not long enough to reliably disentangle the parallax effect from proper motion. For most sources, therefore, only positions and magnitudes are given. Parallaxes and proper motions were nevertheless obtained for about two million of the brighter stars through the Tycho-Gaia astrometric solution (TGAS), combining the Gaia observations with the much earlier Hipparcos and Tycho-2 positions. In this review I focus on some important characteristics and limitations of TGAS, in particular the reference frame, astrometric uncertainties, correlations, and systematic errors.

  7. ESPRI: Astrometric planet search with PRIMA at the VLTI

    Directory of Open Access Journals (Sweden)

    Ségransan D.

    2011-07-01

    The ESPRI consortium will conduct an astrometric survey for extrasolar planets, using the PRIMA facility at the Very Large Telescope Interferometer. Our scientific goals include determining orbital inclinations and masses for planets already known from radial-velocity surveys, searches for planets around nearby stars of all masses, and around young stars. The consortium has built the PRIMA differential delay lines, developed an astrometric operation and calibration plan, and will deliver astrometric data reduction software.

  8. ASTROMETRIC JITTER OF THE SUN AS A STAR

    International Nuclear Information System (INIS)

    Makarov, V. V.; Parker, D.; Ulrich, R. K.

    2010-01-01

    The daily variation of the solar photocenter over some 11 yr is derived from the Mount Wilson data reprocessed by Ulrich et al. to closely match the surface distribution of solar irradiance. The standard deviations of the astrometric jitter are 0.52 μAU and 0.39 μAU in the equatorial and axial dimensions, respectively. The overall dispersion is strongly correlated with the solar cycle, reaching 0.91 μAU at maximum activity in 2000. The largest short-term deviations from the running average (up to 2.6 μAU) occur when a group of large spots happens to lie on one side with respect to the center of the disk. The amplitude spectrum of the photocenter variations never exceeds 0.033 μAU for the range of periods 0.6-1.4 yr, corresponding to the orbital periods of planets in the habitable zone. Astrometric detection of Earth-like planets around stars as quiet as the Sun is not affected by starspot noise, but the prospects for more active stars may be limited to giant planets.
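
    A rough sketch of the quantity being tracked, with invented numbers: the photocenter of a limb-darkened disk is displaced away from a dark spot, and the paper derives the daily time series of such displacements from the Mount Wilson irradiance maps.

    ```python
    import numpy as np

    # Build a limb-darkened disk with one dark spot (all numbers illustrative).
    n = 501
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x**2 + y**2
    mu = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))                 # emergence angle cosine
    disk = np.where(r2 <= 1.0, 1.0 - 0.6 * (1.0 - mu), 0.0)   # linear limb darkening
    spot = 1.0 - 0.7 * np.exp(-((x - 0.5)**2 + y**2) / 0.01)  # dark spot at x = +0.5
    img = disk * spot
    xc = (img * x).sum() / img.sum()   # photocenter [solar radii]
    print(xc)                          # small negative shift, away from the spot
    ```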

  9. COMPANIONS TO NEARBY STARS WITH ASTROMETRIC ACCELERATION. II

    International Nuclear Information System (INIS)

    Tokovinin, Andrei; Hartung, Markus; Hayward, Thomas L.

    2013-01-01

    Hipparcos astrometric binaries were observed with the NICI adaptive optics system at Gemini-S, completing the work of Paper I. Among the 65 F, G, and K dwarfs within 67 pc of the Sun studied here, we resolve 18 new subarcsecond companions, remeasure 7 known astrometric pairs, and establish the physical nature of yet another 3 wider companions. The 107 astrometric binaries targeted at Gemini so far have 38 resolved companions with separations under 3''. Modeling shows that bright enough companions with separations on the order of an arcsecond can perturb the Hipparcos astrometry when they are not accounted for in the data reduction. However, the resulting bias of parallax and proper motion is generally below formal errors and such companions cannot produce fake acceleration. This work contributes to the multiplicity statistics of nearby dwarfs by bridging the gap between spectroscopic and visual binaries and by providing estimates of periods and mass ratios for many astrometric binaries.

  10. Astrometric vs. photometric microlensing

    NARCIS (Netherlands)

    Dominik, M; Brainerd, TG; Kochanek, CS

    2001-01-01

    I discuss the differences between the properties of astrometric and photometric microlensing and between the arising prospects for survey and follow-up experiments based on these two different signatures. In particular, the prospects for binary stars and extra-solar planets are considered.

  11. Improving the Astrometric Calibration of ACS/WFC for the Most Useful Filters

    Science.gov (United States)

    Anderson, Jay

    2004-07-01

    The distortion correction for the WFC, with which most ACS astrometry is done, is filter-dependent, and is not sufficiently accurate for the most useful filters to the community, F606W and F814W. We propose to derive improved corrections using 1 orbit for each filter. A by-product will be an astrometric standard field at the center of Omega Centauri.

  12. Astrometric Results of NEOs from the Characterization and Astrometric Follow-up Program at Adler Planetarium

    Science.gov (United States)

    Nault, Kristie A.; Brucker, Melissa J.; Hammergren, Mark; Gyuk, Geza; Solontoi, Mike R.

    2015-11-01

    We present astrometric results for near-Earth objects (NEOs) targeted in the fourth quarter of 2014 and in 2015. This is part of Adler Planetarium's NEO characterization and astrometric follow-up program, which uses the Astrophysical Research Consortium (ARC) 3.5-m telescope at Apache Point Observatory (APO). The program utilizes a 17% share of telescope time, amounting to a total of 500 hours per year. This time is divided into two-hour observing runs approximately every other night for astrometry and frequent half-night runs several times a month for spectroscopy (see poster by M. Hammergren et al.) and light curve studies (see poster by M. J. Brucker et al.). Observations were made using the Seaver Prototype Imaging Camera (SPIcam), a visible-wavelength, direct-imaging CCD camera with 2048 x 2048 pixels and a field of view of 4.78' x 4.78', with 2 x 2 binning. Special emphasis has been placed on the smallest NEOs, particularly those around 140 m in diameter. Targets were selected based on absolute magnitude (prioritizing those with H > 25 mag to select small objects) and a 3σ uncertainty less than 400" to ensure that the target is in the FOV. Targets were drawn from the Minor Planet Center (MPC) NEA Observing Planning Aid, the JPL What's Observable tool, and the Spaceguard priority list and faint-NEO list. As of August 2015, we have detected 670 NEOs for astrometric follow-up, on pace with our goal of providing astrometry on a thousand NEOs per year. Astrometric calculations were done using the interactive software tool Astrometrica, which performs data reduction focused on the minor bodies of the solar system. The program includes automatic reference star identification from new-generation star catalogs, access to the complete MPC database of orbital elements, and automatic moving object detection and identification. This work is based on observations done using the 3.5-m telescope at Apache Point Observatory

  13. Preliminary Astrometric Results from the PS1 Demo Month and Routine Survey Operations

    Science.gov (United States)

    2010-09-01

    ... with the 2MASS catalog[3] to produce preliminary astrometric solutions. Using these coordinates, the NOFS astrometric pipeline correlates PS1 ... objects with other catalogs (USNO-B1.0, SDSS, Tycho-2, 2MASS, etc.) so that unique star identification numbers can be assigned across all catalogs. This ... correlated pair, and the standard deviation for these pairings is about 0.3 arcsec. Whereas the 2MASS catalog error for a brighter star is believed to be

  14. The astrometric lessons of Gaia-GBOT experiment

    Science.gov (United States)

    Bouquillon, S.; Mendez, R. A.; Altmann, M.

    2017-07-01

    To ensure the full capabilities of Gaia's measurements, a programme of daily observations of the satellite itself with Earth-based telescopes - called Ground Based Optical Tracking (GBOT) - has been run since the beginning of the Gaia mission (for details concerning GBOT operations see Altmann et al. 2014, and concerning the GBOT software facilities see Bouquillon et al. 2014). These observations are carried out mainly with two facilities: the 2.6m VLT Survey Telescope (ESO's VST) at Cerro Paranal in Chile and the 2.0m Liverpool Telescope (LT) on the Canary Island of La Palma. The 20 mas constraint on the tracking astrometric quality, and the fact that Gaia is a faint and relatively fast-moving target (its magnitude in a red passband is around 21 and its apparent speed around 0.04"/s), led us to rigorously analyse the reachable astrometric precision for CCD observations of this kind of celestial object. During LARIM 2016, we presented the main results of this study, which uses the Cramér-Rao lower bound to characterize the precision limit on the PSF center for a source drifting in the CCD frame. This work extends earlier studies dealing with one-dimensional detectors and stationary sources (Mendez et al. 2013 & 2014), first to the case of standard two-dimensional CCD sensors, and then to moving sources. These new results were submitted for publication in the A&A journal this year (Bouquillon et al. 2017).
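
    As an illustration of the Cramér-Rao machinery invoked above, in the simplified case of a stationary 1D Gaussian PSF rather than the paper's drifting 2D source: the bound follows from the Fisher information of the pixelized Poisson counts.

    ```python
    import numpy as np

    def cr_bound_1d(x0, flux, sigma_psf, pix, background):
        """Cramer-Rao lower bound on the centroid of a 1D Gaussian PSF
        sampled on pixels, with uniform sky and Poisson statistics."""
        edges = np.arange(-20.0, 20.0 + pix, pix)
        centers = 0.5 * (edges[:-1] + edges[1:])
        psf = np.exp(-0.5 * ((centers - x0) / sigma_psf)**2)
        psf /= psf.sum()
        counts = flux * psf + background                      # expected counts/pixel
        dcounts = flux * psf * (centers - x0) / sigma_psf**2  # d<counts>/dx0
        fisher = np.sum(dcounts**2 / counts)
        return 1.0 / np.sqrt(fisher)                          # sigma_CR

    # Photon-limited regime approaches sigma_psf / sqrt(N_photons):
    print(cr_bound_1d(0.0, 1e4, 1.0, 0.5, 0.0), 1.0 / np.sqrt(1e4))
    ```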

  15. PACMAN: PRIMA astrometric instrument software

    Science.gov (United States)

    Abuter, Roberto; Sahlmann, Johannes; Pozna, Eszter

    2010-07-01

    The dual-feed astrometric instrument software of PRIMA (PACMAN), currently being integrated at the VLTI, will use two spatially modulated fringe sensor units and a laser metrology system to carry out differential astrometry. Its software and hardware comprise a distributed system involving many real-time computers and workstations operating in a synchronized manner. Its architecture has been designed to allow the construction of efficient and flexible calibration and observation procedures. In parallel, a novel scheme of integrating M-code (MATLAB/OCTAVE) with standard VLT (Very Large Telescope) control software applications had to be devised in order to support numerically intensive operations and to have the capacity to adapt to fast-varying strategies and algorithms. This paper presents the instrument software, including the current operational sequences for laboratory calibration and sky calibration. Finally, a detailed description of the algorithms and their implementation, in both M and C code, is given, together with a comparative analysis of their performance and maintainability.

  16. News on Seeking Gaia's Astrometric Core Solution with AGIS

    Science.gov (United States)

    Lammers, U.; Lindegren, L.

    We report on recent new developments around the Astrometric Global Iterative Solution (AGIS) system. These include the availability of an efficient Conjugate Gradient solver and the Generic Astrometric Calibration scheme proposed some time ago. The number of primary stars to be included in the core solution is now believed to be significantly higher than the 100 million that served as the baseline until now. Cloud computing services are being studied as a possible cost-effective alternative to running AGIS on dedicated computing hardware at ESAC during the operational phase.
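
    For reference, the Conjugate Gradient scheme named above is the textbook iterative solver for large symmetric positive-definite systems such as astrometric normal equations. A minimal dense-matrix sketch follows (AGIS itself runs matrix-free over distributed data; this is only the bare algorithm):

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive-definite A."""
        x = np.zeros_like(b)
        r = b - A @ x          # residual
        p = r.copy()           # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # tiny self-check on a random SPD system
    rng = np.random.default_rng(1)
    M = rng.normal(size=(50, 50))
    A = M.T @ M + 50.0 * np.eye(50)
    b = rng.normal(size=50)
    print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))
    ```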

  17. A SEARCH FOR STELLAR-MASS BLACK HOLES VIA ASTROMETRIC MICROLENSING

    Energy Technology Data Exchange (ETDEWEB)

    Lu, J. R. [Astronomy Department, University of California, Berkeley, CA 94720 (United States); Sinukoff, E. [Institute for Astronomy, University of Hawai‘i at Mānoa, Honolulu, HI 96822 (United States); Ofek, E. O. [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot, 76100 (Israel); Udalski, A.; Kozlowski, S. [Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa (Poland)

    2016-10-10

    While dozens of stellar-mass black holes (BHs) have been discovered in binary systems, isolated BHs have eluded detection. Their presence can be inferred when they lens light from a background star. We attempt to detect the astrometric lensing signatures of three photometrically identified microlensing events, OGLE-2011-BLG-0022, OGLE-2011-BLG-0125, and OGLE-2012-BLG-0169 (OB110022, OB110125, and OB120169), located toward the Galactic Bulge. These events were selected because of their long durations, which statistically favor more massive lenses. Astrometric measurements were made over one to two years using laser guide star adaptive optics observations from the W. M. Keck Observatory. Lens model parameters were first constrained by the photometric light curves. The OB120169 light curve is well fit by a single-lens model, while both OB110022 and OB110125 light curves favor binary lens models. Using the photometric fits as prior information, no significant astrometric lensing signal was detected and all targets were consistent with linear motion. The lack of a significant astrometric signal constrains the lens mass of OB110022 to 0.05-1.79 M⊙ at 99.7% confidence, which disfavors a BH lens. Fits to OB110125 yielded a reduced Einstein crossing time and insufficient observations during the peak, so no mass limits were obtained. Two degenerate solutions exist for OB120169, with lens masses between 0.2-38.8 M⊙ and 0.4-39.8 M⊙ at 99.7% confidence. Follow-up observations of OB120169 will further constrain the lens mass. Based on our experience, we use simulations to design optimal astrometric observing strategies and show that, under more typical observing conditions, the detection of BHs is feasible.
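
    The mass constraints above rest on the angular Einstein radius, θ_E = √(4GM/c² · (D_S − D_L)/(D_L D_S)), which sets both the event duration and the astrometric signal. A short numerical sketch with illustrative distances:

    ```python
    import numpy as np

    G, c = 6.674e-11, 2.998e8          # SI
    MSUN, KPC = 1.989e30, 3.086e19     # kg, m

    def einstein_radius_mas(m_sun, d_l_kpc, d_s_kpc):
        """theta_E = sqrt(4GM/c^2 * (D_s - D_l)/(D_l * D_s)), returned in mas.
        Long events statistically favor large theta_E, hence massive lenses."""
        m, dl, ds = m_sun * MSUN, d_l_kpc * KPC, d_s_kpc * KPC
        theta = np.sqrt(4.0 * G * m / c**2 * (ds - dl) / (dl * ds))  # radians
        return theta * 206265.0 * 1e3                                # -> mas

    # illustrative: a 10 Msun black hole halfway to the Bulge
    print(einstein_radius_mas(10.0, 4.0, 8.0))   # ~3 mas
    ```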

  18. Implementation of the Global Parameters Determination in Gaia's Astrometric Solution (AGIS)

    Science.gov (United States)

    Raison, F.; Olias, A.; Hobbs, D.; Lindegren, L.

    2010-12-01

    Gaia is ESA’s space astrometry mission with a foreseen launch date in early 2012. Its main objective is to perform a stellar census of the 1000 Million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS). A core part of AGIS is to determine the accurate spacecraft attitude, geometric instrument calibration and astrometric model parameters for a well-behaved subset of all the objects (the ‘primary stars’). In addition, a small number of global parameters will be estimated, one of these being PPN γ. We present here the implementation of the algorithms dedicated to the determination of the global parameters.

  19. Inferring Binary and Trinary Stellar Populations in Photometric and Astrometric Surveys

    Science.gov (United States)

    Widmark, Axel; Leistedt, Boris; Hogg, David W.

    2018-04-01

    Multiple stellar systems are ubiquitous in the Milky Way but are often unresolved and seen as single objects in spectroscopic, photometric, and astrometric surveys. However, modeling them is essential for developing a full understanding of large surveys such as Gaia and connecting them to stellar and Galactic models. In this paper, we address this problem by jointly fitting the Gaia and Two Micron All Sky Survey photometric and astrometric data using a data-driven Bayesian hierarchical model that includes populations of binary and trinary systems. This allows us to classify observations into singles, binaries, and trinaries, in a robust and efficient manner, without resorting to external models. We are able to identify multiple systems and, in some cases, make strong predictions for the properties of their unresolved stars. We will be able to compare such predictions with Gaia Data Release 4, which will contain astrometric identification and analysis of binary systems.

  20. Faster, Better, Cheaper: News on Seeking Gaia's Astrometric Solution with AGIS

    Science.gov (United States)

    Lammers, U.; Lindegren, L.; Bombrun, A.; O'Mullane, W.; Hobbs, D.

    2010-12-01

    Gaia is ESA’s ambitious space astrometry mission with a foreseen launch date in early 2012. Its main objective is to perform a stellar census of the 1000 Million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS) - the mathematical and numerical framework for combining the ≈80 available observations per star obtained during Gaia’s 5-yr lifetime into a single global astrometric solution. At last year’s ADASS XVIII we presented (O4.1) in detail the fundamental working principles of AGIS, its development status, and selected results obtained by running the system on processing hardware at ESAC, Madrid with large-scale simulated data sets. We present here the latest developments around AGIS, highlighting in particular a much-improved algebraic solving method that has recently been implemented. This Conjugate Gradient scheme improves the convergence behavior in significant ways and leads to a solution of much higher scientific quality. We also report on a new collaboration aiming at processing the data from the future small Japanese astrometry mission Nano-JASMINE with AGIS.

  1. Astrometric properties of the Tautenburg Plate Scanner

    Science.gov (United States)

    Brunzendorf, Jens; Meusinger, Helmut

    The Tautenburg Plate Scanner (TPS) is an advanced plate-measuring machine run by the Thüringer Landessternwarte Tautenburg (Karl Schwarzschild Observatory), where the machine is housed. It is capable of digitising photographic plates up to 30 cm × 30 cm in size. In our poster, we reported on tests and preliminary results of its astrometric properties. The essential components of the TPS are an x-y table movable between an illumination system and a direct imaging system. A telecentric lens images the light transmitted through the photographic emulsion onto a CCD line of 6000 pixels, each 10 µm square. All components are mounted on a massive air-bearing table. Scanning is performed in lanes of up to 55 mm width by moving the x-y table in a continuous drift-scan mode perpendicular to the CCD line. The analogue output from the CCD is digitised to 12 bit with a total signal-to-noise ratio of 1000:1, corresponding to a photographic density range of three. The pixel map is produced as a series of optionally overlapping lane scans. The pixel data are stored on CD-ROM or DAT. A Tautenburg Schmidt plate 24 cm × 24 cm in size is digitised within 2.5 hours, resulting in 1.3 GB of data. Subsequent high-level data processing is performed off-line on other computers. During the scanning process, the geometry of the optical components is kept fixed. The optimal focussing of the optics is performed prior to the scan; owing to the telecentric lens, refocussing is not required. Therefore, the main sources of astrometric error (besides the emulsion itself) are mechanical imperfections in the drive system, which divide into random and systematic components. The r.m.s. repeatability over the whole plate, as measured by repeated scans of the same plate, is about 0.5 µm for each axis. The mean plate-to-plate accuracy of the object positions on two plates with the same epoch and the same plate centre has been determined to be about 1 µm. This accuracy is comparable to

  2. A Predicted Astrometric Microlensing Event by a Nearby White Dwarf

    Science.gov (United States)

    McGill, Peter; Smith, Leigh C.; Wyn Evans, N.; Belokurov, Vasily; Smart, R. L.

    2018-04-01

    We used the Tycho-Gaia Astrometric Solution catalogue, part of Gaia Data Release 1, to search for candidate astrometric microlensing events expected to occur within the remaining lifetime of the Gaia satellite. Our search yielded one promising candidate. We predict that the nearby DQ type white dwarf LAWD 37 (WD 1142-645) will lens a background star and will reach closest approach on November 11th 2019 (± 4 days) with impact parameter 380 ± 10 mas. This will produce an apparent maximum deviation of the source position of 2.8 ± 0.1 mas. In the most propitious circumstance, Gaia will be able to determine the mass of LAWD 37 to ~3%. This mass determination will provide an independent check on atmospheric models of white dwarfs with helium-rich atmospheres, as well as tests of white dwarf mass-radius relationships and evolutionary theory.

  3. Verification of the astrometric performance of the Korean VLBI network, using comparative SFPR studies with the VLBA at 14/7 mm

    Energy Technology Data Exchange (ETDEWEB)

    Rioja, María J.; Dodson, Richard; Jung, TaeHyun; Sohn, Bong Won; Byun, Do-Young; Cho, Se-Hyung; Lee, Sang-Sung; Kim, Jongsoo; Kim, Kee-Tae; Oh, Chung Sik; Han, Seog-Tae; Je, Do-Heung; Chung, Moon-Hee; Wi, Seog-Oh; Kang, Jiman; Lee, Jung-Won; Chung, Hyunsoo; Kim, Hyo Ryoung; Kim, Hyun-Goo [Korea Astronomy and Space Science Institute, Daedeokdae-ro 776, Yuseong-gu, Daejeon 305-348 (Korea, Republic of); Agudo, Iván, E-mail: maria.rioja@icrar.org [Joint Institute for VLBI in Europe, Postbus 2, NL-7990 AA Dwingeloo (Netherlands); and others

    2014-11-01

    The Korean VLBI Network (KVN) is a new millimeter VLBI dedicated array with the capability to simultaneously observe at multiple frequencies, up to 129 GHz. The innovative multi-channel receivers present significant benefits for astrometric measurements in the frequency domain. The aim of this work is to verify the astrometric performance of the KVN using a comparative study with the VLBA, a well-established instrument. For that purpose, we carried out nearly contemporaneous observations with the KVN and the VLBA, at 14/7 mm, in 2013 April. The KVN observations consisted of simultaneous dual frequency observations, while the VLBA used fast frequency switching observations. We used the Source Frequency Phase Referencing technique for the observational and analysis strategy. We find that having simultaneous observations results in superior compensation for all atmospheric terms in the observables, in addition to offering other significant benefits for astrometric analysis. We have compared the KVN astrometry measurements to those from the VLBA. We find that the structure blending effects introduce dominant systematic astrometric shifts, and these need to be taken into account. We have tested multiple analytical routes to characterize the impact of the low-resolution effects for extended sources in the astrometric measurements. The results from the analysis of the KVN and full VLBA data sets agree within 2σ of the thermal error estimate. We interpret the discrepancy as arising from the different resolutions. We find that the KVN provides astrometric results with excellent agreement, within 1σ, when compared to a VLBA configuration that has a similar resolution. Therefore, this comparative study verifies the astrometric performance of the KVN using SFPR at 14/7 mm, and validates the KVN as an astrometric instrument.

  4. Optical design for the Laser Astrometric Test of Relativity

    Science.gov (United States)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth L., Jr.

    2004-01-01

    This paper discusses the Laser Astrometric Test of Relativity (LATOR) mission. LATOR is a Michelson-Morley-type experiment designed to test the pure tensor metric nature of gravitation, the fundamental postulate of Einstein's theory of general relativity. With its focus on gravity's action on light propagation, it complements other tests which rely on the gravitational dynamics of bodies.

  5. VLBI FOR GRAVITY PROBE B. IV. A NEW ASTROMETRIC ANALYSIS TECHNIQUE AND A COMPARISON WITH RESULTS FROM OTHER TECHNIQUES

    International Nuclear Information System (INIS)

    Lebach, D. E.; Ratner, M. I.; Shapiro, I. I.; Bartel, N.; Bietenholz, M. F.; Lederman, J. I.; Ransom, R. R.; Campbell, R. M.; Gordon, D.; Lestrade, J.-F.

    2012-01-01

    When very long baseline interferometry (VLBI) observations are used to determine the position or motion of a radio source relative to reference sources nearby on the sky, the astrometric information is usually obtained via (1) phase-referenced maps or (2) parametric model fits to measured fringe phases or multiband delays. In this paper, we describe a 'merged' analysis technique which combines some of the most important advantages of these other two approaches. In particular, our merged technique combines the superior model-correction capabilities of parametric model fits with the ability of phase-referenced maps to yield astrometric measurements of sources that are too weak to be used in parametric model fits. We compare the results from this merged technique with the results from phase-referenced maps and from parametric model fits in the analysis of astrometric VLBI observations of the radio-bright star IM Pegasi (HR 8703) and the radio source B2252+172 nearby on the sky. In these studies we use central-core components of radio sources 3C 454.3 and B2250+194 as our positional references. We obtain astrometric results for IM Peg with our merged technique even when the source is too weak to be used in parametric model fits, and we find that our merged technique yields astrometric results superior to the phase-referenced mapping technique. We used our merged technique to estimate the proper motion and other astrometric parameters of IM Peg in support of the NASA/Stanford Gravity Probe B mission.

  6. Quantum astrometric observables I: time delay in classical and quantum gravity

    NARCIS (Netherlands)

    Khavkine, I.

    2012-01-01

    A class of diffeomorphism invariant, physical observables, so-called astrometric observables, is introduced. A particularly simple example, the time delay, which expresses the difference between two initially synchronized proper time clocks in relative inertial motion, is analyzed in detail. It is

  7. On an Allan variance approach to classify VLBI radio-sources on the basis of their astrometric stability

    Science.gov (United States)

    Gattano, C.; Lambert, S.; Bizouard, C.

    2017-12-01

    In the context of selecting sources that define the celestial reference frame, we compute astrometric time series of all VLBI radio sources from observations in the International VLBI Service database. The time series are then analyzed with the Allan variance in order to estimate astrometric stability. From the results, we establish a new classification that takes into account the full multi-time-scale information. The algorithm is flexible in its definition of a "stable source" through an adjustable threshold.
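
    A minimal sketch of the statistic involved, for a plain non-overlapping Allan variance (the paper's multi-time-scale classification is more elaborate):

    ```python
    import numpy as np

    def allan_variance(x, tau, dt=1.0):
        """Non-overlapping Allan variance of a series x sampled at interval dt,
        at averaging time tau: 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
        m = int(tau / dt)                        # samples per averaging bin
        nbins = len(x) // m
        ybar = x[:nbins * m].reshape(nbins, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(ybar)**2)

    # White position noise falls off as 1/tau; drifting (unstable) sources
    # flatten or rise at long tau, the signature used for the classification.
    rng = np.random.default_rng(0)
    series = rng.normal(0.0, 1.0, 10000)
    for tau in (1.0, 10.0, 100.0):
        print(tau, allan_variance(series, tau))
    ```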

  8. Magnetic Field Studies in BL Lacertae through Faraday Rotation and a Novel Astrometric Technique

    Directory of Open Access Journals (Sweden)

    Sol N. Molina

    2017-12-01

    It is thought that dynamically important helical magnetic fields, twisted by the differential rotation of the black hole's accretion disk or ergosphere, play an important role in the launching, acceleration, and collimation of active galactic nuclei (AGN) jets. We present multi-frequency astrometric and polarimetric Very Long Baseline Array (VLBA) images at 15, 22, and 43 GHz, as well as Faraday rotation analyses, of the jet in BL Lacertae, as part of a sample of AGN jets aimed at probing the magnetic field structure at the innermost scales to test jet formation models. The novel astrometric technique applied allows us to obtain the absolute position at mm wavelengths without any external calibrator.

  9. Astrometric and photometric study of the open cluster NGC 2323

    Directory of Open Access Journals (Sweden)

    Amin M.Y.

    2017-01-01

    We present a study of the open cluster NGC 2323 using astrometric and photometric data. In our study we used two methods that are able to separate the open cluster's stars from those that belong to the stellar background. Our calculations with these two methods indicate that: (1) according to the membership probability, NGC 2323 should contain 497 stars; (2) the cluster center should be at 07h 02m 48s.02 and -08° 20' 17".74; (3) the limiting radius of NGC 2323 is 2.31 ± 0.04 pc, and the surface number density at this radius is 98.16 stars pc^-2; (4) the magnitude function has a maximum at about mv = 14 mag; (5) the total mass of NGC 2323 is estimated dynamically, using the astrometric data, to be 890 M⊙, and statistically, using the photometric data, to be 900 M⊙; and (6) the distance and age of the cluster are 900 ± 100 pc and 140 ± 20 Myr, respectively. Finally, the dynamical evolution parameter τ of the cluster is about 436.2.

  10. High Astrometric Precision in the Calculation of the Coordinates of Orbiters in the GEO Ring

    Science.gov (United States)

    Lacruz, E.; Abad, C.; Downes, J. J.; Hernández-Pérez, F.; Casanova, D.; Tresaco, E.

    2018-04-01

    We present an astrometric method for the calculation of the positions of orbiters in the GEO ring with high precision, through a rigorous astrometric treatment of observations with a 1-m class telescope, which are part of the CIDA survey of the GEO ring. We compute the distortion pattern to correct for the systematic errors introduced by the optics and electronics of the telescope, resulting in absolute mean errors of 0.16″ and 0.12″ in right ascension and declination, respectively. These correspond to ≈25 m at the mean distance of the GEO ring, and are thus good quality results.

  11. Automatic measurement of images on astrometric plates

    Science.gov (United States)

    Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.

    1994-04-01

    We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the scale and tilt between the charge-coupled device (CCD) and microscope coordinate systems, and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; final image centering; and image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization, and selection. Problems related to faint images and crowded fields will be approached with special techniques (morphological filters, histogram properties, and fitting models).

  12. Properties of comet Halley derived from thermal models and astrometric data

    International Nuclear Information System (INIS)

    Hechler, F.W.; Morley, T.A.; Mahr, P.

    1986-01-01

    The motion of a comet nucleus is influenced by outgassing forces. Orbit determination for comet Halley from astrometric data, using empirical force and observation-bias models and incorporating thermal models developed at ESOC into the orbit determination, allows some conclusions to be drawn about the dynamics and physics of comet Halley.

  13. Astrometric detectability of systems with unseen companions: effects of the Earth orbital motion

    Science.gov (United States)

    Butkevich, Alexey G.

    2018-06-01

    The astrometric detection of an unseen companion is based on an analysis of the apparent motion of its host star around the system's barycentre. Systems with an orbital period close to 1 yr may escape detection if the orbital motion of their host stars is observationally indistinguishable from the effects of parallax. Additionally, an astrometric solution may produce a biased parallax estimate for such systems. We examine the effects of the orbital motion of the Earth on astrometric detectability in terms of a correlation between the Earth's orbital position and the position of the star relative to its system barycentre. The χ² statistic for parallax estimation is calculated analytically, leading to expressions that relate the decrease in detectability, and the accompanying parallax bias, to the position correlation function. The impact of the Earth's motion depends critically on the exoplanet's orbital period, diminishing rapidly as the period deviates from 1 yr. A selection effect against 1-yr-period systems is therefore expected. Statistical estimation shows that the corresponding loss of sensitivity results in a typical 10 per cent increase in the detection threshold. Consideration of eccentric orbits shows that the Earth's motion has no effect on detectability for e ≳ 0.5. The dependence of the detectability on other parameters, such as the orbital phases and the inclination of the orbital plane to the ecliptic, is smooth and monotonic because it is described by simple trigonometric functions.
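
    The degeneracy can be reproduced with a toy 1D least-squares fit (invented numbers, not the paper's analytic χ² treatment): an unmodeled companion orbit leaks into the fitted parallax most strongly when its period equals 1 yr.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0.0, 5.0, 200))        # observation epochs [yr]
    P = np.sin(2 * np.pi * t)                      # 1D parallax factor
    A = np.column_stack([np.ones_like(t), t, P])   # offset, proper motion, parallax
    for period in (0.5, 0.9, 1.0, 1.1, 2.0):       # companion period [yr]
        orbit = 0.3 * np.sin(2 * np.pi * t / period)          # 0.3 mas wobble
        x = 1.0 * P + orbit + rng.normal(0.0, 0.05, t.size)   # true parallax 1 mas
        plx = np.linalg.lstsq(A, x, rcond=None)[0][2]
        print(period, plx - 1.0)                   # parallax bias [mas]
    ```

    The bias approaches the full 0.3 mas orbit amplitude at a 1-yr period (the phases here are aligned by construction) and shrinks rapidly away from it.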

  14. The science, technology and mission design for the Laser Astrometric test of relativity

    Science.gov (United States)

    Turyshev, Slava G.

    2006-01-01

    The Laser Astrometric Test of Relativity (LATOR) is a Michelson-Morley-type experiment designed to test Einstein's general theory of relativity in the most intense gravitational environment available in the solar system: the close proximity to the Sun.

  15. [Deep brain stimulation in movement disorders: evidence and therapy standards].

    Science.gov (United States)

    Parpaley, Yaroslav; Skodda, Sabine

    2017-07-01

    Deep brain stimulation (DBS) for movement disorders is a well-established and, in many respects, evidence-based procedure. The treatment indications are very heterogeneous and very specific in their course and therapy. Deep brain stimulation plays an important, but usually not central, role in these conditions. Success in the application of DBS is essentially associated with the correct, appropriate, and timely indication of the therapy in the course of these diseases. Thanks to the good standardization of the DBS procedure and sufficient published data, recommendations for indication, diagnosis, and operative procedures can be generated. The following article attempts to summarize the most important decision-making criteria and current therapy standards in this fairly comprehensive subject and to present them in a practice-oriented manner.

  16. Detailed Astrometric Analysis of Pluto

    Science.gov (United States)

    ROSSI, GUSTAVO B.; Vieira-Martins, R.; Camargo, J. I.; Assafin, M.

    2013-05-01

    Pluto is the main representative of the trans-Neptunian objects (TNOs), presenting some peculiarities such as an atmosphere and a satellite system with 5 known moons: Charon, discovered in 1978, Nix and Hydra, in 2006, P4 in 2011, and P5 in 2012. Until the arrival of the New Horizons spacecraft at this system (July 2015), stellar occultations are the most efficient ground-based method to determine the physical and dynamical properties of this system. In 2010, a drift in declination (about 20 mas/yr) relative to the ephemerides became evident. This fact motivated us to redo the reductions and analysis of a large set of our observations at OPD/LNA, spanning a total of 15 years. The ephemeris and occultation results were then compared with the astrometric and photometric reductions of CCD images of Pluto (around 6500 images). Two corrections were used to refine the data set: differential chromatic refraction and the photocenter. The first is due to the mean color of the background stars being redder than the color of Pluto, resulting in a slightly different path of light through the atmosphere (which may cause a difference in position of 0.1"). It became more evident because Pluto is crossing the region of the galactic plane. The photocenter correction is based on two overlapping Gaussian curves, with different heights and non-coincident centers, corresponding to Pluto and Charon (since they have less than 1" of angular separation). The objective is to separate these two Gaussian curves from the observed one and find the correct position of Pluto. The method depends strongly on the height of each of the Gaussian curves, related to the respective albedos of Charon and Pluto. A detailed analysis of the astrometric results, as well as a comparison with occultation results, was made. Since Pluto has an orbital period of 248.9 years and our interval of observation is about 15 years, we have around 12% of its observed orbit and also, our

  17. Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java

    OpenAIRE

    O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David

    2011-01-01

    This paper provides a description of the Java software framework which has been constructed to run the Astrometric Global Iterative Solution for the Gaia mission. This is the mathematical framework to provide the rigid reference frame for Gaia observations from the Gaia data itself. This process makes Gaia a self calibrated, and input catalogue independent, mission. The framework is highly distributed typically running on a cluster of machines with a database back end. All code is written in ...

  18. Tycho-Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore MD 21218 (United States)

    2017-04-20

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.
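
    The "weighted average of member stars, fully accounting for the correlations between parameters" amounts to a covariance-weighted mean; a compact sketch with hypothetical member data:

    ```python
    import numpy as np

    def weighted_mean_pm(pm, cov):
        """Covariance-weighted mean proper motion of cluster members:
        mu = (sum C_i^-1)^-1 * (sum C_i^-1 mu_i), plus its covariance.
        pm: (N, 2) per-star (pmra, pmdec); cov: (N, 2, 2) covariances."""
        inv = np.linalg.inv(cov)                    # batched 2x2 inverses
        cov_mean = np.linalg.inv(inv.sum(axis=0))
        mu = cov_mean @ np.einsum('nij,nj->i', inv, pm)
        return mu, cov_mean

    # three hypothetical members with correlated pmra/pmdec errors
    pm = np.array([[5.2, -2.5], [5.3, -2.6], [5.1, -2.4]])
    cov = np.tile(np.array([[0.04, 0.01], [0.01, 0.09]]), (3, 1, 1))
    print(weighted_mean_pm(pm, cov))
    ```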

  19. What stellar orbit is needed to measure the spin of the Galactic centre black hole from astrometric data?

    Science.gov (United States)

    Waisberg, Idel; Dexter, Jason; Gillessen, Stefan; Pfuhl, Oliver; Eisenhauer, Frank; Plewa, Phillip M.; Bauböck, Michi; Jimenez-Rosales, Alejandra; Habibi, Maryam; Ott, Thomas; von Fellenberg, Sebastiano; Gao, Feng; Widmann, Felix; Genzel, Reinhard

    2018-05-01

    Astrometric and spectroscopic monitoring of individual stars orbiting the supermassive black hole in the Galactic Center offers a promising way to detect general relativistic effects. While low-order effects are expected to be detected following the periastron passage of S2 in Spring 2018, detecting higher-order effects due to black hole spin will require the discovery of closer stars. In this paper, we set out to determine the requirements such a star would have to satisfy to allow the detection of black hole spin. We focus on the instrument GRAVITY, which saw first light in 2016 and which is expected to achieve astrometric accuracies of 10-100 μas. For an observing campaign with duration T years, total observations N_obs, astrometric precision σ_x, and normalized black hole spin χ, we find that a_orb (1 - e²)^(3/4) ≲ 300 R_S (T/4 yr)^(1/2) (N_obs/120)^(1/4) (10 μas/σ_x)^(1/2) (χ/0.9)^(1/2) is needed. For χ = 0.9 and a potential observing campaign with σ_x = 10 μas, 30 observations per year, and duration 4-10 yr, we expect ~0.1 star with K < 19 satisfying this constraint, based on current knowledge of the stellar population in the central 1 arcsec. We also propose a method through which GRAVITY could potentially measure radial velocities with precision ~50 km s^-1. If the astrometric precision can be maintained, adding radial velocity information increases the expected number of stars by roughly a factor of 2. While we focus on GRAVITY, the results can also be scaled to parameters relevant for future extremely large telescopes.
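
    A direct transcription of the quoted scaling relation, evaluated for the fiducial campaign in the abstract (the function name and the chosen eccentricity are ours):

    ```python
    import numpy as np

    def max_orbit_Rs(T_yr, n_obs, sigma_x_uas, chi, e):
        """Largest semimajor axis (in Schwarzschild radii R_S) still allowing
        a spin measurement, from the abstract's relation:
        a_orb (1-e^2)^(3/4) <~ 300 R_S (T/4yr)^(1/2) (N_obs/120)^(1/4)
                               * (10uas/sigma_x)^(1/2) (chi/0.9)^(1/2)."""
        rhs = (300.0 * np.sqrt(T_yr / 4.0) * (n_obs / 120.0)**0.25
               * np.sqrt(10.0 / sigma_x_uas) * np.sqrt(chi / 0.9))
        return rhs / (1.0 - e**2)**0.75

    # 4 yr, 30 obs/yr, 10 uas, chi = 0.9, and an assumed eccentricity of 0.5
    print(max_orbit_Rs(T_yr=4.0, n_obs=120, sigma_x_uas=10.0, chi=0.9, e=0.5))
    # ~372 R_S
    ```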

  20. NEAT: an astrometric space telescope to search for habitable exoplanets in the solar neighborhood

    Science.gov (United States)

    Crouzier, A.; Malbet, F.; Kern, P.; Feautrier, P.; Preiss, O.; Martin, G.; Henault, F.; Stadler, E.; Lafrasse, S.; Behar, E.; Saintpe, M.; Dupont, J.; Potin, S.; Lagage, P.-O.; Cara, C.; Leger, A.; Leduigou, J.-M.; Shao, M.; Goullioud, R.

    2014-03-01

    The last decade has witnessed a spectacular development of exoplanet detection techniques, which has led to an exponentially growing number of discoveries and a great diversity of known exoplanets. However, the quest for the holy grail of astrobiology, i.e. a nearby terrestrial exoplanet in the habitable zone of a solar-type star, is still ongoing and proves to be very hard. Radial velocities will have to overcome stellar noise if they are to discover habitable planets around stars more massive than M dwarfs. For very close systems, transits are impeded by their low geometrical probability. Here we present an alternative concept: space astrometry. NEAT (Nearby Earth Astrometric Telescope) is an astrometric mission concept proposed to ESA whose goal is to make a whole-sky survey of close (less than 20 pc) planetary systems. The detection limit required for the instrument is the astrometric signal of an Earth analog (at 10 pc). Differential astrometry is a very interesting tool to detect nearby habitable exoplanets. Indeed, for F, G, and K main-sequence stars, the astrophysical noise is smaller than the astrometric signal, contrary to the case for radial velocities. The difficulty lies in the fact that the signal of an exo-Earth around a G-type star at 10 pc is a tiny 0.3 micro-arcsec, which is equivalent to a coin on the Moon, seen from the Earth: the main challenge is related to instrumentation. In order to reach this specification, NEAT consists of two formation-flying spacecraft at a 40 m separation, one carrying the mirror and the other the focal plane. Thus NEAT has a configuration with only one optical surface: an off-axis parabola. Consequently, beam-walk errors are common to the whole field of view and have a small effect on differential astrometry. Moreover, a metrology system projects Young's fringes onto the focal plane, which can characterize the pixels whenever necessary during the mission. NEAT has two main scientific objectives: combined with

  1. Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java

    Science.gov (United States)

    O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David

    2011-10-01

    This paper provides a description of the Java software framework which has been constructed to run the Astrometric Global Iterative Solution for the Gaia mission. This is the mathematical framework to provide the rigid reference frame for Gaia observations from the Gaia data itself. This process makes Gaia a self calibrated, and input catalogue independent, mission. The framework is highly distributed typically running on a cluster of machines with a database back end. All code is written in the Java language. We describe the overall architecture and some of the details of the implementation.

  2. Gravitational lensing statistics with extragalactic surveys - II. Analysis of the Jodrell Bank-VLA Astrometric Survey

    NARCIS (Netherlands)

    Helbig, P; Marlow, D; Quast, R; Wilkinson, PN; Browne, IWA; Koopmans, LVE

    We present constraints on the cosmological constant λ_0 from gravitational lensing statistics of the Jodrell Bank-VLA Astrometric Survey (JVAS). Although this is the largest gravitational lens survey which has been analysed, cosmological constraints are only comparable to those from optical

  3. Gravitational lensing statistics with extragalactic surveys; 2, Analysis of the Jodrell Bank-VLA Astrometric Survey

    NARCIS (Netherlands)

    Helbig, P.; Marlow, D. R.; Quast, R.; Wilkinson, P. N.; Browne, I. W. A.; Koopmans, L. V. E.

    1999-01-01

    Published in: Astron. Astrophys. Suppl. Ser. 136 (1999) no. 2, pp. 297-305. We present constraints on the cosmological constant λ_0 from gravitational lensing statistics of the Jodrell Bank-VLA Astrometric Survey (JVAS). Although this

  4. VizieR Online Data Catalog: HD 128311 radial velocity and astrometric data (McArthur+, 2014)

    Science.gov (United States)

    McArthur, B. E.; Benedict, G. F.; Henry, G. W.; Hatzes, A.; Cochran, W. D.; Harrison, T. E.; Johns-Krull, C.; Nelan, E.

    2017-05-01

    The High Resolution Spectrograph (HRS; Tull, 1998SPIE.3355..387T) at the HET at McDonald Observatory was used to make the spectroscopic observations using the iodine absorption cell method (Butler et al. 1996PASP..108..500B). Our reduction of HET HRS data is given in Bean et al. (2007AJ....134..749B), which uses the REDUCE package (Piskunov & Valenti, 2002A&A...385.1095P). Our observations include a total of 355 high-resolution spectra which were obtained between 2005 April and 2011 January. Because typically two or more observations were made in less than 1 hr per night, we observed at 161 epochs with the HET HRS. The astrometric observations were made with the Hubble Space Telescope (HST) Fine Guidance Sensor (FGS) 1r, a two-axis interferometer, in position (POS) "fringe-tracking" mode. Twenty-nine orbits of HST astrometric observations were made between 2007 December and 2009 August. (2 data files).

  5. Astrometric surveys in the Gaia era

    Science.gov (United States)

    Zacharias, Norbert

    2018-04-01

    The Gaia first data release (DR1) already provides an almost error-free optical reference frame at the milli-arcsecond (mas) level, allowing significantly better calibration of ground-based astrometric data than ever before. Gaia DR1 provides positions, proper motions and trigonometric parallaxes for just over 2 million stars in the Tycho-2 catalog. For over 1.1 billion additional stars DR1 gives positions. Proper motions for these, mainly fainter stars (G >= 11.5) are currently provided by several new projects which combine earlier-epoch ground-based observations with Gaia DR1 positions. These data are very helpful in the interim period but will become obsolete with the second Gaia data release (DR2) expected in April 2018. The era of traditional, ground-based, wide-field astrometry with the goal of providing accurate reference stars has come to an end. Future ground-based astrometry will fill in some gaps (very bright stars, observations needed at many or specific epochs) and will mainly go fainter than the Gaia limit, like the Pan-STARRS and upcoming LSST surveys.

  6. HIGH-PRECISION ASTROMETRIC MILLIMETER VERY LONG BASELINE INTERFEROMETRY USING A NEW METHOD FOR MULTI-FREQUENCY CALIBRATION

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, Richard; Rioja, María J. [International Centre for Radio Astronomy Research, M468, The University of Western Australia, 35 Stirling Hwy, Crawley, Western Australia 6009 (Australia); Molina, Sol N.; Gómez, José L., E-mail: richard.dodson@icrar.org [Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada (Spain)

    2017-01-10

    In this paper we describe a new approach for millimeter Very Long Baseline Interferometry (mm-VLBI) calibration that provides bona-fide astrometric alignment of the millimeter-wavelength images from a single source, for the measurement of frequency-dependent effects, such as “core-shifts” near the black hole of active galactic nucleus jets. We achieve our astrometric alignment by solving first for the ionospheric (dispersive) contributions using wide-band centimeter-wavelength observations. Second, we solve for the tropospheric (non-dispersive) contributions by using fast frequency-switching at the target millimeter wavelengths. These solutions can be scaled and transferred from the low frequency to the high frequency. To complete the calibration chain, an additional step is required to remove a residual constant phase offset on each antenna. The result is an astrometric calibration and the measurement of the core-shift between 22 and 43 GHz for the jet in BL Lacertae of −8 ± 5 and 20 ± 6 μas in R.A. and decl., respectively. By comparison to conventional phase referencing at centimeter wavelengths, we are able to show that this core shift at millimeter wavelengths is significantly less than what would be predicted by extrapolating the low-frequency result, which closely followed the predictions of the Blandford and Königl conical jet model. As such, it would be the first demonstration of the association of the VLBI core with a recollimation shock, normally hidden at low frequencies due to the optical depth, which could be responsible for the γ-ray production in blazar jets.

  7. HIGH-PRECISION ASTROMETRIC MILLIMETER VERY LONG BASELINE INTERFEROMETRY USING A NEW METHOD FOR MULTI-FREQUENCY CALIBRATION

    International Nuclear Information System (INIS)

    Dodson, Richard; Rioja, María J.; Molina, Sol N.; Gómez, José L.

    2017-01-01

    In this paper we describe a new approach for millimeter Very Long Baseline Interferometry (mm-VLBI) calibration that provides bona-fide astrometric alignment of the millimeter-wavelength images from a single source, for the measurement of frequency-dependent effects, such as “core-shifts” near the black hole of active galactic nucleus jets. We achieve our astrometric alignment by solving first for the ionospheric (dispersive) contributions using wide-band centimeter-wavelength observations. Second, we solve for the tropospheric (non-dispersive) contributions by using fast frequency-switching at the target millimeter wavelengths. These solutions can be scaled and transferred from the low frequency to the high frequency. To complete the calibration chain, an additional step is required to remove a residual constant phase offset on each antenna. The result is an astrometric calibration and the measurement of the core-shift between 22 and 43 GHz for the jet in BL Lacertae of −8 ± 5 and 20 ± 6 μas in R.A. and decl., respectively. By comparison to conventional phase referencing at centimeter wavelengths, we are able to show that this core shift at millimeter wavelengths is significantly less than what would be predicted by extrapolating the low-frequency result, which closely followed the predictions of the Blandford and Königl conical jet model. As such, it would be the first demonstration of the association of the VLBI core with a recollimation shock, normally hidden at low frequencies due to the optical depth, which could be responsible for the γ-ray production in blazar jets.

  8. Nano-JASMINE: use of AGIS for the next astrometric satellite

    Science.gov (United States)

    Yamada, Y.; Gouda, N.; Lammers, U.

    The core data reduction for the Nano-JASMINE mission is planned to be done with Gaia's Astrometric Global Iterative Solution (AGIS). The collaboration started in 2007, prompted by Uwe Lammers' proposal. In addition to the similar design and operating principles of the two missions, this is possible thanks to the encapsulation of all Gaia-specific aspects of AGIS in a Parameter Database. Nano-JASMINE will be the test bench for the Gaia AGIS software. We present this idea in detail and the practical steps necessary to make AGIS work with Nano-JASMINE data. We also show the key mission parameters, goals, and status of the data reduction for Nano-JASMINE.

  9. A real-time standard parts inspection based on deep learning

    Science.gov (United States)

    Xu, Kuan; Li, XuDong; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Standard parts are necessary components in mechanical structures such as bogies and connectors; these structures may shatter or loosen if standard parts are lost, so real-time standard parts inspection systems are essential to guarantee their safety. Researchers favor inspection systems based on deep learning because they work well on images with complex backgrounds, which are common in standard parts inspection. A typical inspection detection system contains two basic components: a feature extractor and an object classifier. For the object classifier, the Region Proposal Network (RPN) is one of the most essential architectures in most state-of-the-art object detection systems. However, in the basic RPN architecture the Region of Interest (ROI) proposals have fixed sizes (9 anchors for each pixel); they are effective, but they waste considerable computing resources and time. In standard parts detection, the parts have known sizes, so we can choose anchor sizes from the ground truths through machine learning. Our experiments show that 2 anchors achieve almost the same accuracy and recall rate. Our standard parts detection system reaches 15 fps on an NVIDIA GTX1080 (GPU) while achieving a detection accuracy of 90.01% mAP.
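
    The anchor reduction is easy to sketch. In the code below the sizes are hypothetical stand-ins for shapes learned from annotated ground-truth boxes; the point is simply that two data-driven anchor shapes per feature-map cell replace the usual nine fixed ones:

        import numpy as np

        def make_anchors(feat_h, feat_w, stride, anchor_sizes):
            """Return (x1, y1, x2, y2) anchor boxes centred on every feature-map cell."""
            boxes = []
            for iy in range(feat_h):
                for ix in range(feat_w):
                    cx, cy = (ix + 0.5) * stride, (iy + 0.5) * stride  # centre in image coords
                    for w, h in anchor_sizes:
                        boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
            return np.array(boxes)

        # Standard RPN: 9 anchors per cell (3 scales x 3 aspect ratios).
        # Known-size parts: 2 anchor shapes per cell, chosen from ground-truth
        # box statistics (the sizes below are hypothetical).
        anchors = make_anchors(feat_h=38, feat_w=50, stride=16,
                               anchor_sizes=[(48, 48), (96, 64)])
        print(anchors.shape)  # (38 * 50 * 2, 4) instead of (38 * 50 * 9, 4)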

  10. Astrometric tests of General Relativity in the Solar system

    International Nuclear Information System (INIS)

    Gai, M; Vecchiato, A; Riva, A; Lattanzi, M G; Sozzetti, A; Crosta, M T; Busonero, D

    2014-01-01

    Micro-arcsecond astronomy is able to verify the predictions of theoretical models of gravitation at a level adequate to constrain the relevant parameters and select among different formulations. In particular, this concerns the weak-field limit applicable to the Sun's neighborhood, where competing models can be expressed in a common framework such as the Parametrised Post-Newtonian and Parametrised Post-Post-Newtonian formulations. The Gaia mission is going to provide an unprecedented determination of the γ PPN parameter at the 10⁻⁶ level. Other recently proposed concepts, such as GAME, may improve the precision on γ by one or two orders of magnitude and provide constraints on other crucial phenomenological aspects. We review the key concepts of astrometric tests of General Relativity and discuss a possible development scenario.
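
    The measurable at the heart of these tests is the standard weak-field light deflection; for a ray passing a body of mass M at impact parameter b,

        \delta\theta = \frac{1+\gamma}{2}\,\frac{4GM}{c^{2}b},

    which is about 1.75 arcsec at the solar limb for γ = 1, so a determination of γ at the 10⁻⁶ level corresponds to micro-arcsecond control of this angle.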

  11. Iterative methods used in overlap astrometric reduction techniques do not always converge

    Science.gov (United States)

    Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.

    1993-04-01

    In this paper we prove that the classical Gauss-Seidel-type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge, and we exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can converge when the Gauss-Seidel method diverges. We conjecture the convergence of Wang's method for the solution of astrometric problems using overlap techniques.
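
    The divergence phenomenon is easy to reproduce on a toy system (the 2×2 matrix below is illustrative, not taken from the paper): its Gauss-Seidel iteration matrix has spectral radius 6, so the residual grows by roughly a factor of six per sweep.

        import numpy as np

        def gauss_seidel(A, b, x0, iters):
            """Plain Gauss-Seidel sweeps; returns the residual norm after each sweep."""
            x = x0.astype(float).copy()
            res = []
            for _ in range(iters):
                for i in range(len(b)):
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
                res.append(np.linalg.norm(A @ x - b))
            return res

        # Non-diagonally-dominant system; iteration matrix eigenvalues are {0, 6}.
        A = np.array([[1.0, 2.0],
                      [3.0, 1.0]])
        b = np.array([1.0, 1.0])
        print(gauss_seidel(A, b, np.zeros(2), iters=8))  # residuals grow ~6x per sweep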

  12. Precision Orbit of δ Delphini and Prospects for Astrometric Detection of Exoplanets

    Science.gov (United States)

    Gardner, Tyler; Monnier, John D.; Fekel, Francis C.; Williamson, Mike; Duncan, Douglas K.; White, Timothy R.; Ireland, Michael; Adams, Fred C.; Barman, Travis; Baron, Fabien; ten Brummelaar, Theo; Che, Xiao; Huber, Daniel; Kraus, Stefan; Roettenbacher, Rachael M.; Schaefer, Gail; Sturmann, Judit; Sturmann, Laszlo; Swihart, Samuel J.; Zhao, Ming

    2018-03-01

    Combining visual and spectroscopic orbits of binary stars leads to a determination of the full 3D orbit, individual masses, and distance to the system. We present a full analysis of the evolved binary system δ Delphini using astrometric data from the MIRC and PAVO instruments on the CHARA long-baseline interferometer, 97 new spectra from the Fairborn Observatory, and 87 unpublished spectra from the Lick Observatory. We determine the full set of orbital elements for δ Del, along with masses of 1.78 ± 0.07 M_⊙ and 1.62 ± 0.07 M_⊙ for each component, and a distance of 63.61 ± 0.89 pc. These results are important in two contexts: for testing stellar evolution models and for defining the detection capabilities for future planet searches. We find that the evolutionary state of this system is puzzling, as our measured flux ratios, radii, and masses imply a ∼200 Myr age difference between the components using standard stellar evolution models. Possible explanations for this age discrepancy include mass-transfer scenarios with a now-ejected tertiary companion. For individual measurements taken over a span of two years, we achieve a sensitivity sufficient to detect planets of 2 M_J on orbits >0.75 au around individual components of hot binary stars via differential astrometry.
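
    The way the combination yields masses and distance can be summarized with the standard relations (a sketch of the textbook formulas, not the authors' fitting machinery; K₁, K₂ are the radial-velocity semi-amplitudes in km/s, P is in days in the first relation and in years in the third):

        a \sin i \,[\mathrm{AU}] \simeq 9.19\times10^{-5}\,(K_1+K_2)\,P\,\sqrt{1-e^{2}}, \qquad
        d\,[\mathrm{pc}] = \frac{a\,[\mathrm{AU}]}{a''\,[\mathrm{arcsec}]}, \qquad
        M_1+M_2 = \frac{a\,[\mathrm{AU}]^{3}}{P\,[\mathrm{yr}]^{2}}\;[M_\odot].

    The spectroscopic orbit fixes the physical scale, the visual (interferometric) orbit fixes the angular scale and inclination, and their ratio gives the orbital parallax quoted above.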

  13. ASTROMETRIC MASSES OF 26 ASTEROIDS AND OBSERVATIONS ON ASTEROID POROSITY

    International Nuclear Information System (INIS)

    Baer, James; Chesley, Steven R.; Matson, Robert D.

    2011-01-01

    As an application of our recent observational error model, we present the astrometric masses of 26 main-belt asteroids. We also present an integrated ephemeris of 300 large asteroids, which was used in the mass determination algorithm to model significant perturbations from the rest of the main belt. After combining our mass estimates with those of other authors, we study the bulk porosities of over 50 main-belt asteroids and observe that asteroids as large as 300 km in diameter may be loose aggregates. This finding may place specific constraints on models of main-belt collisional evolution. Additionally, we observe that C-group asteroids tend to have significantly higher macroporosity than S-group asteroids.
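
    The porosity step is a one-line computation once the astrometric mass M and a volume-equivalent radius R are in hand; comparing the resulting bulk density with the grain density of the analogous meteorite class gives the macroporosity:

        \rho_{\rm bulk} = \frac{M}{\tfrac{4}{3}\pi R^{3}}, \qquad
        P_{\rm macro} = 1 - \frac{\rho_{\rm bulk}}{\rho_{\rm grain}}.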

  14. Deep round window insertion versus standard approach in cochlear implant surgery.

    Science.gov (United States)

    Nordfalk, Karl Fredrik; Rasmussen, Kjell; Bunne, Marie; Jablonski, Greg Eigner

    2016-01-01

    The aim of this study was to compare the outcomes of vestibular tests and the residual hearing of patients who have undergone full-insertion cochlear implant surgery using the round window approach with a hearing preservation protocol (RW-HP) or the standard cochleostomy approach (SCA) without hearing preservation. A prospective study of 34 adults who underwent unilateral cochlear implantation was carried out. One group was operated on using the RW-HP approach (n = 17) with a Med-El +Flex(SOFT) electrode array with full insertion, while the control group underwent more conventional SCA surgery (n = 17) with shorter perimodiolar electrodes. Assessments of residual hearing, cervical vestibular-evoked myogenic potentials (cVEMP), videonystagmography, and subjective visual vertical/horizontal (SVH/SVV) were performed before and after surgery. There was a significantly greater number of subjects (p < 0.05) who exhibited complete or partial hearing preservation in the deep-insertion RW-HP group (9/17) compared to the SCA group (2/15). A higher degree of vestibular loss but a lower degree of vertigo symptoms was seen in the RW-HP group, but the differences were not statistically significant. It is possible to preserve residual hearing to a certain extent even with deep insertion. Full insertion with hearing preservation was less harmful to residual hearing, particularly at 125 Hz (p < 0.05), than the standard cochleostomy approach.

  15. Astrometric analysis of the unresolved binary MU Cassiopeiae from photographs taken with the Sproul 61 centimeter refractor

    International Nuclear Information System (INIS)

    Lippincott, S.L.

    1981-01-01

    Mu Cassiopeiae, a high-velocity Population II subdwarf, is an astrometric binary which has been on the Sproul Observatory astrometric program since 1937. The data yield P = 21.43 yr, with a photocentric semi-major axis α = 0″.186 ± 0″.001 (p.e.) and a relative parallax π = +0″.130 ± 0″.001. Rigorous masses for the components from the Sproul results will follow in the future only in conjunction with reliable values for Δm and separation derived from other techniques. The best tentative values of Δm and separation so far found suggest M_A = 0.7 M_⊙ and M_B ≈ 0.2 M_⊙ with Δm ≈ 4.5, which indicates a higher He content for μ Cas A than for the Sun. The masses are of particular interest because they hold a clue to the chemical composition of the system, which is likely to be similar to that of the interstellar medium during the early stages of our Galaxy, at the time μ Cas is thought to have originated.
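
    The dependence on Δm enters through the standard photocentric relation for an unresolved binary: the photocentric semi-major axis α is the relative semi-major axis a scaled by the difference between the secondary's mass fraction and its light fraction,

        \alpha = (B-\beta)\,a, \qquad B = \frac{M_B}{M_A+M_B}, \qquad
        \beta = \frac{\ell_B}{\ell_A+\ell_B} = \frac{1}{1+10^{\,0.4\,\Delta m}},

    which is why reliable external values of Δm and the separation are needed before individual masses can be extracted from α.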

  16. To Boldly Go Where No Man has Gone Before: Seeking Gaia's Astrometric Solution with AGIS

    Science.gov (United States)

    Lammers, U.; Lindegren, L.; O'Mullane, W.; Hobbs, D.

    2009-09-01

    Gaia is ESA's ambitious space astrometry mission with a foreseen launch date in late 2011. Its main objective is to perform a stellar census of the 1,000 million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec (μas) level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS) - the mathematical and numerical framework for combining the ≈80 available observations per star obtained during Gaia's 5 yr lifetime into a single global astrometric solution. AGIS consists of four main algorithmic cores which improve the source astrometric parameters, satellite attitude, calibration, and global parameters in a block-iterative manner. We present and discuss this basic scheme, the algorithms themselves, and the overarching system architecture. The latter is a data-driven distributed processing framework designed to achieve an overall system performance that is not I/O limited. AGIS is being developed as a pure Java system by a small number of geographically distributed European groups. We present some of the software engineering aspects of the project and the methodologies and tools used. Finally we briefly discuss how AGIS is embedded into the overall Gaia data processing architecture.
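
    The block-iterative idea can be sketched with a toy analogue (synthetic data and a deliberately simplified observation model, not the real Gaia data model): each observation depends on one "source" unknown and one "attitude" unknown, and the two blocks are solved alternately, each while the other is held fixed, until the updates stall.

        import numpy as np

        rng = np.random.default_rng(1)
        n_src, n_att, n_obs = 200, 50, 5000
        true_s = rng.normal(0.0, 1.0, n_src)             # "source" unknowns
        true_a = rng.normal(0.0, 1.0, n_att)             # "attitude" unknowns
        i_src = rng.integers(0, n_src, n_obs)            # source seen in each observation
        i_att = rng.integers(0, n_att, n_obs)            # attitude interval of each obs
        obs = true_s[i_src] + true_a[i_att] + rng.normal(0.0, 0.01, n_obs)

        s, a = np.zeros(n_src), np.zeros(n_att)
        for _ in range(25):
            # Source block: per-source mean residual with the attitude held fixed.
            s = np.bincount(i_src, obs - a[i_att], n_src) / np.bincount(i_src, minlength=n_src)
            # Attitude block: per-interval mean residual with the sources held fixed.
            a = np.bincount(i_att, obs - s[i_src], n_att) / np.bincount(i_att, minlength=n_att)

        shift = (s - true_s).mean()                      # fix the global zero-point gauge
        print(np.std(s - shift - true_s))                # approaches the noise floor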

  17. A NEW APPLICATION OF THE ASTROMETRIC METHOD TO BREAK SEVERE DEGENERACIES IN BINARY MICROLENSING EVENTS

    International Nuclear Information System (INIS)

    Chung, Sun-Ju; Park, Byeong-Gon; Humphrey, Andrew; Ryu, Yoon-Hyun

    2009-01-01

    When a source star is microlensed by one component of a widely separated binary star system, a second lensing event induced by the other component can additionally be detected after the first event has finished. In this paper, we investigate whether the close/wide degeneracy in binary lensing events can be resolved by detecting the additional centroid shift of the source images induced by the secondary component in wide binary lensing events. From this investigation, we find that if the source star passes close to the Einstein ring of the secondary companion, the degeneracy can be easily resolved by using future astrometric follow-up observations with high astrometric precision. We determine the probability of detecting the additional centroid shift in binary lensing events with high magnification. From this, we find that the degeneracy of binary lensing events with a separation ≲20.0 AU can be resolved with significant efficiency. We also estimate the waiting time for the detection of the additional centroid shift in wide binary lensing events. We find that for typical Galactic lensing events with a separation ≲20.0 AU, the additional centroid shift can be detected within 100 days, and thus the degeneracy of those events can be broken within a year.

  18. The principle of measuring unusual change of underground mass by optical astrometric instrument

    Directory of Open Access Journals (Sweden)

    Wang Jiancheng

    2012-11-01

    In this study, we estimate the deflection angle of the plumb line at a ground site and give a relation between the angle, the abnormal mass, and the site distance (depth and horizontal distance). We then derive the density abnormality of underground material using the plumb lines measured at different sites, and study earthquake gestation, development, and occurrence. Using the deflection angles of plumb lines observed at two sites, we give a method to calculate the mass and the center of gravity of underground materials. We also estimate the abnormal masses of latent seismic zones with different energies using thermodynamic relations, and introduce a new optical astrometric instrument we have developed.

  19. The Caviar software package for the astrometric reduction of Cassini ISS images: description and examples

    Science.gov (United States)

    Cooper, N. J.; Lainey, V.; Meunier, L.-E.; Murray, C. D.; Zhang, Q.-F.; Baillie, K.; Evans, M. W.; Thuillot, W.; Vienne, A.

    2018-02-01

    Aims: Caviar is a software package designed for the astrometric measurement of natural satellite positions in images taken using the Imaging Science Subsystem (ISS) of the Cassini spacecraft. Aspects of the structure, functionality, and use of the software are described, and examples are provided. The integrity of the software is demonstrated by generating new measurements of the positions of selected major satellites of Saturn, 2013-2016, along with their observed minus computed (O-C) residuals relative to published ephemerides. Methods: Satellite positions were estimated by fitting a model to the imaged limbs of the target satellites. Corrections to the nominal spacecraft pointing were computed using background star positions based on the UCAC5 and Tycho2 star catalogues. UCAC5 is currently used in preference to Gaia-DR1 because of the availability of proper motion information in UCAC5. Results: The Caviar package is available for free download. A total of 256 new astrometric observations of the Saturnian moons Mimas (44), Tethys (58), Dione (55), Rhea (33), Iapetus (63), and Hyperion (3) have been made, in addition to opportunistic detections of Pandora (20), Enceladus (4), Janus (2), and Helene (5), giving an overall total of 287 new detections. Mean observed-minus-computed residuals for the main moons relative to the JPL SAT375 ephemeris were −0.66 ± 1.30 pixels in the line direction and 0.05 ± 1.47 pixels in the sample direction. Mean residuals relative to the IMCCE NOE-6-2015-MAIN-coorb2 ephemeris were −0.34 ± 0.91 pixels in the line direction and 0.15 ± 1.65 pixels in the sample direction. The reduced astrometric data are provided in the form of satellite positions for each image. The reference star positions are included in order to allow reprocessing at some later date using improved star catalogues, such as later releases of Gaia, without the need to re-estimate the imaged star positions. The Caviar software is available for free download from: ftp

  20. Deep learning evaluation using deep linguistic processing

    OpenAIRE

    Kuhnle, Alexander; Copestake, Ann

    2017-01-01

    We discuss problems with the standard approaches to evaluation for tasks like visual question answering, and argue that artificial data can be used to address these as a complement to current practice. We demonstrate that with the help of existing 'deep' linguistic processing technology we are able to create challenging abstract datasets, which enable us to investigate the language understanding abilities of multimodal deep learning models in detail, as compared to a single performance value ...

  1. Double-blind test program for astrometric planet detection with Gaia

    Science.gov (United States)

    Casertano, S.; Lattanzi, M. G.; Sozzetti, A.; Spagna, A.; Jancart, S.; Morbidelli, R.; Pannunzio, R.; Pourbaix, D.; Queloz, D.

    2008-05-01

    Aims: The scope of this paper is twofold. First, it describes the simulation scenarios and the results of a large-scale, double-blind test campaign carried out to estimate the potential of Gaia for detecting and measuring planetary systems. The identified capabilities are then put in context by highlighting the unique contribution that the Gaia exoplanet discoveries will be able to bring to the science of extrasolar planets in the next decade. Methods: We use detailed simulations of the Gaia observations of synthetic planetary systems and develop and utilize independent software codes in double-blind mode to analyze the data, including statistical tools for planet detection and different algorithms for single and multiple Keplerian orbit fitting that use no a priori knowledge of the true orbital parameters of the systems. Results: 1) Planets with astrometric signatures α ≃ 3 times the assumed single-measurement error σ_ψ and period P ≤ 5 yr can be detected reliably and consistently, with a very small number of false positives. 2) At twice the detection limit, uncertainties in orbital parameters and masses are typically 15-20%. 3) Over 70% of two-planet systems with well-separated periods in the range 0.2 ≤ P ≤ 9 yr, astrometric signal-to-noise ratio 2 ≤ α/σ_ψ ≤ 50, and eccentricity e ≤ 0.6 are correctly identified. 4) Favorable orbital configurations (both planets with P ≤ 4 yr and α/σ_ψ ≥ 10, redundancy over a factor of 2 in the number of observations) have orbital elements measured to better than 10% accuracy >90% of the time, and the value of the mutual inclination angle i_rel determined with uncertainties ≤ 10°. 5) Finally, nominal uncertainties obtained from the fitting procedures are a good estimate of the actual errors in the orbit reconstruction. Extrapolating from the present-day statistical properties of the exoplanet sample, the results imply that a Gaia with σ_ψ = 8 μas, in its unbiased and complete magnitude-limited census of
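
    For reference, the astrometric signature α used throughout is the standard displacement of the host star,

        \alpha = \frac{M_p}{M_\star}\,\frac{a_p}{d}
        \simeq 500\,\mu{\rm as}\left(\frac{M_p}{M_{\rm Jup}}\right)
        \left(\frac{M_\star}{M_\odot}\right)^{-1}
        \left(\frac{a_p}{5\,{\rm AU}}\right)
        \left(\frac{d}{10\,{\rm pc}}\right)^{-1},

    with a_p the planet's semi-major axis in AU and d the distance in pc.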

  2. A 481pJ/decision 3.4M decision/s Multifunctional Deep In-memory Inference Processor using Standard 6T SRAM Array

    OpenAIRE

    Kang, Mingu; Gonugondla, Sujan; Patil, Ameya; Shanbhag, Naresh

    2016-01-01

    This paper describes a multi-functional deep in-memory processor for inference applications. Deep in-memory processing is achieved by embedding pitch-matched low-SNR analog processing into a standard 6T 16KB SRAM array in 65 nm CMOS. Four applications are demonstrated. The prototype achieves up to 5.6X (9.7X estimated for multi-bank scenario) energy savings with negligible (

  3. Gaia’s Cepheids and RR Lyrae stars and luminosity calibrations based on Tycho-Gaia Astrometric Solution

    Directory of Open Access Journals (Sweden)

    Clementini Gisella

    2017-01-01

    Gaia Data Release 1 contains parallaxes for more than 700 Galactic Cepheids and RR Lyrae stars, computed as part of the Tycho-Gaia Astrometric Solution (TGAS). We have used TGAS parallaxes, along with literature (V, I, J, Ks, W1) photometry and spectroscopy, to calibrate the zero point of the period-luminosity and period-Wesenheit relations of classical and type II Cepheids, and the near-infrared period-luminosity, period-luminosity-metallicity and optical luminosity-metallicity relations of RR Lyrae stars. In this contribution we briefly summarise results obtained by fitting these basic relations, adopting different techniques that operate either in parallax or distance (absolute magnitude) space.

  4. Effects of a non-standard W± magnetic moment in W± production via deep inelastic e-P scattering

    International Nuclear Information System (INIS)

    Boehm, M.; Rosado, A.

    1989-01-01

    We calculate the production of charged bosons in deep inelastic e-P scattering in the context of an electroweak model in which the vector boson self-interactions may differ from those prescribed by the electroweak standard model. We present results which show the dependence of the cross section on the anomalous magnetic dipole moment κ of the W±. We find, for energies available at HERA, that even small deviations from the standard model value of κ imply observable deviations in the W± production rates. We also show that the contributions from heavy-boson-exchange diagrams are very important. (orig.)

  5. A New Browser-based, Ontology-driven Tool for Generating Standardized, Deep Descriptions of Geoscience Models

    Science.gov (United States)

    Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.

    2016-12-01

    Standardized metadata for models is the key to reliable and greatly simplified coupling in model-coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information, and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, and this is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets, and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF files (Resource Description Framework). This ontology is based on core concepts

  6. The laser astrometric test of relativity mission

    International Nuclear Information System (INIS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth L.

    2004-01-01

    This paper discusses a new fundamental physics experiment to test relativistic gravity at an accuracy better than the effects of second order in the gravitational field strength, ∝ G². The Laser Astrometric Test Of Relativity (LATOR) mission uses laser interferometry between two micro-spacecraft whose lines of sight pass close by the Sun to accurately measure deflection of light in the solar gravity. The key element of the experimental design is a redundant-geometry optical truss provided by a long-baseline (100 m) multi-channel stellar optical interferometer placed on the International Space Station (ISS). The interferometer is used for measuring the angles between the two spacecraft. In Euclidean geometry, determination of a triangle's three sides determines any angle therein; with gravity changing the optical lengths of the sides passing close by the Sun and deflecting the light, the Euclidean relationships are overthrown. The geometric redundancy enables LATOR to measure the departure from Euclidean geometry caused by the solar gravity field to very high accuracy. LATOR will not only improve the value of the parameterized post-Newtonian (PPN) parameter γ to an unprecedented accuracy of 10⁻⁸, it will also be able to measure effects of the next post-Newtonian order (c⁻⁴) of light deflection resulting from gravity's intrinsic non-linearity. The solar quadrupole moment parameter, J₂, will be measured with high precision, as well as a variety of other relativistic effects including Lense-Thirring precession. LATOR will lead to very robust advances in the tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and a natural culmination of solar system gravity experiments.

  7. First results of astrometric and photometric processing of scanned plates DLFA MAO NAS of Ukraine

    Science.gov (United States)

    Shatokhina, S.; Andruk, V.; Yatsenko, A.

    2011-02-01

    In this paper a first assessment of the astrometric and photometric results of the digitization of images on plates of the Double Long Focus Astrograph (DLFA) is made. The digitization of the plates was carried out with a Microtek ScanMaker 9800XL TMA scanner, and the LINUX/MIDAS/ROMAFOT package was used for image processing. For selected DLFA plates, the mean square errors for equatorial coordinates (in the system of the TYCHO-2 catalogue) and stellar magnitudes (in the Johnson B system) per image are 0.06″ and 0.13 mag. The errors are of a random nature, and there are no systematic dependences on the coordinates, magnitudes, or colours of the stars. The obtained results were compared with those of earlier plate measurements made with the PARSEC complex.

  8. Estimation of position and velocity for a low dynamic vehicle in near space using nonresolved photometric and astrometric data.

    Science.gov (United States)

    Jing, Nan; Li, Chuang; Chong, Yaqin

    2017-01-20

    An estimation method for indirectly observable parameters for a typical low dynamic vehicle (LDV) is presented. The estimation method utilizes apparent magnitude, azimuth angle, and elevation angle to estimate the position and velocity of a typical LDV, such as a high altitude balloon (HAB). In order to validate the accuracy of the estimated parameters obtained from an unscented Kalman filter, two sets of experiments were carried out to obtain the nonresolved photometric and astrometric data. In the experiments, a HAB launch was planned; models of the HAB dynamics and kinematics and observation models were built to serve as the time-update and measurement-update functions, respectively. When the HAB was launched, a ground-based optoelectronic detector was used to capture the object images, which were processed using aperture photometry to obtain the time-varying apparent magnitude of the HAB. Two sets of actual and estimated parameters are given to clearly indicate the parameter differences. Two sets of errors between the actual and estimated parameters are also given to show how the estimated position and velocity differ with respect to the observation time. The similar distribution curves from the two scenarios, which agree within 3σ, verify that nonresolved photometric and astrometric data can be used to estimate the indirectly observable state parameters (position and velocity) of a typical LDV. This technique can be applied to small and dim space objects in the future.
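
    A minimal sketch of such a filter, under stated assumptions (constant-velocity dynamics, a detector at the origin, and a hypothetical magnitude zero-point m0), using the filterpy UKF implementation rather than the authors' own code:

        import numpy as np
        from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

        def fx(x, dt):
            """Near-constant-velocity motion for a slow vehicle: x = [px,py,pz,vx,vy,vz]."""
            F = np.eye(6)
            F[0, 3] = F[1, 4] = F[2, 5] = dt
            return F @ x

        def hx(x):
            """Measurement model: azimuth, elevation, and a range-dependent magnitude.
            The zero-point m0 is a hypothetical calibration constant."""
            px, py, pz = x[:3]
            rng = np.sqrt(px**2 + py**2 + pz**2)
            az = np.arctan2(py, px)
            el = np.arcsin(pz / rng)
            mag = 5.0 + 5.0 * np.log10(rng / 1000.0)   # flux falls off as range^2
            return np.array([az, el, mag])

        points = MerweScaledSigmaPoints(n=6, alpha=1e-3, beta=2.0, kappa=0.0)
        ukf = UnscentedKalmanFilter(dim_x=6, dim_z=3, dt=1.0, fx=fx, hx=hx, points=points)
        ukf.x = np.array([5e3, 5e3, 2e4, 1.0, 0.5, 0.2])   # initial state guess (m, m/s)
        ukf.P *= 1e4                                       # loose initial covariance
        ukf.R = np.diag([1e-6, 1e-6, 0.01])                # angle and magnitude noise

        z = np.array([0.786, 1.231, 11.6])                 # one illustrative measurement
        ukf.predict()
        ukf.update(z)
        print(ukf.x[:3])                                   # estimated position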

  9. Absolute Nuv magnitudes of Gaia DR1 astrometric stars and a search for hot companions in nearby systems

    Science.gov (United States)

    Makarov, V. V.

    2017-10-01

    Accurate parallaxes from Gaia DR1 (TGAS) are combined with GALEX visual Nuv magnitudes to produce absolute Mnuv magnitudes and an ultraviolet HR diagram for a large sample of astrometric stars. A functional fit is derived of the lower envelope main sequence of the nearest 1403 stars (distance Pleiades, or, most likely, tight interacting binaries of the BY Dra-type. A separate collection of 40 stars with precise trigonometric parallaxes and Nuv-G colors bluer than 2 mag is presented. It includes several known novae, white dwarfs, and binaries with hot subdwarf (sdOB) components, but most remain unexplored.
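
    The absolute magnitudes follow from the usual parallax relation; with the TGAS parallax ϖ expressed in milliarcseconds,

        M_{\rm Nuv} = {\rm Nuv} + 5\log_{10}\varpi_{\rm mas} - 10,

    so a 1% parallax error contributes only about 0.02 mag to M_Nuv.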

  10. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several subfields of deep learning, including a specific tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results, our basic implementation of a convolutional restricted Boltzmann machine on the Modified National Institute of Standards and Technology database (MNIST), and we explain how to use deep learning networks to assist in our development of a robust gender classification system.

  11. Analyses of the Short Periodical Part of the Spectrum of Pole Coordinate Variations Determined by the Astrometric and Laser Technique

    Science.gov (United States)

    Kołaczek, B.; Kosek, W.; Galas, R.

    Series of BIH astrometric (BIH-ASTR) pole coordinates and of CSR LAGEOS laser ranging (CSR-LALAR) pole coordinates, determined in the MERIT Campaign in the years 1972-1986 and 1983-1986, respectively, have been filtered by different band-pass filters consisting of a low-pass Gauss filter and a high-pass Butterworth filter. The filtered residuals were analysed by MESA (Maximum Entropy Spectral Analysis) and by Ormsby narrow band-pass filters in order to find numerically modeled signals that best approximate these residuals.
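
    The composite filter is straightforward to reproduce with standard tools; the sketch below (cutoffs and series are hypothetical, and scipy stands in for the original implementation) applies a Gaussian low-pass followed by a Butterworth high-pass to isolate a short-period band:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.signal import butter, filtfilt

        def band_pass(x, fs, f_low, gauss_sigma):
            """Gaussian low-pass (suppresses the shortest periods), then a 4th-order
            Butterworth high-pass at f_low (removes the long-period terms)."""
            low_passed = gaussian_filter1d(x, sigma=gauss_sigma)
            b, a = butter(4, f_low / (fs / 2), btype="highpass")
            return filtfilt(b, a, low_passed)

        # Hypothetical daily pole-coordinate series: keep periods of ~10-100 days,
        # remove the long-period (Chandler-like) term and the shortest-period noise.
        t = np.arange(2000.0)                       # days
        x = (np.sin(2 * np.pi * t / 30)             # short-period signal to keep
             + np.sin(2 * np.pi * t / 435)          # long-period term to remove
             + 0.1 * np.random.default_rng(0).normal(size=t.size))
        residuals = band_pass(x, fs=1.0, f_low=1 / 100, gauss_sigma=3.0)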

  12. THE 2012 HUBBLE ULTRA DEEP FIELD (UDF12): OBSERVATIONAL OVERVIEW

    Energy Technology Data Exchange (ETDEWEB)

    Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Ellis, Richard S.; Schenker, Matthew A. [Department of Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); McLure, Ross J.; Dunlop, James S.; Bowler, Rebecca A. A.; Rogers, Alexander B.; Curtis-Lake, Emma; Cirasuolo, Michele; Wild, V.; Targett, T. [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Robertson, Brant E.; Schneider, Evan; Stark, Daniel P. [Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Ono, Yoshiaki; Ouchi, Masami [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa City, Chiba 277-8582 (Japan); Charlot, Stephane [UPMC-CNRS, UMR7095, Institut d'Astrophysique de Paris, F-75014, Paris (France); Furlanetto, Steven R. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States)

    2013-11-01

    We present the 2012 Hubble Ultra Deep Field campaign (UDF12), a large 128 orbit Cycle 19 Hubble Space Telescope program aimed at extending previous Wide Field Camera 3 (WFC3)/IR observations of the UDF by quadrupling the exposure time in the F105W filter, imaging in an additional F140W filter, and extending the F160W exposure time by 50%, as well as adding an extremely deep parallel field with the Advanced Camera for Surveys (ACS) in the F814W filter with a total exposure time of 128 orbits. The principal scientific goal of this project is to determine whether galaxies reionized the universe; our observations are designed to provide a robust determination of the star formation density at z ≳ 8, improve measurements of the ultraviolet continuum slope at z ∼ 7-8, facilitate the construction of new samples of z ∼ 9-10 candidates, and enable the detection of sources up to z ∼ 12. For this project we committed to combining these and other WFC3/IR imaging observations of the UDF area into a single homogeneous dataset to provide the deepest near-infrared observations of the sky. In this paper we present the observational overview of the project and describe the procedures used in reducing the data as well as the final products that were produced. We present the details of several special procedures that we implemented to correct calibration issues in the data for both the WFC3/IR observations of the main UDF field and our deep 128 orbit ACS/WFC F814W parallel field image, including treatment for persistence, correction for time-variable sky backgrounds, and astrometric alignment to an accuracy of a few milliarcseconds. We release the full, combined mosaics comprising a single, unified set of mosaics of the UDF, providing the deepest near-infrared blank-field view of the universe currently achievable, reaching magnitudes as deep as AB ∼ 30 mag in the near-infrared, and yielding a legacy dataset on this field.

  13. THE 2012 HUBBLE ULTRA DEEP FIELD (UDF12): OBSERVATIONAL OVERVIEW

    International Nuclear Information System (INIS)

    Koekemoer, Anton M.; Ellis, Richard S.; Schenker, Matthew A.; McLure, Ross J.; Dunlop, James S.; Bowler, Rebecca A. A.; Rogers, Alexander B.; Curtis-Lake, Emma; Cirasuolo, Michele; Wild, V.; Targett, T.; Robertson, Brant E.; Schneider, Evan; Stark, Daniel P.; Ono, Yoshiaki; Ouchi, Masami; Charlot, Stephane; Furlanetto, Steven R.

    2013-01-01

    We present the 2012 Hubble Ultra Deep Field campaign (UDF12), a large 128 orbit Cycle 19 Hubble Space Telescope program aimed at extending previous Wide Field Camera 3 (WFC3)/IR observations of the UDF by quadrupling the exposure time in the F105W filter, imaging in an additional F140W filter, and extending the F160W exposure time by 50%, as well as adding an extremely deep parallel field with the Advanced Camera for Surveys (ACS) in the F814W filter with a total exposure time of 128 orbits. The principal scientific goal of this project is to determine whether galaxies reionized the universe; our observations are designed to provide a robust determination of the star formation density at z ≳ 8, improve measurements of the ultraviolet continuum slope at z ∼ 7-8, facilitate the construction of new samples of z ∼ 9-10 candidates, and enable the detection of sources up to z ∼ 12. For this project we committed to combining these and other WFC3/IR imaging observations of the UDF area into a single homogeneous dataset to provide the deepest near-infrared observations of the sky. In this paper we present the observational overview of the project and describe the procedures used in reducing the data as well as the final products that were produced. We present the details of several special procedures that we implemented to correct calibration issues in the data for both the WFC3/IR observations of the main UDF field and our deep 128 orbit ACS/WFC F814W parallel field image, including treatment for persistence, correction for time-variable sky backgrounds, and astrometric alignment to an accuracy of a few milliarcseconds. We release the full, combined mosaics comprising a single, unified set of mosaics of the UDF, providing the deepest near-infrared blank-field view of the universe currently achievable, reaching magnitudes as deep as AB ∼ 30 mag in the near-infrared, and yielding a legacy dataset on this field.

  14. Deep learning in TMVA Benchmarking Benchmarking TMVA DNN Integration of a Deep Autoencoder

    CERN Document Server

    Huwiler, Marc

    2017-01-01

    The TMVA library in ROOT is dedicated to multivariate analysis, and in particular offers numerous machine learning algorithms in a standardized framework. It is widely used in High Energy Physics for data analysis, mainly to perform regression and classification. To keep up to date with the state of the art in deep learning, a new deep learning module was developed this summer, offering a deep neural network, a convolutional neural network, and an autoencoder. TMVA did not yet have any autoencoder method, and the present project consists in implementing the TMVA autoencoder class based on the deep learning module. It also includes some benchmarking performed on the actual deep neural network implementation, in comparison to the Keras framework with TensorFlow and Theano backends.

  15. Stereotactically Standard Areas: Applied Mathematics in the Service of Brain Targeting in Deep Brain Stimulation.

    Science.gov (United States)

    Mavridis, Ioannis N

    2017-12-11

    The concept of stereotactically standard areas (SSAs) within human brain nuclei belongs to the knowledge of the modern field of stereotactic brain microanatomy. These are areas resisting the individual variability of the nuclear location in stereotactic space. This paper summarizes the current knowledge regarding SSAs. A mathematical formula of SSAs was recently invented, allowing for their robust, reproducible, and accurate application to laboratory studies and clinical practice. Thus, SSAs open new doors for the application of stereotactic microanatomy to highly accurate brain targeting, which is mainly useful for minimally invasive neurosurgical procedures, such as deep brain stimulation.

  16. Stereotactically Standard Areas: Applied Mathematics in the Service of Brain Targeting in Deep Brain Stimulation

    Directory of Open Access Journals (Sweden)

    Ioannis N. Mavridis

    2017-12-01

    The concept of stereotactically standard areas (SSAs) within human brain nuclei belongs to the knowledge of the modern field of stereotactic brain microanatomy. These are areas resisting the individual variability of the nuclear location in stereotactic space. This paper summarizes the current knowledge regarding SSAs. A mathematical formula of SSAs was recently invented, allowing for their robust, reproducible, and accurate application to laboratory studies and clinical practice. Thus, SSAs open new doors for the application of stereotactic microanatomy to highly accurate brain targeting, which is mainly useful for minimally invasive neurosurgical procedures, such as deep brain stimulation.

  17. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment.

    Science.gov (United States)

    Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara

    2018-04-06

    The present study tested the combination of an established and validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning with food matching and standardization based on natural language processing. The former is specific in that it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching first describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model, trained on fake-food images acquired from 124 study participants and covering fifty-five food classes, was 92.18%, while the food matching was performed with a classification accuracy of 93%. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
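
    Both reported measures are simple functions of the predicted and ground-truth label masks; a minimal sketch with hypothetical 3×3 masks:

        import numpy as np

        def pixel_accuracy(pred, truth):
            """Fraction of pixels assigned the correct class label."""
            return np.mean(pred == truth)

        def mean_iou(pred, truth, n_classes):
            """Mean Intersection over Union over the classes present in either mask."""
            ious = []
            for c in range(n_classes):
                inter = np.logical_and(pred == c, truth == c).sum()
                union = np.logical_or(pred == c, truth == c).sum()
                if union > 0:                      # skip classes absent from both masks
                    ious.append(inter / union)
            return np.mean(ious)

        # Two tiny label masks with two food classes plus background (class 0).
        truth = np.array([[0, 1, 1], [0, 1, 1], [2, 2, 0]])
        pred  = np.array([[0, 1, 1], [1, 1, 1], [2, 0, 0]])
        print(pixel_accuracy(pred, truth))         # 7/9 ~ 0.78
        print(mean_iou(pred, truth, n_classes=3))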

  18. Revisiting TW Hydrae in light of new astrometric data

    Science.gov (United States)

    Teixeira, R.; Ducourant, C.; Galli, P. A. B.; Le Campion, J. F.; Zuckerman, B.; Krone-Martins, A. G. O.; Chauvin, G.; Song, I.

    2014-10-01

    Our efforts in the present work focused mainly on refining and improving the previous description and understanding of the TW Hydrae association (TWA), including a very detailed membership analysis and its dynamical and evolutionary age. To achieve our objectives in a fully reliable way we take advantage of our own astrometric measurements (Ducourant et al. 2013), performed with NTT/EFOSC2 - ESO (La Silla, Chile) and spread over three years (2007-2010), and of those published in the literature. A very detailed membership analysis based on the convergent point strategy developed by our team (Galli et al. 2012, 2013) allowed us to define a consistent kinematic group containing 31 stars among the 44 proposed as TWA members in the literature. Assuming that our sample of stars may be contaminated by non-members, and to get rid of the particular influence of each star, we applied a jackknife resampling technique, generating 2000 random lists of 13 stars taken from our 16 stars, and calculated for each list the epoch of convergence at which the radius is minimal. The mean of the epochs obtained and the dispersion about the mean give a dynamical age of 7.5 ± 0.7 Myr for the association, in good agreement with the previous traceback age (De La Reza et al. 2006). We also estimated ages for TWA moving group members from pre-main-sequence evolutionary models (Siess et al. 2000) and find a mean age of 7.4 ± 1.2 Myr. These results show that the dynamical age of the association obtained via the traceback technique and the average age derived from theoretical evolutionary models are in good agreement.
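
    The resampling scheme is compact enough to sketch end-to-end (all numbers below are synthetic placeholders, not TWA measurements): trace positions back in time, find the epoch of minimum radius, and repeat over 2000 random 13-of-16 subsamples to obtain a mean epoch and its dispersion.

        import numpy as np

        rng = np.random.default_rng(0)
        n_stars, true_age = 16, 7.5                        # Myr; all values synthetic
        birth = rng.normal(0.0, 0.5, (n_stars, 3))         # compact natal configuration, pc
        vel = rng.normal(0.0, 0.3, (n_stars, 3))           # random space velocities, pc/Myr
        pos = birth + vel * true_age                       # present-day positions

        def convergence_epoch(pos, vel, t_grid):
            """Epoch (Myr ago) at which the traced-back configuration is most compact."""
            radii = []
            for t in t_grid:
                past = pos - vel * t                       # linear traceback by t Myr
                radii.append(np.linalg.norm(past - past.mean(axis=0), axis=1).mean())
            return t_grid[int(np.argmin(radii))]

        t_grid = np.linspace(0.0, 20.0, 400)
        epochs = []
        for _ in range(2000):                              # resample 13 of the 16 stars
            pick = rng.choice(n_stars, size=13, replace=False)
            epochs.append(convergence_epoch(pos[pick], vel[pick], t_grid))
        print(np.mean(epochs), np.std(epochs))             # dynamical age and its spread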

  19. THE APPLICATION OF MULTIVIEW METHODS FOR HIGH-PRECISION ASTROMETRIC SPACE VLBI AT LOW FREQUENCIES

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, R.; Rioja, M.; Imai, H. [International Centre for Radio Astronomy Research, M468, University of Western Australia, 35 Stirling Hwy, Crawley, Western Australia 6009 (Australia); Asaki, Y. [Institute of Space and Astronautical Science, 3-1-1 Yoshinodai, Chuou, Sagamihara, Kanagawa 252-5210 (Japan); Hong, X.-Y.; Shen, Z., E-mail: richard.dodson@icrar.org [Shanghai Astronomical Observatory, CAS, 200030 Shanghai (China)

    2013-06-15

    High-precision astrometric space very long baseline interferometry (S-VLBI) at the low end of the conventional frequency range, i.e., 20 cm, is a requirement for a number of high-priority science goals. These are headlined by obtaining trigonometric parallax distances to pulsars in pulsar-black hole pairs and OH masers anywhere in the Milky Way and the Magellanic Clouds. We propose a solution for the most difficult technical problems in S-VLBI by the MultiView approach where multiple sources, separated by several degrees on the sky, are observed simultaneously. We simulated a number of challenging S-VLBI configurations, with orbit errors up to 8 m in size and with ionospheric atmospheres consistent with poor conditions. In these simulations we performed MultiView analysis to achieve the required science goals. This approach removes the need for beam switching requiring a Control Moment Gyro, and the space and ground infrastructure required for high-quality orbit reconstruction of a space-based radio telescope. This will dramatically reduce the complexity of S-VLBI missions which implement the phase-referencing technique.

  20. THE APPLICATION OF MULTIVIEW METHODS FOR HIGH-PRECISION ASTROMETRIC SPACE VLBI AT LOW FREQUENCIES

    International Nuclear Information System (INIS)

    Dodson, R.; Rioja, M.; Imai, H.; Asaki, Y.; Hong, X.-Y.; Shen, Z.

    2013-01-01

    High-precision astrometric space very long baseline interferometry (S-VLBI) at the low end of the conventional frequency range, i.e., 20 cm, is a requirement for a number of high-priority science goals. These are headlined by obtaining trigonometric parallax distances to pulsars in pulsar-black hole pairs and OH masers anywhere in the Milky Way and the Magellanic Clouds. We propose a solution for the most difficult technical problems in S-VLBI by the MultiView approach where multiple sources, separated by several degrees on the sky, are observed simultaneously. We simulated a number of challenging S-VLBI configurations, with orbit errors up to 8 m in size and with ionospheric atmospheres consistent with poor conditions. In these simulations we performed MultiView analysis to achieve the required science goals. This approach removes the need for beam switching requiring a Control Moment Gyro, and the space and ground infrastructure required for high-quality orbit reconstruction of a space-based radio telescope. This will dramatically reduce the complexity of S-VLBI missions which implement the phase-referencing technique.

  1. Low incidence of clonality in cold water corals revealed through the novel use of a standardized protocol adapted to deep sea sampling

    Science.gov (United States)

    Becheler, Ronan; Cassone, Anne-Laure; Noël, Philippe; Mouchel, Olivier; Morrison, Cheryl L.; Arnaud-Haond, Sophie

    2017-11-01

    Sampling in the deep sea is a technical challenge, which has hindered the acquisition of robust datasets that are necessary to determine the fine-grained biological patterns and processes that may shape genetic diversity. Estimates of the extent of clonality in deep-sea species, despite the importance of clonality in shaping the local dynamics and evolutionary trajectories, have been largely obscured by such limitations. Cold-water coral reefs along European margins are formed mainly by two reef-building species, Lophelia pertusa and Madrepora oculata. Here we present a fine-grained analysis of the genotypic and genetic composition of reefs occurring in the Bay of Biscay, based on an innovative deep-sea sampling protocol. This strategy was designed to be standardized and random, and allowed the georeferencing of all sampled colonies. Clonal lineages discriminated through their Multi-Locus Genotypes (MLG) at 6-7 microsatellite markers could thus be mapped to assess the level of clonality and the spatial spread of clonal lineages. High values of clonal richness were observed for both species across all sites, suggesting a limited occurrence of clonality, which likely originated through fragmentation. Additionally, spatial autocorrelation analysis underlined the possible occurrence of fine-grained genetic structure in several populations of both L. pertusa and M. oculata. The two cold-water coral species examined had contrasting patterns of connectivity among canyons, with among-canyon genetic structuring detected in M. oculata, whereas L. pertusa was panmictic at the canyon scale. This study exemplifies that a standardized, random, and georeferenced sampling strategy, while challenging, can be applied in the deep sea, and associated benefits outlined here include improved estimates of fine-grained patterns of clonality and dispersal that are comparable across sites and among species.

  2. Search for sterile neutrinos with IceCube DeepCore

    Energy Technology Data Exchange (ETDEWEB)

    Terliuk, Andrii [DESY, Platanenallee 6, 15738 Zeuthen (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    The DeepCore detector is a sub-array of the IceCube Neutrino Observatory that lowers the energy threshold for neutrino detection down to approximately 10 GeV. DeepCore is used for a variety of studies, including atmospheric neutrino oscillations. The standard three-neutrino oscillation paradigm is tested using the DeepCore detector by searching for an additional light, sterile neutrino with a mass on the order of 1 eV. Sterile neutrinos do not interact with ordinary matter; however, they can mix with the three active neutrino states. Such mixing changes the picture of standard neutrino oscillations for atmospheric neutrinos with energies below 100 GeV. The capabilities of the DeepCore detector to measure such sterile neutrino mixing are presented in this talk.

  3. An Ensemble of Deep Support Vector Machines for Image Categorization

    NARCIS (Netherlands)

    Abdullah, Azizi; Veltkamp, Remco C.; Wiering, Marco

    2009-01-01

    This paper presents the deep support vector machine (D-SVM), inspired by the increasing popularity of deep belief networks for image recognition. Our deep SVM trains an SVM in the standard way and then uses the kernel activations of support vectors as inputs for training another SVM at the next layer.

  4. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network.

    Science.gov (United States)

    Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval

    2018-02-26

    Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and the effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
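
    The objective minimized in this approach is the average negative log Cox partial likelihood (up to regularization terms), with the network output ĥ_θ(x) in place of the linear predictor β·x; writing E_i = 1 for an observed event at time T_i and R(T_i) for the risk set of patients still under observation at T_i,

        \ell(\theta) = -\frac{1}{N_{E=1}} \sum_{i : E_i = 1}
        \left[ \hat h_\theta(x_i) - \log \sum_{j \in \mathcal{R}(T_i)} e^{\hat h_\theta(x_j)} \right].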

  5. Application of the Coastal and Marine Ecological Classification Standard to ROV Video Data for Enhanced Analysis of Deep-Sea Habitats in the Gulf of Mexico

    Science.gov (United States)

    Ruby, C.; Skarke, A. D.; Mesick, S.

    2016-02-01

    The Coastal and Marine Ecological Classification Standard (CMECS) is a network of common nomenclature that provides a comprehensive framework for organizing physical, biological, and chemical information about marine ecosystems. It was developed by the National Oceanic and Atmospheric Administration (NOAA) Coastal Services Center, in collaboration with other federal agencies and academic institutions, as a means for scientists to more easily access, compare, and integrate marine environmental data from a wide range of sources and time frames. CMECS has been endorsed by the Federal Geographic Data Committee (FGDC) as a national metadata standard. The research presented here is focused on the application of CMECS to deep-sea video and environmental data collected by the NOAA ROV Deep Discoverer and the NOAA Ship Okeanos Explorer in the Gulf of Mexico in 2011-2014. Specifically, a spatiotemporal index of the physical, chemical, biological, and geological features observed in ROV video records was developed in order to allow scientists otherwise unfamiliar with the specific content of existing video data to rapidly determine the abundance and distribution of features of interest, and thus evaluate the applicability of those video data to their research. CMECS units (setting, component, or modifier) for seafloor images extracted from high-definition ROV video data were established based upon visual assessment as well as analysis of coincident environmental sensor (temperature, conductivity), navigation (ROV position, depth, attitude), and log (narrative dive summary) data. The resulting classification units were integrated into easily searchable textual and geo-databases as well as an interactive web map. The spatial distribution and associations of deep-sea habitats as indicated by CMECS classifications are described, and optimized methodological approaches for the application of CMECS to deep-sea video and environmental data are presented.

  6. Standard high-resolution pelvic MRI vs. low-resolution pelvic MRI in the evaluation of deep infiltrating endometriosis

    International Nuclear Information System (INIS)

    Scardapane, Arnaldo; Lorusso, Filomenamila; Ferrante, Annunziata; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe; Scioscia, Marco

    2014-01-01

    To compare the capabilities of standard pelvic MRI with low-resolution pelvic MRI using fast breath-hold sequences to evaluate deep infiltrating endometriosis (DIE). Sixty-eight consecutive women with suspected DIE were studied with pelvic MRI. A double-acquisition protocol was carried out in each case. High-resolution (HR)-MRI consisted of axial, sagittal, and coronal TSE T2W images, axial TSE T1W, and axial THRIVE. Low-resolution (LR)-MRI was acquired using fast single shot (SSH) T2 and T1 images. Two radiologists with 10 and 2 years of experience reviewed HR and LR images in two separate sessions. The presence of endometriotic lesions of the uterosacral ligament (USL), rectovaginal septum (RVS), pouch of Douglas (POD), and rectal wall was noted. The accuracies of LR-MRI and HR-MRI were compared with the laparoscopic and histopathological findings. Average acquisition times were 24 minutes for HR-MRI and 7 minutes for LR-MRI. The more experienced radiologist achieved higher accuracy with both HR-MRI and LR-MRI. The values of sensitivity, specificity, PPV, NPV, and accuracy did not significantly change between HR and LR images or interobserver agreement for all of the considered anatomic sites. LR-MRI performs as well as HR-MRI and is a valuable tool for the detection of deep endometriosis extension. (orig.)

  7. Standard high-resolution pelvic MRI vs. low-resolution pelvic MRI in the evaluation of deep infiltrating endometriosis

    Energy Technology Data Exchange (ETDEWEB)

    Scardapane, Arnaldo; Lorusso, Filomenamila; Ferrante, Annunziata; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe [University Hospital ''Policlinico'' of Bari, Interdisciplinary Department of Medicine, Bari (Italy); Scioscia, Marco [Sacro Cuore Don Calabria General Hospital, Department of Obstetrics and Gynecology, Negrar, Verona (Italy)

    2014-10-15

    To compare the capabilities of standard pelvic MRI with low-resolution pelvic MRI using fast breath-hold sequences to evaluate deep infiltrating endometriosis (DIE). Sixty-eight consecutive women with suspected DIE were studied with pelvic MRI. A double-acquisition protocol was carried out in each case. High-resolution (HR)-MRI consisted of axial, sagittal, and coronal TSE T2W images, axial TSE T1W, and axial THRIVE. Low-resolution (LR)-MRI was acquired using fast single shot (SSH) T2 and T1 images. Two radiologists with 10 and 2 years of experience reviewed HR and LR images in two separate sessions. The presence of endometriotic lesions of the uterosacral ligament (USL), rectovaginal septum (RVS), pouch of Douglas (POD), and rectal wall was noted. The accuracies of LR-MRI and HR-MRI were compared with the laparoscopic and histopathological findings. Average acquisition times were 24 minutes for HR-MRI and 7 minutes for LR-MRI. The more experienced radiologist achieved higher accuracy with both HR-MRI and LR-MRI. The values of sensitivity, specificity, PPV, NPV, and accuracy did not significantly change between HR and LR images or interobserver agreement for all of the considered anatomic sites. LR-MRI performs as well as HR-MRI and is a valuable tool for the detection of deep endometriosis extension. (orig.)

  8. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.
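
    As a toy illustration of the approximate inference referred to here, the sketch below (our illustration, not the speaker's; all modelling choices are ours) estimates the evidence lower bound (ELBO) for a one-dimensional conjugate Gaussian model using the reparameterization trick, the device that makes large-scale training of deep generative models tractable.

        # Monte Carlo ELBO estimate with the reparameterization trick.
        # Toy model (illustrative): p(z) = N(0,1), p(x|z) = N(z,1),
        # variational posterior q(z|x) = N(mu, sigma^2).
        import numpy as np

        rng = np.random.default_rng(0)

        def elbo_estimate(x, mu, log_sigma, n_samples=1000):
            sigma = np.exp(log_sigma)
            eps = rng.standard_normal(n_samples)
            z = mu + sigma * eps                 # reparameterized samples from q(z|x)
            log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (x - z) ** 2  # log p(x|z)
            # Analytic KL(q || p) between two univariate Gaussians:
            kl = 0.5 * (mu ** 2 + sigma ** 2 - 2 * log_sigma - 1)
            return log_lik.mean() - kl

        # For this conjugate toy model the exact posterior is N(x/2, 1/2), so the
        # ELBO should be (near) maximal at mu = x/2, sigma = sqrt(1/2).
        x = 1.3
        print(elbo_estimate(x, mu=x / 2, log_sigma=0.5 * np.log(0.5)))
        print(elbo_estimate(x, mu=0.0, log_sigma=0.0))  # a worse variational fit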

  9. Opportunities and Challenges in Deep Mining: A Brief Review

    Directory of Open Access Journals (Sweden)

    Pathegama G. Ranjith

    2017-08-01

    Full Text Available Mineral consumption is increasing rapidly as more consumers enter the market for minerals and as the global standard of living increases. As a result, underground mining continues to progress to deeper levels in order to tackle the mineral supply crisis in the 21st century. However, deep mining occurs in a very technical and challenging environment, in which significant innovative solutions and best practice are required and additional safety standards must be implemented in order to overcome the challenges and reap huge economic gains. These challenges include the catastrophic events that are often met in deep mining engineering: rockbursts, gas outbursts, high in situ and redistributed stresses, large deformation, squeezing and creeping rocks, and high temperature. This review paper presents the current global status of deep mining and highlights some of the newest technological achievements and opportunities associated with rock mechanics and geotechnical engineering in deep mining. Of the various technical achievements, unmanned working-faces and unmanned mines based on fully automated mining and mineral extraction processes have become important fields in the 21st century.

  10. An astrometric search for a stellar companion to the sun

    International Nuclear Information System (INIS)

    Perlmutter, S.

    1986-01-01

    A companion star within 0.8 pc of the Sun has been postulated to explain a possible 26 Myr periodicity in mass extinctions of species on the Earth. Such a star would already be catalogued in the Yale Bright Star catalogue unless it is fainter than m_v = 6.5; this limits the possible stellar types for an unseen companion to red dwarfs, brown dwarfs, or compact objects. Red dwarfs account for about 75% of these possible stars. We describe here the design and development of an astrometric search for a nearby red dwarf companion with a six-month peak-to-peak parallax of ≥2.5 arcseconds. We are measuring the parallax of 2770 candidate faint red stars selected from the Dearborn Observatory catalogue. An automated 30-inch telescope and CCD camera system collect digitized images of the candidate stars, along with a 13' x 16' surrounding field of background stars. Second-epoch images, taken a few months later, are registered to the first-epoch images using the background stars as fiducials. An apparent motion, m_a, of the candidate stars is found to a precision of σ(m_a) ≈ 0.08 pixel ≈ 0.2 arcseconds for fields with N_fiducial ≥ 10 fiducial stars visible above the background noise. This precision is sufficient to detect the parallactic motion of a star at 0.8 pc with a two-month interval between the observation epochs. Images with fewer fiducial stars above background noise are observed with a longer interval between epochs. If a star is found with high parallactic motion, we will confirm its distance with further parallax measurements, photometry, and spectral studies, and will measure radial velocity and proper motion to establish its orbit. We have demonstrated the search procedure with observations of 41 stars, and have shown that none of these is a nearby star. 37 refs., 16 figs., 3 tabs
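
    The detection arithmetic in this abstract can be checked directly. The short sketch below (our illustration, using only numbers quoted above) compares the best-case parallactic displacement over a two-month epoch separation with the quoted 0.2 arcsecond measurement precision.

        import numpy as np

        d_pc = 0.8                    # postulated maximum companion distance (from the text)
        parallax = 1.0 / d_pc         # parallax [arcsec] = 1 / distance [pc] -> 1.25"
        print(2 * parallax)           # six-month peak-to-peak motion -> 2.5", as quoted

        # Best-case apparent displacement between two epochs dt years apart,
        # modelling the parallactic offset as a sinusoid with a one-year period:
        def max_displacement(parallax, dt_years):
            return 2 * parallax * np.sin(np.pi * dt_years)

        sigma = 0.2                   # quoted measurement precision [arcsec]
        snr = max_displacement(parallax, 2 / 12) / sigma
        print(round(snr, 1))          # ~6 sigma for a two-month epoch baseline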

  11. DeepPy: Pythonic deep learning

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This technical report introduces DeepPy – a deep learning framework built on top of NumPy with GPU acceleration. DeepPy bridges the gap between high-performance neural networks and the ease of development from Python/NumPy. Users with a background in scientific computing in Python will quickly ... be able to understand and change the DeepPy codebase as it is mainly implemented using high-level NumPy primitives. Moreover, DeepPy supports complex network architectures by letting the user compose mathematical expressions as directed graphs. The latest version is available at http...

  12. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.

    Science.gov (United States)

    Hohman, Fred; Hodas, Nathan; Chau, Duen Horng

    2017-05-01

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  13. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation

    Energy Technology Data Exchange (ETDEWEB)

    Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng

    2017-05-30

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  14. Revealing Companions to Nearby Stars with Astrometric Acceleration

    Science.gov (United States)

    2012-07-01

    CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil), and Ministerio de Ciencia, Tecnología e... Most observations were done in the I or Strömgren y bands. The detection limits Δm(ρ) for the unresolved stars are published. They are not as deep... is the duration of the Hipparcos mission. The displacement of the photo-center in X, Y caused by motion due to a binary is calculated for each of

  15. Deep borehole disposal of high-level radioactive waste.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Freeze, Geoffrey A.; Brady, Patrick Vane; Swift, Peter N.; Rechard, Robert Paul; Arnold, Bill Walter; Kanney, Joseph F.; Bauer, Stephen J.

    2009-07-01

    Preliminary evaluation of deep borehole disposal of high-level radioactive waste and spent nuclear fuel indicates the potential for excellent long-term safety performance at costs competitive with mined repositories. Significant fluid flow through basement rock is prevented, in part, by low permeabilities, poorly connected transport pathways, and overburden self-sealing. Deep fluids also resist vertical movement because they are density stratified. Thermal hydrologic calculations estimate the thermal pulse from emplaced waste to be small (less than 20 °C at 10 meters from the borehole, for less than a few hundred years), and to result in maximum total vertical fluid movement of ∼100 m. Reducing conditions will sharply limit solubilities of most dose-critical radionuclides at depth, and high ionic strengths of deep fluids will prevent colloidal transport. For the bounding analysis of this report, waste is envisioned to be emplaced as fuel assemblies stacked inside drill casing that are lowered, and emplaced using off-the-shelf oilfield and geothermal drilling techniques, into the lower 1-2 km portion of a vertical borehole ∼45 cm in diameter and 3-5 km deep, followed by borehole sealing. Deep borehole disposal of radioactive waste in the United States would require modifications to the Nuclear Waste Policy Act and to applicable regulatory standards for long-term performance set by the US Environmental Protection Agency (40 CFR part 191) and US Nuclear Regulatory Commission (10 CFR part 60). The performance analysis described here is based on the assumption that long-term standards for deep borehole disposal would be identical in the key regards to those prescribed for existing repositories (40 CFR part 197 and 10 CFR part 63).

  16. Shear Strengthening of RC Deep Beam Using Externally Bonded GFRP Fabrics

    Science.gov (United States)

    Kumari, A.; Patel, S. S.; Nayak, A. N.

    2018-06-01

    This work presents an experimental investigation of RC deep beams wrapped with externally bonded Glass Fibre Reinforced Polymer (GFRP) fabrics in order to study the load versus deflection behavior, cracking pattern, failure modes and ultimate shear strength. A total of five deep beams have been cast, designed with conventional steel reinforcement as per IS: 456 (Indian standard plain and reinforced concrete—code for practice, Bureau of Indian Standards, New Delhi, 2000). The span-to-depth ratio for all RC deep beams has been kept less than 2 as per the above specification. Of the five RC deep beams, one without retrofitting serves as a reference beam and the remaining four have been wrapped with GFRP fabrics in multiple layers and tested under two-point loading. The first cracking load, ultimate load and the shear contribution of GFRP to the deep beams have been observed. A critical discussion is made of the enhancement of the strength, behaviour and performance of retrofitted deep beams in comparison to the deep beam without GFRP, in order to explore the potential use of GFRP for strengthening RC deep beams. Test results demonstrate that deep beams retrofitted with GFRP show slower development of the diagonal cracks and improved shear-carrying capacity. A comparative study of the experimental results with the theoretical ones predicted by various researchers in the literature is also presented. It is observed that the ultimate load of the beams retrofitted with GFRP fabrics increases with the number of GFRP layers up to a specific number of layers, i.e. 3 layers, beyond which it decreases.

  17. Extracting Databases from Dark Data with DeepDive.

    Science.gov (United States)

    Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng

    2016-01-01

    DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.

  18. Parity violation in deep inelastic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Souder, P. [Syracuse Univ., NY (United States)

    1994-04-01

    A beam of polarized electrons at CEBAF with an energy of 8 GeV or more will be useful for performing precision measurements of parity violation in deep inelastic scattering. Possible applications include precision tests of the Standard Model, model-independent measurements of parton distribution functions, and studies of quark correlations.

  19. Introducing ADES: A New IAU Astrometry Data Exchange Standard

    Science.gov (United States)

    Chesley, Steven R.; Hockney, George M.; Holman, Matthew J.

    2017-10-01

    For several decades, small body astrometry has been exchanged, distributed and archived in the form of 80-column ASCII records. As a replacement for this obsolescent format, we have worked with a number of members of the community to develop the Astrometric Data Exchange Standard (ADES), which was formally adopted by IAU Commission 20 in August 2015 at the XXIX General Assembly in Honolulu, Hawaii.The purpose of ADES is to ensure that useful and available observational information is submitted, archived, and disseminated as needed. Availability of more complete information will allow orbit computers to process the data more correctly, leading to improved accuracy and reliability of orbital fits. In this way, it will be possible to fully exploit the improving accuracy and increasing number of both optical and radar observations. ADES overcomes several limitations of the previous format by allowing characterization of astrometric and photometric errors, adequate precision in time and angle fields, and flexibility and extensibility.To accommodate a diverse base of users, from automated surveys to hands-on follow-up observers, the ADES protocol allows for two file formats, eXtensible Markup Language (XML) and Pipe-Separated Values (PSV). Each format carries the same information and simple tools allow users to losslessly transform back and forth between XML and PSV.We have further developed and refined ADES since it was first announced in July 2015 [1]. The proposal at that time [2] has undergone several modest revisions to aid validation and avoid overloaded fields. We now have validation schema and file transformation utilities. Suitable example files, test suites, and input/output libraries in a number of modern programming languages are now available. Acknowledgements: Useful feedback during the development of ADES has been received from numerous colleagues in the community of observers and orbit specialists working on asteroids, comets, and planetary satellites.
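
    Because the XML and PSV forms carry identical information, transforming between them is mechanical. The sketch below illustrates the round trip with Python's standard library only; the field names are ours for illustration and are not the official ADES schema, which also imposes validation rules not reproduced here.

        # Sketch of a lossless PSV <-> XML round trip in the spirit of ADES.
        # Field names are hypothetical; consult the ADES schema for the real ones.
        import xml.etree.ElementTree as ET

        psv_line = "2017 AB|2017-10-01T03:25:14Z|151.2503|-8.7741|19.4"
        fields = ["designation", "obsTime", "ra", "dec", "mag"]  # illustrative only

        def psv_to_xml(line, fields):
            obs = ET.Element("observation")
            for name, value in zip(fields, line.split("|")):
                ET.SubElement(obs, name).text = value
            return obs

        def xml_to_psv(obs, fields):
            return "|".join(obs.findtext(name) for name in fields)

        obs = psv_to_xml(psv_line, fields)
        print(ET.tostring(obs, encoding="unicode"))
        assert xml_to_psv(obs, fields) == psv_line   # round trip is lossless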

  20. Astrometrically registered simultaneous observations of the 22 GHz H₂O and 43 GHz SiO masers toward R Leonis Minoris using KVN and source/frequency phase referencing

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, Richard; Rioja, María J.; Jung, Tae-Hyun; Sohn, Bong-Won; Byun, Do-Young; Cho, Se-Hyung; Lee, Sang-Sung; Kim, Jongsoo; Kim, Kee-Tae; Oh, Chung-Sik; Han, Seog-Tae; Je, Do-Heung; Chung, Moon-Hee; Wi, Seog-Oh; Kang, Jiman; Lee, Jung-Won; Chung, Hyunsoo; Kim, Hyo-Ryoung; Kim, Hyun-Goo; Lee, Chang-Hoon, E-mail: rdodson@kasi.re.kr [Korea Astronomy and Space Science Institute, Daedeokdae-ro 776, Yuseong-gu, Daejeon 305-348 (Korea, Republic of); and others

    2014-11-01

    Oxygen-rich asymptotic giant branch (AGB) stars can be intense emitters of SiO (v = 1 and 2, J = 1 → 0) and H₂O maser lines at 43 and 22 GHz, respectively. Very long baseline interferometry (VLBI) observations of the maser emission provide a unique tool to probe the innermost layers of the circumstellar envelopes in AGB stars. Nevertheless, the difficulties in achieving astrometrically aligned H₂O and v = 1 and v = 2 SiO maser maps have traditionally limited the physical constraints that can be placed on the SiO maser pumping mechanism. We present phase-referenced simultaneous spectral-line VLBI images for the SiO v = 1 and v = 2, J = 1 → 0, and H₂O maser emission around the AGB star R LMi, obtained from the Korean VLBI Network (KVN). The simultaneous multi-channel receivers of the KVN offer great possibilities for astrometry in the frequency domain. With this facility, we have produced images with bona fide absolute astrometric registration between high-frequency maser transitions of different species to provide the positions of the H₂O maser emission and the center of the SiO maser emission, hence reducing the uncertainty in the proper motions for R LMi by an order of magnitude over that from Hipparcos. This is the first successful demonstration of source frequency phase referencing for millimeter VLBI spectral-line observations and also where the ratio between the frequencies is not an integer.
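
    The core of source/frequency phase referencing is that non-dispersive phase errors scale linearly with observing frequency, so calibration phases solved at the low band can be multiplied by the frequency ratio, integer or not, and applied at the high band. A minimal illustration (our numbers; the maser rest frequencies are the standard ones):

        # Minimal sketch of the frequency phase-transfer step. A non-dispersive
        # delay error tau produces phase 2*pi*nu*tau, i.e. proportional to nu,
        # so low-band solutions scale by R = nu_target / nu_ref.
        import numpy as np

        nu_ref, nu_target = 22.235e9, 43.122e9       # H2O and SiO v=1 lines [Hz]
        R = nu_target / nu_ref                       # non-integer ratio, ~1.94

        phase_ref = np.array([30.0, -75.0, 110.0])   # example antenna phases [deg]
        phase_transfer = np.mod(R * phase_ref + 180.0, 360.0) - 180.0  # wrap
        print(phase_transfer)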

  1. Gaia DR1 documentation

    Science.gov (United States)

    van Leeuwen, F.; de Bruijne, J. H. J.; Arenou, F.; Comoretto, G.; Eyer, L.; Farras Casas, M.; Hambly, N.; Hobbs, D.; Salgado, J.; Utrilla Molina, E.; Vogt, S.; van Leeuwen, M.; Abreu, A.; Altmann, M.; Andrei, A.; Babusiaux, C.; Bastian, U.; Biermann, M.; Blanco-Cuaresma, S.; Bombrun, A.; Borrachero, R.; Brown, A. G. A.; Busonero, D.; Busso, G.; Butkevich, A.; Cantat-Gaudin, T.; Carrasco, J. M.; Castañeda, J.; Charnas, J.; Cheek, N.; Clementini, G.; Crowley, C.; Cuypers, J.; Davidson, M.; De Angeli, F.; De Ridder, J.; Evans, D.; Fabricius, C.; Findeisen, K.; Fleitas, J. M.; Gracia, G.; Guerra, R.; Guy, L.; Helmi, A.; Hernandez, J.; Holl, B.; Hutton, A.; Klioner, S.; Lammers, U.; Lecoeur-Taïbi, I.; Lindegren, L.; Luri, X.; Marinoni, S.; Marrese, P.; Messineo, R.; Michalik, D.; Mignard, F.; Montegriffo, P.; Mora, A.; Mowlavi, N.; Nienartowicz, K.; Pancino, E.; Panem, C.; Portell, J.; Rimoldini, L.; Riva, A.; Robin, A.; Siddiqui, H.; Smart, R.; Sordo, R.; Soria, S.; Turon, C.; Vallenari, A.; Voss, H.

    2017-12-01

    We present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7 in the white-light photometric band G of Gaia. The Gaia Data Processing and Analysis Consortium (DPAC) processed the raw measurements collected with the Gaia instruments during the first 14 months of the mission, and turned these into an astrometric and photometric catalogue. Gaia DR1 consists of three parts: an astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues (the primary astrometric data set) and the positions for an additional 1.1 billion sources (the secondary astrometric data set). The primary set forms the realisation of the Tycho-Gaia Astrometric Solution (TGAS). The second part of Gaia DR1 is the photometric data set, which contains the mean G-band magnitudes for all sources. The third part consists of the G-band light curves and the characteristics of 3000 Cepheid and RR Lyrae stars observed at high cadence around the south ecliptic pole. The positions and proper motions in the astrometric data set are given in a reference frame that is aligned with the International Celestial Reference Frame (ICRF) to better than 0.1 mas at epoch J2015.0, and non-rotating with respect to the ICRF to within 0.03 mas yr^-1. For the primary astrometric data set, the typical standard error for the positions and parallaxes is about 0.3 mas, while for the proper motions the typical standard error is about 1 mas yr^-1. Whereas it has been suggested in Gaia Collaboration et al. (2016a) that a systematic component of ∼0.3 mas should be 'added' (in quadrature) to the parallax uncertainties, Brown (2017) clarifies that reported parallax standard errors already include local systematics as a result of the calibration of the TGAS parallax uncertainties by comparison to Hipparcos parallaxes. For the subset of
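
    For readers who want to inspect the primary (TGAS) astrometric data set directly, the Gaia archive accepts ADQL queries; below is a minimal sketch using the astroquery package (table and column names as published for DR1; availability should be checked against the current archive documentation).

        # Sketch: pull a few TGAS sources from the Gaia archive.
        # Requires network access and `pip install astroquery`.
        from astroquery.gaia import Gaia

        job = Gaia.launch_job(
            "SELECT TOP 5 source_id, ra, dec, parallax, parallax_error, pmra, pmdec "
            "FROM gaiadr1.tgas_source "
            "ORDER BY parallax DESC"
        )
        print(job.get_results())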

  2. Influence of deep neuromuscular block on the surgeonś assessment of surgical conditions during laparotomy

    DEFF Research Database (Denmark)

    Madsen, M V; Scheppan, S; Mørk, E

    2017-01-01

    Background: During laparotomy, surgeons may experience difficult surgical conditions if the patient's abdominal wall or diaphragm is tense. Deep neuromuscular block (NMB), defined as a post-tetanic count (PTC) of 0-1, paralyses the abdominal wall muscles and the diaphragm. We hypothesized that deep NMB (PTC 0-1) would improve subjective ratings of surgical conditions during upper laparotomy as compared with standard NMB. Methods: This was a double-blinded, randomized study. A total of 128 patients undergoing elective upper laparotomy were randomized to either continuous deep NMB (infusion ...) or standard NMB. ... time, occurrence of wound infection, and wound dehiscence were found. Conclusions: Deep NMB compared with standard NMB resulted in better subjective ratings of surgical conditions during laparotomy.

  3. Another look at AM Herculis - radio-astrometric campaign with the e-EVN at 6 cm

    Science.gov (United States)

    Gawroński, M. P.; Goździewski, K.; Katarzyński, K.; Rycyk, G.

    2018-03-01

    We conducted radio-interferometric observations of the well-known binary cataclysmic system AM Herculis. This particular system is formed from a magnetic white dwarf (primary) and a red dwarf (secondary), and it is the prototype of so-called polars. Our observations were conducted with the European VLBI Network (EVN) in e-EVN mode at 5 GHz. We obtained six astrometric measurements spanning 1 yr, making it possible to update the annual parallax for this system with the best precision to date (π = 11.29 ± 0.08 mas), which is equivalent to a distance of 88.6 ± 0.6 pc. The system was observed mostly in the quiescent phase (visual magnitude m_v ≈ 15.3), when the radio emission was at the level of about 300 μJy. Our analysis suggests that the radio flux of AM Herculis is modulated with the orbital motion. Such specific properties of the radiation can be explained using an emission mechanism like the scenario proposed for V471 Tau and, in general, for RS CVn-type stars. In this scenario, the radio emission arises near the surface of the red dwarf, where the global magnetic field strength may reach a few kG. We argue that the quiescent radio emission distinguishes AM Herculis and AR Ursae Majoris (a second known persistent radio polar) from other polars, which are systems with a magnetized secondary star.
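
    The quoted distance follows from the parallax by first-order error propagation; a short check using the numbers above:

        # Reproducing the quoted distance from the parallax.
        pi_mas, sigma_pi = 11.29, 0.08             # parallax and uncertainty [mas]
        d_pc = 1000.0 / pi_mas                     # distance [pc] = 1000 / parallax [mas]
        sigma_d = 1000.0 * sigma_pi / pi_mas**2    # |d(d)/d(pi)| * sigma_pi
        print(f"{d_pc:.1f} +/- {sigma_d:.1f} pc")  # -> 88.6 +/- 0.6 pc, as in the text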

  4. Deep Echo State Network (DeepESN): A Brief Survey

    OpenAIRE

    Gallicchio, Claudio; Micheli, Alessio

    2017-01-01

    The study of deep recurrent neural networks (RNNs) and, in particular, of deep Reservoir Computing (RC) is gaining increasing research attention in the neural networks community. The recently introduced deep Echo State Network (deepESN) model opened the way to an extremely efficient approach for designing deep neural networks for temporal data. At the same time, the study of deepESNs has shed light on the intrinsic properties of state dynamics developed by hierarchical compositions ...
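
    A minimal sketch of the deepESN idea, stacked fixed (untrained) reservoirs where each layer is driven by the state sequence of the layer below, is given below. Sizes and hyperparameters are illustrative; in a real deepESN only a linear readout on the (concatenated) states is trained, which is not shown.

        # Minimal deep Echo State Network sketch (illustrative, untrained).
        import numpy as np

        rng = np.random.default_rng(42)

        def reservoir(n_in, n_units, rho=0.9):
            W_in = rng.uniform(-0.1, 0.1, (n_units, n_in))
            W = rng.standard_normal((n_units, n_units))
            W *= rho / max(abs(np.linalg.eigvals(W)))  # rescale spectral radius to rho
            return W_in, W

        def run_layer(u_seq, W_in, W, leak=0.3):
            # Leaky-integrator reservoir update over an input sequence.
            x = np.zeros(W.shape[0])
            states = []
            for u in u_seq:
                x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
                states.append(x)
            return np.array(states)

        u_seq = np.sin(np.linspace(0, 8 * np.pi, 200))[:, None]  # toy input series
        layer1 = run_layer(u_seq, *reservoir(1, 50))
        layer2 = run_layer(layer1, *reservoir(50, 50))  # deep: states drive layer 2
        print(layer1.shape, layer2.shape)               # (200, 50) (200, 50)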

  5. Deep defect levels in standard and oxygen enriched silicon detectors before and after **6**0Co-gamma-irradiation

    CERN Document Server

    Stahl, J; Lindström, G; Pintilie, I

    2003-01-01

    Capacitance Deep Level Transient Spectroscopy (C-DLTS) measurements have been performed on standard and oxygen-doped silicon detectors manufactured from high-resistivity n-type float-zone material with ⟨111⟩ and ⟨100⟩ orientation. Three different oxygen concentrations were achieved by the so-called diffusion oxygenated float zone (DOFZ) process initiated by the CERN-RD48 (ROSE) collaboration. Before the irradiation a material characterization was performed. In contrast to radiation damage by neutrons or high-energy charged hadrons, where the bulk damage is dominated by a mixture of clusters and point defects, the bulk damage caused by ⁶⁰Co-gamma-radiation is due only to the introduction of point defects. The dominant electrically active defects detected after ⁶⁰Co-gamma-irradiation by C-DLTS are the electron traps VO_i, C_iC_s, V_2(=/-), V_2(-/0) and the hole trap C_iO_i. The main difference betwe...

  6. BCDForest: a boosting cascade deep forest model towards the classification of cancer subtypes based on gene expression data.

    Science.gov (United States)

    Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn

    2018-04-11

    The classification of cancer subtypes is of great importance to cancer disease diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep learning-based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks, learning hyper-representations by using cascades of ensemble decision trees. The deep forest model has been shown to have competitive or even better performance than deep neural networks to some extent. However, the standard deep forest model may face overfitting and ensemble-diversity challenges when dealing with small sample sizes and high-dimensional biology data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biology datasets, which can be viewed as a modification of the standard deep forest model. BCDForest is distinguished from the standard deep forest model by two main contributions: First, a method named multi-class-grained scanning is proposed to train multiple binary classifiers to encourage diversity of the ensemble. Meanwhile, the fitting quality of each classifier is considered in representation learning. Second, we propose a boosting strategy to emphasize more important features in cascade forests, thus propagating the benefits of discriminative features among cascade layers to improve the classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms the state-of-the-art methods in application of cancer subtype classification. The multi-class-grained scanning and boosting strategy in our model provide an effective solution to ease the overfitting challenge and improve the robustness of the deep forest model working on small-scale data. Our model provides a useful approach to the classification of cancer subtypes
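
    For orientation, the generic cascade-forest mechanism that BCDForest modifies can be sketched in a few lines of scikit-learn: each level's class-probability vectors are concatenated to the original features and fed to the next level. This is our illustration of the plain cascade only; the paper's multi-class-grained scanning and boosting are not reproduced.

        # Generic deep-forest cascade sketch (not the paper's boosted variant).
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        feats_tr, feats_te = X_tr, X_te
        for level in range(3):                       # three cascade levels
            forest = RandomForestClassifier(n_estimators=100, random_state=level)
            forest.fit(feats_tr, y_tr)
            print(f"level {level} test accuracy: {forest.score(feats_te, y_te):.3f}")
            # Augment the original features with this level's class probabilities.
            # (A faithful cascade would use cross-validated probabilities here
            # to limit overfitting; training-set probabilities keep it short.)
            feats_tr = np.hstack([X_tr, forest.predict_proba(feats_tr)])
            feats_te = np.hstack([X_te, forest.predict_proba(feats_te)])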

  7. Update on Astrometric Follow-Up at Apache Point Observatory by Adler Planetarium

    Science.gov (United States)

    Nault, Kristie A.; Brucker, Melissa; Hammergren, Mark

    2016-10-01

    We began our NEO astrometric follow-up and characterization program in 2014 Q4 using about 500 hours of observing time per year with the Astrophysical Research Consortium (ARC) 3.5m telescope at Apache Point Observatory (APO). Our observing is split into 2 hour blocks approximately every other night for astrometry (this poster) and several half-nights per month for spectroscopy (see poster by M. Hammergren et al.) and light curve studies. For astrometry, we use the ARC Telescope Imaging Camera (ARCTIC) with an SDSS r filter, in 2 hour observing blocks centered around midnight. ARCTIC has a magnitude limit of V~23 in 60s, and we target 20 NEOs per session. ARCTIC has a FOV 1.57 times larger and a readout time half as long as the previous imager, SPIcam, which we used from 2014 Q4 through 2015 Q3. Targets are selected primarily from the Minor Planet Center's (MPC) NEO Confirmation Page (NEOCP) and NEA Observation Planning Aid; we also refer to JPL's What's Observable page, the Spaceguard Priority List and Faint NEOs List, and requests from other observers. To quickly adapt to changing weather and seeing conditions, we create faint, midrange, and bright target lists. Detected NEOs are measured with Astrometrica and internal software, and the astrometry is reported to the MPC. As of June 19, 2016, we have targeted 2264 NEOs, 1955 with provisional designations, 1582 of which were detected. We began observing NEOCP asteroids on January 30, 2016, and have targeted 309, 207 of which were detected. In addition, we serendipitously observed 281 moving objects, 201 of which were identified as previously known objects. This work is based on observations obtained with the Apache Point Observatory 3.5m telescope, which is owned and operated by the Astrophysical Research Consortium. We gratefully acknowledge support from NASA NEOO award NNX14AL17G and thank the University of Chicago Department of Astronomy and Astrophysics for observing time in 2014.

  8. The Nature of Thinking, Shallow and Deep

    Directory of Open Access Journals (Sweden)

    Gary L. Brase

    2014-05-01

    Full Text Available Because the criteria for success differ across various domains of life, no single normative standard will ever work for all types of thinking. One method for dealing with this apparent dilemma is to propose that the mind is made up of a large number of specialized modules. This review describes how this multi-modular framework for the mind overcomes several critical conceptual and theoretical challenges to our understanding of human thinking, and hopefully clarifies what are (and are not) some of the implications based on this framework. In particular, an evolutionarily informed deep rationality conception of human thinking can guide psychological research out of clusters of ad hoc models which currently occupy some fields. First, the idea of deep rationality helps theoretical frameworks in terms of orienting themselves with regard to time scale references, which can alter the nature of rationality assessments. Second, the functional domains of deep rationality can be hypothesized (non-exhaustively) to include the areas of self-protection, status, affiliation, mate acquisition, mate retention, kin care, and disease avoidance. Thus, although there is no single normative standard of rationality across all of human cognition, there are sensible and objective standards by which we can evaluate multiple, fundamental, domain-specific motives underlying human cognition and behavior. This review concludes with two examples to illustrate the implications of this framework. The first example, decisions about having a child, illustrates how competing models can be understood by realizing that different fundamental motives guiding people's thinking can sometimes be in conflict. The second example is that of personifications within modern financial markets (e.g., in the form of corporations), which are entities specifically constructed to have just one fundamental motive. This single focus is the source of both the strengths and flaws in how such entities

  9. The nature of thinking, shallow and deep.

    Science.gov (United States)

    Brase, Gary L

    2014-01-01

    Because the criteria for success differ across various domains of life, no single normative standard will ever work for all types of thinking. One method for dealing with this apparent dilemma is to propose that the mind is made up of a large number of specialized modules. This review describes how this multi-modular framework for the mind overcomes several critical conceptual and theoretical challenges to our understanding of human thinking, and hopefully clarifies what are (and are not) some of the implications based on this framework. In particular, an evolutionarily informed "deep rationality" conception of human thinking can guide psychological research out of clusters of ad hoc models which currently occupy some fields. First, the idea of deep rationality helps theoretical frameworks in terms of orienting themselves with regard to time scale references, which can alter the nature of rationality assessments. Second, the functional domains of deep rationality can be hypothesized (non-exhaustively) to include the areas of self-protection, status, affiliation, mate acquisition, mate retention, kin care, and disease avoidance. Thus, although there is no single normative standard of rationality across all of human cognition, there are sensible and objective standards by which we can evaluate multiple, fundamental, domain-specific motives underlying human cognition and behavior. This review concludes with two examples to illustrate the implications of this framework. The first example, decisions about having a child, illustrates how competing models can be understood by realizing that different fundamental motives guiding people's thinking can sometimes be in conflict. The second example is that of personifications within modern financial markets (e.g., in the form of corporations), which are entities specifically constructed to have just one fundamental motive. This single focus is the source of both the strengths and flaws in how such entities behave.

  10. Hydride vapor phase GaN films with reduced density of residual electrons and deep traps

    International Nuclear Information System (INIS)

    Polyakov, A. Y.; Smirnov, N. B.; Govorkov, A. V.; Yugova, T. G.; Cox, H.; Helava, H.; Makarov, Yu.; Usikov, A. S.

    2014-01-01

    Electrical properties and deep electron and hole trap spectra are compared for undoped n-GaN films grown by hydride vapor phase epitaxy (HVPE) in the regular process (standard HVPE samples) and in an HVPE process optimized for decreasing the concentration of residual donor impurities (improved HVPE samples). It is shown that the residual donor density can be reduced by optimization from ∼10^17 cm^-3 to (2–5) × 10^14 cm^-3. The density of deep hole traps and deep electron traps decreases with decreased donor density, so that the concentration of deep hole traps in the improved samples is reduced to ∼5 × 10^13 cm^-3 versus 2.9 × 10^16 cm^-3 in the standard samples, with a similar decrease in the electron trap concentration

  11. Why & When Deep Learning Works: Looking Inside Deep Learnings

    OpenAIRE

    Ronen, Ronny

    2017-01-01

    The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research since its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The outp...

  12. Jet-images — deep learning edition

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Luke de [Institute for Computational and Mathematical Engineering, Stanford University,Huang Building 475 Via Ortega, Stanford, CA 94305 (United States); Kagan, Michael [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States); Mackey, Lester [Department of Statistics, Stanford University,390 Serra Mall, Stanford, CA 94305 (United States); Nachman, Benjamin; Schwartzman, Ariel [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2016-07-13

    Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.
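
    As a rough illustration of a jet-image tagger of this kind, the hedged Keras sketch below trains a small convolutional network on random placeholder "images"; the architecture, the 25x25 image size, and the data are ours, not the paper's.

        # Small convolutional jet-image tagger sketch (illustrative only).
        import numpy as np
        from tensorflow import keras
        from tensorflow.keras import layers

        # Placeholder "jet images": 25x25 calorimeter grids, label 1 = boosted W.
        X = np.random.rand(256, 25, 25, 1).astype("float32")
        y = np.random.randint(0, 2, size=256)

        model = keras.Sequential([
            keras.Input(shape=(25, 25, 1)),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),   # P(boosted W | image)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.fit(X, y, epochs=2, batch_size=32, verbose=0)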

  13. Jet-images — deep learning edition

    International Nuclear Information System (INIS)

    Oliveira, Luke de; Kagan, Michael; Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel

    2016-01-01

    Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.

  14. Ultra Deep Wave Equation Imaging and Illumination

    Energy Technology Data Exchange (ETDEWEB)

    Alexander M. Popovici; Sergey Fomel; Paul Sava; Sean Crawley; Yining Li; Cristian Lupascu

    2006-09-30

    In this project we developed and tested a novel technology, designed to enhance seismic resolution and imaging of ultra-deep complex geologic structures by using state-of-the-art wave-equation depth migration and wave-equation velocity model building technology for deeper data penetration and recovery, steeper dip and ultra-deep structure imaging, accurate velocity estimation for imaging and pore pressure prediction and accurate illumination and amplitude processing for extending the AVO prediction window. Ultra-deep wave-equation imaging provides greater resolution and accuracy under complex geologic structures where energy multipathing occurs, than what can be accomplished today with standard imaging technology. The objective of the research effort was to examine the feasibility of imaging ultra-deep structures onshore and offshore, by using (1) wave-equation migration, (2) angle-gathers velocity model building, and (3) wave-equation illumination and amplitude compensation. The effort consisted of answering critical technical questions that determine the feasibility of the proposed methodology, testing the theory on synthetic data, and finally applying the technology for imaging ultra-deep real data. Some of the questions answered by this research addressed: (1) the handling of true amplitudes in the downward continuation and imaging algorithm and the preservation of the amplitude with offset or amplitude with angle information required for AVO studies, (2) the effect of several imaging conditions on amplitudes, (3) non-elastic attenuation and approaches for recovering the amplitude and frequency, (4) the effect of aperture and illumination on imaging steep dips and on discriminating the velocities in the ultra-deep structures. All these effects were incorporated in the final imaging step of a real data set acquired specifically to address ultra-deep imaging issues, with large offsets (12,500 m) and long recording time (20 s).
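
    The elementary operation inside wave-equation depth migration is downward continuation of the recorded wavefield; for a constant-velocity layer this reduces to a phase shift in the (kx, ω) domain, as in the illustrative sketch below (all values and the sign convention are ours).

        # One phase-shift downward-continuation step (constant velocity).
        import numpy as np

        nx, dx, dz, v, freq = 256, 10.0, 10.0, 2000.0, 25.0  # illustrative values
        w = 2 * np.pi * freq                                 # angular frequency
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)            # horizontal wavenumbers

        p = np.random.randn(nx) + 1j * np.random.randn(nx)   # wavefield P(x, z, w)
        P = np.fft.fft(p)                                    # to the (kx, w) domain

        kz2 = (w / v) ** 2 - kx ** 2
        kz = np.sqrt(np.maximum(kz2, 0.0))                   # vertical wavenumber
        P_down = np.where(kz2 > 0, P * np.exp(-1j * kz * dz), 0.0)  # cut evanescent

        p_down = np.fft.ifft(P_down)                         # wavefield at z + dz
        print(p_down.shape)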

  15. Chinese expert consensus on programming deep brain stimulation for patients with Parkinson's disease.

    Science.gov (United States)

    Chen, Shengdi; Gao, Guodong; Feng, Tao; Zhang, Jianguo

    2018-01-01

    Deep Brain Stimulation (DBS) therapy for the treatment of Parkinson's Disease (PD) is now a well-established option for some patients. Postoperative standardized programming processes can improve the level of postoperative management and programming, relieve symptoms and improve quality of life. In order to improve the quality of the programming, the experts on DBS and PD in neurology and neurosurgery in China reviewed the relevant literatures and combined their own experiences and developed this expert consensus on the programming of deep brain stimulation in patients with PD in China. This Chinese expert consensus on postoperative programming can standardize and improve postoperative management and programming of DBS for PD.

  16. Anaesthetic management of a patient with deep brain stimulation implant for radical nephrectomy

    Directory of Open Access Journals (Sweden)

    Monica Khetarpal

    2014-01-01

    Full Text Available A 63-year-old man with severe Parkinson's disease (PD) who had been implanted with deep brain stimulators into both sides underwent radical nephrectomy under general anaesthesia with standard monitoring. Deep brain stimulation (DBS) is an alternative and effective treatment option for severe and refractory PD and other illnesses such as essential tremor and intractable epilepsy. Anaesthesia in patients with an implanted neurostimulator requires special consideration because of the interaction between the neurostimulator and diathermy. Diathermy can damage brain tissue at the site of the electrode. There are no standard guidelines for the anaesthetic management of a patient with a DBS electrode in situ posted for surgery.

  17. Deep Carbon Observatory investigates Carbon from Crust to Core: An Academic Record of the History of Deep Carbon Science

    Science.gov (United States)

    Mitton, S. A.

    2017-12-01

    Carbon plays an unparalleled role in our lives: as the element of life, as the basis of most of society's energy, as the backbone of most new materials, and as the central focus in efforts to understand Earth's variable and uncertain climate. Yet in spite of carbon's importance, scientists remain largely ignorant of the physical, chemical, and biological behavior of many of Earth's carbon-bearing systems. The Deep Carbon Observatory (DCO) is a global research program to transform our understanding of carbon in Earth. At its heart, DCO is a community of scientists, from biologists to physicists, geoscientists to chemists, and many others whose work crosses these disciplinary lines, forging a new, integrative field of deep carbon science. As a historian of science, I specialise in the history of planetary science and astronomy since 1900, directed toward understanding the steps on the road to discovering the internal dynamics of our planet. Within a framework that describes the historical background to the new field of Earth System Science, I present the first history of deep carbon science. This project identifies the key discoveries of deep carbon science and assesses the impact of new knowledge on geochemistry, geodynamics, and geobiology. The project will lead to publication, in book form in 2019, of an illuminating narrative highlighting the engaging human stories of many remarkable scientists and natural philosophers from whom we have learned about the complexity of Earth's internal world. On this journey of discovery we encounter not just the pioneering researchers of deep carbon science, but also their institutions, their instrumental inventiveness, and their passion for exploration. The book is organised thematically around the four communities of the Deep Carbon Observatory: Deep Life, Extreme Physics and Chemistry, Reservoirs and Fluxes, and Deep Energy. The presentation has a gallery and list of Deep Carbon

  18. Jet flavor tagging with Deep Learning using Python

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Besides the part that implements the resulting deep neural net in the ATLAS C++ software framework, a Python framework has been developed to connect HEP data to standard Data Science Python-based libraries for Machine Learning. It makes use of HDF5, JSON and Pickle as intermediate data storage formats, pandas and numpy for data handling and calculations, Keras for neural net construction, training and testing, and matplotlib for plotting. It can be seen as an example of taking advantage of outside-HEP software developments without relying on the HEP-standard ROOT.
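
    A hedged sketch of the glue such a framework provides is shown below; the file name, column names and network are hypothetical, but the library roles (pandas/HDF5 for intermediate storage, numpy for arrays, Keras for the net, pickle for pipeline metadata) match the description above.

        import pickle
        import numpy as np
        import pandas as pd
        from tensorflow import keras

        # Toy stand-in for an HDF5 dump of jet features (column names hypothetical).
        df = pd.DataFrame({
            "pt": np.random.exponential(50, 1000),
            "eta": np.random.uniform(-2.5, 2.5, 1000),
            "sv_mass": np.random.rand(1000),
            "is_b_jet": np.random.randint(0, 2, 1000),
        })
        df.to_hdf("jets.h5", key="jets")          # HDF5 as intermediate storage
        df = pd.read_hdf("jets.h5", key="jets")   # ... and back again

        X = df[["pt", "eta", "sv_mass"]].to_numpy(dtype="float32")
        y = df["is_b_jet"].to_numpy()

        model = keras.Sequential([
            keras.Input(shape=(3,)),
            keras.layers.Dense(32, activation="relu"),
            keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        model.fit(X, y, epochs=3, batch_size=128, verbose=0)

        with open("feature_columns.pkl", "wb") as f:  # pickle for pipeline metadata
            pickle.dump(["pt", "eta", "sv_mass"], f)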

  19. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of the deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  20. Decadal trends in deep ocean salinity and regional effects on steric sea level

    Science.gov (United States)

    Purkey, S. G.; Llovel, W.

    2017-12-01

    We present deep (below 2000 m) and abyssal (below 4000 m) global ocean salinity trends from the 1990s through the 2010s and assess the role of deep salinity in local and global sea level budgets. Deep salinity trends are assessed using all deep basins with available full-depth, high-quality hydrographic section data that have been occupied two or more times since the 1980s through either the World Ocean Circulation Experiment (WOCE) Hydrographic Program or the Global Ship-Based Hydrographic Investigations Program (GO-SHIP). All salinity data are calibrated to standard seawater, and any intercruise offsets are applied. While the global mean deep halosteric contribution to sea level rise is close to zero (-0.017 +/- 0.023 mm/yr below 4000 m), there is large regional variability, with the southern deep basins becoming fresher and northern deep basins becoming more saline. This meridional gradient in the deep salinity trend reflects different mechanisms driving the deep salinity variability. The deep Southern Ocean is freshening owing to a recent increased flux of freshwater to the deep ocean. Outside of the Southern Ocean, the deep salinity and temperature changes are tied to isopycnal heave associated with a falling of deep isopycnals in recent decades. Therefore, regions of the ocean with a deep salinity minimum are experiencing a halosteric contraction together with a thermosteric expansion. While the thermosteric expansion is larger in most cases, in some regions the halosteric term compensates for as much as 50% of the deep thermal expansion, making a significant contribution to local sea level rise budgets.

  1. Deep space propagation experiments at Ka-band

    Science.gov (United States)

    Butman, Stanley A.

    1990-01-01

    Propagation experiments as essential components of the general plan to develop an operational deep space telecommunications and navigation capability at Ka-band (32 to 35 GHz) by the end of the 20th century are discussed. Significant benefits of Ka-band over the current deep space standard X-band (8.4 GHz) are an improvement of 4 to 10 dB in telemetry capacity and a similar increase in radio navigation accuracy. Propagation experiments are planned on the Mars Observer Mission in 1992 in preparation for the Cassini Mission to Saturn in 1996, which will use Ka-band in the search for gravity waves as well as to enhance telemetry and navigation at Saturn in 2002. Subsequent uses of Ka-band are planned for the Solar Probe Mission and the Mars Program.

  2. Bed rest versus early ambulation with standard anticoagulation in the management of deep vein thrombosis: a meta-analysis.

    Directory of Open Access Journals (Sweden)

    Zhenlei Liu

    Full Text Available Bed rest has long been considered the cornerstone of management of deep vein thrombosis (DVT), though it is not evidence-based, and there is growing evidence favoring early ambulation. Electronic databases including Medline, PubMed, Cochrane Library and three Chinese databases were searched with the key words "deep vein thrombosis", "pulmonary embolism", "venous thrombosis", "bed rest", "immobilization", "mobilization" and "ambulation". We considered randomized controlled trials and prospective or retrospective cohort studies that compared the outcomes of acute DVT patients managed with early ambulation versus bed rest, in addition to standard anticoagulation. Meta-analyses pertaining to the incidence of new pulmonary embolism (PE), progression of DVT, and DVT-related deaths were conducted, as well as of the extent of remission of pain and edema. 13 studies were included with a total of 3269 patients. Compared to bed rest, early ambulation was not associated with a higher incidence of new PE, progression of DVT, or DVT-related deaths (RD -0.03, 95% CI -0.05∼-0.02; Z = 1.24, p = 0.22; random effect model, Tau² = 0.01). Moreover, if the patients suffered moderate or severe pain initially, early ambulation was related to a better outcome with respect to remission of acute pain in the affected limb (SMD 0.42, 95% CI 0.09∼0.74; Z = 2.52, p = 0.01; random effect model, Tau² = 0.04). Meta-analysis of alleviation of edema could not yield a solid conclusion because of significant heterogeneity among the few studies. Compared to bed rest, early ambulation of acute DVT patients with anticoagulation was not associated with a higher incidence of new PE, progression of DVT, or DVT-related deaths. Furthermore, for patients who initially suffered moderate or severe pain, a better outcome was seen in the early ambulation group with regard to remission of acute pain in the affected limb.

  3. [Severity classification of chronic obstructive pulmonary disease based on deep learning].

    Science.gov (United States)

    Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe

    2017-12-01

    In this paper, a deep learning method is proposed to build an automatic classification algorithm for the severity of chronic obstructive pulmonary disease. Large-sample clinical data used as input features were analyzed for their weights in classification. Through feature selection, model training, parameter optimization and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria issued by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We obtained accuracy over 90% in prediction for two different standardized versions of the severity criteria, issued in 2007 and 2011 respectively. Moreover, we also obtained the contribution ranking of different input features by analyzing the model coefficient matrix, and confirmed that there was a certain degree of agreement between the more contributive input features and clinical diagnostic knowledge. The validity of the deep belief network model was proved by this result. This study provides an effective solution for the application of deep learning methods in automatic diagnostic decision making.

  4. Deep learning

    CERN Document Server

    Goodfellow, Ian; Courville, Aaron

    2016-01-01

    Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language proces...

  5. PEPSI deep spectra. II. Gaia benchmark stars and other M-K standards

    Science.gov (United States)

    Strassmeier, K. G.; Ilyin, I.; Weber, M.

    2018-04-01

    Context. High-resolution échelle spectra confine many essential stellar parameters once the data reach a quality appropriate to constrain the various physical processes that form these spectra. Aim. We provide a homogeneous library of high-resolution, high-S/N spectra for 48 bright AFGKM stars, some of them approaching the quality of solar-flux spectra. Our sample includes the northern Gaia benchmark stars, some solar analogs, and some other bright Morgan-Keenan (M-K) spectral standards. Methods: Well-exposed deep spectra were created by average-combining individual exposures. The data-reduction process relies on adaptive selection of parameters by using statistical inference and robust estimators. We employed spectrum synthesis techniques and statistics tools in order to characterize the spectra and give a first quick look at some of the science cases possible. Results: With an average spectral resolution of R ≈ 220 000 (1.36 km s-1), a continuous wavelength coverage from 383 nm to 912 nm, and S/N of between 70:1 for the faintest star in the extreme blue and 6000:1 for the brightest star in the red, these spectra are now made public for further data mining and analysis. Preliminary results include new stellar parameters for 70 Vir and α Tau, the detection of the rare-earth element dysprosium and the heavy elements uranium, thorium and neodymium in several RGB stars, and the use of the ¹²C to ¹³C isotope ratio for age-related determinations. We also found Arcturus to exhibit few-percent Ca II H&K and Hα residual profile changes with respect to the KPNO atlas taken in 1999. Based on data acquired with PEPSI using the Large Binocular Telescope (LBT) and the Vatican Advanced Technology Telescope (VATT). The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are the University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT

  6. Bacteriological examination and biological characteristics of deep frozen bone preserved by gamma sterilization

    International Nuclear Information System (INIS)

    Pham Quang Ngoc; Le The Trung; Vo Van Thuan; Ho Minh Duc

    1999-01-01

    To support surgical practice in Vietnam, bone allografts of different sizes must be supplied. For this reason we have developed a standard procedure for the procurement, deep freezing, packaging and radiation sterilization of massive bone. The achievements of this effort are briefly reported. A dose of 10-15 kGy proved suitable for radiation sterilization of massive bone allografts processed under clean conditions and preserved deep-frozen. Neither deep freezing nor radiation sterilization causes any significant loss of the biochemical stability of massive bone allografts, especially when deep freezing is combined with irradiation. Neither cross-infection nor changes of biological characteristics were found after 6 months of storage following radiation treatment. In addition to the results of previous research and development of tissue grafts for medical care, deep-freezing radiation sterilization has been established for the preservation of massive bone, which is in high demand for surgery in Vietnam.

  7. Classification of Exacerbation Frequency in the COPDGene Cohort Using Deep Learning with Deep Belief Networks.

    Science.gov (United States)

    Ying, Jun; Dutta, Joyita; Guo, Ning; Hu, Chenhui; Zhou, Dan; Sitek, Arkadiusz; Li, Quanzheng

    2016-12-21

    This study aims to develop an automatic classifier based on deep learning for exacerbation frequency in patients with chronic obstructive pulmonary disease (COPD). A three-layer deep belief network (DBN) with two hidden layers and one visible layer was employed to develop classification models, and the models' robustness to exacerbation was analyzed. Subjects from the COPDGene cohort were labeled with exacerbation frequency, defined as the number of exacerbation events per year. 10,300 subjects with 361 features each were included in the analysis. After feature selection and parameter optimization, the proposed classification method achieved an accuracy of 91.99% in a 10-fold cross-validation experiment. The analysis of DBN weights showed a good visual-spatial relationship between the underlying critical features of different layers. Our findings show that the most sensitive features obtained from the DBN weights are consistent with the consensus shown by clinical rules and standards for COPD diagnostics. We thus demonstrate that DBN is a competitive tool for exacerbation risk assessment for patients suffering from COPD.
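
    scikit-learn does not ship a full DBN, but its BernoulliRBM allows a DBN-flavored stand-in: an unsupervised RBM feature layer followed by a logistic readout, sketched below on random placeholder data with the study's feature count. This is our illustration only, not the authors' three-layer network.

        # DBN-flavored stand-in: RBM feature learning + logistic readout.
        import numpy as np
        from sklearn.neural_network import BernoulliRBM
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline

        X = np.random.rand(500, 361)            # placeholder for the 361 features
        y = np.random.randint(0, 2, size=500)   # placeholder exacerbation label

        model = Pipeline([
            ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20,
                                 random_state=0)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        model.fit(X, y)
        print("training accuracy:", model.score(X, y))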

  8. DeepGO: predicting protein functions from sequence and interactions using a deep ontology-aware classifier

    KAUST Repository

    Kulmanov, Maxat

    2017-09-27

    Motivation: A large number of protein sequences are becoming available through the application of novel high-throughput sequencing technologies. Experimental functional characterization of these proteins is time-consuming and expensive, and is often only done rigorously for a few selected model organisms. Computational function prediction approaches have been suggested to fill this gap. The functions of proteins are classified using the Gene Ontology (GO), which contains over 40,000 classes. Additionally, proteins have multiple functions, making function prediction a large-scale, multi-class, multi-label problem. Results: We have developed a novel method to predict protein function from sequence. We use deep learning to learn features from protein sequences as well as a cross-species protein–protein interaction network. Our approach specifically outputs information in the structure of the GO and utilizes the dependencies between GO classes as background information to construct a deep learning model. We evaluate our method using the standards established by the Computational Assessment of Function Annotation (CAFA) and demonstrate a significant improvement over baseline methods such as BLAST, in particular for predicting cellular locations.

  9. Challenges for deep space communications in the 1990s

    Science.gov (United States)

    Dumas, Larry N.; Hornstein, Robert M.

    1991-01-01

    The discussion of NASA's Deep Space Network (DSN) examines the evolving character of aerospace missions and the corresponding changes in the DSN architecture. Deep space missions are reviewed, and it is noted that the two 34-m and the 70-m antenna subnets of the DSN are heavily loaded and more use is expected. High operational workload and the challenge of network cross-support are the design drivers for a flexible DSN architecture configuration. Incorporated in the design are antenna arraying for aperture augmentation, beam-waveguide antennas for frequency agility, and connectivity with non-DSN sites for cross-support. Compatibility between spacecraft and ground-facility designs is important for establishing common international standards of communication and data-system specification.

  10. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    Science.gov (United States)

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest-neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using a single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and the Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective performance. Deep Neural Networks were the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi
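
    The evaluation protocol in this record rests on two ingredients that are easy to reproduce in outline: the Matthews Correlation Coefficient as a balanced metric, and a temporal split in which models train on older records and are tested on newer ones. The sketch below uses synthetic data and a Random Forest purely for illustration; BEDROC is not computed here (an implementation exists in RDKit).

        # Sketch: the same classifier scored with MCC under a random split and a
        # temporal split (train on early records, test on later ones). Synthetic data.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import matthews_corrcoef
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.random((500, 20))                    # placeholder descriptors
        y = (X[:, 0] + rng.normal(0, 0.3, 500) > 0.5).astype(int)
        year = rng.integers(2005, 2015, 500)         # placeholder record years

        clf = RandomForestClassifier(random_state=0)

        # Random split
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        mcc_random = matthews_corrcoef(yte, clf.fit(Xtr, ytr).predict(Xte))

        # Temporal split: records before 2012 for training, the rest for testing
        train, test = year < 2012, year >= 2012
        mcc_temporal = matthews_corrcoef(y[test], clf.fit(X[train], y[train]).predict(X[test]))
        print(mcc_random, mcc_temporal)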

  11. Visualization of the internal globus pallidus: sequence and orientation for deep brain stimulation using a standard installation protocol at 3.0 Tesla.

    Science.gov (United States)

    Nölte, Ingo S; Gerigk, Lars; Al-Zghloul, Mansour; Groden, Christoph; Kerl, Hans U

    2012-03-01

    Deep-brain stimulation (DBS) of the internal globus pallidus (GPi) has shown remarkable therapeutic benefits for treatment-resistant neurological disorders including dystonia and Parkinson's disease (PD). The success of DBS is critically dependent on reliable visualization of the GPi. The aim of the study was to evaluate promising 3.0 Tesla magnetic resonance imaging (MRI) methods for pre-stereotactic visualization of the GPi using a standard installation protocol. MRI at 3.0 T of nine healthy individuals and of one patient with PD was acquired (FLAIR, T1-MPRAGE, T2-SPACE, T2*-FLASH2D, susceptibility-weighted imaging (SWI)). Image quality and visualization of the GPi for each sequence were assessed by two neuroradiologists independently using a 6-point scale. Axial, coronal, and sagittal planes of the T2*-FLASH2D images were compared. Inter-rater reliability, contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) for the GPi were determined. For illustration, axial T2*-FLASH2D images were fused with a section schema of the Schaltenbrand-Wahren stereotactic atlas. The GPi was best and most reliably visualized on axial and, to a lesser degree, on coronal T2*-FLASH2D images. No major artifacts in the GPi were observed in any of the sequences. SWI offered a significantly higher CNR for the GPi compared to standard T2-weighted imaging using the standard parameters. The fusion of the axial T2*-FLASH2D images and the atlas projected the GPi clearly within the boundaries of the section schema. Using a standard installation protocol at 3.0 T, T2*-FLASH2D imaging (particularly the axial view) provides optimal and reliable delineation of the GPi.
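
    The reported contrast-to-noise and signal-to-noise ratios can be illustrated with one common convention (the record does not spell out its exact definitions): SNR as the ROI mean over the noise standard deviation, and CNR as the absolute difference of two ROI means over the same noise term. The ROIs below are synthetic placeholders.

        # One common convention for ROI-based SNR and CNR; the paper's exact
        # definitions are assumed, not quoted. Voxel intensities are synthetic.
        import numpy as np

        def snr(roi, noise):
            """Signal-to-noise ratio of a region of interest."""
            return roi.mean() / noise.std()

        def cnr(roi_a, roi_b, noise):
            """Contrast-to-noise ratio between two regions, e.g. GPi vs. adjacent tissue."""
            return abs(roi_a.mean() - roi_b.mean()) / noise.std()

        rng = np.random.default_rng(2)
        gpi = rng.normal(120, 5, (20, 20))        # placeholder GPi intensities
        adjacent = rng.normal(90, 5, (20, 20))    # placeholder surrounding tissue
        background = rng.normal(0, 4, (20, 20))   # placeholder noise ROI
        print(snr(gpi, background), cnr(gpi, adjacent, background))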

  12. Method for manufacturing nuclear radiation detector with deep diffused junction

    International Nuclear Information System (INIS)

    Hall, R.N.

    1977-01-01

    Germanium radiation detectors are manufactured by diffusing lithium into high purity p-type germanium. The diffusion is most readily accomplished from a lithium-lead-bismuth alloy at approximately 430 °C and is monitored by a quartz half cell containing a standard composition of this alloy. Detectors having n-type cores may be constructed by converting high purity p-type germanium to n-type by a lithium diffusion and subsequently diffusing some of the lithium back out through the surface to create a deep p-n junction. Production of coaxial germanium detectors comprising deep p-n junctions by the lithium diffusion process is described.

  13. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time of training each incremental ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  14. Coaxial nuclear radiation detector with deep junction and radial field gradient

    International Nuclear Information System (INIS)

    Hall, R.N.

    1979-01-01

    Germanium radiation detectors are manufactured by diffusing lithium into high purity p-type germanium. The diffusion is most readily accomplished from a lithium-lead-bismuth alloy at approximately 430 °C and is monitored by a quartz half cell containing a standard composition of this alloy. Detectors having n-type cores may be constructed by converting high purity p-type germanium to n-type by a lithium diffusion and subsequently diffusing some of the lithium back out through the surface to create a deep p-n junction. Coaxial germanium detectors comprising deep p-n junctions are produced by the lithium diffusion process.

  15. Deep Super Learner: A Deep Ensemble for Classification Problems

    OpenAIRE

    Young, Steven; Abdou, Tamer; Bener, Ayse

    2018-01-01

    Deep learning has become very popular for tasks such as predictive modeling and pattern recognition in handling big data. Deep learning is a powerful machine learning method that extracts lower level features and feeds them forward for the next layer to identify higher level features that improve performance. However, deep neural networks have drawbacks, which include many hyper-parameters and infinite architectures, opaqueness into results, and relatively slower convergence on smaller datase...

  16. DeepRT: deep learning for peptide retention time prediction in proteomics

    OpenAIRE

    Ma, Chunwei; Zhu, Zhiyong; Ye, Jun; Yang, Jiarui; Pei, Jianguo; Xu, Shaohang; Zhou, Ruo; Yu, Chang; Mo, Fan; Wen, Bo; Liu, Siqi

    2017-01-01

    Accurate predictions of peptide retention times (RT) in liquid chromatography have many applications in mass spectrometry-based proteomics. Herein, we present DeepRT, a deep learning based software for peptide retention time prediction. DeepRT automatically learns features directly from the peptide sequences using the deep convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) model, which eliminates the need to use hand-crafted features or rules. After the feature learning, pr...

  17. Application of positron annihilation lifetime technique to the study of deep level transients in semiconductors

    Science.gov (United States)

    Deng, A. H.; Shan, Y. Y.; Fung, S.; Beling, C. D.

    2002-03-01

    Unlike its conventional applications in lattice-defect characterization, the positron annihilation lifetime technique was applied here to study temperature-dependent deep-level transients in semiconductors. Defect levels in the band gap can be determined just as they are in conventional deep-level transient spectroscopy (DLTS) studies. The promising advantage of this application of positron annihilation over conventional DLTS is that it can additionally extract microstructural information about deep-level defects, such as whether a deep level is vacancy-related or not. A demonstration on the EL2 defect-level transient in GaAs was presented, and an EL2 level of 0.82±0.02 eV was obtained by a standard Arrhenius analysis, similar to that in conventional DLTS studies.
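
    A standard Arrhenius analysis of deep-level emission, as referenced in this record, fits ln(e_n/T²) against 1/(kB·T); the slope gives the negative activation energy. The sketch below assumes the usual T²-corrected form of the emission rate and recovers the 0.82 eV level from synthetic data generated with that value.

        # Standard DLTS-style Arrhenius analysis with the usual T^2 correction:
        # fit ln(e_n / T^2) against 1/(kB*T); the slope equals -Ea. Synthetic data.
        import numpy as np

        kB = 8.617e-5                       # Boltzmann constant, eV/K
        Ea_true = 0.82                      # activation energy used to fake the data
        T = np.linspace(300, 400, 8)        # measurement temperatures, K
        e_n = 1e9 * T**2 * np.exp(-Ea_true / (kB * T))   # synthetic emission rates, 1/s

        slope, intercept = np.polyfit(1.0 / (kB * T), np.log(e_n / T**2), 1)
        print(f"Ea = {-slope:.3f} eV")      # recovers ~0.82 eV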

  18. Moby and Moby 2: creatures of the deep (web).

    Science.gov (United States)

    Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D

    2009-03-01

    Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.

  19. Magnetic resonance imaging in deep pelvic endometriosis: iconographic essay

    International Nuclear Information System (INIS)

    Coutinho Junior, Antonio Carlos; Coutinho, Elisa Pompeu Dias; Lima, Claudio Marcio Amaral de Oliveira; Ribeiro, Erica Barreiros; Aidar, Marisa Nassar; Gasparetto, Emerson Leandro

    2008-01-01

    Endometriosis is characterized by the presence of normal endometrial tissue outside the uterine cavity. In patients with deep pelvic endometriosis, the uterosacral ligaments, rectum, rectovaginal septum, vagina or bladder may be involved. Clinical manifestations are variable, including pelvic pain, dysmenorrhea, dyspareunia, urinary symptoms and infertility. Complete surgical excision is the gold standard for treating this disease, hence the importance of the preoperative work-up, which usually is limited to an evaluation of sonographic and clinical data. Magnetic resonance imaging is of paramount importance in the diagnosis of endometriosis, considering its high accuracy in the identification of lesions intermingled with adhesions and in the determination of the extent of peritoneal lesions. The present pictorial review describes the main magnetic resonance imaging findings in deep pelvic endometriosis. (author)

  20. Magnetic resonance imaging in deep pelvic endometriosis: iconographic essay

    Energy Technology Data Exchange (ETDEWEB)

    Coutinho Junior, Antonio Carlos; Coutinho, Elisa Pompeu Dias; Lima, Claudio Marcio Amaral de Oliveira; Ribeiro, Erica Barreiros; Aidar, Marisa Nassar [Clinica de Diagnostico por Imagem (CDPI), Rio de Janeiro, RJ (Brazil); Clinica Multi-Imagem, Rio de Janeiro, RJ (Brazil); E-mail: cmaol@br.inter.net; Gasparetto, Emerson Leandro [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Dept. de Radiologia

    2008-03-15

    Endometriosis is characterized by the presence of normal endometrial tissue outside the uterine cavity. In patients with deep pelvic endometriosis, the uterosacral ligaments, rectum, rectovaginal septum, vagina or bladder may be involved. Clinical manifestations are variable, including pelvic pain, dysmenorrhea, dyspareunia, urinary symptoms and infertility. Complete surgical excision is the gold standard for treating this disease, hence the importance of the preoperative work-up, which usually is limited to an evaluation of sonographic and clinical data. Magnetic resonance imaging is of paramount importance in the diagnosis of endometriosis, considering its high accuracy in the identification of lesions intermingled with adhesions and in the determination of the extent of peritoneal lesions. The present pictorial review describes the main magnetic resonance imaging findings in deep pelvic endometriosis. (author)

  1. Deep-learning top taggers or the end of QCD?

    Energy Technology Data Exchange (ETDEWEB)

    Kasieczka, Gregor [Institute for Particle Physics, ETH Zürich, Otto-Stern-Weg 5, Zürich (Switzerland)]; Plehn, Tilman [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, Heidelberg (Germany)]; Russell, Michael [School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, Glasgow (United Kingdom)]; Schell, Torben [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, Heidelberg (Germany)]

    2017-05-02

    Machine learning based on convolutional neural networks can be used to study jet images from the LHC. Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.
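
    As a hedged sketch of the general technique (not the actual DeepTop architecture or preprocessing), the following PyTorch snippet defines a small convolutional classifier over single-channel 40x40 jet images with a two-class top-versus-QCD output.

        # Generic convolutional classifier for calorimeter jet images; layer sizes
        # and the 40x40 input are illustrative assumptions, not the paper's network.
        import torch
        import torch.nn as nn

        class JetCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=4, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, kernel_size=4, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(16 * 9 * 9, 64), nn.ReLU(),
                    nn.Linear(64, 2),                 # top jet vs. QCD jet
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        batch = torch.randn(32, 1, 40, 40)            # one 40x40 image per jet
        logits = JetCNN()(batch)
        print(logits.shape)                           # torch.Size([32, 2])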

  2. Deep-learning top taggers or the end of QCD?

    International Nuclear Information System (INIS)

    Kasieczka, Gregor; Plehn, Tilman; Russell, Michael; Schell, Torben

    2017-01-01

    Machine learning based on convolutional neural networks can be used to study jet images from the LHC. Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.

  3. Analyses of the deep borehole drilling status for a deep borehole disposal system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Youl; Choi, Heui Joo; Lee, Min Soo; Kim, Geon Young; Kim, Kyung Su [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The purpose of disposal for radioactive wastes is not only to isolate them from humans, but also to inhibit leakage of any radioactive materials into the accessible environment. Because of the extremely high level and long time scale of the radioactivity of HLW (high-level radioactive waste), a mined deep geological disposal concept, with a disposal depth of about 500 m below ground, is considered the safest method to isolate spent fuel or high-level radioactive waste from the human environment with the best technology available at present. Therefore, as an alternative disposal concept, deep borehole disposal technology is under consideration in a number of countries in terms of its outstanding safety and cost effectiveness. In this paper, as one of the key technologies of a deep borehole disposal system, the general status of deep drilling technologies in the oil industry, geothermal industry and geoscientific fields was reviewed for deep borehole disposal of high-level radioactive wastes. Based on the results of this review, the very preliminary applicability of deep drilling technology to deep borehole disposal, such as the relation between depth and diameter, drilling time and feasibility classification, was analyzed.

  4. Beyond standard quantum chromodynamics

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1995-09-01

    Despite the many empirical successes of QCD, there are a number of intriguing experimental anomalies that have been observed in heavy flavor hadroproduction, in measurements of azimuthal correlations in deep inelastic processes, and in measurements of spin correlations in hadronic reactions. Such phenomena point to color coherence and multiparton correlations in the hadron wavefunctions and physics beyond standard leading twist factorization. Two new high precision tests of QCD and the Standard Model are discussed: classical polarized photoabsorption sum rules, which are sensitive to anomalous couplings and composite structure, and commensurate scale relations, which relate physical observables to each other without scale or scheme ambiguity. The relationship of anomalous couplings to composite structure is also discussed

  5. Deep waters : the Ottawa River and Canada's nuclear adventure

    International Nuclear Information System (INIS)

    Krenz, F.H.K.

    2004-01-01

    Deep Waters is an intimate account of the principal events and personalities involved in the successful development of the Canadian nuclear power system (CANDU), an achievement that is arguably one of Canada's greatest scientific and technical successes of the twentieth century. The author tells the stories of the people involved and the problems they faced and overcame and also relates the history of the development of the town of Deep River, built exclusively for the scientists and employees of the Chalk River Project and describes the impact of the Project on the traditional communities of the Ottawa Valley. Public understanding of nuclear power has remained confused, yet decisions about whether and how to use it are of vital importance to Canadians today - and will increase in importance as we seek to maintain our standard of living without doing irreparable damage to the environment around us. Deep Waters examines the issues involved in the use of nuclear power without over-emphasizing its positive aspects or avoiding its negative aspects.

  6. Deep Space Telecommunications

    Science.gov (United States)

    Kuiper, T. B. H.; Resch, G. M.

    2000-01-01

    The increasing load on NASA's Deep Space Network, the new capabilities for deep space missions inherent in a next-generation radio telescope, and the potential of new telescope technology for reducing construction and operation costs suggest a natural marriage between radio astronomy and deep space telecommunications in developing advanced radio telescope concepts.

  7. Developing Deep Learning Applications for Life Science and Pharma Industry.

    Science.gov (United States)

    Siegismund, Daniel; Tolkachev, Vasily; Heyse, Stephan; Sick, Beate; Duerr, Oliver; Steigele, Stephan

    2018-06-01

    Deep Learning has boosted artificial intelligence over the past 5 years and is now seen as one of the major technological innovation areas, predicted to replace many repetitive but complex tasks of human labor within the next decade. It is also expected to be 'game changing' for research activities in pharma and the life sciences, where large sets of similar yet complex data samples are systematically analyzed. Deep learning is currently conquering formerly expert domains, especially in areas requiring perception that were previously not amenable to standard machine learning. A typical example is the automated analysis of images, which are typically produced en masse in many domains, e.g., in high-content screening or digital pathology. Deep learning makes it possible to create competitive applications in what have so far been considered core domains of 'human intelligence'. Applications of artificial intelligence have been enabled in recent years by (i) the massive availability of data samples collected in pharma-driven drug programs ('big data'), (ii) deep learning algorithmic advancements and (iii) the increase in compute power. Such applications are based on software frameworks with specific strengths and weaknesses. Here, we introduce typical applications and underlying frameworks for deep learning, with a set of practical criteria for developing production-ready solutions in life science and pharma research. Based on our own experience in successfully developing deep learning applications, we provide suggestions and a baseline for selecting the most suitable frameworks for a future-proof and cost-effective development. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Deep learning guided stroke management: a review of clinical applications.

    Science.gov (United States)

    Feng, Rui; Badgeley, Marcus; Mocco, J; Oermann, Eric K

    2018-04-01

    Stroke is a leading cause of long-term disability, and outcome is directly related to timely intervention. Not all patients benefit from rapid intervention, however. Thus a significant amount of attention has been paid to using neuroimaging to assess potential benefit by identifying areas of ischemia that have not yet experienced cellular death. The perfusion-diffusion mismatch is used as a simple metric for potential benefit with timely intervention, yet penumbral patterns provide an inaccurate predictor of clinical outcome. Machine learning research in the form of deep learning (artificial intelligence) techniques using deep neural networks (DNNs) excels at working with complex inputs. The key areas where deep learning may be imminently applied to stroke management are image segmentation, automated featurization (radiomics), and multimodal prognostication. The application of convolutional neural networks, the family of DNN architectures designed to work with images, to stroke imaging data is a perfect match between a mature deep learning technique and a data type that is naturally suited to benefit from deep learning's strengths. These powerful tools have opened up exciting opportunities for data-driven stroke management for acute intervention and for guiding prognosis. Deep learning techniques are useful for the speed and power of results they can deliver and will become an increasingly standard tool in the modern stroke specialist's arsenal for delivering personalized medicine to patients with ischemic stroke. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  9. Postoperative shoulder pain after laparoscopic hysterectomy with deep neuromuscular blockade and low-pressure pneumoperitoneum

    DEFF Research Database (Denmark)

    Madsen, Matias Vested; Istre, Olav; Staehr-Rye, Anne K

    2016-01-01

    …indicate that the use of deep neuromuscular blockade (NMB) improves surgical conditions during a low-pressure pneumoperitoneum (8 mmHg). OBJECTIVE: The aim of this study was to investigate whether low-pressure pneumoperitoneum (8 mmHg) and deep NMB (posttetanic count 0 to 1), compared with standard … PATIENTS: Ninety-nine patients. INTERVENTIONS: Randomisation to either deep NMB and 8 mmHg pneumoperitoneum (Group 8-Deep) or moderate NMB and 12 mmHg pneumoperitoneum (Group 12-Mod). Pain was assessed on a visual analogue scale (VAS) for 14 postoperative days. MAIN OUTCOME MEASURES: The primary endpoint was the incidence of shoulder pain during the 14 postoperative days. Secondary endpoints included area-under-the-curve VAS scores for shoulder, abdominal, incisional and overall pain during 4 and 14 postoperative days; opioid consumption; incidence of nausea and vomiting; antiemetic consumption; and time to recovery…

  10. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result, standard time series methods regularly used for financial and similar datasets are of little help, and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series to two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that convolutional neural network based classifiers work well for broad characterization and classification. We use labeled datasets of periodic variables from the CRTS survey and show how this opens doors for a quick classification of diverse classes with several…
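
    The two-dimensional representation alluded to here can be sketched as follows: collect all pairwise time and magnitude differences of a light curve into a small 2D histogram "image" suitable for a convolutional network. The bin edges below are illustrative choices, not the published dmdt binning.

        # Sketch of a dm-dt "image": pairwise (dt, dm) differences of a light curve
        # binned into a 2D histogram. Bin edges are illustrative assumptions.
        import numpy as np

        def dmdt_image(t, m, dt_bins, dm_bins):
            i, j = np.triu_indices(len(t), k=1)       # all pairs with j > i
            dt = t[j] - t[i]
            dm = m[j] - m[i]
            img, _, _ = np.histogram2d(dt, dm, bins=[dt_bins, dm_bins])
            return img / img.max()                    # normalise for the network

        rng = np.random.default_rng(3)
        t = np.sort(rng.uniform(0, 1000, 120))        # sparse, gappy sampling
        m = 15 + 0.4 * np.sin(2 * np.pi * t / 27.3) + rng.normal(0, 0.05, 120)
        img = dmdt_image(t, m, np.logspace(-1, 3, 24), np.linspace(-1.5, 1.5, 24))
        print(img.shape)                              # (23, 23) histogram image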

  11. Deep inelastic collisions viewed as Brownian motion

    International Nuclear Information System (INIS)

    Gross, D.H.E.; Freie Univ. Berlin

    1980-01-01

    Non-equilibrium transport processes like Brownian motion have been studied for perhaps 100 years, and one should ask why these theories are not used to explain deep inelastic collision data. They have reached a high standard of sophistication, experience, and precision that I believe makes them very useful for our problem. I will try to sketch a possible form of an advanced theory of Brownian motion that seems suitable for low-energy heavy-ion collisions. (orig./FKS)
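
    For readers unfamiliar with the transport theories invoked here, a one-dimensional Langevin equation is the prototype: friction damps a collective velocity while a fluctuating force, tied to the friction by the fluctuation-dissipation relation, heats it. The Euler-Maruyama sketch below uses arbitrary units and is purely illustrative, not a heavy-ion calculation.

        # Illustrative Euler-Maruyama integration of m*dv = -gamma*v*dt + sqrt(2*gamma*kT)*dW.
        # Parameters are arbitrary units, chosen only to show the equipartition result.
        import numpy as np

        rng = np.random.default_rng(4)
        gamma, m, kT = 1.0, 1.0, 0.5        # friction, mass, temperature
        dt, steps = 1e-3, 20000
        v = np.zeros(steps)
        for k in range(steps - 1):
            noise = np.sqrt(2 * gamma * kT) * rng.normal() * np.sqrt(dt)
            v[k + 1] = v[k] - (gamma / m) * v[k] * dt + noise / m

        print(0.5 * m * v[10000:].var())    # ~kT/2, the equipartition check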

  12. Greedy Deep Dictionary Learning

    OpenAIRE

    Tariyal, Snigdha; Majumdar, Angshul; Singh, Richa; Vatsa, Mayank

    2016-01-01

    In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time. This requires solving a simple (shallow) dictionary learning problem, whose solution is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like the stacked autoencoder and the deep belief network, and state-of-the-art supervised dictionary learning t...
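
    The greedy layer-by-layer idea can be sketched with an off-the-shelf shallow solver: learn a dictionary, take the sparse codes as training data for the next level, and repeat. The snippet below uses scikit-learn's DictionaryLearning with placeholder data; the paper's exact formulation and solver are not reproduced.

        # Greedy layer-wise dictionary learning: codes from one level become the
        # input to the next. A sketch with a generic shallow solver, not the paper's.
        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(5)
        X = rng.random((100, 64))                     # placeholder training data

        level1 = DictionaryLearning(n_components=32, max_iter=50, random_state=0)
        codes1 = level1.fit_transform(X)              # first-level sparse codes

        level2 = DictionaryLearning(n_components=16, max_iter=50, random_state=0)
        codes2 = level2.fit_transform(codes1)         # deeper representation
        print(codes2.shape)                           # (100, 16)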

  13. The consensus among Chinese interventional experts on the standard of interventional therapy for deep venous thrombosis of lower extremity

    International Nuclear Information System (INIS)

    Academic Group of Interventional Radiology, Radiology Branch of Chinese Medical Association

    2011-01-01

    This paper introduces the indications and contraindications of catheter-directed thrombolysis, percutaneous mechanical thrombectomy, balloon angioplasty and stent implantation for deep venous thrombosis of the lower extremity, and summarizes and illustrates the operating procedures, points for attention, and perioperative complications and their prevention for each kind of interventional technique. Great importance is attached to interventional therapy for both acute and subacute deep venous thrombosis of the lower extremity in order to effectively reduce the occurrence of post-thrombosis syndrome. (authors)

  14. Extraction of phenolic compounds from extra virgin olive oil by a natural deep eutectic solvent: Data on UV absorption of the extracts

    Directory of Open Access Journals (Sweden)

    Vito Michele Paradiso

    2016-09-01

    This data article refers to the paper "Towards green analysis of virgin olive oil phenolic compounds: extraction by a natural deep eutectic solvent and direct spectrophotometric detection" [1]. A deep eutectic solvent (DES) based on lactic acid and glucose was used as a green solvent for phenolic compounds. Eight standard phenolic compounds were solubilized in the DES. Then, a set of extra virgin olive oil (EVOO) samples (n=65) was submitted to liquid–liquid extraction by the DES. The standard solutions and the extracts were analyzed by UV spectrophotometry. This article reports the spectral data of both the standard solutions and the 65 extracts, as well as the total phenolic content of the corresponding oils, assessed by the Folin–Ciocalteu assay. Keywords: Natural deep eutectic solvents, Extra virgin olive oil, Phenolic compounds, UV spectrophotometry

  15. DeepBipolar: Identifying genomic mutations for bipolar disorder via deep learning.

    Science.gov (United States)

    Laksshman, Sundaram; Bhat, Rajendra Rana; Viswanath, Vivek; Li, Xiaolin

    2017-09-01

    Bipolar disorder, also known as manic depression, is a brain disorder that affects the brain structure of a patient. It results in extreme mood swings between severe states of depression and overexcitement. It is estimated that roughly 3% of the population of the United States (about 5.3 million adults) suffers from bipolar disorder. Recent research efforts, like twin studies, have demonstrated a high heritability factor for the disorder, making genomics a viable alternative for detecting and treating bipolar disorder, in addition to the conventional lengthy and costly post-symptom clinical diagnosis. Motivated by this, and leveraging several emerging deep learning algorithms, we design an end-to-end deep learning architecture (called DeepBipolar) to predict bipolar disorder based on limited genomic data. DeepBipolar adopts the Deep Convolutional Neural Network (DCNN) architecture, which automatically extracts features from genotype information to predict the bipolar phenotype. We participated in the Critical Assessment of Genome Interpretation (CAGI) bipolar disorder challenge and DeepBipolar was considered the most successful by the independent assessor. In this work, we thoroughly evaluate the performance of DeepBipolar and analyze the type of signals we believe could have affected the classifier in distinguishing the case samples from the control set. © 2017 Wiley Periodicals, Inc.

  16. Deep learning? What deep learning? | Fourie | South African ...

    African Journals Online (AJOL)

    In teaching generally over the past twenty years, there has been a move towards teaching methods that encourage deep, rather than surface approaches to learning. The reason for this being that students, who adopt a deep approach to learning are considered to have learning outcomes of a better quality and desirability ...

  17. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning.

    Science.gov (United States)

    Abràmoff, Michael David; Lou, Yiyue; Erginay, Ali; Clarida, Warren; Amelon, Ryan; Folk, James C; Niemeijer, Meindert

    2016-10-01

    To compare the performance of a deep-learning enhanced algorithm for automated detection of diabetic retinopathy (DR) with the previously published performance of that algorithm, the Iowa Detection Program (IDP), without deep learning components, on the same publicly available set of fundus images and the previously reported consensus reference standard set by three US Board-certified retinal specialists. We used the previously reported consensus reference standard of referable DR (rDR), defined as International Clinical Classification of Diabetic Retinopathy moderate or severe nonproliferative DR (NPDR), proliferative DR, and/or macular edema (ME). Neither the Messidor-2 images nor the three retinal specialists setting the Messidor-2 reference standard were used for training IDx-DR version X2.1. Sensitivity, specificity, negative predictive value, area under the curve (AUC), and their confidence intervals (CIs) were calculated. Sensitivity was 96.8% (95% CI: 93.3%-98.8%), specificity was 87.0% (95% CI: 84.2%-89.4%), with 6/874 false negatives, resulting in a negative predictive value of 99.0% (95% CI: 97.8%-99.6%). No cases of severe NPDR, PDR, or ME were missed. The AUC was 0.980 (95% CI: 0.968-0.992). Sensitivity was not statistically different from the published IDP sensitivity, which had a CI of 94.4% to 99.3%, but specificity was significantly better than the published IDP specificity CI of 55.7% to 63.0%. A deep-learning enhanced algorithm for the automated detection of DR achieves significantly better performance than a previously reported, otherwise essentially identical, algorithm that does not employ deep learning. Deep learning enhanced algorithms have the potential to improve the efficiency of DR screening, and thereby to prevent visual loss and blindness from this devastating disease.
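
    The screening metrics quoted in this record (sensitivity, specificity, negative predictive value, AUC with confidence intervals) can be reproduced in outline as below; the labels and scores are synthetic, and a simple bootstrap stands in for whatever CI method the authors used.

        # Sketch of the reported screening metrics on synthetic labels and scores;
        # the bootstrap CI is an assumption, not the paper's stated method.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(6)
        y = rng.integers(0, 2, 874)                   # placeholder rDR labels
        p = np.clip(0.3 * y + rng.random(874) * 0.7, 0, 1)   # placeholder scores
        yhat = (p >= 0.5).astype(int)

        tp = ((yhat == 1) & (y == 1)).sum(); fn = ((yhat == 0) & (y == 1)).sum()
        tn = ((yhat == 0) & (y == 0)).sum(); fp = ((yhat == 1) & (y == 0)).sum()
        print("sens", tp / (tp + fn), "spec", tn / (tn + fp), "npv", tn / (tn + fn))

        aucs = [roc_auc_score(y[i], p[i])             # bootstrap resamples
                for i in (rng.integers(0, len(y), len(y)) for _ in range(1000))]
        print("AUC", roc_auc_score(y, p), "95% CI", np.percentile(aucs, [2.5, 97.5]))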

  18. Deep water challenges for drilling rig design

    Energy Technology Data Exchange (ETDEWEB)

    Roth, M [Transocean Sedco Forex, Houston, TX (United States)]

    2001-07-01

    Drilling rigs designed for deep water must meet specific design considerations for harsh environments. The early lessons for rig design came from experiences in the North Sea. Rig efficiency and safety considerations must include structural integrity, isolated/redundant ballast controls, triple-redundant DP systems, enclosed heated work spaces, and automated equipment such as bridge cranes, pipe handling gear, offline capabilities, subsea tree handling, and computerized drill floors. All components must be designed to harmonize man and machine. Some challenges unique to Eastern Canada include frequent storms and fog, cold temperatures, icebergs, rig ice, and difficult logistics. This presentation described station-keeping and mooring issues in terms of dynamic positioning. The environmental influence on riser management during forced disconnects was also described. Designs for connected deep-water risers must ensure elastic stability and control the deflected shape. The design must also keep stresses within acceptable limits. Codes and standards for stress limits, flex joints and tension were also presented. tabs., figs.

  19. Recent machine learning advancements in sensor-based mobility analysis: Deep learning for Parkinson's disease assessment.

    Science.gov (United States)

    Eskofier, Bjoern M; Lee, Sunghoon I; Daneault, Jean-Francois; Golabchi, Fatemeh N; Ferreira-Carvalho, Gabriela; Vergara-Diaz, Gloria; Sapienza, Stefano; Costante, Gianluca; Klucken, Jochen; Kautz, Thomas; Bonato, Paolo

    2016-08-01

    The development of wearable sensors has opened the door for long-term assessment of movement disorders. However, there is still a need for developing methods suitable to monitor motor symptoms in and outside the clinic. The purpose of this paper was to investigate deep learning as a method for this monitoring. Deep learning recently broke records in speech and image classification, but it has not been fully investigated as a potential approach to analyze wearable sensor data. We collected data from ten patients with idiopathic Parkinson's disease using inertial measurement units. Several motor tasks were expert-labeled and used for classification. We specifically focused on the detection of bradykinesia. For this, we compared standard machine learning pipelines with deep learning based on convolutional neural networks. Our results showed that deep learning outperformed other state-of-the-art machine learning algorithms by at least 4.6 % in terms of classification rate. We contribute a discussion of the advantages and disadvantages of deep learning for sensor-based movement assessment and conclude that deep learning is a promising method for this field.
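
    A typical first step for this kind of sensor-based deep learning, before any network is trained, is to cut the multi-channel inertial signal into fixed-length windows. The sketch below shows that windowing step with illustrative choices of window length and overlap; it is not the authors' pipeline.

        # Sliding-window segmentation of a multi-channel IMU signal, the usual
        # preprocessing before a 1D CNN. Window length and overlap are assumptions.
        import numpy as np

        def sliding_windows(signal, width, step):
            """signal: (n_samples, n_channels) -> (n_windows, width, n_channels)."""
            starts = range(0, len(signal) - width + 1, step)
            return np.stack([signal[s:s + width] for s in starts])

        rng = np.random.default_rng(7)
        imu = rng.normal(size=(6000, 6))     # e.g. 60 s of 100 Hz accel+gyro data
        windows = sliding_windows(imu, width=200, step=100)   # 2 s windows, 50% overlap
        print(windows.shape)                 # (59, 200, 6), ready for a CNN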

  20. The application of deep brain stimulation in the treatment of psychiatric disorders

    NARCIS (Netherlands)

    Graat, Ilse; Figee, Martijn; Denys, D.

    2017-01-01

    Deep brain stimulation (DBS) is a last-resort treatment for neurological and psychiatric disorders that are refractory to standard treatment. Over the last decades, the progress of DBS in psychiatry has been slower than in neurology, in part owing to the heterogenic symptomatology and complex

  1. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    Science.gov (United States)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  2. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...

  3. Deep Learning Techniques for Top-Quark Reconstruction

    CERN Document Server

    Naderi, Kiarash

    2017-01-01

    Top quarks are unique probes of standard model (SM) predictions and have the potential to be a window on physics beyond the SM (BSM). Top quarks decay to a $Wb$ pair, and the $W$ can decay into leptons or jets. In a top-pair event, assigning jets to their correct source is a challenge. In this study, I investigated different methods for improving top reconstruction. The main motivation was to use deep learning techniques in order to enhance the precision of top reconstruction.

  4. Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.

    Science.gov (United States)

    Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li

    2016-06-07

    Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy for microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for their discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracy of microcalcifications and breast masses, either in isolation or in combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer.

  5. Deep-learning Top Taggers or The End of QCD?

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    https://arxiv.org/abs/1701.08784 Machine learning based on convolutional neural networks can be used to study jet images from the LHC. Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.

  6. High efficacy with deep nurse-administered propofol sedation for advanced gastroenterologic endoscopic procedures

    DEFF Research Database (Denmark)

    Jensen, Jeppe Thue; Hornslet, Pernille; Konge, Lars

    2016-01-01

    BACKGROUND AND STUDY AIMS: Whereas data on moderate nurse-administered propofol sedation (NAPS) efficacy and safety for standard endoscopy is abundant, few reports on the use of deep sedation by endoscopy nurses during advanced endoscopy, such as Endoscopic Retrograde Cholangiopancreatography (ERCP) and Endoscopic Ultrasound (EUS), are available, and potential benefits or hazards remain unclear. The aims of this study were to investigate the efficacy of intermittent deep sedation with propofol for a large cohort of advanced endoscopies and to provide data on the safety. PATIENTS AND METHODS: All available… …was requested eight times (0.4 %). One patient was intubated due to suspected aspiration. CONCLUSIONS: Intermittent deep NAPS for advanced endoscopies in selected patients provided an almost 100 % success rate. However, the rate of hypoxia, hypotension and respiratory support was high compared with previously…

  7. Determining the dark matter mass with DeepCore

    Energy Technology Data Exchange (ETDEWEB)

    Das, Chitta R. [Centro de Física Teórica de Partículas, Instituto Superior Técnico (CFTP), Universidade Técnica de Lisboa, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Mena, Olga [Instituto de Física Corpuscular (IFIC), CSIC-Universitat de València, Apartado de Correos 22085, E-46071 Valencia (Spain); Palomares-Ruiz, Sergio, E-mail: sergio.palomares.ruiz@ist.utl.pt [Centro de Física Teórica de Partículas, Instituto Superior Técnico (CFTP), Universidade Técnica de Lisboa, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Instituto de Física Corpuscular (IFIC), CSIC-Universitat de València, Apartado de Correos 22085, E-46071 Valencia (Spain); Pascoli, Silvia [IPPP, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom)

    2013-10-01

    Cosmological and astrophysical observations provide increasing evidence of the existence of dark matter in our Universe. Dark matter particles with a mass above a few GeV can be captured by the Sun, accumulate in the core, annihilate, and produce high energy neutrinos either directly or by subsequent decays of Standard Model particles. We investigate the prospects for indirect dark matter detection in the IceCube/DeepCore neutrino telescope and its capabilities to determine the dark matter mass.

  8. DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks.

    Science.gov (United States)

    Li, Chao; Wang, Xinggang; Liu, Wenyu; Latecki, Longin Jan

    2018-04-01

    Mitotic count is a critical predictor of tumor aggressiveness in breast cancer diagnosis. Nowadays mitosis counting is mainly performed by pathologists manually, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting mitotic cells from histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating the mitosis region when only a weak label is given (i.e., only the centroid pixel of the mitosis is annotated), an elaborately designed deep detection network for localizing mitosis by using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from the ICPR 2012 grand challenge merely using the deep detection network. For the ICPR 2014 MITOSIS dataset, which only provides the centroid location of mitosis, we employ the segmentation model to estimate the bounding box annotation for training the deep detection network. We also apply the verification model to eliminate some false positives produced by the detection model. By fusing scores of the detection and verification models, we achieve state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Deep frying

    NARCIS (Netherlands)

    Koerten, van K.N.

    2016-01-01

    Deep frying is one of the most used methods in the food processing industry. Though practically any food can be fried, French fries are probably the most well-known deep fried products. The popularity of French fries stems from their unique taste and texture, a crispy outside with a mealy soft

  10. Deep-sea environment and biodiversity of the West African Equatorial margin

    Science.gov (United States)

    Sibuet, Myriam; Vangriesheim, Annick

    2009-12-01

    The long-term BIOZAIRE multidisciplinary deep-sea environmental program on the West Equatorial African margin, organized in partnership between Ifremer and TOTAL, aimed at characterizing the benthic community structure in relation to physical and chemical processes in a region of oil and gas interest. The morphology of the deep Congo submarine channel and the sedimentological structures of the deep-sea fan were established during the geological ZAIANGO project and helped to select study sites ranging from 350 to 4800 m water depth, inside or near the channel and away from its influence. Ifremer conducted eight deep-sea cruises on board research vessels between 2000 and 2005. Standardized methods of sampling, together with new technologies such as the ROV Victor 6000 and its associated instrumentation, were used to investigate this poorly known continental margin. In addition to the study of sedimentary environments more or less influenced by turbidity events, the discovery of one of the largest cold seeps near the Congo channel and of deep coral reefs extends our knowledge of the different habitats of this margin. This paper presents the background, objectives and major results of the BIOZAIRE program. It highlights the work achieved in the 16 papers of this special issue. This synthesis paper describes the knowledge acquired at regional and local scales of the Equatorial East Atlantic margin, and tackles new interdisciplinary questions to be answered in the various domains of physics, chemistry, taxonomy and ecology to better understand the deep-sea environment in the Gulf of Guinea.

  11. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene

    2018-05-02

    Background: Prioritization of variants in personal genomic data is a major challenge. Recently, computational methods that rely on comparing phenotype similarity have been shown to be useful for identifying causative variants. In these methods, pathogenicity prediction is combined with a semantic similarity measure to prioritize not only variants that are likely to be dysfunctional but those that are likely involved in the pathogenesis of a patient's phenotype. Results: We have developed DeepPVP, a variant prioritization method that combines automated inference with deep neural networks to identify the likely causative variants in whole exome or whole genome sequence data. We demonstrate that DeepPVP performs significantly better than existing methods, including phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well as accuracy.
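
    The core idea, stripped of the deep network and the ontology machinery, is to rank variants by a model that combines a pathogenicity score with a phenotype-similarity score. The toy sketch below does this with two synthetic features and logistic regression; it illustrates the combination only and is not DeepPVP itself.

        # Toy variant prioritization: combine a pathogenicity score with a
        # phenotype-similarity score and rank by predicted causative probability.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(8)
        pathogenicity = rng.random(1000)              # placeholder deleteriousness scores
        phenosim = rng.random(1000)                   # placeholder phenotype similarity
        causative = (pathogenicity + phenosim + rng.normal(0, 0.3, 1000) > 1.3).astype(int)

        X = np.column_stack([pathogenicity, phenosim])
        model = LogisticRegression().fit(X, causative)
        ranking = np.argsort(-model.predict_proba(X)[:, 1])   # most likely causative first
        print(ranking[:10])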

  12. Random Access Frames (RAF): Alternative to Rack and Standoff for Deep Space Habitat Outfitting

    Science.gov (United States)

    Howe, A. Scott; Polit-Casillas, Raul

    2014-01-01

    A modular Random Access Frame (RAF) system is proposed as an alternative to the International Standard Payload Rack (ISPR) for internal module layout and outfitting in a Deep Space Habitat (DSH). The ISPR approach was designed to allow for efficient interchangeability of payload and experiments for the International Space Station (ISS) when frequent resupply missions were available (particularly the now-retired Space Shuttle). Though the standard interface approach to the ISPR system allowed integration of subsystems and hardware from a variety of sources and manufacturers, the heavy rack and standoff approach may not be appropriate when resupply or swap-out capabilities are not available, such as on deep space, long-duration missions. The lightweight RAF concept can allow a more dense packing of stowage and equipment, and may be easily broken down for repurposing or reuse. Several example layouts and workstations are presented.

  13. Hot, deep origin of petroleum: deep basin evidence and application

    Science.gov (United States)

    Price, Leigh C.

    1978-01-01

    Use of the model of a hot deep origin of oil places rigid constraints on the migration and entrapment of crude oil. Specifically, oil originating from depth migrates vertically up faults and is emplaced in traps at shallower depths. Review of petroleum-producing basins worldwide shows oil occurrence in these basins conforms to the restraints of and therefore supports the hypothesis. Most of the world's oil is found in the very deepest sedimentary basins, and production over or adjacent to the deep basin is cut by or directly updip from faults dipping into the basin deep. Generally the greater the fault throw the greater the reserves. Fault-block highs next to deep sedimentary troughs are the best target areas by the present concept. Traps along major basin-forming faults are quite prospective. The structural style of a basin governs the distribution, types, and amounts of hydrocarbons expected and hence the exploration strategy. Production in delta depocenters (Niger) is in structures cut by or updip from major growth faults, and structures not associated with such faults are barren. Production in block fault basins is on horsts next to deep sedimentary troughs (Sirte, North Sea). In basins whose sediment thickness, structure and geologic history are known to a moderate degree, the main oil occurrences can be specifically predicted by analysis of fault systems and possible hydrocarbon migration routes. Use of the concept permits the identification of significant targets which have either been downgraded or ignored in the past, such as production in or just updip from thrust belts, stratigraphic traps over the deep basin associated with major faulting, production over the basin deep, and regional stratigraphic trapping updip from established production along major fault zones.

  14. Deep inelastic scattering as a probe of new hadronic mass scales

    International Nuclear Information System (INIS)

    Burges, C.J.C.; Schnitzer, H.J.

    1984-01-01

    We present the general form for deep-inelastic cross sections obtained from all SU(3) x SU(2) x U(1) invariant operators of dimension six or less. The operators of dimension six generate corrections to the predictions of the standard model, which serve as a probe of a possible new mass-scale Λ and other new physics. (orig.)

  15. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.

  16. Deep vein thrombosis of the leg

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Hee; Rhee, Kwang Woo; Jeon, Suk Chul; Joo, Kyung Bin; Lee, Seung Ro; Seo, Heung Suk; Hahm, Chang Kok [College of Medicine, Hanyang University, Seoul (Korea, Republic of)

    1987-04-15

    Ascending contrast venography is the definitive standard method for the diagnosis of deep vein thrombosis (DVT) of the lower extremities. The authors analysed 22 cases of DVT clinically and radiographically. 1. The patients ranged in age from 15 to 70 years, and the most prevalent age group was the 7th decade (31%). There was an equal distribution of males and females. 2. In 11 of the 22 cases, various etiologic factors were recognized, such as abdominal surgery, chronic bedridden state, local trauma to the leg, pregnancy, postpartum state, Behcet's syndrome, iliac artery aneurysm, and chronic estrogen medication. 3. Nineteen of the 22 cases showed primary venographic signs of DVT, such as a well-defined filling defect in opacified veins and a narrowed, irregularly filled venous lumen. In only 3 cases was the diagnosis of DVT based upon segmental nonvisualization of deep veins with good opacification of proximal and distal veins and the presence of collaterals. 4. Extent of thrombosis: 3 cases were confined to the calf veins, 4 cases extended to the femoral vein, and 15 cases had involvement above the iliac vein. 5. In 17 cases involving a relatively long segment of the deep veins, the propagation pattern of the thrombus was evaluated by its radiologic morphology according to the age of the thrombus: 9 cases suggested a central or antegrade propagation pattern and 8 cases a peripheral or retrograde pattern. 6. None of the 22 cases showed clinical evidence of pulmonary embolism. The rarity of pulmonary embolism in Koreans is presumed to be related to differences in the major sites of involvement and propagation patterns of DVT in the leg.

  17. Deep vein thrombosis of the leg

    International Nuclear Information System (INIS)

    Lee, Eun Hee; Rhee, Kwang Woo; Jeon, Suk Chul; Joo, Kyung Bin; Lee, Seung Ro; Seo, Heung Suk; Hahm, Chang Kok

    1987-01-01

    Ascending contrast venography is the definitive standard method for the diagnosis of deep vein thrombosis (DVT) of the lower extremities. The authors analysed 22 cases of DVT clinically and radiographically. 1. The patients ranged in age from 15 to 70 years, and the most prevalent age group was the 7th decade (31%). There was an equal distribution of males and females. 2. In 11 of the 22 cases, various etiologic factors were recognized, such as abdominal surgery, chronic bedridden state, local trauma to the leg, pregnancy, postpartum state, Behcet's syndrome, iliac artery aneurysm, and chronic estrogen medication. 3. Nineteen of the 22 cases showed primary venographic signs of DVT, such as a well-defined filling defect in opacified veins and a narrowed, irregularly filled venous lumen. In only 3 cases was the diagnosis of DVT based upon segmental nonvisualization of deep veins with good opacification of proximal and distal veins and the presence of collaterals. 4. Extent of thrombosis: 3 cases were confined to the calf veins, 4 cases extended to the femoral vein, and 15 cases had involvement above the iliac vein. 5. In 17 cases involving a relatively long segment of the deep veins, the propagation pattern of the thrombus was evaluated by its radiologic morphology according to the age of the thrombus: 9 cases suggested a central or antegrade propagation pattern and 8 cases a peripheral or retrograde pattern. 6. None of the 22 cases showed clinical evidence of pulmonary embolism. The rarity of pulmonary embolism in Koreans is presumed to be related to differences in the major sites of involvement and propagation patterns of DVT in the leg.

  18. 75 FR 51838 - Public Review of Draft Coastal and Marine Ecological Classification Standard

    Science.gov (United States)

    2010-08-23

    ... Web site. DATES: Comments on the draft Coastal and Marine Ecological Classification Standard must be... marine and coastal environments of the United States. It was developed to provide a common language that... existing classification standards. The CMECS domain extends from the coastal tidal splash zone to the deep...

  19. Photoacoustic image reconstruction via deep learning

    Science.gov (United States)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow the inclusion of prior knowledge such as smoothness, total variation (TV), or sparsity constraints. These algorithms tend to be time-consuming, as the forward and adjoint problems must be solved repeatedly. Iterative algorithms have additional drawbacks: for example, the reconstruction quality strongly depends on a priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained on a set of training data before the reconstruction process. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
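
    As an illustration of the direct-reconstruction idea, here is a minimal sketch in which a small convolutional network learns to subtract under-sampling artifacts from an initial reconstruction; the layer sizes and residual design are assumptions for illustration, not the authors' published architectures.

      # Minimal sketch (illustrative architecture): a CNN that maps sparse-data
      # PAT reconstructions (with under-sampling artifacts) toward clean images.
      import numpy as np
      from tensorflow.keras import layers, models

      def build_artifact_removal_cnn(size=128):
          inp = layers.Input(shape=(size, size, 1))      # artifacted reconstruction
          x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
          x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
          res = layers.Conv2D(1, 3, padding="same")(x)   # predicted artifact residual
          out = layers.Subtract()([inp, res])            # cleaned = input - residual
          return models.Model(inp, out)

      model = build_artifact_removal_cnn()
      model.compile(optimizer="adam", loss="mse")

      # Training pairs would come from simulated sparse-view reconstructions (x)
      # and fully sampled references (y); random arrays stand in here.
      x = np.random.rand(8, 128, 128, 1).astype("float32")
      y = np.random.rand(8, 128, 128, 1).astype("float32")
      model.fit(x, y, epochs=1, batch_size=4, verbose=0)

      # At inference, a single forward pass replaces an iterative solver loop.
      cleaned = model.predict(x, verbose=0)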

  20. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu

    2017-12-23

    Motivation: Oxford Nanopore sequencing is a sequencing technology that has developed rapidly in recent years. To keep pace with the explosion of downstream data-analysis tools, a versatile Nanopore sequencing simulator is needed to complement experimental data and to benchmark newly developed tools. However, all currently available simulators are based on simple statistics of the produced reads, which have difficulty capturing the complex nature of the Nanopore sequencing procedure, whose main task is the generation of raw electrical current signals. Results: Here we propose a deep learning based simulator, DeepSimulator, to mimic the entire pipeline of Nanopore sequencing. Starting from a given reference genome or assembled contigs, we simulate the electrical current signals with a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. Thorough experiments across four species show that the signals generated by our context-dependent model are more similar to experimentally obtained signals than those generated by the official context-independent pore model. In terms of the simulated reads, we provide a parameter interface so that users can obtain reads with accuracies ranging from 83% to 97%. The reads generated with the default parameters have almost the same properties as real data. Two case studies demonstrate the application of DeepSimulator to the development of tools for de novo assembly and low-coverage SNP detection. Availability: The software can be accessed freely at: https://github.com/lykaust15/DeepSimulator.
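
    A deliberately simplified sketch of the signal-generation step follows. This is the context-independent "pore model" style that DeepSimulator improves on, with hypothetical current levels; the paper's own model is a context-dependent deep network.

      # Simplified stand-in for signal generation: each 6-mer gets a fixed mean
      # current level plus Gaussian noise (the pore-model style; current values
      # here are invented for illustration).
      import numpy as np

      rng = np.random.default_rng(0)
      BASES = "ACGT"
      kmer_mean = {}  # hypothetical lookup: one mean current level per 6-mer

      def kmer_level(kmer):
          if kmer not in kmer_mean:
              kmer_mean[kmer] = rng.uniform(60.0, 120.0)  # picoamps, illustrative
          return kmer_mean[kmer]

      def simulate_signal(seq, samples_per_kmer=8, noise_sd=1.5):
          """Emit a noisy current trace as the pore slides over successive 6-mers."""
          signal = []
          for i in range(len(seq) - 5):
              level = kmer_level(seq[i:i + 6])
              signal.extend(level + rng.normal(0.0, noise_sd, samples_per_kmer))
          return np.array(signal)

      reference = "".join(rng.choice(list(BASES), size=200))
      trace = simulate_signal(reference)
      print(trace.shape)  # 195 six-mer windows x 8 samples each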

  1. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    Given a query, we train a Recurrent Neural Network (RNN) on existing information that is relevant to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep-learned synthetic document). The synthetic document is, on average, ranked most relevant of all.

  2. Tephrostratigraphy the DEEP site record, Lake Ohrid

    Science.gov (United States)

    Leicher, N.; Zanchetta, G.; Sulpizio, R.; Giaccio, B.; Wagner, B.; Francke, A.

    2016-12-01

    In the central Mediterranean region, tephrostratigraphy has proved to be a suitable and powerful tool for dating and correlating marine and terrestrial records. However, for periods older than 200 ka, the tephrostratigraphy is incomplete and restricted to a few Italian continental basins (e.g., Sulmona, Acerno, Mercure), and continuous records downwind of the Italian volcanoes are rare. Lake Ohrid (Macedonia/Albania) in the eastern Mediterranean region meets this requirement and is assumed to be the oldest continuously existing lake in Europe. A continuous record (DEEP) was recovered within the scope of the ICDP deep-drilling campaign SCOPSCO (Scientific Collaboration on Past Speciation Conditions in Lake Ohrid). In the uppermost 450 meters of the record, covering more than 1.2 Myr of Italian volcanism, 54 tephra layers were identified during core opening and description. A first tephrostratigraphic record was established for the uppermost 248 m (about 637 ka). Major element analyses (EDS/WDS) were carried out on juvenile glass fragments, and 15 out of 35 tephra layers have been identified and correlated with known and dated eruptions of Italian volcanoes. Existing 40Ar/39Ar ages were recalculated using the same flux standard and used as first-order tie points to develop a robust chronology for the DEEP site succession. Between 248 and 450 m of the DEEP site record, another 19 tephra horizons were identified and are the subject of ongoing work. These deposits, once correlated with known and dated tephra, should enable dating of this part of the succession, likely supported by major paleomagnetic events such as the Brunhes-Matuyama boundary or the Cobb Mountain and Jaramillo excursions. This makes the Lake Ohrid record a unique, continuous, distal record of Italian volcanic activity, which is a candidate to become the template for central Mediterranean tephrostratigraphy, especially for the hitherto poorly known and explored lower Middle Pleistocene period.

  3. STIMULATION TECHNOLOGIES FOR DEEP WELL COMPLETIONS

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2003-06-01

    The Department of Energy (DOE) is sponsoring a Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a project to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling and stimulation activity, review rock mechanics and fracture growth in deep, high-pressure/temperature wells, and evaluate stimulation technology in several key deep plays. Phase 1 was recently completed and consisted of assessing deep gas well drilling activity (1995-2007) and an industry survey of deep gas well stimulation practices by region. Of the 29,000 oil, gas, and dry holes drilled in 2002, about 300 were deep wells; 25% of these were dry, 50% were high-temperature/high-pressure completions, and 25% were simply deep completions. South Texas has about 30% of these wells, Oklahoma 20%, the Gulf of Mexico Shelf 15%, and the Gulf Coast about 15%. The Rockies represent only 2% of deep drilling. Of the 60 operators who drill deep and HTHP wells, the top 20 drill almost 80% of the wells; six operators drill half the U.S. deep wells. Deep drilling peaked at 425 wells in 1998 and fell to 250 in 1999. Drilling is expected to rise through 2004, after which it should cycle down as overall drilling declines.

  4. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning

    Directory of Open Access Journals (Sweden)

    Chuan Li

    2016-06-01

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox and 91.75% for the bearing system. The proposed approach is compared to standard methods such as a support vector machine, a GRBM, and a combination model. In the experiments, the best fault classification rate was achieved by the proposed model. The results show that deep learning with statistical feature extraction has essential potential for improving the diagnosis of rotating machinery faults.
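
    A minimal sketch of the GRBM building block follows: one contrastive-divergence (CD-1) update with real-valued, unit-variance visible units for the statistical features and binary hidden units. Sizes and learning rate are illustrative, not the paper's settings.

      # One CD-1 update for a Gaussian-Bernoulli RBM (illustrative only; the
      # paper stacks such GRBMs into a deep Boltzmann machine, the GDBM).
      import numpy as np

      rng = np.random.default_rng(0)
      n_vis, n_hid, lr = 12, 8, 0.01           # e.g. 12 statistical features
      W = rng.normal(0, 0.01, (n_vis, n_hid))
      b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def cd1_step(v0):
          """One CD-1 parameter update from a batch of visible vectors v0."""
          global W, b_vis, b_hid
          h0_prob = sigmoid(v0 @ W + b_hid)                  # P(h=1 | v)
          h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0   # sample hidden states
          v1 = h0 @ W.T + b_vis                              # Gaussian mean recon
          h1_prob = sigmoid(v1 @ W + b_hid)
          # Positive minus negative phase statistics
          W += lr * (v0.T @ h0_prob - v1.T @ h1_prob) / len(v0)
          b_vis += lr * (v0 - v1).mean(axis=0)
          b_hid += lr * (h0_prob - h1_prob).mean(axis=0)

      features = rng.normal(size=(32, n_vis))   # standardized feature batch
      for _ in range(100):
          cd1_step(features)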

  5. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning.

    Science.gov (United States)

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-06-17

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox and 91.75% for the bearing system. The proposed approach is compared to standard methods such as a support vector machine, a GRBM, and a combination model. In the experiments, the best fault classification rate was achieved by the proposed model. The results show that deep learning with statistical feature extraction has essential potential for improving the diagnosis of rotating machinery faults.

  6. Statistical Analysis of Deep Drilling Process Conditions Using Vibrations and Force Signals

    Directory of Open Access Journals (Sweden)

    Syafiq Hazwan

    2016-01-01

    Cooling systems are a key element in the hot forming process of Ultra High Strength Steels (UHSS). Such cooling systems are normally produced using deep drilling techniques. Although deep twist drilling offers higher productivity than other drilling techniques, its main problem is premature tool breakage, which affects production quality. In this paper, a statistical analysis of deep twist drill process parameters such as cutting speed, feed rate, and depth of cut is presented to identify the tool condition. Comparisons between two different tool geometries are also studied. Measured data from vibration and force sensors are analyzed through several statistical parameters such as root mean square (RMS), mean, kurtosis, standard deviation, and skewness. The results show that the kurtosis and skewness values are the most appropriate parameters for representing deep twist drill tool condition from the vibration and force data. The condition of the deep twist drill process was classified as good, blunt, or fractured. Different tool geometry parameters were also found to affect drill performance. The results of this study are useful for determining a suitable analysis method for developing an online tool-condition monitoring system that identifies the tertiary stage of tool life and helps avoid premature tool fracture during the drilling process.
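
    The five statistics named above are straightforward to compute; a sketch assuming raw vibration or force samples in a 1-D NumPy array:

      # Per-channel statistical features of a sensor signal; the abstract reports
      # kurtosis and skewness as the most informative for tool condition.
      import numpy as np
      from scipy.stats import kurtosis, skew

      def signal_features(x):
          return {
              "rms": float(np.sqrt(np.mean(x ** 2))),
              "mean": float(np.mean(x)),
              "std": float(np.std(x)),
              "kurtosis": float(kurtosis(x)),  # tail-heaviness of the distribution
              "skewness": float(skew(x)),      # asymmetry of the distribution
          }

      vibration = np.random.default_rng(0).normal(size=4096)  # stand-in for data
      print(signal_features(vibration))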

  7. IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD

    Directory of Open Access Journals (Sweden)

    A. S. Potapov

    2014-09-01

    The subject of this research is deep learning methods, in which feature transforms are constructed automatically for pattern recognition tasks. Multilayer autoencoders are the type of deep learning network considered: autoencoders perform a nonlinear feature transform, with logistic regression as an upper classification layer. To verify the hypothesis that recognition rates can be improved by global optimization of the parameters of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, a new method has been designed and implemented. The method applies simulated annealing to tune the connection weights of the autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database show that the modified method reduces the recognition error rate by a factor of 1.1 to 1.5 compared with the traditional method based on local optimization. No overfitting effect appears, confirming that global optimization methods can improve recognition performance in deep learning networks. The research results can be applied to improving pattern recognition in fields that require automatic construction of nonlinear feature transforms, in particular image recognition. Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing.
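
    A minimal sketch of the annealing loop itself: the loss function below is an arbitrary stand-in for the network's validation error, and in the paper the perturbed parameters are autoencoder connection weights.

      # Simulated annealing over a flattened weight vector with Metropolis
      # acceptance of uphill moves and a geometric cooling schedule.
      import numpy as np

      rng = np.random.default_rng(0)

      def loss(w):
          # Stand-in for the recognition error of a network with weights w.
          return float(np.sum((w - 0.5) ** 2) + 0.1 * np.sin(w).sum())

      def simulated_annealing(n_weights=100, t0=1.0, cooling=0.995, steps=5000):
          w = rng.normal(size=n_weights)
          best_w, best_l = w.copy(), loss(w)
          cur_l, t = best_l, t0
          for _ in range(steps):
              cand = w + rng.normal(scale=0.05, size=n_weights)  # local move
              cand_l = loss(cand)
              # Always accept improvements; accept worse moves with
              # Boltzmann probability so the search can escape local minima.
              if cand_l < cur_l or rng.random() < np.exp((cur_l - cand_l) / t):
                  w, cur_l = cand, cand_l
                  if cur_l < best_l:
                      best_w, best_l = w.copy(), cur_l
              t *= cooling
          return best_w, best_l

      w_opt, l_opt = simulated_annealing()
      print(round(l_opt, 4))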

  8. Deep ecology: A movement and a new approach to solving environmental problems

    Directory of Open Access Journals (Sweden)

    Mišković Milan M.

    2016-01-01

    In industrial society, nature is conceived as a resource for unlimited exploitation, with the assumption that the entropic effects of its pollution and depletion can be effectively controlled and resolved. Non-human entities are viewed as raw materials for technical manipulation and for increasing the standard of living of consumers in mass societies. Contrary to such utilitarian pragmatism, new views on the relationship of man, society, and nature are appearing, as well as different concepts of environmentally balanced development. According to these views, the transition to an ecological society and ecological culture will not be possible without replacing the current anthropocentric ethics with ecocentric or environmental ethics. Deep ecology arises within the spectrum of environmental ethics theories and is considered both a movement and a new approach to solving environmental problems. Deep ecology is a type of ecosophy formulated by Arne Næss, and it focuses on wisdom and ecological balance. It is based on ecological science, but it asks deeper questions about the causes of the ecological crisis and corresponds to the general discourse on sustainable development. The article discusses the platform of the deep ecology movement and gives the basic principles of deep ecology. It explains the two basic norms of deep ecology (self-realization and biospheric egalitarianism) and offers a criticism of these concepts.

  9. Development of a code of practice for deep geothermal wells

    International Nuclear Information System (INIS)

    Leaver, J.D.; Bolton, R.S.; Dench, N.D.; Fooks, L.

    1990-01-01

    Recent and ongoing changes to the structure of the New Zealand geothermal industry have shifted responsibility for the development of geothermal resources from central government to private enterprise. The need for a code of practice for deep geothermal wells was identified by the Geothermal Inspectorate of the Ministry of Commerce to maintain adequate standards of health and safety and to assist with industry deregulation. The Code contains details of the methods, procedures, formulae, and design data necessary to attain those standards, and includes information with which drilling engineers experienced only in the oil industry could not be expected to be familiar.

  10. Deep subsurface microbial processes

    Science.gov (United States)

    Lovley, D.R.; Chapelle, F.H.

    1995-01-01

    Information on the microbiology of the deep subsurface is necessary in order to understand the factors controlling the rate and extent of the microbially catalyzed redox reactions that influence the geophysical properties of these environments. Furthermore, there is an increasing threat that deep aquifers, an important drinking water resource, may be contaminated by man's activities, and there is a need to predict the extent to which microbial activity may remediate such contamination. Metabolically active microorganisms can be recovered from a diversity of deep subsurface environments. The available evidence suggests that these microorganisms are responsible for catalyzing the oxidation of organic matter coupled to a variety of electron acceptors, just as microorganisms do in surface sediments, but at much slower rates. The technical difficulties in aseptically sampling deep subsurface sediments, and the fact that microbial processes in laboratory incubations of deep subsurface material often do not mimic in situ processes, frequently necessitate that microbial activity in the deep subsurface be inferred through nonmicrobiological analyses of ground water. These approaches include measurements of dissolved H2, which can predict the predominant microbially catalyzed redox reactions in aquifers, as well as geochemical and groundwater flow modeling, which can be used to estimate the rates of microbial processes. Microorganisms recovered from the deep subsurface have the potential to affect the fate of toxic organics and inorganic contaminants in groundwater. Microbial activity also greatly influences the chemistry of many pristine groundwaters and contributes to such phenomena as porosity development in carbonate aquifers, accumulation of undesirably high concentrations of dissolved iron, and production of methane and hydrogen sulfide. Although the last decade has seen a dramatic increase in interest in deep subsurface microbiology, in comparison with the study of ...

  11. Pathogenesis of deep endometriosis.

    Science.gov (United States)

    Gordts, Stephan; Koninckx, Philippe; Brosens, Ivo

    2017-12-01

    The pathophysiology of (deep) endometriosis is still unclear. As originally suggested by Cullen, the definition "deeper than 5 mm" should be changed to "adenomyosis externa." With the discovery of the older European literature on uterine bleeding in 5%-10% of neonates, and histologic evidence that the bleeding represents decidual shedding, it is hypothesized that endometrial stem/progenitor cells implanted in the pelvic cavity after birth may be at the origin of adolescent, and even occasionally premenarcheal, pelvic endometriosis. Endometriosis in the adolescent is characterized by angiogenic and hemorrhagic peritoneal and ovarian lesions. The development of deep endometriosis at a later age suggests that deep infiltrating endometriosis is a delayed stage of endometriosis. Another hypothesis is that the endometriotic cell has undergone genetic or epigenetic changes and that those specific changes determine the development into deep endometriosis. This is compatible with the hereditary aspects and with the clonality of deep and cystic ovarian endometriosis. It explains the predisposition and an eventual causal effect of dioxin or radiation. Specific genetic/epigenetic changes could explain the various expressions, and thus typical, cystic, and deep endometriosis become three different diseases. Subtle lesions are not a disease until (epi)genetic changes occur. A classification should reflect that deep endometriosis is a specific disease. In conclusion, the pathophysiology of deep endometriosis remains debated, and the mechanisms of disease progression, as well as the role of genetics and epigenetics in the process, still need to be unraveled.

  12. DeepSpark: A Spark-Based Distributed Deep Learning Framework for Commodity Clusters

    OpenAIRE

    Kim, Hanjoo; Park, Jaehong; Jang, Jaehee; Yoon, Sungroh

    2016-01-01

    The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that exploits Apache Spark on commodity clusters. To support parallel operation...

  13. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    Science.gov (United States)

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study of simultaneous PET/MR imaging was carried out in five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches against CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain.
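
    The Dice similarity coefficient used above to score the pseudo-CT tissue maps against the acquired CT is simple to state; a sketch over integer-labelled volumes (the label coding is an assumption for illustration):

      # Dice coefficient per tissue class: 2*|A∩B| / (|A|+|B|).
      import numpy as np

      def dice(seg_a, seg_b, label):
          """Overlap score for one tissue label between two segmentations."""
          a, b = (seg_a == label), (seg_b == label)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      rng = np.random.default_rng(0)
      pseudo_ct = rng.integers(0, 3, size=(64, 64, 64))  # 0=air, 1=soft, 2=bone
      true_ct = pseudo_ct.copy()
      true_ct[:4] = 0                                    # mimic some disagreement
      print({lbl: round(dice(pseudo_ct, true_ct, lbl), 3) for lbl in (0, 1, 2)})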

  14. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene; Kulmanov, Maxat; Schofield, Paul N; Gkoutos, Georgios V; Hoehndorf, Robert

    2018-01-01

    ... phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp. Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well ...

  15. DeepARG: a deep learning approach for predicting antibiotic resistance genes from metagenomic data.

    Science.gov (United States)

    Arango-Argoty, Gustavo; Garner, Emily; Pruden, Amy; Heath, Lenwood S; Vikesland, Peter; Zhang, Liqing

    2018-02-01

    Growing concerns about increasing rates of antibiotic resistance call for expanded and comprehensive global monitoring. Advancing methods for monitoring of environmental media (e.g., wastewater, agricultural waste, food, and water) is especially needed for identifying potential sources of novel antibiotic resistance genes (ARGs), hot spots for gene exchange, and pathways for the spread of ARGs and human exposure. Next-generation sequencing now enables direct access and profiling of the total metagenomic DNA pool, where ARGs are typically identified or predicted based on the "best hits" of sequence searches against existing databases. Unfortunately, this approach produces a high rate of false negatives. To address such limitations, we propose here a deep learning approach, taking into account a dissimilarity matrix created using all known categories of ARGs. Two deep learning models, DeepARG-SS and DeepARG-LS, were constructed for short read sequences and full gene length sequences, respectively. Evaluation of the deep learning models over 30 antibiotic resistance categories demonstrates that the DeepARG models can predict ARGs with both high precision (> 0.97) and recall (> 0.90). The models displayed an advantage over the typical best-hit approach, yielding consistently lower false negative rates and thus higher overall recall (> 0.9). As more data become available for under-represented ARG categories, the DeepARG models' performance can be expected to be further enhanced due to the nature of the underlying neural networks. Our newly developed ARG database, DeepARG-DB, encompasses ARGs predicted with a high degree of confidence and extensive manual inspection, greatly expanding current ARG repositories. The deep learning models developed here offer more accurate antimicrobial resistance annotation relative to current bioinformatics practice. DeepARG does not require strict cutoffs, which enables identification of a much broader diversity of ARGs.

  16. The standard model in a nutshell

    CERN Document Server

    Goldberg, Dave

    2017-01-01

    For a theory as genuinely elegant as the Standard Model--the current framework describing elementary particles and their forces--it can sometimes appear to students to be little more than a complicated collection of particles and a ranked list of interactions. The Standard Model in a Nutshell provides a comprehensive and uncommonly accessible introduction to one of the most important subjects in modern physics, revealing why, despite initial appearances, the entire framework really is as elegant as physicists say. Dave Goldberg uses a "just-in-time" approach to instruction that enables students to gradually develop a deep understanding of the Standard Model even if this is their first exposure to it. He covers everything from relativity, group theory, and relativistic quantum mechanics to the Higgs boson, unification schemes, and physics beyond the Standard Model. The book also looks at new avenues of research that could answer still-unresolved questions and features numerous worked examples and helpful illustrations.

  17. Free-living energy expenditure reduced after deep brain stimulation surgery for Parkinson's disease

    DEFF Research Database (Denmark)

    Jørgensen, Hans Ulrik; Werdelin, Lene; Lokkegaard, Annemette

    2012-01-01

    Treatment with deep brain stimulation in the subthalamic nucleus (STN-DBS) is now considered the gold standard in fluctuating PD. Many patients experience weight gain following the surgery. The aim of this study was to identify possible mechanisms which may contribute to body weight gain in patients with PD.

  18. PATELLOFEMORAL MODEL OF THE KNEE JOINT UNDER NON-STANDARD SQUATTING

    OpenAIRE

    FEKETE, GUSZTÁV; CSIZMADIA, BÉLA MÁLNÁSI; WAHAB, MAGD ABDEL; DE BAETS, PATRICK; VANEGAS-USECHE, LIBARDO V.; BÍRÓ, ISTVÁN

    2014-01-01

    The available analytical models for calculating knee patellofemoral forces are limited to the standard squat motion, in which the center of gravity is fixed horizontally. In this paper, an analytical model is presented that accurately calculates patellofemoral forces by taking into account the change in position of the trunk's center of gravity under deep squat (non-standard squatting). The accuracy of the derived model is validated through comparisons with results of the inverse dynamics technique. ...

  19. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-09-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a study to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. An assessment of historical deep gas well drilling activity and forecast of future trends was completed during the first six months of the project; this segment of the project was covered in Technical Project Report No. 1. The second progress report covers the next six months of the project during which efforts were primarily split between summarizing rock mechanics and fracture growth in deep reservoirs and contacting operators about case studies of deep gas well stimulation.

  20. DeepQA: improving the estimation of single protein model quality with deep belief networks.

    Science.gov (United States)

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-12-05

    Protein quality assessment (QA), useful for ranking and selecting protein models, has long been viewed as one of the major challenges in protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large pool consisting mostly of low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method, DeepQA, based on a deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physico-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance than support vector machines and neural networks on the protein model quality assessment problem, and our method DeepQA achieves state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single-model quality assessment and protein structure prediction. The source code, executable, documentation, and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.

  1. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2005-06-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies conducted a study to evaluate the stimulation of deep wells. The objective of the project was to review U.S. deep well drilling and stimulation activity, review rock mechanics and fracture growth in deep, high-pressure/temperature wells and evaluate stimulation technology in several key deep plays. This report documents results from this project.

  2. DeepPicker: A deep learning approach for fully automated particle picking in cryo-EM.

    Science.gov (United States)

    Wang, Feng; Gong, Huichao; Liu, Gaochao; Li, Meijing; Yan, Chuangye; Xia, Tian; Li, Xueming; Zeng, Jianyang

    2016-09-01

    Particle picking is a time-consuming step in single-particle analysis and often requires significant interventions from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously-analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on the recently-published cryo-EM data of three complexes have demonstrated that our deep learning based scheme can successfully accomplish the human-level particle picking process and identify a sufficient number of particles that are comparable to those picked manually by human experts. These results indicate that DeepPicker can provide a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination. DeepPicker is released as an open-source program, which can be downloaded from https://github.com/nejyeah/DeepPicker-python.

  3. The National Deep-Sea Coral and Sponge Database: A Comprehensive Resource for United States Deep-Sea Coral and Sponge Records

    Science.gov (United States)

    Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.

    2014-12-01

    Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution, from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them to the NOAA National Data Centers, to build a robust metadata record, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database. Records can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.

  4. Linear: A Novel Algorithm for Reconstructing Slitless Spectroscopy from HST/WFC3

    Science.gov (United States)

    Ryan, R. E., Jr.; Casertano, S.; Pirzkal, N.

    2018-03-01

    We present a grism extraction package (LINEAR) designed to reconstruct 1D spectra from a collection of slitless spectroscopic images, ideally taken at a variety of orientations, dispersion directions, and/or dither positions. Our approach is to enumerate every transformation between all direct image positions (i.e., a potential source) and the collection of grism images at all relevant wavelengths. This leads to solving a large, sparse system of linear equations, which we invert using the standard LSQR algorithm. We implement a number of color and geometric corrections (such as flat field, pixel-area map, source morphology, and spectral bandwidth), but assume many effects have been calibrated out (such as basic reductions, background subtraction, and astrometric refinement). We demonstrate the power of our approach with several Monte Carlo simulations and the analysis of archival data. The simulations include astrometric and photometric uncertainties, sky-background estimation, and signal-to-noise calculations. The data are G141 observations obtained with the Wide-Field Camera 3 of the Hubble Ultra-Deep Field, and show the power of our formalism by improving the spectral resolution without sacrificing the signal-to-noise (a tradeoff that is often made by current approaches). Additionally, our approach naturally accounts for source contamination, which is only handled heuristically by present software. We conclude with a discussion of various observations where our approach will provide much-improved 1D spectra, such as crowded fields (star or galaxy clusters), spatially resolved spectroscopy, or surveys with strict completeness requirements. At present our software is heavily geared for Wide-Field Camera 3 IR; however, we plan to extend the codebase for additional instruments.
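
    A toy version of the inversion step follows, using SciPy's standard LSQR. The matrix below is a random stand-in for the true mapping from per-source, per-wavelength fluxes to grism pixels.

      # Solve a large, sparse linear system with LSQR, as in the extraction step.
      import numpy as np
      from scipy.sparse import random as sparse_random
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(0)
      n_pix, n_unknowns = 5000, 800   # grism pixels vs. (source, wavelength) bins
      A = sparse_random(n_pix, n_unknowns, density=0.01, random_state=0,
                        format="csr")
      x_true = rng.random(n_unknowns)
      b = A @ x_true + rng.normal(scale=1e-3, size=n_pix)  # noisy pixel values

      # LSQR iterates without ever densifying A, which is what keeps the
      # enumerate-every-transformation approach tractable.
      result = lsqr(A, b, atol=1e-8, btol=1e-8)
      x_hat, istop, itn = result[0], result[1], result[2]
      print(istop, itn, float(np.abs(x_hat - x_true).max()))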

  5. Hourly air pollution concentrations and their important predictors over Houston, Texas using deep neural networks: case study of DISCOVER-AQ time period

    Science.gov (United States)

    Eslami, E.; Choi, Y.; Roy, A.

    2017-12-01

    Air quality forecasts produced by chemical transport models often show significant error. This study uses a deep-learning approach over the Houston-Galveston-Brazoria (HGB) area to overcome this forecasting challenge for the DISCOVER-AQ period (September 2013). Two approaches were utilized: a deep neural network (DNN) using a Multi-Layer Perceptron (MLP), and a Restricted Boltzmann Machine (RBM). The proposed approaches analyze input data by identifying features abstracted from the previous layer in a stepwise manner. The approaches predicted hourly ozone and PM in September 2013 using several predictors from the prior three days, including wind fields, temperature, relative humidity, cloud fraction, and precipitation, along with PM, ozone, and NOx concentrations. Model-measurement comparisons for the available monitoring sites yielded Indexes of Agreement (IOA) of around 0.95 for both the DNN and the RBM. A standard artificial neural network (ANN) of similar architecture showed poorer performance (IOA = 0.90), clearly demonstrating the superiority of the deep approaches. Additionally, each network (both deep and standard) performed significantly better than a previous CMAQ study, which showed an IOA of less than 0.80. The most influential input variables were identified using their associated weights, which represent the sensitivity of ozone to the input parameters. The results indicate that deep learning approaches can achieve more accurate ozone forecasting and identify the important input variables for ozone prediction in metropolitan areas.
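
    The Index of Agreement quoted above is Willmott's skill score; a sketch with illustrative hourly ozone values:

      # Willmott's Index of Agreement: 1 - SSE / potential error; 1.0 is perfect.
      import numpy as np

      def index_of_agreement(o, p):
          o, p = np.asarray(o, float), np.asarray(p, float)
          o_bar = o.mean()
          num = np.sum((p - o) ** 2)
          den = np.sum((np.abs(p - o_bar) + np.abs(o - o_bar)) ** 2)
          return 1.0 - num / den

      obs = np.array([30.0, 42.0, 55.0, 61.0, 48.0])   # ppb, illustrative hourly ozone
      pred = np.array([28.0, 45.0, 52.0, 64.0, 50.0])
      print(round(index_of_agreement(obs, pred), 3))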

  6. DeepBase: annotation and discovery of microRNAs and other noncoding RNAs from deep-sequencing data.

    Science.gov (United States)

    Yang, Jian-Hua; Qu, Liang-Hu

    2012-01-01

    Recent advances in high-throughput deep-sequencing technology have produced large numbers of short and long RNA sequences and enabled the detection and profiling of known and novel microRNAs (miRNAs) and other noncoding RNAs (ncRNAs) at unprecedented sensitivity and depth. In this chapter, we describe the use of deepBase, a database that we have developed to integrate all public deep-sequencing data and to facilitate the comprehensive annotation and discovery of miRNAs and other ncRNAs from these data. deepBase provides an integrative, interactive, and versatile web graphical interface to evaluate miRBase-annotated miRNA genes and other known ncRNAs, explores the expression patterns of miRNAs and other ncRNAs, and discovers novel miRNAs and other ncRNAs from deep-sequencing data. deepBase also provides a deepView genome browser to comparatively analyze these data at multiple levels. deepBase is available at http://deepbase.sysu.edu.cn/.

  7. Deep Borehole Disposal as an Alternative Concept to Deep Geological Disposal

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jongyoul; Lee, Minsoo; Choi, Heuijoo; Kim, Kyungsu [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this paper, the general concept and key technologies for deep borehole disposal (DBD) of spent fuels or HLW, as an alternative to the mined geological disposal method, were reviewed. An analysis of the required distance between boreholes for the disposal of HLW was then carried out. Based on the results, a disposal area was calculated approximately and compared with that of mined geological disposal. These results will be used as input for analyses of the applicability of DBD in Korea. The disposal safety of the mined system has been demonstrated in underground research laboratories, and some advanced countries such as Finland and Sweden are implementing their disposal projects at the commercial stage. However, if spent fuels or high-level radioactive wastes can be disposed of at depths of 3-5 km in more stable rock formations, several advantages follow. Therefore, as an alternative to the mined deep geological disposal concept (DGD), very deep borehole disposal technology is under consideration in a number of countries for its outstanding safety and cost effectiveness. The key technologies, such as drilling of large-diameter boreholes, packaging and emplacement, sealing, and performance/safety analyses, and the challenges in developing a deep borehole disposal system, were analyzed. A very preliminary deep borehole disposal concept, including a disposal canister concept, was also developed according to the nuclear environment in Korea.

  8. Deep Borehole Disposal as an Alternative Concept to Deep Geological Disposal

    International Nuclear Information System (INIS)

    Lee, Jongyoul; Lee, Minsoo; Choi, Heuijoo; Kim, Kyungsu

    2016-01-01

    In this paper, the general concept and key technologies for deep borehole disposal (DBD) of spent fuels or HLW, as an alternative to the mined geological disposal method, were reviewed. An analysis of the required distance between boreholes for the disposal of HLW was then carried out. Based on the results, a disposal area was calculated approximately and compared with that of mined geological disposal. These results will be used as input for analyses of the applicability of DBD in Korea. The disposal safety of the mined system has been demonstrated in underground research laboratories, and some advanced countries such as Finland and Sweden are implementing their disposal projects at the commercial stage. However, if spent fuels or high-level radioactive wastes can be disposed of at depths of 3-5 km in more stable rock formations, several advantages follow. Therefore, as an alternative to the mined deep geological disposal concept (DGD), very deep borehole disposal technology is under consideration in a number of countries for its outstanding safety and cost effectiveness. The key technologies, such as drilling of large-diameter boreholes, packaging and emplacement, sealing, and performance/safety analyses, and the challenges in developing a deep borehole disposal system, were analyzed. A very preliminary deep borehole disposal concept, including a disposal canister concept, was also developed according to the nuclear environment in Korea.

  9. DeepGO: predicting protein functions from sequence and interactions using a deep ontology-aware classifier.

    Science.gov (United States)

    Kulmanov, Maxat; Khan, Mohammed Asif; Hoehndorf, Robert; Wren, Jonathan

    2018-02-15

    A large number of protein sequences are becoming available through the application of novel high-throughput sequencing technologies. Experimental functional characterization of these proteins is time-consuming and expensive, and is often done rigorously only for a few selected model organisms. Computational function prediction approaches have been suggested to fill this gap. The functions of proteins are classified using the Gene Ontology (GO), which contains over 40,000 classes. Additionally, proteins have multiple functions, making function prediction a large-scale, multi-class, multi-label problem. We have developed a novel method to predict protein function from sequence. We use deep learning to learn features from protein sequences as well as from a cross-species protein-protein interaction network. Our approach specifically outputs information in the structure of the GO and utilizes the dependencies between GO classes as background information to construct a deep learning model. We evaluate our method using the standards established by the Computational Assessment of Function Annotation (CAFA) and demonstrate a significant improvement over baseline methods such as BLAST, in particular for predicting cellular locations. Web server: http://deepgo.bio2vec.net; source code: https://github.com/bio-ontology-research-group/deepgo. Contact: robert.hoehndorf@kaust.edu.sa.
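
    One way to read "outputs information in the structure of the GO" is consistency with the ontology hierarchy: a protein annotated with a class is implicitly annotated with all of that class's ancestors. A sketch of propagating scores up a hypothetical miniature GO DAG so a parent class scores at least as high as its children:

      # Propagate class scores up a (tiny, invented) GO hierarchy so predictions
      # are consistent with the true-path rule: parent >= max(children).
      parents = {  # child -> parents (the GO is a DAG, so multiple parents occur)
          "GO:B": ["GO:A"],
          "GO:C": ["GO:A"],
          "GO:D": ["GO:B", "GO:C"],
      }

      def propagate(scores):
          """Raise each ancestor's score to at least the max of its descendants."""
          out = dict(scores)
          changed = True
          while changed:  # repeated sweeps until fixpoint; fine for small DAGs
              changed = False
              for child, pars in parents.items():
                  for p in pars:
                      if out.get(child, 0.0) > out.get(p, 0.0):
                          out[p] = out[child]
                          changed = True
          return out

      raw = {"GO:D": 0.9, "GO:B": 0.3, "GO:A": 0.1}
      print(propagate(raw))  # ancestors of GO:D are lifted to at least 0.9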

  10. Pole-to-pole biogeography of surface and deep marine bacterial communities

    Science.gov (United States)

    Ghiglione, Jean-François; Galand, Pierre E.; Pommier, Thomas; Pedrós-Alió, Carlos; Maas, Elizabeth W.; Bakker, Kevin; Bertilsson, Stefan; Kirchman, David L.; Lovejoy, Connie; Yager, Patricia L.; Murray, Alison E.

    2012-01-01

    The Antarctic and Arctic regions offer a unique opportunity to test the factors shaping the biogeography of marine microbial communities because these regions are geographically far apart yet share similar selection pressures. Here, we report a comprehensive comparison of bacterioplankton diversity between the polar oceans, using standardized methods for pyrosequencing the V6 region of the small subunit ribosomal (SSU) rRNA gene. Bacterial communities from lower-latitude oceans were included, providing a global perspective. A clear difference between Southern and Arctic Ocean surface communities was evident, with 78% of operational taxonomic units (OTUs) unique to the Southern Ocean and 70% unique to the Arctic Ocean. Although polar ocean bacterial communities were more similar to each other than to lower-latitude pelagic communities, in analyses across depths, seasons, and coastal vs. open waters the Southern and Arctic Ocean bacterioplankton communities consistently clustered separately from each other. Coastal surface Southern and Arctic Ocean communities were more dissimilar from their respective open-ocean communities. In contrast, deep-ocean communities differed less between the poles and lower-latitude deep waters and displayed different diversity patterns compared with the surface. In addition, estimated diversity (Chao1) for surface and deep communities did not correlate significantly with latitude or temperature. Our results suggest differences in environmental conditions at the poles and different selection mechanisms controlling surface and deep ocean community structure and diversity. Surface bacterioplankton may be subjected to more short-term, variable conditions, whereas deep communities appear to be structured by longer water-mass residence time and connectivity through ocean circulation. PMID:23045668

  11. Creating Deep Time Diaries: An English/Earth Science Unit for Middle School Students

    Science.gov (United States)

    Jordan, Vicky; Barnes, Mark

    2006-01-01

    Students love a good story. That is why incorporating literary fiction that parallels teaching goals and standards can be effective. In the interdisciplinary, thematic six-week unit described in this article, the authors use the fictional book "The Deep Time Diaries," by Gary Raham, to explore topics in paleontology, Earth science, and creative…

  12. Deep Mapping and Spatial Anthropology

    Directory of Open Access Journals (Sweden)

    Les Roberts

    2016-01-01

    This paper provides an introduction to the Humanities Special Issue on "Deep Mapping". It sets out the rationale for the collection and explores the broad-ranging nature of perspectives and practices that fall within the "undisciplined" interdisciplinary domain of spatial humanities. Sketching a cross-current of ideas that have begun to coalesce around the concept of "deep mapping", the paper argues that rather than attempting to outline a set of defining characteristics and "deep" cartographic features, a more instructive approach is to pay closer attention to the multivalent ways deep mapping is performatively put to work. Casting a critical and reflexive gaze over the developing discourse of deep mapping, it is argued that what deep mapping "is" cannot be reduced to the otherwise a-spatial and a-temporal fixity of the "deep map". In this respect, as an undisciplined survey of this increasingly expansive field of study and practice, the paper explores the ways in which deep mapping can engage broader discussion around questions of spatial anthropology.

  13. Deep Vein Thrombosis

    African Journals Online (AJOL)

    OWNER

    Deep Vein Thrombosis: Risk Factors and Prevention in Surgical Patients. Deep Vein ... preventable morbidity and mortality in hospitalized surgical patients. ... the elderly.3,4 It is very rare before the age ... depends on the risk level; therefore an .... but also in the post-operative period. ... is continuing uncertainty regarding.

  14. pDeep: Predicting MS/MS Spectra of Peptides with Deep Learning.

    Science.gov (United States)

    Zhou, Xie-Xuan; Zeng, Wen-Feng; Chi, Hao; Luo, Chunjie; Liu, Chao; Zhan, Jianfeng; He, Si-Min; Zhang, Zhifei

    2017-12-05

    In tandem mass spectrometry (MS/MS)-based proteomics, search engines rely on comparison between an experimental MS/MS spectrum and the theoretical spectra of the candidate peptides. Hence, accurate prediction of the theoretical spectra of peptides appears to be particularly important. Here, we present pDeep, a deep neural network-based model for the spectrum prediction of peptides. Using the bidirectional long short-term memory (BiLSTM), pDeep can predict higher-energy collisional dissociation, electron-transfer dissociation, and electron-transfer and higher-energy collision dissociation MS/MS spectra of peptides with >0.9 median Pearson correlation coefficients. Further, we showed that the intermediate layers of the neural network could reveal physicochemical properties of amino acids, for example, the similarities of fragmentation behaviors between amino acids. We also showed the potential of pDeep to distinguish extremely similar peptides (peptides that contain isobaric amino acids, for example, GG = N, AG = Q, or even I = L), which were very difficult to distinguish using traditional search engines.
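
    As a rough illustration of the architecture the abstract describes, the sketch below wires a bidirectional LSTM over per-residue feature vectors to per-position fragment-intensity outputs. All sizes (the 20-dim residue encoding, hidden width, and four ion-intensity outputs) are assumptions for the sketch, not the published pDeep configuration.

    import torch
    import torch.nn as nn

    class SpectrumBiLSTM(nn.Module):
        def __init__(self, aa_dim=20, hidden=256, n_ion_types=4):
            super().__init__()
            # The BiLSTM reads the peptide sequence in both directions.
            self.lstm = nn.LSTM(aa_dim, hidden, num_layers=2,
                                bidirectional=True, batch_first=True)
            # Per-position head: predicted intensities for fragment ion types.
            self.head = nn.Linear(2 * hidden, n_ion_types)

        def forward(self, peptide):               # (batch, length, aa_dim)
            out, _ = self.lstm(peptide)
            return torch.sigmoid(self.head(out))  # (batch, length, n_ion_types)

    model = SpectrumBiLSTM()
    print(model(torch.zeros(1, 12, 20)).shape)    # torch.Size([1, 12, 4])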

  15. Extraction of phenolic compounds from extra virgin olive oil by a natural deep eutectic solvent: Data on UV absorption of the extracts.

    Science.gov (United States)

    Paradiso, Vito Michele; Clemente, Antonia; Summo, Carmine; Pasqualone, Antonella; Caponio, Francesco

    2016-09-01

    This data article refers to the paper "Towards green analysis of virgin olive oil phenolic compounds: extraction by a natural deep eutectic solvent and direct spectrophotometric detection" [1]. A deep eutectic solvent (DES) based on lactic acid and glucose was used as green solvent for phenolic compounds. Eight standard phenolic compounds were solubilized in the DES. Then, a set of extra virgin olive oil (EVOO) samples (n=65) were submitted to liquid-liquid extraction by the DES. The standard solutions and the extracts were analyzed by UV spectrophotometry. This article reports the spectral data of both the standard solutions and the 65 extracts, as well as the total phenolic content of the corresponding oils, assessed by the Folin-Ciocalteu assay.

  16. Duplex scanning in the diagnosis of acute deep vein thrombosis of the lower extremity

    NARCIS (Netherlands)

    van Ramshorst, B.; Legemate, D. A.; Verzijlbergen, J. F.; Hoeneveld, H.; Eikelboom, B. C.; de Valois, J. C.; Meuwissen, O. J.

    1991-01-01

    In a prospective study the value of duplex scanning in the diagnosis of acute femoro-popliteal thrombosis was compared to conventional contrast venography (CV) as a gold standard. A total of 126 legs in 117 patients suspected of having deep vein thrombosis (DVT) or pulmonary embolism (PE) were

  17. 3D Deep Learning Angiography (3D-DLA) from C-arm Conebeam CT.

    Science.gov (United States)

    Montoya, J C; Li, Y; Strother, C; Chen, G-H

    2018-05-01

    Deep learning is a branch of artificial intelligence that has demonstrated unprecedented performance in many medical imaging applications. Our purpose was to develop a deep learning angiography method to generate 3D cerebral angiograms from a single contrast-enhanced C-arm conebeam CT acquisition in order to reduce image artifacts and radiation dose. A set of 105 3D rotational angiography examinations were randomly selected from an internal database. All were acquired using a clinical system in conjunction with a standard injection protocol. More than 150 million labeled voxels from 35 subjects were used for training. A deep convolutional neural network was trained to classify each image voxel into 3 tissue types (vasculature, bone, and soft tissue). The trained deep learning angiography model was then applied for tissue classification into a validation cohort of 8 subjects and a final testing cohort of the remaining 62 subjects. The final vasculature tissue class was used to generate the 3D deep learning angiography images. To quantify the generalization error of the trained model, we calculated the accuracy, sensitivity, precision, and Dice similarity coefficients for vasculature classification in relevant anatomy. The 3D deep learning angiography and clinical 3D rotational angiography images were subjected to a qualitative assessment for the presence of intersweep motion artifacts. Vasculature classification accuracy and 95% CI in the testing dataset were 98.7% (98.3%-99.1%). No residual signal from osseous structures was observed for any 3D deep learning angiography testing cases except for small regions in the otic capsule and nasal cavity, compared with 37% (23/62) of the 3D rotational angiographies. Deep learning angiography accurately recreated the vascular anatomy of the 3D rotational angiography reconstructions without a mask. Deep learning angiography reduced misregistration artifacts induced by intersweep motion, and it reduced radiation exposure.
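
    The voxel-wise three-class formulation above can be illustrated with a small patch-based 3D CNN; the patch size, channel counts, and depth below are assumptions for the sketch, not the network used in the study.

    import torch
    import torch.nn as nn

    class TissueNet3D(nn.Module):
        def __init__(self, n_classes=3):          # vasculature, bone, soft tissue
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2))
            self.classifier = nn.Linear(32 * 4 * 4 * 4, n_classes)

        def forward(self, patch):                 # patch: (batch, 1, 16, 16, 16)
            return self.classifier(self.features(patch).flatten(1))

    logits = TissueNet3D()(torch.zeros(2, 1, 16, 16, 16))
    print(logits.shape)                           # torch.Size([2, 3])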

  18. Catheter directed thrombolysis for deep vein thrombosis during the first trimester of pregnancy: two case reports

    International Nuclear Information System (INIS)

    Kim, Kum Rae; Park, Won Kyu; Kim, Jae Woon; Kwun, Woo Hyung; Suh, Bo Yang; Park, Kyeong Seok

    2008-01-01

    Anticoagulation with heparin has been the standard management therapy of deep vein thrombosis during pregnancy. Pregnancy is generally considered as a contraindication for thrombolysis. However, anticoagulation therapy alone does not protect the limbs from post-thrombotic syndrome and venous valve insufficiency. Catheter-directed thrombolysis, combined with angioplasty and stenting, can remove the thrombus and restore patency of the veins, resulting in prevention of post-thrombotic syndrome and valve insufficiency. We report successful catheter-directed thrombolysis and stenting in two early gestation patients with a deep vein thrombosis of the left lower extremity.

  19. Catheter directed thrombolysis for deep vein thrombosis during the first trimester of pregnancy: two case reports

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kum Rae; Park, Won Kyu; Kim, Jae Woon; Kwun, Woo Hyung; Suh, Bo Yang [College of Medicine, Yeungnam University, Daegu (Korea, Republic of); Park, Kyeong Seok [Yeungnam University, Medical Center, Daegu (Korea, Republic of)

    2008-02-15

    Anticoagulation with heparin has been the standard management therapy of deep vein thrombosis during pregnancy. Pregnancy is generally considered as a contraindication for thrombolysis. However, anticoagulation therapy alone does not protect the limbs from post-thrombotic syndrome and venous valve insufficiency. Catheter-directed thrombolysis, combined with angioplasty and stenting, can remove the thrombus and restore patency of the veins, resulting in prevention of post-thrombotic syndrome and valve insufficiency. We report successful catheter-directed thrombolysis and stenting in two early gestation patients with a deep vein thrombosis of the left lower extremity.

  20. Should the U.S. proceed to consider licensing deep geological disposal of high-level nuclear waste

    International Nuclear Information System (INIS)

    Curtiss, J.R.

    1993-01-01

    The United States, as well as other countries facing the question of how to handle high-level nuclear waste, has decided that the most appropriate means of disposal is in a deep geologic repository. In recent years, the Radioactive Waste Management Committee of the Nuclear Energy Agency has developed several position papers on the technical achievability of deep geologic disposal, thus demonstrating the serious consideration of deep geologic disposal in the international community. The Committee has not, as yet, formally endorsed disposal in a deep geologic repository as the preferred method of handling high-level nuclear waste. The United States, on the other hand, has studied the various methods of disposing of high-level nuclear waste and has determined that deep geologic disposal is the method that should be developed. The purpose of this paper is to present a review of the United States' decision to select deep geologic disposal as the preferred method of addressing the high-level waste problem. It presents a short history of the steps taken by the U.S. in determining which method to use, discusses the NRC's Waste Confidence Decision, and provides information on other issues in the U.S. program, such as reconsideration of the final disposal standard and the growing inventory of spent fuel in storage.

  1. Global astrometry with the space interferometry mission

    Science.gov (United States)

    Boden, A.; Unwin, S.; Shao, M.

    1997-01-01

    The prospects for global astrometric measurements with the space interferometry mission (SIM) are discussed. The SIM mission will perform astrometric measurements at the four-microarcsecond level on objects as faint as 20 mag, using the optical interferometry technique with a 10 m baseline. The SIM satellite will perform narrow-angle astrometry and global astrometry by means of an astrometric grid. The sensitivities of the SIM global astrometric performance and the grid accuracy versus instrumental parameters and sky coverage schemes are reported on. The problems of finding suitable astrometric grid objects to support microarcsecond astrometry, and related ground-based observation programs, are discussed.

  2. Joint Segmentation of Multiple Thoracic Organs in CT Images with Two Collaborative Deep Architectures.

    Science.gov (United States)

    Trullo, Roger; Petitjean, Caroline; Nie, Dong; Shen, Dinggang; Ruan, Su

    2017-09-01

    Computed Tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of Organs at Risk (OAR) in thoracic CT images is a necessary step before radiotherapy, for preventing irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is a challenge. In this paper, we focus on developing a novel framework for automatic delineation of OARs. Different from previous works in OAR segmentation where each organ is segmented separately, we propose two collaborative deep architectures to jointly segment all organs, including esophagus, heart, aorta and trachea. Since most of the organ borders are ill-defined, we believe spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network, which will be used in the second network, when each OAR is segmented in turn. Specifically, we use the first deep architecture, a deep SharpMask architecture, for providing an effective combination of low-level representations with deep high-level features, and then take into account the spatial relationships between organs by the use of Conditional Random Fields (CRF). Next, the second deep architecture is employed to refine the segmentation of each organ by using the maps obtained on the first deep architecture to learn anatomical constraints for guiding and refining the segmentations. Experimental results show superior performance on 30 CT scans, comparing with other state-of-the-art methods.

  3. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning.

    Science.gov (United States)

    Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang

    2017-11-13

    Prostate cancer (PCa) is a major cause of death, documented since ancient times, including in imaging of Egyptian Ptolemaic mummies. PCa detection is critical to personalized medicine, and its appearance varies considerably on MRI scans. 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained. A deep learning approach using a deep convolutional neural network (DCNN) and a non-deep-learning approach using SIFT image features with a bag-of-words (BoW) model, a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs) such as prostatitis or benign prostatic hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007); the AUC of the non-deep-learning method was 0.70 (95% CI 0.63-0.77). Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and a BoW model for fully automated differentiation of PCa patients from prostate BCs patients. Our deep learning method is extensible to imaging modalities such as MR imaging, CT, and PET of other organs.

  4. UKAEA's programme for the development of waste packages for deep disposal

    International Nuclear Information System (INIS)

    Graham, D.

    1996-01-01

    This paper describes UKAEA ILW, the development programme underpinning the proposed disposals, the case for cement as the immobilising matrix, and the waste package performance required by the Deep Repository. The paper also seeks to show that UKAEA is effectively managing its ILW liability through a well managed programme which convincingly represents best value whilst meeting appropriate nationally and internationally agreed standards for safety and environmental care. (author)

  5. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image-based techniques or multivariate approaches using high-level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high-level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  6. Deep learning for computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Garrett B. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Hodas, Nathan O. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Vishnu, Abhinav [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354

    2017-03-08

    The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of state-of-the-art non-neural-network models across disparate research topics, and deep neural network based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  7. Deep learning for computational chemistry.

    Science.gov (United States)

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure activity relationship, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of state-of-the-art non-neural-network models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  8. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    International Nuclear Information System (INIS)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-01-01

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed that has achieved, in actual field demonstrations, results unattainable to date with industry standard tools.
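
    The DeepNet algorithm itself is not reproduced in this record; as an analogue of "single-iteration" algebraic training, the sketch below fixes a random hidden layer and solves the output weights in one closed-form least-squares step (the extreme-learning-machine idea). The data are synthetic stand-ins for seismic attributes and well-log targets.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                     # "seismic attributes"
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)  # "well log" target

    W = rng.normal(size=(8, 64))                      # random hidden weights
    H = np.tanh(X @ W)                                # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # one algebraic solve

    print(np.mean((H @ beta - y) ** 2))               # training MSE after one step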

  9. What Really is Deep Learning Doing?

    OpenAIRE

    Xiong, Chuyu

    2017-01-01

    Deep learning has achieved a great success in many areas, from computer vision to natural language processing, to game playing, and much more. Yet, what deep learning is really doing is still an open question. There are a lot of works in this direction. For example, [5] tried to explain deep learning by group renormalization, and [6] tried to explain deep learning from the view of functional approximation. In order to address this very crucial question, here we see deep learning from perspect...

  10. Taoism and Deep Ecology.

    Science.gov (United States)

    Sylvan, Richard; Bennett, David

    1988-01-01

    Contrasted are the philosophies of Deep Ecology and ancient Chinese. Discusses the cosmology, morality, lifestyle, views of power, politics, and environmental philosophies of each. Concludes that Deep Ecology could gain much from Taoism. (CW)

  11. deepTools2: a next generation web server for deep-sequencing data analysis.

    Science.gov (United States)

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continue to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Is Multitask Deep Learning Practical for Pharma?

    Science.gov (United States)

    Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay

    2017-08-28

    Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
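
    A hedged sketch of the DeepChem workflow the abstract refers to; the calls below (dc.molnet.load_tox21, dc.models.MultitaskClassifier) exist in recent DeepChem releases, but exact signatures and defaults vary between versions, so treat this as illustrative rather than the authors' script.

    import deepchem as dc

    # Load a public multitask benchmark (Tox21: 12 assays, ECFP features).
    tasks, (train, valid, test), transformers = dc.molnet.load_tox21()
    model = dc.models.MultitaskClassifier(
        n_tasks=len(tasks),      # one output head per assay
        n_features=1024,         # circular-fingerprint input size
        layer_sizes=[1000])      # a single shared hidden layer
    model.fit(train, nb_epoch=10)
    metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
    print(model.evaluate(valid, [metric], transformers))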

  13. Smooth transition from sudden to adiabatic states in heavy-ion fusion reactions at deep-subbarrier incident energies

    International Nuclear Information System (INIS)

    Takatoshi, Ichikawa; Kouichi, Hagino; Akira, Iwamoto

    2011-01-01

    We propose a novel extension of the standard coupled-channel (CC) model in order to account for the steep falloff of fusion cross sections at deep-subbarrier incident energies. We introduce a damping factor in the coupling potential of the CC model, simulating a smooth transition from sudden to adiabatic states in deep-subbarrier fusion reactions. The CC model extended with the damping factor reproduces well not only the steep falloff of the fusion cross sections but also the saturation of their logarithmic derivatives at deep-subbarrier energies for the 16O + 208Pb, 64Ni + 64Ni, and 58Ni + 58Ni reactions. The important point in our model is that the transition takes place at a different place for each eigenchannel. We conclude that the smooth transition from the two-body potential to the adiabatic one-body potential is responsible for the steep falloff of the fusion cross section.
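
    Schematically, such an extension multiplies the bare coupling form factor F(r) by a damping function Phi(r) that switches the couplings off inside a damping radius R_d with diffuseness a_d. The Gaussian form below is illustrative only; the published parametrization may differ.

    % Schematic damped coupling (illustrative, not the paper's exact form):
    V_{\mathrm{cpl}}(r) \;\rightarrow\; F(r)\,\Phi(r), \qquad
    \Phi(r) =
    \begin{cases}
      \exp\!\left[-\dfrac{(r - R_d)^2}{2 a_d^2}\right], & r < R_d,\\[1ex]
      1, & r \ge R_d.
    \end{cases}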

  14. DeepGait: A Learning Deep Convolutional Representation for View-Invariant Gait Recognition Using Joint Bayesian

    Directory of Open Access Journals (Sweden)

    Chao Li

    2017-02-01

    Full Text Available Human gait, as a soft biometric, helps to recognize people by their walking. To further improve the recognition performance, we propose a novel video-sensor-based gait representation, DeepGait, using deep convolutional features, and introduce Joint Bayesian to model view variance. DeepGait is generated by using a pre-trained “very deep” network, “D-Net” (VGG-D), without any fine-tuning. For the non-view setting, DeepGait outperforms hand-crafted representations (e.g., Gait Energy Image, Frequency-Domain Feature and Gait Flow Image). Furthermore, for the cross-view setting, 256-dimensional DeepGait after PCA significantly outperforms the state-of-the-art methods on the OU-ISIR large population (OULP) dataset. The OULP dataset, which includes 4007 subjects, makes our results statistically reliable.

  15. Clinical evaluation of atlas and deep learning based automatic contouring for lung cancer.

    Science.gov (United States)

    Lustberg, Tim; van Soest, Johan; Gooding, Mark; Peressutti, Devis; Aljabar, Paul; van der Stoep, Judith; van Elmpt, Wouter; Dekker, Andre

    2018-02-01

    Contouring of organs at risk (OARs) is an important but time-consuming part of radiotherapy treatment planning. The aim of this study was to investigate whether software-generated contours save time when used as a starting point for manual OAR contouring for lung cancer patients. Twenty CT scans of stage I-III NSCLC patients were used to compare user-adjusted contours, starting from atlas-based and deep learning contours, against manual delineation. The lungs, esophagus, spinal cord, heart and mediastinum were contoured for this study. The time to perform the manual tasks was recorded. With a median time of 20 min for manual contouring, the total median time saved was 7.8 min when using atlas-based contouring and 10 min for deep learning contouring. Both atlas-based and deep learning adjustment times were significantly lower than the manual contouring time for all OARs, except for the left lung and esophagus in the atlas-based case. User adjustment of software-generated contours is a viable strategy to reduce the contouring time of OARs for lung radiotherapy while conforming to local clinical standards. In addition, deep learning contouring shows promising results compared to existing solutions. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Classification of ECG beats using deep belief network and active learning.

    Science.gov (United States)

    G, Sayantan; T, Kien P; V, Kadambari K

    2018-04-12

    A new semi-supervised approach based on deep learning and active learning for the classification of electrocardiogram (ECG) signals is proposed. The objective of the proposed work is to model a scientific method for the classification of cardiac irregularities using ECG beats. The model follows the Association for the Advancement of Medical Instrumentation (AAMI) standards and consists of three phases. In phase I, a feature representation of the ECG is learnt using a Gaussian-Bernoulli deep belief network, followed by linear support vector machine (SVM) training in the consecutive phase. This yields three deep models based on the AAMI-defined classes N, V, S, and F. In the last phase, a query generator is introduced to interact with the expert to label a few beats to improve accuracy and sensitivity. The proposed approach shows significant improvement in accuracy with minimal queries posed to the expert and fast online training, as tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database (SVDB). With 100 queries labeled by the expert in phase III, the method achieves an accuracy of 99.5% in "S" versus all classifications (SVEB) and 99.4% accuracy in "V" versus all classifications (VEB) on the MIT-BIH Arrhythmia Database. Similarly, accuracies of 97.5% for SVEB and 98.6% for VEB are achieved on the SVDB database. Graphical Abstract: Deep belief network augmented by active learning for efficient prediction of arrhythmia.
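
    The phase I/II pipeline (unsupervised feature learning followed by a linear SVM) can be sketched with off-the-shelf parts. scikit-learn ships only a Bernoulli RBM, so it stands in here for the Gaussian-Bernoulli deep belief network of the paper, and the phase III active-learning query step is omitted; the data shapes are made up.

    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline
    from sklearn.svm import LinearSVC

    X = np.random.rand(200, 180)      # 200 beats, 180 samples each, scaled to [0, 1]
    y = np.random.randint(0, 4, 200)  # stand-in AAMI labels: N, S, V, F

    clf = Pipeline([
        ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10)),
        ("svm", LinearSVC())])
    clf.fit(X, y)                     # RBM learns features, SVM classifies them
    print(clf.score(X, y))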

  17. Invited talk: Deep Learning Meets Physics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence with overwhelming success in industrial speech, text and vision benchmarks. Consequently it evolved into the central field of research for IT giants like Google, facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. In its core, Deep Learning discovers multiple levels of abstract representations of the input. The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence, therefore limits model selection. Major advances in Deep Learning can be related to avoiding the vanishing gradient like stacking, ReLUs, residual networks, highway networks, and LSTM. For Deep Learning, we suggested self-normalizing neural networks (SNNs) which automatica...

  18. Deep geothermics

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The hot dry rocks located at 3-4 km depth are low-permeability rocks carrying a large amount of heat. Extracting this heat usually requires artificial hydraulic fracturing of the rock to increase its permeability before water injection. Hot-dry-rock geothermics, or deep geothermics, is not yet a commercial sector but remains a field of scientific and technological research. The Soultz-sous-Forets site (Northern Alsace, France) is characterized by a 6 degrees per meter geothermal gradient and is used as a natural laboratory for deep geothermal and geological studies in the framework of a European research program. Two boreholes have been drilled to a depth of 3600 m in the highly-fractured granite massif beneath the site. The aim is to create a deep heat exchanger using only the natural fracturing for water transfer. A consortium of German, French and Italian industrial companies (Pfalzwerke, Badenwerk, EdF and Enel) has been created for more active participation in the pilot phase. (J.S.). 1 fig., 2 photos

  19. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
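
    The core idea can be shown in a few lines: treat each layer as one explicit Euler step of an ODE, Y_{j+1} = Y_j + h * sigma(Y_j K_j), where a small step size h keeps the discrete dynamics stable even for deep stacks. The widths, depth, and weight scaling below are illustrative, not the paper's experiments.

    import numpy as np

    rng = np.random.default_rng(1)
    depth, width, h = 50, 16, 0.1
    Y = rng.normal(size=(32, width))            # 32 examples, 16 features

    for _ in range(depth):
        K = rng.normal(size=(width, width)) / np.sqrt(width)
        Y = Y + h * np.tanh(Y @ K)              # residual (forward-Euler) update

    print(np.linalg.norm(Y, axis=1).mean())     # activations remain bounded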

  20. Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists.

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; De Filippo De Grazia, Michele; Zorzi, Marco

    2013-01-01

    Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and it can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programming parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards (graphics processing units) without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphic card can outperform a small high-performance computing cluster in terms of learning time and with no loss of learning quality. We therefore conclude that graphic card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior.

  1. Deep unsupervised learning on a desktop PC: A primer for cognitive scientists

    Directory of Open Access Journals (Sweden)

    Alberto eTestolin

    2013-05-01

    Full Text Available Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and it can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programming parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards (GPUs) without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphic card can outperform a small high-performance computing cluster in terms of learning time and with no loss of learning quality. We therefore conclude that graphic card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior.

  2. Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.

    Science.gov (United States)

    Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D

    2018-01-01

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
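
    A minimal text-CNN of the kind the abstract evaluates: embed token ids, convolve over the sequence, max-pool, classify. The vocabulary size, embedding width, and 12-way topography output below are assumptions for the sketch, not the study's configuration.

    import torch
    import torch.nn as nn

    class ReportCNN(nn.Module):
        def __init__(self, vocab=5000, emb=64, n_classes=12):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb)
            self.conv = nn.Conv1d(emb, 128, kernel_size=5, padding=2)
            self.fc = nn.Linear(128, n_classes)

        def forward(self, tokens):                      # tokens: (batch, length)
            x = self.emb(tokens).transpose(1, 2)        # -> (batch, emb, length)
            x = torch.relu(self.conv(x)).max(dim=2).values  # global max-pool
            return self.fc(x)

    logits = ReportCNN()(torch.zeros(4, 300, dtype=torch.long))
    print(logits.shape)                                 # torch.Size([4, 12])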

  3. Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; De Filippo De Grazia, Michele; Zorzi, Marco

    2013-01-01

    Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and it can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programing parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low cost graphic cards (graphic processor units) without any specific programing effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphic card can outperform a small high-performance computing cluster in terms of learning time and with no loss of learning quality. We therefore conclude that graphic card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior. PMID:23653617

  4. A deep convolutional neural network approach to single-particle recognition in cryo-electron microscopy.

    Science.gov (United States)

    Zhu, Yanan; Ouyang, Qi; Mao, Youdong

    2017-07-21

    Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits an improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates an improved performance, objectivity and accuracy. Application of this novel method is expected to free the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.

  5. Microbial metabolisms in a 2.5-km-deep ecosystem created by hydraulic fracturing in shales

    Energy Technology Data Exchange (ETDEWEB)

    Daly, Rebecca A.; Borton, Mikayla A.; Wilkins, Michael J.; Hoyt, David W.; Kountz, Duncan J.; Wolfe, Richard A.; Welch, Susan A.; Marcus, Daniel N.; Trexler, Ryan V.; MacRae, Jean D.; Krzycki, Joseph A.; Cole, David R.; Mouser, Paula J.; Wrighton, Kelly C.

    2016-09-05

    Hydraulic fracturing is the industry standard for extracting hydrocarbons from shale formations. Attention has been paid to the economic benefits and environmental impacts of this process, yet the biogeochemical changes induced in the deep subsurface are poorly understood. Recent single-gene investigations revealed that halotolerant microbial communities were enriched after hydraulic fracturing. Here the reconstruction of 31 unique genomes coupled to metabolite data from the Marcellus and Utica shales revealed that methylamine cycling supports methanogenesis in the deep biosphere. Fermentation of injected chemical additives also sustains long-term microbial persistence, while sulfide generation from thiosulfate represents a poorly recognized corrosion mechanism in shales. Extensive links between viruses and microbial hosts demonstrate active viral predation, which may contribute to the release of labile cellular constituents into the extracellular environment. Our analyses show that hydraulic fracturing provides the organismal and chemical inputs for colonization and persistence in the deep terrestrial subsurface.

  6. Deep Seawater Intrusion Enhanced by Geothermal Through Deep Faults in Xinzhou Geothermal Field in Guangdong, China

    Science.gov (United States)

    Lu, G.; Ou, H.; Hu, B. X.; Wang, X.

    2017-12-01

    This study investigates abnormal seawater intrusion from depth, riding an inland-directed deep groundwater flow that is enhanced by deep faults and geothermal processes. The study site, the Xinzhou geothermal field, lies 20 km from the coastline on southern China's Guangdong coast, part of China's long coastal geothermal belt. The geothermal water is salty, which had fueled speculation that it was retained ancient seawater. However, the perpetual "pumping" of the self-flowing outflow of geothermal waters may alter the deep underground flow in a way that favors large-scale, long-distance seawater intrusion. We studied the geochemical characteristics of the geothermal water and found it to be a mixture of seawater with rain water or pore water, with no indication of dilution. We also conducted numerical studies of the buoyancy-driven geothermal flow in the deep ground and find that, at depths of over a thousand meters, the hydraulic gradient favors inland-directed groundwater flow, allowing seawater to intrude inland for an unusually long distance of tens of kilometers in a granitic groundwater flow system. This work is a first step toward understanding the geo-environment of deep groundwater flow.

  7. Deep Learning in Neuroradiology.

    Science.gov (United States)

    Zaharchuk, G; Gong, E; Wintermark, M; Rubin, D; Langlotz, C P

    2018-02-01

    Deep learning is a form of machine learning using a convolutional neural network architecture that shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstration in computer vision applications to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons for this promise, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications, with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method. © 2018 by American Journal of Neuroradiology.

  8. Temperature impacts on deep-sea biodiversity.

    Science.gov (United States)

    Yasuhara, Moriaki; Danovaro, Roberto

    2016-05-01

    Temperature is considered to be a fundamental factor controlling biodiversity in marine ecosystems, but precisely what role temperature plays in modulating diversity is still not clear. The deep ocean, lacking light and in situ photosynthetic primary production, is an ideal model system to test the effects of temperature changes on biodiversity. Here we synthesize current knowledge on temperature-diversity relationships in the deep sea. Our results from both present and past deep-sea assemblages suggest that, when a wide range of deep-sea bottom-water temperatures is considered, a unimodal relationship exists between temperature and diversity (that may be right skewed). It is possible that temperature is important only when at relatively high and low levels but does not play a major role in the intermediate temperature range. Possible mechanisms explaining the temperature-biodiversity relationship include the physiological-tolerance hypothesis, the metabolic hypothesis, island biogeography theory, or some combination of these. The possible unimodal relationship discussed here may allow us to identify tipping points at which on-going global change and deep-water warming may increase or decrease deep-sea biodiversity. Predicted changes in deep-sea temperatures due to human-induced climate change may have more adverse consequences than expected considering the sensitivity of deep-sea ecosystems to temperature changes. © 2014 Cambridge Philosophical Society.

  9. Deep-Sea Mining With No Net Loss of Biodiversity—An Impossible Aim

    Directory of Open Access Journals (Sweden)

    Holly J. Niner

    2018-03-01

    Full Text Available Deep-sea mining is likely to result in biodiversity loss, and the significance of this to ecosystem function is not known. “Out of kind” biodiversity offsets, substituting one ecosystem type (e.g., coral reefs) for another (e.g., abyssal nodule fields), have been proposed to compensate for such loss. Here we consider a goal of no net loss (NNL) of biodiversity and explore the challenges of applying this aim to deep seabed mining, based on the associated mitigation hierarchy (avoid, minimize, remediate). We conclude that the industry cannot at present deliver an outcome of NNL. This results from the vulnerable nature of deep-sea environments to mining impacts, currently limited technological capacity to minimize harm, significant gaps in ecological knowledge, and uncertainties of recovery potential of deep-sea ecosystems. Avoidance and minimization of impacts are therefore the only presently viable means of reducing biodiversity losses from seabed mining. Because of these constraints, when and if deep-sea mining proceeds, it must be approached in a precautionary and step-wise manner to integrate new and developing knowledge. Each step should be subject to explicit environmental management goals, monitoring protocols, and binding standards to avoid serious environmental harm and minimize loss of biodiversity. “Out of kind” measures, an option for compensation currently proposed, cannot replicate biodiversity and ecosystem services lost through mining of the deep seabed and thus cannot be considered true offsets. The ecosystem functions provided by deep-sea biodiversity contribute to a wide range of provisioning services (e.g., the exploitation of fish, energy, pharmaceuticals, and cosmetics), play an essential role in regulatory services (e.g., carbon sequestration), and are important culturally. The level of “acceptable” biodiversity loss in the deep sea requires public, transparent, and well-informed consideration, as well as wide agreement.

  10. Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards

    Science.gov (United States)

    Remer, D. S.; Moore, R. C.

    1985-01-01

    Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. The cesium beam standards have been decreasing in cost by about 2.3% per year since 1966, and hydrogen masers by about 0.8% per year since 1978, relative to the National Aeronautics and Space Administration inflation index.
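
    Projecting a unit cost along such a constant-rate decline curve is simple compounding; the base cost in the example below is a placeholder, not a figure from the report.

    def projected_cost(base_cost, annual_decline, years):
        """Cost after `years` at a fixed fractional decline per year."""
        return base_cost * (1.0 - annual_decline) ** years

    # A cesium standard declining 2.3%/yr over 10 years (constant dollars):
    print(projected_cost(100_000.0, 0.023, 10))   # ~79,240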

  11. Extreme Longevity in Proteinaceous Deep-Sea Corals

    Energy Technology Data Exchange (ETDEWEB)

    Roark, E B; Guilderson, T P; Dunbar, R B; Fallon, S J; Mucciarone, D A

    2009-02-09

    Deep-sea corals are found on hard substrates on seamounts and continental margins world-wide at depths of 300 to ∼3000 meters. Deep-sea coral communities are hotspots of deep ocean biomass and biodiversity, providing critical habitat for fish and invertebrates. Newly applied radiocarbon age data from the deep water proteinaceous corals Gerardia sp. and Leiopathes glaberrima show that radial growth rates are as low as 4 to 35 μm per year and that individual colony longevities are on the order of thousands of years. The management and conservation of deep-sea coral communities is challenged by their commercial harvest for the jewelry trade and damage caused by deep water fishing practices. In light of their unusual longevity, a better understanding of deep-sea coral ecology and their interrelationships with associated benthic communities is needed to inform coherent international conservation strategies for these important deep-sea ecosystems.

  12. Gaia , an all sky astrometric and photometric survey

    International Nuclear Information System (INIS)

    Carrasco, J.M.

    2017-01-01

    The Gaia space mission includes a low resolution spectroscopic instrument to classify and parametrize the observed sources. Gaia is a full-sky unbiased survey down to about 20th magnitude. The scanning law yields a rather uniform coverage of the sky over the full mission, and the data reduction is global over the full mission. Both the sky coverage and the data reduction strategy ensure an unprecedented all-sky homogeneous spectrophotometric survey. This survey is certainly of interest for future ground-based and space projects (LSST, PLATO, EUCLID, ...). This work addresses the exploitation of the Gaia spectrophotometry as a standard photometry reference through a discussion of the sky coverage, the spectrophotometric precision, and the expected uncertainties of the synthetic photometry derived from the low resolution Gaia spectra and photometry.

  13. New optimized drill pipe size for deep-water, extended reach and ultra-deep drilling

    Energy Technology Data Exchange (ETDEWEB)

    Jellison, Michael J.; Delgado, Ivanni [Grant Prideco, Inc., Hoston, TX (United States); Falcao, Jose Luiz; Sato, Ademar Takashi [PETROBRAS, Rio de Janeiro, RJ (Brazil); Moura, Carlos Amsler [Comercial Perfuradora Delba Baiana Ltda., Rio de Janeiro, RJ (Brazil)

    2004-07-01

    A new drill pipe size, 5-7/8 in. OD, represents enabling technology for Extended Reach Drilling (ERD), deep water and other deep well applications. Most world-class ERD and deep water wells have traditionally been drilled with 5-1/2 in. drill pipe or a combination of 6-5/8 in. and 5-1/2 in. drill pipe. The hydraulic performance of 5-1/2 in. drill pipe can be a major limitation in substantial ERD and deep water wells resulting in poor cuttings removal, slower penetration rates, diminished control over well trajectory and more tendency for drill pipe sticking. The 5-7/8 in. drill pipe provides a significant improvement in hydraulic efficiency compared to 5-1/2 in. drill pipe and does not suffer from the disadvantages associated with use of 6-5/8 in. drill pipe. It represents a drill pipe assembly that is optimized dimensionally and on a performance basis for casing and bit programs that are commonly used for ERD, deep water and ultra-deep wells. The paper discusses the engineering philosophy behind 5-7/8 in. drill pipe, the design challenges associated with development of the product and reviews the features and capabilities of the second-generation double-shoulder connection. The paper provides drilling case history information on significant projects where the pipe has been used and details results achieved with the pipe. (author)

  14. New Geometric-distortion Solution for STIS FUV-MAMA

    Science.gov (United States)

    Sohn, S. Tony

    2018-04-01

    We derived a new geometric distortion solution for the STIS FUV-MAMA detector. To do this, positions of stars in 89 FUV-MAMA observations of NGC 6681 were compared to an astrometric standard catalog created using WFC3/UVIS imaging data to derive a fourth-order polynomial solution that transforms raw (x, y) positions to geometrically-corrected (x, y) positions. When compared to astrometric catalog positions, the FUV-MAMA position measurements based on the IDCTAB showed residuals with an RMS of ∼ 30 mas in each coordinate. Using the new IDCTAB, the RMS is reduced to ∼ 4 mas, or 0.16 FUV-MAMA pixels, in each coordinate. The updated IDCTAB is now being used in the HST STIS pipeline to process all STIS FUV-MAMA images.
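
    Fitting such a polynomial solution is a linear least-squares problem once the monomial design matrix is built. The sketch below uses a 2nd-order solution and synthetic positions for brevity (the report's solution is 4th order, and in practice the x and y solutions are fit separately).

    import numpy as np

    def design_matrix(x, y, order=2):
        # Columns are the monomials x^i * y^j with i + j <= order.
        return np.column_stack([x**i * y**j
                                for i in range(order + 1)
                                for j in range(order + 1 - i)])

    rng = np.random.default_rng(2)
    x, y = rng.uniform(0, 1024, 200), rng.uniform(0, 1024, 200)  # raw pixels
    x_ref = x + 1e-5 * x * y                  # toy distortion to recover
    A = design_matrix(x, y)
    cx, *_ = np.linalg.lstsq(A, x_ref, rcond=None)  # x-coordinate solution
    print(np.abs(A @ cx - x_ref).max())       # residual near machine precision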

  15. Deep Reinforcement Learning: An Overview

    OpenAIRE

    Li, Yuxi

    2017-01-01

    We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsuperv...

  16. Groundwater quality for irrigation of deep aquifer in southwestern zone of Bangladesh

    Directory of Open Access Journals (Sweden)

    Mirza A.T.M. Tanvir Rahman

    2012-07-01

    Full Text Available In coastal regions of Bangladesh, the sources of irrigation are rain, surface water and groundwater. Due to rainfall anomalies and saline contamination, it is important to identify deep groundwater that is eligible for irrigation. The main goal of the study was to identify deep groundwater suitable for irrigation. Satkhira Sadar Upazila, in the southwestern coastal zone of Bangladesh, was the study area, which was divided into North, Center and South zones. Twenty samples of groundwater were analyzed for salinity (0.65-4.79 ppt), sodium adsorption ratio (1.14-11.62), soluble sodium percentage (32.95-82.21), electrical conductivity (614-2082.11 μS/cm), magnesium adsorption ratio (21.96-26.97), Kelly's ratio (0.48-4.62), total hardness (150.76-313.33 mg/l), permeability index (68.02-94.16) and residual sodium bicarbonate (79.68-230.72 mg/l). Chemical constituents and values were compared with national and international standards. Northern deep groundwater has the highest salinity and chemical concentrations. Salinity and other chemical concentrations show a decreasing trend towards the south. Low chemical concentrations in the southern region indicate the best quality groundwater for irrigation.
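
    The sodium adsorption ratio quoted above has a standard closed form, SAR = Na / sqrt((Ca + Mg)/2), with all concentrations in milliequivalents per litre; the example values below are illustrative, not samples from the study.

    from math import sqrt

    def sar(na, ca, mg):
        """Sodium adsorption ratio; all inputs in meq/L."""
        return na / sqrt((ca + mg) / 2.0)

    print(round(sar(na=6.0, ca=4.0, mg=2.0), 2))  # 3.46 -> low sodium hazard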

  17. DeepMirTar: a deep-learning approach for predicting human miRNA targets.

    Science.gov (United States)

    Wen, Ming; Cong, Peisheng; Zhang, Zhimin; Lu, Hongmei; Li, Tonghua

    2018-06-01

    MicroRNAs (miRNAs) are small noncoding RNAs that function in RNA silencing and post-transcriptional regulation of gene expression by targeting messenger RNAs (mRNAs). Because the underlying mechanisms associated with miRNA binding to mRNA are not fully understood, a major challenge of miRNA studies involves the identification of miRNA-target sites on mRNA. In silico prediction of miRNA-target sites can expedite costly and time-consuming experimental work by providing the most promising miRNA-target-site candidates. In this study, we reported the design and implementation of DeepMirTar, a deep-learning-based approach for accurately predicting human miRNA targets at the site level. The predicted miRNA-target sites are those having canonical or non-canonical seed, and features, including high-level expert-designed, low-level expert-designed, and raw-data-level, were used to represent the miRNA-target site. Comparison with other state-of-the-art machine-learning methods and existing miRNA-target-prediction tools indicated that DeepMirTar improved overall predictive performance. DeepMirTar is freely available at https://github.com/Bjoux2/DeepMirTar_SdA. lith@tongji.edu.cn, hongmeilu@csu.edu.cn. Supplementary data are available at Bioinformatics online.

  18. Deep Unfolding for Topic Models.

    Science.gov (United States)

    Chien, Jen-Tzung; Lee, Chao-Hsi

    2018-02-01

    Deep unfolding provides an approach to integrate probabilistic generative models and deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops the unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, the unsupervised and supervised topic models are inferred via the variational inference algorithm where the model parameters are estimated by maximizing the lower bound of the logarithm of the marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and the tied model parameters across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to the exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with an increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.

  19. The value of MR angiography in the diagnosis of deep vein thrombosis of the lower limbs: comparative study with DSA

    International Nuclear Information System (INIS)

    Feng Min; Wang Shuzhi; Gu Jianping; Sun Jun; Mao Cunnan; Lu Lingquan; Yin Xindao

    2007-01-01

    Objective: To assess the clinical values of MR angiography (MRA) in the detection of deep vein thrombosis of the lower limbs. Methods: Two-dimensional time of flight (2D TOF) MRA was performed in thirty patients who were suspected of having deep vein thrombosis in the lower limbs. The findings of MRA were compared to that of digital subtraction angiography (DSA). Results: twenty-five cases showed deep vein thrombosis in the lower limbs, the MRA findings included venous filling defect (14 cases), occlusions and interruptions of veins (8 cases), venous recanalizations (3 cases), collateral veins (25 cases). Taking the results of DSA as a golden standard, MRA detected all of the affected cases with only one case as the false positive. Conclusion: 2D TOF MRA is a method of choice in the diagnosis of deep vein thrombosis of the lower limbs. (authors)

  20. Stability of deep features across CT scanners and field of view using a physical phantom

    Science.gov (United States)

    Paul, Rahul; Shafiq-ul-Hassan, Muhammad; Moros, Eduardo G.; Gillies, Robert J.; Hall, Lawrence O.; Goldgof, Dmitry B.

    2018-02-01

    Radiomics is the process of analyzing radiological images by extracting quantitative features for monitoring and diagnosis of various cancers. Analyzing images acquired from different medical centers is confounded by many choices in acquisition, reconstruction parameters and differences among device manufacturers. Consequently, scanning the same patient or phantom using various acquisition/reconstruction parameters as well as different scanners may result in different feature values. To further evaluate this issue, in this study, CT images from a physical radiomic phantom were used. Recent studies showed that some quantitative features were dependent on voxel size and that this dependency could be reduced or removed by an appropriate normalization factor. Deep features extracted from a convolutional neural network may also provide additional features for image analysis. Using a transfer learning approach, we obtained deep features from three convolutional neural networks pre-trained on color camera images. We then examined the dependency of deep features on image pixel size. We found that some deep features were pixel size dependent, and to remove this dependency we propose two effective normalization approaches. To analyze the effects of normalization, a threshold was applied based on the calculated standard deviation and the average distance from a best-fit horizontal line through the feature values across pixel sizes, before and after normalization. The inter- and intra-scanner dependency of deep features was also evaluated.
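
    One plausible way to remove a pixel-size dependency of the kind described above is to fit a power law, feature ≈ a · pixel_size^k, across scans of the same phantom and divide the fitted trend out. This is a hedged sketch with synthetic data; the paper's exact normalization factors are not reproduced here.

```python
# Sketch: fit and remove a power-law pixel-size dependency of a feature.
import numpy as np

pixel_size = np.array([0.5, 0.7, 0.9, 1.1, 1.3])   # mm, assumed values
noise = 1 + 0.02 * np.random.default_rng(1).standard_normal(5)
feature = 2.0 * pixel_size ** 1.8 * noise           # synthetic dependent feature

k, log_a = np.polyfit(np.log(pixel_size), np.log(feature), 1)  # slope = exponent
normalized = feature / (np.exp(log_a) * pixel_size ** k)

print(f"fitted exponent k = {k:.2f}")
print("normalized feature (≈1 if dependency removed):", np.round(normalized, 3))
```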

  1. Docker Containers for Deep Learning Experiments

    OpenAIRE

    Gerke, Paul K.

    2017-01-01

    Deep learning is a powerful tool to solve problems in the area of image analysis. The dominant compute platform for deep learning is Nvidia's proprietary CUDA, which can only be used together with Nvidia graphics cards. The nvidia-docker project allows exposing Nvidia graphics cards to docker containers and thus makes it possible to run deep learning experiments in docker containers. In our department, we use deep learning to solve problems in the area of medical image analysis and use docker ...

  2. Auxiliary Deep Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    2016-01-01

    Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improve the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge faster with better results. We show state-of-the-art performance within semi-supervised learning on MNIST (0.96%), SVHN (16.61%) and NORB (9.40%) datasets.

  3. Accelerating Deep Learning with Shrinkage and Recall

    OpenAIRE

    Zheng, Shuai; Vishnu, Abhinav; Ding, Chris

    2016-01-01

    Deep Learning is a very powerful machine learning model. Deep Learning trains a large number of parameters for multiple layers and is very slow when the data is large scale and the architecture size is large. Inspired by the shrinking technique used to accelerate computation of the Support Vector Machine (SVM) algorithm and the screening technique used in LASSO, we propose a shrinking Deep Learning with recall (sDLr) approach to speed up deep learning computation. We experiment shrinking Deep Lea...

  4. Isolated Deep Venous Thrombosis: Implications for 2-Point Compression Ultrasonography of the Lower Extremity.

    Science.gov (United States)

    Adhikari, Srikar; Zeger, Wes; Thom, Christopher; Fields, J Matthew

    2015-09-01

    Two-point compression ultrasonography focuses on the evaluation of common femoral and popliteal veins for complete compressibility. The presence of isolated thrombi in proximal veins other than the common femoral and popliteal veins should prompt modification of 2-point compression technique. The objective of this study is to determine the prevalence and distribution of deep venous thrombi isolated to lower-extremity veins other than the common femoral and popliteal veins in emergency department (ED) patients with clinically suspected deep venous thrombosis. This was a retrospective study of all adult ED patients who received a lower-extremity venous duplex ultrasonographic examination for evaluation of deep venous thrombosis during a 6-year period. The ultrasonographic protocol included B-mode, color-flow, and spectral Doppler scanning of the common femoral, femoral, deep femoral, popliteal, and calf veins. Deep venous thrombosis was detected in 362 of 2,451 patients (14.7%; 95% confidence interval [CI] 13.3% to 16.1%). Thrombus confined to the common femoral vein alone was found in 5 of 362 cases (1.4%; 95% CI 0.2% to 2.6%). Isolated femoral vein thrombus was identified in 20 of 362 patients (5.5%; 95% CI 3.2% to 7.9%). Isolated deep femoral vein thrombus was found in 3 of 362 cases (0.8%; 95% CI -0.1% to 1.8%). Thrombus in the popliteal vein alone was identified in 53 of 362 cases (14.6%; 95% CI 11% to 18.2%). In our study, 6.3% of ED patients with suspected deep venous thrombosis had isolated thrombi in proximal veins other than common femoral and popliteal veins. Our study results support the addition of femoral and deep femoral vein evaluation to standard compression ultrasonography of the common femoral and popliteal vein, assuming that this does not have a deleterious effect on specificity. Copyright © 2014 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
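
    The confidence intervals quoted above are normal-approximation proportion CIs; the short check below reproduces intervals of this type from the reported counts (small rounding differences from the quoted values are expected, since the abstract rounds the point estimates first).

```python
import math

def proportion_ci(k, n, z=1.96):
    """Normal-approximation 95% CI for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

for label, k, n in [("any DVT", 362, 2451),
                    ("isolated femoral vein", 20, 362),
                    ("popliteal vein alone", 53, 362)]:
    p, lo, hi = proportion_ci(k, n)
    print(f"{label}: {100*p:.1f}% (95% CI {100*lo:.1f}% to {100*hi:.1f}%)")
```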

  5. Consolidated Deep Actor Critic Networks (DRAFT)

    NARCIS (Netherlands)

    Van der Laan, T.A.

    2015-01-01

    The works [Mnih et al. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.] and [Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.] have demonstrated the power of combining deep neural networks with

  6. ChemNet: A Transferable and Generalizable Deep Neural Network for Small-Molecule Property Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Garrett B.; Siegel, Charles M.; Vishnu, Abhinav; Hodas, Nathan O.

    2017-12-08

    With access to large datasets, deep neural networks through representation learning have been able to identify patterns from raw data, achieving human-level accuracy in image and speech recognition tasks. However, in chemistry, availability of large standardized and labelled datasets is scarce, and with a multitude of chemical properties of interest, chemical data is inherently small and fragmented. In this work, we explore transfer learning techniques in conjunction with the existing Chemception CNN model, to create a transferable and generalizable deep neural network for small-molecule property prediction. Our latest model, ChemNet, learns in a semi-supervised manner from inexpensive labels computed from the ChEMBL database. When fine-tuned to the Tox21, HIV and FreeSolv datasets, which are 3 separate chemical tasks that ChemNet was not originally trained on, we demonstrate that ChemNet exceeds the performance of existing Chemception models and contemporary MLP models that train on molecular fingerprints, and matches the performance of the ConvGraph algorithm, the current state of the art. Furthermore, as ChemNet has been pre-trained on a large diverse chemical database, it can be used as a universal “plug-and-play” deep neural network, which accelerates the deployment of deep neural networks for the prediction of novel small-molecule chemical properties.

  7. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed deep galaxy architecture consists of 8 layers, with one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...
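
    A rough PyTorch sketch of an architecture in this spirit is shown below: one main convolutional layer with 96 filters for feature extraction followed by two fully connected layers for the 3-way classification. The input size, kernel size and hidden width are assumptions; the paper's exact 8-layer layout may differ.

```python
import torch
import torch.nn as nn

class DeepGalaxyNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=7, stride=2, padding=3),  # 96 filters
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(96 * 32 * 32, 256),    # assumes 128x128 input images
            nn.ReLU(inplace=True),
            nn.Linear(256, n_classes),       # Elliptical / Spiral / Irregular
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = DeepGalaxyNet()(torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 3])
```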

  8. Deep Feature Consistent Variational Autoencoder

    OpenAIRE

    Hou, Xianxu; Shen, Linlin; Sun, Ke; Qiu, Guoping

    2016-01-01

    We present a novel method for constructing Variational Autoencoder (VAE). Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE, which ensures the VAE's output to preserve the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning works such as style transfer, we employ a pre-trained deep convolutional neural net...
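
    The core idea above, comparing input and reconstruction in the feature space of a fixed pre-trained CNN rather than pixel space, can be sketched as below. The choice of VGG-19 and the layer cut-off are assumptions for illustration (the call downloads ImageNet weights on first use); the paper's exact network and layers may differ.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

feature_net = vgg19(weights="IMAGENET1K_V1").features[:16].eval()
for p in feature_net.parameters():
    p.requires_grad_(False)        # the feature extractor stays fixed

def feature_consistency_loss(x, x_recon):
    """MSE between deep features of the input and the VAE reconstruction."""
    return F.mse_loss(feature_net(x_recon), feature_net(x))

x = torch.randn(2, 3, 224, 224)        # stand-in for input images
x_recon = torch.randn(2, 3, 224, 224)  # stand-in for VAE output
print(feature_consistency_loss(x, x_recon).item())
```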

  9. Comet P/2010 V1 as a Natural Disintegration Laboratory

    Science.gov (United States)

    Jewitt, David; Weaver, Harold A.; Mutchler, Maximilian J.; Agarwal, Jessica; Meech, Karen Jean; Li, Jing; Kleyna, Jan; Ishiguro, Masateru; Wainscoat, Richard J.; Hui, Man-To

    2016-10-01

    Discovered in outburst in 2010, Jupiter-family comet P/2010 V1 (Ikeya-Murakami) was found to be split in observations at the end of 2015. We used the Hubble Space Telescope to obtain deep images of P/2010 V1 at high angular resolution in the 2016 January to March period. The resulting data, by far the best yet obtained for any split or disrupting comet, show the astrometric, photometric and morphological evolution of about 30 fragments. We will present the first results for the velocity dispersion, photometric distribution and variability, and discuss the measurements in terms of models for the breakup.

  10. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification.

    Science.gov (United States)

    Younghak Shin; Balasingham, Ilangko

    2017-07-01

    Colonoscopy is a standard method for screening polyps by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework. We aim to compare two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and a support vector machine (SVM) is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. From the experimental results, we show that the CNN based deep learning framework achieves better classification performance than the hand-crafted feature based method. It achieves over 90% classification accuracy, sensitivity, specificity and precision.
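
    The hand-crafted-feature branch described above can be sketched as color histogram features fed to an SVM. The data here are toy stand-ins (the paper combines shape and color descriptors on real colonoscopy images), so the printed score is meaningless beyond showing the pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def color_histogram(image, bins=8):
    """Concatenated per-channel intensity histograms as a feature vector."""
    return np.concatenate([np.histogram(image[..., c], bins=bins,
                                        range=(0, 1), density=True)[0]
                           for c in range(3)])

rng = np.random.default_rng(0)
images = rng.random((60, 32, 32, 3))     # toy "polyp patch" images
labels = rng.integers(0, 2, 60)          # polyp vs non-polyp (toy labels)

X = np.stack([color_histogram(im) for im in images])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```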

  11. Efficacy of Laser Debridement With Autologous Split-Thickness Skin Grafting in Promoting Improved Wound Healing of Deep Cutaneous Sulfur Mustard Burns

    National Research Council Canada - National Science Library

    Graham, John

    2002-01-01

    ... (1) full thickness CO2 laser debridement followed by skin grafting, (2) full thickness sharp surgical tangential excision followed by skin grafting, the 'Gold Standard' used in deep thermal burns management, (3...

  12. Standard high-reliability integrated circuit logic packaging. [for deep space tracking stations

    Science.gov (United States)

    Slaughter, D. W.

    1977-01-01

    A family of standard, high-reliability hardware used for packaging digital integrated circuits is described. The design transition from early prototypes to production hardware is covered and future plans are discussed. Interconnection techniques are described, as well as connectors and related hardware available at both the microcircuit packaging and main-frame levels. General applications information is also provided.

  13. Deep Dyspareunia in Endometriosis: A Proposed Framework Based on Pain Mechanisms and Genito-Pelvic Pain Penetration Disorder.

    Science.gov (United States)

    Yong, Paul J

    2017-10-01

    Endometriosis is a common chronic disease affecting 1 in 10 women of reproductive age, with half of women with endometriosis experiencing deep dyspareunia. A review of research studies on endometriosis indicates a need for a validated question or questionnaire for deep dyspareunia. Moreover, placebo-controlled randomized trials have yet to demonstrate a clear benefit for traditional treatments of endometriosis for the outcome of deep dyspareunia. The reason some patients might not respond to traditional treatments is the multifactorial nature of deep dyspareunia in endometriosis, which can include comorbid conditions (eg, interstitial cystitis and bladder pain syndrome) and central sensitization underlying genito-pelvic pain penetration disorder. In general, there is a lack of a framework that integrates these multifactorial causes to provide a standardized approach to deep dyspareunia in endometriosis. To propose a clinical framework for deep dyspareunia based on a synthesis of pain mechanisms with genito-pelvic pain penetration disorder according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. Narrative review after literature search with the terms (endometriosis AND dyspareunia) OR (dyspareunia AND deep) and after analysis of placebo-controlled randomized trials. Deep dyspareunia presence or absence or deep dyspareunia severity on a numeric rating scale or visual analog scale. Four types of deep dyspareunia are proposed in women with endometriosis: type I that is directly due to endometriosis; type II that is related to a comorbid condition; type III in which genito-pelvic pain penetration disorder is primary; and type IV that is secondary to a combination of types I to III. Four types of deep dyspareunia in endometriosis are proposed, which can be used as a framework in research studies and in clinical practice. Research trials could phenotype or stratify patients by each type. The framework also could give rise to more personalized

  14. How Stressful Is "Deep Bubbling"?

    Science.gov (United States)

    Tyrmi, Jaana; Laukkanen, Anne-Maria

    2017-03-01

    Water resistance therapy by phonating through a tube into the water is used to treat dysphonia. Deep submersion (≥10 cm in water, "deep bubbling") is used for hypofunctional voice disorders. Using it with caution is recommended to avoid vocal overloading. This experimental study aimed to investigate how strenuous "deep bubbling" is. Fourteen subjects, half of them with voice training, repeated the syllable [pa:] in comfortable speaking pitch and loudness, loudly, and in strained voice. Thereafter, they phonated a vowel-like sound both in comfortable loudness and loudly into a glass resonance tube immersed 10 cm into the water. Oral pressure, contact quotient (CQ, calculated from electroglottographic signal), and sound pressure level were studied. The peak oral pressure P(oral) during [p] and shuttering of the outer end of the tube was measured to estimate the subglottic pressure P(sub) and the mean P(oral) during vowel portions to enable calculation of transglottic pressure P(trans). Sensations during phonation were reported with an open-ended interview. P(sub) and P(oral) were higher in "deep bubbling" and P(trans) lower than in loud syllable phonation, but the CQ did not differ significantly. Similar results were obtained for the comparison between loud "deep bubbling" and strained phonation, although P(sub) did not differ significantly. Most of the subjects reported "deep bubbling" to be stressful only for respiratory and lip muscles. No big differences were found between trained and untrained subjects. The CQ values suggest that "deep bubbling" may increase vocal fold loading. Further studies should address impact stress during water resistance exercises. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
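
    The pressure estimates described above follow a common convention in voice research, which can be summarized as: subglottic pressure is approximated by the peak oral pressure during the [p] occlusion (or tube shuttering), and transglottic pressure is the drop across the glottis.

```latex
\[
  P_{\mathrm{sub}} \;\approx\; \max P_{\mathrm{oral}}^{[p]},
  \qquad
  P_{\mathrm{trans}} \;=\; P_{\mathrm{sub}} - \bar{P}_{\mathrm{oral}}^{\,\mathrm{vowel}}
\]
```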

  15. Evaluation of the DeepWind concept

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Borg, Michael; Gonzales Seabra, Luis Alberto

    The report describes the DeepWind 5 MW conceptual design as a baseline for results obtained in the scientific and technical work packages of the DeepWind project. A comparison of DeepWind with existing VAWTs and paper projects is carried out, together with an evaluation of the concept in terms of cost

  16. Simulator Studies of the Deep Stall

    Science.gov (United States)

    White, Maurice D.; Cooper, George E.

    1965-01-01

    Simulator studies of the deep-stall problem encountered with modern airplanes are discussed. The results indicate that the basic deep-stall tendencies produced by aerodynamic characteristics are augmented by operational considerations. Because of control difficulties to be anticipated in the deep stall, it is desirable that adequate safeguards be provided against inadvertent penetrations.

  17. Deep Learning

    DEFF Research Database (Denmark)

    Jensen, Morten Bornø; Bahnsen, Chris Holmberg; Nasrollahi, Kamal

    2018-01-01

    Over the past 10 years, artificial neural networks have gone from being a dusty, cast-off technology to playing a leading role in the development of artificial intelligence. This phenomenon is called deep learning and is inspired by the structure of the brain.

  18. Interim radiological safety standards and evaluation procedures for subseabed high-level waste disposal

    International Nuclear Information System (INIS)

    Klett, R.D.

    1997-06-01

    The Seabed Disposal Project (SDP) was evaluating the technical feasibility of high-level nuclear waste disposal in deep ocean sediments. Working standards were needed for risk assessments, evaluation of alternative designs, sensitivity studies, and conceptual design guidelines. This report completes a three-part program to develop radiological standards for the feasibility phase of the SDP. The characteristics of subseabed disposal and how they affect the selection of standards are discussed. General radiological protection standards are reviewed, along with some new methods, and a systematic approach to developing standards is presented. The selected interim radiological standards for the SDP and the reasons for their selection are given. These standards have no legal or regulatory status and will be replaced or modified by regulatory agencies if subseabed disposal is implemented. 56 refs., 29 figs., 15 tabs

  19. Deep learning for biomarker regression: application to osteoporosis and emphysema on chest CT scans

    Science.gov (United States)

    González, Germán.; Washko, George R.; San José Estépar, Raúl

    2018-03-01

    Introduction: Biomarker computation using deep learning often relies on a two-step process, where the deep learning algorithm segments the region of interest and then the biomarker is measured. We propose an alternative paradigm, where the biomarker is estimated directly using a regression network. We showcase this image-to-biomarker paradigm using two biomarkers: the estimation of bone mineral density (BMD) and the estimation of lung percentage of emphysema from CT scans. Materials and methods: We use a large database of 9,925 CT scans to train, validate and test the network, for which reference standard BMD and percentage emphysema have already been computed. First, the 3D dataset is reduced to a set of canonical 2D slices where the organ of interest is visible (either spine for BMD or lungs for emphysema). This data reduction is performed using an automatic object detector. Second, the regression neural network is composed of three convolutional layers, followed by a fully connected and an output layer. The network is optimized using a momentum optimizer with an exponential decay rate, using the root mean squared error as the cost function. Results: The Pearson correlation coefficients obtained against the reference standards are r = 0.940 (p < 0.00001) and r = 0.976 (p < 0.00001) for BMD and percentage emphysema respectively. Conclusions: The deep-learning regression architecture can learn biomarkers from images directly, without indicating the structures of interest. This approach simplifies the development of biomarker extraction algorithms. The proposed data reduction based on object detectors conveys enough information to compute the biomarkers of interest.
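
    A hedged PyTorch sketch of a regression network of the kind described above follows: three convolutional layers, a fully connected layer and a scalar output, trained with a momentum optimizer, exponential learning-rate decay and an RMSE loss. Channel counts, kernel sizes, input size and hyperparameters are assumptions, not the paper's values.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, 1),                       # regressed biomarker (e.g. BMD)
)

opt = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)

def rmse(pred, target):
    return torch.sqrt(nn.functional.mse_loss(pred, target))

x = torch.randn(8, 1, 64, 64)    # stand-in for canonical 2D slices
y = torch.randn(8, 1)            # stand-in for reference-standard biomarker
for _ in range(3):               # toy training steps
    opt.zero_grad()
    loss = rmse(net(x), y)
    loss.backward()
    opt.step()
    sched.step()
print(loss.item())
```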

  20. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.

    Science.gov (United States)

    Wachinger, Christian; Reuter, Martin; Klein, Tassilo

    2018-04-15

    We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we do not only predict the center voxel of the patch but also neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Deep Learning and Its Applications in Biomedicine.

    Science.gov (United States)

    Cao, Chensi; Liu, Feng; Tan, Hai; Song, Deshou; Shu, Wenjie; Li, Weizhong; Zhou, Yiming; Bo, Xiaochen; Xie, Zhi

    2018-02-01

    Advances in biological and medical technologies have been providing us with explosive volumes of biological and physiological data, such as medical images, electroencephalography, and genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives on future directions in the field of deep learning. Copyright © 2018. Production and hosting by Elsevier B.V.

  2. Deep Water Acoustics

    Science.gov (United States)

    2016-06-28

    Collaborators on the Deep Water project who participate in the NPAL Workshops include Art Baggeroer (MIT), J. Beron-Vera (UMiami), M. Brown (UMiami), T... Kathleen E. Wage. The North Pacific Acoustic Laboratory deep-water acoustic propagation experiments in the Philippine Sea. J. Acoust. Soc. Am., 134(4)... [Table caption: an estimate of the angle α during PhilSea09, made from ADCP measurements at the site of the DVLA.]

  3. Deep learning architectures for multi-label classification of intelligent health risk prediction.

    Science.gov (United States)

    Maxwell, Andrew; Li, Runzhi; Yang, Bei; Weng, Heng; Ou, Aihua; Hong, Huixiao; Zhou, Zhaoxian; Gong, Ping; Zhang, Chaoyang

    2017-12-28

    Multi-label classification of data remains a challenging problem. Because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. For medical data, patients could have symptoms of multiple different diseases at the same time and it is important to develop tools that help to identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases. Physical examination records of 110,300 anonymous patients were used to predict diabetes, hypertension, fatty liver, a combination of these three chronic diseases, and the absence of disease (8 classes in total). The dataset was split into training (90%) and testing (10%) sub-datasets. Ten-fold cross validation was used to evaluate prediction accuracy with metrics such as precision, recall, and F-score. Deep Learning (DL) architectures were compared with standard and state-of-the-art multi-label classification methods. Preliminary results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produced accuracy that was comparable to that of common methods such as Support Vector Machines. We have implemented DNNs to handle both problem transformation and algorithm adaptation type multi-label methods and compare both to see which is preferable. Deep Learning architectures have the potential of inferring more information about the patterns of physical examination data than common classification methods. The advanced techniques of Deep Learning can be used to identify the significance of different features from physical examination data as well as to learn the contributions of each feature that impact a patient's risk for chronic diseases. However, accurate prediction of chronic disease risks remains a challenging
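
    The "algorithm adaptation" style of multi-label DNN mentioned above can be sketched as one sigmoid output per disease trained with a binary cross-entropy loss, so non-mutually-exclusive labels can co-occur. Layer sizes and the feature count below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

n_features, n_labels = 62, 3   # assumed: exam measurements; diabetes/hypertension/fatty liver
net = nn.Sequential(
    nn.Linear(n_features, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, n_labels),   # logits, one per disease
)
criterion = nn.BCEWithLogitsLoss()

x = torch.randn(16, n_features)                   # toy physical-exam records
y = torch.randint(0, 2, (16, n_labels)).float()   # multi-hot disease labels
loss = criterion(net(x), y)
loss.backward()
print(loss.item())
```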

  4. Compendium of Single Event Effects (SEE) Test Results for COTS and Standard Electronics for Low Earth Orbit and Deep Space Applications

    Science.gov (United States)

    Reddell, Brandon; Bailey, Chuck; Nguyen, Kyson; O'Neill, Patrick; Gaza, Razvan; Patel, Chirag; Cooper, Jaime; Kalb, Theodore

    2017-01-01

    We present the results of SEE testing with high energy protons and with low and high energy heavy ions. This paper summarizes test results for components considered for Low Earth Orbit and Deep Space applications.

  5. Overview of deep learning in medical imaging.

    Science.gov (United States)

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a

  6. Measurement method of the distribution coefficient on the sorption process. Basic procedure of the method relevant to the barrier materials used for the deep geological disposal: 2006

    International Nuclear Information System (INIS)

    2006-08-01

    This standard was approved by the Atomic Energy Society of Japan after deliberation by the Subcommittee on Radioactive Waste Management, the Nuclear Cycle Technical Committee and the Standard Committee, and after obtaining about 600 comments from about 30 specialists. This document defines the basic measurement procedure for the distribution coefficient (hereafter referred to as Kd) to judge the reliability, reproducibility and applications, and to provide the requirements for inter-comparison of Kd for a variety of barrier materials used for deep geological disposal of radioactive wastes. The basic measurement procedure for Kd is standardized following the preceding standard, 'Measurement Method of the Distribution Coefficient on the Sorption Process - Basic Procedure of Batch Method Relevant to the Barrier Materials Used for the Shallow Land Disposal: 2002' (hereafter referred to as the Standard for the Shallow Land Disposal), and considering progress since its publication and issues specific to deep geological disposal. (J.P.N.)
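
    For orientation, the batch-method distribution coefficient commonly used in sorption studies is Kd = ((C0 − Ce) / Ce) · (V / m). This is the generic textbook form, not a reproduction of the AESJ standard's full procedure (pre-equilibration, blanks and solid/liquid ratios are omitted).

```python
# Hedged sketch of the common batch-method distribution coefficient.
def distribution_coefficient(c0, ce, volume_l, mass_kg):
    """Kd in L/kg from initial (c0) and equilibrium (ce) concentrations."""
    return (c0 - ce) / ce * (volume_l / mass_kg)

# Example with assumed values: 90% of the tracer sorbed at a 10 L/kg ratio.
print(distribution_coefficient(c0=1.0, ce=0.1, volume_l=0.1, mass_kg=0.01))  # 90.0 L/kg
```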

  7. Deep-etch x-ray lithography at the ALS: First results

    Energy Technology Data Exchange (ETDEWEB)

    Malek, C.K.; Jackson, K.H. [Ernest Orlando Lawrence Berkeley National Lab., CA (United States); Brennen, R.A. [Jet Propulsion Lab., Pasadena, CA (United States)] [and others

    1997-04-01

    The fabrication of high-aspect-ratio and three-dimensional (3D) microstructures is of increasing interest in a multitude of applications in fields such as micromechanics, optics, and interconnect technology. Techniques and processes that enable lithography in thick materials differ from the planar technologies used in standard integrated circuit processing. Deep x-ray lithography permits extremely precise and deep proximity printing of a given pattern from a mask into a very thick resist. It requires a source of hard, intense, and well collimated x-ray radiation, as is provided by a synchrotron radiation source. The thick resist microstructures so produced can be used as templates from which ultrahigh-precision parts with high aspect ratios can be mass-produced out of a large variety of materials (metals, plastics, ceramics). This whole series of techniques and processes has been historically referred to as "LIGA," from the German acronym for lithography, electroforming (Galvanoformung), and plastic molding (Abformung); the first development of the basic LIGA process was performed at the Nuclear Research Center at Karlsruhe in Germany.

  8. The use of Hyalomatrix PA in the treatment of deep partial-thickness burns.

    Science.gov (United States)

    Gravante, Gianpiero; Delogu, Daniela; Giordan, Nicola; Morano, Giuseppina; Montone, Antonio; Esposito, Gaetano

    2007-01-01

    Since 2001, Hyalomatrix PA (Fidia Advanced Biopolymers, Abano Terme, Italy) has been used in our center on pediatric burned patients as a temporary dermal substitute to cover deep partial-thickness burns after dermabrasion. This "bridge" treatment was adopted to remove necrotic debris (dermabrasion) and to stimulate regeneration in a humid and protected environment (Hyalomatrix PA). We present results obtained with this approach. On the third to fifth day after admission, dermabrasion was practiced on deep burned areas, which were covered with Hyalomatrix PA. Change of dressings was performed every 7 days. On day 21, those areas still without signs of recovery were removed with classic escharectomy and covered with thin skin grafts. We treated 300 patients. Sixty-one percent needed only one dermabrasion treatment, 22.3% (67 patients) more than one, and 16.7% (50 patients) the classic escharectomy. A total of 83% of patients healed within 21 days. Our study suggests that the combination of dermabrasion with a temporary dermal substitute could be a good and feasible approach for treatment of deep partial-thickness burns. Prospective randomized studies are now necessary to compare our protocol with the gold standard treatment of topical dressings.

  9. WFIRST: Science from Deep Field Surveys

    Science.gov (United States)

    Koekemoer, Anton; Foley, Ryan; WFIRST Deep Field Working Group

    2018-01-01

    WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would for example yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed on locations in the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and present here a summary of the various properties of different locations in the sky that may be considered for future deep fields with WFIRST.

  10. Search for nonstandard neutrino interactions with IceCube DeepCore

    Science.gov (United States)

    Aartsen, M. G.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Al Samarai, I.; Altmann, D.; Andeen, K.; Anderson, T.; Ansseau, I.; Anton, G.; Argüelles, C.; Auffenberg, J.; Axani, S.; Bagherpour, H.; Bai, X.; Barron, J. P.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; BenZvi, S.; Berley, D.; Bernardini, E.; Besson, D. Z.; Binder, G.; Bindig, D.; Blaufuss, E.; Blot, S.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Bourbeau, E.; Bourbeau, J.; Bradascio, F.; Braun, J.; Brayeur, L.; Brenzke, M.; Bretz, H.-P.; Bron, S.; Brostean-Kaiser, J.; Burgman, A.; Carver, T.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cross, R.; Day, M.; de André, J. P. A. M.; De Clercq, C.; DeLaunay, J. J.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dujmovic, H.; Dumm, J. P.; Dunkman, M.; Dvorak, E.; Eberhardt, B.; Ehrhardt, T.; Eichmann, B.; Eller, P.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Franckowiak, A.; Friedman, E.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Giang, W.; Glauch, T.; Glüsenkamp, T.; Goldschmidt, A.; Gonzalez, J. G.; Grant, D.; Griffith, Z.; Haack, C.; Hallgren, A.; Halzen, F.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Hokanson-Fasig, B.; Hoshina, K.; Huang, F.; Huber, M.; Hultqvist, K.; Hünnefeld, M.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, B. J. P.; Kalaczynski, P.; Kang, W.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kheirandish, A.; Kim, J.; Kim, M.; Kintscher, T.; Kirby, C.; Kiryluk, J.; Kittler, T.; Klein, S. R.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koschinsky, J. P.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, M.; Krückl, G.; Kunnen, J.; Kunwar, S.; Kurahashi, N.; Kuwabara, T.; Kyriacou, A.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lauber, F.; Lennarz, D.; Lesiak-Bzdak, M.; Leuermann, M.; Liu, Q. R.; Lu, L.; Lünemann, J.; Luszczak, W.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mancina, S.; Maruyama, R.; Mase, K.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meier, M.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Micallef, J.; Momenté, G.; Montaruli, T.; Moore, R. W.; Moulai, M.; Nahnhauer, R.; Nakarmi, P.; Naumann, U.; Neer, G.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Peiffer, P.; Pepper, J. A.; Pérez de los Heros, C.; Pieloth, D.; Pinat, E.; Plum, M.; Price, P. B.; Przybylski, G. T.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Rea, I. C.; Reimann, R.; Relethford, B.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Rysewyk, D.; Sälzer, T.; Sanchez Herrera, S. E.; Sandrock, A.; Sandroos, J.; Santander, M.; Sarkar, S.; Sarkar, S.; Satalecka, K.; Schlunder, P.; Schmidt, T.; Schneider, A.; Schoenen, S.; Schöneberg, S.; Schumacher, L.; Seckel, D.; Seunarine, S.; Soedingrekso, J.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stachurska, J.; Stamatikos, M.; Stanev, T.; Stasik, A.; Stettner, J.; Steuer, A.; Stezelberger, T.; Stokstad, R. 
G.; Stößl, A.; Strotjohann, N. L.; Stuttard, T.; Sullivan, G. W.; Sutherland, M.; Taboada, I.; Tatar, J.; Tenholt, F.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Tung, C. F.; Turcati, A.; Turley, C. F.; Ty, B.; Unger, E.; Usner, M.; Vandenbroucke, J.; Van Driessche, W.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Vehring, M.; Vogel, E.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandler, F. D.; Wandkowsky, N.; Waza, A.; Weaver, C.; Weiss, M. J.; Wendt, C.; Werthebach, J.; Westerhoff, S.; Whelan, B. J.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wolf, M.; Wood, J.; Wood, T. R.; Woolsey, E.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Yuan, T.; Zoll, M.; IceCube Collaboration

    2018-04-01

    As atmospheric neutrinos propagate through the Earth, vacuumlike oscillations are modified by Standard Model neutral- and charged-current interactions with electrons. Theories beyond the Standard Model introduce heavy, TeV-scale bosons that can produce nonstandard neutrino interactions. These additional interactions may modify the Standard Model matter effect, producing a measurable deviation from the prediction for atmospheric neutrino oscillations. The result described in this paper constrains nonstandard interaction parameters, building upon a previous analysis of atmospheric muon-neutrino disappearance with three years of IceCube DeepCore data. The best fit for the muon to tau flavor changing term is ε_μτ = -0.0005, with a 90% C.L. allowed range of -0.0067 < ε_μτ < 0.0081. This result is more restrictive than recent limits from other experiments for ε_μτ. Furthermore, our result is complementary to a recent constraint on ε_μτ using another publicly available IceCube high-energy event selection. Together, they constitute the world's best limits on nonstandard interactions in the μ-τ sector.
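
    For context, the standard parameterization behind such limits adds an ε matrix to the usual matter potential in the flavor-basis Hamiltonian; the form below uses the common normalization to the electron density (conventions vary across analyses).

```latex
\[
  H_{\mathrm{mat}} \;=\; \sqrt{2}\, G_F\, n_e
  \begin{pmatrix}
    1 + \epsilon_{ee} & \epsilon_{e\mu}    & \epsilon_{e\tau} \\
    \epsilon_{e\mu}^{*} & \epsilon_{\mu\mu} & \epsilon_{\mu\tau} \\
    \epsilon_{e\tau}^{*} & \epsilon_{\mu\tau}^{*} & \epsilon_{\tau\tau}
  \end{pmatrix}
\]
```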

  11. Minimally invasive trans-portal resection of deep intracranial lesions.

    Science.gov (United States)

    Raza, S M; Recinos, P F; Avendano, J; Adams, H; Jallo, G I; Quinones-Hinojosa, A

    2011-02-01

    The surgical management of deep intra-axial lesions still requires microsurgical approaches that utilize retraction of deep white matter to obtain adequate visualization. We report our experience with a new tubular retractor system, designed specifically for intracranial applications, linked with frameless neuronavigation for a cohort of intraventricular and deep intra-axial tumors. The ViewSite Brain Access System (Vycor, Inc.) was used in a series of 9 adult and pediatric patients with a variety of pathologies. Histological diagnoses either resected or biopsied with the system included: colloid cyst, DNET, papillary pineal tumor, anaplastic astrocytoma, toxoplasmosis and lymphoma. The locations of the lesions approached include: lateral ventricle, basal ganglia, pulvinar/posterior thalamus and insular cortex. Post-operative imaging was assessed to determine the extent of resection and the extent of white matter damage along the surgical trajectory (based on T2/FLAIR and diffusion restriction/ADC signal). Satisfactory resection or biopsy was obtained in all patients. Radiographic analysis demonstrated evidence of white matter damage along the surgical trajectory in one patient. None of the patients experienced neurological deficits as a result of white matter retraction/manipulation. Based on a retrospective review of our experience, we feel that this access system, when used in conjunction with frameless neuronavigational systems, provides adequate visualization for tumor resection while permitting the use of standard microsurgical techniques through minimally invasive craniotomies. Our initial data indicate that this system may minimize white matter injury, but further studies are necessary. © Georg Thieme Verlag KG Stuttgart · New York.

  12. Scientific and ethical issues related to deep brain stimulation for disorders of mood, behavior, and thought.

    Science.gov (United States)

    Rabins, Peter; Appleby, Brian S; Brandt, Jason; DeLong, Mahlon R; Dunn, Laura B; Gabriëls, Loes; Greenberg, Benjamin D; Haber, Suzanne N; Holtzheimer, Paul E; Mari, Zoltan; Mayberg, Helen S; McCann, Evelyn; Mink, Sallie P; Rasmussen, Steven; Schlaepfer, Thomas E; Vawter, Dorothy E; Vitek, Jerrold L; Walkup, John; Mathews, Debra J H

    2009-09-01

    A 2-day consensus conference was held to examine scientific and ethical issues in the application of deep brain stimulation for treating mood and behavioral disorders, such as major depression, obsessive-compulsive disorder, and Tourette syndrome. The primary objectives of the conference were to (1) establish consensus among participants about the design of future clinical trials of deep brain stimulation for disorders of mood, behavior, and thought and (2) develop standards for the protection of human subjects participating in such studies. Conference participants identified 16 key points for guiding research in this growing field. The adoption of the described guidelines would help to protect the safety and rights of research subjects who participate in clinical trials of deep brain stimulation for disorders of mood, behavior, and thought, and has further potential to benefit other stakeholders in the research process, including clinical researchers and device manufacturers. That said, the adoption of the guidelines will require broad and substantial commitment from many of these same stakeholders.

  13. TOPIC MODELING: CLUSTERING OF DEEP WEBPAGES

    OpenAIRE

    Muhunthaadithya C; Rohit J.V; Sadhana Kesavan; E. Sivasankar

    2015-01-01

    The internet comprises a massive amount of information in the form of zillions of web pages. This information can be categorized into the surface web and the deep web. Existing search engines can effectively make use of surface web information, but the deep web remains largely unexploited. Machine learning techniques have been commonly employed to access deep web content.

  14. Results of Using the Global Positioning System to Maintain the Time and Frequency Synchronization in the Jet Propulsion Laboratory's Deep Space Network

    National Research Council Canada - National Science Library

    Clements, P. A; Kirk, A; Unglaub, R

    1986-01-01

    The Jet Propulsion Laboratory's Deep Space Network (DSN) consists of three tracking stations located in California, Australia, and Spain, each with two hydrogen maser clocks as the time and frequency standard...

  15. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu; Han, Renmin; Bi, Chongwei; Li, Mo; Wang, Sheng; Gao, Xin

    2017-01-01

    or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments

  16. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  17. MOVING OBJECTS IN THE HUBBLE ULTRA DEEP FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Mukremin; Gianninas, Alexandros [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, 440 W. Brooks St., Norman, OK 73019 (United States); Von Hippel, Ted, E-mail: kilic@ou.edu, E-mail: alexg@nhn.ou.edu, E-mail: ted.vonhippel@erau.edu [Embry-Riddle Aeronautical University, 600 S. Clyde Morris Blvd., Daytona Beach, FL 32114 (United States)

    2013-09-01

    We identify proper motion objects in the Hubble Ultra Deep Field (UDF) using the optical data from the original UDF program in 2004 and the near-infrared data from the 128-orbit UDF 2012 campaign. There are 12 sources brighter than I = 27 mag that display >3σ significant proper motions. We do not find any proper motion objects fainter than this magnitude limit. Combining optical and near-infrared photometry, we model the spectral energy distribution of each point-source using stellar templates and state-of-the-art white dwarf models. For I ≤ 27 mag, we identify 23 stars with K0-M6 spectral types and two faint blue objects that are clearly old, thick disk white dwarfs. We measure a thick disk white dwarf space density of 0.1-1.7 × 10⁻³ pc⁻³ from these two objects. There are no halo white dwarfs in the UDF down to I = 27 mag. Combining the Hubble Deep Field North, South, and the UDF data, we do not see any evidence for dark matter in the form of faint halo white dwarfs, and the observed population of white dwarfs can be explained with the standard Galactic models.
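
    The >3σ criterion quoted above amounts to dividing the displacement between two epochs by the combined positional uncertainty. A simple sketch, with illustrative numbers rather than UDF measurements:

```python
import math

def proper_motion_significance(x1, x2, sigma1, sigma2, dt_years):
    """Proper motion along one axis and its detection significance."""
    mu = (x2 - x1) / dt_years                        # proper motion, mas/yr
    sigma_mu = math.hypot(sigma1, sigma2) / dt_years # combined epoch errors
    return mu, mu / sigma_mu

# Assumed: a 12 mas shift over 8 years with 1.5 mas positional errors per epoch.
mu, nsigma = proper_motion_significance(0.0, 12.0, 1.5, 1.5, 8.0)
print(f"mu = {mu:.2f} mas/yr, significance = {nsigma:.1f} sigma")
```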

  18. [Deep vein thrombosis prophylaxis].

    Science.gov (United States)

    Sandoval-Chagoya, Gloria Alejandra; Laniado-Laborín, Rafael

    2013-01-01

    Background: despite the proven effectiveness of preventive therapy for deep vein thrombosis, a significant proportion of patients at risk for thromboembolism do not receive prophylaxis during hospitalization. Our objective was to determine the adherence to thrombosis prophylaxis guidelines in a general hospital as a quality control strategy. Methods: a random audit of clinical charts was conducted at the Tijuana General Hospital, Baja California, Mexico, to determine the degree of adherence to deep vein thrombosis prophylaxis guidelines. The instrument used was the Caprini's checklist for thrombosis risk assessment in adult patients. Results: the sample included 300 patient charts; 182 (60.7 %) were surgical patients and 118 were medical patients. Forty six patients (15.3 %) received deep vein thrombosis pharmacologic prophylaxis; 27.1 % of medical patients received deep vein thrombosis prophylaxis versus 8.3 % of surgical patients (p < 0.0001). Conclusions: our results show that adherence to DVT prophylaxis at our hospital is extremely low. Only 15.3 % of our patients at risk received treatment, and even patients with very high risk received treatment in less than 25 % of the cases. We have implemented strategies to increase compliance with clinical guidelines.

  19. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can become unwieldy and computationally intensive as the complexity of the task increases. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. CSRNs have been shown to be more effective at solving complex tasks such as maze traversal and image processing when compared to generic feed-forward networks. While deep neural networks (DNN) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent in neural networks with recurrency. Further, our work introduces deep hierarchy in SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper presents experiments using face, facial expression and character recognition tasks with the novel deep recurrent model and compares recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN based recognition framework into a humanoid robotic platform called NAO.

  20. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

  1. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low-complexity, close to optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
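
    A minimal NumPy sketch of the parameterization this line of work trains: one iteration of min-sum belief propagation with a learnable weight attached to each check-to-variable message (uniform weights here stand in for trained ones). The toy parity-check matrix and channel LLRs are illustrative assumptions.

    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],      # toy (3 x 6) parity-check matrix
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    llr = np.array([-1.2, 0.8, -0.3, 2.1, -0.9, 1.5])  # channel LLRs (assumed)
    w = np.ones(H.shape)                   # per-edge weights; trained in the paper

    v2c = H * llr                          # variable-to-check messages (first pass)
    c2v = np.zeros_like(v2c, dtype=float)
    for c in range(H.shape[0]):
        vs = np.flatnonzero(H[c])
        for v in vs:
            others = vs[vs != v]           # extrinsic messages only
            sign = np.prod(np.sign(v2c[c, others]))
            c2v[c, v] = w[c, v] * sign * np.min(np.abs(v2c[c, others]))

    marginal = llr + c2v.sum(axis=0)       # posterior LLRs after one iteration
    hard = (marginal < 0).astype(int)      # hard decision: 1 where LLR is negative
    print(hard)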

  2. The deep ocean under climate change

    Science.gov (United States)

    Levin, Lisa A.; Le Bris, Nadine

    2015-11-01

    The deep ocean absorbs vast amounts of heat and carbon dioxide, providing a critical buffer to climate change but exposing vulnerable ecosystems to combined stresses of warming, ocean acidification, deoxygenation, and altered food inputs. Resulting changes may threaten biodiversity and compromise key ocean services that maintain a healthy planet and human livelihoods. There exist large gaps in understanding of the physical and ecological feedbacks that will occur. Explicit recognition of deep-ocean climate mitigation and inclusion in adaptation planning by the United Nations Framework Convention on Climate Change (UNFCCC) could help to expand deep-ocean research and observation and to protect the integrity and functions of deep-ocean ecosystems.

  3. Search for contact interactions and graviton effects in deep inelastic scattering at HERA

    International Nuclear Information System (INIS)

    Scheins, J.J.

    2001-09-01

    Neutral current events in deep inelastic scattering at HERA taken with the H1 detector are examined with respect to standard model expectations. The measured inclusive cross section dσ/dQ² for Q² > 200 GeV² in reactions e±p → e±X is analysed in terms of contact interactions or graviton effects in combination with large extra dimensions. The total amount of analysed data corresponds to an integrated luminosity of L_int = 115 pb⁻¹. The comparison of all data sets to their corresponding standard model expectation shows no evidence for new phenomena. Therefore exclusion limits are derived for the mentioned physical scenarios beyond the standard model. The combination of all data sets leads to maximum sensitivity and significantly improved limits compared to earlier results of H1. (orig.)

  4. NATURAL GAS RESOURCES IN DEEP SEDIMENTARY BASINS

    Energy Technology Data Exchange (ETDEWEB)

    Thaddeus S. Dyman; Troy Cook; Robert A. Crovelli; Allison A. Henry; Timothy C. Hester; Ronald C. Johnson; Michael D. Lewan; Vito F. Nuccio; James W. Schmoker; Dennis B. Riggin; Christopher J. Schenk

    2002-02-05

    From a geological perspective, deep natural gas resources are generally defined as resources occurring in reservoirs at or below 15,000 feet, whereas ultra-deep gas occurs below 25,000 feet. From an operational point of view, "deep" is often thought of in a relative sense based on the geologic and engineering knowledge of gas (and oil) resources in a particular area. Deep gas can be found in either conventionally-trapped or unconventional basin-center accumulations that are essentially large single fields having spatial dimensions often exceeding those of conventional fields. Exploration for deep conventional and unconventional basin-center natural gas resources deserves special attention because these resources are widespread and occur in diverse geologic environments. In 1995, the U.S. Geological Survey estimated that 939 Tcf of technically recoverable natural gas remained to be discovered or was part of reserve appreciation from known fields in the onshore areas and State waters of the United States. Of this USGS resource, nearly 114 trillion cubic feet (Tcf) of technically-recoverable gas remains to be discovered from deep sedimentary basins. Worldwide estimates of deep gas are also high. The U.S. Geological Survey World Petroleum Assessment 2000 Project recently estimated a world mean undiscovered conventional gas resource outside the U.S. of 844 Tcf below 4.5 km (about 15,000 feet). Less is known about the origins of deep gas than about the origins of gas at shallower depths because fewer wells have been drilled into the deeper portions of many basins. Some of the many factors contributing to the origin of deep gas include the thermal stability of methane, the role of water and non-hydrocarbon gases in natural gas generation, porosity loss with increasing thermal maturity, the kinetics of deep gas generation, thermal cracking of oil to gas, and source rock potential based on thermal maturity and kerogen type. Recent experimental simulations

  5. VERIFICATION OF THE FOOD SAFETY MANAGEMENT SYSTEM IN DEEP FROZEN FOOD PRODUCTION PLANT

    Directory of Open Access Journals (Sweden)

    Peter Zajác

    2010-07-01

    This work presents a verification of the food safety management system in a deep-frozen food production plant. The main emphasis is on creating a set of verification questions covering the articles of the standard STN EN ISO 22000:2006 and on assessing the effectiveness of the food safety management system. Information was acquired from scientific literature sources, which point out the importance of implementing and maintaining an effective food safety management system. doi:10.5219/28

  6. Achieving Innovation and Affordability Through Standardization of Materials Development and Testing

    Science.gov (United States)

    Bray, M. H.; Zook, L. M.; Raley, R. E.; Chapman, C.

    2011-01-01

    The successful expansion of development, innovation, and production within the aeronautics industry during the 20th century was facilitated by the collaboration of government agencies with commercial aviation companies. One of the initial products conceived from this collaboration was the ANC-5 Bulletin, first published in 1937, which was intended to standardize the requirements of various government agencies in the design of aircraft structure. Its subsequent revisions and conversion to MIL-HDBK-5 and then MMPDS-01 established, and later expanded, standardized mechanical-property design values and other related design information for metallic materials used in aircraft, missiles, and space vehicles, along with guidance on standardization of composition, processing, and analytical methods for presentation and inclusion in the handbook. This standardization enabled an expansion of the technologies, providing efficiency and reliability to consumers. The national space policy shift in priority for NASA, with its emphasis on transferring travel to low Earth orbit to commercial space providers, highlights an opportunity and a need for the national and global space industries. The same collaboration and standardization that is documented and maintained by the industry within MIL-HDBK-5 (MMPDS-01) and MIL-HDBK-17 (nonmetallic mechanical properties) can also be exploited to standardize the thermal performance properties, processing methods, test methods, and analytical methods for use in aircraft and spacecraft design and associated propulsion systems. In addition to the definition and standardization of thermal performance descriptions, standardized test and analysis methods for extreme environments (high temperature, cryogenics, deep-space radiation, etc.) would also be highly valuable to the industry. It can be established that many individual programs within the government agencies have been overcome with development costs

  7. Deep smarts.

    Science.gov (United States)

    Leonard, Dorothy; Swap, Walter

    2004-09-01

    When a person sizes up a complex situation and rapidly comes to a decision that proves to be not just good but brilliant, you think, "That was smart." After you watch him do this a few times, you realize you're in the presence of something special. It's not raw brainpower, though that helps. It's not emotional intelligence, either, though that, too, is often involved. It's deep smarts. Deep smarts are not philosophical; they're not "wisdom" in that sense, but they're as close to wisdom as business gets. You see them in the manager who understands when and how to move into a new international market, in the executive who knows just what kind of talk to give when her organization is in crisis, in the technician who can track a product failure back to an interaction between independently produced elements. These are people whose knowledge would be hard to purchase on the open market. Their insight is based on know-how more than on know-what; it comprises a system view as well as expertise in individual areas. Because deep smarts are experience based and often context specific, they can't be produced overnight or readily imported into an organization. It takes years for an individual to develop them, and no time at all for an organization to lose them when a valued veteran walks out the door. They can be taught, however, with the right techniques. Drawing on their forthcoming book Deep Smarts, Dorothy Leonard and Walter Swap say the best way to transfer such expertise to novices (and, on a larger scale, to make individual knowledge institutional) isn't through PowerPoint slides, a Web site of best practices, online training, project reports, or lectures. Rather, the sage needs to teach the neophyte individually how to draw wisdom from experience. Companies have to be willing to dedicate time and effort to such extensive training, but the investment more than pays for itself.

  8. Deep Learning and Developmental Learning: Emergence of Fine-to-Coarse Conceptual Categories at Layers of Deep Belief Network.

    Science.gov (United States)

    Sadeghi, Zahra

    2016-09-01

    In this paper, I investigate conceptual categories derived from developmental processing in a deep neural network. The similarity matrices of the deep representation at each layer of the neural network are computed and compared with those of the raw representation. While the clusters generated by the raw representation stand at the basic level of abstraction, the conceptual categories obtained from the deep representation show a bottom-up transition procedure. Results demonstrate a developmental course of learning from the specific to the general level of abstraction through learned layers of representations in a deep belief network. © The Author(s) 2016.
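
    A minimal sketch of the analysis described above, with random activations standing in for a trained deep belief network: for each layer's representation of a set of inputs, a pairwise cosine-similarity matrix is computed and clustered, the kind of structure compared across layers in the paper.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    layers = [rng.normal(size=(50, d)) for d in (784, 256, 64)]  # 50 inputs, 3 layers

    for depth, acts in enumerate(layers):
        unit = acts / np.linalg.norm(acts, axis=1, keepdims=True)
        sim = unit @ unit.T                          # pairwise cosine-similarity matrix
        cond = 1.0 - sim[np.triu_indices(50, 1)]     # condensed distance vector
        labels = fcluster(linkage(cond), t=5, criterion='maxclust')
        print(f"layer {depth}: {len(set(labels))} clusters")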

  9. Climate, carbon cycling, and deep-ocean ecosystems.

    Science.gov (United States)

    Smith, K L; Ruhl, H A; Bett, B J; Billett, D S M; Lampitt, R S; Kaufmann, R S

    2009-11-17

    Climate variation affects surface ocean processes and the production of organic carbon, which ultimately comprises the primary food supply to the deep-sea ecosystems that occupy approximately 60% of the Earth's surface. Warming trends in atmospheric and upper ocean temperatures, attributed to anthropogenic influence, have occurred over the past four decades. Changes in upper ocean temperature influence stratification and can affect the availability of nutrients for phytoplankton production. Global warming has been predicted to intensify stratification and reduce vertical mixing. Research also suggests that such reduced mixing will enhance variability in primary production and carbon export flux to the deep sea. The dependence of deep-sea communities on surface water production has raised important questions about how climate change will affect carbon cycling and deep-ocean ecosystem function. Recently, unprecedented time-series studies conducted over the past two decades in the North Pacific and the North Atlantic at >4,000-m depth have revealed unexpectedly large changes in deep-ocean ecosystems significantly correlated to climate-driven changes in the surface ocean that can impact the global carbon cycle. Climate-driven variation affects oceanic communities from surface waters to the much-overlooked deep sea and will have impacts on the global carbon cycle. Data from these two widely separated areas of the deep ocean provide compelling evidence that changes in climate can readily influence deep-sea processes. However, the limited geographic coverage of these existing time-series studies stresses the importance of developing a more global effort to monitor deep-sea ecosystems under modern conditions of rapidly changing climate.

  10. The deep ocean under climate change.

    Science.gov (United States)

    Levin, Lisa A; Le Bris, Nadine

    2015-11-13

    The deep ocean absorbs vast amounts of heat and carbon dioxide, providing a critical buffer to climate change but exposing vulnerable ecosystems to combined stresses of warming, ocean acidification, deoxygenation, and altered food inputs. Resulting changes may threaten biodiversity and compromise key ocean services that maintain a healthy planet and human livelihoods. There exist large gaps in understanding of the physical and ecological feedbacks that will occur. Explicit recognition of deep-ocean climate mitigation and inclusion in adaptation planning by the United Nations Framework Convention on Climate Change (UNFCCC) could help to expand deep-ocean research and observation and to protect the integrity and functions of deep-ocean ecosystems. Copyright © 2015, American Association for the Advancement of Science.

  11. SEDS: THE SPITZER EXTENDED DEEP SURVEY. SURVEY DESIGN, PHOTOMETRY, AND DEEP IRAC SOURCE COUNTS

    International Nuclear Information System (INIS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Huang, J.-S.; Hernquist, L.; Hora, J. L.; Arendt, R.; Barmby, P.; Barro, G.; Faber, S.; Guhathakurta, P.; Bell, E. F.; Bouwens, R.; Cattaneo, A.; Croton, D.; Davé, R.; Dunlop, J. S.; Egami, E.; Finlator, K.; Grogin, N. A.

    2013-01-01

    The Spitzer Extended Deep Survey (SEDS) is a very deep infrared survey within five well-known extragalactic science fields: the UKIDSS Ultra-Deep Survey, the Extended Chandra Deep Field South, COSMOS, the Hubble Deep Field North, and the Extended Groth Strip. SEDS covers a total area of 1.46 deg² to a depth of 26 AB mag (3σ) in both of the warm Infrared Array Camera (IRAC) bands at 3.6 and 4.5 μm. Because of its uniform depth of coverage in so many widely-separated fields, SEDS is subject to roughly 25% smaller errors due to cosmic variance than a single-field survey of the same size. SEDS was designed to detect and characterize galaxies from intermediate to high redshifts (z = 2-7) with a built-in means of assessing the impact of cosmic variance on the individual fields. Because the full SEDS depth was accumulated in at least three separate visits to each field, typically with six-month intervals between visits, SEDS also furnishes an opportunity to assess the infrared variability of faint objects. This paper describes the SEDS survey design, processing, and publicly-available data products. Deep IRAC counts for the more than 300,000 galaxies detected by SEDS are consistent with models based on known galaxy populations. Discrete IRAC sources contribute 5.6 ± 1.0 and 4.4 ± 0.8 nW m⁻² sr⁻¹ at 3.6 and 4.5 μm to the diffuse cosmic infrared background (CIB). IRAC sources cannot contribute more than half of the total CIB flux estimated from DIRBE data. Barring an unexpected error in the DIRBE flux estimates, half the CIB flux must therefore come from a diffuse component.

  12. The deep lymphatic anatomy of the hand.

    Science.gov (United States)

    Ma, Chuan-Xiang; Pan, Wei-Ren; Liu, Zhi-An; Zeng, Fan-Qiang; Qiu, Zhi-Qiang

    2018-04-03

    The deep lymphatic anatomy of the hand remains the least described in the medical literature. Eight hands were harvested from four nonembalmed human cadavers, amputated above the wrist. A small amount of 6% hydrogen peroxide was employed to detect the lymphatic vessels around the superficial and deep palmar vascular arches, in the webs from the index to little fingers, and in the thenar and hypothenar areas. A 30-gauge needle was inserted into the vessels, which were injected with a barium sulphate compound. Each specimen was dissected, photographed and radiographed to demonstrate the deep lymphatic distribution of the hand. Five groups of deep collecting lymph vessels were found in the hand: the superficial palmar arch lymph vessel (SPALV); the deep palmar arch lymph vessel (DPALV); the thenar lymph vessel (TLV); the hypothenar lymph vessel (HTLV); and the deep finger web lymph vessel (DFWLV). Each group of vessels drained in different directions first, then all turned and ran towards the wrist in different layers. The deep lymphatic drainage of the hand has been presented. The results will provide an anatomical basis for clinical management, educational reference and scientific research. Copyright © 2018 Elsevier GmbH. All rights reserved.

  13. New Insights Offered by a Computational Model of Deep Brain Stimulation

    DEFF Research Database (Denmark)

    Modolo, J.; Mosekilde, Erik; Beuter, A.

    2007-01-01

    Deep brain stimulation (DBS) is a standard neurosurgical procedure used to treat motor symptoms in about 5% of patients with Parkinson's disease (PD). Despite the indisputable success of this procedure, the biological mechanisms underlying the clinical benefits of DBS have not yet been fully...... and exploring the physiological mechanisms which respond to this treatment strategy (i.e., DBS). Finally, we present new insights into the ways this computational model may help to elucidate the dynamic network effects produced in a cerebral structure when DBS is applied. (C) 2007 Elsevier Ltd. All rights...

  14. The First U.S. Naval Observatory Robotic Astrometric Telescope Catalog

    Science.gov (United States)

    2015-10-01

    Over 188 million objects matched with the Two Micron All Sky Survey (2MASS) point-source catalog; proper motions (typically 5-7 mas yr⁻¹ standard errors) are provided. These data are supplemented by 2MASS and AAVSO Photometric All-Sky Survey (APASS) photometry. Observations, reductions, and catalog ... a reference star catalog for current epochs about 4 times more precise than UCAC, with a density similar to the Two Micron All Sky Survey (2MASS).

  15. Diagnosis and Treatment of Lower Extremity Deep Vein Thrombosis: Korean Practice Guidelines

    Science.gov (United States)

    Min, Seung-Kee; Kim, Young Hwan; Joh, Jin Hyun; Kang, Jin Mo; Park, Ui Jun; Kim, Hyung-Kee; Chang, Jeong-Hwan; Park, Sang Jun; Kim, Jang Yong; Bae, Jae Ik; Choi, Sun Young; Kim, Chang Won; Park, Sung Il; Yim, Nam Yeol; Jeon, Yong Sun; Yoon, Hyun-Ki; Park, Ki Hyuk

    2016-01-01

    Lower extremity deep vein thrombosis is a serious medical condition that can result in death or major disability due to pulmonary embolism or post-thrombotic syndrome. Appropriate diagnosis and treatment are required to improve symptoms and salvage the affected limb. Early thrombus clearance rapidly resolves symptoms related to venous obstruction, restores valve function and reduces the incidence of post-thrombotic syndrome. Recently, endovascular treatment has been established as a standard method for early thrombus removal. However, there are a variety of views regarding the indications and procedures among medical institutions and operators. Therefore, we intend to provide evidence-based guidelines for diagnosis and treatment of lower extremity deep vein thrombosis by multidisciplinary consensus. These guidelines are the result of a close collaboration between interventional radiologists and vascular surgeons. The goals of these guidelines are to improve treatment, to serve as a guide to the clinician, and consequently to contribute to public health care. PMID:27699156

  16. Deep ECGNet: An Optimal Deep Learning Framework for Monitoring Mental Stress Using Ultra Short-Term ECG Signals.

    Science.gov (United States)

    Hwang, Bosun; You, Jiwoo; Vaessen, Thomas; Myin-Germeys, Inez; Park, Cheolsoo; Zhang, Byoung-Tak

    2018-02-08

    Stress recognition using electrocardiogram (ECG) signals requires the intractable long-term heart rate variability (HRV) parameter extraction process. This study proposes a novel deep learning framework to recognize stressful states, the Deep ECGNet, using ultra short-term raw ECG signals without any feature engineering methods. The Deep ECGNet was developed through various experiments and analysis of ECG waveforms. We propose an optimal recurrent and convolutional neural network architecture, along with an optimal convolution filter length (related to the P, Q, R, S, and T wave durations of the ECG) and pooling length (related to the heart beat period), based on optimization experiments and analysis of the waveform characteristics of ECG signals. The experiments were also conducted with conventional methods using HRV parameters and frequency features as a benchmark test. The data used in this study were obtained from Kwangwoon University in Korea (13 subjects, Case 1) and KU Leuven University in Belgium (9 subjects, Case 2). Experiments were designed according to various experimental protocols to elicit stressful conditions. The proposed framework, the Deep ECGNet, outperformed the conventional approaches with the highest accuracy of 87.39% for Case 1 and 73.96% for Case 2, that is, 16.22% and 10.98% improvements over the conventional HRV method. We propose an optimal deep learning architecture and its parameters for stress recognition, together with theoretical considerations on how to design the deep learning structure based on the periodic patterns of raw ECG data. The experimental results of this study show that the proposed deep learning model, the Deep ECGNet, is an optimal structure to recognize stress conditions using ultra short-term ECG data.
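
    A minimal tf.keras sketch of the kind of architecture described above, with the convolution filter length tied to the ECG wave-complex duration and the pooling window tied to the heart-beat period; the sampling rate, layer sizes, and exact lengths are assumptions, not the paper's tuned values.

    import tensorflow as tf

    fs = 256  # assumed sampling rate (Hz); 10 s of raw single-lead ECG as input
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, kernel_size=fs // 2, activation='relu',
                               input_shape=(10 * fs, 1)),       # ~0.5 s: P-QRS-T span
        tf.keras.layers.MaxPooling1D(pool_size=int(0.8 * fs)),  # ~one beat period
        tf.keras.layers.LSTM(64),                               # recurrent stage
        tf.keras.layers.Dense(1, activation='sigmoid'),         # stressed vs. not
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.summary()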

  17. Heart-Rate Variability During Deep Sleep in World-Class Alpine Skiers: A Time-Efficient Alternative to Morning Supine Measurements.

    Science.gov (United States)

    Herzig, David; Testorelli, Moreno; Olstad, Daniela Schäfer; Erlacher, Daniel; Achermann, Peter; Eser, Prisca; Wilhelm, Matthias

    2017-05-01

    It is increasingly popular to use heart-rate variability (HRV) to tailor training for athletes, and a time-efficient method is HRV assessment during deep sleep. The aims were to validate the selection of deep-sleep segments identified by RR intervals against simultaneous electroencephalography (EEG) recordings and to compare the HRV parameters of these segments with those of standard morning supine measurements. In 11 world-class alpine skiers, RR intervals were monitored during 10 nights, and simultaneous EEGs were recorded during 2-4 nights. Deep sleep was determined from the HRV signal and verified by delta power from the EEG recordings. Four further segments were chosen for HRV determination, namely, a 4-h segment from midnight to 4 AM and three 5-min segments: one just before awakening, one after waking in the supine position, and one in standing after an orthostatic challenge. Training load was recorded every day. A total of 80 night and 68 morning measurements of 9 athletes were analyzed. Good correspondence was found between the phases selected by RR intervals and those selected by EEG. Concerning the root-mean-squared difference of successive RR intervals (RMSSD), a marker of parasympathetic activity, the best relationship with the morning supine measurement was found in deep sleep. HRV is a simple tool for approximating deep-sleep phases, and HRV measurement during deep sleep could provide a time-efficient alternative to HRV in the supine position.
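
    A small sketch of the RMSSD computation referenced above (the root mean square of successive RR-interval differences, a standard parasympathetic marker); the RR series is invented for illustration.

    import numpy as np

    rr_ms = np.array([812, 830, 845, 821, 809, 833, 840])  # RR intervals (ms)
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))          # RMS of successive diffs
    print(f"RMSSD = {rmssd:.1f} ms")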

  18. Deep inelastic electron and muon scattering

    International Nuclear Information System (INIS)

    Taylor, R.E.

    1975-07-01

    From the review of deep inelastic electron and muon scattering it is concluded that the puzzle of deep inelastic scattering versus annihilation has been replaced by the challenge of the new particles, and that the evidence for the simplest quark-algebra models of deep inelastic processes is weaker than a year ago. Definite evidence of scale breaking was found, but the specific form of that scale breaking is difficult to extract from the data. 59 references

  19. Fast, Distributed Algorithms in Deep Networks

    Science.gov (United States)

    2016-05-11

    A Trident Scholar project report (no. 446): Fast, Distributed Algorithms in Deep Networks, by Midshipman 1/C Ryan J. Burmeister, USN. ... shallow networks; additional work will need to be done in order to allow for the application of ADMM to deep nets. The ADMM method allows for quick ...

  20. Non-standard neutrino interactions in the mu–tau sector

    Directory of Open Access Journals (Sweden)

    Irina Mocioiu

    2015-04-01

    We discuss neutrino mass hierarchy implications arising from the effects of non-standard neutrino interactions on muon rates in high-statistics atmospheric neutrino oscillation experiments like IceCube DeepCore. We concentrate on the mu-tau sector, which is presently the least constrained. It is shown that the magnitude of the effects depends strongly on the sign of the ϵ_{μτ} parameter describing this non-standard interaction. A simple analytic model is used to understand the parameter space where differences between the two signs are maximized. We discuss how this effect is partially degenerate with changing the neutrino mass hierarchy, as well as how this degeneracy could be lifted.
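
    For orientation, a hedged sketch in standard non-standard-interaction notation (not taken from this paper) of where ϵ_{μτ} enters: it adds an off-diagonal term to the matter potential in the flavor-basis Hamiltonian, so its sign changes how the matter term interferes with the sign of Δm²₃₁, i.e., with the mass hierarchy:

    H = \frac{1}{2E}\, U\, \mathrm{diag}\!\left(0,\, \Delta m^2_{21},\, \Delta m^2_{31}\right) U^\dagger
        + \sqrt{2}\, G_F N_e \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & \epsilon_{\mu\tau} \\ 0 & \epsilon_{\mu\tau}^{*} & 0 \end{pmatrix}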

  1. Deep Learning from Crowds

    DEFF Research Database (Denmark)

    Rodrigues, Filipe; Pereira, Francisco Camara

    Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels ...
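
    A minimal NumPy sketch of the crowd-layer idea described above, under stated assumptions: the shared network's class probabilities are passed through one per-annotator matrix (initialized to the identity), so each annotator's noisy labels supervise their own head while gradients flow into the shared network. Shapes and values are illustrative.

    import numpy as np

    n_classes, n_annotators = 4, 3
    rng = np.random.default_rng(1)

    p = rng.dirichlet(np.ones(n_classes))              # shared network's output
    W = np.stack([np.eye(n_classes)] * n_annotators)   # crowd layer, identity init

    for r in range(n_annotators):
        p_r = W[r] @ p         # annotator r's predicted (noisy) label distribution
        # training would apply cross-entropy between p_r and annotator r's label,
        # back-propagating through W[r] and the shared network beneath it
        print(f"annotator {r} predicts class {np.argmax(p_r)}")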

  2. Deep learning methods for protein torsion angle prediction.

    Science.gov (United States)

    Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin

    2017-09-18

    Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30° on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, but the MAE of the psi angle is 29°, 2° lower than that of existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
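
    A small sketch of the evaluation metric quoted above: mean absolute error between predicted and true torsion angles with the 360° periodicity handled, so that, e.g., -179° and +179° differ by 2°. The angle values are invented for illustration.

    import numpy as np

    true = np.array([-60.0, 145.0, -179.0])
    pred = np.array([-55.0, 150.0, 178.0])

    diff = (pred - true + 180.0) % 360.0 - 180.0  # signed difference in (-180, 180]
    mae = np.mean(np.abs(diff))
    print(f"MAE = {mae:.1f} degrees")             # ~4.3 here, not 120.7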

  3. Deep Learning in Gastrointestinal Endoscopy.

    Science.gov (United States)

    Patel, Vivek; Armstrong, David; Ganguli, Malika; Roopra, Sandeep; Kantipudi, Neha; Albashir, Siwar; Kamath, Markad V

    2016-01-01

    Gastrointestinal (GI) endoscopy is used to inspect the lumen or interior of the GI tract for several purposes, including (1) making a clinical diagnosis, in real time, based on the visual appearances; (2) taking targeted tissue samples for subsequent histopathological examination; and (3) in some cases, performing therapeutic interventions targeted at specific lesions. GI endoscopy is therefore predicated on the assumption that the operator (the endoscopist) is able to identify and characterize abnormalities or lesions accurately and reproducibly. However, as in other areas of clinical medicine, such as histopathology and radiology, many studies have documented marked interobserver and intraobserver variability in lesion recognition. Thus, there is a clear need and opportunity for techniques or methodologies that will enhance the quality of lesion recognition and diagnosis and improve the outcomes of GI endoscopy. Deep learning models provide a basis to make better clinical decisions in medical image analysis. Biomedical image segmentation, classification, and registration can be improved with deep learning. Recent evidence suggests that the application of deep learning methods to medical image analysis can contribute significantly to computer-aided diagnosis. Deep learning models are usually considered to be more flexible and to provide reliable solutions for image analysis problems compared to conventional computer vision models. The use of fast computers offers the possibility of real-time support that is important for endoscopic diagnosis, which has to be made in real time. Advanced graphics processing units and cloud computing have also favored the use of machine learning, and more particularly deep learning, for patient care. This paper reviews the rapidly evolving literature on the feasibility of applying deep learning algorithms to endoscopic imaging.

  4. Neuromorphic Deep Learning Machines

    OpenAIRE

    Neftci, E; Augustine, C; Paul, S; Detorakis, G

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide...

  5. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.
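
    A hedged approximation of the three-level example described above, assembled from off-the-shelf scikit-learn pieces: two kernel PCA levels feeding a kernel ridge regressor (kernel ridge regression is closely related to LS-SVM regression). This mimics the level structure only, not the paper's conjugate-feature-duality training; all data and hyperparameters are illustrative.

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 10))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    model = make_pipeline(
        KernelPCA(n_components=8, kernel='rbf'),   # level 1: kernel PCA
        KernelPCA(n_components=4, kernel='rbf'),   # level 2: kernel PCA
        KernelRidge(kernel='rbf', alpha=0.1),      # level 3: LS-SVM-style regression
    )
    model.fit(X, y)
    print(f"train R^2 = {model.score(X, y):.2f}")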

  6. Preliminary analyses of the deep geoenvironmental characteristics for the deep borehole disposal of high-level radioactive waste in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Youl; Lee, Min Soo; Choi, Heui Joo; Kim, Geon Young; Kim, Kyung Su [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-06-15

    Spent fuels from nuclear power plants, as well as high-level radioactive waste from the recycling of spent fuels, should be safely isolated from the human environment for an extremely long time. Recently, meaningful studies on the development of deep borehole radioactive waste disposal systems at 3-5 km depth have been carried out in the USA and some countries in Europe, owing to great advances in deep borehole drilling technology. In this paper, domestic deep geoenvironmental characteristics are preliminarily investigated to analyze the applicability of deep borehole disposal technology in Korea. To do this, state-of-the-art technologies in the USA and some countries in Europe are reviewed, and geological and geothermal data from deep boreholes drilled for geothermal use are analyzed. Based on the results on the crystalline rock depth, the geothermal gradient and the spent fuel types generated in Korea, a preliminary deep borehole concept, including a disposal canister and sealing system, is suggested.

  7. Preliminary analyses of the deep geoenvironmental characteristics for the deep borehole disposal of high-level radioactive waste in Korea

    International Nuclear Information System (INIS)

    Lee, Jong Youl; Lee, Min Soo; Choi, Heui Joo; Kim, Geon Young; Kim, Kyung Su

    2016-01-01

    Spent fuels from nuclear power plants, as well as high-level radioactive waste from the recycling of spent fuels, should be safely isolated from the human environment for an extremely long time. Recently, meaningful studies on the development of deep borehole radioactive waste disposal systems at 3-5 km depth have been carried out in the USA and some countries in Europe, owing to great advances in deep borehole drilling technology. In this paper, domestic deep geoenvironmental characteristics are preliminarily investigated to analyze the applicability of deep borehole disposal technology in Korea. To do this, state-of-the-art technologies in the USA and some countries in Europe are reviewed, and geological and geothermal data from deep boreholes drilled for geothermal use are analyzed. Based on the results on the crystalline rock depth, the geothermal gradient and the spent fuel types generated in Korea, a preliminary deep borehole concept, including a disposal canister and sealing system, is suggested

  8. Toolkits and Libraries for Deep Learning.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth

    2017-08-01

    Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.

  9. Deep-sea coral research and technology program: Alaska deep-sea coral and sponge initiative final report

    Science.gov (United States)

    Rooper, Chris; Stone, Robert P.; Etnoyer, Peter; Conrath, Christina; Reynolds, Jennifer; Greene, H. Gary; Williams, Branwen; Salgado, Enrique; Morrison, Cheryl L.; Waller, Rhian G.; Demopoulos, Amanda W.J.

    2017-01-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters. In some places, such as the central and western Aleutian Islands, deep-sea coral and sponge resources can be extremely diverse and may rank among the most abundant deep-sea coral and sponge communities in the world. Many different species of fishes and invertebrates are associated with deep-sea coral and sponge communities in Alaska. Because of their biology, these benthic invertebrates are potentially impacted by climate change and ocean acidification. Deep-sea coral and sponge ecosystems are also vulnerable to the effects of commercial fishing activities. Because of the size and scope of Alaska's continental shelf and slope, the vast majority of the area has not been visually surveyed for deep-sea corals and sponges. NOAA's Deep Sea Coral Research and Technology Program (DSCRTP) sponsored a field research program in the Alaska region between 2012-2015, referred to hereafter as the Alaska Initiative. The priorities for Alaska were derived from ongoing data needs and objectives identified by the DSCRTP, the North Pacific Fishery Management Council (NPFMC), and the Essential Fish Habitat-Environmental Impact Statement (EFH-EIS) process. This report presents the results of 15 projects conducted using DSCRTP funds from 2012-2015. Three of the projects conducted as part of the Alaska deep-sea coral and sponge initiative included dedicated at-sea cruises and fieldwork spread across multiple years. These projects were the eastern Gulf of Alaska Primnoa pacifica study, the Aleutian Islands mapping study, and the Gulf of Alaska fish productivity study. In all, nine separate research cruises were carried out, with a total of 109 at-sea days conducting research. The remaining projects either used data and samples collected by the three major fieldwork projects or were piggy-backed onto existing research programs at the Alaska Fisheries Science Center (AFSC).

  10. Image Captioning with Deep Bidirectional LSTMs

    OpenAIRE

    Wang, Cheng; Yang, Haojin; Bartz, Christian; Meinel, Christoph

    2016-01-01

    This work presents an end-to-end trainable deep bidirectional LSTM (Long Short-Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long-term visual-language interactions by making use of history and future context information at a high-level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of the nonlinearity transition in different ways, are proposed ...

  11. New Techniques for Deep Learning with Geospatial Data using TensorFlow, Earth Engine, and Google Cloud Platform

    Science.gov (United States)

    Hancher, M.

    2017-12-01

    Recent years have seen promising results from many research teams applying deep learning techniques to geospatial data processing. In that same timeframe, TensorFlow has emerged as the most popular framework for deep learning in general, and Google has assembled petabytes of Earth observation data from a wide variety of sources and made them available in analysis-ready form in the cloud through Google Earth Engine. Nevertheless, developing and applying deep learning to geospatial data at scale has been somewhat cumbersome to date. We present a new set of tools and techniques that simplify this process. Our approach combines the strengths of several underlying tools: TensorFlow for its expressive deep learning framework; Earth Engine for data management, preprocessing, postprocessing, and visualization; and other tools in Google Cloud Platform to train TensorFlow models at scale, perform additional custom parallel data processing, and drive the entire process from a single familiar Python development environment. These tools can be used to easily apply standard deep neural networks, convolutional neural networks, and other custom model architectures to a variety of geospatial data structures. We discuss our experiences applying these and related tools to a range of machine learning problems, including classic problems like cloud detection, building detection, land cover classification, as well as more novel problems like illegal fishing detection. Our improved tools will make it easier for geospatial data scientists to apply modern deep learning techniques to their own problems, and will also make it easier for machine learning researchers to advance the state of the art of those techniques.
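
    A minimal sketch of the kind of model such a workflow trains: a small tf.keras convolutional network classifying land cover from multi-band image patches. Random arrays stand in for patches that, in the real pipeline, would be exported from Earth Engine; the band count, patch size, and class count are placeholders.

    import numpy as np
    import tensorflow as tf

    # Stand-in data: in the real workflow these would be patches exported from
    # Earth Engine; random arrays keep the sketch self-contained and runnable.
    patches = np.random.rand(128, 64, 64, 6).astype('float32')  # N x H x W x bands
    labels = np.random.randint(0, 10, size=128)                 # integer classes

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 6)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation='softmax'),  # 10 assumed classes
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(patches, labels, epochs=5, validation_split=0.2)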

  12. An overview of latest deep water technologies

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The 8th Deep Offshore Technology Conference (DOT VIII, Rio de Janeiro, October 30 - November 3, 1995) brought together renowned specialists in deep water development projects, as well as managers from oil companies and engineering/service companies, to discuss state-of-the-art technologies and ongoing projects in the deep offshore. This paper is a compilation of the session summaries about subsea technologies, mooring and dynamic positioning, floaters (Tension Leg Platforms (TLP) and Floating Production Storage and Offloading (FPSO) units), pipelines and risers, exploration and drilling, and other deep water techniques. (J.S.)

  13. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  14. Deep level centers in electron-irradiated silicon crystals doped with copper at different temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Yarykin, Nikolai [Institute of Microelectronics Technology, RAS, Chernogolovka (Russian Federation); Weber, Joerg [Technische Universitaet Dresden (Germany)

    2017-07-15

    The effect of bombardment with energetic particles on the deep-level spectrum of copper-contaminated silicon wafers is studied by space charge spectroscopy methods. The p-type FZ-Si wafers were doped with copper in the temperature range of 645-750 °C and then irradiated with a fluence of 10¹⁵ cm⁻² of 5 MeV electrons at room temperature. Only the mobile Cu_i species and the Cu_PL centers are detected in significant concentrations in the non-irradiated Cu-doped wafers. The properties of the irradiated samples are found to depend qualitatively on the copper in-diffusion temperature T_diff. For T_diff > 700 °C, the irradiation partially reduces the Cu_i concentration and introduces additional Cu_PL centers, while no standard radiation defects are detected. If T_diff was below ≈700 °C, the irradiation totally removes the mobile Cu_i species. Instead, the standard radiation defects and their complexes with copper appear in the deep-level spectrum. A model for the defect reaction scheme during the irradiation is derived and discussed. The DLTS spectrum of the Cu-contaminated and then irradiated silicon depends qualitatively on the copper in-diffusion temperature. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  15. Combining shallow and deep processing for a robust, fast, deep-linguistic dependency parser

    OpenAIRE

    Schneider, G

    2004-01-01

    This paper describes Pro3Gres, a fast, robust, broad-coverage parser that delivers deep-linguistic grammatical relation structures as output, which are closer to predicate-argument structures and more informative than pure constituency structures. The parser stays as shallow as is possible for each task, combining shallow and deep-linguistic methods by integrating chunking and by expressing the majority of long-distance dependencies in a context-free way. It combines statistical and rule-base...

  16. GMSK Modulation for Deep Space Applications

    Science.gov (United States)

    Shambayati, Shervin; Lee, Dennis K.

    2012-01-01

    Due to the scarcity of spectrum at the 8.42 GHz deep space X-band allocation, many deep space missions are now considering the use of higher-order modulation schemes instead of the traditional binary phase shift keying (BPSK). One such scheme is pre-coded Gaussian minimum shift keying (GMSK). GMSK is an excellent candidate for deep space missions: it is a constant-envelope, bandwidth-efficient modulation whose frame error rate (FER) performance with perfect carrier tracking and a proper receiver structure is nearly identical to that of BPSK. There are several issues that need to be addressed with GMSK, however. Specifically, we are interested in the combined effects of spectrum limitations and receiver structure on the coded performance of the X-band link using GMSK. The receivers that are typically used for GMSK demodulation are variations on offset quadrature phase shift keying (OQPSK) receivers. In this paper we consider three receivers: the standard DSN OQPSK receiver, the DSN OQPSK receiver with filtered input, and an optimum OQPSK receiver with filtered input. For the DSN OQPSK receiver we show experimental results with (8920, 1/2), (8920, 1/3) and (8920, 1/6) turbo codes in terms of their error rate performance. We also consider the tracking performance of this receiver as a function of data rate, channel code and the carrier loop signal-to-noise ratio (SNR). For the other two receivers we derive theoretical results showing that for a given loop bandwidth, receiver structure, and channel code, there is a lower data rate limit on GMSK below which a higher SNR than what is required to achieve the required FER on the link is needed. These limits stem from the minimum loop signal-to-noise ratio requirements on the receivers for achieving lock. As a result, for a given channel code and a given FER, there could be a gap between the maximum data rate that BPSK can support without violating the spectrum limits and the minimum data rate that GMSK can support
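
    A minimal NumPy sketch of a GMSK modulator, for orientation: NRZ bits are smoothed by a Gaussian pulse-shaping filter, integrated into phase with modulation index 0.5, and mapped to a constant-envelope complex baseband signal. The BT product, bit pattern, and samples-per-bit are illustrative assumptions, not mission parameters.

    import numpy as np

    sps, bt = 8, 0.5                       # samples per bit, bandwidth-time product
    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    nrz = np.repeat(2 * bits - 1, sps).astype(float)

    # Gaussian filter impulse response, truncated to +/- 2 bit periods
    t = np.arange(-2 * sps, 2 * sps + 1) / sps
    h = np.sqrt(2 * np.pi / np.log(2)) * bt * np.exp(
        -2 * np.pi**2 * bt**2 * t**2 / np.log(2))
    h /= h.sum()                           # unit DC gain

    freq = np.convolve(nrz, h, mode='same')       # smoothed frequency pulses
    phase = np.pi / 2 * np.cumsum(freq) / sps     # +/- pi/2 phase change per bit
    baseband = np.exp(1j * phase)                 # constant-envelope complex signal
    print(np.abs(baseband).min(), np.abs(baseband).max())  # both 1.0: no AM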

  17. Astrometry and Geostationary Satellites in Venezuela

    Science.gov (United States)

    Lacruz, E.; Abad, C.

    2015-10-01

    We present the current status and the first results of the astrometric project CIDA-ABAE for tracking geostationary satellites. This project aims to determine a preliminary orbit for the Venezuelan satellite VENESAT-1, using astrometric positions obtained from an optical telescope. The results presented here are based on observations from the Luepa space tracking ground station in Venezuela, which were processed using astrometric procedures.

  18. Evaluation of Deep Vein Thrombosis with Multidetector Row CT after Orthopedic Arthroplasty: a Prospective Study for Comparison with Doppler Sonography

    International Nuclear Information System (INIS)

    Byun, Sung Su; Kim, Youn Jeong; Chun, Yong Sun; Kim, Won Hong; Kim, Jeong Ho; Park, Chul Hi

    2008-01-01

    This prospective study evaluated the ability of indirect 16-row multidetector CT venography, in comparison with Doppler sonography, to detect deep vein thrombosis after total hip or knee replacement. Sixty-two patients had undergone orthopedic replacement surgery on a total of 30 hip joints and 54 knee joints. The CT venography (scan delay time: 180 seconds; slice thickness/increment: 2/1.5 mm) and Doppler sonography were performed 8 to 40 days after surgery. We measured the z-axis length of the beam-hardening artifact that degraded the image quality so that the presence of deep vein thrombosis could not be evaluated on the axial CT images. The incidence and location of deep vein thrombosis were analyzed. The diagnostic performance of the CT venograms was evaluated and compared with that of Doppler sonography as a standard of reference. The z-axis length (mean ± standard deviation) of the beam-hardening artifact was 4.5 ± 0.8 cm in the arthroplastic knees and 3.9 ± 2.9 cm in the arthroplastic hips. Deep vein thrombosis (DVT) was found in the popliteal or calf veins on Doppler sonography in 30 (48%) of the 62 patients. CT venography had a sensitivity, specificity, positive predictive value, negative predictive value and accuracy of 90%, 97%, 96%, 91% and 94%, respectively. The ability of CT venography to detect DVT was comparable to that of Doppler sonography despite the beam-hardening artifact. Therefore, CT venography is a feasible alternative modality for evaluating post-arthroplasty patients
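
    A small sketch recovering the diagnostic statistics quoted above from a 2x2 confusion table; the counts (27 TP, 3 FN, 1 FP, 31 TN) are reconstructed from the 30 sonography-positive patients and the reported percentages, so they are an inference, not data from the paper.

    tp, fn, fp, tn = 27, 3, 1, 31               # reconstructed confusion counts

    sensitivity = tp / (tp + fn)                # 27/30 = 0.90
    specificity = tn / (tn + fp)                # 31/32 = 0.97
    ppv = tp / (tp + fp)                        # 27/28 = 0.96
    npv = tn / (tn + fn)                        # 31/34 = 0.91
    accuracy = (tp + tn) / (tp + fn + fp + tn)  # 58/62 = 0.94
    print(sensitivity, specificity, ppv, npv, accuracy)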

  19. Evaluation of Deep Vein Thrombosis with Multidetector Row CT after Orthopedic Arthroplasty: a Prospective Study for Comparison with Doppler Sonography

    Energy Technology Data Exchange (ETDEWEB)

    Byun, Sung Su; Kim, Youn Jeong; Chun, Yong Sun; Kim, Won Hong [Inha University, College of Medicine, Incheon (Korea, Republic of); Kim, Jeong Ho; Park, Chul Hi [Gachon University, Gil Medical Center, Incheon (Korea, Republic of)

    2008-02-15

    This prospective study evaluated the ability of indirect 16-row multidetector CT venography, in comparison with Doppler sonography, to detect deep vein thrombosis after total hip or knee replacement. Sixty-two patients had undergone orthopedic replacement surgery on a total of 30 hip joints and 54 knee joints. The CT venography (scan delay time: 180 seconds; slice thickness/increment: 2/1.5 mm) and Doppler sonography were performed 8 to 40 days after surgery. We measured the z-axis length of the beam-hardening artifact that degraded the image quality so that the presence of deep vein thrombosis could not be evaluated on the axial CT images. The incidence and location of deep vein thrombosis were analyzed. The diagnostic performance of the CT venograms was evaluated and compared with that of Doppler sonography as a standard of reference. The z-axis length (mean ± standard deviation) of the beam-hardening artifact was 4.5 ± 0.8 cm in the arthroplastic knees and 3.9 ± 2.9 cm in the arthroplastic hips. Deep vein thrombosis (DVT) was found in the popliteal or calf veins on Doppler sonography in 30 (48%) of the 62 patients. CT venography had a sensitivity, specificity, positive predictive value, negative predictive value and accuracy of 90%, 97%, 96%, 91% and 94%, respectively. The ability of CT venography to detect DVT was comparable to that of Doppler sonography despite the beam-hardening artifact. Therefore, CT venography is a feasible alternative modality for evaluating post-arthroplasty patients.

  20. DeepVel: Deep learning for the estimation of horizontal velocities at the solar surface

    Science.gov (United States)

    Asensio Ramos, A.; Requerey, I. S.; Vitas, N.

    2017-07-01

    Many phenomena taking place in the solar photosphere are controlled by plasma motions. Although the line-of-sight component of the velocity can be estimated using the Doppler effect, we do not have direct spectroscopic access to the components that are perpendicular to the line of sight. These components are typically estimated using methods based on local correlation tracking. We have designed DeepVel, an end-to-end deep neural network that produces an estimation of the velocity at every single pixel, every time step, and at three different heights in the atmosphere from just two consecutive continuum images. We confront DeepVel with local correlation tracking, pointing out that they give very similar results in the time and spatially averaged cases. We use the network to study the evolution in height of the horizontal velocity field in fragmenting granules, supporting the buoyancy-braking mechanism for the formation of integranular lanes in these granules. We also show that DeepVel can capture very small vortices, so that we can potentially expand the scaling cascade of vortices to very small sizes and durations. The movie attached to Fig. 3 is available at http://www.aanda.org
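
    A minimal NumPy sketch of the local correlation tracking baseline that DeepVel is compared against: for each patch of two consecutive continuum images, the integer shift maximizing the cross-correlation gives a local horizontal velocity estimate. The patch size, search range, pixel scale, and cadence are illustrative assumptions.

    import numpy as np

    def lct_shift(patch0, patch1, max_shift=3):
        """Return the (dy, dx) shift of patch1 that best matches patch0."""
        best, best_shift = -np.inf, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                score = np.sum(patch0 * np.roll(patch1, (dy, dx), axis=(0, 1)))
                if score > best:
                    best, best_shift = score, (dy, dx)
        return best_shift

    rng = np.random.default_rng(3)
    frame0 = rng.normal(size=(32, 32))
    frame1 = np.roll(frame0, 2, axis=1)           # true motion: +2 px in x

    dy, dx = lct_shift(frame0, frame1)
    km_per_px, dt = 48.0, 30.0                    # assumed pixel scale and cadence
    print(f"vx = {-dx * km_per_px / dt:.1f} km/s")  # negate: shift maps frame1 back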

  1. Deep Learning in Drug Discovery.

    Science.gov (United States)

    Gawehn, Erik; Hiss, Jan A; Schneider, Gisbert

    2016-01-01

    Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Iris Transponder-Communications and Navigation for Deep Space

    Science.gov (United States)

    Duncan, Courtney B.; Smith, Amy E.; Aguirre, Fernando H.

    2014-01-01

    The Jet Propulsion Laboratory has developed the Iris CubeSat-compatible deep space transponder for INSPIRE, the first CubeSat mission to deep space. Iris is 0.4U, 0.4 kg, consumes 12.8 W, and interoperates with NASA's Deep Space Network (DSN) on X-band frequencies (7.2 GHz uplink, 8.4 GHz downlink) for command, telemetry, and navigation. This talk discusses the Iris for INSPIRE, its features and requirements; future developments and improvements underway; deep space and proximity operations applications for Iris; high-rate Earth orbit variants; and ground requirements, such as those implemented in the DSN, for deep space operations.

  3. Add-on deep transcranial magnetic stimulation (dTMS) in patients with dysthymic disorder comorbid with alcohol use disorder: a comparison with standard treatment.

    Science.gov (United States)

    Girardi, Paolo; Rapinesi, Chiara; Chiarotti, Flavia; Kotzalidis, Georgios D; Piacentino, Daria; Serata, Daniele; Del Casale, Antonio; Scatena, Paola; Mascioli, Flavia; Raccah, Ruggero N; Brugnoli, Roberto; Digiacomantonio, Vittorio; Ferri, Vittoria Rachele; Ferracuti, Stefano; Zangen, Abraham; Angeletti, Gloria

    2015-01-01

    The dorsolateral prefrontal cortex (DLPFC) is dysfunctional in mood and substance use disorders. We predicted higher efficacy for add-on bilateral prefrontal high-frequency deep transcranial magnetic stimulation (dTMS), compared with standard drug treatment (SDT), in patients with dysthymic disorder (DD)/alcohol use disorder (AUD) comorbidity. We carried out a 6-month open-label study involving 20 abstinent patients with DSM-IV-TR AUD comorbid with previously developed DD. Ten patients received SDT for AUD with add-on bilateral dTMS (dTMS-AO) over the DLPFC, while another 10 received SDT alone. We rated alcohol craving with the Obsessive Compulsive Drinking Scale (OCDS), depression with the Hamilton Depression Rating Scale (HDRS), clinical status with the Clinical Global Impressions scale (CGI), and global functioning with the Global Assessment of Functioning (GAF). At the end of the 20-session dTMS period (or an equivalent period in the SDT group), craving scores and depressive symptoms in the dTMS-AO group dropped significantly more than in the SDT group (P < 0.001 and P < 0.02, respectively). High-frequency bilateral DLPFC dTMS with left preference was well tolerated and found to be effective as an add-on in AUD. The potential of dTMS for reducing craving in substance use disorder patients deserves to be further investigated.

  4. How and Why to Do VLBI on GPS

    Science.gov (United States)

    Dickey, J. M.

    2010-01-01

    In order to establish the position of the center of mass of the Earth in the International Celestial Reference Frame, observations of the Global Positioning Satellite (GPS) constellation using the IVS network are important. With a good frame-tie between the coordinates of the IVS telescopes and nearby GPS receivers, plus a common local oscillator reference signal, it should be possible to observe and record simultaneously signals from the astrometric calibration sources and the GPS satellites. The standard IVS solution would give the atmospheric delay and clock offsets to use in analysis of the GPS data. Correlation of the GPS signals would then give accurate orbital parameters of the satellites in the ICRF reference frame, i.e., relative to the positions of the astrometric sources. This is particularly needed to determine motion of the center of mass of the earth along the rotation axis.

  5. Context and Deep Learning Design

    Science.gov (United States)

    Boyle, Tom; Ravenscroft, Andrew

    2012-01-01

    Conceptual clarification is essential if we are to establish a stable and deep discipline of technology enhanced learning. The technology is alluring; this can distract from deep design in a surface rush to exploit the affordances of the new technology. We need a basis for design, and a conceptual unit of organization, that are applicable across…

  6. Programming Deep Brain Stimulation for Tremor and Dystonia: The Toronto Western Hospital Algorithms.

    Science.gov (United States)

    Picillo, Marina; Lozano, Andres M; Kou, Nancy; Munhoz, Renato Puppi; Fasano, Alfonso

    2016-01-01

    Deep brain stimulation (DBS) is an effective treatment for essential tremor (ET) and dystonia. After surgery, a number of extensive programming sessions are performed, relying mainly on the neurologist's personal experience, as no programming guidelines have been provided so far, with the exception of recommendations issued by groups of experts. Moreover, less information is available on the management of DBS in ET and dystonia than in Parkinson's disease. Our aim is to review the literature on initial and follow-up DBS programming procedures for ET and dystonia and to integrate the results with our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We conducted a literature search of PubMed from inception to July 2014 with the keywords "balance", "bradykinesia", "deep brain stimulation", "dysarthria", "dystonia", "gait disturbances", "initial programming", "loss of benefit", "micrographia", "speech", "speech difficulties" and "tremor". Seventy-six papers were considered for this review. Based on the literature review and our experience at TWH, we refined three algorithms for the management of ET: (1) initial programming, (2) management of balance and speech issues and (3) loss of stimulation benefit. We also devised algorithms for the management of dystonia: (1) initial programming and (2) management of stimulation-induced hypokinesia (shuffling gait, micrographia and speech impairment). We propose five algorithms tailored to an individualized approach to managing ET and dystonia patients with DBS. We encourage established as well as new DBS centers to apply these algorithms and to test their clinical usefulness in supplementing current standards of care. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Archaeological Excavation and Deep Mapping in Historic Rural Communities

    Directory of Open Access Journals (Sweden)

    Carenza Lewis

    2015-09-01

    Full Text Available This paper reviews the results of more than a hundred small archaeological “test pit” excavations carried out in 2013 within four rural communities in eastern England. Each excavation used standardized protocols in a different location within the host village, with the finds dated and mapped to create a series of maps spanning more than 3500 years, in order to advance understanding of the spatial development of settlements and landscapes over time. The excavations were all carried out by local volunteers working physically within their own communities, supported and advised by professional archaeologists, with most test pits sited in volunteers’ own gardens or those of their friends, family or neighbors. Site-by-site, the results provided glimpses of the use made by humans of each of the excavated sites spanning prehistory to the present day; while in aggregate the mapped data show how settlement and land-use developed and changed over time. Feedback from participants also demonstrates the diverse positive impacts the project had on individuals and communities. The results are presented and reviewed here in order to highlight the contribution archaeological test pit excavation can make to deep mapping, and the contribution that deep mapping can make to rural communities.

  8. Development and test of a plastic deep-well pump

    International Nuclear Information System (INIS)

    Zhang, Q H; Gao, X F; Xu, Y; Shi, W D; Lu, W G; Liu, W

    2013-01-01

    To develop a plastic deep-well pump, three structural and forming methods are proposed. First, the major hydraulic components are made of plastics and the connection component of steel, so the pump structure is more concise and slim, greatly reducing its weight and easing its transportation, installation, and maintenance. Second, the impeller is designed by the maximum-diameter method; using the same pump casing, the stage head is greatly increased. Third, a seal is formed between the impeller front end face and a steel end face, with two slots designed on the impeller front end face: as the two faces approach, a lubricating pair is formed, leading to an effective seal. With these methods, the pump's axial length is greatly reduced, and its stage head is larger and more efficient; in particular, the pump's axial force is effectively balanced. To examine these proposals, a prototype pump was constructed; testing shows that the pump efficiency exceeds the national standard by 6% and the stage head is improved by 41%, while the structure is more compact and easier to transport. Development of this pump provides useful experience for the further popularization of plastic deep-well pumps.

  9. Deep Generative Models for Molecular Science

    DEFF Research Database (Denmark)

    Jørgensen, Peter Bjørn; Schmidt, Mikkel Nørgaard; Winther, Ole

    2018-01-01

    Generative deep machine learning models now rival traditional quantum-mechanical computations in predicting properties of new structures, and they come with a significantly lower computational cost, opening new avenues in computational molecular science. In the last few years, a variety of deep...... generative models have been proposed for modeling molecules, which differ in both their model structure and choice of input features. We review these recent advances within deep generative models for predicting molecular properties, with particular focus on models based on the probabilistic autoencoder (or...

  10. Too Deep or Not Too Deep?: A Propensity-Matched Comparison of the Analgesic Effects of a Superficial Versus Deep Serratus Fascial Plane Block for Ambulatory Breast Cancer Surgery.

    Science.gov (United States)

    Abdallah, Faraj W; Cil, Tulin; MacLean, David; Madjdpour, Caveh; Escallon, Jaime; Semple, John; Brull, Richard

    2018-07-01

    Serratus fascial plane block can reduce pain following breast surgery, but the question of whether to inject the local anesthetic superficial or deep to the serratus muscle has not been answered. This cohort study compares the analgesic benefits of superficial versus deep serratus plane blocks in ambulatory breast cancer surgery patients at Women's College Hospital between February 2014 and December 2016. We tested the joint hypothesis that deep serratus block is noninferior to superficial serratus block for postoperative in-hospital (pre-discharge) opioid consumption and pain severity. One hundred sixty-six patients were propensity matched among 2 groups (83/group): superficial and deep serratus blocks. The cohort was used to evaluate the effect of blocks on postoperative oral morphine equivalent consumption and area under the curve for rest pain scores. We considered deep serratus block to be noninferior to superficial serratus block if it were noninferior for both outcomes, within noninferiority margins of 15 mg of morphine and 4 cm·h, respectively. Other outcomes included intraoperative fentanyl requirements, time to first analgesic request, recovery room stay, and incidence of postoperative nausea and vomiting. Deep serratus block was associated with postoperative morphine consumption and pain scores area under the curve that were noninferior to those of the superficial serratus block. Intraoperative fentanyl requirements, time to first analgesic request, recovery room stay, and postoperative nausea and vomiting were not different between blocks. The postoperative in-hospital analgesia associated with deep serratus block is as effective (within an acceptable margin) as superficial serratus block following ambulatory breast cancer surgery. These new findings are important to inform both current clinical practices and future prospective studies.

  11. Programming Deep Brain Stimulation for Parkinson's Disease: The Toronto Western Hospital Algorithms.

    Science.gov (United States)

    Picillo, Marina; Lozano, Andres M; Kou, Nancy; Puppi Munhoz, Renato; Fasano, Alfonso

    2016-01-01

    Deep brain stimulation (DBS) is an established and effective treatment for Parkinson's disease (PD). After surgery, a number of extensive programming sessions are performed to define the most optimal stimulation parameters. Programming sessions rely mainly on the neurologist's experience. As a result, patients often undergo inconsistent and inefficient stimulation changes, as well as unnecessary visits. We reviewed the literature on initial and follow-up DBS programming procedures and integrated our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We propose four algorithms including the initial programming and specific algorithms tailored to symptoms experienced by patients following DBS: speech disturbances, stimulation-induced dyskinesia and gait impairment. We conducted a literature search of PubMed from inception to July 2014 with the keywords "deep brain stimulation", "festination", "freezing", "initial programming", "Parkinson's disease", "postural instability", "speech disturbances", and "stimulation induced dyskinesia". Seventy papers were considered for this review. Based on the literature review and our experience at TWH, we refined four algorithms for: (1) the initial programming stage, and management of symptoms following DBS, particularly addressing (2) speech disturbances, (3) stimulation-induced dyskinesia, and (4) gait impairment. We propose four algorithms tailored to an individualized approach to managing symptoms associated with DBS and disease progression in patients with PD. We encourage established as well as new DBS centers to test the clinical usefulness of these algorithms in supplementing the current standards of care. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network

    Science.gov (United States)

    He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang

    2017-03-01

    Gabor filters are widely used to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features must be predetermined in application, and traditional empirical Gabor filters and shallow iris encoding schemes cannot cope with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. We therefore present an adaptive Gabor filter selection strategy and a deep learning architecture. We first employ a particle swarm optimization approach, and its binary version, to define a set of data-driven Gabor kernels fitting the most informative filtering bands, and then capture complex patterns from the optimal Gabor-filtered coefficients with a trained deep belief network. A series of comparative experiments validates that our optimal Gabor filters produce more distinctive coefficients and that our deep iris representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
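
    As a rough illustration of the filtering stage (not the authors' code), the sketch below builds a small bank of real Gabor kernels and convolves it with a normalized iris strip; the kernel size, wavelengths and orientations are placeholder values standing in for the ones the paper selects by particle swarm optimization, and the stacked coefficients would feed the deep belief network.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(size, wavelength, theta, sigma):
        """Real part of a 2-D Gabor kernel of shape (size, size)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinate
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        carrier = np.cos(2 * np.pi * xr / wavelength)
        return envelope * carrier

    # Placeholder bank; the paper tunes wavelengths/orientations by PSO instead.
    bank = [gabor_kernel(21, w, t, sigma=4.0)
            for w in (4, 8, 16)
            for t in np.linspace(0, np.pi, 4, endpoint=False)]

    iris = np.random.rand(64, 256)                    # stand-in normalized iris strip
    coeffs = np.stack([fftconvolve(iris, k, mode='same') for k in bank])
    # `coeffs` (12 x 64 x 256) is what a deep belief network would then encode.
    ```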

  13. Mining Fashion Outfit Composition Using An End-to-End Deep Learning Approach on Set Data

    OpenAIRE

    Li, Yuncheng; Cao, LiangLiang; Zhu, Jiang; Luo, Jiebo

    2016-01-01

    Composing fashion outfits involves deep understanding of fashion standards while incorporating creativity for choosing multiple fashion items (e.g., Jewelry, Bag, Pants, Dress). In fashion websites, popular or high-quality fashion outfits are usually designed by fashion experts and followed by large audiences. In this paper, we propose a machine learning system to compose fashion outfits automatically. The core of the proposed automatic composition system is to score fashion outfit candidates...

  14. Molecular analysis of deep subsurface bacteria

    International Nuclear Information System (INIS)

    Jimenez Baez, L.E.

    1989-09-01

    Deep sediment samples from site C10a, in Appleton, and sites P24, P28, and P29, at the Savannah River Site (SRS), near Aiken, South Carolina, were studied to determine their microbial community composition, DNA homology and mol %G+C. Different geological formations with great variability in hydrogeological parameters were found across the depth profile. Phenotypic identification of deep subsurface bacteria underestimated the bacterial diversity at the three SRS sites, since bacteria with the same phenotype had different DNA composition and less than 70% DNA homology. Total DNA hybridization and mol %G+C analysis of deep sediment bacterial isolates suggested that each formation comprises different microbial communities. Depositional environment was more important than site and geological formation for the DNA relatedness between deep subsurface bacteria, since more than 70% of bacteria with 20% or more DNA homology came from the same depositional environments. Based on phenotypic and genotypic tests, Pseudomonas spp. and Acinetobacter spp.-like bacteria were identified in 85-million-year-old sediments. This suggests that these microbial communities might have adapted over a long period of time to the environmental conditions of the deep subsurface

  15. Joint Training of Deep Boltzmann Machines

    OpenAIRE

    Goodfellow, Ian; Courville, Aaron; Bengio, Yoshua

    2012-01-01

    We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks.

  16. Deep learning enhanced mobile-phone microscopy

    KAUST Repository

    Rivenson, Yair

    2017-12-12

    Mobile-phones have facilitated the creation of field-portable, cost-effective imaging and sensing technologies that approach laboratory-grade instrument performance. However, the optical imaging interfaces of mobile-phones are not designed for microscopy and produce spatial and spectral distortions in imaging microscopic specimens. Here, we report on the use of deep learning to correct such distortions introduced by mobile-phone-based microscopes, facilitating the production of high-resolution, denoised and colour-corrected images, matching the performance of benchtop microscopes with high-end objective lenses, also extending their limited depth-of-field. After training a convolutional neural network, we successfully imaged various samples, including blood smears, histopathology tissue sections, and parasites, where the recorded images were highly compressed to ease storage and transmission for telemedicine applications. This method is applicable to other low-cost, aberrated imaging systems, and could offer alternatives for costly and bulky microscopes, while also providing a framework for standardization of optical images for clinical and biomedical applications.

  17. Deep boreholes; Tiefe Bohrloecher

    Energy Technology Data Exchange (ETDEWEB)

    Bracke, Guido [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH Koeln (Germany); Charlier, Frank [NSE international nuclear safety engineering gmbh, Aachen (Germany); Geckeis, Horst [Karlsruher Institut fuer Technologie (Germany). Inst. fuer Nukleare Entsorgung; and others

    2016-02-15

    The report on deep boreholes covers the following subject areas: methods for safe enclosure of radioactive wastes, requirements concerning the geological conditions of possible boreholes, reversibility of decisions and retrievability, and the status of drilling technology. The introduction covers national and international activities. Further chapters deal with the following issues: basic concept of storage in deep boreholes, status of drilling technology, safe enclosure, geomechanics and stability, reversibility of decisions, risk scenarios, compliance with safety requirements and site selection criteria, and research and development needs.

  18. Industrial automation in floating production vessels for deep water oil and gas fields

    International Nuclear Information System (INIS)

    de Garcia, A.L.; Ferrante, A.J.

    1990-01-01

    In the past, process supervision on offshore platforms was performed with local pneumatic instrumentation based on relays, semi-graphic panels and button-operated control panels. Considering the advanced technology used in the new floating production projects for deep water, it became mandatory to develop supervision systems capable of integrating different control panels, increasing the level of monitoring and reducing the number of operators and control rooms. From the point of view of field integration, a standardized architecture makes communication possible between different production platforms and the regional headquarters, where all the equipment and support infrastructure for the computerized network is installed. This paper describes the characteristics of the initial systems, the main problems observed, the studies performed and the results obtained in relation to the design and implementation of computational systems with open architecture for automation of process control in floating production systems for deep water in Brazil

  19. Effect of Extracorporeal Shock Wave Treatment on Deep Partial-Thickness Burn Injury in Rats: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Gabriel Djedovic

    2014-01-01

    Full Text Available Extracorporeal shock wave therapy (ESWT enhances tissue vascularization and neoangiogenesis. Recent animal studies showed improved soft tissue regeneration using ESWT. In most cases, deep partial-thickness burns require skin grafting; the outcome is often unsatisfactory in function and aesthetic appearance. The aim of this study was to demonstrate the effect of ESWT on skin regeneration after deep partial-thickness burns. Under general anesthesia, two standardized deep partial-thickness burns were induced on the back of 30 male Wistar rats. Immediately after the burn, ESWT was given to rats of group 1 (N=15, but not to group 2 (N=15. On days 5, 10, and 15, five rats of each group were analyzed. Reepithelialization rate was defined, perfusion units were measured, and histological analysis was performed. Digital photography was used for visual documentation. A wound score system was used. ESWT enhanced the percentage of wound closure in group 1 as compared to group 2 (P<0.05. The reepithelialization rate was improved significantly on day 15 (P<0.05. The wound score showed a significant increase in the ESWT group. ESWT improves skin regeneration of deep partial-thickness burns in rats. It may be a suitable and cost effective treatment alternative in this type of burn wounds in the future.

  20. DeepLoc: prediction of protein subcellular localization using deep learning

    DEFF Research Database (Denmark)

    Almagro Armenteros, Jose Juan; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    2017-01-01

    The prediction of eukaryotic protein subcellular localization is a well-studied topic in bioinformatics due to its relevance in proteomics research. Many machine learning methods have been successfully applied in this task, but in most of them, predictions rely on annotation of homologues from...... knowledge databases. For novel proteins where no annotated homologues exist, and for predicting the effects of sequence variants, it is desirable to have methods for predicting protein properties from sequence information only. Here, we present a prediction algorithm using deep neural networks to predict...... current state-of-the-art algorithms, including those relying on homology information. The method is available as a web server at http://www.cbs.dtu.dk/services/DeepLoc . Example code is available at https://github.com/JJAlmagro/subcellular_localization . The dataset is available at http...

  1. Pre-cementation of deep shaft

    Science.gov (United States)

    Heinz, W. F.

    1988-12-01

    Pre-cementation or pre-grouting of deep shafts in South Africa is an established technique to improve safety and reduce water ingress during shaft sinking. The recent completion of several pre-cementation projects for shafts deeper than 1000 m has once again highlighted the effectiveness of pre-grouting of shafts utilizing deep slimline boreholes, incorporating wireline technique for drilling and conventional deep borehole grouting techniques for pre-cementation. Pre-cementation of a deep shaft will: (i) increase the safety of the shaft sinking operation; (ii) minimize water and gas inflow during shaft sinking; (iii) minimize the time lost due to additional grouting operations during sinking of the shaft, and hence minimize costly delays and standing time of shaft sinking crews and equipment; (iv) provide detailed information on the geology of the proposed shaft site: information on anomalies, dykes and faults as well as reef (gold-bearing conglomerate) intersections can be obtained from the evaluation of cores of the pre-cementation boreholes; (v) provide improved rock strength for excavations in the immediate vicinity of the shaft area. The paper describes pre-cementation techniques recently applied successfully from surface and draws some conclusions for further consideration.

  2. Apply lightweight deep learning on internet of things for low-cost and easy-to-access skin cancer detection

    Science.gov (United States)

    Sahu, Pranjal; Yu, Dantong; Qin, Hong

    2018-03-01

    Melanoma is the most dangerous form of skin cancer and often resembles moles. Dermatologists often recommend regular skin examination to identify and eliminate melanoma in its early stages. To facilitate this process, we propose a hand-held computer (smart-phone, Raspberry Pi) based assistant that classifies skin lesion images as malignant or benign with dermatologist-level accuracy and works on a standalone mobile device without requiring network connectivity. In this paper, we propose and implement a hybrid approach based on an advanced deep learning model and the domain-specific knowledge and features that dermatologists use for inspection, to improve the accuracy of classification between benign and malignant skin lesions. Here, domain-specific features include the texture of the lesion boundary, the symmetry of the mole, and the boundary characteristics of the region of interest. We also obtain standard deep features from a pre-trained network optimized for mobile devices, Google's MobileNet. The experiments conducted on the ISIC 2017 skin cancer classification challenge demonstrate the effectiveness and complementary nature of these hybrid features over the standard deep features. We performed experiments with the training, testing and validation data splits provided in the competition. Our method achieved an area under the receiver operating characteristic curve of 0.805. Our ultimate goal is to deploy the trained model on a commercial hand-held mobile and sensor device such as a Raspberry Pi and democratize access to preventive health care.
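
    A minimal sketch of the hybrid-feature idea, assuming a TensorFlow/Keras environment: pooled MobileNet activations serve as the deep features, while two toy asymmetry/border measures stand in for the dermatologist-inspired descriptors described above; `images`, `masks` and `labels` are hypothetical arrays, not artifacts of the paper.

    ```python
    import numpy as np
    import tensorflow as tf
    from sklearn.svm import SVC

    # Pre-trained MobileNet as a fixed extractor of 1024-d pooled deep features.
    backbone = tf.keras.applications.MobileNet(
        weights='imagenet', include_top=False, pooling='avg',
        input_shape=(224, 224, 3))

    def deep_features(images):
        x = tf.keras.applications.mobilenet.preprocess_input(
            images.astype('float32'))
        return backbone.predict(x, verbose=0)

    def handcrafted_features(masks):
        # Toy stand-ins for the paper's domain features: lesion asymmetry
        # and border irregularity, computed from a binary lesion mask.
        feats = []
        for m in masks:
            asymmetry = np.abs(m - np.fliplr(m)).mean()
            border = np.abs(np.diff(m, axis=0)).sum() / max(m.sum(), 1)
            feats.append([asymmetry, border])
        return np.array(feats)

    # images: (N, 224, 224, 3), masks: (N, 224, 224) binary, labels: (N,)
    # X = np.hstack([deep_features(images), handcrafted_features(masks)])
    # clf = SVC(probability=True).fit(X, labels)   # benign vs. malignant
    ```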

  3. Applications of Deep Learning in Biomedicine.

    Science.gov (United States)

    Mamoshina, Polina; Vieira, Armando; Putin, Evgeny; Zhavoronkov, Alex

    2016-05-02

    Increases in throughput and installed base of biomedical research equipment led to a massive accumulation of -omics data known to be highly variable, high-dimensional, and sourced from multiple often incompatible data platforms. While this data may be useful for biomarker identification and drug discovery, the bulk of it remains underutilized. Deep neural networks (DNNs) are efficient algorithms based on the use of compositional layers of neurons, with advantages well matched to the challenges -omics data presents. While achieving state-of-the-art results and even surpassing human accuracy in many challenging tasks, the adoption of deep learning in biomedicine has been comparatively slow. Here, we discuss key features of deep learning that may give this approach an edge over other machine learning methods. We then consider limitations and review a number of applications of deep learning in biomedical studies demonstrating proof of concept and practical utility.

  4. Partitioned learning of deep Boltzmann machines for SNP data.

    Science.gov (United States)

    Hess, Moritz; Lenz, Stefan; Blätte, Tamara J; Bullinger, Lars; Binder, Harald

    2017-10-15

    Learning the joint distributions of measurements, and in particular identification of an appropriate low-dimensional manifold, has been found to be a powerful ingredient of deep learning approaches. Yet, such approaches have hardly been applied to single nucleotide polymorphism (SNP) data, probably due to the high number of features typically exceeding the number of studied individuals. After a brief overview of how deep Boltzmann machines (DBMs), a deep learning approach, can be adapted to SNP data in principle, we specifically present a way to alleviate the dimensionality problem by partitioned learning. We propose a sparse regression approach to coarsely screen the joint distribution of SNPs, followed by training several DBMs on SNP partitions that were identified by the screening. Aggregate features representing SNP patterns and the corresponding SNPs are extracted from the DBMs by a combination of statistical tests and sparse regression. In simulated case-control data, we show how this can uncover complex SNP patterns and augment results from univariate approaches, while maintaining type 1 error control. Time-to-event endpoints are considered in an application with acute myeloid leukemia patients, where SNP patterns are modeled after a pre-screening based on gene expression data. The proposed approach identified three SNPs that seem to jointly influence survival in a validation dataset. This indicates the added value of jointly investigating SNPs compared to standard univariate analyses and makes partitioned learning of DBMs an interesting complementary approach when analyzing SNP data. A Julia package is provided at 'http://github.com/binderh/BoltzmannMachines.jl'. binderh@imbi.uni-freiburg.de. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
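
    The screen-partition-train pipeline can be pictured with the following simplified sketch; shallow RBMs from scikit-learn stand in for the DBMs, random binary data stands in for real SNPs, and all sizes are illustrative (the actual implementation is the authors' Julia package):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import BernoulliRBM

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 1000)).astype(float)  # toy binary SNP matrix
    y = (X[:, :3].sum(axis=1) > 1.5).astype(int)            # toy case/control labels

    # 1) Coarse screening of the joint distribution with sparse (L1) regression.
    screen = LogisticRegression(penalty='l1', solver='liblinear', C=0.5).fit(X, y)
    kept = np.flatnonzero(screen.coef_[0])

    # 2) Partition the retained SNPs and train one model per partition.
    partitions = np.array_split(kept, max(len(kept) // 50, 1))
    models = [BernoulliRBM(n_components=10, random_state=0).fit(X[:, p])
              for p in partitions]

    # 3) Hidden-unit activations act as aggregate SNP-pattern features.
    features = np.hstack([m.transform(X[:, p])
                          for m, p in zip(models, partitions)])
    ```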

  5. Deep Complementary Bottleneck Features for Visual Speech Recognition

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Deep bottleneck features (DBNFs) have been used successfully in the past for acoustic speech recognition from audio. However, research on extracting DBNFs for visual speech recognition is very limited. In this work, we present an approach to extract deep bottleneck visual features based on deep

  6. Producing deep-water hydrocarbons

    International Nuclear Information System (INIS)

    Pilenko, Thierry

    2011-01-01

    Several studies relate the history and progress made in offshore production from oil and gas fields in relation to reserves and the techniques for producing oil offshore. The intention herein is not to review these studies but rather to argue that the activities of prospecting and producing deep-water oil and gas call for a combination of technology and project management and, above all, of devotion and innovation. Without this sense of commitment motivating men and women in this industry, the human adventure of deep-water production would never have taken place

  7. Deep inelastic processes and the parton model

    International Nuclear Information System (INIS)

    Altarelli, G.

    The lecture was intended as an elementary introduction to the physics of deep inelastic phenomena from the point of view of theory. General formulae and facts concerning inclusive deep inelastic processes of the form l+N→l'+hadrons (electroproduction, neutrino scattering) are first recalled. Deep inelastic annihilation, e + e - →hadrons, is then considered. The light-cone approach, the parton model and the relation between them are emphasized
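
    To make the structure-function language concrete: in the standard (Cornwall-Norton) convention, the moments are integrals of the structure functions, and in the parton model F_2 is in turn a charge-weighted sum of quark densities q(x):

    ```latex
    M_n(Q^2) = \int_0^1 \mathrm{d}x \; x^{\,n-2}\, F_2(x, Q^2),
    \qquad
    F_2(x) = \sum_q e_q^2 \, x \, q(x).
    ```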

  8. Life Support for Deep Space and Mars

    Science.gov (United States)

    Jones, Harry W.; Hodgson, Edward W.; Kliss, Mark H.

    2014-01-01

    How should life support for deep space be developed? The International Space Station (ISS) life support system is the operational result of many decades of research and development. Long duration deep space missions such as Mars have been expected to use matured and upgraded versions of ISS life support. Deep space life support must use the knowledge base incorporated in ISS but it must also meet much more difficult requirements. The primary new requirement is that life support in deep space must be considerably more reliable than on ISS or anywhere in the Earth-Moon system, where emergency resupply and a quick return are possible. Due to the great distance from Earth and the long duration of deep space missions, if life support systems fail, the traditional approaches for emergency supply of oxygen and water, emergency supply of parts, and crew return to Earth or escape to a safe haven are likely infeasible. The Orbital Replacement Unit (ORU) maintenance approach used by ISS is unsuitable for deep space with ORUs as large and complex as those originally provided in ISS designs because it minimizes opportunities for commonality of spares, requires replacement of many functional parts with each failure, and results in substantial launch mass and volume penalties. It has become impractical even for ISS after the shuttle era, resulting in the need for ad hoc repair activity at lower assembly levels with consequent crew time penalties and extended repair timelines. Less complex, more robust technical approaches may be needed to meet the difficult deep space requirements for reliability, maintainability, and reparability. Developing an entirely new life support system would neglect what has been achieved. The suggested approach is to use the ISS life support technologies as a platform to build on and to continue to improve ISS subsystems while also developing new subsystems where needed to meet deep space requirements.

  9. Automatic Segmentation and Deep Learning of Bird Sounds

    NARCIS (Netherlands)

    Koops, Hendrik Vincent; Van Balen, J.M.H.; Wiering, F.

    2015-01-01

    We present a study on automatic birdsong recognition with deep neural networks using the BIRDCLEF2014 dataset. Through deep learning, feature hierarchies are learned that represent the data on several levels of abstraction. Deep learning has been applied with success to problems in fields such as

  10. Deep Learning: A Primer for Radiologists.

    Science.gov (United States)

    Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An

    2017-01-01

    Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
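
    The weight-update loop described above can be condensed into a toy two-layer network (illustrative only, unrelated to the article's clinical models):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 8))                 # toy inputs
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0     # toy binary targets

    W1 = rng.standard_normal((8, 16)) * 0.1
    W2 = rng.standard_normal((16, 1)) * 0.1
    lr = 0.1
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(500):
        h = np.tanh(X @ W1)              # forward pass through the hidden layer
        p = sigmoid(h @ W2)              # predicted probability
        # Back-propagate the corrective error signal (cross-entropy gradient).
        d_out = (p - y) / len(X)
        d_hid = (d_out @ W2.T) * (1 - h**2)
        W2 -= lr * h.T @ d_out           # iteratively adjust the weighted
        W1 -= lr * X.T @ d_hid           # connections between the layers
    ```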

  11. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.

    Science.gov (United States)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-11

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only sequence (profile) information is used as the input feature, the best current predictors obtain ~80% Q3 accuracy, a figure that has not improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a deep learning extension of Conditional Neural Fields (CNF), which are an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only the complex sequence-structure relationship through a deep hierarchical architecture, but also the interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF obtains ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disordered regions, and solvent accessibility.
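
    The adjacent-label coupling that distinguishes DeepCNF from a plain deep network can be sketched as follows, with random numbers standing in for the learned emission and transition scores; decoding the highest-scoring label sequence is a standard Viterbi pass:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, S = 50, 3                                  # residues; helix/strand/coil
    emissions = rng.standard_normal((L, S))       # stand-in deep-network scores
    transitions = rng.standard_normal((S, S))     # adjacent-label coupling

    def viterbi(emissions, transitions):
        """Highest-scoring label path under emission + transition scores."""
        L, S = emissions.shape
        score, back = emissions[0].copy(), np.zeros((L, S), dtype=int)
        for t in range(1, L):
            cand = score[:, None] + transitions + emissions[t][None, :]
            back[t] = cand.argmax(axis=0)         # best previous state
            score = cand.max(axis=0)
        path = [int(score.argmax())]
        for t in range(L - 1, 0, -1):             # trace the path backwards
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    labels = viterbi(emissions, transitions)      # predicted SS label per residue
    ```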

  12. Stratification-Based Outlier Detection over the Deep Web.

    Science.gov (United States)

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over the deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in the deep web.
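
    A toy numeric illustration of the stratified scoring idea only; the paper's neighborhood and uncertainty sampling against a live query interface are not reproduced here, and all values are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pilot = rng.normal(50, 10, size=500)        # pilot sample of one attribute
    records = rng.normal(50, 10, size=10000)    # records retrieved via queries
    records[:20] += 80                          # inject a few outliers

    # Stratify the query space using pilot-sample quantiles (5 strata).
    edges = np.quantile(pilot, np.linspace(0, 1, 6))
    strata = np.clip(np.searchsorted(edges, records) - 1, 0, 4)

    # Flag records deviating strongly from their own stratum's statistics.
    outliers = []
    for s in range(5):
        in_s = records[strata == s]
        mu, sd = in_s.mean(), in_s.std() + 1e-9
        outliers.extend(in_s[np.abs(in_s - mu) / sd > 3])
    ```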

  13. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  14. Eric Davidson and deep time.

    Science.gov (United States)

    Erwin, Douglas H

    2017-10-13

    Eric Davidson had a deep and abiding interest in the role developmental mechanisms played in generating evolutionary patterns documented in deep time, from the origin of the euechinoids to the processes responsible for the morphological architectures of major animal clades. Although he was not an evolutionary biologist, Davidson's interests long preceded the current excitement over comparative evolutionary developmental biology. Here I discuss three aspects at the intersection between his research and evolutionary patterns in deep time: First, understanding the mechanisms of body plan formation, particularly those associated with the early diversification of major metazoan clades. Second, a critique of early claims about ancestral metazoans based on the discoveries of highly conserved genes across bilaterian animals. Third, Davidson's own involvement in paleontology through a collaborative study of the fossil embryos from the Ediacaran Doushantuo Formation in south China.

  15. More Far-Side Deep Moonquake Nests Discovered

    Science.gov (United States)

    Nakamura, Y.; Jackson, John A.; Jackson, Katherine G.

    2004-01-01

    As reported last year, we started to reanalyze the seismic data acquired from 1969 to 1977 with a network of stations established on the Moon during the Apollo mission. The reason for the reanalysis was that recent advances in computer technology made it possible to employ much more sophisticated analysis techniques than previously. The primary objective of the reanalysis was to search for deep moonquakes on the far side of the Moon and, if found, to use them to infer the structure of the Moon's deep interior, including a possible central core. The first step was to identify any new deep moonquakes that escaped our earlier search by applying a combination of waveform cross-correlation and single-link cluster analysis, and then to see if any of them are from previously unknown nests of deep moonquakes. We positively identified 7245 deep moonquakes, more than a five-fold increase from the previous 1360. We also found at least 88 previously unknown deep-moonquake nests. The question was whether any of these newly discovered nests were on the far side of the Moon, and we now report that our analysis of the data indicates that some of them are indeed on the far side.
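
    The matching-and-grouping step can be sketched as follows (synthetic waveforms throughout; the correlation threshold is illustrative, not the one used in the study):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def peak_ncc(a, b):
        """Peak normalized cross-correlation between two waveforms."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.abs(np.correlate(a, b, mode='full')).max()

    rng = np.random.default_rng(0)
    events = rng.standard_normal((30, 1024))    # stand-in seismograms

    n = len(events)
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = peak_ncc(events[i], events[j])

    # Single-link clustering on correlation distance groups similar waveforms,
    # i.e., events presumed to originate from the same deep-moonquake nest.
    Z = linkage(squareform(1.0 - sim, checks=False), method='single')
    nests = fcluster(Z, t=0.7, criterion='distance')
    ```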

  16. Highly Simple Deep Eutectic Solvent Extraction of Manganese in Vegetable Samples Prior to Its ICP-OES Analysis.

    Science.gov (United States)

    Bağda, Esra; Altundağ, Hüseyin; Soylak, Mustafa

    2017-10-01

    In the present work, simple and sensitive extraction methods for the selective determination of manganese have been successfully developed. The methods are based on the solubilization of manganese in a deep eutectic solvent medium. Three deep eutectic solvents combining choline chloride (vitamin B4) with tartaric, oxalic, and citric acids were prepared. Extraction parameters were optimized using a standard reference material (1573a tomato leaves). Quantitative recovery was obtained with a sample-to-deep eutectic solvent (DES) ratio of 1.25 g/L, at 95 °C for 2 h. The limit of detection was found to be 0.50, 0.34, and 1.23 μg/L for DES/tartaric, DES/oxalic, and DES/citric acid, respectively. At optimum conditions, the analytical signal was linear over the range of 10-3000 μg/L for all studied DESs, with correlation coefficients >0.99. The extraction methods were applied to different real samples such as basil herb, spinach, dill, and cucumber barks. Known amounts of manganese were spiked into the samples, and good recovery results were obtained.

  17. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field

    Directory of Open Access Journals (Sweden)

    Peter Christiansen

    2016-11-01

    Full Text Available Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors, including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN, while RCNN has similar performance at short range (0–30 m). However, DeepAnomaly has far fewer model parameters and a 7.28-times faster processing time per image (25 ms vs. 182 ms). Unlike most CNN-based methods, its high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
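
    The core anomaly-detection idea can be reduced to a toy per-location background model over CNN feature maps; random arrays stand in for real features, and this is only an illustration of the principle, not the paper's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in CNN feature maps for 100 recent frames of a homogeneous field.
    history = rng.normal(0, 1, size=(100, 30, 40, 64))   # (frames, H, W, C)
    frame = rng.normal(0, 1, size=(30, 40, 64))
    frame[10:14, 20:24] += 6.0                           # an anomalous region

    # Background model: per-location feature mean/std over the recent frames.
    mu = history.mean(axis=0)
    sd = history.std(axis=0) + 1e-6
    z = np.abs((frame - mu) / sd).mean(axis=-1)          # per-location score
    mask = z > 3.0   # locations deviating from the background = obstacles
    ```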

  18. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in the highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
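
    A condensed sketch of the transfer-learning recipe, assuming TensorFlow/Keras, with InceptionV3 standing in for the GoogLeNet used in the study and `train_ds`/`val_ds` as hypothetical labeled datasets:

    ```python
    import tensorflow as tf

    base = tf.keras.applications.InceptionV3(
        weights='imagenet', include_top=False, pooling='avg',
        input_shape=(299, 299, 3))
    base.trainable = False            # start from pre-trained weights

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation='sigmoid')   # frontal vs. lateral
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])

    # Augmentation along the lines described (flips, small rotations, shifts).
    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip('horizontal'),
        tf.keras.layers.RandomRotation(0.05),
        tf.keras.layers.RandomTranslation(0.05, 0.05),
    ])
    # model.fit(train_ds.map(lambda x, y: (augment(x), y)),
    #           validation_data=val_ds, epochs=5)
    ```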

  19. Compendium of Single Event Effects Test Results for Commercial Off-The-Shelf and Standard Electronics for Low Earth Orbit and Deep Space Applications

    Science.gov (United States)

    Reddell, Brandon D.; Bailey, Charles R.; Nguyen, Kyson V.; O'Neill, Patrick M.; Wheeler, Scott; Gaza, Razvan; Cooper, Jaime; Kalb, Theodore; Patel, Chirag; Beach, Elden R.

    2017-01-01

    We present the results of Single Event Effects (SEE) testing with high energy protons and with low and high energy heavy ions for electrical components considered for Low Earth Orbit (LEO) and for deep space applications.

  20. Learning Transferable Features with Deep Adaptation Networks

    OpenAIRE

    Long, Mingsheng; Cao, Yue; Wang, Jianmin; Jordan, Michael I.

    2015-01-01

    Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation...
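
    The dataset-bias penalty at the heart of such adaptation methods is typically a maximum mean discrepancy (MMD) between source and target features; a single-kernel simplification (DAN itself uses a multi-kernel variant) looks like this:

    ```python
    import numpy as np

    def rbf_mmd2(X, Y, sigma=1.0):
        """Squared MMD with an RBF kernel (biased estimate)."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))
        return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

    rng = np.random.default_rng(0)
    source = rng.normal(0.0, 1.0, size=(128, 64))   # task-layer features, source
    target = rng.normal(0.5, 1.0, size=(128, 64))   # task-layer features, target
    penalty = rbf_mmd2(source, target)
    # Training would minimize: classification_loss + lambda * penalty
    ```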

  1. Theory of deep inelastic lepton-hadron scattering

    International Nuclear Information System (INIS)

    Geyer, B.; Robaschik, D.; Wieczorek, E.

    1979-01-01

    The description of deep inelastic lepton-nucleon scattering in the lowest order of the electromagnetic and weak coupling constants leads to a study of virtual Compton amplitudes and their absorptive parts. Some aspects of quantum chromodynamics are discussed. Deep inelastic scattering gives access to a central quantity of quantum field theory, namely the light-cone behaviour of the current commutator. The moments of the structure functions are used for the description of deep inelastic scattering. (author)

  2. Comparison of the induced fields using different coil configurations during deep transcranial magnetic stimulation.

    Directory of Open Access Journals (Sweden)

    Mai Lu

    Full Text Available Stimulation of deeper brain structures by transcranial magnetic stimulation (TMS) plays a role in the study of reward and motivation mechanisms, which may be beneficial in the treatment of several neurological and psychiatric disorders. However, the electric field distributions induced in the brain by deep transcranial magnetic stimulation (dTMS) are still unknown. In this paper, the double cone coil, H-coil and Halo-circular assembly (HCA) coil, which have been proposed for dTMS, were numerically designed. The distributions of magnetic flux density and induced electric field in an anatomically based realistic head model produced by the dTMS coils were numerically calculated by the impedance method, and the results were compared with those of a standard figure-of-eight (Fo8) coil. Simulation results show that the double cone, H- and HCA coils have significantly deeper field penetration than the conventional Fo8 coil, at the expense of higher and more widely spread electric fields induced in superficial cortical regions. The double cone and HCA coils have a better ability to stimulate deep brain subregions than the H-coil. At the same time, both the double cone and HCA coils increase the risk of optic nerve excitation. Our results suggest that although dTMS coils offer a new tool with potential for both research and clinical applications in psychiatric and neurological disorders associated with dysfunction of deep brain regions, the most suitable coil settings for a specific clinical application should be selected through a balanced evaluation of stimulation depth and focality.

  3. Multiscale deep drawing analysis of dual-phase steels using grain cluster-based RGC scheme

    International Nuclear Information System (INIS)

    Tjahjanto, D D; Eisenlohr, P; Roters, F

    2015-01-01

    Multiscale modelling and simulation play an important role in sheet metal forming analysis, since the overall material response at macroscopic engineering scales, e.g. formability and anisotropy, is strongly influenced by microstructural properties such as grain size and crystal orientations (texture). In the present report, a multiscale analysis of the deep drawing of dual-phase steels is performed using an efficient grain cluster-based homogenization scheme. The homogenization scheme, called relaxed grain cluster (RGC), is based on a generalization of the grain cluster concept, where a (representative) volume element consists of p × q × r (hexahedral) grains. In this scheme, variation of the strain or deformation of individual grains is taken into account through so-called interface relaxation, which is formulated within an energy minimization framework; an interfacial penalty term is introduced into this framework in order to account for the effects of grain boundaries. The grain cluster-based homogenization scheme has been implemented and incorporated into the advanced material simulation platform DAMASK, which bridges the macroscale boundary value problems associated with deep drawing analysis to micromechanical constitutive laws, e.g. crystal plasticity models. Standard Lankford anisotropy tests are performed to validate the model parameters prior to the deep drawing analysis. Model predictions for the deep drawing simulations are analyzed and compared with the corresponding experimental data. The results show that the predictions of the model are in very good agreement with the experimental measurements. (paper)

  4. DeepQA: Improving the estimation of single protein model quality with deep belief networks

    OpenAIRE

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-01-01

    Background: Protein quality assessment (QA), useful for ranking and selecting protein models, has long been viewed as one of the major challenges for protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. Results: We introduce a novel single-model quality assessment method DeepQA based on deep belie...

  5. DeepDive: Declarative Knowledge Base Construction.

    Science.gov (United States)

    De Sa, Christopher; Ratner, Alex; Ré, Christopher; Shin, Jaeho; Wang, Feiran; Wu, Sen; Zhang, Ce

    2016-03-01

    The dark data extraction or knowledge base construction (KBC) problem is to populate a SQL database with information from unstructured data sources including emails, webpages, and pdf reports. KBC is a long-standing problem in industry and research that encompasses problems of data extraction, cleaning, and integration. We describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems. The key idea in DeepDive is that statistical inference and machine learning are key tools to attack classical data problems in extraction, cleaning, and integration in a unified and more effective manner. DeepDive programs are declarative in that one cannot write probabilistic inference algorithms; instead, one interacts by defining features or rules about the domain. A key reason for this design choice is to enable domain experts to build their own KBC systems. We present the applications, abstractions, and techniques of DeepDive employed to accelerate construction of KBC systems.

  6. Pathways to deep decarbonization - 2015 report

    International Nuclear Information System (INIS)

    Ribera, Teresa; Colombier, Michel; Waisman, Henri; Bataille, Chris; Pierfederici, Roberta; Sachs, Jeffrey; Schmidt-Traub, Guido; Williams, Jim; Segafredo, Laura; Hamburg Coplan, Jill; Pharabod, Ivan; Oury, Christian

    2015-12-01

    In September 2015, the Deep Decarbonization Pathways Project published the Executive Summary of the Pathways to Deep Decarbonization: 2015 Synthesis Report. The full 2015 Synthesis Report was launched in Paris on December 3, 2015, at a technical workshop with the Mitigation Action Plans and Scenarios (MAPS) program. The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (deg. C). Achieving the 2 deg. C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. In turn, this will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization'.

  7. Resection of deep-seated gliomas using neuroimaging for stereotactic placement of guidance catheters

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, Kengo; Higashi, Hisato; Tomita, Susumu; Furuta, Tomohisa; Ohmoto, Takashi [Okayama Univ. (Japan). School of Medicine

    1995-03-01

    A simple computed tomography- (CT) or magnetic resonance (MR) imaging-guided stereotactic method for the guided microsurgical resection of either deep-seated gliomas or tumors adjacent to an eloquent area is described. The technique employs the Brown-Roberts-Wells stereotactic system and twist drills, 2.7 mm in diameter, for the stereotactic placement of 2.4 mm diameter scaled guidance catheters through the calvaria. In a patient with a small deep-seated glioma, less than 2 cm in diameter, one catheter was implanted into the center of the enhanced mass through the cerebral cortex. In the other 14 patients, three to six catheters were used, which made the tumor border clearer. After implantation of the guidance catheters, the stereotactic frame was removed and a standard open craniotomy performed. Target localization is not affected by brain movement, which is inevitable during open surgery. The tumor involved the frontal lobe in eight patients, the parietal lobe in two, and the thalamus in five. In all cases the lesion was quickly localized and radical removal was achieved. Neurological complications occurred in only one patient, who suffered transient hemiparesis after the resection of a lesion in the pyramidal tract. The results demonstrate that microsurgery combined with CT- or MR imaging-guided stereotactic placement of guidance catheters is a new option for surgery of deep-seated gliomas or tumors adjacent to an eloquent area. (author).

  8. Convolutional deep belief network with feature encoding for classification of neuroblastoma histological images

    Directory of Open Access Journals (Sweden)

    Soheila Gheisari

    2018-01-01

    Full Text Available Background: Neuroblastoma is the most common extracranial solid tumor in children younger than 5 years old. Optimal management of neuroblastic tumors depends on many factors, including histopathological classification. The gold standard for the classification of neuroblastoma histological images is visual microscopic assessment. In this study, we propose and evaluate a deep learning approach to classify high-resolution digital images of neuroblastoma histology into five different classes determined by the Shimada classification. Subjects and Methods: We apply a combination of a convolutional deep belief network (CDBN) with a feature encoding algorithm that automatically classifies digital images of neuroblastoma histology into five different classes. We design a three-layer CDBN to extract high-level features from neuroblastoma histological images and combine it with a feature encoding model to extract features that are highly discriminative in the classification task. The extracted features are classified into five different classes using a support vector machine classifier. Data: We constructed a dataset of 1043 neuroblastoma histological images, acquired with an Aperio scanner, from 125 patients representing different classes of neuroblastoma tumors. Results: A weighted average F-measure of 86.01% was obtained from the selected high-level features, outperforming state-of-the-art methods. Conclusion: The proposed computer-aided classification system, which uses a combination of deep architecture and feature encoding to learn high-level features, is highly effective in the classification of neuroblastoma histological images.

  9. Deep hierarchical attention network for video description

    Science.gov (United States)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model for reducing a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model outperforms the state-of-the-art techniques.
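
    A minimal sketch of the encoder half of such a model follows: per-frame CNN feature vectors pass through a bidirectional LSTM so each frame's representation carries temporal context in both directions. The frame count, feature size, and LSTM width are illustrative assumptions; the hierarchical attention decoder is only indicated in a comment.

      from tensorflow import keras
      from tensorflow.keras import layers

      frames = keras.Input(shape=(16, 2048))  # 16 frames x per-frame CNN features
      encoded = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(frames)
      # 'encoded' (16 x 512) would feed a hierarchical attention decoder that
      # re-weights frames while emitting each word of the description.
      encoder = keras.Model(frames, encoded)
      print(encoder.output_shape)  # (None, 16, 512)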

  10. Measuring Habitat Quality for Deep-Sea Corals and Sponges to Add Conservation Value to Telepresence-Enabled Science and Technology

    Science.gov (United States)

    Etnoyer, P. J.; Hourigan, T. F.; Reser, B.; Monaco, M.

    2016-02-01

    The growing fleet of telepresence-enabled research vessels equipped with deep-sea imaging technology provides a new opportunity to catalyze and coordinate research efforts among ships. This development is particularly useful for studying the distribution and diversity of deep-sea corals, which occur worldwide from 50 to 8600 m depth. Marine managers around the world seek to conserve these habitats, but require a clear consensus on what types of information are most important and most relevant for marine conservation. The National Oceanic and Atmospheric Administration (NOAA) seeks to develop a reproducible, non-invasive set of ROV methods designed to measure conservation value, or habitat quality, for deep-sea corals and sponges. New tools and methods will be proposed to inform ocean resource management, as well as facilitate research, outreach, and education. A new database schema will be presented, building upon the Ocean Biogeographic Information System (OBIS) and efforts of submersible and ROV teams over the years. Visual information about corals and sponges has proven paramount, particularly high-quality images with standard attributes for marine geology and marine biology, including scientific names, colony size, health, abundance, and density. Improved habitat suitability models can be developed from these data if presence and absence are measured. Recent efforts to incorporate physical sampling into telepresence protocols further increase the value of such information. It is possible for systematic observations with small file sizes to be distributed as geo-referenced, time-stamped still images with environmental variables for water chemistry and a standardized habitat classification. The technique is common among researchers, but a distributed network for this information is still in its infancy. One goal of this presentation is to make progress towards a more integrated network of these measured observations of habitat quality to better facilitate

  11. Forecasting air quality time series using deep learning.

    Science.gov (United States)

    Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse

    2018-04-13

    This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis of identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours ahead with low error rates. The LSTM was able to forecast the duration of continuous O3 exceedances as well. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps less than eight time steps with incremental steps based on first-order differences of neighboring time periods. Data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 features to 5 features, resulting in improved accuracy as measured by Mean Absolute Error (MAE). Parameter sensitivity analysis identified that look-back nodes associated with the RNN were a significant source of error if not aligned with the prediction horizon. Overall, MAEs of less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured data set were replaced using a new imputation method that generated calculated values closer to the expected value based on the time and season. Decision trees were used to identify input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long range air pollution
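
    As a rough sketch of the forecasting setup, the snippet below pairs sliding look-back windows with a value a fixed horizon ahead and trains a small LSTM with an MAE loss; the synthetic series, window length, and network size are assumptions for illustration, not the paper's configuration.

      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      def make_windows(series, look_back, horizon):
          """Pair each look_back-step window with the value `horizon` steps ahead."""
          X, y = [], []
          for i in range(len(series) - look_back - horizon):
              X.append(series[i:i + look_back])
              y.append(series[i + look_back + horizon - 1])
          return np.array(X)[..., None], np.array(y)

      series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)  # toy ozone proxy
      X, y = make_windows(series, look_back=24, horizon=8)

      model = keras.Sequential([
          keras.Input(shape=(24, 1)),
          layers.LSTM(32),
          layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mae")  # MAE, as in the paper's evaluation
      model.fit(X, y, epochs=2, batch_size=64, verbose=0)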

  12. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. Hand-designing an effective feature is a lengthy process, but, aiming at new applications, deep learning can acquire new effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data, instead of adopting handcrafted features, which depend mainly on the designers' prior knowledge and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, with models involving millions of parameters. This paper first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.

  13. Constant-resistance deep-level transient spectroscopy in Si and Ge JFET's

    International Nuclear Information System (INIS)

    Kolev, P.V.; Deen, J.

    1999-01-01

    The recently introduced constant-resistance deep-level transient spectroscopy (CR-DLTS) was successfully applied to study virgin and radiation-damaged junction field-effect transistors (JFET's). The authors have studied three groups of devices: commercially available discrete silicon JFET's; custom-made silicon JFET's fabricated with a monolithic technology, both virgin and exposed to high-level neutron radiation; and commercially available discrete germanium p-channel JFET's. CR-DLTS is similar to both the conductance DLTS and the constant-capacitance variant (CC-DLTS). Unlike the conductance and current DLTS, it is independent of the transistor size and does not require simultaneous measurement of the transconductance or the free-carrier mobility for calculation of the trap concentration. Compared to CC-DLTS, it measures only the traps inside the gate-controlled part of the space-charge region. Comparisons have also been made with CC-DLTS and standard capacitance DLTS. In addition, possibilities for defect profiling in the channel have been demonstrated. CR-DLTS was found to be a simple, very sensitive, and device-area-independent technique which is well suited for measurement of a wide range of deep-level concentrations in transistors.

  14. Deep-seated sarcomas of the penis

    Directory of Open Access Journals (Sweden)

    Alberto A. Antunes

    2005-06-01

    Full Text Available Mesenchymal neoplasias represent 5% of tumors affecting the penis. Due to the rarity of such tumors, there is no agreement concerning the best method for staging and managing these patients. Sarcomas of the penis can be classified as deep-seated if they derive from the structures forming the spongy body and the cavernous bodies. Superficial lesions are usually low-grade and show a small tendency towards distant metastasis. In contrast, deep-seated lesions usually show behavior that is more aggressive and have poorer prognosis. The authors report 3 cases of deep-seated primary sarcomas of the penis and review the literature on this rare and aggressive neoplasia.

  15. In Brief: Deep-sea observatory

    Science.gov (United States)

    Showstack, Randy

    2008-11-01

    The first deep-sea ocean observatory offshore of the continental United States has begun operating in the waters off central California. The remotely operated Monterey Accelerated Research System (MARS) will allow scientists to monitor the deep sea continuously. Among the first devices to be hooked up to the observatory are instruments to monitor earthquakes, videotape deep-sea animals, and study the effects of acidification on seafloor animals. "Some day we may look back at the first packets of data streaming in from the MARS observatory as the equivalent of those first words spoken by Alexander Graham Bell: 'Watson, come here, I need you!'" commented Marcia McNutt, president and CEO of the Monterey Bay Aquarium Research Institute, which coordinated construction of the observatory. For more information, see http://www.mbari.org/news/news_releases/2008/mars-live/mars-live.html.

  16. Deep Learning for Computer Vision: A Brief Review

    Science.gov (United States)

    Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619

  17. Deep Learning for Computer Vision: A Brief Review

    Directory of Open Access Journals (Sweden)

    Athanasios Voulodimos

    2018-01-01

    Full Text Available Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.

  18. Deep Learning for Computer Vision: A Brief Review.

    Science.gov (United States)

    Voulodimos, Athanasios; Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.
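
    Of the architectures this review covers, the denoising autoencoder is the simplest to sketch: the network is trained to reconstruct clean inputs from corrupted ones, and stacking several such layers yields a stacked denoising autoencoder. The input size, layer width, noise level, and random stand-in data below are illustrative assumptions.

      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      inputs = keras.Input(shape=(784,))              # e.g. flattened 28x28 images
      encoded = layers.Dense(128, activation="relu")(inputs)
      decoded = layers.Dense(784, activation="sigmoid")(encoded)
      autoencoder = keras.Model(inputs, decoded)
      autoencoder.compile(optimizer="adam", loss="mse")

      x = np.random.rand(256, 784).astype("float32")
      x_noisy = np.clip(x + 0.2 * np.random.randn(256, 784), 0.0, 1.0).astype("float32")
      # Denoising objective: reconstruct the clean x from the corrupted x_noisy.
      autoencoder.fit(x_noisy, x, epochs=1, batch_size=32, verbose=0)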

  19. Is deep dreaming the new collage?

    Science.gov (United States)

    Boden, Margaret A.

    2017-10-01

    Deep dreaming (DD) can combine and transform images in surprising ways. But, being based in deep learning (DL), it is not analytically understood. Collage is an art form that is constrained along various dimensions. DD will not be able to generate collages until DL can be guided in a disciplined fashion.

  20. Improved detection of CXCR4-using HIV by V3 genotyping: application of population-based and "deep" sequencing to plasma RNA and proviral DNA.

    Science.gov (United States)

    Swenson, Luke C; Moores, Andrew; Low, Andrew J; Thielen, Alexander; Dong, Winnie; Woods, Conan; Jensen, Mark A; Wynhoven, Brian; Chan, Dennison; Glascock, Christopher; Harrigan, P Richard

    2010-08-01

    Tropism testing should rule out CXCR4-using HIV before treatment with CCR5 antagonists. Currently, the recombinant phenotypic Trofile assay (Monogram) is most widely utilized; however, genotypic tests may represent alternative methods. Independent triplicate amplifications of the HIV gp120 V3 region were made from either plasma HIV RNA or proviral DNA. These underwent standard, population-based sequencing with an ABI3730 (RNA n = 63; DNA n = 40), or "deep" sequencing with a Roche/454 Genome Sequencer-FLX (RNA n = 12; DNA n = 12). Position-specific scoring matrices (PSSMX4/R5) (-6.96 cutoff) and geno2pheno[coreceptor] (5% false-positive rate) inferred tropism from V3 sequence. These methods were then independently validated with a separate, blinded dataset (n = 278) of screening samples from the maraviroc MOTIVATE trials. Standard sequencing of HIV RNA with PSSM yielded 69% sensitivity and 91% specificity, relative to Trofile. The validation dataset gave 75% sensitivity and 83% specificity. Proviral DNA plus PSSM gave 77% sensitivity and 71% specificity. "Deep" sequencing of HIV RNA detected >2% inferred-CXCR4-using virus in 8/8 samples called non-R5 by Trofile, and <2% in 4/4 samples called R5. Triplicate analyses of V3 standard sequence data detect greater proportions of CXCR4-using samples than previously achieved. Sequencing proviral DNA and "deep" V3 sequencing may also be useful tools for assessing tropism.
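
    The PSSM step reduces to summing per-position log-odds scores over the 35-residue V3 loop and comparing the total with a cutoff (-6.96 in the record above). The toy sketch below uses a random matrix in place of a trained PSSMX4/R5 matrix, so its tropism call is illustrative only; the example V3 sequence is a commonly cited consensus, not one from the study.

      import numpy as np

      AA = "ACDEFGHIKLMNPQRSTVWY"
      V3_LEN = 35
      rng = np.random.default_rng(0)
      # Random stand-in for a trained position-specific scoring matrix.
      pssm = {(i, a): rng.normal() for i in range(V3_LEN) for a in AA}

      def pssm_score(v3_seq):
          """Sum per-position scores; positions/residues not in the matrix are skipped."""
          return sum(pssm[(i, a)] for i, a in enumerate(v3_seq) if (i, a) in pssm)

      seq = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"  # canonical 35-residue V3 loop
      cutoff = -6.96  # published PSSM cutoff; meaningless with a random matrix
      call = "non-R5 (CXCR4-using)" if pssm_score(seq) >= cutoff else "R5"
      print(call)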

  1. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    Science.gov (United States)

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
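
    For concreteness, a minimal sketch of the classical (non-overlapping) AVAR and its weighted variant follows; the formulas are the standard definitions, while the white-noise test series is invented for illustration.

      import numpy as np

      def allan_variance(y, m):
          """Non-overlapping Allan variance of series y at averaging factor m."""
          n_blocks = len(y) // m
          block_means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
          diffs = np.diff(block_means)
          return 0.5 * np.mean(diffs ** 2)

      def weighted_allan_variance(y, w, m):
          """WAVAR: block means use per-point weights w (e.g. 1/sigma**2)."""
          n_blocks = len(y) // m
          yb = y[:n_blocks * m].reshape(n_blocks, m)
          wb = w[:n_blocks * m].reshape(n_blocks, m)
          block_means = (wb * yb).sum(axis=1) / wb.sum(axis=1)
          diffs = np.diff(block_means)
          return 0.5 * np.mean(diffs ** 2)

      # For white noise, AVAR should fall roughly as 1/m.
      rng = np.random.default_rng(0)
      series = rng.normal(size=4096)
      for m in (1, 4, 16, 64):
          print(m, allan_variance(series, m))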

  2. Density functionals from deep learning

    OpenAIRE

    McMahon, Jeffrey M.

    2016-01-01

    Density-functional theory is a formally exact description of a many-body quantum system in terms of its density; in practice, however, approximations to the universal density functional are required. In this work, a model based on deep learning is developed to approximate this functional. Deep learning allows computational models that are capable of naturally discovering intricate structure in large and/or high-dimensional data sets, with multiple levels of abstraction. As no assumptions are ...

  3. A Survey: Time Travel in Deep Learning Space: An Introduction to Deep Learning Models and How Deep Learning Models Evolved from the Initial Ideas

    OpenAIRE

    Wang, Haohan; Raj, Bhiksha

    2015-01-01

    This report will show how the history of deep learning has evolved. It will trace back as far as the initial belief in connectionist modelling of the brain, and then come back to look at its early-stage realization: neural networks. With the background of neural networks, we will gradually introduce how the convolutional neural network, as a representative of deep discriminative models, developed from neural networks, together with many practical techniques that can help in the optimization of neural networks. On t...

  4. Assessment of deep geological environment condition

    International Nuclear Information System (INIS)

    Bae, Dae Seok; Han, Kyung Won; Joen, Kwan Sik

    2003-05-01

    The main tasks of the geoscientific study in the 2nd stage focused mainly on the near-field conditions of the deep geologic environment, and aimed to generate the geologic input data for a Korean reference disposal system for high-level radioactive wastes and to establish a site characterization methodology, including neotectonic features, fracture systems and mechanical properties of plutonic rocks, and hydrogeochemical characteristics. The preliminary assessment of neotectonics in the Korean peninsula was performed on the basis of recorded seismicity, investigated Quaternary faults, uplift characteristics studied in limited areas, and the distribution of the major regional faults and their characteristics. The local fracture system was studied in detail from the data obtained from deep boreholes in granitic terrain. Through this deep drilling project, the geometrical and hydraulic properties of different fracture sets were statistically analysed on a block scale. The mechanical properties of intact rocks were evaluated from the core samples by laboratory testing, and the in-situ stress conditions were estimated by a hydraulic fracturing test in the boreholes. The hydrogeochemical conditions in the deep boreholes were characterized based on hydrochemical composition and isotopic signatures, and their interrelation with a major fracture system was assessed. The residence time of deep groundwater was estimated by C-14 dating. For the travel time of groundwater between the boreholes, the methodology and equipment for tracer tests were established.

  5. Deep Energy Retrofit

    DEFF Research Database (Denmark)

    Zhivov, Alexander; Lohse, Rüdiger; Rose, Jørgen

    Deep Energy Retrofit – A Guide to Achieving Significant Energy Use Reduction with Major Renovation Projects contains recommendations for characteristics of some core technologies and measures that are based on studies conducted by national teams associated with the International Energy Agency Energy Conservation in Buildings and Communities Program (IEA-EBC) Annex 61 (Lohse et al. 2016, Case et al. 2016, Rose et al. 2016, Yao et al. 2016, Dake 2014, Stankevica et al. 2016, Kiatreungwattana 2014). Results of these studies provided a basis for setting minimum requirements for the building envelope-related technologies needed to make Deep Energy Retrofit feasible and, in many situations, cost effective. Use of energy efficiency measures (EEMs) in addition to the core technologies bundle and high-efficiency appliances will foster further energy use reduction. This Guide also provides best practice

  6. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy

    Science.gov (United States)

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-01

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver-operator characteristics.

  7. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy.

    Science.gov (United States)

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-06

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver-operator characteristics.
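
    The baseline technique that both records emulate can be sketched compactly: for whitened data with unit-variance noise, correlating against a unit-norm template yields an output directly interpretable as SNR, and a peak above threshold flags a candidate. The toy chirp template and injection below are assumptions for illustration, not the study's waveforms.

      import numpy as np

      def matched_filter_snr(strain, template):
          """Sliding matched filter for whitened, unit-variance data."""
          template = template / np.sqrt(np.sum(template ** 2))  # unit-norm template
          return np.correlate(strain, template, mode="valid")   # SNR time series

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 1.0, 512)
      template = np.sin(2 * np.pi * (30 + 60 * t) * t) * np.exp(-4 * (1 - t))  # toy chirp

      data = rng.normal(size=4096)  # whitened Gaussian noise
      data[2000:2000 + template.size] += 8.0 * template / np.linalg.norm(template)

      snr = matched_filter_snr(data, template)
      print("peak SNR %.1f at index %d" % (snr.max(), snr.argmax()))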

  8. Precise Th/U-dating of small and heavily coated samples of deep sea corals

    Science.gov (United States)

    Lomitschka, Michael; Mangini, Augusto

    1999-07-01

    Marine carbonate skeletons like deep-sea corals are frequently coated with iron and manganese oxides/hydroxides, which adsorb additional thorium and uranium out of the sea water. A new cleaning procedure has been developed to reduce this contamination. In this further cleaning step, a solution of Na2EDTA and ascorbic acid is used whose composition is optimised especially for samples of 20 mg weight. It was first tested on aliquots of a reef-building coral that had been artificially contaminated with powdered ferromanganese nodule. Applied to heavily contaminated deep-sea corals (scleractinia), it reduced excess 230Th by another order of magnitude beyond the usual cleaning procedures. The measurement of at least three fractions of different contamination, together with an additional standard correction for contaminated carbonates, results in Th/U-ages corrected for the authigenic component. A good agreement between Th/U- and 14C-ages can be achieved even for extremely coated corals.

  9. Deep-well injection of radioactive waste in Russia

    International Nuclear Information System (INIS)

    Hoek, J.

    1998-01-01

    In the Russian Federation, deep borehole injection of liquid radioactive waste has been established practice since at least 1963. The liquid is injected into sandy or other formations with high porosity, which are isolated by water-tight layers. This technique has also been used elsewhere for toxic liquid waste and residues from mining operations. Deep-well injection of radioactive waste is not currently used in any of the European Commission (EC) countries. In this paper the results of an EC-funded study are presented. The study is entitled 'Measurements, modelling of migration and possible radiological consequences at deep well injection sites for liquid radioactive waste in Russia', COSU-CT94-0099-UK. The study was carried out jointly by AEA Technology, CAG and the Research Institute for Nuclear Reactors (NIIAR) at Dimitrovgrad. Many scientists have contributed to the results reported here. The aims of the study are: Provision of extensive information on the deep-well injection repositories and their use in the former Soviet Union; Provision of a methodology to assess safety aspects of deep-well injection of liquid radioactive waste in deep geological formations; This will allow evaluation of proposals to use deep-well injection techniques in other regions; Support for Russian regulatory bodies through evaluation of the suitability of the sites, including estimates of the maximum amount of waste that can be safely stored in them; and Provision of a methodology to assess the use of deep-well injection repositories as an alternative disposal technique for EC countries. 7 refs

  10. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2014-11-05

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.

  11. DEEP: a general computational framework for predicting enhancers

    KAUST Repository

    Kleftogiannis, Dimitrios A.; Kalnis, Panos; Bajic, Vladimir B.

    2014-01-01

    Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for the bioinformatics research. Although existing methodologies increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets and ad hoc rules for selecting enhancer candidates for supervised learning, are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer's properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from VISTA database. DEEP-VISTA, when tested on an independent test set, achieved GM of 80.1% and accuracy of 89.64%. DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.
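
    The core ensemble idea, training several diverse classifiers on histone-mark or sequence features and combining their votes, can be sketched with scikit-learn as below; the random features, labels, and choice of base classifiers are stand-ins for illustration, not DEEP's actual components.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(600, 20))              # e.g. histone-mark signals per region
      y = (X[:, :3].sum(axis=1) > 0).astype(int)  # enhancer (1) vs. non-enhancer (0)

      # Three diverse base models combined by soft (probability-averaged) voting.
      ensemble = VotingClassifier([
          ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
          ("lr", LogisticRegression(max_iter=1000)),
          ("svm", SVC(probability=True)),
      ], voting="soft").fit(X[:500], y[:500])

      print("held-out accuracy:", ensemble.score(X[500:], y[500:]))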

  12. [Research of electroencephalography representational emotion recognition based on deep belief networks].

    Science.gov (United States)

    Yang, Hao; Zhang, Junran; Jiang, Xiaomei; Liu, Fei

    2018-04-01

    In recent years, with the rapid development of machine learning techniques, the deep learning algorithm has been widely used in one-dimensional physiological signal processing. In this paper we used electroencephalography (EEG) signals based on a deep belief network (DBN) model in open source deep learning frameworks to identify emotional state (positive, negative and neutral), and then the results of the DBN were compared with a support vector machine (SVM). The EEG signals were collected from subjects who were under different emotional stimuli, and DBN and SVM were adopted to identify the EEG signals with changes of different characteristics and different frequency bands. We found that the average accuracy of the differential entropy (DE) feature by DBN is 89.12%±6.54%, which is a better performance than previous research based on the same data set. At the same time, the classification results of DBN are better than those from the traditional SVM (average classification accuracy of 84.2%±9.24%), and its accuracy and stability show a better trend. In three experiments with different time points, a single subject can achieve consistent classification results using DBN (the mean standard deviation is 1.44%), and the experimental results show that the system has steady performance and good repeatability. According to our research, the DE feature gives better classification results than other features. Furthermore, the Beta band and the Gamma band in the emotional recognition model have higher classification accuracy. To sum up, classifier performance improves with the deep learning algorithm, which provides a reference for establishing a more accurate system of emotional recognition. Meanwhile, we can trace the recognition results to identify the brain regions and frequency bands related to the emotions, which can help us understand the emotional mechanism better. This study has a high academic value and
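
    The DE feature that performs best here has a simple closed form for an approximately Gaussian band-limited signal, DE = 0.5*ln(2*pi*e*variance), computed per frequency band. The sketch below band-passes a stand-in EEG channel and computes DE for the Beta and Gamma bands; the sampling rate and band edges are assumptions for illustration.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def bandpass(x, lo, hi, fs, order=4):
          """Butterworth band-pass between lo and hi Hz."""
          b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return filtfilt(b, a, x)

      def differential_entropy(x):
          """DE of an approximately Gaussian signal: 0.5 * ln(2*pi*e*var)."""
          return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

      fs = 200  # assumed sampling rate, Hz
      rng = np.random.default_rng(0)
      eeg = rng.normal(size=fs * 10)  # stand-in for one EEG channel
      bands = {"beta": (14, 31), "gamma": (31, 50)}
      features = {name: differential_entropy(bandpass(eeg, lo, hi, fs))
                  for name, (lo, hi) in bands.items()}
      print(features)  # these per-band DE values would feed the DBN or SVM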

  13. Deep-Space Ka-Band Flight Experience

    Science.gov (United States)

    Morabito, D. D.

    2017-11-01

    Lower frequency bands have become more congested in allocated bandwidth as there is increased competition between flight projects and other entities. Going to higher frequency bands offers significantly more bandwidth, allowing for the use of much higher data rates. However, Ka-band is more susceptible to weather effects than the lower frequency bands currently used for most standard downlink telemetry operations. Future or prospective flight projects considering deep-space Ka-band (32-GHz) telemetry data links have expressed an interest in understanding past flight experience with received Ka-band downlink performance. Especially important to these flight projects is gaining a better understanding of weather effects from the experience of current or past missions that operated Ka-band radio systems. We will discuss the historical flight experience of several Ka-band missions starting from Mars Observer in 1993 up to present-day deep-space missions such as Kepler. The study of historical Ka-band flight experience allows one to recommend margin policy for future missions. Of particular interest, we will review previously reported flight experience with the Cassini spacecraft Ka-band radio system that has been used for radio science investigations as well as engineering studies from 2004 to 2015, when Cassini was in orbit around the planet Saturn. In this article, we will focus primarily on the Kepler spacecraft Ka-band link, which has been used for operational telemetry downlink from the Earth-trailing orbit where the spacecraft resides. We analyzed the received Ka-band signal level data in order to characterize link performance over a wide range of weather conditions and as a function of elevation angle. Based on this analysis of Kepler and Cassini flight data, we found that a 4-dB margin with respect to adverse conditions ensures that we achieve at least a 95 percent data return.

  14. Deep Question Answering for protein annotation.

    Science.gov (United States)

    Gobeill, Julien; Gaudinat, Arnaud; Pasche, Emilie; Vishnyakova, Dina; Gaudet, Pascale; Bairoch, Amos; Ruch, Patrick

    2015-01-01

    Biomedical professionals have access to a huge amount of literature, but when they use a search engine, they often have to deal with too many documents to efficiently find the appropriate information in a reasonable time. In this perspective, question-answering (QA) engines are designed to display answers, which were automatically extracted from the retrieved documents. Standard QA engines in the literature process a user question, then retrieve relevant documents and finally extract some possible answers out of these documents using various named-entity recognition processes. In our study, we try to answer complex genomics questions, which can be adequately answered only using Gene Ontology (GO) concepts. Such complex answers cannot be found using state-of-the-art dictionary- and redundancy-based QA engines. We compare the effectiveness of two dictionary-based classifiers for extracting correct GO answers from a large set of 100 retrieved abstracts per question. In the same way, we also investigate the power of GOCat, a GO supervised classifier. GOCat exploits the GOA database to propose GO concepts that were annotated by curators for similar abstracts. This approach is called deep QA, as it adds an original classification step, and exploits curated biological data to infer answers, which are not explicitly mentioned in the retrieved documents. We show that for complex answers such as protein functional descriptions, the redundancy phenomenon has a limited effect. Similarly, usual dictionary-based approaches are relatively ineffective. In contrast, we demonstrate how existing curated data, beyond information extraction, can be exploited by a supervised classifier, such as GOCat, to massively improve both the quantity and the quality of the answers with a +100% improvement for both recall and precision. Database URL: http://eagl.unige.ch/DeepQA4PA/. © The Author(s) 2015. Published by Oxford University Press.

  15. Radiogenic Isotopes As Paleoceanographic Tracers in Deep-Sea Corals: Advances in TIMS Measurements of Pb Isotopes and Application to Southern Ocean Corals

    Science.gov (United States)

    Wilson, D. J.; van de Flierdt, T.; Bridgestock, L. J.; Paul, M.; Rehkamper, M.; Robinson, L. F.; Adkins, J. F.

    2014-12-01

    Deep-sea corals have emerged as a valuable archive of deep ocean paleoceanographic change, with uranium-series dating providing absolute ages and the potential for centennial resolution. In combination with measurements of radiocarbon, neodymium isotopes and clumped isotopes, this archive has recently been exploited to reconstruct changes in ventilation, water mass sourcing and temperature in relation to millennial climate change. Lead (Pb) isotopes in both corals and seawater have also been used to track anthropogenic inputs through space and time and to trace transport pathways within the oceans. Better understanding of the oceanic Pb cycle is emerging from the GEOTRACES programme. However, while Pb isotopes have been widely used in environmental studies, their full potential as a (pre-anthropogenic) paleoceanographic tracer remains to be exploited. In deep-sea corals, challenges exist from low Pb concentrations in aragonite in comparison to secondary coatings, the potential for contamination, and the efficient elemental separation required for measurement by thermal ionisation mass spectrometry (TIMS). Here we discuss progress in measuring Pb isotopes in coral aragonite using a 207Pb-204Pb double spike on a ThermoFinnigan Triton TIMS. For a 2 ng NIST-981 Pb standard, the long term reproducibility (using 10^11 Ω resistors) is ~1000 ppm (2 s.d.) on 206Pb/204Pb, 207Pb/204Pb and 208Pb/204Pb ratios. We now show that using a new 10^12 Ω resistor to measure the small 204Pb beam improves the internal precision on these ratios from ~500 ppm (2 s.e.) to ~250 ppm (2 s.e.) and we envisage a potential improvement in the long term reproducibility as a consequence. We further assess the internal precision and external reproducibility of our method using a BCR-2 rock standard and an in-house coral standard. Preliminary evidence on the application of this method to natural samples is derived from cleaning experiments and replication tests on deep-sea corals from the Southern

  16. DEWS (DEep White matter hyperintensity Segmentation framework): A fully automated pipeline for detecting small deep white matter hyperintensities in migraineurs.

    Science.gov (United States)

    Park, Bo-Yong; Lee, Mi Ji; Lee, Seung-Hak; Cha, Jihoon; Chung, Chin-Sang; Kim, Sung Tae; Park, Hyunjin

    2018-01-01

    Migraineurs show an increased load of white matter hyperintensities (WMHs) and more rapid deep WMH progression. Previous methods for WMH segmentation have limited efficacy to detect small deep WMHs. We developed a new fully automated detection pipeline, DEWS (DEep White matter hyperintensity Segmentation framework), for small and superficially-located deep WMHs. A total of 148 non-elderly subjects with migraine were included in this study. The pipeline consists of three components: 1) white matter (WM) extraction, 2) WMH detection, and 3) false positive reduction. In WM extraction, we adjusted the WM mask to re-assign misclassified WMHs back to WM using many sequential low-level image processing steps. In WMH detection, the potential WMH clusters were detected using an intensity based threshold and region growing approach. For false positive reduction, the detected WMH clusters were classified into final WMHs and non-WMHs using the random forest (RF) classifier. Size, texture, and multi-scale deep features were used to train the RF classifier. DEWS successfully detected small deep WMHs with a high positive predictive value (PPV) of 0.98 and true positive rate (TPR) of 0.70 in the training and test sets. Similar performance of PPV (0.96) and TPR (0.68) was attained in the validation set. DEWS showed a superior performance in comparison with other methods. Our proposed pipeline is freely available online to help the research community in quantifying deep WMHs in non-elderly adults.
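
    The false-positive-reduction stage is a random forest over size, texture, and deep features, judged by PPV and TPR. A minimal sketch of that stage with scikit-learn follows; the synthetic candidate features and labels are stand-ins for illustration, not the pipeline's real inputs.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      # Stand-in features for candidate WMH clusters (size, texture, deep features).
      X = rng.normal(size=(500, 10))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0  # true WMH?

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
      pred = clf.predict(X[400:])
      truth = y[400:]

      tp = np.sum(pred & truth)
      ppv = tp / np.sum(pred)   # positive predictive value
      tpr = tp / np.sum(truth)  # true positive rate (sensitivity)
      print(f"PPV={ppv:.2f}, TPR={tpr:.2f}")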

  17. Cerebral O2 metabolism and cerebral blood flow in humans during deep and rapid-eye-movement sleep

    DEFF Research Database (Denmark)

    Madsen, P L; Schmidt, J F; Wildschiødtz, Gordon

    1991-01-01

    It could be expected that the various stages of sleep were reflected in variation of the overall level of cerebral activity and thereby in the magnitude of cerebral metabolic rate of oxygen (CMRO2) and cerebral blood flow (CBF). The elusive nature of sleep imposes major methodological restrictions on examination of this question. We have now measured CBF and CMRO2 in young healthy volunteers using the Kety-Schmidt technique with 133Xe as the inert gas. Measurements were performed during wakefulness, deep sleep (stage 3/4), and rapid-eye-movement (REM) sleep as verified by standard polysomnography. During deep sleep, CMRO2 decreased to a level otherwise associated with light anesthesia. During REM sleep (dream sleep) CMRO2 was practically the same as in the awake state. Changes in CBF paralleled changes in CMRO2 during both deep and REM sleep.

  18. Deep-Sea Corals: A New Oceanic Archive

    National Research Council Canada - National Science Library

    Adkins, Jess

    1998-01-01

    Deep-sea corals are an extraordinary new archive of deep ocean behavior. The species Desmophyllum cristagalli is a solitary coral composed of uranium rich, density banded aragonite that I have calibrated for several paleoclimate tracers...

  19. Challenging oil bioremediation at deep-sea hydrostatic pressure

    Directory of Open Access Journals (Sweden)

    Alberto Scoma

    2016-08-01

    Full Text Available The Deepwater Horizon (DWH) accident has brought oil contamination of deep-sea environments to worldwide attention. The risk for new deep-sea spills is not expected to decrease in the future, as political pressure mounts to access deep-water fossil reserves, and poorly tested technologies are used to access oil. This also applies to the response to oil-contamination events, with bioremediation the only (bio)technology presently available to combat deep-sea spills. Many questions about the fate of petroleum hydrocarbons in the deep sea remain unanswered, as are the main constraints limiting bioremediation under increased hydrostatic pressures and low temperatures. The microbial pathways fueling oil uptake are unclear, and the mild upregulation observed for beta-oxidation-related genes in both water and sediments contrasts with the high amount of alkanes present in the spilled oil. The fate of solid alkanes (tar) and hydrocarbon degradation rates were largely overlooked, as was the reason why the most predominant hydrocarbonoclastic genera were not enriched in the deep sea, despite being present at hydrocarbon seeps in the Gulf of Mexico. This mini-review aims at highlighting the missing information in the field, proposing a holistic approach where in situ and ex situ studies are integrated to reveal the principal mechanisms accounting for deep-sea oil bioremediation.

  20. Deep Crustal Melting and the Survival of Continental Crust

    Science.gov (United States)

    Whitney, D.; Teyssier, C. P.; Rey, P. F.; Korchinski, M.

    2017-12-01

    Plate convergence involving continental lithosphere leads to crustal melting, which ultimately stabilizes the crust because it drives rapid upward flow of hot deep crust, followed by rapid cooling at shallow levels. Collision drives partial melting during crustal thickening (at 40-75 km) and/or continental subduction (at 75-100 km). These depths are not typically exceeded by crustal rocks that are exhumed in each setting because partial melting significantly decreases viscosity, facilitating upward flow of deep crust. Results from numerical models and nature indicate that deep crust moves laterally and then vertically, crystallizing at depths as shallow as 2 km. Deep crust flows en masse, without significant segregation of melt into magmatic bodies, over tens of kilometers of vertical transport. This is a major mechanism by which deep crust is exhumed and is therefore a significant process of heat and mass transfer in continental evolution. The result of vertical flow of deep, partially molten crust is a migmatite dome. When the lithosphere is under extension or transtension, the deep crust is solicited by faulting of the brittle upper crust, and the flow of deep crust in migmatite domes traverses nearly the entire thickness of the orogenic crust. Recognition of the importance of migmatite (gneiss) domes as archives of orogenic deep crust is applicable to determining the chemical and physical properties of continental crust, as well as mechanisms and timescales of crustal differentiation.

  1. Deep Learning and Bayesian Methods

    OpenAIRE

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such meth...

  2. The dynamics of biogeographic ranges in the deep sea.

    Science.gov (United States)

    McClain, Craig R; Hardy, Sarah Mincks

    2010-12-07

    Anthropogenic disturbances such as fishing, mining, oil drilling, bioprospecting, warming, and acidification in the deep sea are increasing, yet generalities about deep-sea biogeography remain elusive. Owing to the lack of perceived environmental variability and geographical barriers, ranges of deep-sea species were traditionally assumed to be exceedingly large. In contrast, seamount and chemosynthetic habitats with reported high endemicity challenge the broad applicability of a single biogeographic paradigm for the deep sea. New research benefiting from higher resolution sampling, molecular methods and public databases can now more rigorously examine dispersal distances and species ranges on the vast ocean floor. Here, we explore the major outstanding questions in deep-sea biogeography. Based on current evidence, many taxa appear broadly distributed across the deep sea, a pattern replicated in both the abyssal plains and specialized environments such as hydrothermal vents. Cold waters may slow larval metabolism and development, augmenting the great intrinsic ability for dispersal among many deep-sea species. Currents, environmental shifts, and topography can prove to be dispersal barriers but are often semipermeable. Historical events such as points of faunal origin and climatic fluctuations are also evident in contemporary biogeographic ranges. Continued synthetic analysis, database construction, theoretical advancement and field sampling will be required to further refine hypotheses regarding deep-sea biogeography.

  3. Survey on deep learning for radiotherapy.

    Science.gov (United States)

    Meyer, Philippe; Noblet, Vincent; Mazzara, Christophe; Lallement, Alex

    2018-05-17

    More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published works on deep learning methods that can be applied to radiotherapy, which are classified into seven categories related to the patient workflow, and can provide some insights into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Hello World Deep Learning in Medical Imaging.

    Science.gov (United States)

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    Machine learning, and notably deep learning, has recently become popular in medical imaging, achieving state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
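
    In the spirit of the tutorial, a minimal "hello world" image classifier might look like the sketch below: a small convolutional network with a sigmoid output for a binary label (e.g., normal vs. abnormal). The image size, layer sizes, and random stand-in data are assumptions, not the tutorial's own code.

      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      model = keras.Sequential([
          keras.Input(shape=(64, 64, 1)),          # small grayscale images
          layers.Conv2D(16, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Conv2D(32, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Flatten(),
          layers.Dense(64, activation="relu"),
          layers.Dense(1, activation="sigmoid"),   # e.g. normal vs. abnormal
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

      # Random stand-in data; a real project would load labeled DICOM/PNG images.
      x = np.random.rand(32, 64, 64, 1).astype("float32")
      y = np.random.randint(0, 2, size=(32, 1))
      model.fit(x, y, epochs=1, batch_size=8, verbose=0)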

  5. Vitrification treatment options for disposal of greater-than-Class-C low-level waste in a deep geologic repository

    International Nuclear Information System (INIS)

    Fullmer, K.S.; Fish, L.W.; Fischer, D.K.

    1994-11-01

    The Department of Energy (DOE), in keeping with their responsibility under Public Law 99-240, the Low-Level Radioactive Waste Policy Amendments Act of 1985, is investigating several disposal options for greater-than-Class C low-level waste (GTCC LLW), including emplacement in a deep geologic repository. At the present time vitrification, namely borosilicate glass, is the standard waste form assumed for high-level waste accepted into the Civilian Radioactive Waste Management System. This report supports DOE's investigation of the deep geologic disposal option by comparing the vitrification treatments that are able to convert those GTCC LLWs that are inherently migratory into stable waste forms acceptable for disposal in a deep geologic repository. Eight vitrification treatments that utilize glass, glass ceramic, or basalt waste form matrices are identified. Six of these are discussed in detail, stating the advantages and limitations of each relative to their ability to immobilize GTCC LLW. The report concludes that the waste form most likely to provide the best composite of performance characteristics for GTCC process waste is Iron Enriched Basalt 4 (IEB4)

  6. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    Science.gov (United States)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image of a sharp kernel to that of a standard kernel, and evaluates its impact on reducing the variability of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams, obtained with 120 kVp, 40 mAs, 1 mm thickness, and reconstructed with 2 kernels (B30f, B50f), were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting the CT images of the sharp kernel to the standard kernel, with the criterion of minimizing the mean squared error between the output and target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for the B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f is reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique for CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has a
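
    The conversion model reduces to an image-to-image regression trained with an MSE criterion on paired B50f/B30f slices. The sketch below shows a deliberately small fully convolutional network of that kind; the depth, channel counts, and random stand-in slices are assumptions, not the study's architecture.

      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      inp = keras.Input(shape=(None, None, 1))      # full-resolution CT slice
      x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
      x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
      out = layers.Conv2D(1, 3, padding="same")(x)  # converted (standard-kernel) slice
      model = keras.Model(inp, out)
      model.compile(optimizer="adam", loss="mse")   # mean squared error criterion

      sharp = np.random.rand(4, 64, 64, 1).astype("float32")     # B50f stand-ins
      standard = np.random.rand(4, 64, 64, 1).astype("float32")  # paired B30f stand-ins
      model.fit(sharp, standard, epochs=1, verbose=0)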

  7. Deep Neuromuscular Blockade Improves Laparoscopic Surgical Conditions

    DEFF Research Database (Denmark)

    Rosenberg, Jacob; Herring, W Joseph; Blobner, Manfred

    2017-01-01

    INTRODUCTION: Sustained deep neuromuscular blockade (NMB) during laparoscopic surgery may facilitate optimal surgical conditions. This exploratory study assessed whether deep NMB improves surgical conditions and, in doing so, allows use of lower insufflation pressures during laparoscopic cholecystectomy...

  8. DEEP-SEE: Joint Object Detection, Tracking and Recognition with Application to Visually Impaired Navigational Assistance

    Directory of Open Access Journals (Sweden)

    Ruxandra Tapu

    2017-10-01

    Full Text Available In this paper, we introduce the so-called DEEP-SEE framework that jointly exploits computer vision algorithms and deep convolutional neural networks (CNNs) to detect, track and recognize in real time objects encountered during navigation in the outdoor environment. A first feature concerns an object detection technique designed to localize both static and dynamic objects without any a priori knowledge about their position, type or shape. The methodological core of the proposed approach relies on a novel object tracking method based on two convolutional neural networks trained offline. The key principle consists of alternating between tracking using motion information and predicting the object location in time based on visual similarity. The validation of the tracking technique is performed on standard benchmark VOT datasets, and shows that the proposed approach returns state-of-the-art results while minimizing the computational complexity. Then, the DEEP-SEE framework is integrated into a novel assistive device, designed to improve the cognition of visually impaired (VI) people and to increase their safety when navigating in crowded urban scenes. The validation of our assistive device is performed on a video dataset with 30 elements acquired with the help of VI users. The proposed system shows high accuracy (>90%) and robustness (>90%) scores regardless of the scene dynamics.

  9. Development of Hydro-Mechanical Deep Drawing

    DEFF Research Database (Denmark)

    Zhang, Shi-Hong; Danckert, Joachim

    1998-01-01

    The hydro-mechanical deep-drawing process is reviewed in this article. The process principles and features are introduced, and the developments of the hydro-mechanical deep-drawing process in process performance, in theory and in numerical simulation are described. The applications are summarized. Some other related hydraulic forming processes are also dealt with as a comparison.

  10. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    Science.gov (United States)

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structure and learning suffer from a time-consuming training process because of a large number of connecting parameters in filters and layers. Moreover, they encounter a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense in the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process if the network needs to be expanded. Two incremental learning algorithms are given, for the increment of the feature nodes (or filters in deep structure) and the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case where an already-modeled system encounters a new incoming input. Specifically, the system can be remodeled incrementally without retraining from the beginning. A satisfactory result for model reduction using singular value decomposition is obtained to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology (MNIST) database and the NYU NORB object recognition dataset demonstrate the effectiveness of the proposed BLS.
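
    The flat structure can be sketched in a few lines of NumPy: random "mapped feature" nodes, random "enhancement" nodes, and output weights solved in closed form with a ridge-regularized pseudoinverse. The incremental-expansion update is omitted, and all sizes and the toy task are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 10))                        # toy inputs
      y = (X[:, 0] - X[:, 1] > 0).astype(float).reshape(-1, 1)

      Wf = rng.normal(size=(10, 40))   # random feature-node weights
      Z = np.tanh(X @ Wf)              # mapped features
      We = rng.normal(size=(40, 60))   # random enhancement-node weights
      H = np.tanh(Z @ We)              # enhancement nodes
      A = np.hstack([Z, H])            # flat, wide representation

      # Only the output weights are learned, in closed form (ridge pseudoinverse).
      lam = 1e-3
      W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
      pred = (A @ W_out > 0.5).astype(float)
      print("train accuracy:", (pred == y).mean())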

  11. 76 FR 66078 - Notice of Industry Workshop on Technical and Regulatory Challenges in Deep and Ultra-Deep Outer...

    Science.gov (United States)

    2011-10-25

    ...-0087] Notice of Industry Workshop on Technical and Regulatory Challenges in Deep and Ultra-Deep Outer... discussions expected to help identify Outer Continental Shelf (OCS) challenges and technologies associated... structured venue for consultation among offshore deepwater oil and gas industry and regulatory experts in...

  12. Deep Corals, Deep Learning: Moving the Deep Net Towards Real-Time Image Annotation

    OpenAIRE

    Lea-Anne Henry; Sankha S. Mukherjee; Neil M. Roberston; Laurence De Clippele; J. Murray Roberts

    2016-01-01

    The mismatch between human capacity and the acquisition of Big Data such as Earth imagery undermines commitments to Convention on Biological Diversity (CBD) and Aichi targets. Artificial intelligence (AI) solutions to Big Data issues are urgently needed as these could prove to be faster, more accurate, and cheaper. Reducing costs of managing protected areas in remote deep waters and in the High Seas is of great importance, and this is a realm where autonomous technology will be transformative.

  13. Evolutionary process of deep-sea bathymodiolus mussels.

    Science.gov (United States)

    Miyazaki, Jun-Ichi; de Oliveira Martins, Leonardo; Fujita, Yuko; Matsumoto, Hiroto; Fujiwara, Yoshihiro

    2010-04-27

    Since the discovery of deep-sea chemosynthesis-based communities, much work has been done to clarify their organismal and environmental aspects. However, major topics remain to be resolved, including when and how organisms invade and adapt to deep-sea environments; whether strategies for invasion and adaptation are shared by different taxa or unique to each taxon; how organisms extend their distribution and diversity; and how they become isolated to speciate in continuous waters. Deep-sea mussels are one of the dominant organisms in chemosynthesis-based communities, so investigations of their origin and evolution contribute to resolving questions about life in those communities. We investigated worldwide phylogenetic relationships of deep-sea Bathymodiolus mussels and their mytilid relatives by analyzing nucleotide sequences of the mitochondrial cytochrome c oxidase subunit I (COI) and NADH dehydrogenase subunit 4 (ND4) genes. Phylogenetic analysis of the concatenated sequence data showed that mussels of the subfamily Bathymodiolinae from vents and seeps were divided into four groups, and that mussels of the subfamily Modiolinae from sunken wood and whale carcasses assumed the outgroup position, with shallow-water modioline mussels positioned more distantly to the bathymodioline mussels. We provisionally hypothesized the evolutionary history of Bathymodiolus mussels by estimating evolutionary time under a relaxed molecular clock model. Diversification of bathymodioline mussels was initiated in the early Miocene, and subsequently diversification of the groups occurred in the early to middle Miocene. The phylogenetic relationships support the "Evolutionary stepping stone hypothesis," in which mytilid ancestors exploited sunken wood and whale carcasses in their progressive adaptation to deep-sea environments. This hypothesis is also supported by the evolutionary transition of symbiosis in that nutritional adaptation to the deep sea proceeded from extracellular

  14. U.V. repair in deep-sea bacteria

    International Nuclear Information System (INIS)

    Lutz, L.; Yayanos, A.A.

    1986-01-01

    Exposure of cells to light of wavelengths less than 320 nanometers may lead to lethal lesions and perhaps carcinogenesis. Many organisms have evolved mechanisms to repair U.V. light-induced damage. Organisms such as deep-sea bacteria are presumably never exposed to U.V. light, and perhaps only occasionally to visible light from bioluminescence. Thus, the repair of U.V. damage in deep-sea bacterial DNA might be inefficient, and repair by photoreactivation unlikely. The bacteria utilized in this investigation are temperature sensitive and barophilic. Four deep-sea isolates were chosen for this study: PE-36 from 3584 m, CNPT-3 from 5782 m, HS-34 from 5682 m, and MT-41 from 10,476 m; all are from the North Pacific Ocean. The deep sea extends from 1100 m to depths greater than 7000 m. It is a region of relatively uniform conditions. The temperature ranges from 5 to −1 °C. There is no solar light in the deep sea. Deep-sea bacteria are sensitive to U.V. light; in fact, more sensitive than a variety of terrestrial and sea-surface bacteria. All four isolates demonstrate thymine dimer repair. Photoreactivation was observed only in MT-41. The other strains from shallower depths displayed no photoreactivation. The presence of DNA sequences homologous to the rec A, uvr A, B, and C and phr genes of E. coli has been examined by Southern hybridization techniques

  15. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic (ROC) curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the ROC curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the ROC curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
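
    The evaluation protocol above (AUC with a sensitivity/specificity operating point) is easy to reproduce. The snippet below uses scikit-learn on synthetic scores purely for illustration; it has no connection to the study's data, and the operating-point rule (closest to the top-left ROC corner) is one common convention among several.

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(1)
        y_true = rng.integers(0, 2, 485)                      # 1 = referable retinopathy
        y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 485), 0, 1)

        auc = roc_auc_score(y_true, y_score)
        fpr, tpr, thresholds = roc_curve(y_true, y_score)
        best = np.argmin(np.hypot(fpr, 1 - tpr))              # closest to top-left corner
        print(f"AUC={auc:.3f}  sensitivity={tpr[best]:.3f}  specificity={1 - fpr[best]:.3f}")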

  16. Deep groundwater chemistry

    International Nuclear Information System (INIS)

    Wikberg, P.; Axelsen, K.; Fredlund, F.

    1987-06-01

    Starting in 1977 and continuing up to now, a number of places in Sweden have been investigated in order to collect the necessary geological, hydrogeological and chemical data needed for safety analyses of repositories in deep bedrock systems. Only crystalline rock is considered; in many cases this has been gneisses of sedimentary origin, but granites and gabbros are also represented. Core-drilled holes have been made at nine sites. Up to 15 holes may be core drilled at one site, the deepest down to 1000 m. In addition, a number of boreholes are percussion drilled at each site to depths of about 100 m. When possible, drilling water is taken from percussion-drilled holes. The first objective is to survey the hydraulic conditions. Core-drilled boreholes and sections selected for sampling of deep groundwater are summarized. (orig./HP)

  17. Preface: Deep Slab and Mantle Dynamics

    Science.gov (United States)

    Suetsugu, Daisuke; Bina, Craig R.; Inoue, Toru; Wiens, Douglas A.

    2010-11-01

    We are pleased to publish this special issue of the journal Physics of the Earth and Planetary Interiors entitled "Deep Slab and Mantle Dynamics". This issue is an outgrowth of the international symposium "Deep Slab and Mantle Dynamics", which was held on February 25-27, 2009, in Kyoto, Japan. This symposium was organized by the "Stagnant Slab Project" (SSP) research group to present the results of the 5-year project and to facilitate intensive discussion with well-known international researchers in related fields. The SSP and the symposium were supported by a Grant-in-Aid for Scientific Research (16075101) from the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government. In the symposium, key issues discussed by participants included: transportation of water into the deep mantle and its role in slab-related dynamics; observational and experimental constraints on deep slab properties and the slab environment; modeling of slab stagnation to constrain its mechanisms in comparison with observational and experimental data; observational, experimental and modeling constraints on the fate of stagnant slabs; eventual accumulation of stagnant slabs on the core-mantle boundary and its geodynamic implications. This special issue is a collection of papers presented in the symposium and other papers related to the subject of the symposium. The collected papers provide an overview of the wide range of multidisciplinary studies of mantle dynamics, particularly in the context of subduction, stagnation, and the fate of deep slabs.

  18. Harnessing the Deep Web: Present and Future

    OpenAIRE

    Madhavan, Jayant; Afanasiev, Loredana; Antova, Lyublena; Halevy, Alon

    2009-01-01

    Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...

  19. Zooplankton at deep Red Sea brine pools

    KAUST Repository

    Kaartvedt, Stein

    2016-03-02

    The deep-sea anoxic brines of the Red Sea comprise unique, complex and extreme habitats. These environments are too harsh for metazoans, while the brine–seawater interface harbors dense microbial populations. We investigated the adjacent pelagic fauna at two brine pools using net tows, video records from a remotely operated vehicle and submerged echosounders. Waters just above the brine pool of Atlantis II Deep (2000 m depth) appeared depleted of macrofauna. In contrast, the fauna appeared to be enriched at the Kebrit Deep brine–seawater interface (1466 m).

  20. Effect of a Standardized Protocol of Antibiotic Therapy on Surgical Site Infection after Laparoscopic Surgery for Complicated Appendicitis.

    Science.gov (United States)

    Park, Hyoung-Chul; Kim, Min Jeong; Lee, Bong Hwa

    Although it is accepted that complicated appendicitis requires antibiotic therapy to prevent post-operative surgical infections, consensus protocols on the duration and regimens of treatment are not well established. This study aimed to compare the outcome of post-operative infectious complications in patients receiving the old non-standardized and the new standard antibiotic protocols, involving 10 or 5 days of treatment, respectively. We enrolled 1,343 patients who underwent laparoscopic surgery for complicated appendicitis between January 2009 and December 2014. With the introduction of the new protocol, the patients fell into two groups: 10 days of various antibiotic regimens (January 2009 to June 2012, the non-standardized protocol; n = 730) and five days of a cefuroxime and metronidazole regimen (July 2012 to December 2014, the standardized protocol; n = 613). We compared the clinical outcomes, including surgical site infection (SSI) (superficial and deep organ/space infections), in the two groups. The standardized protocol group had a slightly shorter operative time (67 vs. 69 min), a shorter hospital stay (5 vs. 5.4 d), and lower medical cost (US$1,564 vs. US$1,654). Otherwise, there was no difference between the groups. No differences were found between the non-standardized and standardized protocol groups with regard to the rate of superficial infection (10.3% vs. 12.7%; p = 0.488) or deep organ/space infection (2.3% vs. 2.1%; p = 0.797). In patients undergoing laparoscopic surgery for complicated appendicitis, five days of cefuroxime and metronidazole did not lead to more SSIs, and it decreased the medical costs compared with non-standardized antibiotic regimens.

  1. How to study deep roots - and why it matters

    OpenAIRE

    Maeght, Jean-Luc; Rewald, B.; Pierret, Alain

    2013-01-01

    The drivers underlying the development of deep root systems, whether genetic or environmental, are poorly understood but evidence has accumulated that deep rooting could be a more widespread and important trait among plants than commonly anticipated from their share of root biomass. Even though a distinct classification of "deep roots" is missing to date, deep roots provide important functions for individual plants such as nutrient and water uptake but can also shape plant communities by hydr...

  2. Benchmarking State-of-the-Art Deep Learning Software Tools

    OpenAIRE

    Shi, Shaohuai; Wang, Qiang; Xu, Pengfei; Chu, Xiaowen

    2016-01-01

    Deep learning has been shown as a successful machine learning method for a variety of tasks, and its popularity results in numerous open-source deep learning software tools. Training a deep network is usually a very time-consuming process. To address the computational challenge in deep learning, many tools exploit hardware features such as multi-core CPUs and many-core GPUs to shorten the training time. However, different tools exhibit different features and running performance when training ...

  3. High-Redshift Radio Galaxies from Deep Fields

    Indian Academy of Sciences (India)

    2016-01-27

    Here we present results from the deep 150 MHz observations of the LBDS-Lynx field, which has been imaged at 327, ...

  4. Deep-sea benthic community and environmental impact assessment at the Atlantic Frontier

    Science.gov (United States)

    Gage, John D.

    2001-05-01

    the oil industry-funded Atlantic Margin Environmental Study cruises in 1996 and 1998. A predominantly depth-related pattern in variability applies here as found elsewhere in the deep ocean, and just sufficient knowledge-based predictive power exists to make comprehensive, high-resolution grid surveys unnecessary for the purpose of broad-scale environmental assessment. But new, small-scale site surveys remain necessary because of local-scale variability. Site survey should be undertaken in the context of existing knowledge of the deep sea in the UK area of the Atlantic Frontier and beyond, and can itself usefully be structured as tests of a projection from the regional scale to reduce sampling effort. It is to the benefit of all stakeholders that environmental assessment aspires to the highest scientific standards and contributes meaningfully to context knowledge. By doing so it will reduce uncertainties in future impact assessments and hence contribute usefully to environmental risk management.

  5. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair; Gorocs, Zoltan; Gunaydin, Harun; Zhang, Yibo; Wang, Hongda; Ozcan, Aydogan

    2017-01-01

    regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably

  6. Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images

    Directory of Open Access Journals (Sweden)

    Joel Saltz

    2018-04-01

    Summary: Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for the TCGA image archives with insights into the tumor-immune microenvironment. Tumor-infiltrating lymphocytes (TILs) were identified from standard pathology cancer images by a deep-learning-derived “computational stain” developed by Saltz et al. They processed 5,202 digital images from 13 cancer types. Resulting TIL maps were correlated with TCGA molecular data, relating TIL content to survival, tumor subtypes, and immune profiles. Keywords: digital pathology, immuno-oncology, machine learning, lymphocytes, tumor microenvironment, deep learning, tumor-infiltrating lymphocytes, artificial intelligence, bioinformatics, computer vision
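
    The spatial clustering step mentioned above can be mimicked with scikit-learn's affinity propagation. The data below are synthetic patch probabilities, so this is only a shape-level sketch of the workflow, not the authors' pipeline; the grid size, threshold and point cap are invented.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        rng = np.random.default_rng(3)
        til_prob = rng.random((50, 50))              # one CNN probability per image patch
        ys, xs = np.nonzero(til_prob > 0.5)          # TIL-positive patches
        coords = np.column_stack([xs, ys]).astype(float)

        ap = AffinityPropagation(random_state=0).fit(coords[:500])  # cap size for speed
        print(len(ap.cluster_centers_), "local TIL structures")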

  7. Draft directive on the management of radioactive wastes based on deep geological disposal

    International Nuclear Information System (INIS)

    Anon.

    2010-01-01

    The European Commission is working on a legal framework to ensure that all the member states apply the same standards in all the stages of the management of spent fuels and radioactive wastes, up to their definitive disposal. The draft proposals are the following. The standards to follow are those proposed by the IAEA. First, each member state has to set up a national program dedicated to the management of radioactive wastes. This program will have to detail the chosen solution, the description of the project, a time schedule, costs and financing. Secondly, the exportation of nuclear wastes for definitive disposal is not allowed unless the 2 countries have agreed to build a common nuclear waste disposal center. Thirdly, the population will have to be informed about the project and will have to take part in the decision process. Fourthly, the standards set by the IAEA will be enforced by law. There is a broad consensus among scientists and international organizations like the IAEA that the disposal of high-level radioactive wastes in deep geological layers is the most adequate solution. (A.C.)

  8. Evaluation of processes controlling the geochemical constituents in deep groundwater in Bangladesh: Spatial variability on arsenic and boron enrichment

    International Nuclear Information System (INIS)

    Halim, M.A.; Majumder, R.K.; Nessa, S.A.; Hiroshiro, Y.; Sasaki, K.; Saha, B.B.; Saepuloh, A.; Jinno, K.

    2010-01-01

    Forty-six deep groundwater samples from highly arsenic-affected areas in Bangladesh were analyzed in order to evaluate the processes controlling geochemical constituents in the deep aquifer system. Spatial trends of solutes, geochemical modeling and principal component analysis indicate that carbonate dissolution, silicate weathering and ion exchange control the major-ion chemistry. The groundwater is dominantly of Na-Cl type brackish water. Approximately 17% of the examined groundwaters exhibit As concentrations higher than the maximum acceptable limit of 10 μg/L for drinking water. The strong correlation (R² = 0.67) of Fe with dissolved organic carbon (DOC) and the positive saturation index of siderite suggest that the reductive dissolution of Fe-oxyhydroxide in the presence of organic matter is the dominant process releasing the high Fe content (median 0.31 mg/L) in the deep aquifer. In contrast, As is not correlated with Fe and DOC. The boron concentration in 26% of the samples exceeds the standard limit of 500 μg/L for water intended for human consumption. Negative relationships of the B/Cl ratio with Cl and of boron with the Na/Ca ratio demonstrate that boron in deep groundwater is accompanied by brackish water and cation exchange within the clayey sediments.

  9. Nuclear security standard: Argentina approach

    International Nuclear Information System (INIS)

    Bonet Duran, Stella M.; Rodriguez, Carlos E.; Menossi, Sergio A.; Serdeiro, Nelida H.

    2007-01-01

    Argentina has a comprehensive regulatory system designed to assure the security and safety of radioactive sources, which has been in place for more than fifty years. In 1989 the Radiation Protection and Nuclear Safety branch of the National Atomic Energy Commission created the 'Council of Physical Protection of Nuclear Materials and Installations' (CAPFMIN). In 1992 this Council published a physical protection standard based on a deep and careful analysis of INFCIRC 225/Rev.2, including topics like the 'sabotage scenario'. Since then, the world's scenario has changed, and concepts like 'design basis threat', 'detection, delay and response', and 'performance approach versus prescriptive approach' have been applied to the design of physical protection systems in facilities other than nuclear installations. In Argentina, radioactive sources are widely used in medical and industrial applications, with more than 1,600 facilities controlled by the Nuclear Regulatory Authority (ARN, by its Spanish acronym). During 2005, measures like 'access control', 'timely detection of intruders', 'background checks', and 'security plans' were required by the ARN for implementation in facilities with radioactive sources. To 'close the cycle', the next step is to produce a regulatory standard based on the operational experience acquired during 2005. The ARN has developed a set of criteria to include in a new standard on the security of radioactive materials. In addition, a specific regulatory guide is being prepared to help licensees of facilities design a security system and complete the 'Design of Security System Questionnaire'. The present paper describes the proposed standard on security of radioactive sources and the draft of the nuclear security regulatory guidance, based on our regulatory experience and the latest international recommendations. (author)

  10. Deep-sea fungi

    Digital Repository Service at National Institute of Oceanography (India)

    Raghukumar, C; Damare, S.R.

    significant in terms of carbon sequestration (5, 8). In light of this, the diversity, abundance, and role of fungi in deep-sea sediments may form an important link in the global C biogeochemistry. This review focuses on issues related to collection...

  11. Deep Trawl Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Otter trawl (36' Yankee and 4-seam net deepwater gear) catches from mid-Atlantic slope and canyons at 200 - 800 m depth. Deep-sea (200-800 m depth) flat otter trawls...

  12. Simulation of deep one- and two-dimensional redshift surveys

    International Nuclear Information System (INIS)

    Park, Changbom; Gott, J.R. III

    1991-01-01

    We show that slice or pencil-beam redshift surveys of galaxies can be simulated in a box with non-equal sides. This method saves a lot of computer time and memory while providing essentially the same results as whole-cube simulations. A 2457.6 h⁻¹ Mpc-long rod (out to a redshift z = 0.58 in two opposite directions) is simulated using the standard biased Cold Dark Matter model as an example, to mimic the recent deep pencil-beam surveys by Broadhurst et al. The structures (spikes) we see in these simulated samples occur when the narrow pencil-beam pierces walls, filaments and clusters appearing randomly along the line-of-sight. We have applied a statistical test for goodness of fit to a periodic lattice to the observations and the simulations. (author)
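
    A sketch of the non-equal-sided box idea, with uniform random points standing in for the biased CDM simulation; the transverse box size and aperture half-width are arbitrary choices for illustration only.

        import numpy as np

        rng = np.random.default_rng(2)
        # Elongated periodic box: long axis along the line of sight
        box = np.array([2457.6, 100.0, 100.0])        # h^-1 Mpc
        galaxies = rng.random((200_000, 3)) * box     # uniform mock points, not CDM

        # Pencil beam: keep galaxies inside a narrow transverse aperture
        half_width = 5.0                              # h^-1 Mpc, hypothetical
        centre = box[1:] / 2
        in_beam = np.all(np.abs(galaxies[:, 1:] - centre) < half_width, axis=1)
        line_of_sight = galaxies[in_beam, 0]          # depths along the beam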

  13. Temporal and spatial dispersion of human body temperature during deep hypothermia.

    Science.gov (United States)

    Opatz, O; Trippel, T; Lochner, A; Werner, A; Stahn, A; Steinach, M; Lenk, J; Kuppe, H; Gunga, H C

    2013-11-01

    Clinical temperature management remains challenging. Choosing the right sensor location to determine the core body temperature is a particular matter of academic and clinical debate. This study aimed to investigate the relationship of measured temperatures at different sites during surgery in deep hypothermic patients. In this prospective single-centre study, we studied 24 patients undergoing cardiothoracic surgery: 12 in normothermia, 3 in mild, and 9 in deep hypothermia. Temperature recordings of a non-invasive heat flux sensor at the forehead were compared with the arterial outlet temperature of a heart-lung machine, with the temperature on a conventional vesical bladder thermistor and, for patients undergoing deep hypothermia, with oesophageal temperature. Using a linear model for sensor comparison, the arterial outlet sensor showed a difference among the other sensor positions between -0.54 and -1.12°C. The 95% confidence interval ranged between 7.06 and 8.82°C for the upper limit and -8.14 and -10.62°C for the lower limit. Because of the hysteretic shape, the curves were divided into phases and fitted into a non-linear model according to time and placement of the sensors. During cooling and warming phases, a quadratic relationship could be observed among arterial, oesophageal, vesical, and cranial temperature recordings, with coefficients of determination ranging between 0.95 and 0.98 (standard errors of the estimate 0.69-1.12°C). We suggest that measured surrogate temperatures as indices of the cerebral temperature (e.g. vesical bladder temperature) should be interpreted with respect to the temporal and spatial dispersion during cooling and rewarming phases.

  14. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs.

    Science.gov (United States)

    Li, Zhixi; He, Yifan; Keel, Stuart; Meng, Wei; Chang, Robert T; He, Mingguang

    2018-03-02

    To assess the performance of a deep learning algorithm for detecting referable glaucomatous optic neuropathy (GON) based on color fundus photographs. A deep learning system for the classification of GON was developed for automated classification of GON on color fundus photographs. We retrospectively included 48 116 fundus photographs for the development and validation of the deep learning algorithm. This study recruited 21 trained ophthalmologists to classify the photographs. Referable GON was defined as a vertical cup-to-disc ratio of 0.7 or more and other typical changes of GON. The reference standard was established once 3 graders achieved agreement. A separate validation dataset of 8000 fully gradable fundus photographs was used to assess the performance of this algorithm. The area under the receiver operating characteristic curve (AUC) with sensitivity and specificity was applied to evaluate the efficacy of the deep learning algorithm in detecting referable GON. In the validation dataset, this deep learning system achieved an AUC of 0.986 with sensitivity of 95.6% and specificity of 92.0%. The most common reasons for false-negative grading (n = 87) were GON with coexisting eye conditions (n = 44 [50.6%]), including pathologic or high myopia (n = 37 [42.6%]), diabetic retinopathy (n = 4 [4.6%]), and age-related macular degeneration (n = 3 [3.4%]). The leading reason for false-positive results (n = 480) was having other eye conditions (n = 458 [95.4%]), mainly physiologic cupping (n = 267 [55.6%]). Misclassification as false-positive results amidst a normal-appearing fundus occurred in only 22 eyes (4.6%). A deep learning system can detect referable GON with high sensitivity and specificity. Coexistence of high or pathologic myopia is the most common cause of false-negative results. Physiologic cupping and pathologic myopia were the most common reasons for false-positive results. Copyright © 2018 American Academy of Ophthalmology.

  15. Performance of an Artificial Multi-observer Deep Neural Network for Fully Automated Segmentation of Polycystic Kidneys.

    Science.gov (United States)

    Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J

    2017-08-01

    Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had a mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
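
    The reported percent-volume-difference metric is straightforward to compute from binary masks. The helpers below are a generic sketch, not the study's code; the voxel volume is whatever the MR acquisition defines.

        import numpy as np

        def total_kidney_volume_ml(mask, voxel_volume_ml):
            """TKV in mL from a binary segmentation mask (nonzero = kidney)."""
            return float(np.count_nonzero(mask)) * voxel_volume_ml

        def percent_volume_difference(auto_mask, ref_mask, voxel_volume_ml):
            """Signed % difference of automated TKV against the reference standard."""
            tkv_auto = total_kidney_volume_ml(auto_mask, voxel_volume_ml)
            tkv_ref = total_kidney_volume_ml(ref_mask, voxel_volume_ml)
            return 100.0 * (tkv_auto - tkv_ref) / tkv_ref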

  16. Deep learning with convolutional neural network in radiology.

    Science.gov (United States)

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) has been gaining attention recently for its high performance in image recognition. Images themselves can be utilized in a learning process with this technique, and feature extraction in advance of the learning process is not required; important features can be learned automatically. Thanks to developments in hardware and software, in addition to techniques regarding deep learning, applications of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs following the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
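
    For readers new to the topic, a minimal CNN of the kind the article discusses can be written in a few lines of PyTorch. This toy two-block network (assuming single-channel 256x256 images and binary output, both arbitrary choices) only illustrates the convolution-pooling-classifier pattern, not any clinically validated architecture.

        import torch
        import torch.nn as nn

        class TinyCNN(nn.Module):
            """Two convolution blocks followed by a linear classifier head."""
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Linear(32 * 64 * 64, n_classes)  # for 256x256 input

            def forward(self, x):
                x = self.features(x)             # learned feature extraction
                return self.head(x.flatten(1))   # class logits

        logits = TinyCNN()(torch.randn(4, 1, 256, 256))  # batch of 4 images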

  17. ECT, rTMS, and deepTMS in pharmacoresistant drug-free patients with unipolar depression: a comparative review

    Directory of Open Access Journals (Sweden)

    Salviati M

    2012-01-01

    …tolerability, measured by the number of dropped-out patients, was worse than that of ECT. Conclusion: Our investigation confirms the great therapeutic power of ECT. DeepTMS seems to be the only therapy that provides a substantial improvement of both depressive symptoms and cognitive performance; nevertheless, it is characterized by poor tolerability. rTMS seems to provide better tolerability for patients, but its therapeutic efficacy is lower. Considering the small therapeutic efficacy of deepTMS in the last 2 weeks of treatment, it could be reasonable to shorten the standard period of deepTMS treatment from 4 to 2 weeks, expecting a reduction in dropped-out patients and thus optimizing the treatment outcome. Keywords: deep transcranial magnetic stimulation, transcranial magnetic stimulation, electroconvulsive therapy, pharmacoresistant unipolar depression

  18. Management of facial burns with a collagen/glycosaminoglycan skin substitute-prospective experience with 12 consecutive patients with large, deep facial burns.

    Science.gov (United States)

    Klein, Matthew B; Engrav, Loren H; Holmes, James H; Friedrich, Jeffrey B; Costa, Beth A; Honari, Shari; Gibran, Nicole S

    2005-05-01

    Management of deep facial burns remains one of the greatest challenges in burn care. We have developed a protocol over the past 20 years for management of facial burns that includes excision and coverage with thick autograft. However, the results were not perfect. Deformities of the eyelids, nose and mouth as well as the prominence of skin graft junctures demonstrated the need to explore novel approaches. Integra has been used with success in the management of burns of the trunk and extremities. The purpose of this study was to prospectively evaluate the aesthetic outcome of the use of Integra for deep facial burns. Twelve consecutive patients underwent excision of large, deep facial burns and placement of Integra. Integra provides excellent color and minimally visible skin graft junctures. The texture is good but not as supple as thick autograft. Integra is not well suited for use in the coverage of eyelid burns due to the need to wait 2 weeks for adequate vascularization. In summary, thick autograft remains the gold standard for deep facial burns. However, for patients with extensive burns and limited donor sites, Integra provides an acceptable alternative.

  19. Temperature and pressure adaptation of a sulfate reducer from the deep subsurface

    Directory of Open Access Journals (Sweden)

    Katja eFichtel

    2015-10-01

    Microbial life in the deep marine subsurface faces increasing temperature and hydrostatic pressure with depth. In this study, we have examined growth characteristics and temperature-related adaptation of Desulfovibrio indonesiensis strain P23 to the in situ pressure of 30 MPa. The strain originates from the deep subsurface of the eastern flank of the Juan de Fuca Ridge (IODP Site U1301). The organism was isolated at 20 °C and atmospheric pressure from ~61 °C sediments approximately five meters above the sediment-basement interface. In comparison to standard laboratory conditions (20 °C and 0.1 MPa), faster growth was recorded when incubated at in situ pressure and high temperature (45 °C), while cell filamentation was induced by further compression. The maximum growth temperature shifted from 48 °C at atmospheric pressure to 50 °C under high-pressure conditions. Complementary cellular lipid analyses revealed a two-step response of membrane viscosity to increasing temperature, with an exchange of unsaturated for saturated fatty acids and a subsequent change from branched to unbranched alkyl moieties. While temperature had a stronger effect on the degree of fatty acid saturation and restructuring of the main phospholipids, pressure mainly affected the branching and length of side chains. The simultaneous decrease of temperature and pressure to ambient laboratory conditions allowed the cultivation of our moderately thermophilic strain. This may in turn be one key to the successful isolation of microorganisms from the deep subsurface adapted to high temperature and pressure.

  20. Photon diffractive dissociation in deep inelastic scattering

    International Nuclear Information System (INIS)

    Ryskin, M.G.

    1990-01-01

    The new ep-collider HERA gives us the possibility to study the diffractive dissociation of the virtual photon in deep inelastic ep-collisions. The process of photon dissociation in deep inelastic scattering is the most direct way to measure the value of the triple-pomeron vertex G_3P. It was shown that the correct bare vertex G_3P may exceed its effective value, as measured in the triple-reggeon region, by more than a factor of 4, reaching about 40-50% of the elastic pp-pomeron vertex. In deep inelastic processes, by contrast, the transverse momenta q_t of the secondary particles are large enough. Thus in deep inelastic reactions one can measure the absolute value of the G_3P vertex in the most direct way and compare its value and q_t dependence with the leading-log QCD predictions.

  1. Full-Thickness Excision versus Shaving by Laparoscopy for Intestinal Deep Infiltrating Endometriosis: Rationale and Potential Treatment Options

    Directory of Open Access Journals (Sweden)

    Antonio Simone Laganà

    2016-01-01

    Endometriosis is defined as the presence of endometrial mucosa (glands and stroma) abnormally implanted in locations other than the uterine cavity. Deep infiltrating endometriosis (DIE) is considered the most aggressive presentation of the disease, penetrating more than 5 mm into affected tissues, and it is reported in approximately 20% of all women with endometriosis. DIE can cause a complete distortion of the pelvic anatomy and mainly involves the uterosacral ligaments, bladder, rectovaginal septum, rectum, and rectosigmoid colon. This review describes the state of the art in the laparoscopic approach to DIE, with a special interest in intestinal involvement, according to recent literature findings. Our attention has been focused particularly on full-thickness excision versus the shaving technique in deep endometriosis with intestinal involvement. In particular, the aim of this paper is to clarify, from the clinical and methodological points of view, the best surgical treatment of deep intestinal endometriosis, since there is no standard of care in the literature across different surgical settings. Indeed, this review tries to suggest when it is advisable to choose full-thickness excision or the shaving technique, also analyzing perioperative management, the main complications, and surgical outcomes.

  2. Ubiquitous healthy diatoms in the deep sea confirm deep carbon injection by the biological pump

    KAUST Repository

    Agusti, Susana

    2015-07-09

    The role of the ocean as a sink for CO2 is partially dependent on the downward transport of phytoplankton cells packaged within fast-sinking particles. However, whether such fast-sinking mechanisms deliver fresh organic carbon down to the deep bathypelagic sea and whether this mechanism is prevalent across the ocean requires confirmation. Here we report the ubiquitous presence of healthy photosynthetic cells, dominated by diatoms, down to 4,000 m in the deep dark ocean. Decay experiments with surface phytoplankton suggested that the large proportion (18%) of healthy photosynthetic cells observed, on average, in the dark ocean, requires transport times from a few days to a few weeks, corresponding to sinking rates (124–732 m d−1) comparable to those of fast-sinking aggregates and faecal pellets. These results confirm the expectation that fast-sinking mechanisms inject fresh organic carbon into the deep sea and that this is a prevalent process operating across the global oligotrophic ocean.

  3. Ubiquitous healthy diatoms in the deep sea confirm deep carbon injection by the biological pump

    KAUST Repository

    Agusti, Susana; Gonzá lez-Gordillo, J. I.; Vaqué , D.; Estrada, M.; Cerezo, M. I.; Salazar, G.; Gasol, J. M.; Duarte, Carlos M.

    2015-01-01

    The role of the ocean as a sink for CO2 is partially dependent on the downward transport of phytoplankton cells packaged within fast-sinking particles. However, whether such fast-sinking mechanisms deliver fresh organic carbon down to the deep bathypelagic sea and whether this mechanism is prevalent across the ocean requires confirmation. Here we report the ubiquitous presence of healthy photosynthetic cells, dominated by diatoms, down to 4,000 m in the deep dark ocean. Decay experiments with surface phytoplankton suggested that the large proportion (18%) of healthy photosynthetic cells observed, on average, in the dark ocean, requires transport times from a few days to a few weeks, corresponding to sinking rates (124–732 m d−1) comparable to those of fast-sinking aggregates and faecal pellets. These results confirm the expectation that fast-sinking mechanisms inject fresh organic carbon into the deep sea and that this is a prevalent process operating across the global oligotrophic ocean.

  4. The deep universe

    CERN Document Server

    Sandage, AR; Longair, MS

    1995-01-01

    Discusses the concept of the deep universe from two conflicting theoretical viewpoints: firstly as a theory embracing the evolution of the universe from the Big Bang to the present; and secondly through observations gleaned over the years on stars, galaxies and clusters.

  5. Deep Vein Thrombosis

    Centers for Disease Control (CDC) Podcasts

    2012-04-05

    This podcast discusses the risk for deep vein thrombosis in long-distance travelers and ways to minimize that risk.  Created: 4/5/2012 by National Center for Emerging and Zoonotic Infectious Diseases (NCEZID).   Date Released: 4/5/2012.

  6. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    Deep inelastic lepton-nucleon interaction experiments are reviewed. Singlet and non-singlet structure functions are measured and the consistency of the different results is checked. A detailed analysis of the scaling violation is performed in terms of the quantum chromodynamics predictions [fr]

  7. Assessment of deep dynamic mechanical sensitivity in individuals with tension-type headache: The dynamic pressure algometry.

    Science.gov (United States)

    Palacios-Ceña, M; Wang, K; Castaldo, M; Guerrero-Peral, Á; Caminero, A B; Fernández-de-Las-Peñas, C; Arendt-Nielsen, L

    2017-09-01

    To explore the validity of dynamic pressure algometry for evaluating deep dynamic mechanical sensitivity by assessing its association with headache features and widespread pressure sensitivity in tension-type headache (TTH). One hundred and eighty-eight subjects with TTH (70% women) participated. Deep dynamic sensitivity was assessed with a dynamic pressure algometry set (Aalborg University, Denmark) consisting of 11 different rollers with fixed levels from 500 g to 5300 g. Each roller was moved at a speed of 0.5 cm/s over a 60-mm horizontal line covering the temporalis muscle. The dynamic pain threshold (DPT, the level of the first painful roller) was determined, and pain intensity during DPT was rated on a numerical pain rating scale (NPRS, 0-10). Headache clinical features were collected in a headache diary. As the gold standard, static pressure pain thresholds (PPT) were assessed over the temporalis, the C5/C6 joint, the second metacarpal, and the tibialis anterior muscle. Side-to-side consistency of DPT was high (r = 0.843), and DPT was associated with widespread PPTs (r > 0.656) independently of the frequency of headaches, supporting that deep dynamic pressure sensitivity within the trigeminal area is consistent with widespread pressure sensitivity. Assessing deep static and dynamic somatic tissue pain sensitivity may provide new opportunities for differentiated diagnostics and possibly a new tool for assessing treatment effects. The current study found that dynamic pressure algometry in the temporalis muscle was associated with widespread pressure pain sensitivity in individuals with tension-type headache. The association was independent of the frequency of headaches. Assessing deep static and dynamic somatic tissue pain sensitivity may provide new opportunities for differentiated diagnostics and possibly a tool for assessing treatment effects. © 2017 European Pain Federation - EFIC®.

  8. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks

    Science.gov (United States)

    Ienco, Dino; Gaetano, Raffaele; Dupaquier, Claire; Maurel, Pierre

    2017-10-01

    Nowadays, modern earth observation programs produce huge volumes of satellite image time series (SITS) that can be useful for monitoring geographical areas through time. How to efficiently analyze such kind of information is still an open question in the remote sensing field. Recently, deep learning methods have proved suitable to deal with remote sensing data, mainly for scene classification (i.e. Convolutional Neural Networks - CNNs - on single images), while only very few studies exist involving temporal deep learning approaches (i.e. Recurrent Neural Networks - RNNs) to deal with remote sensing time series. In this letter we evaluate the ability of Recurrent Neural Networks, in particular the Long Short-Term Memory (LSTM) model, to perform land cover classification considering multi-temporal spatial data derived from a time series of satellite images. We carried out experiments on two different datasets considering both pixel-based and object-based classification. The obtained results show that Recurrent Neural Networks are competitive compared to state-of-the-art classifiers, and may outperform classical approaches in the presence of low-represented and/or highly mixed classes. We also show that using the alternative feature representation generated by the LSTM can improve the performance of standard classifiers.
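
    A pixel-based variant of this approach can be sketched in a few lines of PyTorch: each sample is a sequence of per-date features, and the final LSTM hidden state feeds a linear classifier. The dimensions below (12 dates, 10 features, 8 classes) are arbitrary, and this is a structural illustration rather than the authors' network.

        import torch
        import torch.nn as nn

        class LandCoverLSTM(nn.Module):
            """Classify a pixel from its multi-temporal feature sequence."""
            def __init__(self, n_features, n_classes, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):               # x: (batch, n_dates, n_features)
                _, (h, _) = self.lstm(x)        # h: final hidden state
                return self.head(h[-1])         # class logits

        model = LandCoverLSTM(n_features=10, n_classes=8)
        logits = model(torch.randn(32, 12, 10))  # 32 pixels, 12 acquisition dates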

  9. Joint OSNR monitoring and modulation format identification in digital coherent receivers using deep neural networks.

    Science.gov (United States)

    Khan, Faisal Nadeem; Zhong, Kangping; Zhou, Xian; Al-Arashi, Waled Hussein; Yu, Changyuan; Lu, Chao; Lau, Alan Pak Tao

    2017-07-24

    We experimentally demonstrate the use of deep neural networks (DNNs) in combination with signals' amplitude histograms (AHs) for simultaneous optical signal-to-noise ratio (OSNR) monitoring and modulation format identification (MFI) in digital coherent receivers. The proposed technique automatically extracts the OSNR- and modulation-format-dependent features of AHs, obtained after constant modulus algorithm (CMA) equalization, and exploits them for the joint estimation of these parameters. Experimental results for 112 Gbps polarization-multiplexed (PM) quadrature phase-shift keying (QPSK), 112 Gbps PM 16 quadrature amplitude modulation (16-QAM), and 240 Gbps PM 64-QAM signals demonstrate OSNR monitoring with mean estimation errors of 1.2 dB, 0.4 dB, and 1 dB, respectively. Similarly, the results for MFI show 100% identification accuracy for all three modulation formats. The proposed technique applies deep machine learning algorithms inside a standard digital coherent receiver and does not require any additional hardware. Therefore, it is attractive for cost-effective multi-parameter estimation in next-generation elastic optical networks (EONs).
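
    The joint estimation can be pictured as one shared network with two heads, one regressing OSNR and one classifying the modulation format. The sketch below shows that multi-task structure only; the histogram bin count and layer sizes are invented, and this is not the authors' exact architecture.

        import torch
        import torch.nn as nn

        class JointOSNRMFINet(nn.Module):
            """Shared trunk over amplitude-histogram bins with two task heads."""
            def __init__(self, n_bins=80, n_formats=3):
                super().__init__()
                self.trunk = nn.Sequential(
                    nn.Linear(n_bins, 128), nn.ReLU(),
                    nn.Linear(128, 64), nn.ReLU(),
                )
                self.osnr_head = nn.Linear(64, 1)            # regression: OSNR in dB
                self.format_head = nn.Linear(64, n_formats)  # classification logits

            def forward(self, hist):
                z = self.trunk(hist)
                return self.osnr_head(z).squeeze(-1), self.format_head(z)

        osnr, fmt_logits = JointOSNRMFINet()(torch.rand(16, 80))
        # Training would minimize e.g. MSE on OSNR plus cross-entropy on the format.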

  10. Outcomes of the DeepWind conceptual design

    NARCIS (Netherlands)

    Paulsen, US; Borg, M.; Madsen, HA; Pedersen, TF; Hattel, J; Ritchie, E.; Simao Ferreira, C.; Svendsen, H.; Berthelsen, P.A.; Smadja, C.

    2015-01-01

    DeepWind has been presented as a novel floating offshore wind turbine concept with cost reduction potentials. Twelve international partners developed a Darrieus type floating turbine with new materials and technologies for deep-sea offshore environment. This paper summarizes results of the 5 MW

  11. Earthquakes - a danger to deep-lying repositories?

    International Nuclear Information System (INIS)

    2012-03-01

    This booklet issued by the Swiss National Cooperative for the Disposal of Radioactive Waste NAGRA takes a look at geological factors concerning earthquakes and the safety of deep-lying repositories for nuclear waste. The geological processes involved in the occurrence of earthquakes are briefly looked at and the definitions for magnitude and intensity of earthquakes are discussed. Examples of damage caused by earthquakes are given. The earthquake situation in Switzerland is looked at and the effects of earthquakes on sub-surface structures and deep-lying repositories are discussed. Finally, the ideas proposed for deep-lying geological repositories for nuclear wastes are discussed

  12. Narrowing the uncertainty for deep-ocean injection efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Orr, J.C.; Aumont, O. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS, Gif-sur-Yvette (France); Yool, A. [Southampton Oceanography Centre, Southampton (United Kingdom); Plattner, G.K.; Joos, F. [Bern Univ., Bern (Switzerland). Physics Inst.; Maier-Reimer, E. [Max Planck Inst. fuer Meteorologie, Hamburg (Germany); Weirig, M.F.; Schlitzer, R. [Alfred Wegener Inst. for Polar and Marine Research, Bremerhaven (Germany); Caldeira, K.; Wickett, M.E. [Lawrence Livermore National Laboratory, CA (United States); Matear, R.J. [Australian Commonwealth Scientific and Research Organization, Hobart (Australia); Mignone, B.K.; Sarmiento, J.L. [Princeton Univ., Princeton, NJ (United States). AOS Program

    2005-07-01

    Ten ocean general circulation models (OGCMs) were compared as part of an international study investigating the ocean's ability to efficiently sequester carbon dioxide (CO₂). The models were selected for their ability to simulate radiocarbon and CFC-11. All of the model simulations neglected the influence of marine biota, and the simulations used only dissolved inorganic carbon (DIC) as a tracer in order to conserve computing resources. The models were integrated using standard Ocean Carbon-Cycle Model Intercomparison Project (OCMIP) formulations for gas exchange boundary conditions to obtain pre-industrial conditions. All models used the same predefined atmospheric CO₂ records compiled from 1765 to 2000, as well as future scenarios in which atmospheric CO₂ was stabilized at 650 ppm. Injections occurred over a period of 100 years. Results of the study showed that global budgets for CFC-11 and radiocarbon were correlated with global efficiencies for a 3000 m injection simulation. The 3000 m injection efficiency was then correlated with the global mean for deep natural radiocarbon. Results showed that simultaneously accounting for constraints from both CFC-11 and natural radiocarbon narrowed the range for the 3000 m injection efficiency in the year 2500 by a factor of 4. The study showed that models must be able to simulate global inventories for CFC-11 as well as global means for radiocarbon in deep ocean scenarios in order to be credible. It was concluded that models using both constraints will more accurately simulate global injection efficiencies.

  13. DRREP: deep ridge regressed epitope predictor.

    Science.gov (United States)

    Sher, Gene; Zhi, Degui; Zhang, Shaojie

    2017-10-03

    The ability to predict epitopes plays an enormous role in vaccine development in terms of our ability to zero in on where to do a more thorough in-vivo analysis of the protein in question. Though for the past decade there have been numerous advancements and improvements in epitope prediction, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domains of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained, string-kernel-based deep neural network tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state-of-the-art epitope predictors, including the most recently published predictors, LBtope and DMNLBE. Using area under the ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state-of-the-art predictors.
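
    The "analytic training" that distinguishes DRREP from iteratively trained networks boils down to solving a regularized linear system for the output weights. The sketch below shows that closed form over generic numeric features; DRREP itself applies it on top of string-kernel convolutional features, and the data here are synthetic placeholders.

        import numpy as np

        def ridge_fit(X, y, lam=1.0):
            """Single-step analytic training: w = (X^T X + lam*I)^-1 X^T y."""
            d = X.shape[1]
            return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

        rng = np.random.default_rng(4)
        X = rng.random((500, 32))                       # stand-in feature matrix
        y = rng.integers(0, 2, 500).astype(float)       # stand-in epitope labels
        w = ridge_fit(X, y)                             # no gradient descent involved
        scores = X @ w                                  # residue-level scores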

  14. Photon Detection System Designs for the Deep Underground Neutrino Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Whittington, Denver [Indiana U.

    2015-11-19

    The Deep Underground Neutrino Experiment (DUNE) will be a premier facility for exploring long-standing questions about the boundaries of the standard model. Acting in concert with the liquid argon time projection chambers underpinning the far detector design, the DUNE photon detection system will capture ultraviolet scintillation light in order to provide valuable timing information for event reconstruction. To maximize the active area while maintaining a small photocathode coverage, the experiment will utilize a design based on plastic light guides coated with a wavelength-shifting compound, along with silicon photomultipliers, to collect and record scintillation light from liquid argon. This report presents recent preliminary performance measurements of this baseline design and several alternative designs which promise significant improvements in sensitivity to low-energy interactions.

  15. Ploughing the deep sea floor.

    Science.gov (United States)

    Puig, Pere; Canals, Miquel; Company, Joan B; Martín, Jacobo; Amblas, David; Lastras, Galderic; Palanques, Albert

    2012-09-13

    Bottom trawling is a non-selective commercial fishing technique whereby heavy nets and gear are pulled along the sea floor. The direct impact of this technique on fish populations and benthic communities has received much attention, but trawling can also modify the physical properties of seafloor sediments, water–sediment chemical exchanges and sediment fluxes. Most of the studies addressing the physical disturbances of trawl gear on the seabed have been undertaken in coastal and shelf environments, however, where the capacity of trawling to modify the seafloor morphology coexists with high-energy natural processes driving sediment erosion, transport and deposition. Here we show that on upper continental slopes, the reworking of the deep sea floor by trawling gradually modifies the shape of the submarine landscape over large spatial scales. We found that trawling-induced sediment displacement and removal from fishing grounds causes the morphology of the deep sea floor to become smoother over time, reducing its original complexity as shown by high-resolution seafloor relief maps. Our results suggest that in recent decades, following the industrialization of fishing fleets, bottom trawling has become an important driver of deep seascape evolution. Given the global dimension of this type of fishery, we anticipate that the morphology of the upper continental slope in many parts of the world’s oceans could be altered by intensive bottom trawling, producing comparable effects on the deep sea floor to those generated by agricultural ploughing on land.

  16. Parallel Distributed Processing Theory in the Age of Deep Networks.

    Science.gov (United States)

    Bowers, Jeffrey S

    2017-12-01

    Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.

  17. DEEP VADOSE ZONE TREATABILITY TEST PLAN

    International Nuclear Information System (INIS)

    Chronister, G.B.; Truex, M.J.

    2009-01-01

    • Treatability test plan published in 2008.
    • Outlines technology treatability activities for evaluating the application of in situ technologies and surface barriers to deep vadose zone contamination (technetium and uranium).
    • Key elements: desiccation testing; testing of gas-delivered reactants for in situ treatment of uranium; evaluating surface barrier application to the deep vadose zone; evaluating in situ grouting and soil flushing.

  18. The solar neighborhood. XXXIV. A search for planets orbiting nearby M dwarfs using astrometry

    International Nuclear Information System (INIS)

    Lurie, John C.; Henry, Todd J.; Ianna, Philip A.; Jao, Wei-Chun; Quinn, Samuel N.; Winters, Jennifer G.; Koerner, David W.; Riedel, Adric R.; Subasavage, John P.

    2014-01-01

    Astrometric measurements are presented for seven nearby stars with previously detected planets: six M dwarfs (GJ 317, GJ 667C, GJ 581, GJ 849, GJ 876, and GJ 1214) and one K dwarf (BD-10 3166). Measurements are also presented for six additional nearby M dwarfs without known planets, but which are more favorable to astrometric detections of low-mass companions, as well as three binary systems for which we provide astrometric orbit solutions. Observations have baselines of 3 to 13 years, and were made as part of the RECONS long-term astrometry and photometry program at the CTIO/SMARTS 0.9 m telescope. We provide trigonometric parallaxes and proper motions for all 16 systems, and perform an extensive analysis of the astrometric residuals to determine the minimum detectable companion mass for the 12 M dwarfs not having close stellar secondaries. For the six M dwarfs with known planets, we are not sensitive to planets, but can rule out the presence of all but the least massive brown dwarfs at periods of 2–12 years. For the six more astrometrically favorable M dwarfs, we conclude that none have brown dwarf companions, and are sensitive to companions with masses as low as 1 M_Jup for periods longer than two years. In particular, we conclude that Proxima Centauri has no Jovian companions at orbital periods of 2–12 years. These results complement previously published M dwarf planet occurrence rates by providing astrometrically determined upper mass limits on potential super-Jupiter companions at orbits of two years and longer. As part of a continuing survey, these results are consistent with the paucity of super-Jupiter and brown dwarf companions we find among the over 250 red dwarfs within 25 pc observed longer than five years in our astrometric program.

  19. The solar neighborhood. XXXIV. A search for planets orbiting nearby M dwarfs using astrometry

    Energy Technology Data Exchange (ETDEWEB)

    Lurie, John C. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Henry, Todd J.; Ianna, Philip A. [RECONS Institute, Chambersburg, PA 17201 (United States); Jao, Wei-Chun; Quinn, Samuel N.; Winters, Jennifer G. [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30302 (United States); Koerner, David W. [Department of Physics and Astronomy, Northern Arizona University, Flagstaff, AZ 86011 (United States); Riedel, Adric R. [Department of Astrophysics, American Museum of Natural History, New York, NY 10034 (United States); Subasavage, John P., E-mail: lurie@uw.edu [United States Naval Observatory, Flagstaff, AZ 86001 (United States)

    2014-11-01

    Astrometric measurements are presented for seven nearby stars with previously detected planets: six M dwarfs (GJ 317, GJ 667C, GJ 581, GJ 849, GJ 876, and GJ 1214) and one K dwarf (BD-10 3166). Measurements are also presented for six additional nearby M dwarfs without known planets, but which are more favorable to astrometric detections of low-mass companions, as well as three binary systems for which we provide astrometric orbit solutions. Observations have baselines of 3 to 13 years, and were made as part of the RECONS long-term astrometry and photometry program at the CTIO/SMARTS 0.9 m telescope. We provide trigonometric parallaxes and proper motions for all 16 systems, and perform an extensive analysis of the astrometric residuals to determine the minimum detectable companion mass for the 12 M dwarfs not having close stellar secondaries. For the six M dwarfs with known planets, we are not sensitive to planets, but can rule out the presence of all but the least massive brown dwarfs at periods of 2–12 years. For the six more astrometrically favorable M dwarfs, we conclude that none have brown dwarf companions, and are sensitive to companions with masses as low as 1 M_Jup for periods longer than two years. In particular, we conclude that Proxima Centauri has no Jovian companions at orbital periods of 2–12 years. These results complement previously published M dwarf planet occurrence rates by providing astrometrically determined upper mass limits on potential super-Jupiter companions at orbits of two years and longer. As part of a continuing survey, these results are consistent with the paucity of super-Jupiter and brown dwarf companions we find among the over 250 red dwarfs within 25 pc observed longer than five years in our astrometric program.

  20. Deep Learning in Visual Computing and Signal Processing

    OpenAIRE

    Xie, Danfeng; Zhang, Lei; Bai, Li

    2017-01-01

    Deep learning is a subfield of machine learning, which aims to learn a hierarchy of features from input data. Nowadays, researchers have intensively investigated deep learning algorithms for solving challenging problems in many areas such as image classification, speech recognition, signal processing, and natural language processing. In this study, we not only review typical deep learning algorithms in computer vision and signal processing but also provide detailed information on how to apply...

  1. Deep Visual Attention Prediction

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements in human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is designed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. The model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. The model is trained with deep supervision, where supervision is fed directly into multiple levels of the network, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. The model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of earlier approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
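
    To make the deep-supervision idea concrete, here is a minimal sketch in Keras (the stack other records in this list also use): side outputs are taken from several convolutional stages, upsampled to full resolution, and each trained directly against the fixation map before being fused. The layer sizes, input shape, and averaging fusion are illustrative assumptions, not the paper's exact architecture.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Side outputs from three conv stages, each supervised directly
    # (illustrative sizes; not the architecture from the paper).
    inp = layers.Input(shape=(224, 224, 3))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    s1 = layers.Conv2D(1, 1, activation="sigmoid", name="side1")(x)   # fine, local
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    s2 = layers.UpSampling2D(2)(layers.Conv2D(1, 1, activation="sigmoid")(x))
    s2 = layers.Activation("linear", name="side2")(s2)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    s3 = layers.UpSampling2D(4)(layers.Conv2D(1, 1, activation="sigmoid")(x))
    s3 = layers.Activation("linear", name="side3")(s3)                # coarse, global
    fused = layers.Average(name="fused")([s1, s2, s3])                # global + local

    model = tf.keras.Model(inp, [s1, s2, s3, fused])
    model.compile(optimizer="adam",
                  loss={"side1": "binary_crossentropy", "side2": "binary_crossentropy",
                        "side3": "binary_crossentropy", "fused": "binary_crossentropy"})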

  2. Observed travel times for Multiply-reflected ScS waves from a deep-focus earthquake

    Directory of Open Access Journals (Sweden)

    A. F. ESPINOSA

    1966-06-01

    Full Text Available The deep-focus Argentinean earthquake of December 8, 1962, generated multiply reflected ScS phases which were recorded very clearly at stations of the IGY and the USC&GS standardized worldwide networks and at Canadian stations. The data gathered from this earthquake for the multiply-reflected ScS and sScS were used to construct the travel times and to extend them to shorter epicentral distances. These new data brought to light an error in published travel times for the 2(ScS) phase.

  3. Reliability considerations of electronics components for the deep underwater muon and neutrino detection system

    International Nuclear Information System (INIS)

    Leskovar, B.

    1980-02-01

    The reliability of some electronics components for the Deep Underwater Muon and Neutrino Detection (DUMAND) System is discussed. An introductory overview of engineering concepts and techniques for reliability assessment is given. Component reliability is discussed in the context of the major factors causing failures, particularly with respect to physical and chemical causes, process technology and testing, and screening procedures. Failure rates are presented for discrete devices and for integrated circuits as well as for basic electronics components. Furthermore, the military reliability specifications and standards for semiconductor devices are reviewed.
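
    The failure-rate material lends itself to a small worked example. Below is a minimal sketch of the standard constant-failure-rate (exponential) model used in such reliability assessments: for a series system, component failure rates add, giving the system MTBF and mission reliability. The component names and rates are hypothetical placeholders, not values from the report.

    import math

    # Hypothetical failure rates (failures per 10^6 hours) for a series
    # system in which any single component failure fails the module.
    failure_rates_per_1e6_h = {
        "photodetector_base": 2.0,
        "preamplifier_ic": 0.5,
        "discrete_transistors": 0.3,
        "passives": 0.1,
    }

    # Under the exponential model, the series-system rate is the sum.
    lam = sum(failure_rates_per_1e6_h.values()) / 1e6   # failures per hour
    mtbf_hours = 1.0 / lam                              # mean time between failures
    mission_h = 5 * 365 * 24                            # e.g. a five-year deployment
    reliability = math.exp(-lam * mission_h)            # P(no failure over mission)

    print(f"MTBF ~ {mtbf_hours:,.0f} h; R(5 yr) ~ {reliability:.3f}")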

  4. Deep processes in non-relativistic confining potentials

    International Nuclear Information System (INIS)

    Fishbane, P.M.; Grisaru, M.T.

    1978-01-01

    The authors study deep inelastic and hard scattering processes for non-relativistic particles confined in deep potentials. The mechanisms by which the effects of confinement disappear and the particles scatter as if free are useful in understanding the analogous results for a relativistic field theory. (Auth.)

  5. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  6. pathways to deep decarbonization - 2014 report

    International Nuclear Information System (INIS)

    Sachs, Jeffrey; Guerin, Emmanuel; Mas, Carl; Schmidt-Traub, Guido; Tubiana, Laurence; Waisman, Henri; Colombier, Michel; Bulger, Claire; Sulakshana, Elana; Zhang, Kathy; Barthelemy, Pierre; Spinazze, Lena; Pharabod, Ivan

    2014-09-01

    The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (deg. C). Achieving the 2 deg. C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. This will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization.' Successfully transitioning to a low-carbon economy will require unprecedented global cooperation, including a global cooperative effort to accelerate the development and diffusion of some key low-carbon technologies. As underscored throughout this report, the results of the DDPP analyses remain preliminary and incomplete. The DDPP proceeds in two phases. This 2014 report describes the DDPP's approach to deep decarbonization at the country level and presents preliminary findings on technically feasible pathways to deep decarbonization, utilizing technology assumptions and timelines provided by the DDPP Secretariat. At this stage we have not yet considered the economic and social costs and benefits of deep decarbonization, which will be the topic of the next report. The DDPP is issuing this 2014 report to UN Secretary-General Ban Ki-moon in support of the Climate Leaders' Summit at the United Nations on September 23, 2014. This 2014 report by the Deep Decarbonization Pathways Project (DDPP) summarizes preliminary findings of the technical pathways developed by the DDPP Country Research Partners with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C, without, at this stage, consideration of economic and social costs and benefits. The DDPP is a knowledge

  7. Evolutionary process of deep-sea bathymodiolus mussels.

    Directory of Open Access Journals (Sweden)

    Jun-Ichi Miyazaki

    Full Text Available BACKGROUND: Since the discovery of deep-sea chemosynthesis-based communities, much work has been done to clarify their organismal and environmental aspects. However, major topics remain to be resolved, including when and how organisms invade and adapt to deep-sea environments; whether strategies for invasion and adaptation are shared by different taxa or unique to each taxon; how organisms extend their distribution and diversity; and how they become isolated to speciate in continuous waters. Deep-sea mussels are one of the dominant organisms in chemosynthesis-based communities, thus investigations of their origin and evolution contribute to resolving questions about life in those communities. METHODOLOGY/PRINCIPAL FINDINGS: We investigated worldwide phylogenetic relationships of deep-sea Bathymodiolus mussels and their mytilid relatives by analyzing nucleotide sequences of the mitochondrial cytochrome c oxidase subunit I (COI) and NADH dehydrogenase subunit 4 (ND4) genes. Phylogenetic analysis of the concatenated sequence data showed that mussels of the subfamily Bathymodiolinae from vents and seeps were divided into four groups, and that mussels of the subfamily Modiolinae from sunken wood and whale carcasses assumed the outgroup position, with shallow-water modioline mussels positioned more distantly to the bathymodioline mussels. We provisionally hypothesized the evolutionary history of Bathymodiolus mussels by estimating evolutionary time under a relaxed molecular clock model. Diversification of bathymodioline mussels was initiated in the early Miocene, and subsequent diversification of the groups occurred in the early to middle Miocene. CONCLUSIONS/SIGNIFICANCE: The phylogenetic relationships support the "Evolutionary stepping stone hypothesis," in which mytilid ancestors exploited sunken wood and whale carcasses in their progressive adaptation to deep-sea environments. This hypothesis is also supported by the evolutionary transition of
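
    As a first-pass illustration of the kind of analysis described (the study itself used model-based methods and a relaxed molecular clock), a simple distance-based tree can be built from an alignment of concatenated COI+ND4 sequences with Biopython; the input file name below is hypothetical.

    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

    # Hypothetical aligned FASTA of concatenated COI+ND4 mussel sequences.
    alignment = AlignIO.read("mussels_coi_nd4.fasta", "fasta")

    dm = DistanceCalculator("identity").get_distance(alignment)  # p-distances
    tree = DistanceTreeConstructor().nj(dm)                      # neighbor joining

    Phylo.draw_ascii(tree)  # group structure should emerge if the signal is strong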

  8. Deep water recycling through time.

    Science.gov (United States)

    Magni, Valentina; Bouilhol, Pierre; van Hunen, Jeroen

    2014-11-01

    We investigate the dehydration processes in subduction zones and their implications for the water cycle throughout Earth's history. We use a numerical tool that combines thermo-mechanical models with a thermodynamic database to examine slab dehydration for present-day and early Earth settings and its consequences for deep water recycling. We investigate the reactions responsible for releasing water from the crust and the hydrated lithospheric mantle, and how they change with subduction velocity (v_s), slab age (a) and mantle temperature (T_m). Our results show that faster slabs dehydrate over a wider area: they start dehydrating shallower and they carry water deeper into the mantle. We parameterize the amount of water that can be carried deep into the mantle, W (×10^5 kg/m^2), as a function of v_s (cm/yr), a (Myr), and T_m (°C): [Formula: see text]. We generally observe that 1) a 100°C increase in the mantle temperature, 2) a ∼15 Myr decrease of plate age, or 3) a decrease in subduction velocity of ∼2 cm/yr all have the same effect on the amount of water retained in the slab at depth, corresponding to a decrease of ∼2.2×10^5 kg/m^2 of H2O. We estimate that for present-day conditions ∼26% of the global influx water, or 7×10^8 Tg/Myr of H2O, is recycled into the mantle. Using a realistic distribution of subduction parameters, we illustrate that deep water recycling might still be possible in early Earth conditions, although its efficiency would generally decrease. Indeed, 0.5–3.7×10^8 Tg/Myr of H2O could still be recycled into the mantle at 2.8 Ga. Deep water recycling might be possible even in early Earth conditions. We provide a scaling law to estimate the amount of H2O flux deep into the mantle. Subduction velocity has a major control on the crustal dehydration pattern.
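
    The scaling law itself is elided in the abstract ("[Formula: see text]"), so it is not reproduced here; what the abstract does state is the set of equivalent sensitivities, which can be encoded as a first-order sketch around an assumed present-day reference state:

    # First-order sensitivity sketch; the paper's actual scaling law is
    # elided in the abstract, so only the stated equivalences are encoded:
    # +100 degC mantle temperature, -15 Myr plate age, or -2 cm/yr slab
    # velocity each reduce deep water delivery by ~2.2e5 kg/m^2.
    DW = 2.2e5  # kg/m^2 of H2O per one "unit" of each stated change

    def delta_water(d_T_m_degC, d_age_Myr, d_v_s_cm_yr):
        """Approximate change in W (kg/m^2), linearized about present day."""
        return DW * (-d_T_m_degC / 100.0 + d_age_Myr / 15.0 + d_v_s_cm_yr / 2.0)

    # Example: a mantle ~200 degC hotter with slabs ~30 Myr younger than a
    # present-day reference -> roughly -8.8e5 kg/m^2 of H2O carried to depth.
    print(delta_water(d_T_m_degC=200.0, d_age_Myr=-30.0, d_v_s_cm_yr=0.0))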

  9. WFC3/UVIS image skew

    Science.gov (United States)

    Petro, Larry

    2009-07-01

    This proposal will provide an independent check of the skew in the ACS astrometric catalog of Omega Cen stars, using exposures taken in a 45-deg range of telescope roll. The roll sequence will also provide a test for orbital variation of skew and field-angle-dependent PSF variations. The astrometric catalog of Omega Cen, corrected for skew, will be used to derive the geometric distortion for all UVIS filters, which has preliminarily been determined from F606W images and an astrometric catalog of 47 Tuc.
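
    A skew check of this kind reduces to fitting a six-parameter linear plate solution between matched star lists and decomposing the linear part into scales, rotation, and skew. The sketch below (generic least squares with hypothetical array names, not the proposal's pipeline) shows one way to do it:

    import numpy as np

    def fit_linear_solution(xy_obs, xy_cat):
        """Least-squares six-parameter solution: cat ~ A @ obs + c."""
        n = len(xy_obs)
        M = np.hstack([xy_obs, np.ones((n, 1))])           # [x, y, 1] design matrix
        coef, *_ = np.linalg.lstsq(M, xy_cat, rcond=None)  # (3, 2) coefficients
        return coef[:2].T, coef[2]                         # A (2x2), offset c

    def skew_terms(A):
        """Split A into x/y scales, mean rotation, and skew (axis non-orthogonality)."""
        sx, sy = np.hypot(A[0, 0], A[1, 0]), np.hypot(A[0, 1], A[1, 1])
        rot_x = np.arctan2(A[1, 0], A[0, 0])
        rot_y = np.arctan2(-A[0, 1], A[1, 1])
        return sx, sy, 0.5 * (rot_x + rot_y), rot_x - rot_y

    # Hypothetical usage with two matched (N, 2) pixel/catalog position arrays:
    # A, c = fit_linear_solution(pix_positions, catalog_positions)
    # sx, sy, rotation, skew = skew_terms(A)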

  10. Recovery from deep-plane rhytidectomy following unilateral wound treatment with autologous platelet gel: a pilot study.

    Science.gov (United States)

    Powell, D M; Chang, E; Farrior, E H

    2001-01-01

    OBJECTIVE: To determine the effects of treatment with autologous platelet-rich plasma mixed with thrombin and calcium chloride to form an autologous platelet gel (APG) on postoperative recovery from deep-plane rhytidectomy. DESIGN: A prospective, randomized, controlled pilot study. SETTING: An accredited ambulatory facial plastic surgery center. PATIENTS: Healthy volunteer women (N = 8) undergoing rhytidectomy. INTERVENTION: Unilateral autologous platelet-rich plasma wound treatment during standard deep-plane rhytidectomy. MAIN OUTCOME MEASURES: Staged postoperative facial photographs were graded in a blinded fashion by 3 facial plastic surgeon reviewers for postoperative ecchymosis and edema. Each facial side treated with APG that demonstrated less edema or ecchymosis than the non-APG-treated side was designated a positive response; otherwise, the response was equal (no difference) or negative (untreated side had less edema or ecchymosis). RESULTS: Twenty-one positive and 21 equal responses were observed compared with 8 negative ones. Of 20 unanimous observations, 15 were positive, only 3 equal, and 1 negative. CONCLUSIONS: Treatment with APG may prevent or improve edema or ecchymosis after deep-plane rhytidectomy. This trend is more apparent for ecchymosis than for edema, and is chiefly demonstrable in the early phases of recovery. These observations are consistent with previous reports of cell tissue culture and wound response to concentrated platelet product.
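
    The abstract does not say which statistical test, if any, was applied to the 21 positive / 21 equal / 8 negative grades; one simple, hedged way to gauge the positive-versus-negative imbalance is a two-sided sign test that ignores the ties:

    from scipy.stats import binomtest

    # Counts reported in the abstract: 21 positive, 21 equal (ties), 8 negative.
    positive, negative = 21, 8

    # Two-sided sign test ignoring ties (purely illustrative; not the
    # analysis from the paper).
    result = binomtest(positive, n=positive + negative, p=0.5)
    print(f"p = {result.pvalue:.4f}")  # a small p favors the APG-treated sides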

  11. Stratification-Based Outlier Detection over the Deep Web

    OpenAIRE

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S.; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection does not consider the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve the corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribu...
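
    Since the abstract is truncated, the sketch below illustrates only the generic idea the title names, not the authors' algorithm: stratify the retrieved records, then flag values that sit far from their own stratum's mean.

    import numpy as np
    from collections import defaultdict

    def stratified_outliers(records, key, value, z_thresh=3.0):
        """Toy stratification-based outlier detection (illustrative only):
        group records by a stratum key, then flag within-stratum outliers."""
        strata = defaultdict(list)
        for r in records:
            strata[r[key]].append(r)
        flagged = []
        for group in strata.values():
            vals = np.array([r[value] for r in group], dtype=float)
            mu, sigma = vals.mean(), vals.std()
            if sigma == 0:
                continue  # a degenerate stratum cannot flag outliers
            flagged += [r for r, v in zip(group, vals)
                        if abs(v - mu) > z_thresh * sigma]
        return flagged

    # Hypothetical usage on rows retrieved through a deep-web query interface:
    # outliers = stratified_outliers(rows, key="category", value="price")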

  12. Deep neural networks to enable real-time multimessenger astrophysics

    Science.gov (United States)

    George, Daniel; Huerta, E. A.

    2018-02-01

    Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering—a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, designed for classification and regression, that detect gravitational wave signals in highly noisy time-series data streams and also estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering to whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves performance similar to matched filtering while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches for gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is applied directly to identify and characterize gravitational wave events in real LIGO data.
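
    In the spirit of the detector half of Deep Filtering, a small one-dimensional convolutional classifier over whitened time series can be sketched in Keras as below; the input length and layer sizes are assumptions for illustration, not the authors' published architecture.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Minimal 1-D CNN detector: input is a whitened strain time series
    # (length 8192 assumed, one channel); output is P(signal present).
    def build_detector(input_len=8192):
        return tf.keras.Sequential([
            layers.Input(shape=(input_len, 1)),
            layers.Conv1D(16, 16, activation="relu"),
            layers.MaxPooling1D(4),
            layers.Conv1D(32, 8, activation="relu"),
            layers.MaxPooling1D(4),
            layers.Conv1D(64, 8, activation="relu"),
            layers.MaxPooling1D(4),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),
        ])

    model = build_detector()
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, ...)  # x_train: (N, 8192, 1) noisy series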

  13. Avalanches of sediment form deep-marine depositions

    NARCIS (Netherlands)

    Pohl, Florian

    2017-01-01

    The deep ocean is the largest sedimentary basin on the planet. It serves as the primary storage point for all terrestrially weathered sediment that makes it beyond the near-shore environment. These deep-marine offshore deposits have become a focus of attention in exploration due to the

  14. Equivalent drawbead performance in deep drawing simulations

    NARCIS (Netherlands)

    Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han

    1999-01-01

    Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the

  15. Deep web search: an overview and roadmap

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2011-01-01

    We review the state-of-the-art in deep web search and propose a novel classification scheme to better compare deep web search systems. The current binary classification (surfacing versus virtual integration) hides a number of implicit decisions that must be made by a developer. We make these

  16. Age-dependent mixing of deep-sea sediments

    International Nuclear Information System (INIS)

    Smith, C.R.; Maggaard, L.; Pope, R.H.; DeMaster, D.J.

    1993-01-01

    Rates of bioturbation measured in deep-sea sediments commonly are tracer dependent; in particular, shorter-lived radiotracers (such as 234Th) often yield markedly higher diffusive mixing coefficients than their longer-lived counterparts (e.g., 210Pb). At a single station in the 1,240-m-deep Santa Catalina Basin, the authors document a strong negative correlation between bioturbation rate and tracer half-life. Sediment profiles of 234Th (half-life = 24 days) yield an average mixing coefficient (60 cm^2/yr) two orders of magnitude greater than that for 210Pb (half-life = 22 yr, mean mixing coefficient = 0.4 cm^2/yr). A similar negative relationship between mixing rate and tracer time scale is observed at thirteen other deep-sea sites in which multiple radiotracers have been used to assess diffusive mixing rates. This relationship holds across a variety of radiotracer types and time scales. The authors hypothesize that this negative relationship results from age-dependent mixing, a process in which recently sedimented, food-rich particles are ingested and mixed at higher rates by deposit feeders than are older, food-poor particles. Results from an age-dependent mixing model demonstrate that this process indeed can yield the bioturbation-rate vs. tracer-time-scale correlations observed in deep-sea sediments. Field data on mixing rates of recently sedimented particles, as well as the radiotracer activity of deep-sea deposit feeders, provide strong support for the age-dependent mixing model. The presence of age-dependent mixing in deep-sea sediments may have major implications for diagenetic modeling, requiring a match between the characteristic time scales of mixing tracers and modeled reactants. 102 refs., 6 figs., 5 tabs
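
    A quick back-of-the-envelope check makes the tracer dependence concrete: each tracer records mixing over roughly its own mean life, so the diffusive penetration scale it can register is about sqrt(D_b * tau). Using the two values quoted above:

    import math

    # Mixing coefficients and half-lives quoted for the Santa Catalina Basin.
    tracers = {
        #          D_b (cm^2/yr), half-life (yr)
        "Th-234": (60.0, 24.0 / 365.0),
        "Pb-210": (0.4, 22.0),
    }

    for name, (D, t_half) in tracers.items():
        tau = t_half / math.log(2)   # mean life
        L = math.sqrt(D * tau)       # diffusive penetration scale (cm)
        print(f"{name}: D_b = {D} cm^2/yr -> L ~ {L:.1f} cm")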

  17. Deep-sea Hexactinellida (Porifera) of the Weddell Sea

    Science.gov (United States)

    Janussen, Dorte; Tabachnick, Konstantin R.; Tendal, Ole S.

    2004-07-01

    New Hexactinellida from the deep Weddell Sea are described. This moderately diverse hexactinellid fauna includes 14 species belonging to 12 genera, of which five species and one subgenus are new to science: Periphragella antarctica n. sp., Holascus pseudostellatus n. sp., Caulophacus (Caulophacus) discohexactinus n. sp., C. (Caulodiscus) brandti n. sp., C. (Oxydiscus) weddelli n. sp., and C. (Oxydiscus) n. subgen. So far, 20 hexactinellid species have been reported from the deep Weddell Sea: 15 are known from the northern part (10 only from there), while 10 come from the southern area (five only from there). However, this apparently high "endemism" of Antarctic hexactinellid sponges is most likely the result of severe undersampling of the deep-sea fauna. We find no reason to believe that a division between an oceanic and a more continental group of species exists. The current poor database indicates that a substantial part of the deep hexactinellid fauna of the Weddell Sea is shared with other deep-sea regions, but it does not indicate a special biogeographic relationship with any other ocean.

  18. Evaluation of persistence of resistant variants with ultra-deep pyrosequencing in chronic hepatitis C patients treated with telaprevir.

    Directory of Open Access Journals (Sweden)

    Xiomara V Thomas

    Full Text Available BACKGROUND & AIMS: Telaprevir, a hepatitis C virus NS3/4A protease inhibitor, has significantly improved sustained viral response rates when given in combination with pegylated interferon alfa-2a and ribavirin, compared with the current standard of care in hepatitis C virus genotype 1 infected patients. In patients with a failed sustained response, the emergence of drug-resistant variants during treatment has been reported. It is unclear to what extent these variants persist in untreated patients. The aim of this study was to assess, using ultra-deep pyrosequencing, whether after 4 years of follow-up the frequency of resistant variants is increased compared to pre-treatment frequencies following 14 days of telaprevir treatment. METHODS: Fifteen patients from 2 previous telaprevir phase 1 clinical studies (VX04-950-101 and VX05-950-103) were included. These patients all received telaprevir monotherapy for 14 days, and 2 patients subsequently received standard of care. Variants at previously well-characterized NS3 protease positions V36, T54, R155 and A156 were assessed at baseline and after a follow-up of 4±1.2 years by ultra-deep pyrosequencing. The prevalence of resistant variants at follow-up was compared to baseline. RESULTS: Resistance-associated mutations were detectable at low frequency at baseline. In general, the prevalence of resistance mutations at follow-up was not increased compared to baseline. Only one patient had a small, but statistically significant, increase in the number of V36M and T54S variants 4 years after telaprevir dosing. CONCLUSION: In patients treated for 14 days with telaprevir monotherapy, ultra-deep pyrosequencing indicates that long-term persistence of resistant variants is rare.
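
    With deep-sequencing read counts, a baseline-versus-follow-up frequency comparison of the kind reported here is often done per position with a two-sided Fisher exact test; the counts below are hypothetical, since the abstract reports frequencies only qualitatively.

    from scipy.stats import fisher_exact

    # Hypothetical (variant, wild-type) read counts at one NS3 position
    # (e.g., V36M) for one patient; the study's actual counts are not given.
    baseline = (12, 9988)
    followup = (45, 9955)

    odds_ratio, p = fisher_exact([baseline, followup])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p:.2e}")
    # A small p would mark a statistically significant frequency shift, as
    # reported for V36M and T54S in a single patient.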

  19. Deepwater Program: Lophelia II, continuing ecological research on deep-sea corals and deep-reef habitats in the Gulf of Mexico

    Science.gov (United States)

    Demopoulos, Amanda W.J.; Ross, Steve W.; Kellogg, Christina A.; Morrison, Cheryl L.; Nizinski, Martha S.; Prouty, Nancy G.; Bourque, Jill R.; Galkiewicz, Julie P.; Gray, Michael A.; Springmann, Marcus J.; Coykendall, D. Katharine; Miller, Andrew; Rhode, Mike; Quattrini, Andrea; Ames, Cheryl L.; Brooke, Sandra D.; McClain Counts, Jennifer; Roark, E. Brendan; Buster, Noreen A.; Phillips, Ryan M.; Frometa, Janessy

    2017-12-11

    The deep sea is a rich environment composed of diverse habitat types. While deep-sea coral habitats have been discovered within each ocean basin, knowledge about the ecology of these habitats and associated inhabitants continues to grow. This report presents information and results from the Lophelia II project that examined deep-sea coral habitats in the Gulf of Mexico. The Lophelia II project focused on Lophelia pertusa habitats along the continental slope, at depths ranging from 300 to 1,000 meters. The chapters are authored by several scientists from the U.S. Geological Survey, National Oceanic and Atmospheric Administration, University of North Carolina Wilmington, and Florida State University who examined the community ecology (from microbes to fishes), deep-sea coral age, growth, and reproduction, and population connectivity of deep-sea corals and inhabitants. Data from these studies are presented in the chapters and appendixes of the report as well as in journal publications. This study was conducted by the Ecosystems Mission Area of the U.S. Geological Survey to meet information needs identified by the Bureau of Ocean Energy Management.

  20. Deep learning quick reference useful hacks for training and optimizing deep neural networks with TensorFlow and Keras

    CERN Document Server

    Bernico, Michael

    2018-01-01

    This book is a practical guide to applying deep neural networks, including MLPs, CNNs, LSTMs, and more, in Keras and TensorFlow. Packed with useful hacks for solving real-world challenges, along with the supporting math and theory around each topic, this book will be a quick reference for training and optimizing your deep neural networks.
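
    As a flavor of what such a quick reference covers, here is a minimal, self-contained Keras MLP with one of the standard training "hacks" (dropout); it is an illustrative snippet, not an excerpt from the book.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Minimal binary-classification MLP (20 input features assumed).
    model = tf.keras.Sequential([
        layers.Input(shape=(20,)),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),                 # regularization "hack"
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()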