WorldWideScience

Sample records for deep astrometric standards

  1. An astrometric standard field in omega Cen

    Science.gov (United States)

    Wyse, Rosemary

    2003-07-01

    We propose to obtain a high-precision astrometric standard in a two-step procedure. First, we will create a ground-based astrometric standard field around omega Cen down to V=22, with 3 mas accuracy in positions and better than 0.5 mas/yr in proper motions. This standard will be used to obtain precise absolute plate solutions for selected WFPC2 CCD frames and to refine the self-calibrated mean distortion solution for the WFPC2 CCD chips. This will eliminate systematic errors inherent in the self-calibration techniques down to the rms = 0.3 mas level, thus opening new opportunities to perform precision astrometry with WFPC2 alone or in combination with the other HST imaging instruments. We will also address the issue of the distortion's variation, which is of paramount significance for space astrometry such as that spearheaded by HST or under development (SIM, GAIA). Second, all reduced WFPC2 CCD frames will be combined into two field catalogs (astrometric flat fields) of positions in omega Cen of unprecedented precision (s.e. = 0.1 mas) down to V=22, which will be available to the GO community and readily applicable to calibrating the ACS.

  2. Accuracy of the HST Standard Astrometric Catalogs w.r.t. Gaia

    Science.gov (United States)

    Kozhurina-Platais, V.; Grogin, N.; Sabbi, E.

    2018-02-01

    The goal of astrometric calibration of the HST ACS/WFC and WFC3/UVIS imaging instruments is to provide a coordinate system free of distortion at the precision level of 0.1 pixel (4-5 mas) or better. This astrometric calibration is based on two HST astrometric standard fields in the vicinity of the globular clusters 47 Tuc and omega Cen. The derived calibration of the geometric distortion is assumed to be accurate down to 2-3 mas. Is this accuracy in agreement with the true value? Now, with access to globally accurate positions from the first Gaia data release (DR1), we find that there are measurable offsets, rotations, scale changes, and other deviations of the distortion parameters in the two HST standard astrometric catalogs. These deviations from a distortion-free and properly aligned coordinate system should be accounted for and corrected, so that the high-precision HST positions are free of systematic errors. We also find that the precision of the HST pixel coordinates is substantially better than the accuracy listed in Gaia DR1. Therefore, in order to finalize the components of distortion in the HST standard catalogs, the next release of Gaia data is needed.
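
    The offsets, rotation, and scale mentioned above are exactly the parameters of a four-parameter similarity transform between two position catalogs. As a hedged illustration (a sketch with synthetic data, not the calibration team's pipeline), such a transform can be fitted by linear least squares:

```python
# Fit x' = s(x cos t - y sin t) + dx, y' = s(x sin t + y cos t) + dy
# by linearizing with a = s*cos(t), b = s*sin(t). Synthetic data only.
import numpy as np

def fit_similarity(xy_src, xy_ref):
    """Least-squares similarity transform: returns scale, rotation (deg), dx, dy."""
    x, y = xy_src.T
    A = np.zeros((2 * len(x), 4))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = x, -y, 1.0   # x' equations
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = y,  x, 1.0   # y' equations
    (a, b, dx, dy), *_ = np.linalg.lstsq(A, xy_ref.reshape(-1), rcond=None)
    return np.hypot(a, b), np.degrees(np.arctan2(b, a)), dx, dy

rng = np.random.default_rng(1)
xy = rng.uniform(0, 4096, size=(500, 2))                  # detector pixels
theta, s = np.radians(0.01), 1.0002                       # hidden truth
R = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
xy_ref = xy @ R.T + [3.0, -1.5] + rng.normal(0, 0.02, xy.shape)
print(fit_similarity(xy, xy_ref))    # ~ (1.0002, 0.01 deg, 3.0, -1.5)
```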

  3. Using Gaia as an Astrometric Tool for Deep Ground-based Surveys

    Science.gov (United States)

    Casetti-Dinescu, Dana I.; Girard, Terrence M.; Schriefer, Michael

    2018-04-01

    Gaia DR1 positions are used to astrometrically calibrate three epochs' worth of Subaru SuprimeCam images in the fields of the globular cluster NGC 2419 and the Sextans dwarf spheroidal galaxy. Distortion-correction "maps" are constructed from a combination of offset dithers and reference to Gaia DR1. These are used to derive absolute proper motions in the field of NGC 2419. Notably, we identify the photometrically detected Monoceros structure in the foreground of NGC 2419 as a kinematically cold population of stars, distinct from Galactic-field stars. This project demonstrates the feasibility of combining Gaia with deep, ground-based surveys, thus extending high-quality astrometry to magnitudes beyond the limits of Gaia.
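
    As an illustration of the distortion-map idea (a sketch under assumed conventions, not the authors' code), one can fit a low-order 2-D polynomial to the residuals between detector positions and Gaia reference positions left over after a linear plate solution:

```python
# Fit residuals (e.g., in x) as a polynomial in detector position, then
# evaluate the "map" anywhere on the chip. Degrees and data are illustrative.
import numpy as np

def fit_distortion(x, y, resid, deg=3):
    """Least-squares 2-D polynomial fit of residuals over the detector."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return terms, coef

def eval_distortion(terms, coef, x, y):
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coef))

# Recover a synthetic distortion signal injected into noisy residuals
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 2048, 400), rng.uniform(0, 2048, 400)
resid = 1e-9 * (x - 1024) * (y - 1024) + rng.normal(0, 0.005, 400)
terms, coef = fit_distortion(x, y, resid)
print(eval_distortion(terms, coef, 1500.0, 300.0))  # correction at (1500, 300)
```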

  4. ASTROMETRIC REVERBERATION MAPPING

    International Nuclear Information System (INIS)

    Shen Yue

    2012-01-01

    Spatially extended emission regions of active galactic nuclei respond to continuum variations if such emission regions are powered by energy reprocessing of the continuum. The response from different parts of the reverberating region arrives at different times, lagging behind the continuum variation. The lags can be used to map the geometry and kinematics of the emission region (i.e., reverberation mapping, RM). If the extended emission region is not spherically symmetric in configuration and velocity space, reverberation may produce astrometric offsets in the emission-region photocenter as a function of time delay and velocity, detectable with future μas to tens-of-μas astrometry. Such astrometric responses provide independent constraints on the geometric and kinematic structure of the extended emission region, complementary to traditional RM. In addition, astrometric RM is more sensitive for inferring the inclination of a flattened geometry and the rotation angle of the extended emission region.
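
    A toy model makes the effect concrete: if two emission clouds at different projected positions respond to the same continuum with different lags, the flux-weighted photocenter wobbles in a way that encodes the geometry. The sketch below uses invented scales and is not from the paper.

```python
# Two clouds, each reprocessing the continuum with its own lag; the
# photocenter is the line-flux-weighted mean position. Arbitrary units.
import numpy as np

t = np.linspace(0, 200, 2001)                        # days
continuum = 1.0 + 0.3 * np.sin(2 * np.pi * t / 60)   # varying ionizing source

# (projected position in micro-arcsec, lag in days, responsivity)
clouds = [(+40.0, 5.0, 1.0), (-25.0, 20.0, 0.6)]

flux = np.zeros_like(t)
moment = np.zeros_like(t)
for x_uas, lag, w in clouds:
    line = w * np.interp(t - lag, t, continuum, left=continuum[0])
    flux += line
    moment += x_uas * line

photocenter = moment / flux                          # micro-arcsec
print(f"peak-to-peak photocenter wobble: {np.ptp(photocenter):.2f} uas")
```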

  5. Astrometric vs. photometric microlensing

    NARCIS (Netherlands)

    Dominik, M; Brainerd, TG; Kochanek, CS

    2001-01-01

    I discuss the differences between the properties of astrometric and photometric microlensing, and the resulting prospects for survey and follow-up experiments based on these two different signatures. In particular, the prospects for binary stars and extra-solar planets are considered.

  6. PACMAN: PRIMA astrometric instrument software

    Science.gov (United States)

    Abuter, Roberto; Sahlmann, Johannes; Pozna, Eszter

    2010-07-01

    The dual-feed astrometric instrument software of PRIMA (PACMAN), currently being integrated at the VLTI, will use two spatially modulated fringe sensor units and a laser metrology system to carry out differential astrometry. Its software and hardware comprise a distributed system involving many real-time computers and workstations operating in a synchronized manner. Its architecture has been designed to allow the construction of efficient and flexible calibration and observation procedures. In parallel, a novel scheme for integrating M-code (MATLAB/OCTAVE) with standard VLT (Very Large Telescope) control software applications had to be devised in order to support numerically intensive operations and to have the capacity to adapt to fast-varying strategies and algorithms. This paper presents the instrument software, including the current operational sequences for laboratory and sky calibration. Finally, a detailed description of the algorithms and their implementation, in both M and C code, is given together with a comparative analysis of their performance and maintainability.

  7. Astrometric Observation of MACHO Gravitational Microlensing

    Science.gov (United States)

    Boden, A. F.; Shao, M.; Van Buren, D.

    1997-01-01

    This paper discusses the prospects for astrometric observation of MACHO gravitational microlensing events. We derive the expected astrometric observables for a simple microlensing event assuming a dark MACHO, and demonstrate that accurate astrometry can determine the lens mass, distance, and proper motion in a very general fashion.
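
    For reference, the standard point-lens result behind such observables (a textbook formula, not code from the paper) is that the centroid of the two unresolved images is shifted from the source by δ(u) = u θ_E/(u² + 2), which peaks at u = √2:

```python
# Centroid shift of a dark point-lens microlensing event vs. normalized
# impact parameter u (source-lens separation in Einstein radii).
import numpy as np

def centroid_shift(u, theta_E_mas):
    """Astrometric shift (mas) of the light centroid for a dark point lens."""
    return theta_E_mas * u / (u**2 + 2.0)

u = np.linspace(0.01, 10, 1000)
theta_E = 1.0                                # mas, illustrative
shift = centroid_shift(u, theta_E)
print(f"max shift {shift.max():.3f} mas at u = {u[np.argmax(shift)]:.2f} "
      f"(theory: theta_E/(2*sqrt(2)) = {theta_E / (2 * np.sqrt(2)):.3f} "
      f"at u = sqrt(2))")
```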

  8. [Deep brain stimulation in movement disorders: evidence and therapy standards].

    Science.gov (United States)

    Parpaley, Yaroslav; Skodda, Sabine

    2017-07-01

    Deep brain stimulation (DBS) for movement disorders is a well-established and, in many respects, evidence-based procedure. The treatment indications are very heterogeneous and very specific in their course and therapy. DBS plays a very important, but usually not central, role in these conditions. Success in the application of DBS is essentially tied to the correct, appropriate, and timely indication of the therapy in the course of these diseases. Thanks to the good standardization of the DBS procedure and sufficient published data, recommendations for indication, diagnostics, and operative procedures can be formulated. The following article attempts to summarize the most important decision-making criteria and current therapy standards in this fairly comprehensive subject and to present them in a practice-oriented manner.

  9. Detailed Astrometric Analysis of Pluto

    Science.gov (United States)

    ROSSI, GUSTAVO B.; Vieira-Martins, R.; Camargo, J. I.; Assafin, M.

    2013-05-01

    Pluto is the main representative of the trans-Neptunian objects (TNOs), presenting some peculiarities such as an atmosphere and a satellite system with 5 known moons: Charon, discovered in 1978; Nix and Hydra, in 2006; P4, in 2011; and P5, in 2012. Until the arrival of the New Horizons spacecraft at this system (July 2015), stellar occultations are the most efficient ground-based method for determining the physical and dynamical properties of this system. In 2010, a drift in declination (about 20 mas/yr) relative to the ephemerides became evident. This fact motivated us to redo the reductions and analysis of a large set of our observations at OPD/LNA, spanning a total of 15 years. The ephemeris and occultation results were then compared with the astrometric and photometric reductions of CCD images of Pluto (around 6500 images). Two corrections were used to refine the data set: differential chromatic refraction and photocenter. The first is due to the mean color of the background stars being redder than the color of Pluto, resulting in a slightly different path of light through the atmosphere (which may cause a difference in position of 0.1"). It became more evident because Pluto is crossing the region of the galactic plane. The photocenter correction is based on two overlapping Gaussian curves, with different heights and non-coincident centers, corresponding to Pluto and Charon (since they have less than 1" of angular separation). The objective is to separate these two Gaussian curves from the observed one and find the correct position of Pluto. The method depends strongly on the height of each Gaussian curve, related to the respective albedos of Charon and Pluto. A detailed analysis of the astrometric results, as well as a comparison with occultation results, was made. Since Pluto has an orbital period of 248.9 years and our interval of observation is about 15 years, we have observed around 12% of its orbit.
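
    The photocenter correction described above amounts to fitting the blended image profile as a sum of two Gaussians with different heights and offset centers. A minimal sketch with synthetic data (illustrative parameters, not the paper's values):

```python
# Fit a blended Pluto+Charon profile as two Gaussians sharing one width
# (seeing-dominated PSF), then recover the brighter component's center.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, a2, mu2, sigma):
    return (a1 * np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / sigma) ** 2))

# Simulate a blend: separation < 1", secondary ~5x fainter.
x = np.linspace(-3.0, 3.0, 200)                    # arcsec
truth = (1.0, 0.0, 0.2, 0.8, 0.7)                  # a1, mu1, a2, mu2, sigma
y = two_gaussians(x, *truth) + np.random.normal(0, 0.01, x.size)

# The fit is sensitive to the assumed height ratio, which in practice
# is tied to the Pluto/Charon albedos.
p0 = (0.9, 0.1, 0.3, 0.6, 0.6)
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
print(f"recovered primary center: {popt[1]:.3f} arcsec (true {truth[1]})")
```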

  10. The Tycho-Gaia Astrometric Solution

    Science.gov (United States)

    Lindegren, Lennart

    2018-04-01

    Gaia DR1 is based on the first 14 months of Gaia's observations. This is not long enough to reliably disentangle the parallax effect from proper motion. For most sources, therefore, only positions and magnitudes are given. Parallaxes and proper motions were nevertheless obtained for about two million of the brighter stars through the Tycho-Gaia astrometric solution (TGAS), combining the Gaia observations with the much earlier Hipparcos and Tycho-2 positions. In this review I focus on some important characteristics and limitations of TGAS, in particular the reference frame, astrometric uncertainties, correlations, and systematic errors.
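
    The power of TGAS comes from the roughly 24-year baseline between the Hipparcos/Tycho epoch and Gaia: a crude error-propagation sketch (with invented, order-of-magnitude position errors) shows how modest old positional accuracy still yields useful proper motions.

```python
# Proper-motion error from two position epochs: sigma_pm = sqrt(s1^2+s2^2)/dt.
# The position errors below are illustrative, not TGAS values.
epoch_hip, epoch_gaia = 1991.25, 2015.0
dt = epoch_gaia - epoch_hip                  # ~ 23.75 yr baseline

pos_err_old_mas, pos_err_gaia_mas = 30.0, 0.3
pm_err = (pos_err_old_mas**2 + pos_err_gaia_mas**2) ** 0.5 / dt
print(f"proper-motion error from the long baseline: {pm_err:.2f} mas/yr")
```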

  11. ASTROMETRIC JITTER OF THE SUN AS A STAR

    International Nuclear Information System (INIS)

    Makarov, V. V.; Parker, D.; Ulrich, R. K.

    2010-01-01

    The daily variation of the solar photocenter over some 11 yr is derived from the Mount Wilson data reprocessed by Ulrich et al. to closely match the surface distribution of solar irradiance. The standard deviations of astrometric jitter are 0.52 μAU and 0.39 μAU in the equatorial and the axial dimensions, respectively. The overall dispersion is strongly correlated with solar cycle, reaching 0.91 μAU at maximum activity in 2000. The largest short-term deviations from the running average (up to 2.6 μAU) occur when a group of large spots happen to lie on one side with respect to the center of the disk. The amplitude spectrum of the photocenter variations never exceeds 0.033 μAU for the range of periods 0.6-1.4 yr, corresponding to the orbital periods of planets in the habitable zone. Astrometric detection of Earth-like planets around stars as quiet as the Sun is not affected by star spot noise, but the prospects for more active stars may be limited to giant planets.
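
    A back-of-envelope check of the conclusion, using round numbers from the abstract (the Earth-analog wobble of ~3 micro-AU is an assumption of this sketch, from a·M_planet/M_star): at 1 pc, 1 AU subtends 1 arcsec, so a photocentric excursion in micro-AU maps to micro-arcseconds divided by the distance in pc.

```python
# Compare the spot-noise amplitude in the habitable-zone period band with
# the astrometric signature of an Earth-mass planet at 1 au. Illustrative.
def uau_to_uas(amplitude_uau, distance_pc):
    # 1 AU at 1 pc subtends 1 arcsec, so 1 micro-AU at d pc is (1/d) micro-arcsec
    return amplitude_uau / distance_pc

d = 10.0                                # pc, assumed target distance
jitter = uau_to_uas(0.033, d)           # amplitude spectrum cap, 0.6-1.4 yr band
earth_signal = uau_to_uas(3.0, d)       # ~1 au * (M_earth/M_sun) wobble
print(f"spot jitter {jitter:.4f} uas vs Earth-like signal {earth_signal:.2f} uas")
```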

  12. The astrometric lessons of Gaia-GBOT experiment

    Science.gov (United States)

    Bouquillon, S.; Mendez, R. A.; Altmann, M.

    2017-07-01

    To ensure the full capabilities of Gaia's measurements, a programme of daily observations of the satellite itself with Earth-based telescopes, called Ground Based Optical Tracking (GBOT), has been implemented since the beginning of the Gaia mission (for details of GBOT operations see Altmann et al. 2014, and of the GBOT software facilities see Bouquillon et al. 2014). These observations are carried out mainly with two facilities: the 2.6-m VLT Survey Telescope (ESO's VST) at Cerro Paranal in Chile and the 2.0-m Liverpool Telescope (LT) on the Canary Island of La Palma. The 20 mas constraint on the tracking astrometric quality, and the fact that Gaia is a faint and relatively fast-moving target (its magnitude in a red passband is around 21 and its apparent speed around 0.04"/s), led us to rigorously analyse the astrometric precision reachable for CCD observations of this kind of celestial object. During LARIM 2016, we presented the main results of this study, which uses the Cramér-Rao lower bound to characterize the precision limit for the PSF center when drifting in the CCD frame. This work extends earlier studies dealing with one-dimensional detectors and stationary sources (Mendez et al. 2013 & 2014), first to the case of standard two-dimensional CCD sensors, and then to moving sources. These new results were submitted for publication in A&A this year (Bouquillon et al. 2017).
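
    The Cramér-Rao machinery used in the study can be illustrated numerically. A minimal sketch for the simplest case the study generalizes, a stationary 1-D Gaussian PSF on pixels with Poisson statistics (all parameters illustrative):

```python
# For independent Poisson pixels with expected counts lambda_i(xc), the
# Fisher information on the centroid is I = sum_i (d lambda_i/dxc)^2 / lambda_i,
# and the Cramér-Rao bound is sigma_xc >= 1/sqrt(I).
import numpy as np

def crlb_1d_gaussian(flux, sigma_px, background, npix=41):
    pix = np.arange(npix) - npix // 2            # pixel centers
    def expected_counts(xc):
        g = np.exp(-0.5 * ((pix - xc) / sigma_px) ** 2)
        return flux * g / g.sum() + background
    eps = 1e-4                                   # central finite difference
    dlam = (expected_counts(eps) - expected_counts(-eps)) / (2 * eps)
    info = np.sum(dlam**2 / expected_counts(0.0))
    return 1.0 / np.sqrt(info)                   # bound on centroid error (px)

print(f"sigma_CR = {crlb_1d_gaussian(5e4, 1.5, 100.0):.4f} px")
```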

  13. ESPRI: Astrometric planet search with PRIMA at the VLTI

    Directory of Open Access Journals (Sweden)

    Ségransan D.

    2011-07-01

    The ESPRI consortium will conduct an astrometric survey for extrasolar planets, using the PRIMA facility at the Very Large Telescope Interferometer. Our scientific goals include determining orbital inclinations and masses for planets already known from radial-velocity surveys, searches for planets around nearby stars of all masses, and around young stars. The consortium has built the PRIMA differential delay lines, developed an astrometric operation and calibration plan, and will deliver astrometric data reduction software.

  14. Automatic measurement of images on astrometric plates

    Science.gov (United States)

    Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.

    1994-04-01

    We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: (1) determination of the scale and tilt between the charge-coupled device (CCD) and microscope coordinate systems, and estimation of the signal-to-noise ratio in each field; (2) image identification and improvement of its position and size; (3) final image centering; (4) image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization, and selection. Problems related to faint images and crowded fields will be approached by special techniques (morphological filters, histogram properties, and fitting models).

  15. Astrometric properties of the Tautenburg Plate Scanner

    Science.gov (United States)

    Brunzendorf, Jens; Meusinger, Helmut

    The Tautenburg Plate Scanner (TPS) is an advanced plate-measuring machine run by the Thüringer Landessternwarte Tautenburg (Karl Schwarzschild Observatory), where the machine is housed. It is capable of digitising photographic plates up to 30 cm × 30 cm in size. In our poster, we reported on tests and preliminary results of its astrometric properties. The essential components of the TPS are an x-y table movable between an illumination system and a direct imaging system. A telecentric lens images the light transmitted through the photographic emulsion onto a CCD line of 6000 pixels, each 10 µm square. All components are mounted on a massive air-bearing table. Scanning is performed in lanes of up to 55 mm width by moving the x-y table in a continuous drift-scan mode perpendicular to the CCD line. The analogue output from the CCD is digitised to 12 bits with a total signal-to-noise ratio of 1000:1, corresponding to a photographic density range of three. The pixel map is produced as a series of optionally overlapping lane scans. The pixel data are stored on CD-ROM or DAT. A Tautenburg Schmidt plate 24 cm × 24 cm in size is digitised within 2.5 hours, resulting in 1.3 GB of data. Subsequent high-level data processing is performed off-line on other computers. During the scanning process, the geometry of the optical components is kept fixed. The optimal focussing of the optics is performed prior to the scan; owing to the telecentric lens, refocussing is not required. Therefore, the main sources of astrometric errors (besides the emulsion itself) are mechanical imperfections in the drive system, which can be divided into random and systematic ones. The r.m.s. repeatability over the whole plate, as measured by repeated scans of the same plate, is about 0.5 µm per axis. The mean plate-to-plate accuracy of the object positions on two plates with the same epoch and the same plate centre has been determined to be about 1 µm. This accuracy is comparable to

  16. Improving the Astrometric Calibration of ACS/WFC for the Most Useful Filters

    Science.gov (United States)

    Anderson, Jay

    2004-07-01

    The distortion correction for the WFC, with which most ACS astrometry is done, is filter-dependent, and is not sufficiently accurate for the most useful filters to the community, F606W and F814W. We propose to derive improved corrections using 1 orbit for each filter. A by-product will be an astrometric standard field at the center of Omega Centauri.

  17. Preliminary Astrometric Results from the PS1 Demo Month and Routine Survey Operations

    Science.gov (United States)

    2010-09-01

    ... with the 2MASS catalog to produce preliminary astrometric solutions. Using these coordinates, the NOFS astrometric pipeline correlates PS1 ... objects with other catalogs (USNO-B1.0, SDSS, Tycho-2, 2MASS, etc.) so that unique star identification numbers can be assigned across all catalogs. This ... correlated pair, and the standard deviation for these pairings is about 0.3 arcsec. Whereas the 2MASS catalog error for a brighter star is believed to be

  18. COMPANIONS TO NEARBY STARS WITH ASTROMETRIC ACCELERATION. II

    International Nuclear Information System (INIS)

    Tokovinin, Andrei; Hartung, Markus; Hayward, Thomas L.

    2013-01-01

    Hipparcos astrometric binaries were observed with the NICI adaptive optics system at Gemini-S, completing the work of Paper I. Among the 65 F, G, and K dwarfs within 67 pc of the Sun studied here, we resolve 18 new subarcsecond companions, remeasure 7 known astrometric pairs, and establish the physical nature of yet another 3 wider companions. The 107 astrometric binaries targeted at Gemini so far have 38 resolved companions with separations under 3''. Modeling shows that bright enough companions with separations on the order of an arcsecond can perturb the Hipparcos astrometry when they are not accounted for in the data reduction. However, the resulting bias of parallax and proper motion is generally below formal errors and such companions cannot produce fake acceleration. This work contributes to the multiplicity statistics of nearby dwarfs by bridging the gap between spectroscopic and visual binaries and by providing estimates of periods and mass ratios for many astrometric binaries.

  19. Astrometric Results of NEOs from the Characterization and Astrometric Follow-up Program at Adler Planetarium

    Science.gov (United States)

    Nault, Kristie A.; Brucker, Melissa J.; Hammergren, Mark; Gyuk, Geza; Solontoi, Mike R.

    2015-11-01

    We present astrometric results for near-Earth objects (NEOs) targeted in the fourth quarter of 2014 and in 2015. This is part of Adler Planetarium's NEO characterization and astrometric follow-up program, which uses the Astrophysical Research Consortium (ARC) 3.5-m telescope at Apache Point Observatory (APO). The program utilizes a 17% share of telescope time, amounting to a total of 500 hours per year. This time is divided into two-hour observing runs approximately every other night for astrometry and frequent half-night runs several times a month for spectroscopy (see poster by M. Hammergren et al.) and light-curve studies (see poster by M. J. Brucker et al.). Observations were made using the Seaver Prototype Imaging Camera (SPIcam), a visible-wavelength, direct-imaging CCD camera with 2048 x 2048 pixels and a field of view of 4.78' x 4.78', with 2 x 2 binning. Special emphasis has been placed on the smallest NEOs, particularly those around 140 m in diameter. Targets were selected based on absolute magnitude (prioritizing those with H > 25 mag to select small objects) and a 3σ uncertainty of less than 400" to ensure that the target is in the FOV. Targets were drawn from the Minor Planet Center (MPC) NEA Observing Planning Aid, the JPL What's Observable tool, and the Spaceguard priority list and faint-NEO list. As of August 2015, we have detected 670 NEOs for astrometric follow-up, on track with our goal of providing astrometry on a thousand NEOs per year. Astrometric calculations were done using the interactive software tool Astrometrica, which is used for data reduction focusing on the minor bodies of the solar system. The program includes automatic reference-star identification from new-generation star catalogs, access to the complete MPC database of orbital elements, and automatic moving-object detection and identification. This work is based on observations done using the 3.5-m telescope at Apache Point Observatory

  20. Astrometric surveys in the Gaia era

    Science.gov (United States)

    Zacharias, Norbert

    2018-04-01

    The Gaia first data release (DR1) already provides an almost error-free optical reference frame at the milliarcsecond (mas) level, allowing significantly better calibration of ground-based astrometric data than ever before. Gaia DR1 provides positions, proper motions, and trigonometric parallaxes for just over 2 million stars in the Tycho-2 catalog. For over 1.1 billion additional stars, DR1 gives positions. Proper motions for these mainly fainter stars (G >= 11.5) are currently provided by several new projects which combine earlier-epoch ground-based observations with Gaia DR1 positions. These data are very helpful in the interim period but will become obsolete with the second Gaia data release (DR2), expected in April 2018. The era of traditional, ground-based, wide-field astrometry with the goal of providing accurate reference stars has come to an end. Future ground-based astrometry will fill in some gaps (very bright stars, observations needed at many or specific epochs) and will mainly go fainter than the Gaia limit, like the PanSTARRS and upcoming LSST surveys.

  1. The laser astrometric test of relativity mission

    International Nuclear Information System (INIS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth L.

    2004-01-01

    This paper discusses a new fundamental-physics experiment to test relativistic gravity at an accuracy better than the effects of second order in the gravitational field strength, ∝ G². The Laser Astrometric Test Of Relativity (LATOR) mission uses laser interferometry between two micro-spacecraft whose lines of sight pass close by the Sun to accurately measure the deflection of light in the solar gravity field. The key element of the experimental design is a redundant-geometry optical truss provided by a long-baseline (100 m) multi-channel stellar optical interferometer placed on the International Space Station (ISS). The interferometer is used for measuring the angles between the two spacecraft. In Euclidean geometry, determination of a triangle's three sides determines any angle therein; with gravity changing the optical lengths of the sides passing close by the Sun and deflecting the light, the Euclidean relationships are overthrown. The geometric redundancy enables LATOR to measure the departure from Euclidean geometry caused by the solar gravity field to very high accuracy. LATOR will not only improve the value of the parameterized post-Newtonian (PPN) parameter γ to an unprecedented accuracy of 10⁻⁸, it will also be able to measure effects of the next post-Newtonian order (∝ c⁻⁴) of light deflection resulting from gravity's intrinsic non-linearity. The solar quadrupole moment parameter J₂ will be measured with high precision, as well as a variety of other relativistic effects, including Lense-Thirring precession. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and a natural culmination of solar system gravity experiments
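
    The scale of the effect can be checked with the standard first-order deflection formula α = 4GM/(c²b); the second-order (∝ c⁻⁴) term LATOR targets is roughly of order ten micro-arcseconds for a grazing ray, which sets the required angle metrology. A quick sketch (standard constants, not mission code):

```python
# First-order GR light deflection for a ray passing the Sun at impact
# parameter b (in solar radii). Grazing ray gives the classic ~1.75".
import numpy as np

G, c = 6.674e-11, 2.998e8              # SI units
M_sun, R_sun = 1.989e30, 6.957e8

def deflection_arcsec(b_in_solar_radii):
    alpha = 4 * G * M_sun / (c**2 * b_in_solar_radii * R_sun)  # radians
    return np.degrees(alpha) * 3600

print(f"grazing ray: {deflection_arcsec(1.0):.3f} arcsec")     # ~ 1.75"
```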

  2. News on Seeking Gaia's Astrometric Core Solution with AGIS

    Science.gov (United States)

    Lammers, U.; Lindegren, L.

    We report on recent developments around the Astrometric Global Iterative Solution (AGIS) system. These include the availability of an efficient conjugate gradient solver and the Generic Astrometric Calibration scheme that was proposed a while ago. The number of primary stars to be included in the core solution is now believed to be significantly higher than the 100 million that served as the baseline until now. Cloud computing services are being studied as a possible cost-effective alternative to running AGIS on dedicated computing hardware at ESAC during the operational phase.
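
    For readers unfamiliar with the solver mentioned above, a minimal conjugate-gradient iteration for a symmetric positive-definite system looks as follows; this is a sketch, not the AGIS implementation, which runs matrix-free over enormous distributed systems.

```python
# Conjugate gradients for A x = b with A symmetric positive-definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

M = np.random.default_rng(0).normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)          # SPD test matrix
b = np.ones(50)
print(np.allclose(A @ conjugate_gradient(A, b), b))
```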

  3. Optical design for the Laser Astrometric Test of Relativity

    Science.gov (United States)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth L., Jr.

    2004-01-01

    This paper discusses the Laser Astrometric Test of Relativity (LATOR) mission. LATOR is a Michelson-Morley-type experiment designed to test the pure tensor-metric nature of gravitation, the fundamental postulate of Einstein's theory of general relativity. With its focus on gravity's action on light propagation, it complements other tests that rely on the gravitational dynamics of bodies.

  4. A real-time standard parts inspection based on deep learning

    Science.gov (United States)

    Xu, Kuan; Li, XuDong; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Standard parts are necessary components in mechanical structures such as bogies and connectors; these structures can shatter or loosen if standard parts are lost, so real-time standard-parts inspection systems are essential to guarantee their safety. Inspection systems based on deep learning are attractive because they work well on images with complex backgrounds, which are common in standard-parts inspection. A typical inspection-detection system contains two basic components: a feature extractor and an object classifier. The Region Proposal Network (RPN) is one of the most essential architectures in most state-of-the-art object detection systems. However, in the basic RPN architecture, the Region of Interest (ROI) proposals have fixed sizes (9 anchors per pixel); they are effective but waste considerable computing resources and time. In standard-parts detection, the parts have known sizes, so we can choose anchor sizes based on the ground truths through machine learning. The experiments prove that we can use 2 anchors to achieve almost the same accuracy and recall rate. Our standard-parts detection system reaches 15 fps on an NVIDIA GTX 1080 GPU while achieving a detection accuracy of 90.01% mAP.
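
    The anchor-selection step can be made concrete with a small sketch: cluster the ground-truth box sizes and use the k = 2 cluster centers as anchors. This illustrates the idea with invented box data; it is not the authors' code.

```python
# k-means over ground-truth (width, height) pairs; cluster centers become
# the RPN anchor sizes, replacing the 9 generic anchors.
import numpy as np

def kmeans_anchors(wh, k=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([wh[labels == i].mean(axis=0) for i in range(k)])
    return centers

# Synthetic (width, height) boxes for two part types, in pixels
wh = np.vstack([np.random.default_rng(1).normal([32, 32], 3, (200, 2)),
                np.random.default_rng(2).normal([64, 24], 4, (200, 2))])
print(kmeans_anchors(wh))   # two anchor sizes near 32x32 and 64x24
```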

  5. Stereotactically Standard Areas: Applied Mathematics in the Service of Brain Targeting in Deep Brain Stimulation

    Directory of Open Access Journals (Sweden)

    Ioannis N. Mavridis

    2017-12-01

    The concept of stereotactically standard areas (SSAs) within human brain nuclei belongs to the knowledge of the modern field of stereotactic brain microanatomy. These are areas resisting the individual variability of the nuclear location in stereotactic space. This paper summarizes the current knowledge regarding SSAs. A mathematical formula of SSAs was recently invented, allowing for their robust, reproducible, and accurate application to laboratory studies and clinical practice. Thus, SSAs open new doors for the application of stereotactic microanatomy to highly accurate brain targeting, which is mainly useful for minimally invasive neurosurgical procedures, such as deep brain stimulation.

  6. Stereotactically Standard Areas: Applied Mathematics in the Service of Brain Targeting in Deep Brain Stimulation.

    Science.gov (United States)

    Mavridis, Ioannis N

    2017-12-11

    The concept of stereotactically standard areas (SSAs) within human brain nuclei belongs to the knowledge of the modern field of stereotactic brain microanatomy. These are areas resisting the individual variability of the nuclear location in stereotactic space. This paper summarizes the current knowledge regarding SSAs. A mathematical formula of SSAs was recently invented, allowing for their robust, reproducible, and accurate application to laboratory studies and clinical practice. Thus, SSAs open new doors for the application of stereotactic microanatomy to highly accurate brain targeting, which is mainly useful for minimally invasive neurosurgical procedures, such as deep brain stimulation.

  7. Deep round window insertion versus standard approach in cochlear implant surgery.

    Science.gov (United States)

    Nordfalk, Karl Fredrik; Rasmussen, Kjell; Bunne, Marie; Jablonski, Greg Eigner

    2016-01-01

    The aim of this study was to compare the outcomes of vestibular tests and the residual hearing of patients who underwent full-insertion cochlear implant surgery using the round window approach with a hearing preservation protocol (RW-HP) or the standard cochleostomy approach (SCA) without hearing preservation. A prospective study of 34 adults who underwent unilateral cochlear implantation was carried out. One group was operated on using the RW-HP approach (n = 17) with the Med-El +Flex(SOFT) electrode array with full insertion, while the control group underwent more conventional SCA surgery (n = 17) with shorter perimodiolar electrodes. Assessments of residual hearing, cervical vestibular-evoked myogenic potentials (cVEMP), videonystagmography, and subjective visual vertical/horizontal (SVH/SVV) were performed before and after surgery. There was a significantly (p < 0.05) greater number of subjects who exhibited complete or partial hearing preservation in the deep-insertion RW-HP group (9/17) compared to the SCA group (2/15). A higher degree of vestibular loss but a lower degree of vertigo symptoms was seen in the RW-HP group, but the differences were not statistically significant. It is possible to preserve residual hearing to a certain extent also with deep insertion. Full insertion with hearing preservation was less harmful to residual hearing, particularly at 125 Hz (p < 0.05), than the standard cochleostomy approach.

  8. A Predicted Astrometric Microlensing Event by a Nearby White Dwarf

    Science.gov (United States)

    McGill, Peter; Smith, Leigh C.; Wyn Evans, N.; Belokurov, Vasily; Smart, R. L.

    2018-04-01

    We used the Tycho-Gaia Astrometric Solution catalogue, part of Gaia Data Release 1, to search for candidate astrometric microlensing events expected to occur within the remaining lifetime of the Gaia satellite. Our search yielded one promising candidate. We predict that the nearby DQ type white dwarf LAWD 37 (WD 1142-645) will lens a background star and will reach closest approach on November 11th 2019 (± 4 days) with impact parameter 380 ± 10 mas. This will produce an apparent maximum deviation of the source position of 2.8 ± 0.1 mas. In the most propitious circumstance, Gaia will be able to determine the mass of LAWD 37 to ˜3%. This mass determination will provide an independent check on atmospheric models of white dwarfs with helium rich atmospheres, as well as tests of white dwarf mass radius relationships and evolutionary theory.
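
    The prediction can be reproduced to first order from the standard point-lens formulas, with the lens mass, distances, and impact parameter taken as assumptions approximated from the abstract and public LAWD 37 data:

```python
# theta_E^2 = (4GM/c^2)(1/d_L - 1/d_S); centroid shift = theta_E*u/(u^2+2).
import numpy as np

G, c, M_sun, pc = 6.674e-11, 2.998e8, 1.989e30, 3.0857e16
mas = np.radians(1 / 3600e3)          # one milliarcsecond in radians

M = 0.61 * M_sun                      # assumed white-dwarf mass
d_L, d_S = 4.6 * pc, 2000 * pc        # assumed lens/source distances
theta_E = np.sqrt(4 * G * M / c**2 * (1 / d_L - 1 / d_S)) / mas
u = 380.0 / theta_E                   # impact parameter from the abstract
shift = theta_E * u / (u**2 + 2)
print(f"theta_E = {theta_E:.1f} mas, u = {u:.1f}, peak shift = {shift:.1f} mas")
# -> roughly 33 mas, u ~ 12, shift ~ 2.8 mas, consistent with the abstract
```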

  9. Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java

    OpenAIRE

    O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David

    2011-01-01

    This paper provides a description of the Java software framework which has been constructed to run the Astrometric Global Iterative Solution for the Gaia mission. This is the mathematical framework that provides the rigid reference frame for Gaia observations from the Gaia data itself. This process makes Gaia a self-calibrated, input-catalogue-independent mission. The framework is highly distributed, typically running on a cluster of machines with a database back end. All code is written in the Java language. We describe the overall architecture and some of the details of the implementation.

  10. Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java

    Science.gov (United States)

    O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David

    2011-10-01

    This paper provides a description of the Java software framework which has been constructed to run the Astrometric Global Iterative Solution for the Gaia mission. This is the mathematical framework that provides the rigid reference frame for Gaia observations from the Gaia data itself. This process makes Gaia a self-calibrated, input-catalogue-independent mission. The framework is highly distributed, typically running on a cluster of machines with a database back end. All code is written in the Java language. We describe the overall architecture and some of the details of the implementation.

  11. Astrometric and photometric study of the open cluster NGC 2323

    Directory of Open Access Journals (Sweden)

    Amin M.Y.

    2017-01-01

    We present a study of the open cluster NGC 2323 using astrometric and photometric data. In our study we used two methods that are able to separate the open cluster's stars from those belonging to the stellar background. Our results from these two methods indicate that: (1) according to the membership probability, NGC 2323 contains 497 stars; (2) the cluster center is at 07h 02m 48.02s, -08° 20' 17.74"; (3) the limiting radius of NGC 2323 is 2.31 ± 0.04 pc, and the surface number density at this radius is 98.16 stars pc⁻²; (4) the magnitude function has a maximum at about mv = 14 mag; (5) the total mass of NGC 2323 is estimated dynamically, using the astrometric data, to be 890 M⊙, and statistically, using the photometric data, to be 900 M⊙; and (6) the distance and age of the cluster are 900 ± 100 pc and 140 ± 20 Myr, respectively. Finally, the dynamical evolution parameter τ of the cluster is about 436.2.

  12. ASTROMETRIC MASSES OF 26 ASTEROIDS AND OBSERVATIONS ON ASTEROID POROSITY

    International Nuclear Information System (INIS)

    Baer, James; Chesley, Steven R.; Matson, Robert D.

    2011-01-01

    As an application of our recent observational error model, we present the astrometric masses of 26 main-belt asteroids. We also present an integrated ephemeris of 300 large asteroids, which was used in the mass determination algorithm to model significant perturbations from the rest of the main belt. After combining our mass estimates with those of other authors, we study the bulk porosities of over 50 main-belt asteroids and observe that asteroids as large as 300 km in diameter may be loose aggregates. This finding may place specific constraints on models of main-belt collisional evolution. Additionally, we observe that C-group asteroids tend to have significantly higher macroporosity than S-group asteroids.

  13. Astrometric tests of General Relativity in the Solar system

    International Nuclear Information System (INIS)

    Gai, M; Vecchiato, A; Riva, A; Lattanzi, M G; Sozzetti, A; Crosta, M T; Busonero, D

    2014-01-01

    Micro-arcsec astronomy is able to verify the predictions of theoretical models of gravitation at a level adequate to constrain the relevant parameters and select among different formulations. In particular, this concerns the weak-field limit applicable to the Sun's neighborhood, where competing models can be expressed in a common framework as the Parametrised Post-Newtonian and Parametrised Post-Post-Newtonian formulations. The Gaia mission is going to provide an unprecedented determination of the γ PPN parameter at the 10⁻⁶ level. Other recently proposed concepts, such as GAME, may improve the precision on γ by one or two orders of magnitude and provide constraints on other crucial phenomenological aspects. We review the key concepts of astrometric tests of General Relativity and discuss a possible development scenario

  14. PEPSI deep spectra. II. Gaia benchmark stars and other M-K standards

    Science.gov (United States)

    Strassmeier, K. G.; Ilyin, I.; Weber, M.

    2018-04-01

    Context: High-resolution échelle spectra constrain many essential stellar parameters once the data reach a quality appropriate to resolve the various physical processes that form these spectra. Aims: We provide a homogeneous library of high-resolution, high-S/N spectra for 48 bright AFGKM stars, some of them approaching the quality of solar-flux spectra. Our sample includes the northern Gaia benchmark stars, some solar analogs, and some other bright Morgan-Keenan (M-K) spectral standards. Methods: Well-exposed deep spectra were created by average-combining individual exposures. The data-reduction process relies on adaptive selection of parameters using statistical inference and robust estimators. We employed spectrum-synthesis techniques and statistics tools in order to characterize the spectra and give a first quick look at some of the possible science cases. Results: With an average spectral resolution of R ≈ 220,000 (1.36 km s⁻¹), continuous wavelength coverage from 383 nm to 912 nm, and S/N between 70:1 for the faintest star in the extreme blue and 6000:1 for the brightest star in the red, these spectra are now made public for further data mining and analysis. Preliminary results include new stellar parameters for 70 Vir and α Tau, the detection of the rare-earth element dysprosium and the heavy elements uranium, thorium, and neodymium in several RGB stars, and the use of the ¹²C to ¹³C isotope ratio for age-related determinations. We also found Arcturus to exhibit few-percent Ca II H&K and Hα residual profile changes with respect to the KPNO atlas taken in 1999. Based on data acquired with PEPSI using the Large Binocular Telescope (LBT) and the Vatican Advanced Technology Telescope (VATT). The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are the University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT

  15. Effects of a non-standard W± magnetic moment in W± production via deep inelastic e-P scattering

    International Nuclear Information System (INIS)

    Boehm, M.; Rosado, A.

    1989-01-01

    We calculate the production of charged bosons in deep inelastic e-P scattering in the context of an electroweak model in which the vector-boson self-interactions may differ from those prescribed by the electroweak standard model. We present results which show the dependence of the cross section on the anomalous magnetic dipole moment κ of the W±. We find, for energies available at HERA, that even small deviations from the standard-model value of κ imply observable deviations in the W± production rates. We also show that the contributions from heavy-boson exchange diagrams are very important. (orig.)

  16. The science, technology and mission design for the Laser Astrometric test of relativity

    Science.gov (United States)

    Turyshev, Slava G.

    2006-01-01

    The Laser Astrometric Test of Relativity (LATOR) is a Michelson-Morley-type experiment designed to test Einstein's general theory of relativity in the most intense gravitational environment available in the solar system - the close proximity to the Sun.

  17. Precision Orbit of δ Delphini and Prospects for Astrometric Detection of Exoplanets

    Science.gov (United States)

    Gardner, Tyler; Monnier, John D.; Fekel, Francis C.; Williamson, Mike; Duncan, Douglas K.; White, Timothy R.; Ireland, Michael; Adams, Fred C.; Barman, Travis; Baron, Fabien; ten Brummelaar, Theo; Che, Xiao; Huber, Daniel; Kraus, Stefan; Roettenbacher, Rachael M.; Schaefer, Gail; Sturmann, Judit; Sturmann, Laszlo; Swihart, Samuel J.; Zhao, Ming

    2018-03-01

    Combining visual and spectroscopic orbits of binary stars leads to a determination of the full 3D orbit, individual masses, and distance to the system. We present a full analysis of the evolved binary system δ Delphini using astrometric data from the MIRC and PAVO instruments on the CHARA long-baseline interferometer, 97 new spectra from the Fairborn Observatory, and 87 unpublished spectra from the Lick Observatory. We determine the full set of orbital elements for δ Del, along with masses of 1.78 ± 0.07 M⊙ and 1.62 ± 0.07 M⊙ for each component, and a distance of 63.61 ± 0.89 pc. These results are important in two contexts: for testing stellar evolution models and for defining the detection capabilities for future planet searches. We find that the evolutionary state of this system is puzzling, as our measured flux ratios, radii, and masses imply a ∼200 Myr age difference between the components, using standard stellar evolution models. Possible explanations for this age discrepancy include mass-transfer scenarios with a now-ejected tertiary companion. With individual measurements taken over a span of two years, we achieve sensitivity to planets of 2 M_J on orbits >0.75 au around the individual components of hot binary stars via differential astrometry.
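
    The way a combined visual + spectroscopic orbit yields masses and distance follows from textbook relations: the angular semi-major axis and parallax give a in au, Kepler's third law gives the total mass, and the spectroscopic mass ratio splits it. The sketch below uses illustrative inputs chosen only to be roughly consistent with the abstract's results; it is not the published orbital solution.

```python
# Kepler's third law in solar units: M1 + M2 = a_au^3 / P_yr^2,
# with a_au = a_arcsec / parallax_arcsec. All inputs hypothetical.
a_mas, parallax_mas = 279.0, 15.7     # angular semi-major axis, parallax
P_yr = 40.58                          # orbital period
q = 0.91                              # mass ratio M2/M1 from radial velocities

a_au = a_mas / parallax_mas
m_total = a_au**3 / P_yr**2
m1 = m_total / (1 + q)
m2 = m_total - m1
print(f"d = {1000 / parallax_mas:.1f} pc, Mtot = {m_total:.2f} Msun, "
      f"M1 = {m1:.2f}, M2 = {m2:.2f}")
```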

  18. Revisiting TW Hydrae in light of new astrometric data

    Science.gov (United States)

    Teixeira, R.; Ducourant, C.; Galli, P. A. B.; Le Campion, J. F.; Zuckerman, B.; Krone-Martins, A. G. O.; Chauvin, G.; Song, I.

    2014-10-01

    Our efforts in the present work focus mainly on refining and improving the previous description and understanding of the TW Hydrae association (TWA), including a very detailed membership analysis and its dynamical and evolutionary age. To achieve our objectives in a fully reliable way, we take advantage of our own astrometric measurements (Ducourant et al. 2013), performed with NTT/EFOSC2 - ESO (La Silla, Chile) and spread over three years (2007-2010), and of those published in the literature. A very detailed membership analysis based on the convergent-point strategy developed by our team (Galli et al. 2012, 2013) allowed us to define a consistent kinematic group containing 31 of the 44 stars proposed as TWA members in the literature. Allowing for the possibility that our sample of stars is contaminated by non-members, and to remove the particular influence of each star, we applied a jackknife resampling technique, generating 2000 random lists of 13 stars taken from our 16 stars and calculating for each the epoch of convergence at which the radius is minimal. The mean of the epochs obtained and the dispersion about the mean give a dynamical age of 7.5 ± 0.7 Myr for the association, in good agreement with the previous traceback age (De La Reza et al. 2006). We also estimated the age of TWA moving-group members from pre-main-sequence evolutionary models (Siess et al. 2000) and find a mean age of 7.4 ± 1.2 Myr. These results show that the dynamical age of the association obtained via the traceback technique and the average age derived from theoretical evolutionary models are in good agreement.
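
    The traceback (dynamical) age estimate can be illustrated with a toy calculation, not the authors' pipeline: move present-day member positions backward along their velocity vectors and find the epoch at which the group was most compact.

```python
# Synthetic association: compact at birth, expanding since. Trace back and
# find the epoch of minimum mean radius. (1 pc/Myr ~ 0.98 km/s.)
import numpy as np

rng = np.random.default_rng(3)
n, age_true = 31, 7.5                          # members, true age in Myr
birth = rng.normal(0, 1.0, (n, 3))             # pc, compact at birth
vel = rng.normal(0, 1.5, (n, 3))               # pc/Myr
pos_now = birth + vel * age_true

ages = np.linspace(0, 15, 301)
radius = [np.linalg.norm((p := pos_now - vel * t) - p.mean(0), axis=1).mean()
          for t in ages]
print(f"traceback age = {ages[np.argmin(radius)]:.1f} Myr (true {age_true})")
```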

  19. Standard high-reliability integrated circuit logic packaging. [for deep space tracking stations

    Science.gov (United States)

    Slaughter, D. W.

    1977-01-01

    A family of standard, high-reliability hardware used for packaging digital integrated circuits is described. The design transition from early prototypes to production hardware is covered, and future plans are discussed. Interconnection techniques are described, as well as connectors and related hardware available at both the microcircuit-packaging and main-frame levels. General applications information is also provided.

  20. An astrometric search for a stellar companion to the sun

    International Nuclear Information System (INIS)

    Perlmutter, S.

    1986-01-01

    A companion star within 0.8 pc of the Sun has been postulated to explain a possible 26 Myr periodicity in mass extinctions of species on the Earth. Such a star would already be catalogued in the Yale Bright Star Catalogue unless it is fainter than m_v = 6.5; this limits the possible stellar types for an unseen companion to red dwarfs, brown dwarfs, or compact objects. Red dwarfs account for about 75% of these possible stars. We describe here the design and development of an astrometric search for a nearby red dwarf companion with a six-month peak-to-peak parallax of ≥2.5 arcseconds. We are measuring the parallax of 2770 candidate faint red stars selected from the Dearborn Observatory catalogue. An automated 30-inch telescope and CCD camera system collect digitized images of the candidate stars, along with a 13' x 16' surrounding field of background stars. Second-epoch images, taken a few months later, are registered to the first-epoch images using the background stars as fiducials. An apparent motion, m_a, of the candidate stars is found to a precision of σ(m_a) ≈ 0.08 pixel ≈ 0.2 arcseconds for fields with N_fiducial ≥ 10 fiducial stars visible above the background noise. This precision is sufficient to detect the parallactic motion of a star at 0.8 pc with a two-month interval between the observation epochs. Images with fewer fiducial stars above the background noise are observed with a longer interval between epochs. If a star is found with high parallactic motion, we will confirm its distance with further parallax measurements, photometry, and spectral studies, and will measure its radial velocity and proper motion to establish its orbit. We have demonstrated the search procedure with observations of 41 stars, and have shown that none of these is a nearby star.

  1. Implementation of the Global Parameters Determination in Gaia's Astrometric Solution (AGIS)

    Science.gov (United States)

    Raison, F.; Olias, A.; Hobbs, D.; Lindegren, L.

    2010-12-01

    Gaia is ESA’s space astrometry mission with a foreseen launch date in early 2012. Its main objective is to perform a stellar census of the 1000 Million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS). A core part of AGIS is to determine the accurate spacecraft attitude, geometric instrument calibration and astrometric model parameters for a well-behaved subset of all the objects (the ‘primary stars’). In addition, a small number of global parameters will be estimated, one of these being PPN γ. We present here the implementation of the algorithms dedicated to the determination of the global parameters.

  2. Inferring Binary and Trinary Stellar Populations in Photometric and Astrometric Surveys

    Science.gov (United States)

    Widmark, Axel; Leistedt, Boris; Hogg, David W.

    2018-04-01

    Multiple stellar systems are ubiquitous in the Milky Way but are often unresolved and seen as single objects in spectroscopic, photometric, and astrometric surveys. However, modeling them is essential for developing a full understanding of large surveys such as Gaia and connecting them to stellar and Galactic models. In this paper, we address this problem by jointly fitting the Gaia and Two Micron All Sky Survey photometric and astrometric data using a data-driven Bayesian hierarchical model that includes populations of binary and trinary systems. This allows us to classify observations into singles, binaries, and trinaries, in a robust and efficient manner, without resorting to external models. We are able to identify multiple systems and, in some cases, make strong predictions for the properties of their unresolved stars. We will be able to compare such predictions with Gaia Data Release 4, which will contain astrometric identification and analysis of binary systems.

  3. Standard high-resolution pelvic MRI vs. low-resolution pelvic MRI in the evaluation of deep infiltrating endometriosis

    International Nuclear Information System (INIS)

    Scardapane, Arnaldo; Lorusso, Filomenamila; Ferrante, Annunziata; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe; Scioscia, Marco

    2014-01-01

    To compare the capabilities of standard pelvic MRI with low-resolution pelvic MRI using fast breath-hold sequences to evaluate deep infiltrating endometriosis (DIE). Sixty-eight consecutive women with suspected DIE were studied with pelvic MRI. A double-acquisition protocol was carried out in each case. High-resolution (HR)-MRI consisted of axial, sagittal, and coronal TSE T2W images, axial TSE T1W, and axial THRIVE. Low-resolution (LR)-MRI was acquired using fast single-shot (SSH) T2 and T1 images. Two radiologists with 10 and 2 years of experience reviewed HR and LR images in two separate sessions. The presence of endometriotic lesions of the uterosacral ligament (USL), rectovaginal septum (RVS), pouch of Douglas (POD), and rectal wall was noted. The accuracies of LR-MRI and HR-MRI were compared with the laparoscopic and histopathological findings. Average acquisition times were 24 minutes for HR-MRI and 7 minutes for LR-MRI. The more experienced radiologist achieved higher accuracy with both HR-MRI and LR-MRI. The values of sensitivity, specificity, PPV, NPV, and accuracy did not change significantly between HR and LR images, nor did interobserver agreement, for any of the considered anatomic sites. LR-MRI performs as well as HR-MRI and is a valuable tool for the detection of deep endometriosis extension. (orig.)

  4. Standard high-resolution pelvic MRI vs. low-resolution pelvic MRI in the evaluation of deep infiltrating endometriosis

    Energy Technology Data Exchange (ETDEWEB)

    Scardapane, Arnaldo; Lorusso, Filomenamila; Ferrante, Annunziata; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe [University Hospital "Policlinico" of Bari, Interdisciplinary Department of Medicine, Bari (Italy)]; Scioscia, Marco [Sacro Cuore Don Calabria General Hospital, Department of Obstetrics and Gynecology, Negrar, Verona (Italy)]

    2014-10-15

    To compare the capabilities of standard pelvic MRI with low-resolution pelvic MRI using fast breath-hold sequences to evaluate deep infiltrating endometriosis (DIE). Sixty-eight consecutive women with suspected DIE were studied with pelvic MRI. A double-acquisition protocol was carried out in each case. High-resolution (HR)-MRI consisted of axial, sagittal, and coronal TSE T2W images, axial TSE T1W, and axial THRIVE. Low-resolution (LR)-MRI was acquired using fast single-shot (SSH) T2 and T1 images. Two radiologists with 10 and 2 years of experience reviewed HR and LR images in two separate sessions. The presence of endometriotic lesions of the uterosacral ligament (USL), rectovaginal septum (RVS), pouch of Douglas (POD), and rectal wall was noted. The accuracies of LR-MRI and HR-MRI were compared with the laparoscopic and histopathological findings. Average acquisition times were 24 minutes for HR-MRI and 7 minutes for LR-MRI. The more experienced radiologist achieved higher accuracy with both HR-MRI and LR-MRI. The values of sensitivity, specificity, PPV, NPV, and accuracy did not change significantly between HR and LR images, nor did interobserver agreement, for any of the considered anatomic sites. LR-MRI performs as well as HR-MRI and is a valuable tool for the detection of deep endometriosis extension. (orig.)

  5. High Astrometric Precision in the Calculation of the Coordinates of Orbiters in the GEO Ring

    Science.gov (United States)

    Lacruz, E.; Abad, C.; Downes, J. J.; Hernández-Pérez, F.; Casanova, D.; Tresaco, E.

    2018-04-01

    We present an astrometric method for calculating the positions of orbiters in the GEO ring with high precision, through a rigorous astrometric treatment of observations with a 1-m class telescope taken as part of the CIDA survey of the GEO ring. We compute the distortion pattern to correct for the systematic errors introduced by the optics and electronics of the telescope, resulting in absolute mean errors of 0.16″ and 0.12″ in right ascension and declination, respectively. These correspond to ≈25 m at the mean distance of the GEO ring and are thus good-quality results.

  6. A 481pJ/decision 3.4M decision/s Multifunctional Deep In-memory Inference Processor using Standard 6T SRAM Array

    OpenAIRE

    Kang, Mingu; Gonugondla, Sujan; Patil, Ameya; Shanbhag, Naresh

    2016-01-01

    This paper describes a multi-functional deep in-memory processor for inference applications. Deep in-memory processing is achieved by embedding pitch-matched low-SNR analog processing into a standard 6T 16KB SRAM array in 65 nm CMOS. Four applications are demonstrated. The prototype achieves up to 5.6X (9.7X estimated for multi-bank scenario) energy savings with negligible (

  7. A New Browser-based, Ontology-driven Tool for Generating Standardized, Deep Descriptions of Geoscience Models

    Science.gov (United States)

    Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.

    2016-12-01

    Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, and this is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF files (Resource Description Framework). This ontology is based on core concepts
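
    As a flavor of what an RDF-based deep description might look like (assumed names and namespace, not the MCM Tool's actual schema; assumes the rdflib package is installed), a few triples can record the governing equation, time-stepping scheme, and a state variable expressed as a CSDMS Standard Name:

```python
# Store a handful of "deep description" facts about a model as RDF triples.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

CSN = Namespace("http://example.org/csdms/")        # placeholder namespace
g = Graph()
model = URIRef("http://example.org/models/ExampleHydroModel")  # hypothetical

g.add((model, RDF.type, CSN.Model))
g.add((model, CSN.governing_equation, Literal("kinematic wave")))
g.add((model, CSN.time_stepping_scheme, Literal("explicit Euler")))
g.add((model, CSN.state_variable,
       Literal("land_surface_water__depth")))       # a CSDMS Standard Name
print(g.serialize(format="turtle"))
```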

  8. Quantum astrometric observables I: time delay in classical and quantum gravity

    NARCIS (Netherlands)

    Khavkine, I.

    2012-01-01

    A class of diffeomorphism invariant, physical observables, so-called astrometric observables, is introduced. A particularly simple example, the time delay, which expresses the difference between two initially synchronized proper time clocks in relative inertial motion, is analyzed in detail. It is

  9. Properties of comet Halley derived from thermal models and astrometric data

    International Nuclear Information System (INIS)

    Hechler, F.W.; Morley, T.A.; Mahr, P.

    1986-01-01

    The motion of a comet nucleus is influenced by outgassing forces. The orbit determination for comet Halley from astrometric data, using empirical force and observation-bias models and incorporating thermal models developed at ESOC into the orbit determination, allows us to draw some conclusions about the dynamics and physics of comet Halley. 21 references

  10. Gravitational lensing statistics with extragalactic surveys - II. Analysis of the Jodrell Bank-VLA Astrometric Survey

    NARCIS (Netherlands)

    Helbig, P; Marlow, D; Quast, R; Wilkinson, PN; Browne, IWA; Koopmans, LVE

    We present constraints on the cosmological constant λ_0 from gravitational lensing statistics of the Jodrell Bank-VLA Astrometric Survey (JVAS). Although this is the largest gravitational lens survey which has been analysed, cosmological constraints are only comparable to those from optical

  11. Gravitational lensing statistics with extragalactic surveys; 2, Analysis of the Jodrell Bank-VLA Astrometric Survey

    NARCIS (Netherlands)

    Helbig, P.; Marlow, D. R.; Quast, R.; Wilkinson, P. N.; Browne, I. W. A.; Koopmans, L. V. E.

    1999-01-01

    Published in: Astron. Astrophys. Suppl. Ser. 136 (1999) no. 2, pp. 297-305. Abstract: We present constraints on the cosmological constant λ_0 from gravitational lensing statistics of the Jodrell Bank-VLA Astrometric Survey (JVAS). Although this

  12. Bed rest versus early ambulation with standard anticoagulation in the management of deep vein thrombosis: a meta-analysis.

    Directory of Open Access Journals (Sweden)

    Zhenlei Liu

    Bed rest has long been considered the cornerstone of management of deep vein thrombosis (DVT), though this is not evidence-based, and there is growing evidence favoring early ambulation. Electronic databases including Medline, PubMed, the Cochrane Library and three Chinese databases were searched with the key words "deep vein thrombosis", "pulmonary embolism", "venous thrombosis", "bed rest", "immobilization", "mobilization" and "ambulation". We considered randomized controlled trials and prospective or retrospective cohort studies that compared the outcomes of acute DVT patients managed with early ambulation versus bed rest, in addition to standard anticoagulation. Meta-analyses of the incidence of new pulmonary embolism (PE), progression of DVT, and DVT-related deaths were conducted, as well as of the extent of remission of pain and edema. Thirteen studies were included with a total of 3269 patients. Compared to bed rest, early ambulation was not associated with a higher incidence of new PE, progression of DVT, or DVT-related deaths (RD -0.03, 95% CI -0.05 to -0.02; Z = 1.24, p = 0.22; random-effects model, Tau² = 0.01). Moreover, if patients suffered moderate or severe pain initially, early ambulation was related to a better outcome with respect to remission of acute pain in the affected limb (SMD 0.42, 95% CI 0.09 to 0.74; Z = 2.52, p = 0.01; random-effects model, Tau² = 0.04). Meta-analysis of the alleviation of edema could not yield a solid conclusion because of significant heterogeneity among the few studies. Compared to bed rest, early ambulation of acute DVT patients on anticoagulation was not associated with a higher incidence of new PE, progression of DVT, or DVT-related deaths. Furthermore, for patients who suffered moderate or severe pain initially, a better outcome was seen in the early ambulation group with regard to remission of acute pain in the affected limb.

  13. A SEARCH FOR STELLAR-MASS BLACK HOLES VIA ASTROMETRIC MICROLENSING

    Energy Technology Data Exchange (ETDEWEB)

    Lu, J. R. [Astronomy Department, University of California, Berkeley, CA 94720 (United States); Sinukoff, E. [Institute for Astronomy, University of Hawai‘i at Mānoa, Honolulu, HI 96822 (United States); Ofek, E. O. [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot, 76100 (Israel); Udalski, A.; Kozlowski, S. [Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa (Poland)

    2016-10-10

    While dozens of stellar-mass black holes (BHs) have been discovered in binary systems, isolated BHs have eluded detection. Their presence can be inferred when they lens light from a background star. We attempt to detect the astrometric lensing signatures of three photometrically identified microlensing events, OGLE-2011-BLG-0022, OGLE-2011-BLG-0125, and OGLE-2012-BLG-0169 (OB110022, OB110125, and OB120169), located toward the Galactic Bulge. These events were selected because of their long durations, which statistically favor more massive lenses. Astrometric measurements were made over one to two years using laser guide star adaptive optics observations from the W. M. Keck Observatory. Lens model parameters were first constrained by the photometric light curves. The OB120169 light curve is well fit by a single-lens model, while both the OB110022 and OB110125 light curves favor binary-lens models. Using the photometric fits as prior information, no significant astrometric lensing signal was detected and all targets were consistent with linear motion. The significant lack of astrometric signal constrains the lens mass of OB110022 to 0.05-1.79 M⊙ at 99.7% confidence, which disfavors a BH lens. Fits to OB110125 yielded a reduced Einstein crossing time and insufficient observations during the peak, so no mass limits were obtained. Two degenerate solutions exist for OB120169, with lens masses between 0.2-38.8 M⊙ and 0.4-39.8 M⊙ at 99.7% confidence. Follow-up observations of OB120169 will further constrain the lens mass. Based on our experience, we use simulations to design optimal astrometric observing strategies and show that with more typical observing conditions the detection of BHs is feasible.
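
    For context, the mass sensitivity comes from the angular Einstein radius, θ_E = √(4GM/c² × (1/D_L − 1/D_S)), which sets the scale of the astrometric shift. The sketch below uses rounded constants and illustrative distances (not values from the paper) to show why a ~10 M⊙ black hole lens produces a milliarcsecond-scale θ_E while a typical stellar lens does not:

        import math

        G, C = 6.674e-11, 2.998e8        # SI units
        M_SUN, PC = 1.989e30, 3.0857e16  # kg, m
        RAD_TO_MAS = math.degrees(1.0) * 3600.0e3

        def theta_e_mas(m_lens_msun, d_lens_pc, d_source_pc):
            # Angular Einstein radius of a point lens, in milliarcseconds
            pi_rel = 1.0 / (d_lens_pc * PC) - 1.0 / (d_source_pc * PC)
            return math.sqrt(4.0 * G * m_lens_msun * M_SUN / C**2 * pi_rel) * RAD_TO_MAS

        print(theta_e_mas(10.0, 4000.0, 8000.0))  # ~3.2 mas for a 10 M_sun lens
        print(theta_e_mas(0.5, 4000.0, 8000.0))   # ~0.7 mas for a typical star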

  14. Revealing Companions to Nearby Stars with Astrometric Acceleration

    Science.gov (United States)

    2012-07-01

    CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil), and Ministerio de Ciencia, Tecnología e... Most observations were done in the I or Strömgren y bands. The detection limits Δm(ρ) for the unresolved stars are published. They are not as deep... is the duration of the Hipparcos mission. The displacement of the photo-center in X, Y caused by motion due to a binary is calculated for each of

  15. Deep defect levels in standard and oxygen-enriched silicon detectors before and after ⁶⁰Co-gamma-irradiation

    CERN Document Server

    Stahl, J; Lindström, G; Pintilie, I

    2003-01-01

    Capacitance Deep Level Transient Spectroscopy (C-DLTS) measurements have been performed on standard and oxygen-doped silicon detectors manufactured from high-resistivity n-type float-zone material with ⟨111⟩ and ⟨100⟩ orientation. Three different oxygen concentrations were achieved by the so-called diffusion-oxygenated float zone (DOFZ) process initiated by the CERN-RD48 (ROSE) collaboration. Before the irradiation a material characterization was performed. In contrast to radiation damage by neutrons or high-energy charged hadrons, where the bulk damage is dominated by a mixture of clusters and point defects, the bulk damage caused by ⁶⁰Co-gamma-radiation is due only to the introduction of point defects. The dominant electrically active defects detected after ⁶⁰Co-gamma-irradiation by C-DLTS are the electron traps VO_i, C_iC_s, V_2(=/-), V_2(-/0) and the hole trap C_iO_i. The main difference betwe...

  16. Magnetic Field Studies in BL Lacertae through Faraday Rotation and a Novel Astrometric Technique

    Directory of Open Access Journals (Sweden)

    Sol N. Molina

    2017-12-01

    It is thought that dynamically important helical magnetic fields, twisted by the differential rotation of the black hole's accretion disk or ergosphere, play an important role in the launching, acceleration, and collimation of active galactic nuclei (AGN) jets. We present multi-frequency astrometric and polarimetric Very Long Baseline Array (VLBA) images at 15, 22, and 43 GHz, as well as Faraday rotation analyses, of the jet in BL Lacertae, as part of a sample of AGN jets aimed at probing the magnetic field structure at the innermost scales to test jet formation models. The novel astrometric technique applied allows us to obtain the absolute position at mm wavelengths without any external calibrator.

  17. On an Allan variance approach to classify VLBI radio-sources on the basis of their astrometric stability

    Science.gov (United States)

    Gattano, C.; Lambert, S.; Bizouard, C.

    2017-12-01

    In the context of selecting sources defining the celestial reference frame, we compute astrometric time series of all VLBI radio sources from observations in the International VLBI Service database. The time series are then analyzed with the Allan variance in order to estimate the astrometric stability. From the results, we establish a new classification that takes into account the full multi-time-scale information. The algorithm is flexible in its definition of a "stable source" through an adjustable threshold.
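
    For reference, the sketch below implements a textbook overlapping Allan variance for an evenly sampled series; the paper's multi-time-scale treatment of irregularly sampled VLBI coordinate series is necessarily more involved than this toy version.

        import numpy as np

        def allan_variance(x, m, tau0=1.0):
            # Overlapping Allan variance of series x at averaging time m * tau0
            x = np.asarray(x, dtype=float)
            d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
            return np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)

        rng = np.random.default_rng(0)
        white = rng.normal(size=4000)            # "stable" source: white positional noise
        walk = np.cumsum(rng.normal(size=4000))  # "unstable" source: random-walk coordinate
        for m in (1, 10, 100):
            print(m, allan_variance(white, m), allan_variance(walk, m))
        # The white-noise variance falls as 1/m**2 but the random walk only as 1/m,
        # so their ratio grows with averaging time and flags the drifting series.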

  18. Faster, Better, Cheaper: News on Seeking Gaia's Astrometric Solution with AGIS

    Science.gov (United States)

    Lammers, U.; Lindegren, L.; Bombrun, A.; O'Mullane, W.; Hobbs, D.

    2010-12-01

    Gaia is ESA's ambitious space astrometry mission with a foreseen launch date in early 2012. Its main objective is to perform a stellar census of the 1000 million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS) - the mathematical and numerical framework for combining the ≈80 available observations per star obtained during Gaia's 5 yr lifetime into a single global astrometric solution. At last year's ADASS XVIII we presented (O4.1) in detail the fundamental working principles of AGIS, its development status, and selected results obtained by running the system on processing hardware at ESAC, Madrid with large-scale simulated data sets. We present here the latest developments around AGIS, highlighting in particular a much-improved algebraic solving method that has recently been implemented. This Conjugate Gradient scheme improves the convergence behavior in significant ways and leads to a solution of much higher scientific quality. We also report on a new collaboration aiming at processing the data from the future small Japanese astrometry mission Nano-JASMINE with AGIS.

  19. Astrometric detectability of systems with unseen companions: effects of the Earth orbital motion

    Science.gov (United States)

    Butkevich, Alexey G.

    2018-06-01

    The astrometric detection of an unseen companion is based on an analysis of the apparent motion of its host star around the system's barycentre. Systems with an orbital period close to 1 yr may escape detection if the orbital motion of their host stars is observationally indistinguishable from the effects of parallax. Additionally, an astrometric solution may produce a biased parallax estimate for such systems. We examine the effects of the orbital motion of the Earth on astrometric detectability in terms of a correlation between the Earth's orbital position and the position of the star relative to its system barycentre. The χ² statistic for parallax estimation is calculated analytically, leading to expressions that relate the decrease in detectability and the accompanying parallax bias to the position correlation function. The impact of the Earth's motion depends critically on the exoplanet's orbital period, diminishing rapidly as the period deviates from 1 yr. A selection effect against systems with periods near 1 yr is therefore expected. Statistical estimation shows that the corresponding loss of sensitivity results in a typical 10 per cent increase in the detection threshold. Consideration of eccentric orbits shows that the Earth's motion has no effect on detectability for e ≳ 0.5. The dependence of the detectability on other parameters, such as the orbital phases and the inclination of the orbital plane to the ecliptic, is smooth and monotonic because these are described by simple trigonometric functions.
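
    The degeneracy can be seen in a one-dimensional toy model: the correlation between the annual parallax factor and the host star's orbital displacement peaks sharply at P = 1 yr. The sketch below uses illustrative epochs and phases, not the paper's analytic treatment:

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0.0, 5.0, 70))     # observing epochs in years
        parallax_factor = np.sin(2.0 * np.pi * t)  # annual part of the parallactic motion

        for period in (0.5, 0.9, 1.0, 1.1, 2.0):
            orbit = np.sin(2.0 * np.pi * t / period)  # orbital displacement, same phase
            r = np.corrcoef(parallax_factor, orbit)[0, 1]
            print(period, round(r, 2))
        # r -> 1 exactly at P = 1 yr and drops quickly away from it, mirroring the
        # loss of detectability and the parallax bias described above.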

  20. Gaia , an all sky astrometric and photometric survey

    International Nuclear Information System (INIS)

    Carrasco, J.M.

    2017-01-01

    The Gaia space mission includes a low-resolution spectroscopic instrument to classify and parametrize the observed sources. Gaia is a full-sky unbiased survey down to about 20th magnitude. The scanning law yields a rather uniform coverage of the sky over the full mission. The data reduction is global over the full mission. Both the sky coverage and the data-reduction strategy ensure an unprecedented all-sky homogeneous spectrophotometric survey. Such a survey is of interest for future ground-based and space projects (LSST, PLATO, EUCLID, ...). This work addresses the exploitation of the Gaia spectrophotometry as a standard photometry reference through a discussion of the sky coverage, the spectrophotometric precision and the expected uncertainties of the synthetic photometry derived from the low-resolution Gaia spectra and photometry.

  1. The consensus among Chinese interventional experts on the standard of interventional therapy for deep venous thrombosis of lower extremity

    International Nuclear Information System (INIS)

    Academic Group of Interventional Radiology, Radiology Branch of Chinese Medical Association

    2011-01-01

    This paper aims to introduce the indications and contraindications of catheter-directed thrombolysis, percutaneous mechanical thrombectomy, balloon angioplasty and stent implantation for deep venous thrombosis of the lower extremity, and also to summarize and illustrate the operating procedures, the points for attention, and the perioperative complications and their prevention when performing the different kinds of interventional techniques. Great importance is attached to the interventional therapy of both acute and subacute deep venous thrombosis of the lower extremity in order to effectively reduce the occurrence of post-thrombotic syndrome. (authors)

  2. Tycho-Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore MD 21218 (United States)

    2017-04-20

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.
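
    A minimal sketch of the inverse-covariance weighting described above, with fictitious member values; the published analysis also handles unit conversions and the cluster-internal velocity dispersion, which this toy version omits:

        import numpy as np

        def combine(values, covs):
            # Inverse-covariance weighted mean of per-star (pmra, pmdec, parallax) vectors
            w_sum, wx_sum = np.zeros((3, 3)), np.zeros(3)
            for x, c in zip(values, covs):
                w = np.linalg.inv(c)
                w_sum += w
                wx_sum += w @ x
            cov = np.linalg.inv(w_sum)
            return cov @ wx_sum, cov  # combined estimate and its covariance

        x1, c1 = np.array([5.0, -2.5, 0.22]), np.diag([0.3, 0.3, 0.25]) ** 2
        x2, c2 = np.array([5.3, -2.2, 0.18]), np.diag([0.4, 0.4, 0.30]) ** 2
        mean, cov = combine([x1, x2], [c1, c2])
        print(mean, np.sqrt(np.diag(cov)))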

  3. Iterative methods used in overlap astrometric reduction techniques do not always converge

    Science.gov (United States)

    Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.

    1993-04-01

    In this paper we prove that the classical Gauss-Seidel-type iterative methods used for the solution of the reduced normal equations occurring in overlap reduction methods of astrometry do not always converge, and we exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent when the Gauss-Seidel method is divergent. We conjecture that the Wang method converges for the solution of astrometric problems using overlap techniques.
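
    The divergence is easy to exhibit. Below, Gauss-Seidel is applied to a 2x2 symmetric system that is not diagonally dominant (a constructed example, not one from the paper); the iteration matrix has spectral radius 4, so the residual grows without bound even though the system has the unique solution (1, 1):

        import numpy as np

        A = np.array([[1.0, 2.0],
                      [2.0, 1.0]])  # symmetric, but not diagonally dominant
        b = np.array([3.0, 3.0])    # exact solution: x = (1, 1)

        x = np.zeros(2)
        for k in range(6):
            x[0] = (b[0] - A[0, 1] * x[1]) / A[0, 0]  # sweep using updated values
            x[1] = (b[1] - A[1, 0] * x[0]) / A[1, 1]
            print(k, x, np.linalg.norm(A @ x - b))
        # The residual quadruples at every sweep: Gauss-Seidel diverges here.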

  4. The principle of measuring unusual change of underground mass by optical astrometric instrument

    Directory of Open Access Journals (Sweden)

    Wang Jiancheng

    2012-11-01

    In this study, we estimate the deflection angle of the plumb line at a ground site, and give a relation between the angle, the abnormal mass, and the site distance (depth and horizontal distance). We then derive the abnormality of underground material density using the plumb lines measured at different sites, and study earthquake gestation, development and occurrence. Using the deflection angles of plumb lines observed at two sites, we give a method to calculate the mass and the center of gravity of underground materials. We also estimate the abnormal masses of latent seismic zones with different energies, using thermodynamic relations, and introduce a new optical astrometric instrument we have developed.

  5. Low incidence of clonality in cold water corals revealed through the novel use of a standardized protocol adapted to deep sea sampling

    Science.gov (United States)

    Becheler, Ronan; Cassone, Anne-Laure; Noël, Philippe; Mouchel, Olivier; Morrison, Cheryl L.; Arnaud-Haond, Sophie

    2017-11-01

    Sampling in the deep sea is a technical challenge, which has hindered the acquisition of the robust datasets that are necessary to determine the fine-grained biological patterns and processes that may shape genetic diversity. Estimates of the extent of clonality in deep-sea species, despite the importance of clonality in shaping local dynamics and evolutionary trajectories, have been largely obscured by such limitations. Cold-water coral reefs along European margins are formed mainly by two reef-building species, Lophelia pertusa and Madrepora oculata. Here we present a fine-grained analysis of the genotypic and genetic composition of reefs occurring in the Bay of Biscay, based on an innovative deep-sea sampling protocol. This strategy was designed to be standardized and random, and allowed the georeferencing of all sampled colonies. Clonal lineages discriminated through their Multi-Locus Genotypes (MLG) at 6-7 microsatellite markers could thus be mapped to assess the level of clonality and the spatial spread of clonal lineages. High values of clonal richness were observed for both species across all sites, suggesting a limited occurrence of clonality, which likely originated through fragmentation. Additionally, spatial autocorrelation analysis underlined the possible occurrence of fine-grained genetic structure in several populations of both L. pertusa and M. oculata. The two cold-water coral species examined had contrasting patterns of connectivity among canyons, with among-canyon genetic structuring detected in M. oculata, whereas L. pertusa was panmictic at the canyon scale. This study exemplifies that a standardized, random and georeferenced sampling strategy, while challenging, can be applied in the deep sea; associated benefits outlined here include improved estimates of fine-grained patterns of clonality and dispersal that are comparable across sites and among species.

  6. Application of the Coastal and Marine Ecological Classification Standard to ROV Video Data for Enhanced Analysis of Deep-Sea Habitats in the Gulf of Mexico

    Science.gov (United States)

    Ruby, C.; Skarke, A. D.; Mesick, S.

    2016-02-01

    The Coastal and Marine Ecological Classification Standard (CMECS) is a network of common nomenclature that provides a comprehensive framework for organizing physical, biological, and chemical information about marine ecosystems. It was developed by the National Oceanic and Atmospheric Administration (NOAA) Coastal Services Center, in collaboration with other federal agencies and academic institutions, as a means for scientists to more easily access, compare, and integrate marine environmental data from a wide range of sources and time frames. CMECS has been endorsed by the Federal Geographic Data Committee (FGDC) as a national metadata standard. The research presented here is focused on the application of CMECS to deep-sea video and environmental data collected by the NOAA ROV Deep Discoverer and the NOAA Ship Okeanos Explorer in the Gulf of Mexico in 2011-2014. Specifically, a spatiotemporal index of the physical, chemical, biological, and geological features observed in ROV video records was developed in order to allow scientists otherwise unfamiliar with the specific content of existing video data to rapidly determine the abundance and distribution of features of interest, and thus evaluate the applicability of those video data to their research. CMECS units (setting, component, or modifier) for seafloor images extracted from high-definition ROV video data were established based upon visual assessment as well as analysis of coincident environmental sensor (temperature, conductivity), navigation (ROV position, depth, attitude), and log (narrative dive summary) data. The resulting classification units were integrated into easily searchable textual and geo-databases as well as an interactive web map. The spatial distribution and associations of deep-sea habitats as indicated by CMECS classifications are described, and optimized methodological approaches for applying CMECS to deep-sea video and environmental data are presented.

  7. VLBI FOR GRAVITY PROBE B. IV. A NEW ASTROMETRIC ANALYSIS TECHNIQUE AND A COMPARISON WITH RESULTS FROM OTHER TECHNIQUES

    International Nuclear Information System (INIS)

    Lebach, D. E.; Ratner, M. I.; Shapiro, I. I.; Bartel, N.; Bietenholz, M. F.; Lederman, J. I.; Ransom, R. R.; Campbell, R. M.; Gordon, D.; Lestrade, J.-F.

    2012-01-01

    When very long baseline interferometry (VLBI) observations are used to determine the position or motion of a radio source relative to reference sources nearby on the sky, the astrometric information is usually obtained via (1) phase-referenced maps or (2) parametric model fits to measured fringe phases or multiband delays. In this paper, we describe a 'merged' analysis technique which combines some of the most important advantages of these other two approaches. In particular, our merged technique combines the superior model-correction capabilities of parametric model fits with the ability of phase-referenced maps to yield astrometric measurements of sources that are too weak to be used in parametric model fits. We compare the results from this merged technique with the results from phase-referenced maps and from parametric model fits in the analysis of astrometric VLBI observations of the radio-bright star IM Pegasi (HR 8703) and the radio source B2252+172 nearby on the sky. In these studies we use central-core components of radio sources 3C 454.3 and B2250+194 as our positional references. We obtain astrometric results for IM Peg with our merged technique even when the source is too weak to be used in parametric model fits, and we find that our merged technique yields astrometric results superior to the phase-referenced mapping technique. We used our merged technique to estimate the proper motion and other astrometric parameters of IM Peg in support of the NASA/Stanford Gravity Probe B mission.

  8. To Boldly Go Where No Man has Gone Before: Seeking Gaia's Astrometric Solution with AGIS

    Science.gov (United States)

    Lammers, U.; Lindegren, L.; O'Mullane, W.; Hobbs, D.

    2009-09-01

    Gaia is ESA's ambitious space astrometry mission with a foreseen launch date in late 2011. Its main objective is to perform a stellar census of the 1,000 million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec (μas) level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS) - the mathematical and numerical framework for combining the ≈80 available observations per star obtained during Gaia's 5 yr lifetime into a single global astrometric solution. AGIS consists of four main algorithmic cores which improve the source astrometric parameters, satellite attitude, calibration, and global parameters in a block-iterative manner. We present and discuss this basic scheme, the algorithms themselves and the overarching system architecture. The latter is a data-driven distributed processing framework designed to achieve an overall system performance that is not I/O limited. AGIS is being developed as a pure Java system by a small number of geographically distributed European groups. We present some of the software engineering aspects of the project and show the methodologies and tools used. Finally we briefly discuss how AGIS is embedded into the overall Gaia data processing architecture.

  9. A NEW APPLICATION OF THE ASTROMETRIC METHOD TO BREAK SEVERE DEGENERACIES IN BINARY MICROLENSING EVENTS

    International Nuclear Information System (INIS)

    Chung, Sun-Ju; Park, Byeong-Gon; Humphrey, Andrew; Ryu, Yoon-Hyun

    2009-01-01

    When a source star is microlensed by one component of a widely separated binary-star system, then after that lensing event has finished, an additional event induced by the other component can be detected. In this paper, we investigate whether the close/wide degeneracies in binary lensing events can be resolved by detecting the additional centroid shift of the source images induced by the secondary star in wide binary lensing events. From this investigation, we find that if the source star passes close to the Einstein ring of the secondary companion, the degeneracy can be easily resolved by using future astrometric follow-up observations with high astrometric precision. We determine the probability of detecting the additional centroid shift in binary lensing events with high magnification. From this, we find that the degeneracy of binary lensing events with a separation of ≲20.0 AU can be resolved with significant efficiency. We also estimate the waiting time for the detection of the additional centroid shift in wide binary lensing events. We find that for typical Galactic lensing events with a separation of ≲20.0 AU, the additional centroid shift can be detected within 100 days, and thus the degeneracy of those events can be sufficiently broken within a year.

  10. NEAT: an astrometric space telescope to search for habitable exoplanets in the solar neighborhood

    Science.gov (United States)

    Crouzier, A.; Malbet, F.; Kern, P.; Feautrier, P.; Preiss, O.; Martin, G.; Henault, F.; Stadler, E.; Lafrasse, S.; Behar, E.; Saintpe, M.; Dupont, J.; Potin, S.; Lagage, P.-O.; Cara, C.; Leger, A.; Leduigou, J.-M.; Shao, M.; Goullioud, R.

    2014-03-01

    The last decade has witnessed a spectacular development of exoplanet detection techniques, which has led to an exponential number of discoveries and a great diversity of known exoplanets. However, the quest for the holy grail of astrobiology, i.e. a nearby terrestrial exoplanet in the habitable zone around a solar-type star, is still ongoing and proves to be very hard. Radial velocities will have to overcome stellar noise if they are to discover habitable planets around stars more massive than M dwarfs. For very close systems, transits are impeded by their low geometrical probability. Here we present an alternative concept: space astrometry. NEAT (Nearby Earth Astrometric Telescope) is a concept for an astrometric mission proposed to ESA whose goal is to make a whole-sky survey of close (less than 20 pc) planetary systems. The detection limit required for the instrument is the astrometric signal of an Earth analog (at 10 pc). Differential astrometry is a very interesting tool to detect nearby habitable exoplanets. Indeed, for F, G and K main-sequence stars, the astrophysical noise is smaller than the astrometric signal, contrary to the case for radial velocities. The difficulty lies in the fact that the signal of an exo-Earth around a G-type star at 10 pc is a tiny 0.3 micro-arcsec, which is equivalent to a coin on the Moon, seen from the Earth: the main challenge is related to instrumentation. In order to reach this specification, NEAT consists of two formation-flying spacecraft at a 40 m distance, one carrying the mirror and the other the focal plane. Thus NEAT has a configuration with only one optical surface: an off-axis parabola. Consequently, beamwalk errors are common to the whole field of view and have a small effect on differential astrometry. Moreover, a metrology system projects Young's fringes on the focal plane, which can characterize the pixels whenever necessary during the mission. NEAT has two main scientific objectives: combined with
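
    The 0.3 micro-arcsec figure quoted above follows from the standard astrometric-signature formula α = (M_p/M_*) × (a/d), which gives arcseconds directly when a is in AU and d in parsecs; a quick check with rounded constants:

        M_RATIO = 3.0e-6      # Earth-to-Sun mass ratio (rounded)
        a_au, d_pc = 1.0, 10.0

        alpha_arcsec = M_RATIO * a_au / d_pc
        print(alpha_arcsec * 1.0e6, "micro-arcsec")  # ~0.3, as stated above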

  11. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment.

    Science.gov (United States)

    Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara

    2018-04-06

    The present study tested the combination of an established and validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning with food matching and standardization based on natural language processing. The former is specific in that it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching first describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model, trained on fake-food images acquired by 124 study participants and covering fifty-five food classes, was 92.18 %, while the food matching was performed with a classification accuracy of 93 %. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
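
    The two segmentation scores mentioned above are simple to compute from a predicted and a ground-truth label mask; a minimal sketch with toy 2x2 masks (not the study's data):

        import numpy as np

        def pixel_accuracy(pred, gt):
            # Fraction of pixels whose predicted class matches the ground truth
            return float(np.mean(pred == gt))

        def mean_iou(pred, gt, n_classes):
            # Mean Intersection-over-Union over classes present in either mask
            ious = []
            for c in range(n_classes):
                inter = np.logical_and(pred == c, gt == c).sum()
                union = np.logical_or(pred == c, gt == c).sum()
                if union > 0:
                    ious.append(inter / union)
            return float(np.mean(ious))

        pred = np.array([[0, 1], [1, 1]])
        gt = np.array([[0, 1], [2, 1]])
        print(pixel_accuracy(pred, gt), mean_iou(pred, gt, n_classes=3))  # 0.75, ~0.56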

  12. The Caviar software package for the astrometric reduction of Cassini ISS images: description and examples

    Science.gov (United States)

    Cooper, N. J.; Lainey, V.; Meunier, L.-E.; Murray, C. D.; Zhang, Q.-F.; Baillie, K.; Evans, M. W.; Thuillot, W.; Vienne, A.

    2018-02-01

    Aims: Caviar is a software package designed for the astrometric measurement of natural satellite positions in images taken using the Imaging Science Subsystem (ISS) of the Cassini spacecraft. Aspects of the structure, functionality, and use of the software are described, and examples are provided. The integrity of the software is demonstrated by generating new measurements of the positions of selected major satellites of Saturn, 2013-2016, along with their observed minus computed (O-C) residuals relative to published ephemerides. Methods: Satellite positions were estimated by fitting a model to the imaged limbs of the target satellites. Corrections to the nominal spacecraft pointing were computed using background star positions based on the UCAC5 and Tycho2 star catalogues. UCAC5 is currently used in preference to Gaia-DR1 because of the availability of proper motion information in UCAC5. Results: The Caviar package is available for free download. A total of 256 new astrometric observations of the Saturnian moons Mimas (44), Tethys (58), Dione (55), Rhea (33), Iapetus (63), and Hyperion (3) have been made, in addition to opportunistic detections of Pandora (20), Enceladus (4), Janus (2), and Helene (5), giving an overall total of 287 new detections. Mean observed-minus-computed residuals for the main moons relative to the JPL SAT375 ephemeris were −0.66 ± 1.30 pixels in the line direction and 0.05 ± 1.47 pixels in the sample direction. Mean residuals relative to the IMCCE NOE-6-2015-MAIN-coorb2 ephemeris were −0.34 ± 0.91 pixels in the line direction and 0.15 ± 1.65 pixels in the sample direction. The reduced astrometric data are provided in the form of satellite positions for each image. The reference star positions are included in order to allow reprocessing at some later date using improved star catalogues, such as later releases of Gaia, without the need to re-estimate the imaged star positions. The Caviar software is available for free download from: ftp

  13. Nano-JASMINE: use of AGIS for the next astrometric satellite

    Science.gov (United States)

    Yamada, Y.; Gouda, N.; Lammers, U.

    The core data reduction for the Nano-JASMINE mission is planned to be done with Gaia's Astrometric Global Iterative Solution (AGIS). The collaboration started in 2007, prompted by Uwe Lammers' proposal. In addition to the similar design and operating principles of the two missions, this is possible thanks to the encapsulation of all Gaia-specific aspects of AGIS in a parameter database. Nano-JASMINE will be the test bench for the Gaia AGIS software. We present this idea in detail and the necessary practical steps to make AGIS work with Nano-JASMINE data. We also show the key mission parameters, goals, and status of the data reduction for Nano-JASMINE.

  14. First results of astrometric and photometric processing of scanned plates DLFA MAO NAS of Ukraine

    Science.gov (United States)

    Shatokhina, S.; Andruk, V.; Yatsenko, A.

    2011-02-01

    In this paper a first assessment of the astrometric and photometric results of the digitization of images on plates of the Double Long Focus Astrograph (DLFA) is made. The digitization of the plates was carried out with a Microtek ScanMaker 9800XL TMA scanner. For image processing the LINUX/MIDAS/ROMAFOT package was used. For selected DLFA plates the mean square errors per image are 0.06" for equatorial coordinates (in the system of the TYCHO-2 catalogue) and 0.13m for stellar magnitudes (in the Johnson B system). The errors are of a random nature and there are no systematic dependences on the coordinates, magnitudes or colours of the stars. The results obtained were compared with those of earlier measurements of the plates made with the PARSEC complex.

  15. Digitization, correction, and standardization of geophysical logs from deep boreholes of Central New York State. Final technical report

    International Nuclear Information System (INIS)

    Robinson, J.E.

    1981-07-01

    Digitization and standardization of suitable logs are provided for wells located in the area of Central New York that had been under consideration as a possible site for the disposal of radioactive waste. Digitized logs included those with geophysical parameters that could be analyzed for formation porosity and lithology and in which the log interval was sufficient to evaluate formation parameters. Digitizing equipment was purchased and interfaced, and the necessary software was written and documented. Magnetic tapes and hard-copy playbacks of all digitized well logs are being forwarded to the Department of Energy repository at Battelle Memorial Institute for use in future projects.

  16. Visualization of the internal globus pallidus: sequence and orientation for deep brain stimulation using a standard installation protocol at 3.0 Tesla.

    Science.gov (United States)

    Nölte, Ingo S; Gerigk, Lars; Al-Zghloul, Mansour; Groden, Christoph; Kerl, Hans U

    2012-03-01

    Deep brain stimulation (DBS) of the internal globus pallidus (GPi) has shown remarkable therapeutic benefits for treatment-resistant neurological disorders including dystonia and Parkinson's disease (PD). The success of DBS is critically dependent on reliable visualization of the GPi. The aim of the study was to evaluate promising 3.0 Tesla magnetic resonance imaging (MRI) methods for pre-stereotactic visualization of the GPi using a standard installation protocol. MRI at 3.0 T of nine healthy individuals and of one patient with PD was acquired (FLAIR, T1-MPRAGE, T2-SPACE, T2*-FLASH2D, susceptibility-weighted imaging (SWI)). Image quality and visualization of the GPi for each sequence were assessed by two neuroradiologists independently using a 6-point scale. Axial, coronal, and sagittal planes of the T2*-FLASH2D images were compared. Inter-rater reliability, contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) for the GPi were determined. For illustration, axial T2*-FLASH2D images were fused with a section schema of the Schaltenbrand-Wahren stereotactic atlas. The GPi was best and most reliably visualized in axial and, to a lesser degree, coronal T2*-FLASH2D images. No major artifacts in the GPi were observed in any of the sequences. SWI offered a significantly higher CNR for the GPi compared to standard T2-weighted imaging using the standard parameters. The fusion of the axial T2*-FLASH2D images and the atlas projected the GPi clearly within the boundaries of the section schema. Using a standard installation protocol at 3.0 T, T2*-FLASH2D imaging (particularly the axial view) provides optimal and reliable delineation of the GPi.

  17. Double-blind test program for astrometric planet detection with Gaia

    Science.gov (United States)

    Casertano, S.; Lattanzi, M. G.; Sozzetti, A.; Spagna, A.; Jancart, S.; Morbidelli, R.; Pannunzio, R.; Pourbaix, D.; Queloz, D.

    2008-05-01

    Aims: The scope of this paper is twofold. First, it describes the simulation scenarios and the results of a large-scale, double-blind test campaign carried out to estimate the potential of Gaia for detecting and measuring planetary systems. The identified capabilities are then put in context by highlighting the unique contribution that the Gaia exoplanet discoveries will be able to bring to the science of extrasolar planets in the next decade. Methods: We use detailed simulations of the Gaia observations of synthetic planetary systems and develop and utilize independent software codes in double-blind mode to analyze the data, including statistical tools for planet detection and different algorithms for single and multiple Keplerian orbit fitting that use no a priori knowledge of the true orbital parameters of the systems. Results: 1) Planets with astrometric signatures α ≃ 3 times the assumed single-measurement error σ_ψ and period P ≤ 5 yr can be detected reliably and consistently, with a very small number of false positives. 2) At twice the detection limit, uncertainties in orbital parameters and masses are typically 15-20%. 3) Over 70% of two-planet systems with well-separated periods in the range 0.2 ≤ P ≤ 9 yr, astrometric signal-to-noise ratio 2 ≤ α/σ_ψ ≤ 50, and eccentricity e ≤ 0.6 are correctly identified. 4) Favorable orbital configurations (both planets with P ≤ 4 yr and α/σ_ψ ≥ 10, redundancy over a factor of 2 in the number of observations) have orbital elements measured to better than 10% accuracy > 90% of the time, and the value of the mutual inclination angle i_rel determined with uncertainties ≤ 10°. 5) Finally, nominal uncertainties obtained from the fitting procedures are a good estimate of the actual errors in the orbit reconstruction. Extrapolating from the present-day statistical properties of the exoplanet sample, the results imply that a Gaia with σ_ψ = 8 μas, in its unbiased and complete magnitude-limited census of

  18. Verification of the astrometric performance of the Korean VLBI network, using comparative SFPR studies with the VLBA at 14/7 mm

    Energy Technology Data Exchange (ETDEWEB)

    Rioja, María J.; Dodson, Richard; Jung, TaeHyun; Sohn, Bong Won; Byun, Do-Young; Cho, Se-Hyung; Lee, Sang-Sung; Kim, Jongsoo; Kim, Kee-Tae; Oh, Chung Sik; Han, Seog-Tae; Je, Do-Heung; Chung, Moon-Hee; Wi, Seog-Oh; Kang, Jiman; Lee, Jung-Won; Chung, Hyunsoo; Kim, Hyo Ryoung; Kim, Hyun-Goo [Korea Astronomy and Space Science Institute, Daedeokdae-ro 776, Yuseong-gu, Daejeon 305-348 (Korea, Republic of); Agudo, Iván, E-mail: maria.rioja@icrar.org [Joint Institute for VLBI in Europe, Postbus 2, NL-7990 AA Dwingeloo (Netherlands); and others

    2014-11-01

    The Korean VLBI Network (KVN) is a new millimeter VLBI dedicated array with the capability to simultaneously observe at multiple frequencies, up to 129 GHz. The innovative multi-channel receivers present significant benefits for astrometric measurements in the frequency domain. The aim of this work is to verify the astrometric performance of the KVN using a comparative study with the VLBA, a well-established instrument. For that purpose, we carried out nearly contemporaneous observations with the KVN and the VLBA, at 14/7 mm, in 2013 April. The KVN observations consisted of simultaneous dual frequency observations, while the VLBA used fast frequency switching observations. We used the Source Frequency Phase Referencing technique for the observational and analysis strategy. We find that having simultaneous observations results in superior compensation for all atmospheric terms in the observables, in addition to offering other significant benefits for astrometric analysis. We have compared the KVN astrometry measurements to those from the VLBA. We find that the structure blending effects introduce dominant systematic astrometric shifts, and these need to be taken into account. We have tested multiple analytical routes to characterize the impact of the low-resolution effects for extended sources in the astrometric measurements. The results from the analysis of the KVN and full VLBA data sets agree within 2σ of the thermal error estimate. We interpret the discrepancy as arising from the different resolutions. We find that the KVN provides astrometric results with excellent agreement, within 1σ, when compared to a VLBA configuration that has a similar resolution. Therefore, this comparative study verifies the astrometric performance of the KVN using SFPR at 14/7 mm, and validates the KVN as an astrometric instrument.

  19. What stellar orbit is needed to measure the spin of the Galactic centre black hole from astrometric data?

    Science.gov (United States)

    Waisberg, Idel; Dexter, Jason; Gillessen, Stefan; Pfuhl, Oliver; Eisenhauer, Frank; Plewa, Phillip M.; Bauböck, Michi; Jimenez-Rosales, Alejandra; Habibi, Maryam; Ott, Thomas; von Fellenberg, Sebastiano; Gao, Feng; Widmann, Felix; Genzel, Reinhard

    2018-05-01

    Astrometric and spectroscopic monitoring of individual stars orbiting the supermassive black hole in the Galactic Center offers a promising way to detect general relativistic effects. While low-order effects are expected to be detected following the periastron passage of S2 in Spring 2018, detecting higher-order effects due to black hole spin will require the discovery of closer stars. In this paper, we set out to determine the requirements such a star would have to satisfy to allow the detection of black hole spin. We focus on the instrument GRAVITY, which saw first light in 2016 and which is expected to achieve astrometric accuracies of 10-100 μas. For an observing campaign with duration T years, total observations N_obs, astrometric precision σ_x, and normalized black hole spin χ, we find that a_orb (1 - e²)^(3/4) ≲ 300 R_S √(T/4 yr) (N_obs/120)^0.25 √(10 μas/σ_x) √(χ/0.9) is needed. For χ = 0.9 and a potential observing campaign with σ_x = 10 μas, 30 observations yr⁻¹ and a duration of 4-10 yr, we expect ~0.1 star with K < 19 satisfying this constraint, based on current knowledge of the stellar population in the central 1 arcsec. We also propose a method through which GRAVITY could potentially measure radial velocities with a precision of ~50 km s⁻¹. If the astrometric precision can be maintained, adding radial velocity information increases the expected number of stars by roughly a factor of 2. While we focus on GRAVITY, the results can also be scaled to parameters relevant for future extremely large telescopes.
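
    Plugging the abstract's fiducial values into the scaling above recovers the quoted limit; the sketch below simply evaluates the formula as reconstructed here:

        def a_orb_limit_rs(t_yr, n_obs, sigma_x_uas, chi):
            # Upper limit on a_orb * (1 - e^2)**0.75, in Schwarzschild radii R_S
            return (300.0 * (t_yr / 4.0) ** 0.5 * (n_obs / 120.0) ** 0.25
                    * (10.0 / sigma_x_uas) ** 0.5 * (chi / 0.9) ** 0.5)

        print(a_orb_limit_rs(4, 120, 10, 0.9))   # 300.0 R_S, the fiducial case
        print(a_orb_limit_rs(10, 300, 10, 0.9))  # ~596 R_S for a 10 yr, denser campaign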

  20. THE APPLICATION OF MULTIVIEW METHODS FOR HIGH-PRECISION ASTROMETRIC SPACE VLBI AT LOW FREQUENCIES

    International Nuclear Information System (INIS)

    Dodson, R.; Rioja, M.; Imai, H.; Asaki, Y.; Hong, X.-Y.; Shen, Z.

    2013-01-01

    High-precision astrometric space very long baseline interferometry (S-VLBI) at the low end of the conventional frequency range, i.e., 20 cm, is a requirement for a number of high-priority science goals. These are headlined by obtaining trigonometric parallax distances to pulsars in pulsar-black hole pairs and to OH masers anywhere in the Milky Way and the Magellanic Clouds. We propose a solution for the most difficult technical problems in S-VLBI with the MultiView approach, in which multiple sources, separated by several degrees on the sky, are observed simultaneously. We simulated a number of challenging S-VLBI configurations, with orbit errors up to 8 m in size and with ionospheric atmospheres consistent with poor conditions. In these simulations we performed MultiView analysis to achieve the required science goals. This approach removes the need for beam switching, which would require a Control Moment Gyro, and for the space and ground infrastructure required for high-quality orbit reconstruction of a space-based radio telescope. This will dramatically reduce the complexity of S-VLBI missions that implement the phase-referencing technique.

  1. THE APPLICATION OF MULTIVIEW METHODS FOR HIGH-PRECISION ASTROMETRIC SPACE VLBI AT LOW FREQUENCIES

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, R.; Rioja, M.; Imai, H. [International Centre for Radio Astronomy Research, M468, University of Western Australia, 35 Stirling Hwy, Crawley, Western Australia 6009 (Australia); Asaki, Y. [Institute of Space and Astronautical Science, 3-1-1 Yoshinodai, Chuou, Sagamihara, Kanagawa 252-5210 (Japan); Hong, X.-Y.; Shen, Z., E-mail: richard.dodson@icrar.org [Shanghai Astronomical Observatory, CAS, 200030 Shanghai (China)

    2013-06-15

    High-precision astrometric space very long baseline interferometry (S-VLBI) at the low end of the conventional frequency range, i.e., 20 cm, is a requirement for a number of high-priority science goals. These are headlined by obtaining trigonometric parallax distances to pulsars in pulsar-black hole pairs and to OH masers anywhere in the Milky Way and the Magellanic Clouds. We propose a solution for the most difficult technical problems in S-VLBI with the MultiView approach, in which multiple sources, separated by several degrees on the sky, are observed simultaneously. We simulated a number of challenging S-VLBI configurations, with orbit errors up to 8 m in size and with ionospheric atmospheres consistent with poor conditions. In these simulations we performed MultiView analysis to achieve the required science goals. This approach removes the need for beam switching, which would require a Control Moment Gyro, and for the space and ground infrastructure required for high-quality orbit reconstruction of a space-based radio telescope. This will dramatically reduce the complexity of S-VLBI missions that implement the phase-referencing technique.

  2. Add-on deep transcranial magnetic stimulation (dTMS) in patients with dysthymic disorder comorbid with alcohol use disorder: a comparison with standard treatment.

    Science.gov (United States)

    Girardi, Paolo; Rapinesi, Chiara; Chiarotti, Flavia; Kotzalidis, Georgios D; Piacentino, Daria; Serata, Daniele; Del Casale, Antonio; Scatena, Paola; Mascioli, Flavia; Raccah, Ruggero N; Brugnoli, Roberto; Digiacomantonio, Vittorio; Ferri, Vittoria Rachele; Ferracuti, Stefano; Zangen, Abraham; Angeletti, Gloria

    2015-01-01

    Dorsolateral prefrontal cortex (DLPFC) is dysfunctional in mood and substance use disorders. We predicted higher efficacy for add-on bilateral prefrontal high-frequency deep transcranial magnetic stimulation (dTMS), compared with standard drug treatment (SDT), in patients with dysthymic disorder (DD)/alcohol use disorder (AUD) comorbidity. We carried out a 6-month open-label study involving 20 abstinent patients with DSM-IV-TR AUD comorbid with previously developed DD. Ten patients received SDT for AUD with add-on bilateral dTMS (dTMS-AO) over the DLPFC, while another 10 received SDT alone. We rated alcohol craving with the Obsessive Compulsive Drinking Scale (OCDS), depression with the Hamilton Depression Rating Scale (HDRS), clinical status with the Clinical Global Impressions scale (CGI), and global functioning with the Global Assessment of Functioning (GAF). At the end of the 20-session dTMS period (or an equivalent period in the SDT group), craving scores and depressive symptoms in the dTMS-AO group dropped significantly more than in the SDT group (P < 0.001 and P < 0.02, respectively). High-frequency bilateral DLPFC dTMS with left preference was well tolerated and found to be effective as an add-on treatment in AUD. The potential of dTMS for reducing craving in substance use disorder patients deserves to be investigated further.

  3. Deep learning evaluation using deep linguistic processing

    OpenAIRE

    Kuhnle, Alexander; Copestake, Ann

    2017-01-01

    We discuss problems with the standard approaches to evaluation for tasks like visual question answering, and argue that artificial data can be used to address these as a complement to current practice. We demonstrate that with the help of existing 'deep' linguistic processing technology we are able to create challenging abstract datasets, which enable us to investigate the language understanding abilities of multimodal deep learning models in detail, as compared to a single performance value ...

  4. HIGH-PRECISION ASTROMETRIC MILLIMETER VERY LONG BASELINE INTERFEROMETRY USING A NEW METHOD FOR MULTI-FREQUENCY CALIBRATION

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, Richard; Rioja, María J. [International Centre for Radio Astronomy Research, M468, The University of Western Australia, 35 Stirling Hwy, Crawley, Western Australia 6009 (Australia); Molina, Sol N.; Gómez, José L., E-mail: richard.dodson@icrar.org [Instituto de Astrofísica de Andalucía-CSIC, Glorieta de la Astronomía s/n, E-18008 Granada (Spain)

    2017-01-10

    In this paper we describe a new approach for millimeter Very Long Baseline Interferometry (mm-VLBI) calibration that provides bona fide astrometric alignment of the millimeter-wavelength images from a single source, for the measurement of frequency-dependent effects such as "core shifts" near the black hole of active galactic nucleus jets. We achieve our astrometric alignment by solving first for the ionospheric (dispersive) contributions using wide-band centimeter-wavelength observations. Second, we solve for the tropospheric (non-dispersive) contributions by using fast frequency-switching at the target millimeter wavelengths. These solutions can be scaled and transferred from the low frequency to the high frequency. To complete the calibration chain, an additional step is required to remove a residual constant phase offset on each antenna. The result is an astrometric calibration and the measurement of the core shift between 22 and 43 GHz for the jet in BL Lacertae of −8 ± 5 and 20 ± 6 μas in R.A. and decl., respectively. By comparison to conventional phase referencing at centimeter wavelengths we are able to show that this core shift at millimeter wavelengths is significantly less than what would be predicted by extrapolating the low-frequency result, which closely followed the predictions of the Blandford and Königl conical jet model. As such, it would be the first demonstration of the association of the VLBI core with a recollimation shock, normally hidden at low frequencies due to the optical depth, which could be responsible for the γ-ray production in blazar jets.

  5. HIGH-PRECISION ASTROMETRIC MILLIMETER VERY LONG BASELINE INTERFEROMETRY USING A NEW METHOD FOR MULTI-FREQUENCY CALIBRATION

    International Nuclear Information System (INIS)

    Dodson, Richard; Rioja, María J.; Molina, Sol N.; Gómez, José L.

    2017-01-01

    In this paper we describe a new approach for millimeter Very Long Baseline Interferometry (mm-VLBI) calibration that provides bona fide astrometric alignment of the millimeter-wavelength images from a single source, for the measurement of frequency-dependent effects such as "core shifts" near the black hole of active galactic nucleus jets. We achieve our astrometric alignment by solving first for the ionospheric (dispersive) contributions using wide-band centimeter-wavelength observations. Second, we solve for the tropospheric (non-dispersive) contributions by using fast frequency-switching at the target millimeter wavelengths. These solutions can be scaled and transferred from the low frequency to the high frequency. To complete the calibration chain, an additional step is required to remove a residual constant phase offset on each antenna. The result is an astrometric calibration and the measurement of the core shift between 22 and 43 GHz for the jet in BL Lacertae of −8 ± 5 and 20 ± 6 μas in R.A. and decl., respectively. By comparison to conventional phase referencing at centimeter wavelengths we are able to show that this core shift at millimeter wavelengths is significantly less than what would be predicted by extrapolating the low-frequency result, which closely followed the predictions of the Blandford and Königl conical jet model. As such, it would be the first demonstration of the association of the VLBI core with a recollimation shock, normally hidden at low frequencies due to the optical depth, which could be responsible for the γ-ray production in blazar jets.

  6. Analyses of the Short Periodical Part of the Spectrum of Pole Coordinate Variations Determined by the Astrometric and Laser Technique

    Science.gov (United States)

    Kołaczek, B.; Kosek, W.; Galas, R.

    Series of BIH astrometric (BIH-ASTR) pole coordinates and of CSR LAGEOS laser-ranging (CSR-LALAR) pole coordinates, determined in the MERIT Campaign in the years 1972-1986 and 1983-1986, respectively, have been filtered by different band-pass filters consisting of a low-pass Gauss filter and a high-pass Butterworth filter. The filtered residuals were analysed by Maximum Entropy Spectral Analysis (MESA) and by Ormsby narrow band-pass filters in order to find numerically modeled signals that best approximate these residuals.
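
    A modern equivalent of the filter cascade described above is straightforward with SciPy. The sketch below uses a synthetic daily pole-coordinate series with invented amplitudes, combining a low-pass Gauss filter with a high-pass Butterworth filter to isolate a band of residuals:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.signal import butter, filtfilt

        t = np.arange(5000.0)  # time in days
        x = (0.15 * np.sin(2.0 * np.pi * t / 433.0)     # Chandler-like term
             + 0.10 * np.sin(2.0 * np.pi * t / 365.25)  # annual term
             + 0.01 * np.random.default_rng(0).normal(size=t.size))

        smooth = gaussian_filter1d(x, sigma=30.0)  # low-pass Gauss filter
        b, a = butter(4, 1.0 / 600.0, btype="highpass", fs=1.0)  # high-pass Butterworth
        band = filtfilt(b, a, smooth)  # the cascade acts as a band-pass on the series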

  7. Absolute Nuv magnitudes of Gaia DR1 astrometric stars and a search for hot companions in nearby systems

    Science.gov (United States)

    Makarov, V. V.

    2017-10-01

    Accurate parallaxes from Gaia DR1 (TGAS) are combined with GALEX visual Nuv magnitudes to produce absolute Mnuv magnitudes and an ultraviolet HR diagram for a large sample of astrometric stars. A functional fit is derived of the lower envelope main sequence of the nearest 1403 stars (distance Pleiades, or, most likely, tight interacting binaries of the BY Dra-type. A separate collection of 40 stars with precise trigonometric parallaxes and Nuv-G colors bluer than 2 mag is presented. It includes several known novae, white dwarfs, and binaries with hot subdwarf (sdOB) components, but most remain unexplored.

  8. Compendium of Single Event Effects Test Results for Commercial Off-The-Shelf and Standard Electronics for Low Earth Orbit and Deep Space Applications

    Science.gov (United States)

    Reddell, Brandon D.; Bailey, Charles R.; Nguyen, Kyson V.; O'Neill, Patrick M.; Wheeler, Scott; Gaza, Razvan; Cooper, Jaime; Kalb, Theodore; Patel, Chirag; Beach, Elden R.

    2017-01-01

    We present the results of Single Event Effects (SEE) testing with high energy protons and with low and high energy heavy ions for electrical components considered for Low Earth Orbit (LEO) and for deep space applications.

  9. Compendium of Single Event Effects (SEE) Test Results for COTS and Standard Electronics for Low Earth Orbit and Deep Space Applications

    Science.gov (United States)

    Reddell, Brandon; Bailey, Chuck; Nguyen, Kyson; O'Neill, Patrick; Gaza, Razvan; Patel, Chirag; Cooper, Jaime; Kalb, Theodore

    2017-01-01

    We present the results of SEE testing with high energy protons and with low and high energy heavy ions. This paper summarizes test results for components considered for Low Earth Orbit and Deep Space applications.

  10. Update on Astrometric Follow-Up at Apache Point Observatory by Adler Planetarium

    Science.gov (United States)

    Nault, Kristie A.; Brucker, Melissa; Hammergren, Mark

    2016-10-01

    We began our NEO astrometric follow-up and characterization program in 2014 Q4, using about 500 hours of observing time per year with the Astrophysical Research Consortium (ARC) 3.5m telescope at Apache Point Observatory (APO). Our observing is split into 2 hour blocks approximately every other night for astrometry (this poster) and several half-nights per month for spectroscopy (see poster by M. Hammergren et al.) and light curve studies. For astrometry, we use the ARC Telescope Imaging Camera (ARCTIC) with an SDSS r filter, in 2 hour observing blocks centered around midnight. ARCTIC has a magnitude limit of V ~ 23 in 60 s, and we target 20 NEOs per session. ARCTIC has a FOV 1.57 times larger and a readout time half as long as the previous imager, SPIcam, which we used from 2014 Q4 through 2015 Q3. Targets are selected primarily from the Minor Planet Center's (MPC) NEO Confirmation Page (NEOCP) and NEA Observation Planning Aid; we also refer to JPL's What's Observable page, the Spaceguard Priority List and Faint NEOs List, and requests from other observers. To quickly adapt to changing weather and seeing conditions, we create faint, midrange, and bright target lists. Detected NEOs are measured with Astrometrica and internal software, and the astrometry is reported to the MPC. As of June 19, 2016, we have targeted 2264 NEOs, 1955 with provisional designations, 1582 of which were detected. We began observing NEOCP asteroids on January 30, 2016, and have targeted 309, 207 of which were detected. In addition, we serendipitously observed 281 moving objects, 201 of which were identified as previously known objects. This work is based on observations obtained with the Apache Point Observatory 3.5m telescope, which is owned and operated by the Astrophysical Research Consortium. We gratefully acknowledge support from NASA NEOO award NNX14AL17G and thank the University of Chicago Department of Astronomy and Astrophysics for observing time in 2014.

  11. VizieR Online Data Catalog: HD 128311 radial velocity and astrometric data (McArthur+, 2014)

    Science.gov (United States)

    McArthur, B. E.; Benedict, G. F.; Henry, G. W.; Hatzes, A.; Cochran, W. D.; Harrison, T. E.; Johns-Krull, C.; Nelan, E.

    2017-05-01

    The High Resolution Spectrograph (HRS; Tull, 1998SPIE.3355..387T) at the HET at McDonald Observatory was used to make the spectroscopic observations using the iodine absorption cell method (Butler et al. 1996PASP..108..500B). Our reduction of HET HRS data is given in Bean et al. (2007AJ....134..749B), which uses the REDUCE package (Piskunov & Valenti, 2002A&A...385.1095P). Our observations include a total of 355 high-resolution spectra which were obtained between 2005 April and 2011 January. Because typically two or more observations were made in less than 1 hr per night, we observed at 161 epochs with the HET HRS. The astrometric observations were made with the Hubble Space Telescope (HST) Fine Guidance Sensor (FGS) 1r, a two-axis interferometer, in position (POS) "fringe-tracking" mode. Twenty-nine orbits of HST astrometric observations were made between 2007 December and 2009 August. (2 data files).

  12. Estimation of position and velocity for a low dynamic vehicle in near space using nonresolved photometric and astrometric data.

    Science.gov (United States)

    Jing, Nan; Li, Chuang; Chong, Yaqin

    2017-01-20

    An estimation method for indirectly observable parameters of a typical low dynamic vehicle (LDV) is presented. The estimation method utilizes apparent magnitude, azimuth angle, and elevation angle to estimate the position and velocity of a typical LDV, such as a high altitude balloon (HAB). In order to validate the accuracy of the estimated parameters obtained from an unscented Kalman filter, two sets of experiments are carried out to obtain nonresolved photometric and astrometric data. In the experiments, a HAB launch is planned; models of the HAB dynamics and kinematics and observation models are built to use as the time-update and measurement-update functions, respectively. When the HAB is launched, a ground-based optoelectronic detector is used to capture images of the object, which are processed using aperture photometry to obtain the time-varying apparent magnitude of the HAB. Two sets of actual and estimated parameters are given to clearly indicate the parameter differences. Two sets of errors between the actual and estimated parameters are also given to show how the estimated position and velocity differ with respect to the observation time. The similar distribution curves from the two scenarios, which agree within 3σ, verify that nonresolved photometric and astrometric data can be used to estimate the indirectly observable state parameters (position and velocity) of a typical LDV. This technique can be applied to small and dim space objects in the future.
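
    A minimal sketch of the estimation setup, assuming the third-party filterpy library: a constant-velocity state model and a measurement function mapping the state to (apparent magnitude, azimuth, elevation). The dynamics, the brightness law and all numbers are illustrative assumptions, not the paper's HAB models:

        import numpy as np
        from filterpy.kalman import MerweScaledSigmaPoints, UnscentedKalmanFilter

        def fx(x, dt):
            """Constant-velocity dynamics; x = [px, py, pz, vx, vy, vz] (km, km/s)."""
            F = np.eye(6)
            F[0, 3] = F[1, 4] = F[2, 5] = dt
            return F @ x

        def hx(x):
            """Map state to [apparent magnitude, azimuth (rad), elevation (rad)]."""
            px, py, pz = x[:3]
            r = np.sqrt(px**2 + py**2 + pz**2)    # range from the detector
            mag = 2.0 + 5.0 * np.log10(r)         # toy brightness-range law
            return np.array([mag, np.arctan2(py, px), np.arcsin(pz / r)])

        pts = MerweScaledSigmaPoints(n=6, alpha=0.1, beta=2.0, kappa=0.0)
        ukf = UnscentedKalmanFilter(dim_x=6, dim_z=3, dt=1.0, fx=fx, hx=hx, points=pts)
        ukf.x = np.array([30.0, 5.0, 20.0, 0.01, 0.02, 0.005])   # initial guess
        ukf.P *= 10.0
        ukf.R = np.diag([0.1**2, 1e-6, 1e-6])                    # measurement noise
        ukf.predict()
        ukf.update(np.array([10.0, 0.17, 0.58]))                 # one (mag, az, el) sample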

  13. Astrometric analysis of the unresolved binary MU Cassiopeiae from photographs taken with the Sproul 61 centimeter refractor

    International Nuclear Information System (INIS)

    Lippincott, S.L.

    1981-01-01

    Mu Cassiopeiae, a high-velocity Population II subdwarf, is an astrometric binary which has been on the Sproul Observatory astrometric program since 1937. The data yield P = 21.43 yr, with a photocentric semiaxis major α = 0″.186 ± 0″.001 (p.e.) and a relative parallax π = +0″.130 ± 0″.001. Rigorous masses for the components from the Sproul results will follow in the future only in conjunction with reliable values for Δm and separation derived from other techniques. The best tentative values of Δm and separation so far found suggest M_A = 0.7 M_sun and M_B ≈ 0.2 M_sun with Δm ≈ 4.5, which indicates a higher He content for μ Cas A than for the Sun. The masses are of particular interest because they hold a clue to the chemical composition of the system, which is likely to be similar to that of the interstellar medium during the early stages of our Galaxy, at the time μ Cas is thought to have originated.
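
    For orientation, the standard arithmetic connecting these quantities: Kepler's third law gives the total mass from the true semimajor axis a (arcsec), the parallax π (arcsec) and the period P (yr), while the measured photocentric semiaxis α relates to a through the fractional mass B and fractional luminosity β. A sketch using the abstract's numbers plus the tentative Δm and masses; the derived figures are illustrative only, since a is not measured directly:

        P, plx, alpha, dm = 21.43, 0.130, 0.186, 4.5

        beta = 1.0 / (1.0 + 10 ** (0.4 * dm))   # fractional luminosity, ~0.016
        B = 0.2 / (0.7 + 0.2)                   # fractional mass from tentative masses
        a = alpha / (B - beta)                  # true semimajor axis, ~0.90 arcsec
        a_au = a / plx                          # ~6.9 AU
        m_total = a_au ** 3 / P ** 2            # total mass, ~0.72 solar masses
        print(a, a_au, m_total)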

  14. Gaia’s Cepheids and RR Lyrae stars and luminosity calibrations based on Tycho-Gaia Astrometric Solution

    Directory of Open Access Journals (Sweden)

    Clementini Gisella

    2017-01-01

    Gaia Data Release 1 contains parallaxes for more than 700 Galactic Cepheids and RR Lyrae stars, computed as part of the Tycho-Gaia Astrometric Solution (TGAS). We have used TGAS parallaxes, along with literature (V, I, J, Ks, W1) photometry and spectroscopy, to calibrate the zero point of the period-luminosity and period-Wesenheit relations of classical and type II Cepheids, and the near-infrared period-luminosity, period-luminosity-metallicity and optical luminosity-metallicity relations of RR Lyrae stars. In this contribution we briefly summarise results obtained by fitting these basic relations adopting different techniques that operate either in parallax or distance (absolute magnitude) space.
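
    The zero-point step these calibrations share can be sketched in a few lines: convert TGAS parallaxes to absolute magnitudes and fit the intercept of a period-luminosity relation with the slope held fixed. The toy data and the assumed slope below are illustrative, not the actual fits, which treat the parallax errors far more carefully (e.g. by working in parallax space):

        import numpy as np

        logP = np.array([0.6, 0.8, 1.0, 1.2, 1.4])     # log10 period (days)
        m_app = np.array([6.1, 7.9, 5.4, 6.6, 5.9])    # apparent magnitudes
        plx_mas = np.array([2.1, 0.9, 2.4, 1.1, 1.3])  # TGAS parallaxes (mas)

        M = m_app + 5.0 * np.log10(plx_mas / 1000.0) + 5.0  # absolute magnitudes
        slope = -2.8                                         # assumed PL slope
        zero_point = np.mean(M - slope * logP)               # least-squares intercept
        print(zero_point)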

  15. Deep frying

    NARCIS (Netherlands)

    Koerten, van K.N.

    2016-01-01

    Deep frying is one of the most used methods in the food processing industry. Though practically any food can be fried, French fries are probably the most well-known deep fried products. The popularity of French fries stems from their unique taste and texture, a crispy outside with a mealy soft

  16. Deep learning

    CERN Document Server

    Goodfellow, Ian; Courville, Aaron

    2016-01-01

    Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language proces...

  17. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several of its subfields, including a specific tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results, our basic implementation of a convolutional restricted Boltzmann machine on the Modified National Institute of Standards and Technology database (MNIST), and we explain how to use deep learning networks to assist in our development of a robust gender classification system.

  18. The First U.S. Naval Observatory Robotic Astrometric Telescope Catalog

    Science.gov (United States)

    2015-10-01

    … over 188 million objects matched with the Two Micron All Sky Survey (2MASS) point-source catalog; proper motions (typically 5-7 mas/yr standard errors) are provided. These data are supplemented by 2MASS and AAVSO Photometric All-Sky Survey (APASS) photometry. Observations, reductions, and catalog … reference star catalog for current epochs, about 4 times more precise than UCAC and with a density similar to the Two Micron All Sky Survey (2MASS) …

  19. Another look at AM Herculis - radio-astrometric campaign with the e-EVN at 6 cm

    Science.gov (United States)

    Gawroński, M. P.; Goździewski, K.; Katarzyński, K.; Rycyk, G.

    2018-03-01

    We conducted radio-interferometric observations of the well-known binary cataclysmic system AM Herculis. This particular system is formed from a magnetic white dwarf (primary) and a red dwarf (secondary), and it is the prototype of the so-called polars. Our observations were conducted with the European VLBI Network (EVN) in e-EVN mode at 5 GHz. We obtained six astrometric measurements spanning 1 yr, making it possible to update the annual parallax for this system with the best precision to date (π = 11.29 ± 0.08 mas), which is equivalent to a distance of 88.6 ± 0.6 pc. The system was observed mostly in the quiescent phase (visual magnitude m_v ~ 15.3), when the radio emission was at the level of about 300 μJy. Our analysis suggests that the radio flux of AM Herculis is modulated with the orbital motion. Such specific properties of the radiation can be explained using an emission mechanism like the scenario proposed for V471 Tau and, in general, for RS CVn-type stars. In this scenario, the radio emission arises near the surface of the red dwarf, where the global magnetic field strength may reach a few kG. We argue that the quiescent radio emission distinguishes AM Herculis and AR Ursae Majoris (a second known persistent radio polar) from other polars, which are systems with a magnetized secondary star.
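
    As a quick consistency check of the quoted values, the standard parallax-distance relation (not specific to this paper) reproduces the numbers:

        d\,[\mathrm{pc}] = \frac{1000}{\pi\,[\mathrm{mas}]} = \frac{1000}{11.29} \approx 88.6\,\mathrm{pc},
        \qquad
        \sigma_d \approx d\,\frac{\sigma_\pi}{\pi} = 88.6 \times \frac{0.08}{11.29} \approx 0.6\,\mathrm{pc}.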

  20. Adaptation requirements due to anatomical changes in free-breathing and deep-inspiration breath-hold for standard and dose-escalated radiotherapy of lung cancer patients

    DEFF Research Database (Denmark)

    Sibolt, Patrik; Ottosson, Wiviann; Sjöström, David

    2015-01-01

    … to investigate the need for adaptation due to anatomical changes, for both standard (ST) and dose-escalated (DE) plans in free-breathing (FB) and DIBH. Material and methods: The effect of tumor shrinkage (TS), pleural effusion (PE) and atelectasis was investigated for patients and for a CIRS thorax phantom. Sixteen patients were … volume. Results: Phantom simulations resulted in maximum deviations in the mean dose to the GTV-T of −1% for 3 cm PE and a centrally located tumor, and +3% for TS from 5 cm to 1 cm diameter for an anterior tumor location. For the majority of the patients, simulated PE resulted in a decreasing …

  1. Shallow and deep breath lung tumor volume as estimated by spiral volumetric CT in comparison to standard axial CT using virtual simulation

    International Nuclear Information System (INIS)

    Quader, M.A.; Kalend, A.M.; Deutsch, M.; Greenberger, J.S.

    1995-01-01

    Purpose/Objective: In order to assess, for an individual patient, tumor volume (TV) margins sufficient to design a beam's-eye-view (BEV) conformal portal, the radiographic extent of the gross tumor volume (GTV) and its fluctuation with breathing were measured by fast spiral CT scanning of patients treated for Stage II, III lung cancers using 5-6 field multi-collimated conformal beams. Materials and Methods: Over the course of conformal radiotherapy for lung cancer, full-thorax CT scans of the patients were taken by conventional axial CT scanning, with the patients immobilized in the treatment position and breathing normally. Patients with good pulmonary function test (PFT) status were selected to perform deep breathing and were re-scanned by fast spiral techniques in order to re-acquire the tomographic variation of the GTV with breathing. Picker spiral ZAP-100 software running on the AQSim-PQ-2000 was used with a variable helical pitch of 1.0, 1.5 and 2.0. The variable-pitch spirals were limited to the tumor bed, diaphragm and lung apex areas for measurements. The effect of breathing motion along the x, y, z directions was then assessed for each beam's-eye-view portal as seen in digitally reconstructed radiographs (DRR) at the treated gantry angle. Results: Comparison of axial and spiral scans shows that the progression of lung and diaphragm motion with breathing can be gauged better in spiral scans. The movement of the diaphragm during shallow breathing has been found to be 2-3 cm, measured as the distance between the most inferior and superior slices in which the diaphragm is present. The variation of the tumor dimensions along the AP/PA and lateral directions seems to be less sensitive to breathing than that along the inferior-superior direction. Conclusion: Fast spiral CT scanning is sensitive to patient lung motion and can be used to determine the fluctuations of the gross tumor volume with breathing. The extent of the fluctuation is location dependent and increases as one moves from the

  2. Deep Learning

    DEFF Research Database (Denmark)

    Jensen, Morten Bornø; Bahnsen, Chris Holmberg; Nasrollahi, Kamal

    2018-01-01

    Over the last 10 years, artificial neural networks have gone from being a dusty, outcast technology to playing a leading role in the development of artificial intelligence. This phenomenon is called deep learning and is inspired by the structure of the brain.

  3. Deep geothermics

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The hot dry rocks located at 3-4 km depth are low-permeability rocks carrying a large amount of heat. The extraction of this heat usually requires artificial hydraulic fracturing of the rock to increase its permeability before water injection. Hot-dry-rock geothermics, or deep geothermics, is today not a commercial channel but a field of scientific and technological research. The Soultz-sous-Forets site (Northern Alsace, France) is characterized by a 6 degrees per meter geothermal gradient and is used as a natural laboratory for deep geothermal and geological studies in the framework of a European research program. Two boreholes have been drilled to a depth of 3600 m in the highly fractured granite massif beneath the site. The aim is to create a deep heat exchanger using only the natural fracturing for water transfer. A consortium of German, French and Italian industrial companies (Pfalzwerke, Badenwerk, EdF and Enel) has been created for a more active participation in the pilot phase. (J.S.). 1 fig., 2 photos

  4. Deep smarts.

    Science.gov (United States)

    Leonard, Dorothy; Swap, Walter

    2004-09-01

    When a person sizes up a complex situation and rapidly comes to a decision that proves to be not just good but brilliant, you think, "That was smart." After you watch him do this a few times, you realize you're in the presence of something special. It's not raw brainpower, though that helps. It's not emotional intelligence, either, though that, too, is often involved. It's deep smarts. Deep smarts are not philosophical--they're not "wisdom" in that sense, but they're as close to wisdom as business gets. You see them in the manager who understands when and how to move into a new international market, in the executive who knows just what kind of talk to give when her organization is in crisis, in the technician who can track a product failure back to an interaction between independently produced elements. These are people whose knowledge would be hard to purchase on the open market. Their insight is based on know-how more than on know-what; it comprises a system view as well as expertise in individual areas. Because deep smarts are experience based and often context specific, they can't be produced overnight or readily imported into an organization. It takes years for an individual to develop them--and no time at all for an organization to lose them when a valued veteran walks out the door. They can be taught, however, with the right techniques. Drawing on their forthcoming book Deep Smarts, Dorothy Leonard and Walter Swap say the best way to transfer such expertise to novices--and, on a larger scale, to make individual knowledge institutional--isn't through PowerPoint slides, a Web site of best practices, online training, project reports, or lectures. Rather, the sage needs to teach the neophyte individually how to draw wisdom from experience. Companies have to be willing to dedicate time and effort to such extensive training, but the investment more than pays for itself.

  5. A revised estimate of the distance to the clouds in the Chamaeleon complex using the Tycho-Gaia Astrometric Solution

    Science.gov (United States)

    Voirin, Jordan; Manara, Carlo F.; Prusti, Timo

    2018-03-01

    Context. The determination of the distance to dark star-forming clouds is a key parameter to derive the properties of the cloud itself and of its stellar content. This parameter is still loosely constrained even in nearby star-forming regions. Aims: We want to determine the distances to the clouds in the Chamaeleon-Musca complex and explore the connection between these clouds and the large-scale cloud structures in the Galaxy. Methods: We used the newly estimated distances obtained from the parallaxes measured by the Gaia satellite and included in the Tycho-Gaia Astrometric Solution catalog. When known members of a region are included in this catalog we used their distances to infer the distance to the cloud. Otherwise, we analyzed the dependence of the color excess on the distance of the stars and looked for a turn-on of this excess, which is a proxy for the position of the front edge of the star-forming cloud. Results: We are able to measure the distance to the three Chamaeleon clouds. The distance to Chamaeleon I is 179 +11/−10 (stat.) +11/−10 (syst.) pc, 20 pc further away than previously assumed. The Chamaeleon II cloud is located at a distance of 181 +6/−5 (stat.) +11/−10 (syst.) pc, which agrees with previous estimates. We are able to measure for the first time a distance to the Chamaeleon III cloud, of 199 +8/−7 (stat.) +12/−11 (syst.) pc. Finally, the distance of the Musca cloud is smaller than 603 +91/−70 (stat.) +133/−92 (syst.) pc. These estimates do not allow us to distinguish between the possibility that the Chamaeleon clouds are part of a sheet of clouds parallel to the Galactic plane, or perpendicular to it. Conclusions: We measured a larger distance to the Chamaeleon I cloud than assumed in the past, confirmed the distance to the Chamaeleon II region, and measured for the first time the distance to the Chamaeleon III cloud. These values are consistent with the scenario in which the three clouds are part of a single large-scale structure.

  6. DeepPy: Pythonic deep learning

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This technical report introduces DeepPy – a deep learning framework built on top of NumPy with GPU acceleration. DeepPy bridges the gap between high-performance neural networks and the ease of development from Python/NumPy. Users with a background in scientific computing in Python will quickly be able to understand and change the DeepPy codebase, as it is mainly implemented using high-level NumPy primitives. Moreover, DeepPy supports complex network architectures by letting the user compose mathematical expressions as directed graphs. The latest version is available at http…

  7. An Ensemble of Deep Support Vector Machines for Image Categorization

    NARCIS (Netherlands)

    Abdullah, Azizi; Veltkamp, Remco C.; Wiering, Marco

    2009-01-01

    This paper presents the deep support vector machine (D-SVM) inspired by the increasing popularity of deep belief networks for image recognition. Our deep SVM trains an SVM in the standard way and then uses the kernel activations of support vectors as inputs for training another SVM at the next
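
    A minimal sketch of this two-stage construction with scikit-learn: train a first SVM, take the kernel activations between samples and its support vectors as a new representation, then train a second SVM on those activations. Toy data; the ensemble and image-feature details of the paper are omitted:

        from sklearn.datasets import make_classification
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        svm1 = SVC(kernel='rbf', gamma=0.1).fit(X, y)         # first-layer SVM
        Z = rbf_kernel(X, svm1.support_vectors_, gamma=0.1)   # kernel activations
        svm2 = SVC(kernel='rbf').fit(Z, y)                    # second-layer SVM
        print(svm2.score(Z, y))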

  8. Greedy Deep Dictionary Learning

    OpenAIRE

    Tariyal, Snigdha; Majumdar, Angshul; Singh, Richa; Vatsa, Mayank

    2016-01-01

    In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time. This requires solving a simple (shallow) dictionary learning problem, the solution to this is well known. We apply the proposed technique on some benchmark deep learning datasets. We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state of the art supervised dictionary learning t...
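
    A minimal sketch of the greedy layer-wise scheme using scikit-learn, where the sparse codes from one dictionary become the input to the next; layer sizes and data are invented, and the paper's own solver details differ:

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.RandomState(0)
        codes = rng.randn(200, 64)                  # toy data, e.g. image patches

        for n_atoms in (32, 16):                    # two greedy layers
            layer = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       max_iter=50, random_state=0)
            codes = layer.fit_transform(codes)      # codes feed the next layer

        print(codes.shape)                          # (200, 16)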

  9. Taoism and Deep Ecology.

    Science.gov (United States)

    Sylvan, Richard; Bennett, David

    1988-01-01

    Contrasted are the philosophies of Deep Ecology and ancient Chinese Taoism. Discusses the cosmology, morality, lifestyle, views of power, politics, and environmental philosophies of each. Concludes that Deep Ecology could gain much from Taoism. (CW)

  10. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  11. Deep Space Telecommunications

    Science.gov (United States)

    Kuiper, T. B. H.; Resch, G. M.

    2000-01-01

    The increasing load on NASA's Deep Space Network, the new capabilities for deep space missions inherent in a next-generation radio telescope, and the potential of new telescope technology for reducing construction and operation costs suggest a natural marriage between radio astronomy and deep space telecommunications in developing advanced radio telescope concepts.

  12. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...

  13. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    … train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.

  14. Deep learning in TMVA Benchmarking Benchmarking TMVA DNN Integration of a Deep Autoencoder

    CERN Document Server

    Huwiler, Marc

    2017-01-01

    The TMVA library in ROOT is dedicated to multivariate analysis, and in particular offers numerous machine learning algorithms in a standardized framework. It is widely used in High Energy Physics for data analysis, mainly to perform regression and classification. To keep up to date with the state of the art in deep learning, a new deep learning module was developed this summer, offering deep neural networks, convolutional neural networks, and autoencoders. TMVA did not yet have any autoencoder method, and the present project consists in implementing the TMVA autoencoder class based on the deep learning module. It also includes some benchmarking performed on the actual deep neural network implementation, in comparison to the Keras framework with TensorFlow and Theano backends.

  15. Deep Vein Thrombosis

    African Journals Online (AJOL)

    OWNER

    Deep Vein Thrombosis: Risk Factors and Prevention in Surgical Patients. Deep vein … preventable morbidity and mortality in hospitalized surgical patients. … the elderly.3,4 It is very rare before the age … depends on the risk level; therefore an … but also in the post-operative period. … is continuing uncertainty regarding.

  16. Deep Echo State Network (DeepESN): A Brief Survey

    OpenAIRE

    Gallicchio, Claudio; Micheli, Alessio

    2017-01-01

    The study of deep recurrent neural networks (RNNs) and, in particular, of deep Reservoir Computing (RC) is gaining an increasing research attention in the neural networks community. The recently introduced deep Echo State Network (deepESN) model opened the way to an extremely efficient approach for designing deep neural networks for temporal data. At the same time, the study of deepESNs allowed to shed light on the intrinsic properties of state dynamics developed by hierarchical compositions ...

  17. Parity violation in deep inelastic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Souder, P. [Syracuse Univ., NY (United States)

    1994-04-01

    A beam of polarized electrons at CEBAF with an energy of 8 GeV or more will be useful for performing precision measurements of parity violation in deep inelastic scattering. Possible applications include precision tests of the Standard Model, model-independent measurements of parton distribution functions, and studies of quark correlations.

  18. THE 2012 HUBBLE ULTRA DEEP FIELD (UDF12): OBSERVATIONAL OVERVIEW

    Energy Technology Data Exchange (ETDEWEB)

    Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Ellis, Richard S.; Schenker, Matthew A. [Department of Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); McLure, Ross J.; Dunlop, James S.; Bowler, Rebecca A. A.; Rogers, Alexander B.; Curtis-Lake, Emma; Cirasuolo, Michele; Wild, V.; Targett, T. [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Robertson, Brant E.; Schneider, Evan; Stark, Daniel P. [Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Ono, Yoshiaki; Ouchi, Masami [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa City, Chiba 277-8582 (Japan); Charlot, Stephane [UPMC-CNRS, UMR7095, Institut d' Astrophysique de Paris, F-75014, Paris (France); Furlanetto, Steven R. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States)

    2013-11-01

    We present the 2012 Hubble Ultra Deep Field campaign (UDF12), a large 128 orbit Cycle 19 Hubble Space Telescope program aimed at extending previous Wide Field Camera 3 (WFC3)/IR observations of the UDF by quadrupling the exposure time in the F105W filter, imaging in an additional F140W filter, and extending the F160W exposure time by 50%, as well as adding an extremely deep parallel field with the Advanced Camera for Surveys (ACS) in the F814W filter with a total exposure time of 128 orbits. The principal scientific goal of this project is to determine whether galaxies reionized the universe; our observations are designed to provide a robust determination of the star formation density at z ≳ 8, improve measurements of the ultraviolet continuum slope at z ∼ 7-8, facilitate the construction of new samples of z ∼ 9-10 candidates, and enable the detection of sources up to z ∼ 12. For this project we committed to combining these and other WFC3/IR imaging observations of the UDF area into a single homogeneous dataset to provide the deepest near-infrared observations of the sky. In this paper we present the observational overview of the project and describe the procedures used in reducing the data as well as the final products that were produced. We present the details of several special procedures that we implemented to correct calibration issues in the data for both the WFC3/IR observations of the main UDF field and our deep 128 orbit ACS/WFC F814W parallel field image, including treatment for persistence, correction for time-variable sky backgrounds, and astrometric alignment to an accuracy of a few milliarcseconds. We release the full, combined mosaics comprising a single, unified set of mosaics of the UDF, providing the deepest near-infrared blank-field view of the universe currently achievable, reaching magnitudes as deep as AB ∼ 30 mag in the near-infrared, and yielding a legacy dataset on this field.

  19. THE 2012 HUBBLE ULTRA DEEP FIELD (UDF12): OBSERVATIONAL OVERVIEW

    International Nuclear Information System (INIS)

    Koekemoer, Anton M.; Ellis, Richard S.; Schenker, Matthew A.; McLure, Ross J.; Dunlop, James S.; Bowler, Rebecca A. A.; Rogers, Alexander B.; Curtis-Lake, Emma; Cirasuolo, Michele; Wild, V.; Targett, T.; Robertson, Brant E.; Schneider, Evan; Stark, Daniel P.; Ono, Yoshiaki; Ouchi, Masami; Charlot, Stephane; Furlanetto, Steven R.

    2013-01-01

    We present the 2012 Hubble Ultra Deep Field campaign (UDF12), a large 128 orbit Cycle 19 Hubble Space Telescope program aimed at extending previous Wide Field Camera 3 (WFC3)/IR observations of the UDF by quadrupling the exposure time in the F105W filter, imaging in an additional F140W filter, and extending the F160W exposure time by 50%, as well as adding an extremely deep parallel field with the Advanced Camera for Surveys (ACS) in the F814W filter with a total exposure time of 128 orbits. The principal scientific goal of this project is to determine whether galaxies reionized the universe; our observations are designed to provide a robust determination of the star formation density at z ≳ 8, improve measurements of the ultraviolet continuum slope at z ∼ 7-8, facilitate the construction of new samples of z ∼ 9-10 candidates, and enable the detection of sources up to z ∼ 12. For this project we committed to combining these and other WFC3/IR imaging observations of the UDF area into a single homogeneous dataset to provide the deepest near-infrared observations of the sky. In this paper we present the observational overview of the project and describe the procedures used in reducing the data as well as the final products that were produced. We present the details of several special procedures that we implemented to correct calibration issues in the data for both the WFC3/IR observations of the main UDF field and our deep 128 orbit ACS/WFC F814W parallel field image, including treatment for persistence, correction for time-variable sky backgrounds, and astrometric alignment to an accuracy of a few milliarcseconds. We release the full, combined mosaics comprising a single, unified set of mosaics of the UDF, providing the deepest near-infrared blank-field view of the universe currently achievable, reaching magnitudes as deep as AB ∼ 30 mag in the near-infrared, and yielding a legacy dataset on this field.

  20. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.

  1. Deep subsurface microbial processes

    Science.gov (United States)

    Lovley, D.R.; Chapelle, F.H.

    1995-01-01

    Information on the microbiology of the deep subsurface is necessary in order to understand the factors controlling the rate and extent of the microbially catalyzed redox reactions that influence the geophysical properties of these environments. Furthermore, there is an increasing threat that deep aquifers, an important drinking water resource, may be contaminated by man's activities, and there is a need to predict the extent to which microbial activity may remediate such contamination. Metabolically active microorganisms can be recovered from a diversity of deep subsurface environments. The available evidence suggests that these microorganisms are responsible for catalyzing the oxidation of organic matter coupled to a variety of electron acceptors just as microorganisms do in surface sediments, but at much slower rates. The technical difficulties in aseptically sampling deep subsurface sediments, and the fact that microbial processes in laboratory incubations of deep subsurface material often do not mimic in situ processes, frequently necessitate that microbial activity in the deep subsurface be inferred through nonmicrobiological analyses of ground water. These approaches include measurements of dissolved H2, which can predict the predominant microbially catalyzed redox reactions in aquifers, as well as geochemical and groundwater flow modeling, which can be used to estimate the rates of microbial processes. Microorganisms recovered from the deep subsurface have the potential to affect the fate of toxic organics and inorganic contaminants in groundwater. Microbial activity also greatly influences the chemistry of many pristine groundwaters and contributes to such phenomena as porosity development in carbonate aquifers, accumulation of undesirably high concentrations of dissolved iron, and production of methane and hydrogen sulfide. Although the last decade has seen a dramatic increase in interest in deep subsurface microbiology, in comparison with the study of

  2. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network.

    Science.gov (United States)

    Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval

    2018-02-26

    Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and the effectiveness of different treatment options, and how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
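
    The training objective at the heart of such a model is the negative Cox partial log-likelihood evaluated on the network's risk scores. A pure-numpy sketch with toy scores; this is not the authors' implementation, and ties in event times are handled naively here:

        import numpy as np

        def cox_ph_loss(h, times, events):
            """h: risk scores; times: follow-up times; events: 1 = event observed."""
            order = np.argsort(-times)              # sort by decreasing time
            h, events = h[order], events[order]
            log_risk = np.logaddexp.accumulate(h)   # log-sum-exp over each risk set
            return -np.sum((h - log_risk)[events == 1]) / max(events.sum(), 1)

        h = np.array([0.3, -0.1, 1.2, 0.5])         # toy network outputs
        t = np.array([5.0, 8.0, 2.0, 3.0])          # follow-up times
        e = np.array([1, 0, 1, 1])                  # event indicators
        print(cox_ph_loss(h, t, e))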

  3. Deep Water Survey Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The deep water biodiversity surveys explore and describe the biodiversity of the bathy- and bentho-pelagic nekton using Midwater and bottom trawls centered in the...

  4. Deep Space Habitat Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Deep Space Habitat was closed out at the end of Fiscal Year 2013 (September 30, 2013). Results and select content have been incorporated into the new Exploration...

  5. Deep Learning in Neuroradiology.

    Science.gov (United States)

    Zaharchuk, G; Gong, E; Wintermark, M; Rubin, D; Langlotz, C P

    2018-02-01

    Deep learning is a form of machine learning using a convolutional neural network architecture that shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstration in computer vision applications to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method.

  6. Deep inelastic lepton scattering

    International Nuclear Information System (INIS)

    Nachtmann, O.

    1977-01-01

    Deep inelastic electron (muon) nucleon and neutrino nucleon scattering as well as electron positron annihilation into hadrons are reviewed from a theoretical point of view. The emphasis is placed on comparisons of quantum chromodynamics with the data. (orig.) [de

  7. Neuromorphic Deep Learning Machines

    OpenAIRE

    Neftci, E; Augustine, C; Paul, S; Detorakis, G

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide...

  8. Pathogenesis of deep endometriosis.

    Science.gov (United States)

    Gordts, Stephan; Koninckx, Philippe; Brosens, Ivo

    2017-12-01

    The pathophysiology of (deep) endometriosis is still unclear. As originally suggested by Cullen, change the definition "deeper than 5 mm" to "adenomyosis externa." With the discovery of the old European literature on uterine bleeding in 5%-10% of neonates, and histologic evidence that the bleeding represents decidual shedding, it is postulated that endometrial stem/progenitor cells, implanted in the pelvic cavity after birth, may be at the origin of adolescent and even the occasionally premenarcheal pelvic endometriosis. Endometriosis in the adolescent is characterized by angiogenic and hemorrhagic peritoneal and ovarian lesions. The development of deep endometriosis at a later age suggests that deep infiltrating endometriosis is a delayed stage of endometriosis. Another hypothesis is that the endometriotic cell has undergone genetic or epigenetic changes and those specific changes determine the development into deep endometriosis. This is compatible with the hereditary aspects, and with the clonality of deep and cystic ovarian endometriosis. It explains the predisposition and an eventual causal effect by dioxin or radiation. Specific genetic/epigenetic changes could explain the various expressions, and thus typical, cystic, and deep endometriosis become three different diseases. Subtle lesions are not a disease until (epi)genetic changes occur. A classification should reflect that deep endometriosis is a specific disease. In conclusion, the pathophysiology of deep endometriosis remains debated, and the mechanisms of disease progression, as well as the role of genetics and epigenetics in the process, still need to be unraveled.

  9. Astrometric Observation of Delta Cepheus

    Science.gov (United States)

    Warren, Naomi; Wilson, Betsie; Estrada, Chris; Crisafi, Kim; King, Jackie; Jones, Stephany; Salam, Akash; Warren, Glenn; Collins, S. Jananne; Genet, Russell

    2012-04-01

    Members of a Cuesta College astronomy research seminar used a manually-controlled 10-inch Newtonian Reflector telescope to determine the separation and position angle of the binary star Delta Cepheus. It was observed on the night of Saturday, October 29, 2011, at Star Hill in Santa Margarita, California. Their values of 40.2 arc seconds and 192.4 degrees were similar to those reported in the WDS (1910).

  10. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.

  11. Why & When Deep Learning Works: Looking Inside Deep Learnings

    OpenAIRE

    Ronen, Ronny

    2017-01-01

    The Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) has been heavily supporting Machine Learning and Deep Learning research from its foundation in 2012. We have asked six leading ICRI-CI Deep Learning researchers to address the challenge of "Why & When Deep Learning works", with the goal of looking inside Deep Learning, providing insights on how deep networks function, and uncovering key observations on their expressiveness, limitations, and potential. The outp...

  12. Ultra Deep Wave Equation Imaging and Illumination

    Energy Technology Data Exchange (ETDEWEB)

    Alexander M. Popovici; Sergey Fomel; Paul Sava; Sean Crawley; Yining Li; Cristian Lupascu

    2006-09-30

    In this project we developed and tested a novel technology, designed to enhance seismic resolution and imaging of ultra-deep complex geologic structures by using state-of-the-art wave-equation depth migration and wave-equation velocity model building technology for deeper data penetration and recovery, steeper dip and ultra-deep structure imaging, accurate velocity estimation for imaging and pore pressure prediction and accurate illumination and amplitude processing for extending the AVO prediction window. Ultra-deep wave-equation imaging provides greater resolution and accuracy under complex geologic structures where energy multipathing occurs, than what can be accomplished today with standard imaging technology. The objective of the research effort was to examine the feasibility of imaging ultra-deep structures onshore and offshore, by using (1) wave-equation migration, (2) angle-gathers velocity model building, and (3) wave-equation illumination and amplitude compensation. The effort consisted of answering critical technical questions that determine the feasibility of the proposed methodology, testing the theory on synthetic data, and finally applying the technology for imaging ultra-deep real data. Some of the questions answered by this research addressed: (1) the handling of true amplitudes in the downward continuation and imaging algorithm and the preservation of the amplitude with offset or amplitude with angle information required for AVO studies, (2) the effect of several imaging conditions on amplitudes, (3) non-elastic attenuation and approaches for recovering the amplitude and frequency, (4) the effect of aperture and illumination on imaging steep dips and on discriminating the velocities in the ultra-deep structures. All these effects were incorporated in the final imaging step of a real data set acquired specifically to address ultra-deep imaging issues, with large offsets (12,500 m) and long recording time (20 s).

  13. Auxiliary Deep Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae

    2016-01-01

    Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improve the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge faster with better results. We show state-of-the-art performance within semi-supervised learning on the MNIST (0.96%), SVHN (16.61%) and NORB (9.40%) datasets.

  14. Deep Learning from Crowds

    DEFF Research Database (Denmark)

    Rodrigues, Filipe; Pereira, Francisco Camara

    Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels…
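
    A minimal sketch of the crowd-layer idea: a per-annotator transformation (here a row-stochastic confusion matrix) sits on top of the shared softmax output, so the composite model can be trained directly on each annotator's noisy labels. Pure-numpy forward pass with invented shapes, not the authors' implementation:

        import numpy as np

        n_classes, n_annotators = 3, 4
        rng = np.random.RandomState(0)

        p_true = rng.dirichlet(np.ones(n_classes), size=8)    # base-model softmax, (8, 3)
        W = np.stack([np.eye(n_classes) + 0.05 * rng.rand(n_classes, n_classes)
                      for _ in range(n_annotators)])          # one matrix per annotator
        W /= W.sum(axis=2, keepdims=True)                     # rows sum to 1

        # Label distribution as seen by annotator r: p_r = p_true @ W[r].
        p_annotator = np.einsum('nc,rck->nrk', p_true, W)
        print(p_annotator.shape)                              # (8, 4, 3)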

  15. Deep boreholes; Tiefe Bohrloecher

    Energy Technology Data Exchange (ETDEWEB)

    Bracke, Guido [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH Koeln (Germany); Charlier, Frank [NSE international nuclear safety engineering gmbh, Aachen (Germany); Geckeis, Horst [Karlsruher Institut fuer Technologie (Germany). Inst. fuer Nukleare Entsorgung; and others

    2016-02-15

    The report on deep boreholes covers the following subject areas: methods for the safe enclosure of radioactive wastes, requirements concerning the geological conditions of possible boreholes, reversibility of decisions and retrievability, and the status of drilling technology. The introduction covers national and international activities. Further chapters deal with the following issues: the basic concept of storage in deep boreholes, the status of drilling technology, safe enclosure, geomechanics and stability, reversibility of decisions, risk scenarios, compliance with safety requirements and site-selection criteria, and research and development needs.

  16. Deep Water Acoustics

    Science.gov (United States)

    2016-06-28

    … the Deep Water project and participate in the NPAL Workshops, including Art Baggeroer (MIT), J. Beron-Vera (UMiami), M. Brown (UMiami), T. … Kathleen E. Wage. The North Pacific Acoustic Laboratory deep-water acoustic propagation experiments in the Philippine Sea. J. Acoust. Soc. Am., 134(4) … estimate of the angle α during PhilSea09, made from ADCP measurements at the site of the DVLA.

        Sim.     A   B1   B2   B3   C   D    E    F
        Prof. #  0   4    4    4    5   10   16   20

  17. Deep diode atomic battery

    International Nuclear Information System (INIS)

    Anthony, T.R.; Cline, H.E.

    1977-01-01

    A deep diode atomic battery is made from a bulk semiconductor crystal containing three-dimensional arrays of columnar and lamellar P-N junctions. The battery is powered by gamma rays and x-ray emission from a radioactive source embedded in the interior of the semiconductor crystal

  18. Deep Learning Policy Quantization

    NARCIS (Netherlands)

    van de Wolfshaar, Jos; Wiering, Marco; Schomaker, Lambertus

    2018-01-01

    We introduce a novel type of actor-critic approach for deep reinforcement learning which is based on learning vector quantization. We replace the softmax operator of the policy with a more general and more flexible operator that is similar to the robust soft learning vector quantization algorithm.
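
    A minimal sketch of such a policy operator: action probabilities derived from squared distances between a state embedding and learned per-action prototype vectors, a soft-LVQ-style replacement for the usual linear softmax head. Shapes, temperature and data are invented for illustration:

        import numpy as np

        def lvq_policy(phi, prototypes, tau=1.0):
            """phi: state embedding (d,); prototypes: one vector per action (A, d)."""
            d2 = np.sum((prototypes - phi) ** 2, axis=1)   # squared distances
            logits = -d2 / tau                             # closer prototype, larger logit
            logits -= logits.max()                         # numerical stability
            p = np.exp(logits)
            return p / p.sum()

        rng = np.random.RandomState(0)
        phi = rng.randn(16)                    # embedding from the deep network
        prototypes = rng.randn(4, 16)          # 4 actions
        print(lvq_policy(phi, prototypes))     # a distribution over the actions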

  19. Deep-sea fungi

    Digital Repository Service at National Institute of Oceanography (India)

    Raghukumar, C; Damare, S.R.

    … significant in terms of carbon sequestration (5, 8). In light of this, the diversity, abundance, and role of fungi in deep-sea sediments may form an important link in the global carbon biogeochemistry. This review focuses on issues related to collection …

  20. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    Deep inelastic lepton-nucleon interaction experiments are renewed. Singlet and non-singlet structure functions are measured and the consistency of the different results is checked. A detailed analysis of the scaling violation is performed in terms of the quantum chromodynamics predictions [fr

  1. Deep Vein Thrombosis

    Centers for Disease Control (CDC) Podcasts

    2012-04-05

    This podcast discusses the risk for deep vein thrombosis in long-distance travelers and ways to minimize that risk.  Created: 4/5/2012 by National Center for Emerging and Zoonotic Infectious Diseases (NCEZID).   Date Released: 4/5/2012.

  2. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair; Gorocs, Zoltan; Gunaydin, Harun; Zhang, Yibo; Wang, Hongda; Ozcan, Aydogan

    2017-01-01

    … regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably …

  3. The deep universe

    CERN Document Server

    Sandage, AR; Longair, MS

    1995-01-01

    Discusses the concept of the deep universe from two conflicting theoretical viewpoints: firstly as a theory embracing the evolution of the universe from the Big Bang to the present; and secondly through observations gleaned over the years on stars, galaxies and clusters.

  4. Teaching for Deep Learning

    Science.gov (United States)

    Smith, Tracy Wilson; Colby, Susan A.

    2007-01-01

    The authors have been engaged in research focused on students' depth of learning as well as teachers' efforts to foster deep learning. Findings from a study examining the teaching practices and student learning outcomes of sixty-four teachers in seventeen different states (Smith et al. 2005) indicated that most of the learning in these classrooms…

  5. Deep Trawl Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Otter trawl (36' Yankee and 4-seam net deepwater gear) catches from mid-Atlantic slope and canyons at 200 - 800 m depth. Deep-sea (200-800 m depth) flat otter trawls...

  6. [Deep vein thrombosis prophylaxis].

    Science.gov (United States)

    Sandoval-Chagoya, Gloria Alejandra; Laniado-Laborín, Rafael

    2013-01-01

    Background: despite the proven effectiveness of preventive therapy for deep vein thrombosis, a significant proportion of patients at risk for thromboembolism do not receive prophylaxis during hospitalization. Our objective was to determine the adherence to thrombosis prophylaxis guidelines in a general hospital as a quality control strategy. Methods: a random audit of clinical charts was conducted at the Tijuana General Hospital, Baja California, Mexico, to determine the degree of adherence to deep vein thrombosis prophylaxis guidelines. The instrument used was the Caprini's checklist for thrombosis risk assessment in adult patients. Results: the sample included 300 patient charts; 182 (60.7 %) were surgical patients and 118 were medical patients. Forty six patients (15.3 %) received deep vein thrombosis pharmacologic prophylaxis; 27.1 % of medical patients received deep vein thrombosis prophylaxis versus 8.3 % of surgical patients (p < 0.0001). Conclusions: our results show that adherence to DVT prophylaxis at our hospital is extremely low. Only 15.3 % of our patients at risk received treatment, and even patients with very high risk received treatment in less than 25 % of the cases. We have implemented strategies to increase compliance with clinical guidelines.

  7. Deep inelastic collisions viewed as Brownian motion

    International Nuclear Information System (INIS)

    Gross, D.H.E.; Freie Univ. Berlin

    1980-01-01

    Non-equilibrium transport processes like Brownian motion have been studied for perhaps 100 years, and one should ask why these theories are not used to explain deep inelastic collision data. These theories have reached a high standard of sophistication, experience, and precision, and I believe them to be very useful for our problem. I will try to sketch a possible form of an advanced theory of Brownian motion that seems suitable for low energy heavy ion collisions. (orig./FKS)

  8. Astrometrically registered simultaneous observations of the 22 GHz H₂O and 43 GHz SiO masers toward R Leonis Minoris using KVN and source/frequency phase referencing

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, Richard; Rioja, María J.; Jung, Tae-Hyun; Sohn, Bong-Won; Byun, Do-Young; Cho, Se-Hyung; Lee, Sang-Sung; Kim, Jongsoo; Kim, Kee-Tae; Oh, Chung-Sik; Han, Seog-Tae; Je, Do-Heung; Chung, Moon-Hee; Wi, Seog-Oh; Kang, Jiman; Lee, Jung-Won; Chung, Hyunsoo; Kim, Hyo-Ryoung; Kim, Hyun-Goo; Lee, Chang-Hoon, E-mail: rdodson@kasi.re.kr [Korea Astronomy and Space Science Institute, Daedeokdae-ro 776, Yuseong-gu, Daejeon 305-348 (Korea, Republic of); and others

    2014-11-01

    Oxygen-rich asymptotic giant branch (AGB) stars can be intense emitters of SiO (v = 1 and 2, J = 1 → 0) and H₂O maser lines at 43 and 22 GHz, respectively. Very long baseline interferometry (VLBI) observations of the maser emission provide a unique tool to probe the innermost layers of the circumstellar envelopes in AGB stars. Nevertheless, the difficulties in achieving astrometrically aligned H₂O and v = 1 and v = 2 SiO maser maps have traditionally limited the physical constraints that can be placed on the SiO maser pumping mechanism. We present phase-referenced simultaneous spectral-line VLBI images for the SiO v = 1 and v = 2, J = 1 → 0, and H₂O maser emission around the AGB star R LMi, obtained from the Korean VLBI Network (KVN). The simultaneous multi-channel receivers of the KVN offer great possibilities for astrometry in the frequency domain. With this facility, we have produced images with bona fide absolute astrometric registration between high-frequency maser transitions of different species to provide the positions of the H₂O maser emission and the center of the SiO maser emission, hence reducing the uncertainty in the proper motions for R LMi by an order of magnitude over that from Hipparcos. This is the first successful demonstration of source frequency phase referencing for millimeter VLBI spectral-line observations and also where the ratio between the frequencies is not an integer.
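
    As a rough illustration of the phase transfer underlying source/frequency phase referencing, the sketch below (Python, synthetic numbers; not the KVN pipeline) scales the phase solution of the 22 GHz reference transition by the 43/22 frequency ratio and subtracts it at the high frequency, so that only the astrometric phase offset between the transitions survives:

      import numpy as np

      # Toy source/frequency phase referencing; all values are synthetic.
      nu_low, nu_high = 22.0, 43.0                # GHz; the ratio is non-integer
      R = nu_high / nu_low
      t = np.linspace(0.0, 1.0, 200)              # time, hours
      atmo = 2.0 * np.sin(2 * np.pi * t / 0.3)    # non-dispersive phase error (rad)
      offset = 0.4                                # "astrometric" phase offset (rad)
      phase_low = atmo                            # solved on the reference transition
      phase_high = R * atmo + offset              # observed on the target transition
      residual = phase_high - R * phase_low       # scaled-transfer calibration
      print(residual.mean())                      # recovers the 0.4 rad offset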

  9. ['Gold standard', not 'golden standard']

    NARCIS (Netherlands)

    Claassen, J.A.H.R.

    2005-01-01

    In medical literature, both 'gold standard' and 'golden standard' are employed to describe a reference test used for comparison with a novel method. The term 'gold standard' in its current sense in medical research was coined by Rudd in 1979, in reference to the monetary gold standard. In the same

  10. Decommissioning standards

    International Nuclear Information System (INIS)

    Crofford, W.N.

    1980-01-01

    EPA has agreed to establish a series of environmental standards for the safe disposal of radioactive waste through participation in the Interagency Review Group on Nuclear Waste Management (IRG). One of the standards required under the IRG is the standard for decommissioning of radioactive contaminated sites, facilities, and materials. This standard is to be proposed by December 1980 and promulgated by December 1981. Several considerations are important in establishing these standards. This study includes discussions of some of these considerations and attempts to evaluate their relative importance. Items covered include: the form of the standards, timing for decommissioning, occupational radiation protection, costs and financial provisions. 4 refs

  11. Deep inelastic scattering

    International Nuclear Information System (INIS)

    Zakharov, V.I.

    1977-01-01

    The present status of the quark-parton-gluon picture of deep inelastic scattering is reviewed. The general framework is mostly theoretical and covers investigations since 1970. Predictions of the parton model and of the asymptotically free field theories are compared with experimental data available. The valence quark approximation is concluded to be valid in most cases, but fails to account for the data on the total momentum transfer. On the basis of gluon corrections introduced to the parton model certain predictions concerning both the deep inelastic structure functions and form factors are made. The contributions of gluon exchanges and gluon bremsstrahlung are highlighted. Asymptotic freedom is concluded to be very attractive and provide qualitative explanation to some experimental observations (scaling violations, breaking of the Drell-Yan-West type relations). Lepton-nuclear scattering is pointed out to be helpful in probing the nature of nuclear forces and studying the space-time picture of the parton model

  12. Deep Energy Retrofit

    DEFF Research Database (Denmark)

    Zhivov, Alexander; Lohse, Rüdiger; Rose, Jørgen

    Deep Energy Retrofit – A Guide to Achieving Significant Energy Use Reduction with Major Renovation Projects contains recommendations for characteristics of some of the core technologies and measures that are based on studies conducted by national teams associated with the International Energy Agency Energy Conservation in Buildings and Communities Program (IEA-EBC) Annex 61 (Lohse et al. 2016, Case et al. 2016, Rose et al. 2016, Yao et al. 2016, Dake 2014, Stankevica et al. 2016, Kiatreungwattana 2014). Results of these studies provided a base for setting minimum requirements for the building envelope-related technologies to make Deep Energy Retrofit feasible and, in many situations, cost effective. Use of energy efficiency measures (EEMs) in addition to the core technologies bundle and high-efficiency appliances will foster further energy use reduction. This Guide also provides best practice…

  13. Deep groundwater chemistry

    International Nuclear Information System (INIS)

    Wikberg, P.; Axelsen, K.; Fredlund, F.

    1987-06-01

    From 1977 up to the present, a number of places in Sweden have been investigated in order to collect the necessary geological, hydrogeological and chemical data needed for safety analyses of repositories in deep bedrock systems. Only crystalline rock is considered; in many cases this has been gneisses of sedimentary origin, but granites and gabbros are also represented. Core drilled holes have been made at nine sites. Up to 15 holes may be core drilled at one site, the deepest down to 1000 m. In addition, a number of boreholes are percussion drilled at each site to depths of about 100 m. When possible, drilling water is taken from percussion drilled holes. The first objective is to survey the hydraulic conditions. Core drilled boreholes and sections selected for sampling of deep groundwater are summarized. (orig./HP)

  14. Deep Reinforcement Fuzzing

    OpenAIRE

    Böttinger, Konstantin; Godefroid, Patrice; Singh, Rishabh

    2018-01-01

    Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions...
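
    The formulation above lends itself to a compact illustration. The sketch below is a minimal tabular Q-learning fuzzer over a toy target program; the program, the coverage-based reward, and the mutation actions are all invented stand-ins for the paper's deep Q-network setup, kept tabular so the example stays self-contained.

      import random
      from collections import defaultdict

      def target(data):
          """Toy program under test: returns the set of branches it exercises."""
          branches = set()
          if data[0] == 0x42:
              branches.add("magic")
          if sum(data) % 7 == 0:
              branches.add("mod7")
          if data[1] > data[2]:
              branches.add("order")
          if data[:2] == bytes([0x42, 0xFF]):
              branches.add("deep")
          return branches

      ACTIONS = ["flip", "inc", "dec", "rand"]

      def mutate(data, action, pos):
          b = bytearray(data)
          if action == "flip":
              b[pos] ^= 0xFF
          elif action == "inc":
              b[pos] = (b[pos] + 1) % 256
          elif action == "dec":
              b[pos] = (b[pos] - 1) % 256
          else:
              b[pos] = random.randrange(256)
          return bytes(b)

      Q = defaultdict(float)              # Q[(state, action)] -> expected reward
      alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
      seen = set()                        # archive of all branches ever hit
      data, state = bytes(4), frozenset()

      for step in range(2000):
          if random.random() < eps:       # epsilon-greedy over mutation actions
              action = random.choice(ACTIONS)
          else:
              action = max(ACTIONS, key=lambda a: Q[(state, a)])
          data = mutate(data, action, random.randrange(len(data)))
          cov = target(data)
          reward = len(cov - seen)        # reward: previously unseen branches
          seen |= cov
          nxt = frozenset(cov)
          best_next = max(Q[(nxt, a)] for a in ACTIONS)
          Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
          state = nxt

      print("branches discovered:", sorted(seen))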

  15. Deep Visual Attention Prediction

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements in human attention prediction, there is still a need to improve CNN-based attention models by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
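
    A minimal PyTorch sketch of the skip-layer idea with deep supervision, assuming a toy three-stage CNN (the layer sizes and random data are illustrative, not the paper's architecture): each stage gets its own 1x1 readout head, every side output is supervised directly, and the fused map is their cooperation.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class SkipLayerSaliency(nn.Module):
          def __init__(self):
              super().__init__()
              self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                          nn.MaxPool2d(2))
              self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                          nn.MaxPool2d(2))
              self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                          nn.MaxPool2d(2))
              # one 1x1 readout head per stage: multi-level saliency predictions
              self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])

          def forward(self, x):
              h, w = x.shape[-2:]
              feats, maps = x, []
              for stage, head in zip((self.stage1, self.stage2, self.stage3),
                                     self.heads):
                  feats = stage(feats)
                  m = F.interpolate(head(feats), size=(h, w), mode="bilinear",
                                    align_corners=False)
                  maps.append(torch.sigmoid(m))
              fused = torch.stack(maps).mean(0)   # cooperation of all levels
              return maps, fused

      model = SkipLayerSaliency()
      img = torch.rand(2, 3, 64, 64)
      gt = torch.rand(2, 1, 64, 64)               # stand-in fixation density
      maps, fused = model(img)
      # deep supervision: loss on every side output plus the fused prediction
      loss = sum(F.binary_cross_entropy(m, gt) for m in maps)
      loss = loss + F.binary_cross_entropy(fused, gt)
      loss.backward()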

  16. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed

  17. Standardization Documents

    Science.gov (United States)

    2011-08-01

    Overview of defense standardization document types: federal and commercial specifications and standards, guide specifications, commercial item descriptions (CIDs), and non-government standards (NGSs). A non-government standard is a national or international standardization document developed by a private sector association, organization, or technical society. Defense handbooks maintain lessons learned and give, for example, guidance for the application of a technology or lists of options.

  18. Beyond standard quantum chromodynamics

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1995-09-01

    Despite the many empirical successes of QCD, there are a number of intriguing experimental anomalies that have been observed in heavy flavor hadroproduction, in measurements of azimuthal correlations in deep inelastic processes, and in measurements of spin correlations in hadronic reactions. Such phenomena point to color coherence and multiparton correlations in the hadron wavefunctions and physics beyond standard leading twist factorization. Two new high precision tests of QCD and the Standard Model are discussed: classical polarized photoabsorption sum rules, which are sensitive to anomalous couplings and composite structure, and commensurate scale relations, which relate physical observables to each other without scale or scheme ambiguity. The relationship of anomalous couplings to composite structure is also discussed

  19. Deep Red (Profondo Rosso)

    CERN Multimedia

    Cine Club

    2015-01-01

    Wednesday 29 April 2015 at 20:00 CERN Council Chamber    Deep Red (Profondo Rosso) Directed by Dario Argento (Italy, 1975) 126 minutes A psychic who can read minds picks up the thoughts of a murderer in the audience and soon becomes a victim. An English pianist gets involved in solving the murders, but finds many of his avenues of inquiry cut off by new murders, and he begins to wonder how the murderer can track his movements so closely. Original version Italian; English subtitles

  20. Reversible deep disposal

    International Nuclear Information System (INIS)

    2009-10-01

    This presentation, given by the national agency of radioactive waste management (ANDRA) at the October 8, 2009 meeting of the high committee for nuclear safety transparency and information (HCTISN), describes the concept of deep reversible disposal for high-level/long-lived radioactive wastes, as considered by ANDRA in the framework of the program law of June 28, 2006 on the sustainable management of radioactive materials and wastes. The document presents the social and political reasons for reversibility, the technical means considered (containers, disposal cavities, monitoring system, test facilities and industrial prototypes), and the decisional process (progressive development and blocking-off of the facility, public information and debate). (J.S.)

  1. Deep inelastic neutron scattering

    International Nuclear Information System (INIS)

    Mayers, J.

    1989-03-01

    The report is based on an invited talk given at a conference on ''Neutron Scattering at ISIS: Recent Highlights in Condensed Matter Research'', which was held in Rome, 1988, and is intended as an introduction to the techniques of Deep Inelastic Neutron Scattering. The subject is discussed under the following topic headings:- the impulse approximation I.A., scaling behaviour, kinematical consequences of energy and momentum conservation, examples of measurements, derivation of the I.A., the I.A. in a harmonic system, and validity of the I.A. in neutron scattering. (U.K.)

  2. [Deep mycoses rarely described].

    Science.gov (United States)

    Charles, D

    1986-01-01

    Besides the well-known deep mycoses (histoplasmosis, candidosis, cryptococcosis), there are other, less frequently described mycoses. Some are endemic in certain countries: South American blastomycosis in Brazil, coccidioidomycosis in California; others are cosmopolitan and may affect everyone (sporotrichosis) or only immunodeficient persons (mucormycosis). They do not spare Africa, where one may encounter basidiobolomycosis, rhinophycomycosis, dermatophytosis, sporotrichosis and, more recently reported, rhinosporidiosis. Important therapeutic progress has been made with amphotericin B and with antifungal imidazole compounds (miconazole and ketoconazole). Surgical intervention is sometimes recommended in chromomycosis and rhinosporidiosis.

  3. Deep penetration calculations

    International Nuclear Information System (INIS)

    Thompson, W.L.; Deutsch, O.L.; Booth, T.E.

    1980-04-01

    Several Monte Carlo techniques are compared in the transport of neutrons of different source energies through two different deep-penetration problems each with two parts. The first problem involves transmission through a 200-cm concrete slab. The second problem is a 90° bent pipe jacketed by concrete. In one case the pipe is void, and in the other it is filled with liquid sodium. Calculations are made with two different Los Alamos Monte Carlo codes: the continuous-energy code MCNP and the multigroup code MCMG
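
    As a flavor of what such calculations involve, the toy analog Monte Carlo below pushes particles through a one-dimensional slab with invented cross sections. The real studies use MCNP and MCMG with variance reduction; an analog estimator like this one degrades rapidly as the shield gets thicker, which is precisely why the techniques compared in the report matter.

      import math
      import random

      # Invented one-group data: total cross section (1/cm), scatter probability,
      # and slab thickness (cm). Not the report's concrete slab physics.
      SIGMA_T, P_SCATTER, THICKNESS = 0.02, 0.7, 200.0

      def transmitted(n):
          hits = 0
          for _ in range(n):
              x, mu = 0.0, 1.0                    # start at surface, heading inward
              while True:
                  # exponentially distributed flight path along direction cosine mu
                  x += mu * -math.log(random.random()) / SIGMA_T
                  if x >= THICKNESS:              # escaped through the back face
                      hits += 1
                      break
                  if x < 0.0 or random.random() > P_SCATTER:
                      break                       # leaked back out, or absorbed
                  mu = random.uniform(-1.0, 1.0)  # isotropic re-direction
          return hits / n

      print("analog transmission estimate ~", transmitted(200_000))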

  4. Jet-images — deep learning edition

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Luke de [Institute for Computational and Mathematical Engineering, Stanford University,Huang Building 475 Via Ortega, Stanford, CA 94305 (United States); Kagan, Michael [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States); Mackey, Lester [Department of Statistics, Stanford University,390 Serra Mall, Stanford, CA 94305 (United States); Nachman, Benjamin; Schwartzman, Ariel [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2016-07-13

    Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.
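
    A minimal sketch of the jet-image idea, assuming toy 25 x 25 eta-phi energy images and random labels in place of the paper's simulated W and QCD jets:

      import torch
      import torch.nn as nn

      cnn = nn.Sequential(
          nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Flatten(),
          nn.Linear(16 * 6 * 6, 2),       # logits: (boosted W jet, QCD jet)
      )

      jets = torch.rand(32, 1, 25, 25)    # batch of eta-phi transverse-energy images
      labels = torch.randint(0, 2, (32,))
      logits = cnn(jets)
      loss = nn.functional.cross_entropy(logits, labels)
      loss.backward()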

  5. Jet-images — deep learning edition

    International Nuclear Information System (INIS)

    Oliveira, Luke de; Kagan, Michael; Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel

    2016-01-01

    Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.

  6. Deep Super Learner: A Deep Ensemble for Classification Problems

    OpenAIRE

    Young, Steven; Abdou, Tamer; Bener, Ayse

    2018-01-01

    Deep learning has become very popular for tasks such as predictive modeling and pattern recognition in handling big data. Deep learning is a powerful machine learning method that extracts lower level features and feeds them forward for the next layer to identify higher level features that improve performance. However, deep neural networks have drawbacks, which include many hyper-parameters and infinite architectures, opaqueness into results, and relatively slower convergence on smaller datase...

  7. Communications standards

    CERN Document Server

    Stokes, A V

    1986-01-01

    Communications Standards deals with the standardization of computer communication networks. This book examines the types of local area networks (LANs) that have been developed and looks at some of the relevant protocols in more detail. The work of Project 802 is briefly discussed, along with a protocol which has developed from one of the LAN standards and is now a de facto standard in one particular area, namely the Manufacturing Automation Protocol (MAP). Factors that affect the usage of networks, such as network management and security, are also considered. This book is divided into three se

  8. Deep sea biophysics

    International Nuclear Information System (INIS)

    Yayanos, A.A.

    1982-01-01

    A collection of deep-sea bacterial cultures was completed. Procedures were instituted to shelter the culture collection from accidental warming. A substantial data base on the rates of reproduction of more than 100 strains of bacteria from that collection was obtained from experiments and the analysis of that data was begun. The data on the rates of reproduction were obtained under conditions of temperature and pressure found in the deep sea. The experiments were facilitated by inexpensively fabricated pressure vessels, by the streamlining of the methods for the study of kinetics at high pressures, and by computer-assisted methods. A polybarothermostat was used to study the growth of bacteria along temperature gradients at eight distinct pressures. This device should allow for the study of microbial processes in the temperature field simulating the environment around buried HLW. It is small enough to allow placement in a radiation field in future studies. A flow fluorocytometer was fabricated. This device will be used to determine the DNA content per cell in bacteria grown in laboratory culture and in microorganisms in samples from the ocean. The technique will be tested for its rapidity in determining the concentration of cells (standing stock of microorganisms) in samples from the ocean

  9. Deep Learning in Radiology.

    Science.gov (United States)

    McBee, Morgan P; Awan, Omer A; Colucci, Andrew T; Ghobadi, Comeron W; Kadom, Nadja; Kansagra, Akash P; Tridandapani, Srini; Auffermann, William F

    2018-03-29

    As radiology is inherently a data-driven specialty, it is especially conducive to utilizing data processing techniques. One such technique, deep learning (DL), has become a remarkably powerful tool for image processing in recent years. In this work, the Association of University Radiologists Radiology Research Alliance Task Force on Deep Learning provides an overview of DL for the radiologist. This article aims to present an overview of DL in a manner that is understandable to radiologists; to examine past, present, and future applications; as well as to evaluate how radiologists may benefit from this remarkable new tool. We describe several areas within radiology in which DL techniques are having the most significant impact: lesion or disease detection, classification, quantification, and segmentation. The legal and ethical hurdles to implementation are also discussed. By taking advantage of this powerful tool, radiologists can become increasingly more accurate in their interpretations with fewer errors and spend more time to focus on patient care. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  10. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair

    2017-05-12

    We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field-of-view and depth-of-field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably better resolution, matching the performance of higher numerical aperture lenses, also significantly surpassing their limited field-of-view and depth-of-field. These results are transformative for various fields that use microscopy tools, including e.g., life sciences, where optical microscopy is considered as one of the most widely used and deployed techniques. Beyond such applications, our presented approach is broadly applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better and better as they continue to image specimen and establish new transformations among different modes of imaging.
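
    A minimal sketch of this training setup in the spirit of image-to-image super-resolution networks (the three-layer architecture and random tensors are assumptions, not the authors' network): pairs of co-registered low-resolution and high-NA images supervise a CNN that afterwards needs only the standard microscope image.

      import torch
      import torch.nn as nn

      enhance = nn.Sequential(                    # low-res image -> enhanced image
          nn.Conv2d(1, 32, 9, padding=4), nn.ReLU(),
          nn.Conv2d(32, 16, 5, padding=2), nn.ReLU(),
          nn.Conv2d(16, 1, 5, padding=2),
      )

      low_res = torch.rand(4, 1, 128, 128)   # wide-field acquisitions (resampled)
      high_res = torch.rand(4, 1, 128, 128)  # co-registered high-NA ground truth
      opt = torch.optim.Adam(enhance.parameters(), lr=1e-4)
      for _ in range(3):                     # training loop (truncated)
          opt.zero_grad()
          loss = nn.functional.mse_loss(enhance(low_res), high_res)
          loss.backward()
          opt.step()
      # at inference, only a standard microscope image is needed:
      better = enhance(low_res)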

  11. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
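
    The objective described above can be sketched compactly. Below, a shared embedding network is trained with stand-ins for the paper's three terms (the margin, weights, and linear-kernel domain statistic are invented): mean intra-class distance is minimized, inter-class pairs are pushed beyond a margin, and the gap between source and target embedding means shrinks the distribution divergence.

      import torch
      import torch.nn as nn

      embed = nn.Sequential(nn.Linear(20, 64), nn.Tanh(), nn.Linear(64, 16))

      xs = torch.randn(128, 20)                  # labeled source-domain samples
      ys = torch.randint(0, 4, (128,))
      xt = torch.randn(128, 20)                  # unlabeled target-domain samples

      zs, zt = embed(xs), embed(xt)
      d = torch.cdist(zs, zs)                    # pairwise embedding distances
      same = (ys[:, None] == ys[None, :]).float()
      diff = 1.0 - same                          # different-class pair mask
      same = same - torch.eye(len(ys))           # drop trivial self-pairs
      intra = (d * same).sum() / same.sum()                  # pull classes together
      inter = (torch.relu(2.0 - d) * diff).sum() / diff.sum()  # margin push apart
      mmd = (zs.mean(0) - zt.mean(0)).pow(2).sum()           # align domain means
      loss = intra + inter + 0.5 * mmd
      loss.backward()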

  12. Deep Reinforcement Learning: An Overview

    OpenAIRE

    Li, Yuxi

    2017-01-01

    We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsuperv...

  13. Deep Feature Consistent Variational Autoencoder

    OpenAIRE

    Hou, Xianxu; Shen, Linlin; Sun, Ke; Qiu, Guoping

    2016-01-01

    We present a novel method for constructing Variational Autoencoder (VAE). Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE, which ensures the VAE's output to preserve the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning works such as style transfer, we employ a pre-trained deep convolutional neural net...

  14. Training Standardization

    International Nuclear Information System (INIS)

    Agnihotri, Newal

    2003-01-01

    The article describes the benefits of and required process and recommendations for implementing the standardization of training in the nuclear power industry in the United States and abroad. Current Information and Communication Technologies (ICT) enable training standardization in the nuclear power industry. The delivery of training through the Internet, Intranet and video over IP will facilitate this standardization and bring multiple benefits to the nuclear power industry worldwide. As the amount of available qualified and experienced professionals decreases because of retirements and fewer nuclear engineering institutions, standardized training will help increase the number of available professionals in the industry. Technology will make it possible to use the experience of retired professionals who may be interested in working part-time from a remote location. Well-planned standardized training will prevent a fragmented approach among utilities, and it will save the industry considerable resources in the long run. It will also ensure cost-effective and safe nuclear power plant operation

  15. Deep learning? What deep learning? | Fourie | South African ...

    African Journals Online (AJOL)

    In teaching generally over the past twenty years, there has been a move towards teaching methods that encourage deep, rather than surface approaches to learning. The reason for this being that students, who adopt a deep approach to learning are considered to have learning outcomes of a better quality and desirability ...

  16. Effluent standards

    Energy Technology Data Exchange (ETDEWEB)

    Geisler, G C [Pennsylvania State University (United States)

    1974-07-01

    At the conference there was considerable interest in research reactor standards, and effluent standards in particular. On the program, this is demonstrated by the panel discussion on effluents, the paper on argon-41 measured by Sims, and the summary paper by Ringle et al. on the activities of the ANS research reactor standards committee (ANS-15). As a result, a meeting was organized to discuss the proposed ANS standard on research reactor effluents (15.9). This was held on Tuesday evening and was attended by members of the ANS-15 committee who were present at the conference, participants in the panel discussion on the subject, and others interested. Out of this meeting came a number of excellent suggestions for changes which will increase the utility of the standard, and a strong recommendation that the effluent standard (15.9) be combined with the effluent monitoring standard. It is expected that these suggestions and recommendations will be incorporated and a revised draft issued for comment early this summer. (author)

  17. Nuclear standards

    International Nuclear Information System (INIS)

    Fichtner, N.; Becker, K.; Bashir, M.

    1981-01-01

    This compilation of all nuclear standards available to the authors by mid 1980 represents the third, carefully revised edition of a catalogue which was first published in 1975 as EUR 5362. In this third edition several changes have been made. The title has been condensed. The information has again been carefully up-dated, covering all changes regarding status, withdrawal of old standards, new projects, amendments, revisions, splitting of standards into several parts, combination of several standards into one, etc., as available to the authors by mid 1980. The speed with which information travels varies and requires in many cases rather tedious and cumbersome inquiries. Also, the classification scheme has been revised with the goal of better adjustment to changing situations and priorities. Whenever it turned out to be difficult to attribute a standard to a single subject category, multiple listings in all relevant categories have been made. As in previous editions, within the subcategories the standards are arranged by organization (in Category 2.1 by country) alphabetically and in ascending numerical order. It covers all relevant areas of power reactors, the fuel cycle, radiation protection, etc., from the basic laws and governmental regulations, regulatory guides, etc., all the way to voluntary industrial standards and codes of practice. (orig./HP)

  18. The Nature of Thinking, Shallow and Deep

    Directory of Open Access Journals (Sweden)

    Gary L. Brase

    2014-05-01

    Because the criteria for success differ across various domains of life, no single normative standard will ever work for all types of thinking. One method for dealing with this apparent dilemma is to propose that the mind is made up of a large number of specialized modules. This review describes how this multi-modular framework for the mind overcomes several critical conceptual and theoretical challenges to our understanding of human thinking, and hopefully clarifies what are (and are not) some of the implications based on this framework. In particular, an evolutionarily informed deep rationality conception of human thinking can guide psychological research out of clusters of ad hoc models which currently occupy some fields. First, the idea of deep rationality helps theoretical frameworks in terms of orienting themselves with regard to time scale references, which can alter the nature of rationality assessments. Second, the functional domains of deep rationality can be hypothesized (non-exhaustively) to include the areas of self-protection, status, affiliation, mate acquisition, mate retention, kin care, and disease avoidance. Thus, although there is no single normative standard of rationality across all of human cognition, there are sensible and objective standards by which we can evaluate multiple, fundamental, domain-specific motives underlying human cognition and behavior. This review concludes with two examples to illustrate the implications of this framework. The first example, decisions about having a child, illustrates how competing models can be understood by realizing that different fundamental motives guiding people’s thinking can sometimes be in conflict. The second example is that of personifications within modern financial markets (e.g., in the form of corporations), which are entities specifically constructed to have just one fundamental motive. This single focus is the source of both the strengths and flaws in how such entities behave.

  19. The nature of thinking, shallow and deep.

    Science.gov (United States)

    Brase, Gary L

    2014-01-01

    Because the criteria for success differ across various domains of life, no single normative standard will ever work for all types of thinking. One method for dealing with this apparent dilemma is to propose that the mind is made up of a large number of specialized modules. This review describes how this multi-modular framework for the mind overcomes several critical conceptual and theoretical challenges to our understanding of human thinking, and hopefully clarifies what are (and are not) some of the implications based on this framework. In particular, an evolutionarily informed "deep rationality" conception of human thinking can guide psychological research out of clusters of ad hoc models which currently occupy some fields. First, the idea of deep rationality helps theoretical frameworks in terms of orienting themselves with regard to time scale references, which can alter the nature of rationality assessments. Second, the functional domains of deep rationality can be hypothesized (non-exhaustively) to include the areas of self-protection, status, affiliation, mate acquisition, mate retention, kin care, and disease avoidance. Thus, although there is no single normative standard of rationality across all of human cognition, there are sensible and objective standards by which we can evaluate multiple, fundamental, domain-specific motives underlying human cognition and behavior. This review concludes with two examples to illustrate the implications of this framework. The first example, decisions about having a child, illustrates how competing models can be understood by realizing that different fundamental motives guiding people's thinking can sometimes be in conflict. The second example is that of personifications within modern financial markets (e.g., in the form of corporations), which are entities specifically constructed to have just one fundamental motive. This single focus is the source of both the strengths and flaws in how such entities behave.

  20. Deep sea radionuclides

    International Nuclear Information System (INIS)

    Kanisch, G.; Vobach, M.

    1993-01-01

    Every year since 1979, either in spring or in summer, the fishing research vessel 'Walther Herwig' goes to the North Atlantic disposal areas of solid radioactive wastes, and, for comparative purposes, to other areas, in order to collect water samples, plankton and nekton, and, from the deep sea bed, sediment samples and benthos organisms. In addition to data on the radionuclide contents of various media, information about the plankton, nekton and benthos organisms living in those areas and about their biomasses could be gathered. The investigations are aimed at acquiring scientifically founded knowledge of the uptake of radioactive substances by microorganisms, and their migration from the sea bottom to the areas used by man. (orig.)

  1. Deep inelastic phenomena

    International Nuclear Information System (INIS)

    Aubert, J.J.

    1982-01-01

    The experimental situation of deep inelastic scattering for electrons (muons) is reviewed. A brief history of experimentation highlights Mohr and Nicoll's 1932 experiment on electron-atom scattering and Hofstadter's 1950 experiment on electron-nucleus scattering. The phenomenology of electron-nucleon scattering carried out between 1960 and 1970 is described, with emphasis on the parton model and scaling. Experiments at SLAC and FNAL since 1974 exhibit scaling violations. Three muon-nucleon scattering experiments, BFP, BCDMS, and EMC, currently producing new results in the high Q² domain, suggest a rather flat behaviour of the structure function at fixed x as a function of Q². It is seen that the structure measured in DIS can then be projected into a pure hadronic process to predict a cross section. Proton-neutron difference, moment analysis, and Drell-Yan pairs are also considered.

  2. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image based techniques or multivariate approaches using high level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.
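
    A minimal sketch of the sequential approach, assuming pT-ordered four-vector deposits and an LSTM read-out (the feature choice and sizes are illustrative, not the talk's setup):

      import torch
      import torch.nn as nn

      class SeqTagger(nn.Module):
          def __init__(self):
              super().__init__()
              self.rnn = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
              self.out = nn.Linear(32, 1)        # top-jet score

          def forward(self, deposits):           # (batch, n_deposits, 4)
              _, (h, _) = self.rnn(deposits)
              return torch.sigmoid(self.out(h[-1]))

      jets = torch.randn(64, 30, 4)              # 30 deposits: (pT, eta, phi, E)
      # order each jet's deposits by descending pT so sequences are well defined
      order = jets[:, :, 0].argsort(dim=1, descending=True)
      jets = torch.gather(jets, 1, order.unsqueeze(-1).expand_as(jets))
      scores = SeqTagger()(jets)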

  3. MATE standardization

    Science.gov (United States)

    Farmer, R. E.

    1982-11-01

    The MATE (Modular Automatic Test Equipment) program was developed to combat the proliferation of unique, expensive ATE within the Air Force. MATE incorporates a standard management approach and a standard architecture designed to implement a cradle-to-grave approach to the acquisition of ATE and to significantly reduce the life cycle cost of weapons systems support. These standards are detailed in the MATE Guides. The MATE Guides assist both the Air Force and Industry in implementing the MATE concept, and provide the necessary tools and guidance required for successful acquisition of ATE. The guides also provide the necessary specifications for industry to build MATE-qualifiable equipment. The MATE architecture provides standards for all key interfaces of an ATE system. The MATE approach to the acquisition and management of ATE has been jointly endorsed by the commanders of Air Force Systems Command and Air Force Logistics Command as the way of doing business in the future.

  4. Context and Deep Learning Design

    Science.gov (United States)

    Boyle, Tom; Ravenscroft, Andrew

    2012-01-01

    Conceptual clarification is essential if we are to establish a stable and deep discipline of technology enhanced learning. The technology is alluring; this can distract from deep design in a surface rush to exploit the affordances of the new technology. We need a basis for design, and a conceptual unit of organization, that are applicable across…

  5. Tephrostratigraphy the DEEP site record, Lake Ohrid

    Science.gov (United States)

    Leicher, N.; Zanchetta, G.; Sulpizio, R.; Giaccio, B.; Wagner, B.; Francke, A.

    2016-12-01

    In the central Mediterranean region, tephrostratigraphy has proved to be a suitable and powerful tool for dating and correlating marine and terrestrial records. However, for the period older than 200 ka, tephrostratigraphy is incomplete and restricted to some Italian continental basins (e.g. Sulmona, Acerno, Mercure), and continuous records downwind of the Italian volcanoes are rare. Lake Ohrid (Macedonia/Albania) in the eastern Mediterranean region fits this requisite and is assumed to be the oldest continuously existing lake in Europe. A continuous record (DEEP) was recovered within the scope of the ICDP deep-drilling campaign SCOPSCO (Scientific Collaboration on Past Speciation Conditions in Lake Ohrid). In the uppermost 450 meters of the record, covering more than 1.2 Myr of Italian volcanism, 54 tephra layers were identified during core opening and description. A first tephrostratigraphic record was established for the uppermost 248 m (ca. 637 ka). Major element analyses (EDS/WDS) were carried out on juvenile glass fragments, and 15 out of 35 tephra layers have been identified and correlated with known and dated eruptions of Italian volcanoes. Existing 40Ar/39Ar ages were re-calculated using the same flux standard and used as first-order tie points to develop a robust chronology for the DEEP site succession. Between 248 and 450 m of the DEEP site record, another 19 tephra horizons were identified and are the subject of ongoing work. These deposits, once correlated with known and dated tephra, will hopefully enable dating of this part of the succession, likely supported by major paleomagnetic events such as the Brunhes-Matuyama boundary or the Cobb Mountain and Jaramillo excursions. This makes the Lake Ohrid record a unique continuous, distal record of Italian volcanic activity, which is a candidate to become the template for central Mediterranean tephrostratigraphy, especially for the hitherto poorly known and explored lower Middle Pleistocene period.

  6. Deep Learning Fluid Mechanics

    Science.gov (United States)

    Barati Farimani, Amir; Gomes, Joseph; Pande, Vijay

    2017-11-01

    We have developed a new data-driven model paradigm for the rapid inference and solution of the constitutive equations of fluid mechanics by deep learning models. Using generative adversarial networks (GANs), we train models for the direct generation of solutions to steady state heat conduction and incompressible fluid flow without knowledge of the underlying governing equations. Rather than using artificial neural networks to approximate the solution of the constitutive equations, GANs can directly generate the solutions to these equations conditional upon an arbitrary set of boundary conditions. Both models predict temperature, velocity and pressure fields with great test accuracy (>99.5%). Our framework for inferring and generating the solutions of partial differential equations can be applied to any physical phenomenon and can be used to learn directly from experiments where the underlying physical model is complex or unknown. We have also shown that our framework can be used to couple multiple physics simultaneously, making it amenable to tackle multi-physics problems.
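
    A minimal sketch of the conditional-GAN idea, with 1D steady heat conduction standing in for the abstract's solvers because its exact solution (a linear profile between the two boundary temperatures) is trivial to generate: the generator maps boundary conditions plus noise directly to a solution field, and the discriminator judges (conditions, field) pairs. All sizes and rates are invented.

      import torch
      import torch.nn as nn

      N = 32                                         # grid points
      G = nn.Sequential(nn.Linear(2 + 8, 64), nn.ReLU(), nn.Linear(64, N))
      D = nn.Sequential(nn.Linear(2 + N, 64), nn.ReLU(), nn.Linear(64, 1))
      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
      bce = nn.BCEWithLogitsLoss()

      def exact(bc):                                 # analytic solution per sample
          x = torch.linspace(0, 1, N)
          return bc[:, :1] + (bc[:, 1:] - bc[:, :1]) * x

      for step in range(200):
          bc = torch.rand(16, 2)                     # (T_left, T_right) conditions
          real = exact(bc)
          fake = G(torch.cat([bc, torch.randn(16, 8)], dim=1))
          # discriminator update: real pairs -> 1, generated pairs -> 0
          d_loss = bce(D(torch.cat([bc, real], 1)), torch.ones(16, 1)) + \
                   bce(D(torch.cat([bc, fake.detach()], 1)), torch.zeros(16, 1))
          opt_d.zero_grad(); d_loss.backward(); opt_d.step()
          # generator update: try to fool the discriminator
          g_loss = bce(D(torch.cat([bc, fake], 1)), torch.ones(16, 1))
          opt_g.zero_grad(); g_loss.backward(); opt_g.step()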

  7. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
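
    The supervision strategy is easy to reproduce in miniature: average consecutive frames of a high-framerate video to synthesize shutter blur, and pair each blurred frame with its sharp center frame. In the sketch below, random data stands in for real recordings, and the window length is an assumption.

      import numpy as np

      def make_pairs(frames, window=7):
          """frames: (T, H, W, C) high-framerate video; returns (blurred, sharp)."""
          half = window // 2
          blurred, sharp = [], []
          for t in range(half, len(frames) - half):
              clip = frames[t - half: t + half + 1].astype(np.float32)
              blurred.append(clip.mean(axis=0))   # temporal average ~ shutter blur
              sharp.append(frames[t])             # center frame is the target
          return np.stack(blurred), np.stack(sharp)

      video = np.random.randint(0, 256, size=(120, 64, 64, 3), dtype=np.uint8)
      blurred, sharp = make_pairs(video)
      print(blurred.shape, sharp.shape)            # (114, 64, 64, 3) twice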

  8. Deep space telescopes

    CERN Multimedia

    CERN. Geneva

    2006-01-01

    The short series of seminars will address results and aims of current and future space astrophysics as the cultural framework for the development of deep space telescopes. It will then present such new tools, as they are currently available to, or imagined by, the scientific community, in the context of the science plans of ESA and of all major world space agencies. Ground-based astronomy, in the 400 years since Galileo’s telescope, has given us a profound phenomenological comprehension of our Universe, but has traditionally been limited to the narrow band(s) to which our terrestrial atmosphere is transparent. Celestial objects, however, do not care about our limitations, and distribute most of the information about their physics throughout the complete electromagnetic spectrum. Such information is there for the taking, from millimeter wavelengths to gamma rays. Forty years of astronomy from space, covering now most of the e.m. spectrum, have thus given us a better understanding of our physical Universe than t...

  9. Deep inelastic final states

    International Nuclear Information System (INIS)

    Girardi, G.

    1980-11-01

    In these lectures we attempt to describe the final states of deep inelastic scattering as given by QCD. In the first section we briefly comment on the parton model and give the main properties of decay functions which are of interest for the study of semi-inclusive leptoproduction. The second section is devoted to the QCD approach to single hadron leptoproduction. First we recall basic facts on QCD logs and after that derive the evolution equations for the fragmentation functions. For this purpose we make a short detour into e⁺e⁻ annihilation. The rest of the section is a study of the factorization of long distance effects associated with the initial and final states. We then show how, when one includes next-to-leading QCD corrections, one induces factorization breaking, and describe the double moments useful for testing such effects. The next section contains a review of the QCD jets in the hadronic final state. We begin by introducing the notion of an infrared safe variable and defining a few useful examples. Distributions in these variables are studied to first order in QCD, with some comments on the resummation of logs encountered in higher orders. Finally the last section is a 'gallimaufry' of jet studies.

  10. Frequency standards

    CERN Document Server

    Riehle, Fritz

    2006-01-01

    Of all measurement units, frequency is the one that may be determined with the highest degree of accuracy. It equally allows precise measurements of other physical and technical quantities, whenever they can be measured in terms of frequency. This volume covers the central methods and techniques relevant for frequency standards developed in physics, electronics, quantum electronics, and statistics. After a review of the basic principles, the book looks at the realisation of commonly used components. It then continues with the description and characterisation of important frequency standards

  11. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result, standard time series methods regularly used for financial and similar datasets are of little help, and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series to two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that convolutional neural network based classifiers work well for broad characterization and classification. We use labeled datasets of periodic variables from the CRTS survey and show how this opens doors for a quick classification of diverse classes with several...
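
    One concrete choice of two-dimensional representation in the spirit of the approach described above (the bin edges and the toy light curve below are assumptions): map every pair of observations to a (time difference, magnitude difference) sample and histogram the pairs into an image a CNN can ingest.

      import numpy as np

      def dmdt_image(t, m, dt_bins, dm_bins):
          """Map an irregular light curve (t, m) to a 2D dm-dt histogram."""
          i, j = np.triu_indices(len(t), k=1)      # every pair of points, once
          dt = t[j] - t[i]
          dm = m[j] - m[i]
          img, _, _ = np.histogram2d(dt, dm, bins=(dt_bins, dm_bins))
          return img / img.max()                    # normalize for the network

      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0, 500, 80))          # sparse, gappy sampling
      m = 15 + 0.5 * np.sin(2 * np.pi * t / 3.7) + 0.05 * rng.normal(size=80)
      dt_bins = np.logspace(-1, 3, 24)              # log-spaced time lags (days)
      dm_bins = np.linspace(-1.5, 1.5, 24)
      image = dmdt_image(t, m, dt_bins, dm_bins)    # 23x23 input for a CNN
      print(image.shape)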

  12. Decadal trends in deep ocean salinity and regional effects on steric sea level

    Science.gov (United States)

    Purkey, S. G.; Llovel, W.

    2017-12-01

    We present deep (below 2000 m) and abyssal (below 4000 m) global ocean salinity trends from the 1990s through the 2010s and assess the role of deep salinity in local and global sea level budgets. Deep salinity trends are assessed using all deep basins with available full-depth, high-quality hydrographic section data that have been occupied two or more times since the 1980s through either the World Ocean Circulation Experiment (WOCE) Hydrographic Program or the Global Ship-Based Hydrographic Investigations Program (GO-SHIP). All salinity data is calibrated to standard seawater and any intercruise offsets applied. While the global mean deep halosteric contribution to sea level rise is close to zero (-0.017 +/- 0.023 mm/yr below 4000 m), there is a large regional variability with the southern deep basins becoming fresher and northern deep basins becoming more saline. This meridional gradient in the deep salinity trend reflects different mechanisms driving the deep salinity variability. The deep Southern Ocean is freshening owing to a recent increased flux of freshwater to the deep ocean. Outside of the Southern Ocean, the deep salinity and temperature changes are tied to isopycnal heave associated with a falling of deep isopycnals in recent decades. Therefore, regions of the ocean with a deep salinity minimum are experiencing both a halosteric contraction with a thermosteric expansion. While the thermosteric expansion is larger in most cases, in some regions the halosteric compensates for as much as 50% of the deep thermal expansion, making a significant contribution to local sea level rise budgets.
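
    The halosteric bookkeeping behind such budgets reduces to a vertical integral: the sea-level trend contribution is minus the haline contraction coefficient times the salinity trend, integrated over the layer. The sketch below uses a constant representative beta and an invented freshening trend; a full treatment would evaluate beta(S, T, p) from the equation of state. Note the sign convention: abyssal freshening yields a positive (expansion) contribution.

      import numpy as np

      beta = 7.6e-4            # haline contraction coefficient per (g/kg), assumed
      z = np.arange(4000.0, 6000.0, 10.0)       # abyssal layer depths, m
      dz = 10.0                                 # layer thickness, m
      dS_dt = np.full_like(z, -5e-4)            # freshening trend, (g/kg) per decade
      # halosteric sea-level trend: d(eta)/dt = -integral of beta * dS/dt dz
      deta_dt = -np.sum(beta * dS_dt * dz)      # m per decade
      print(f"{deta_dt * 1000 / 10:.3f} mm/yr halosteric contribution")

    With these invented numbers the contribution is of order 0.1 mm/yr, the same order as the global mean quoted in the abstract, which is why regional compensation between halosteric and thermosteric terms matters.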

  13. Deep Mapping and Spatial Anthropology

    Directory of Open Access Journals (Sweden)

    Les Roberts

    2016-01-01

    This paper provides an introduction to the Humanities Special Issue on “Deep Mapping”. It sets out the rationale for the collection and explores the broad-ranging nature of perspectives and practices that fall within the “undisciplined” interdisciplinary domain of spatial humanities. Sketching a cross-current of ideas that have begun to coalesce around the concept of “deep mapping”, the paper argues that rather than attempting to outline a set of defining characteristics and “deep” cartographic features, a more instructive approach is to pay closer attention to the multivalent ways deep mapping is performatively put to work. Casting a critical and reflexive gaze over the developing discourse of deep mapping, it is argued that what deep mapping “is” cannot be reduced to the otherwise a-spatial and a-temporal fixity of the “deep map”. In this respect, as an undisciplined survey of this increasing expansive field of study and practice, the paper explores the ways in which deep mapping can engage broader discussion around questions of spatial anthropology.

  14. Deep learning for computational chemistry.

    Science.gov (United States)

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure activity relationship, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  15. Deep learning for computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Garrett B. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Hodas, Nathan O. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354; Vishnu, Abhinav [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, 902 Battelle Blvd Richland Washington 99354

    2017-03-08

    The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may come to play a pivotal role in various challenges in the computational chemistry field.

  16. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu; Han, Renmin; Bi, Chongwei; Li, Mo; Wang, Sheng; Gao, Xin

    2017-01-01

    Starting from a given reference genome or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments…

  17. Relevant Standards

    Indian Academy of Sciences (India)

    X.86: Ethernet over LAPS; standard in China and India. G.7041: Generic Framing Procedure (GFP); supports Ethernet as well as other data formats (e.g., Fibre Channel); protocol of ... IEEE 802.3x for flow control of incoming Ethernet data ...

  18. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2014-01-01

    International e-Customs is going through a standardization process. Driven by the need to increase control in the trade process to address security challenges stemming from threats of terrorists, diseases, and counterfeit products, and to lower the administrative burdens on traders to stay...

  19. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2016-01-01

    International e-Customs is going through a standardization process. Driven by the need to increase control in the trade process to address security challenges stemming from threats of terrorists, diseases, and counterfeit products, and to lower the administrative burdens on traders to stay...

  20. Standard Fortran

    International Nuclear Information System (INIS)

    Marshall, N.H.

    1981-01-01

    Because of its vast software investment in Fortran programs, the nuclear community has an inherent interest in the evolution of Fortran. This paper reviews the impact of the new Fortran 77 standard and discusses the projected changes which can be expected in the future

  1. Extracting Databases from Dark Data with DeepDive.

    Science.gov (United States)

    Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng

    2016-01-01

    DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.

  2. Deep UV LEDs

    Science.gov (United States)

    Han, Jung; Amano, Hiroshi; Schowalter, Leo

    2014-06-01

    Deep ultraviolet (DUV) photons interact strongly with a broad range of chemical and biological molecules; compact DUV light sources could enable a wide range of applications in chemi/bio-sensing, sterilization, agriculture, and industrial curing. The much shorter wavelength also results in useful characteristics related to optical diffraction (for lithography) and scattering (non-line-of-sight communication). The family of III-N (AlGaInN) compound semiconductors offers a tunable energy gap from infrared to DUV. While InGaN-based blue light emitters have been the primary focus for the obvious application of solid state lighting, there is a growing interest in the development of efficient UV and DUV light-emitting devices. In the past few years we have witnessed an increasing investment from both government and industry sectors to further the state of DUV light-emitting devices. The contributions in Semiconductor Science and Technology's special issue on DUV devices provide an up-to-date snapshot covering many relevant topics in this field. Given the expected importance of bulk AlN substrate in DUV technology, we are pleased to include a review article by Hartmann et al on the growth of AlN bulk crystal by physical vapour transport. The issue of polarization field within the deep ultraviolet LEDs is examined in the article by Braut et al. Several commercial companies provide useful updates in their development of DUV emitters, including Nichia (Fujioka et al), Nitride Semiconductors (Muramoto et al) and Sensor Electronic Technology (Shatalov et al). We believe these articles will provide an excellent overview of the state of technology. The growth of AlGaN heterostructures by molecular beam epitaxy, in contrast to the common organo-metallic vapour phase epitaxy, is discussed by Ivanov et al. Since hexagonal boron nitride (BN) has received much attention as both a UV and a two-dimensional electronic material, we believe it serves readers well to include the

  3. DEEP INFILTRATING ENDOMETRIOSIS

    Directory of Open Access Journals (Sweden)

    Martina Ribič-Pucelj

    2018-02-01

    Full Text Available Background: Endometriosis is not considered a unified disease, but a disease encompassing three different forms differentiated by aetiology and pathogenesis: peritoneal endometriosis, ovarian endometriosis and deep infiltrating endometriosis (DIE). The disease is classified as DIE when the lesions penetrate 5 mm or more into the retroperitoneal space. The estimated incidence of endometriosis in women of reproductive age ranges from 10–15 % and that of DIE from 3–10 %, the highest being in infertile women and in those with chronic pelvic pain. The leading symptoms of DIE are chronic pelvic pain, which increases with age and correlates with the depth of infiltration, and infertility. The most important diagnostic procedures are the patient's history and a proper gynaecological examination. The diagnosis is confirmed with laparoscopy. Besides the reproductive organs, DIE can also affect the bowel, bladder and ureters; therefore, additional diagnostic procedures must be performed preoperatively to confirm or exclude involvement of these organs. Endometriosis is a hormone-dependent disease; several hormonal treatment regimens are therefore used to suppress estrogen production, but the symptoms recur soon after cessation of treatment. At present, surgical treatment with excision of all lesions, including those of the bowel, bladder and ureters, is the method of choice, but it frequently requires an interdisciplinary approach. Surgical treatment significantly reduces pain and improves fertility in infertile patients. Conclusions: DIE is not a rare form of endometriosis; it is characterized by chronic pelvic pain and infertility. Medical treatment is not efficient. The method of choice is surgical treatment with excision of all lesions, which significantly reduces pelvic pain and enables high spontaneous and IVF pregnancy rates. Such patients should therefore be treated at centres with experience in the treatment of DIE and with the possibility of an interdisciplinary approach.

  4. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation

    Energy Technology Data Exchange (ETDEWEB)

    Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng

    2017-05-30

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  5. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.

    Science.gov (United States)

    Hohman, Fred; Hodas, Nathan; Chau, Duen Horng

    2017-05-01

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  6. Telepresence for Deep Space Missions

    Data.gov (United States)

    National Aeronautics and Space Administration — Incorporating telepresence technologies into deep space mission operations can give the crew and ground personnel the impression that they are in a location at time...

  7. Hybrid mask for deep etching

    KAUST Repository

    Ghoneim, Mohamed T.

    2017-01-01

    Deep reactive ion etching is essential for creating high aspect ratio micro-structures for microelectromechanical systems, sensors and actuators, and emerging flexible electronics. A novel hybrid dual soft/hard mask bilayer may be deposited during

  8. Deep Learning and Bayesian Methods

    OpenAIRE

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such meth...

  9. Density functionals from deep learning

    OpenAIRE

    McMahon, Jeffrey M.

    2016-01-01

    Density-functional theory is a formally exact description of a many-body quantum system in terms of its density; in practice, however, approximations to the universal density functional are required. In this work, a model based on deep learning is developed to approximate this functional. Deep learning allows computational models that are capable of naturally discovering intricate structure in large and/or high-dimensional data sets, with multiple levels of abstraction. As no assumptions are ...

  10. Deep Unfolding for Topic Models.

    Science.gov (United States)

    Chien, Jen-Tzung; Lee, Chao-Hsi

    2018-02-01

    Deep unfolding provides an approach to integrating probabilistic generative models with deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops the unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, unsupervised and supervised topic models are inferred via the variational inference algorithm, where the model parameters are estimated by maximizing the lower bound of the logarithm of the marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and by the model parameters being tied across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to the exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with an increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.
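
    The core idea, unrolling an iterative inference update into layers whose parameters are untied and trained end to end by back-propagation, can be sketched generically as below; this is a schematic illustration of unfolding, not the exponentiated-update inference of the paper.

```python
# Conceptual sketch of "deep unfolding" (not the authors' code): K iterations
# of an inference update are unrolled into K layers, and the parameter that is
# normally shared across iterations is untied so each layer learns its own copy;
# the whole unrolled network is then trained by back-propagation.
import torch
import torch.nn as nn

class UnfoldedInference(nn.Module):
    def __init__(self, dim, n_layers):
        super().__init__()
        # one untied parameter matrix per unrolled iteration
        self.steps = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])

    def forward(self, x):
        h = x
        for step in self.steps:
            # one "iteration" of the update: here a simple residual
            # refinement followed by a non-negativity constraint
            h = torch.relu(h + step(h))
        # normalize so the output can be read as topic proportions
        return h / h.sum(dim=-1, keepdim=True).clamp_min(1e-8)

net = UnfoldedInference(dim=20, n_layers=5)   # 5 unrolled iterations
theta = net(torch.rand(8, 20))                # 8 documents, 20 "topics"
```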

  11. Method for manufacturing nuclear radiation detector with deep diffused junction

    International Nuclear Information System (INIS)

    Hall, R.N.

    1977-01-01

    Germanium radiation detectors are manufactured by diffusing lithium into high purity p-type germanium. The diffusion is most readily accomplished from a lithium-lead-bismuth alloy at approximately 430 °C and is monitored by a quartz half cell containing a standard composition of this alloy. Detectors having n-type cores may be constructed by converting high purity p-type germanium to n-type by a lithium diffusion and subsequently diffusing some of the lithium back out through the surface to create a deep p-n junction. Production of coaxial germanium detectors comprising deep p-n junctions by the lithium diffusion process is described.

  12. Hot, deep origin of petroleum: deep basin evidence and application

    Science.gov (United States)

    Price, Leigh C.

    1978-01-01

    Use of the model of a hot deep origin of oil places rigid constraints on the migration and entrapment of crude oil. Specifically, oil originating from depth migrates vertically up faults and is emplaced in traps at shallower depths. Review of petroleum-producing basins worldwide shows that oil occurrence in these basins conforms to the constraints of the hypothesis and therefore supports it. Most of the world's oil is found in the very deepest sedimentary basins, and production over or adjacent to the deep basin is cut by or directly updip from faults dipping into the basin deep. Generally, the greater the fault throw the greater the reserves. Fault-block highs next to deep sedimentary troughs are the best target areas by the present concept. Traps along major basin-forming faults are quite prospective. The structural style of a basin governs the distribution, types, and amounts of hydrocarbons expected and hence the exploration strategy. Production in delta depocenters (Niger) is in structures cut by or updip from major growth faults, and structures not associated with such faults are barren. Production in block fault basins is on horsts next to deep sedimentary troughs (Sirte, North Sea). In basins whose sediment thickness, structure and geologic history are known to a moderate degree, the main oil occurrences can be specifically predicted by analysis of fault systems and possible hydrocarbon migration routes. Use of the concept permits the identification of significant targets which have either been downgraded or ignored in the past, such as production in or just updip from thrust belts, stratigraphic traps over the deep basin associated with major faulting, production over the basin deep, and regional stratigraphic trapping updip from established production along major fault zones.

  13. Introducing ADES: A New IAU Astrometry Data Exchange Standard

    Science.gov (United States)

    Chesley, Steven R.; Hockney, George M.; Holman, Matthew J.

    2017-10-01

    For several decades, small body astrometry has been exchanged, distributed and archived in the form of 80-column ASCII records. As a replacement for this obsolescent format, we have worked with a number of members of the community to develop the Astrometric Data Exchange Standard (ADES), which was formally adopted by IAU Commission 20 in August 2015 at the XXIX General Assembly in Honolulu, Hawaii. The purpose of ADES is to ensure that useful and available observational information is submitted, archived, and disseminated as needed. Availability of more complete information will allow orbit computers to process the data more correctly, leading to improved accuracy and reliability of orbital fits. In this way, it will be possible to fully exploit the improving accuracy and increasing number of both optical and radar observations. ADES overcomes several limitations of the previous format by allowing characterization of astrometric and photometric errors, adequate precision in time and angle fields, and flexibility and extensibility. To accommodate a diverse base of users, from automated surveys to hands-on follow-up observers, the ADES protocol allows for two file formats, eXtensible Markup Language (XML) and Pipe-Separated Values (PSV). Each format carries the same information and simple tools allow users to losslessly transform back and forth between XML and PSV. We have further developed and refined ADES since it was first announced in July 2015 [1]. The proposal at that time [2] has undergone several modest revisions to aid validation and avoid overloaded fields. We now have validation schema and file transformation utilities. Suitable example files, test suites, and input/output libraries in a number of modern programming languages are now available. Acknowledgements: Useful feedback during the development of ADES has been received from numerous colleagues in the community of observers and orbit specialists working on asteroids, comets, and planetary satellites.

  14. How Stressful Is "Deep Bubbling"?

    Science.gov (United States)

    Tyrmi, Jaana; Laukkanen, Anne-Maria

    2017-03-01

    Water resistance therapy by phonating through a tube into the water is used to treat dysphonia. Deep submersion (≥10 cm in water, "deep bubbling") is used for hypofunctional voice disorders. Using it with caution is recommended to avoid vocal overloading. This experimental study aimed to investigate how strenuous "deep bubbling" is. Fourteen subjects, half of them with voice training, repeated the syllable [pa:] in comfortable speaking pitch and loudness, loudly, and in strained voice. Thereafter, they phonated a vowel-like sound both in comfortable loudness and loudly into a glass resonance tube immersed 10 cm into the water. Oral pressure, contact quotient (CQ, calculated from electroglottographic signal), and sound pressure level were studied. The peak oral pressure P(oral) during [p] and shuttering of the outer end of the tube was measured to estimate the subglottic pressure P(sub) and the mean P(oral) during vowel portions to enable calculation of transglottic pressure P(trans). Sensations during phonation were reported with an open-ended interview. P(sub) and P(oral) were higher in "deep bubbling" and P(trans) lower than in loud syllable phonation, but the CQ did not differ significantly. Similar results were obtained for the comparison between loud "deep bubbling" and strained phonation, although P(sub) did not differ significantly. Most of the subjects reported "deep bubbling" to be stressful only for respiratory and lip muscles. No big differences were found between trained and untrained subjects. The CQ values suggest that "deep bubbling" may increase vocal fold loading. Further studies should address impact stress during water resistance exercises. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
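
    For readers unfamiliar with the pressure bookkeeping, the transglottic pressure is simply the subglottic pressure minus the (mean) oral pressure; a toy calculation with made-up values, not data from the study:

```python
# Transglottic pressure from the two measured quantities described above:
# P(trans) = P(sub) - mean P(oral). The numbers here are illustrative only.
p_sub = 12.0      # estimated from peak oral pressure during [p], in cmH2O
p_oral = 4.5      # mean oral pressure during the vowel portions, in cmH2O
p_trans = p_sub - p_oral
print(f"P(trans) = {p_trans:.1f} cmH2O")   # prints: P(trans) = 7.5 cmH2O
```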

  15. Deep vein thrombosis of the leg

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Hee; Rhee, Kwang Woo; Jeon, Suk Chul; Joo, Kyung Bin; Lee, Seung Ro; Seo, Heung Suk; Hahm, Chang Kok [College of Medicine, Hanyang University, Seoul (Korea, Republic of)

    1987-04-15

    Ascending contrast venography is the definitive standard method for the diagnosis of deep vein thrombosis (DVT) of the lower extremities. The authors analysed 22 cases of DVT clinically and radiographically. 1. The patients ranged in age from 15 to 70 years and the most prevalent age group was the 7th decade (31%). There was an equal distribution of males and females. 2. In 11 of the 22 cases, various etiologic factors were recognized, such as abdominal surgery, chronic bedridden state, local trauma to the leg, pregnancy, postpartum state, Behcet's syndrome, iliac artery aneurysm, and chronic medication with estrogen. 3. Nineteen of the 22 cases showed primary venographic signs of DVT, such as a well-defined filling defect in opacified veins and a narrowed, irregularly filled venous lumen. In only 3 cases was the diagnosis of DVT based upon segmental nonvisualization of deep veins with good opacification of proximal and distal veins and the presence of collaterals. 4. Extent of thrombosis: 3 cases were confined to the calf veins, 4 cases extended to the femoral vein, and 15 cases had involvement above the iliac vein. 5. In the 17 cases involving a relatively long segment of the deep veins, the propagation pattern of the thrombus was evaluated by its radiologic morphology according to the age of the thrombus: 9 cases suggested a central or antegrade propagation pattern and 8 cases a peripheral or retrograde pattern. 6. None of the 22 cases showed clinical evidence of pulmonary embolism. The rarity of pulmonary embolism in Koreans is presumed to be related to differences in the major site of involvement and the propagation pattern of DVT in the leg.

  16. Deep vein thrombosis of the leg

    International Nuclear Information System (INIS)

    Lee, Eun Hee; Rhee, Kwang Woo; Jeon, Suk Chul; Joo, Kyung Bin; Lee, Seung Ro; Seo, Heung Suk; Hahm, Chang Kok

    1987-01-01

    Ascending contrast venography is the definitive standard method for the diagnosis of deep vein thrombosis (DVT) of the lower extremities. The authors analysed 22 cases of DVT clinically and radiographically. 1. The patients ranged in age from 15 to 70 years and the most prevalent age group was the 7th decade (31%). There was an equal distribution of males and females. 2. In 11 of the 22 cases, various etiologic factors were recognized, such as abdominal surgery, chronic bedridden state, local trauma to the leg, pregnancy, postpartum state, Behcet's syndrome, iliac artery aneurysm, and chronic medication with estrogen. 3. Nineteen of the 22 cases showed primary venographic signs of DVT, such as a well-defined filling defect in opacified veins and a narrowed, irregularly filled venous lumen. In only 3 cases was the diagnosis of DVT based upon segmental nonvisualization of deep veins with good opacification of proximal and distal veins and the presence of collaterals. 4. Extent of thrombosis: 3 cases were confined to the calf veins, 4 cases extended to the femoral vein, and 15 cases had involvement above the iliac vein. 5. In the 17 cases involving a relatively long segment of the deep veins, the propagation pattern of the thrombus was evaluated by its radiologic morphology according to the age of the thrombus: 9 cases suggested a central or antegrade propagation pattern and 8 cases a peripheral or retrograde pattern. 6. None of the 22 cases showed clinical evidence of pulmonary embolism. The rarity of pulmonary embolism in Koreans is presumed to be related to differences in the major site of involvement and the propagation pattern of DVT in the leg.

  17. Accelerating Deep Learning with Shrinkage and Recall

    OpenAIRE

    Zheng, Shuai; Vishnu, Abhinav; Ding, Chris

    2016-01-01

    Deep Learning is a very powerful machine learning model. Deep Learning trains a large number of parameters for multiple layers and is very slow when data is in large scale and the architecture size is large. Inspired from the shrinking technique used in accelerating computation of Support Vector Machines (SVM) algorithm and screening technique used in LASSO, we propose a shrinking Deep Learning with recall (sDLr) approach to speed up deep learning computation. We experiment shrinking Deep Lea...

  18. What Really is Deep Learning Doing?

    OpenAIRE

    Xiong, Chuyu

    2017-01-01

    Deep learning has achieved a great success in many areas, from computer vision to natural language processing, to game playing, and much more. Yet, what deep learning is really doing is still an open question. There are a lot of works in this direction. For example, [5] tried to explain deep learning by group renormalization, and [6] tried to explain deep learning from the view of functional approximation. In order to address this very crucial question, here we see deep learning from perspect...

  19. Deep borehole disposal of high-level radioactive waste.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Freeze, Geoffrey A.; Brady, Patrick Vane; Swift, Peter N.; Rechard, Robert Paul; Arnold, Bill Walter; Kanney, Joseph F.; Bauer, Stephen J.

    2009-07-01

    Preliminary evaluation of deep borehole disposal of high-level radioactive waste and spent nuclear fuel indicates the potential for excellent long-term safety performance at costs competitive with mined repositories. Significant fluid flow through basement rock is prevented, in part, by low permeabilities, poorly connected transport pathways, and overburden self-sealing. Deep fluids also resist vertical movement because they are density stratified. Thermal hydrologic calculations estimate the thermal pulse from emplaced waste to be small (less than 20 °C at 10 meters from the borehole, for less than a few hundred years), and to result in maximum total vertical fluid movement of approximately 100 m. Reducing conditions will sharply limit solubilities of most dose-critical radionuclides at depth, and high ionic strengths of deep fluids will prevent colloidal transport. For the bounding analysis of this report, waste is envisioned to be emplaced as fuel assemblies stacked inside drill casing that are lowered, and emplaced using off-the-shelf oilfield and geothermal drilling techniques, into the lower 1-2 km portion of a vertical borehole approximately 45 cm in diameter and 3-5 km deep, followed by borehole sealing. Deep borehole disposal of radioactive waste in the United States would require modifications to the Nuclear Waste Policy Act and to applicable regulatory standards for long-term performance set by the US Environmental Protection Agency (40 CFR part 191) and US Nuclear Regulatory Commission (10 CFR part 60). The performance analysis described here is based on the assumption that long-term standards for deep borehole disposal would be identical in the key regards to those prescribed for existing repositories (40 CFR part 197 and 10 CFR part 63).

  20. DEEP EMPATHIC DESIGN

    Directory of Open Access Journals (Sweden)

    FRAQUELLI Roberto

    2015-06-01

    Full Text Available Empathic design is often described as a creative process that translates observations - typically of people and their behaviours - into design ideas. Another similar and closely related expression is user centred design, which attempts to turn the attention away from an object or product towards its usefulness and usability. The central premise of empathic design is that the best-designed products and services result from understanding the needs of the people who will use them. User-centred designers engage actively with end-users to gather insights that drive design from the earliest stages of product and service development, right through the design process.[1] Our standard definitions and understanding of empathic design or user centred design are well recognised and widely practiced, particularly taught in design schools and in professional creative business. This paper extends and explores a deeper understanding of empathy within a systems thinking framework where the observer and subject are both components of empathic design. It proposes that empathy can be described as the bonds of connection with others (in its traditional interpretation), but also with an ecological, social and economic context.

  1. Deep water challenges for drilling rig design

    Energy Technology Data Exchange (ETDEWEB)

    Roth, M [Transocean Sedco Forex, Houston, TX (United States)

    2001-07-01

    Drilling rigs designed for deep water must meet specific design considerations for harsh environments. The early lessons for rig design came from experiences in the North Sea. Rig efficiency and safety considerations must include structural integrity, isolated/redundant ballast controls, triple redundant DP systems, enclosed heated work spaces, and automated equipment such as bridge cranes, pipe handling gear, offline capabilities, subsea tree handling, and computerized drill floors. All components must be designed to harmonize man and machine. Some challenges which are unique to Eastern Canada include frequent storms and fog, cold temperature, icebergs, rig ice, and difficult logistics. This PowerPoint presentation described station keeping and mooring issues in terms of dynamic positioning issues. The environmental influence on riser management during forced disconnects was also described. Design issues for connected deep water risers must ensure elastic stability and control deflected shape. The design must also keep stresses within acceptable limits. Codes and standards for stress limits, flex joints and tension were also presented. tabs., figs.

  2. Photoacoustic image reconstruction via deep learning

    Science.gov (United States)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which make it possible to include prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time consuming, as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process based on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our presented numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
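
    The post-processing idea can be sketched as follows; this is a minimal sketch under our own assumptions (a plain three-layer network and stand-in data), not one of the two architectures evaluated in the paper. A small convolutional network learns to remove under-sampling artifacts from an initial reconstruction, so reconstruction at test time is a single forward pass.

```python
# Sketch of deep-learning post-processing for sparse-data reconstructions:
# a small CNN is trained to map an artifact-laden image to a clean one.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

recon = torch.randn(4, 1, 128, 128)   # stand-in sparse-data reconstructions
truth = torch.randn(4, 1, 128, 128)   # stand-in ground-truth images
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    # residual learning: the network predicts the artifact image,
    # which is subtracted from the input reconstruction
    loss = nn.functional.mse_loss(recon - denoiser(recon), truth)
    loss.backward()
    opt.step()
```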

  3. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  4. Deep Learning in Drug Discovery.

    Science.gov (United States)

    Gawehn, Erik; Hiss, Jan A; Schneider, Gisbert

    2016-01-01

    Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Eric Davidson and deep time.

    Science.gov (United States)

    Erwin, Douglas H

    2017-10-13

    Eric Davidson had a deep and abiding interest in the role developmental mechanisms played in generating evolutionary patterns documented in deep time, from the origin of the euechinoids to the processes responsible for the morphological architectures of major animal clades. Although not an evolutionary biologist, Davidson's interests long preceded the current excitement over comparative evolutionary developmental biology. Here I discuss three aspects at the intersection between his research and evolutionary patterns in deep time: First, understanding the mechanisms of body plan formation, particularly those associated with the early diversification of major metazoan clades. Second, a critique of early claims about ancestral metazoans based on the discoveries of highly conserved genes across bilaterian animals. Third, Davidson's own involvement in paleontology through a collaborative study of the fossil embryos from the Ediacaran Doushantuo Formation in south China.

  6. Deep Learning in Gastrointestinal Endoscopy.

    Science.gov (United States)

    Patel, Vivek; Armstrong, David; Ganguli, Malika; Roopra, Sandeep; Kantipudi, Neha; Albashir, Siwar; Kamath, Markad V

    2016-01-01

    Gastrointestinal (GI) endoscopy is used to inspect the lumen or interior of the GI tract for several purposes, including, (1) making a clinical diagnosis, in real time, based on the visual appearances; (2) taking targeted tissue samples for subsequent histopathological examination; and (3) in some cases, performing therapeutic interventions targeted at specific lesions. GI endoscopy is therefore predicated on the assumption that the operator-the endoscopist-is able to identify and characterize abnormalities or lesions accurately and reproducibly. However, as in other areas of clinical medicine, such as histopathology and radiology, many studies have documented marked interobserver and intraobserver variability in lesion recognition. Thus, there is a clear need and opportunity for techniques or methodologies that will enhance the quality of lesion recognition and diagnosis and improve the outcomes of GI endoscopy. Deep learning models provide a basis to make better clinical decisions in medical image analysis. Biomedical image segmentation, classification, and registration can be improved with deep learning. Recent evidence suggests that the application of deep learning methods to medical image analysis can contribute significantly to computer-aided diagnosis. Deep learning models are usually considered to be more flexible and provide reliable solutions for image analysis problems compared to conventional computer vision models. The use of fast computers offers the possibility of real-time support that is important for endoscopic diagnosis, which has to be made in real time. Advanced graphics processing units and cloud computing have also favored the use of machine learning, and more particularly, deep learning for patient care. This paper reviews the rapidly evolving literature on the feasibility of applying deep learning algorithms to endoscopic imaging.

  7. Deep hierarchical attention network for video description

    Science.gov (United States)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model for reducing a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to the encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model outperforms the state-of-the-art techniques.
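
    A rough sketch of the encoder side described above; the feature dimension, hidden size and the single attention layer are our assumptions for illustration, not the paper's exact hierarchical design.

```python
# Encoder sketch: per-frame CNN features -> bidirectional LSTM -> attention
# pooling over time into a context vector for the description decoder.
import torch
import torch.nn as nn

T, feat_dim, hid = 40, 2048, 256          # 40 frames of CNN features (assumed)
frames = torch.randn(1, T, feat_dim)      # stand-in per-frame CNN outputs

encoder = nn.LSTM(feat_dim, hid, batch_first=True, bidirectional=True)
attn_score = nn.Linear(2 * hid, 1)        # one scalar score per time step

h, _ = encoder(frames)                    # (1, T, 2*hid) temporal encoding
alpha = torch.softmax(attn_score(h), dim=1)   # attention weights over frames
context = (alpha * h).sum(dim=1)          # (1, 2*hid) context for the decoder
```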

  8. Deep mycoses in Amazon region.

    Science.gov (United States)

    Talhari, S; Cunha, M G; Schettini, A P; Talhari, A C

    1988-09-01

    Patients with deep mycoses diagnosed in dermatologic clinics of Manaus (state of Amazonas, Brazil) were studied from November 1973 to December 1983. They came from the Brazilian states of Amazonas, Pará, Acre, and Rondônia and the Federal Territory of Roraima. All of these regions, with the exception of Pará, are situated in the western part of the Amazon Basin. The climatic conditions in this region are almost the same: tropical forest, high rainfall, and mean annual temperature of 26 °C. The deep mycoses diagnosed, in order of frequency, were Jorge Lobo's disease, paracoccidioidomycosis, chromomycosis, sporotrichosis, mycetoma, cryptococcosis, zygomycosis, and histoplasmosis.

  9. Producing deep-water hydrocarbons

    International Nuclear Information System (INIS)

    Pilenko, Thierry

    2011-01-01

    Several studies relate the history and progress made in offshore production from oil and gas fields in relation to reserves and the techniques for producing oil offshore. The intention herein is not to review these studies but rather to argue that the activities of prospecting and producing deep-water oil and gas call for a combination of technology and project management and, above all, of devotion and innovation. Without this sense of commitment motivating men and women in this industry, the human adventure of deep-water production would never have taken place

  10. Opportunities and Challenges in Deep Mining: A Brief Review

    Directory of Open Access Journals (Sweden)

    Pathegama G. Ranjith

    2017-08-01

    Full Text Available Mineral consumption is increasing rapidly as more consumers enter the market for minerals and as the global standard of living increases. As a result, underground mining continues to progress to deeper levels in order to tackle the mineral supply crisis in the 21st century. However, deep mining occurs in a very technical and challenging environment, in which significant innovative solutions and best practice are required and additional safety standards must be implemented in order to overcome the challenges and reap huge economic gains. These challenges include the catastrophic events that are often met in deep mining engineering: rockbursts, gas outbursts, high in situ and redistributed stresses, large deformation, squeezing and creeping rocks, and high temperature. This review paper presents the current global status of deep mining and highlights some of the newest technological achievements and opportunities associated with rock mechanics and geotechnical engineering in deep mining. Of the various technical achievements, unmanned working-faces and unmanned mines based on fully automated mining and mineral extraction processes have become important fields in the 21st century.

  11. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu

    2017-12-23

    Motivation: Oxford Nanopore sequencing is a sequencing technology that has developed rapidly in recent years. To keep pace with the explosion of downstream data analysis tools, a versatile Nanopore sequencing simulator is needed to complement the experimental data as well as to benchmark those newly developed tools. However, all the currently available simulators are based on simple statistics of the produced reads, which have difficulty in capturing the complex nature of the Nanopore sequencing procedure, the main task of which is the generation of raw electrical current signals. Results: Here we propose a deep learning based simulator, DeepSimulator, to mimic the entire pipeline of Nanopore sequencing. Starting from a given reference genome or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments performed across four species show that the signals generated by our context-dependent model are more similar to the experimentally obtained signals than the ones generated by the official context-independent pore model. In terms of the simulated reads, we provide a parameter interface to users so that they can obtain reads with different accuracies ranging from 83% to 97%. The reads generated with the default parameters have almost the same properties as the real data. Two case studies demonstrate the application of DeepSimulator to benefit the development of tools in de novo assembly and in low-coverage SNP detection. Availability: The software can be accessed freely at: https://github.com/lykaust15/DeepSimulator.
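
    The pipeline described in the abstract can be mocked up schematically as below; the function bodies are placeholders standing in for the trained signal model and base-caller, not DeepSimulator's actual code.

```python
# Schematic of the DeepSimulator workflow (sequence -> simulated current
# signal -> base-called read); all model logic here is a toy stand-in.
import random

def signal_model(kmer_context):
    """Stand-in for the context-dependent deep model that maps a local
    sequence context to a simulated electrical current level."""
    random.seed(hash(kmer_context))
    return random.gauss(100.0, 10.0)       # arbitrary pA-like value

def simulate_signal(sequence, k=6):
    """Slide a k-mer window over the input sequence and emit one
    simulated current sample per context."""
    return [signal_model(sequence[i:i + k])
            for i in range(len(sequence) - k + 1)]

def base_call(signal):
    """Placeholder for the base-calling step that converts the simulated
    signal back into a (noisy) read."""
    return "ACGT" * (len(signal) // 4)     # dummy read

reference = "ACGTACGTGGTTACGTAACCGGTT"     # stand-in reference/contig
read = base_call(simulate_signal(reference))
```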

  12. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-09-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a study to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. An assessment of historical deep gas well drilling activity and forecast of future trends was completed during the first six months of the project; this segment of the project was covered in Technical Project Report No. 1. The second progress report covers the next six months of the project during which efforts were primarily split between summarizing rock mechanics and fracture growth in deep reservoirs and contacting operators about case studies of deep gas well stimulation.

  13. STIMULATION TECHNOLOGIES FOR DEEP WELL COMPLETIONS

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2003-06-01

    The Department of Energy (DOE) is sponsoring a Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies is conducting a project to evaluate the stimulation of deep wells. The objective of the project is to assess U.S. deep well drilling & stimulation activity, review rock mechanics & fracture growth in deep, high pressure/temperature wells and evaluate stimulation technology in several key deep plays. Phase 1 was recently completed and consisted of assessing deep gas well drilling activity (1995-2007) and an industry survey on deep gas well stimulation practices by region. Of the 29,000 oil, gas and dry holes drilled in 2002, about 300 were deep wells; 25% were dry, 50% were high temperature/high pressure completions and 25% were simply deep completions. South Texas has about 30% of these wells, Oklahoma 20%, Gulf of Mexico Shelf 15% and the Gulf Coast about 15%. The Rockies represent only 2% of deep drilling. Of the 60 operators who drill deep and HTHP wells, the top 20 drill almost 80% of the wells. Six operators drill half the U.S. deep wells. Deep drilling peaked at 425 wells in 1998 and fell to 250 in 1999. Drilling is expected to rise through 2004, after which it should cycle down as overall drilling declines.

  14. Deep Space Climate Observatory (DSCOVR)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Deep Space Climate ObserVatoRy (DSCOVR) satellite is a NOAA operated asset at the first Lagrange (L1) point. The primary space weather instrument is the PlasMag...

  15. Ploughing the deep sea floor.

    Science.gov (United States)

    Puig, Pere; Canals, Miquel; Company, Joan B; Martín, Jacobo; Amblas, David; Lastras, Galderic; Palanques, Albert

    2012-09-13

    Bottom trawling is a non-selective commercial fishing technique whereby heavy nets and gear are pulled along the sea floor. The direct impact of this technique on fish populations and benthic communities has received much attention, but trawling can also modify the physical properties of seafloor sediments, water–sediment chemical exchanges and sediment fluxes. Most of the studies addressing the physical disturbances of trawl gear on the seabed have been undertaken in coastal and shelf environments, however, where the capacity of trawling to modify the seafloor morphology coexists with high-energy natural processes driving sediment erosion, transport and deposition. Here we show that on upper continental slopes, the reworking of the deep sea floor by trawling gradually modifies the shape of the submarine landscape over large spatial scales. We found that trawling-induced sediment displacement and removal from fishing grounds causes the morphology of the deep sea floor to become smoother over time, reducing its original complexity as shown by high-resolution seafloor relief maps. Our results suggest that in recent decades, following the industrialization of fishing fleets, bottom trawling has become an important driver of deep seascape evolution. Given the global dimension of this type of fishery, we anticipate that the morphology of the upper continental slope in many parts of the world’s oceans could be altered by intensive bottom trawling, producing comparable effects on the deep sea floor to those generated by agricultural ploughing on land.

  16. FOSTERING DEEP LEARNING AMONGST ENTREPRENEURSHIP ...

    African Journals Online (AJOL)

    An important prerequisite for this important objective to be achieved is that lecturers ensure that students adopt a deep learning approach towards entrepreneurship courses been taught, as this will enable them to truly understand key entrepreneurial concepts and strategies and how they can be implemented in the real ...

  17. Deep Space Gateway "Recycler" Mission

    Science.gov (United States)

    Graham, L.; Fries, M.; Hamilton, J.; Landis, R.; John, K.; O'Hara, W.

    2018-02-01

    Use of the Deep Space Gateway provides a hub for a reusable planetary sample return vehicle for missions to gather star dust as well as samples from various parts of the solar system, including main belt asteroids, near-Earth asteroids, and the moons of Mars.

  18. Deep freezers with heat recovery

    Energy Technology Data Exchange (ETDEWEB)

    Kistler, J.

    1981-09-02

    Together with space and water heating systems, deep freezers are the biggest energy consumers in households. The article investigates the possibility of using the waste heat for water heating. The design principle of such a system is presented in a wiring diagram.

  19. A Deep-Sea Simulation.

    Science.gov (United States)

    Montes, Georgia E.

    1997-01-01

    Describes an activity that simulates exploration techniques used in deep-sea explorations and teaches students how this technology can be used to take a closer look inside volcanoes, inspect hazardous waste sites such as nuclear reactors, and explore other environments dangerous to humans. (DDR)

  20. Barbados Deep-Water Sponges

    NARCIS (Netherlands)

    Soest, van R.W.M.; Stentoft, N.

    1988-01-01

    Deep-water sponges dredged up in two locations off the west coast of Barbados are systematically described. A total of 69 species is recorded, among which 16 are new to science, viz. Pachymatisma geodiformis, Asteropus syringiferus, Cinachyra arenosa, Theonella atlantica, Corallistes paratypus,

  1. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

    With the dramatic growth of the image data on the web, there is an increasing demand of the algorithms capable of understanding the visual information automatically. Deep learning, served as one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,

  2. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  3. DM Considerations for Deep Drilling

    OpenAIRE

    Dubois-Felsmann, Gregory

    2016-01-01

    An outline of the current situation regarding the DM plans for the Deep Drilling surveys and an invitation to the community to provide feedback on what they would like to see included in the data processing and visualization of these surveys.

  4. Lessons from Earth's Deep Time

    Science.gov (United States)

    Soreghan, G. S.

    2005-01-01

    Earth is a repository of data on climatic changes from its deep-time history. Article discusses the collection and study of these data to predict future climatic changes, the need to create national study centers for the purpose, and the necessary cooperation between different branches of science in climatic research.

  5. Digging Deeper: The Deep Web.

    Science.gov (United States)

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  6. Deep Learning and Music Adversaries

    DEFF Research Database (Denmark)

    Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan

    2015-01-01

    the minimal perturbation of the input image such that the system misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of deep learning systems applied to music content analysis. In our case, however, the system inputs are magnitude spectral frames, which require...

  7. Stimulation Technologies for Deep Well Completions

    Energy Technology Data Exchange (ETDEWEB)

    Stephen Wolhart

    2005-06-30

    The Department of Energy (DOE) is sponsoring the Deep Trek Program targeted at improving the economics of drilling and completing deep gas wells. Under the DOE program, Pinnacle Technologies conducted a study to evaluate the stimulation of deep wells. The objective of the project was to review U.S. deep well drilling and stimulation activity, review rock mechanics and fracture growth in deep, high-pressure/temperature wells and evaluate stimulation technology in several key deep plays. This report documents results from this project.

  8. Determining the dark matter mass with DeepCore

    Energy Technology Data Exchange (ETDEWEB)

    Das, Chitta R. [Centro de Física Teórica de Partículas, Instituto Superior Técnico (CFTP), Universidade Técnica de Lisboa, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Mena, Olga [Instituto de Física Corpuscular (IFIC), CSIC-Universitat de València, Apartado de Correos 22085, E-46071 Valencia (Spain); Palomares-Ruiz, Sergio, E-mail: sergio.palomares.ruiz@ist.utl.pt [Centro de Física Teórica de Partículas, Instituto Superior Técnico (CFTP), Universidade Técnica de Lisboa, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Instituto de Física Corpuscular (IFIC), CSIC-Universitat de València, Apartado de Correos 22085, E-46071 Valencia (Spain); Pascoli, Silvia [IPPP, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom)

    2013-10-01

    Cosmological and astrophysical observations provide increasing evidence of the existence of dark matter in our Universe. Dark matter particles with a mass above a few GeV can be captured by the Sun, accumulate in the core, annihilate, and produce high energy neutrinos either directly or by subsequent decays of Standard Model particles. We investigate the prospects for indirect dark matter detection in the IceCube/DeepCore neutrino telescope and its capabilities to determine the dark matter mass.

  9. Deep Web and Dark Web: Deep World of the Internet

    OpenAIRE

    Çelik, Emine

    2018-01-01

    The Internet is undoubtedly still a revolutionary breakthrough in the history of humanity. Many people use the internet for communication, social media, shopping, political and social agendas, and more. The concepts of the Deep Web and the Dark Web are handled not only by computer and software engineers but also by social scientists, because of the role of the internet for states in international arenas, public institutions and human life. Starting from the point that the internet plays a very important role for social s...

  10. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.

    Science.gov (United States)

    Wachinger, Christian; Reuter, Martin; Klein, Tassilo

    2018-04-15

    We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we do not only predict the center voxel of the patch but also neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
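
    A minimal sketch of the patch network described above (three convolutional blocks with pooling, batch normalization and non-linearities, then fully connected layers with dropout); the patch size, channel counts and the center-voxel-plus-6-neighbors multi-task head are our assumptions, not the paper's exact configuration.

```python
# Patch-based 3D CNN sketch with a multi-task head: one class distribution
# per predicted voxel (center voxel plus neighbors, as multi-task learning).
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm3d(c_out), nn.ReLU(), nn.MaxPool3d(2),
    )

class PatchNet(nn.Module):
    def __init__(self, n_classes=26, n_targets=7):
        # n_classes: assumed 25 structures + background
        # n_targets: assumed center voxel + 6 face neighbors
        super().__init__()
        self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 2 * 2 * 2, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, n_targets * n_classes),
        )
        self.n_targets, self.n_classes = n_targets, n_classes

    def forward(self, patch):
        out = self.classifier(self.features(patch))
        # reshape so each predicted voxel gets its own class distribution
        return out.view(-1, self.n_targets, self.n_classes)

logits = PatchNet()(torch.randn(8, 1, 16, 16, 16))   # batch of 16^3 patches
```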

  11. Deep Phenotyping: Deep Learning For Temporal Phenotype/Genotype Classification

    OpenAIRE

    Najafi, Mohammad; Namin, Sarah; Esmaeilzadeh, Mohammad; Brown, Tim; Borevitz, Justin

    2017-01-01

    High resolution and high throughput genotype-to-phenotype studies in plants are underway to accelerate breeding of climate-ready crops. Complex developmental phenotypes are observed by imaging a variety of accessions in different environment conditions; however, extracting the genetically heritable traits is challenging. In recent years, deep learning techniques and in particular Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) and Long-Short Term Memories (LSTMs), h...

  12. VIRAC: the VVV Infrared Astrometric Catalogue

    Science.gov (United States)

    Smith, L. C.; Lucas, P. W.; Kurtev, R.; Smart, R.; Minniti, D.; Borissova, J.; Jones, H. R. A.; Zhang, Z. H.; Marocco, F.; Contreras Peña, C.; Gromadzki, M.; Kuhn, M. A.; Drew, J. E.; Pinfield, D. J.; Bedin, L. R.

    2018-02-01

    We present VIRAC version 1, a near-infrared proper motion and parallax catalogue of the VISTA Variables in the Via Lactea (VVV) survey for 312 587 642 unique sources averaged across all overlapping pawprint and tile images covering 560 deg2 of the bulge of the Milky Way and southern disc. The catalogue includes 119 million high-quality proper motion measurements, of which 47 million have statistical uncertainties below 1 mas yr-1. The catalogue enables proper-motion selection of nearby objects (stars and brown dwarfs, subdwarfs, white dwarfs) and kinematic distance measurements of young stellar objects. Nearby objects discovered include LTT 7251 B, an L7 benchmark companion to a G dwarf with over 20 published elemental abundances, a bright L subdwarf, VVV 1256-6202, with extremely blue colours, and nine new members of the 25 pc sample. We also demonstrate why this catalogue remains useful in the era of Gaia. Future versions will be based on profile fitting photometry, use the Gaia absolute reference frame and incorporate the longer time baseline of the VVV extended survey.
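
    The basic measurement behind such a catalogue is a least-squares fit of position against epoch; a toy example with made-up measurements (not VIRAC code):

```python
# A proper motion is the slope of a star's position offset versus epoch;
# here it is fitted by unweighted least squares. All numbers are invented.
import numpy as np

epochs = np.array([2010.3, 2011.4, 2012.5, 2013.6, 2015.2])  # years
ra_off = np.array([0.0, 11.2, 22.1, 33.5, 50.9])             # RA offsets, mas

slope, intercept = np.polyfit(epochs - epochs.mean(), ra_off, 1)
print(f"proper motion ~ {slope:.1f} mas/yr")                 # ~10 mas/yr here
```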

  13. VIRAC: The VVV Infrared Astrometric Catalogue

    OpenAIRE

    Smith, L. C.; Lucas, P. W.; Kurtev, R.; Smart, R.; Minniti, D.; Borissova, J.; Jones, H. R. A; Zhang, Z. H.; Marocco, F.; Peña, C. Contreras; Gromadzki, M.; Kuhn, M. A.; Drew, J. E.; Pinfield, D. J.; Bedin, L. R.

    2017-01-01

    We present VIRAC version 1, a near-infrared proper motion and parallax catalogue of the VISTA VVV survey for 312,587,642 unique sources averaged across all overlapping pawprint and tile images covering 560 deg$^2$ of the bulge of the Milky Way and southern disk. The catalogue includes 119 million high quality proper motion measurements, of which 47 million have statistical uncertainties below 1 mas yr$^{-1}$. In the 11$

  14. Improved Astrometric Parameters for Equatorial LDS Systems

    OpenAIRE

    C. E. López

    2006-01-01

    The publication of extensive catalogues, both astrometric and non-astrometric, together with the implementation of tools for handling them by the incipient Virtual Observatories, constitutes an excellent combination for large-scale searches for different types of objects. An example is the one reported in this work, in which astrometric parameters and magnitudes in different bands are given for Luyten equatorial double systems (LDS).

  15. Deep Neuromuscular Blockade Improves Laparoscopic Surgical Conditions

    DEFF Research Database (Denmark)

    Rosenberg, Jacob; Herring, W Joseph; Blobner, Manfred

    2017-01-01

    INTRODUCTION: Sustained deep neuromuscular blockade (NMB) during laparoscopic surgery may facilitate optimal surgical conditions. This exploratory study assessed whether deep NMB improves surgical conditions and, in doing so, allows use of lower insufflation pressures during laparoscopic cholecys...

  16. Joint Training of Deep Boltzmann Machines

    OpenAIRE

    Goodfellow, Ian; Courville, Aaron; Bengio, Yoshua

    2012-01-01

    We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks.

  17. Bacteriological examination and biological characteristics of deep frozen bone preserved by gamma sterilization

    International Nuclear Information System (INIS)

    Pham Quang Ngoc; Le The Trung; Vo Van Thuan; Ho Minh Duc

    1999-01-01

    To promote surgical success in Vietnam, bone allografts of different sizes must be supplied. For this reason we have developed a standard procedure for the procurement, deep freezing, packaging and radiation sterilization of massive bone; the achievements of this effort are briefly reported. A dose of 10-15 kGy proved suitable for radiation sterilization of massive bone allografts processed under clean conditions and preserved deep-frozen. Neither deep freezing nor radiation sterilization causes any significant loss of biochemical stability in massive bone allografts, especially when deep freezing is combined with irradiation. Neither cross-infection nor changes in biological characteristics were found after 6 months of storage following radiation treatment. In addition to the results of previous research and development of tissue grafts for medical care, deep-freezing radiation sterilization has been established for the preservation of massive bone, which is in high demand for surgery in Vietnam

  18. Deep-learning top taggers or the end of QCD?

    Energy Technology Data Exchange (ETDEWEB)

    Kasieczka, Gregor [Institute for Particle Physics, ETH Zürich,Otto-Stern-Weg 5, Zürich (Switzerland); Plehn, Tilman [Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, Heidelberg (Germany); Russell, Michael [School of Physics and Astronomy, University of Glasgow,Glasgow G12 8QQ, Glasgow (United Kingdom); Schell, Torben [Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, Heidelberg (Germany)

    2017-05-02

    Machine learning based on convolutional neural networks can be used to study jet images from the LHC. Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.
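
    Concretely, a jet-image top tagger of this kind is a small binary-classification CNN over calorimeter images. The sketch below is a hedged illustration only; the image size and layer widths are assumptions, not the DeepTop configuration.

      # Illustrative CNN for top-vs-QCD jet images (all sizes assumed)
      import tensorflow as tf
      from tensorflow.keras import layers, models

      model = models.Sequential([
          layers.Input(shape=(40, 40, 1)),            # pixelated calorimeter image of a fat jet
          layers.Conv2D(8, 4, activation="relu", padding="same"),
          layers.Conv2D(8, 4, activation="relu", padding="same"),
          layers.MaxPooling2D(2),
          layers.Conv2D(8, 4, activation="relu", padding="same"),
          layers.Flatten(),
          layers.Dense(64, activation="relu"),
          layers.Dense(1, activation="sigmoid"),      # P(top) versus QCD background
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=[tf.keras.metrics.AUC()])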

  19. Jet flavor tagging with Deep Learning using Python

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Besides the part that implements the resulting deep neural net in the ATLAS C++ software framework, a Python framework has been developed to connect HEP data to standard Data Science Python based libraries for Machine Learning. It makes use of HDF5, JSON and Pickle as intermediate data storage format, pandas and numpy for data handling and calculations, Keras for neural net construction and training as well as testing and matplotlib for plotting. It can be seen as an example of taking advantage of outside-HEP software developments without relying on the HEP standard ROOT.
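
    The pipeline described (HDF5 intermediates, pandas/numpy handling, Keras training, matplotlib plots) might look roughly as follows; the file name, key and column names are placeholders, not the actual ATLAS setup.

      # Schematic HEP-to-Keras pipeline (paths and columns are hypothetical)
      import pandas as pd
      import numpy as np
      import matplotlib.pyplot as plt
      from tensorflow.keras import layers, models

      df = pd.read_hdf("jets.h5", key="jets")          # HDF5 as intermediate storage
      X = df.drop(columns=["label"]).to_numpy(dtype=np.float32)
      y = df["label"].to_numpy()                       # e.g. 0 = light, 1 = c, 2 = b

      model = models.Sequential([
          layers.Input(shape=(X.shape[1],)),
          layers.Dense(64, activation="relu"),
          layers.Dense(64, activation="relu"),
          layers.Dense(3, activation="softmax"),
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
      history = model.fit(X, y, epochs=10, validation_split=0.2)

      plt.plot(history.history["loss"], label="train")
      plt.plot(history.history["val_loss"], label="validation")
      plt.legend()
      plt.savefig("loss.png")                          # matplotlib for plotting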

  20. Deep-learning top taggers or the end of QCD?

    International Nuclear Information System (INIS)

    Kasieczka, Gregor; Plehn, Tilman; Russell, Michael; Schell, Torben

    2017-01-01

    Machine learning based on convolutional neural networks can be used to study jet images from the LHC. Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.

  1. Development of a code of practice for deep geothermal wells

    International Nuclear Information System (INIS)

    Leaver, J.D.; Bolton, R.S.; Dench, N.D.; Fooks, L.

    1990-01-01

    Recent and on-going changes to the structure of the New Zealand geothermal industry have shifted responsibility for the development of geothermal resources from central government to private enterprise. The need for a code of practice for deep geothermal wells was identified by the Geothermal Inspectorate of the Ministry of Commerce to maintain adequate standards of health and safety and to assist with industry deregulation. This paper reports that the Code contains details of the methods, procedures, formulae and design data necessary to attain those standards, and includes information with which drilling engineers whose experience is limited to the oil industry could not be expected to be familiar

  2. Developing Deep Learning Applications for Life Science and Pharma Industry.

    Science.gov (United States)

    Siegismund, Daniel; Tolkachev, Vasily; Heyse, Stephan; Sick, Beate; Duerr, Oliver; Steigele, Stephan

    2018-06-01

    Deep Learning has boosted artificial intelligence over the past 5 years and is now seen as one of the major technological innovation areas, predicted to replace many repetitive but complex tasks of human labor within the next decade. It is also expected to be 'game changing' for research activities in pharma and life sciences, where large sets of similar yet complex data samples are systematically analyzed. Deep learning is currently conquering formerly expert domains, especially areas requiring perception that were previously not amenable to standard machine learning. A typical example is the automated analysis of images, which are produced en masse in many domains, e.g., in high-content screening or digital pathology. Deep learning makes it possible to create competitive applications in hitherto core domains of 'human intelligence'. Applications of artificial intelligence have been enabled in recent years by (i) the massive availability of data samples, collected in pharma-driven drug programs (= 'big data'), (ii) deep learning algorithmic advancements and (iii) the increase in compute power. Such applications are based on software frameworks with specific strengths and weaknesses. Here, we introduce typical applications and underlying frameworks for deep learning, with a set of practical criteria for developing production-ready solutions in life science and pharma research. Based on our own experience in successfully developing deep learning applications, we provide suggestions and a baseline for selecting the most suitable frameworks for a future-proof and cost-effective development. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Deep learning guided stroke management: a review of clinical applications.

    Science.gov (United States)

    Feng, Rui; Badgeley, Marcus; Mocco, J; Oermann, Eric K

    2018-04-01

    Stroke is a leading cause of long-term disability, and outcome is directly related to timely intervention. Not all patients benefit from rapid intervention, however. Thus a significant amount of attention has been paid to using neuroimaging to assess potential benefit by identifying areas of ischemia that have not yet experienced cellular death. The perfusion-diffusion mismatch is used as a simple metric for potential benefit with timely intervention, yet penumbral patterns provide an inaccurate predictor of clinical outcome. Machine learning research in the form of deep learning (artificial intelligence) techniques using deep neural networks (DNNs) excels at working with complex inputs. The key areas where deep learning may be imminently applied to stroke management are image segmentation, automated featurization (radiomics), and multimodal prognostication. The application of convolutional neural networks, the family of DNN architectures designed to work with images, to stroke imaging data is a perfect match between a mature deep learning technique and a data type that is naturally suited to benefit from deep learning's strengths. These powerful tools have opened up exciting opportunities for data-driven stroke management for acute intervention and for guiding prognosis. Deep learning techniques are useful for the speed and power of the results they can deliver and will become an increasingly standard tool in the modern stroke specialist's arsenal for delivering personalized medicine to patients with ischemic stroke. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Deep Carbon Observatory investigates Carbon from Crust to Core: An Academic Record of the History of Deep Carbon Science

    Science.gov (United States)

    Mitton, S. A.

    2017-12-01

    Carbon plays an unparalleled role in our lives: as the element of life, as the basis of most of society's energy, as the backbone of most new materials, and as the central focus in efforts to understand Earth's variable and uncertain climate. Yet in spite of carbon's importance, scientists remain largely ignorant of the physical, chemical, and biological behavior of many of Earth's carbon-bearing systems. The Deep Carbon Observatory (DCO) is a global research program to transform our understanding of carbon in Earth. At its heart, DCO is a community of scientists, from biologists to physicists, geoscientists to chemists, and many others whose work crosses these disciplinary lines, forging a new, integrative field of deep carbon science. As a historian of science, I specialise in the history of planetary science and astronomy since 1900, directed toward understanding the steps on the road to discovering the internal dynamics of our planet. Within a framework that describes the historical background to the new field of Earth System Science, I present the first history of deep carbon science. This project identifies the key discoveries of deep carbon science and assesses the impact of new knowledge on geochemistry, geodynamics, and geobiology. The project will lead to the publication, in book form in 2019, of an illuminating narrative highlighting the engaging human stories of many remarkable scientists and natural philosophers from whom we have learned about the complexity of Earth's internal world. On this journey of discovery we encounter not just the pioneering researchers of deep carbon science, but also their institutions, their instrumental inventiveness, and their passion for exploration. The book is organised thematically around the four communities of the Deep Carbon Observatory: Deep Life, Extreme Physics and Chemistry, Reservoirs and Fluxes, and Deep Energy. The presentation has a gallery and list of Deep Carbon...

  5. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  6. The Dynamics of Standardization

    DEFF Research Database (Denmark)

    Brunsson, Nils; Rasche, Andreas; Seidl, David

    2012-01-01

    This paper suggests that when the phenomenon of standards and standardization is examined from the perspective of organization studies, three aspects stand out: the standardization of organizations, standardization by organizations and standardization as (a form of) organization. Following a comp...

  7. Hydride vapor phase GaN films with reduced density of residual electrons and deep traps

    International Nuclear Information System (INIS)

    Polyakov, A. Y.; Smirnov, N. B.; Govorkov, A. V.; Yugova, T. G.; Cox, H.; Helava, H.; Makarov, Yu.; Usikov, A. S.

    2014-01-01

    Electrical properties and deep electron and hole trap spectra are compared for undoped n-GaN films grown by hydride vapor phase epitaxy (HVPE) in the regular process (standard HVPE samples) and in an HVPE process optimized for decreasing the concentration of residual donor impurities (improved HVPE samples). It is shown that the residual donor density can be reduced by optimization from ~10^17 cm^-3 to (2-5) × 10^14 cm^-3. The density of deep hole traps and deep electron traps decreases with decreased donor density, so that the concentration of deep hole traps in the improved samples is reduced to ~5 × 10^13 cm^-3 versus 2.9 × 10^16 cm^-3 in the standard samples, with a similar decrease in the electron trap concentration.

  8. Search for sterile neutrinos with IceCube DeepCore

    Energy Technology Data Exchange (ETDEWEB)

    Terliuk, Andrii [DESY, Platanenallee 6, 15738 Zeuthen (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    The DeepCore detector is a sub-array of the IceCube Neutrino Observatory that lowers the energy threshold for neutrino detection down to approximately 10 GeV. DeepCore is used for a variety of studies including atmospheric neutrino oscillations. The standard three-neutrino oscillation paradigm is tested using the DeepCore detector by searching for an additional light, sterile neutrino with a mass on the order of 1 eV. Sterile neutrinos do not interact with ordinary matter; however, they can mix with the three active neutrino states. Such mixing changes the picture of standard neutrino oscillations for atmospheric neutrinos with energies below 100 GeV. The capabilities of the DeepCore detector to measure such sterile neutrino mixing will be presented in this talk.
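
    As a rough illustration of the physics, in the simplest 3+1 two-flavour vacuum approximation (neglecting the Earth matter effects that are essential in the full IceCube analysis) the muon-neutrino survival probability reads

      $P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^2(2\theta_{24})\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{41}\,[\mathrm{eV}^2]\,L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),$

    so a ~1 eV$^2$ mass splitting modifies the oscillation pattern of atmospheric neutrinos crossing the Earth at energies below 100 GeV.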

  9. Is Multitask Deep Learning Practical for Pharma?

    Science.gov (United States)

    Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay

    2017-08-28

    Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
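
    A generic multitask network of the kind described shares a learned representation across assays and attaches one output head per task. The sketch below is an illustrative stand-in, not the DeepChem implementation; the feature and task counts are assumed.

      # Hypothetical multitask classifier: shared trunk, one head per assay
      from tensorflow.keras import layers, models

      n_features, n_tasks = 1024, 12                   # e.g. fingerprints, 12 assays (assumed)
      inp = layers.Input(shape=(n_features,))
      h = layers.Dense(1000, activation="relu")(inp)   # representation shared by all tasks
      h = layers.Dropout(0.25)(h)
      outs = [layers.Dense(1, activation="sigmoid", name=f"task_{t}")(h)
              for t in range(n_tasks)]
      model = models.Model(inp, outs)
      model.compile(optimizer="adam", loss="binary_crossentropy")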

  10. Evaluation of the DeepWind concept

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Borg, Michael; Gonzales Seabra, Luis Alberto

    The report describes the DeepWind 5 MW conceptual design as a baseline for results obtained in the scientific and technical work packages of the DeepWind project. A comparison of DeepWind with existing VAWTs and paper projects is carried out, together with an evaluation of the concept in terms of cost...

  11. Consolidated Deep Actor Critic Networks (DRAFT)

    NARCIS (Netherlands)

    Van der Laan, T.A.

    2015-01-01

    The works [Mnih, Volodymyr, et al. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.] and [Mnih, Volodymyr, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.] have demonstrated the power of combining deep neural networks with

  12. Simulator Studies of the Deep Stall

    Science.gov (United States)

    White, Maurice D.; Cooper, George E.

    1965-01-01

    Simulator studies of the deep-stall problem encountered with modern airplanes are discussed. The results indicate that the basic deep-stall tendencies produced by aerodynamic characteristics are augmented by operational considerations. Because of control difficulties to be anticipated in the deep stall, it is desirable that adequate safeguards be provided against inadvertent penetrations.

  13. TOPIC MODELING: CLUSTERING OF DEEP WEBPAGES

    OpenAIRE

    Muhunthaadithya C; Rohit J.V; Sadhana Kesavan; E. Sivasankar

    2015-01-01

    The internet comprises a massive amount of information in the form of zillions of web pages. This information can be categorized into the surface web and the deep web. Existing search engines can effectively make use of surface web information, but the deep web remains unexploited. Machine learning techniques have been commonly employed to access deep web content.

  14. Shear Strengthening of RC Deep Beam Using Externally Bonded GFRP Fabrics

    Science.gov (United States)

    Kumari, A.; Patel, S. S.; Nayak, A. N.

    2018-06-01

    This work presents an experimental investigation of RC deep beams wrapped with externally bonded Glass Fibre Reinforced Polymer (GFRP) fabrics, in order to study the load versus deflection behaviour, cracking pattern, failure modes and ultimate shear strength. A total of five deep beams were cast, designed with conventional steel reinforcement as per IS: 456 (Indian standard plain and reinforced concrete—code for practice, Bureau of Indian Standards, New Delhi, 2000). The span-to-depth ratio of all RC deep beams was kept below 2 as per the above specification. Of the five RC deep beams, one without retrofitting serves as a reference beam; the remaining four were wrapped with GFRP fabrics in multiple layers and tested under two-point loading. The first cracking load, ultimate load and the shear contribution of GFRP to the deep beams have been observed. A critical discussion is made of the enhancement in strength, behaviour and performance of the retrofitted deep beams in comparison with the deep beam without GFRP, in order to explore the potential use of GFRP for strengthening RC deep beams. Test results demonstrate that deep beams retrofitted with GFRP show a slower development of the diagonal cracks and improved shear carrying capacity. A comparative study of the experimental results with the theoretical ones predicted by various researchers in the literature is also presented. It is observed that the ultimate load of beams retrofitted with GFRP fabrics increases with the number of GFRP layers up to a specific number, i.e. 3 layers, beyond which it decreases.

  15. DeepFlavour in CMS

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Flavour-tagging of jets is an important task in collider-based high energy physics and a field where machine learning tools are applied by all major experiments. A new tagger (DeepFlavour) based on an advanced machine learning procedure was developed and commissioned in CMS. A deep neural network is used for multi-classification of jets that originate from a b-quark, two b-quarks, a c-quark, two c-quarks or light coloured particles (u, d, s-quark or gluon). The performance was measured in both data and simulation. The talk will also include the measured performance of all taggers in CMS. The different taggers and results will be discussed and compared, with some focus on details of the newest tagger.

  16. Deep Learning for ECG Classification

    Science.gov (United States)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications where this problem can be stated. Currently, there are many machine learning (ML) solutions which can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML solutions is the use of heuristic hand-crafted or engineered features with shallow feature-learning architectures. The risk is that the chosen features may not be the most appropriate ones, limiting the achievable classification accuracy for this ECG problem. One proposed solution is to use deep learning architectures, where the first layers of convolutional neurons behave as feature extractors and, at the end, some fully-connected (FCN) layers are used for making the final decision about ECG classes. In this work, a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented and some classification results are shown.
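
    A minimal rendering of that design, with 1D convolutions as feature extractors feeding fully-connected layers, is sketched below; the signal length, filter sizes and number of classes are assumptions, not the paper's exact configuration.

      # Sketch: 1D CNN feature extractor + fully-connected classifier for ECG
      from tensorflow.keras import layers, models

      model = models.Sequential([
          layers.Input(shape=(3600, 1)),               # e.g. 10 s of ECG at 360 Hz (assumed)
          layers.Conv1D(32, 16, activation="relu"),
          layers.MaxPooling1D(4),
          layers.Conv1D(64, 16, activation="relu"),
          layers.MaxPooling1D(4),
          layers.GlobalAveragePooling1D(),
          layers.Dense(128, activation="relu"),        # FC layers make the final decision
          layers.Dense(4, activation="softmax"),       # e.g. 4 rhythm classes (assumed)
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")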

  17. Deep Space Habitat Concept Demonstrator

    Science.gov (United States)

    Bookout, Paul S.; Smitherman, David

    2015-01-01

    This project will develop, integrate, test, and evaluate Habitation Systems that will be utilized as technology testbeds and will advance NASA's understanding of alternative deep space mission architectures, requirements, and operations concepts. Rapid prototyping and existing hardware will be utilized to develop full-scale habitat demonstrators. FY 2014 focused on the development of a large volume Space Launch System (SLS) class habitat (Skylab Gen 2) based on the SLS hydrogen tank components. Similar to the original Skylab, a tank section of the SLS rocket can be outfitted with a deep space habitat configuration and launched as a payload on an SLS rocket. This concept can be used to support extended stay at the Lunar Distant Retrograde Orbit to support the Asteroid Retrieval Mission and provide a habitat suitable for human missions to Mars.

  18. Hybrid mask for deep etching

    KAUST Repository

    Ghoneim, Mohamed T.

    2017-08-10

    Deep reactive ion etching is essential for creating high aspect ratio micro-structures for microelectromechanical systems, sensors and actuators, and emerging flexible electronics. A novel hybrid dual soft/hard mask bilayer may be deposited during semiconductor manufacturing for deep reactive etches. Such a manufacturing process may include depositing a first mask material on a substrate; depositing a second mask material on the first mask material; depositing a third mask material on the second mask material; patterning the third mask material with a pattern corresponding to one or more trenches for transfer to the substrate; transferring the pattern from the third mask material to the second mask material; transferring the pattern from the second mask material to the first mask material; and/or transferring the pattern from the first mask material to the substrate.

  19. Soft-Deep Boltzmann Machines

    OpenAIRE

    Kiwaki, Taichi

    2015-01-01

    We present a layered Boltzmann machine (BM) that can better exploit the advantages of a distributed representation. It is widely believed that deep BMs (DBMs) have far greater representational power than its shallow counterpart, restricted Boltzmann machines (RBMs). However, this expectation on the supremacy of DBMs over RBMs has not ever been validated in a theoretical fashion. In this paper, we provide both theoretical and empirical evidences that the representational power of DBMs can be a...

  20. Evolving Deep Networks Using HPC

    Energy Technology Data Exchange (ETDEWEB)

    Young, Steven R. [ORNL, Oak Ridge; Rose, Derek C. [ORNL, Oak Ridge; Johnston, Travis [ORNL, Oak Ridge; Heller, William T. [ORNL, Oak Ridge; Karnowski, thomas P. [ORNL, Oak Ridge; Potok, Thomas E. [ORNL, Oak Ridge; Patton, Robert M. [ORNL, Oak Ridge; Perdue, Gabriel [Fermilab; Miller, Jonathan [Santa Maria U., Valparaiso

    2017-01-01

    While a large number of deep learning networks have been studied and published that produce outstanding results on natural image datasets, these datasets only make up a fraction of those to which deep learning can be applied. These datasets include text data, audio data, and arrays of sensors that have very different characteristics than natural images. As these “best” networks for natural images have been largely discovered through experimentation and cannot be proven optimal on some theoretical basis, there is no reason to believe that they are the optimal network for these drastically different datasets. Hyperparameter search is thus often a very important process when applying deep learning to a new problem. In this work we present an evolutionary approach to searching the possible space of network hyperparameters and construction that can scale to 18,000 nodes. This approach is applied to datasets of varying types and characteristics where we demonstrate the ability to rapidly find best hyperparameters in order to enable practitioners to quickly iterate between idea and result.
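
    The essence of such an evolutionary search is a select-and-mutate loop over hyperparameter "genomes". The toy sketch below runs serially with a placeholder fitness function; the production system evaluates each genome by training a network, distributed over thousands of nodes.

      # Toy evolutionary hyperparameter search (fitness() is a placeholder)
      import random

      SPACE = {"lr": [1e-4, 1e-3, 1e-2], "layers": [2, 4, 8], "width": [64, 128, 256]}

      def random_genome():
          return {k: random.choice(v) for k, v in SPACE.items()}

      def mutate(g):
          g = dict(g)
          k = random.choice(list(SPACE))
          g[k] = random.choice(SPACE[k])               # resample one gene
          return g

      def fitness(genome):
          # placeholder: in practice, train a network with these hyperparameters
          # and return its validation accuracy
          return -abs(genome["lr"] - 1e-3) + 0.01 * genome["layers"]

      population = [random_genome() for _ in range(20)]
      for generation in range(10):
          population.sort(key=fitness, reverse=True)
          survivors = population[:10]                  # selection
          population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
      print(max(population, key=fitness))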

  1. Deep Space Gateway Science Opportunities

    Science.gov (United States)

    Quincy, C. D.; Charles, J. B.; Hamill, Doris; Sidney, S. C.

    2018-01-01

    The NASA Life Sciences Research Capabilities Team (LSRCT) has been discussing deep space research needs for the last two years. NASA's programs conducting life sciences studies - the Human Research Program, Space Biology, Astrobiology, and Planetary Protection - see the Deep Space Gateway (DSG) as affording enormous opportunities to investigate biological organisms in a unique environment that cannot be replicated in Earth-based laboratories or on Low Earth Orbit science platforms. These investigations may provide in many cases the definitive answers to risks associated with exploration and living outside Earth's protective magnetic field. Unlike Low Earth Orbit or terrestrial locations, the Gateway location will be subjected to the true deep space spectrum and influence of both galactic cosmic and solar particle radiation and thus presents an opportunity to investigate their long-term exposure effects. The question of how a community of biological organisms change over time within the harsh environment of space flight outside of the magnetic field protection can be investigated. The biological response to the absence of Earth's geomagnetic field can be studied for the first time. Will organisms change in new and unique ways under these new conditions? This may be specifically true on investigations of microbial communities. The Gateway provides a platform for microbiology experiments both inside, to improve understanding of interactions between microbes and human habitats, and outside, to improve understanding of microbe-hardware interactions exposed to the space environment.

  2. Deep Question Answering for protein annotation.

    Science.gov (United States)

    Gobeill, Julien; Gaudinat, Arnaud; Pasche, Emilie; Vishnyakova, Dina; Gaudet, Pascale; Bairoch, Amos; Ruch, Patrick

    2015-01-01

    Biomedical professionals have access to a huge amount of literature, but when they use a search engine, they often have to deal with too many documents to efficiently find the appropriate information in a reasonable time. From this perspective, question-answering (QA) engines are designed to display answers, which were automatically extracted from the retrieved documents. Standard QA engines in the literature process a user question, then retrieve relevant documents and finally extract some possible answers out of these documents using various named-entity recognition processes. In our study, we try to answer complex genomics questions, which can be adequately answered only using Gene Ontology (GO) concepts. Such complex answers cannot be found using state-of-the-art dictionary- and redundancy-based QA engines. We compare the effectiveness of two dictionary-based classifiers for extracting correct GO answers from a large set of 100 retrieved abstracts per question. In the same way, we also investigate the power of GOCat, a GO supervised classifier. GOCat exploits the GOA database to propose GO concepts that were annotated by curators for similar abstracts. This approach is called deep QA, as it adds an original classification step, and exploits curated biological data to infer answers which are not explicitly mentioned in the retrieved documents. We show that for complex answers such as protein functional descriptions, the redundancy phenomenon has a limited effect. Similarly, usual dictionary-based approaches are relatively ineffective. In contrast, we demonstrate how existing curated data, beyond information extraction, can be exploited by a supervised classifier, such as GOCat, to massively improve both the quantity and the quality of the answers, with a +100% improvement for both recall and precision. Database URL: http://eagl.unige.ch/DeepQA4PA/. © The Author(s) 2015. Published by Oxford University Press.
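
    The GOCat idea, proposing GO concepts that curators assigned to similar abstracts, can be mimicked with a simple nearest-neighbour vote over TF-IDF vectors. The snippet below is a toy illustration of that principle, not the GOCat code; the corpus and annotations are invented.

      # Toy similarity-based GO annotation transfer (data are invented)
      from collections import Counter
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.neighbors import NearestNeighbors

      abstracts = ["protein kinase activity in signal transduction",
                   "atp binding and kinase activity",
                   "dna repair and damage response"]
      go_terms = [["GO:0004672", "GO:0007165"],
                  ["GO:0005524", "GO:0004672"],
                  ["GO:0006281"]]

      vec = TfidfVectorizer()
      X = vec.fit_transform(abstracts)
      nn = NearestNeighbors(n_neighbors=2).fit(X)

      def propose_go(query):
          _, idx = nn.kneighbors(vec.transform([query]))
          votes = Counter(t for i in idx[0] for t in go_terms[i])
          return [t for t, _ in votes.most_common(3)]   # most frequent curated terms

      print(propose_go("a novel kinase involved in signalling"))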

  3. GMSK Modulation for Deep Space Applications

    Science.gov (United States)

    Shambayati, Shervin; Lee, Dennis K.

    2012-01-01

    Due to the scarcity of spectrum at the 8.42 GHz deep space X-band allocation, many deep space missions are now considering the use of higher order modulation schemes instead of the traditional binary phase shift keying (BPSK). One such scheme is pre-coded Gaussian minimum shift keying (GMSK). GMSK is an excellent candidate for deep space missions. GMSK is a constant envelope, bandwidth-efficient modulation whose frame error rate (FER) performance with perfect carrier tracking and a proper receiver structure is nearly identical to that of BPSK. There are several issues that need to be addressed with GMSK, however. Specifically, we are interested in the combined effects of spectrum limitations and receiver structure on the coded performance of the X-band link using GMSK. The receivers that are typically used for GMSK demodulation are variations on offset quadrature phase shift keying (OQPSK) receivers. In this paper we consider three receivers: the standard DSN OQPSK receiver, the DSN OQPSK receiver with filtered input, and an optimum OQPSK receiver with filtered input. For the DSN OQPSK receiver we show experimental results with (8920, 1/2), (8920, 1/3) and (8920, 1/6) turbo codes in terms of their error rate performance. We also consider the tracking performance of this receiver as a function of data rate, channel code and the carrier loop signal-to-noise ratio (SNR). For the other two receivers we derive theoretical results showing that for a given loop bandwidth, receiver structure, and channel code, there is a lower data rate limit on GMSK below which a higher SNR than what is required to achieve the required FER on the link is needed. These limits stem from the minimum loop signal-to-noise ratio requirements on the receivers for achieving lock. As a result, for a given channel code and a given FER, there could be a gap between the maximum data rate that BPSK can support without violating the spectrum limits and the minimum data rate that GMSK can support
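
    For readers unfamiliar with GMSK, the modulator itself is compact: a Gaussian filter shapes the NRZ bit stream, and the result is integrated into the phase of a constant-envelope carrier. The baseband sketch below is illustrative only; the BT product and sample rate are not the DSN values.

      # Baseband GMSK modulator sketch (parameters are illustrative)
      import numpy as np

      def gmsk_modulate(bits, sps=8, bt=0.3):
          nrz = np.repeat(2 * np.asarray(bits) - 1, sps).astype(float)
          # Gaussian pulse-shaping filter with bandwidth-time product BT
          t = np.arange(-2 * sps, 2 * sps + 1) / sps
          sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)
          g = np.exp(-t**2 / (2 * sigma**2))
          g /= g.sum()
          shaped = np.convolve(nrz, g, mode="same")
          # integrate frequency into phase; +/- pi/2 phase change per bit, as in MSK
          phase = np.pi / 2 * np.cumsum(shaped) / sps
          return np.exp(1j * phase)                    # constant-envelope complex baseband

      iq = gmsk_modulate(np.random.randint(0, 2, 100))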

  4. Deep water recycling through time.

    Science.gov (United States)

    Magni, Valentina; Bouilhol, Pierre; van Hunen, Jeroen

    2014-11-01

    We investigate the dehydration processes in subduction zones and their implications for the water cycle throughout Earth's history. We use a numerical tool that combines thermo-mechanical models with a thermodynamic database to examine slab dehydration for present-day and early Earth settings and its consequences for deep water recycling. We investigate the reactions responsible for releasing water from the crust and the hydrated lithospheric mantle, and how they change with subduction velocity (v_s), slab age (a) and mantle temperature (T_m). Our results show that faster slabs dehydrate over a wider area: they start dehydrating shallower and they carry water deeper into the mantle. We parameterize the amount of water that can be carried deep into the mantle, W (×10^5 kg/m^2), as a function of v_s (cm/yr), a (Myr), and T_m (°C): [Formula: see text]. We generally observe that (1) a 100°C increase in mantle temperature, or (2) a ~15 Myr decrease in plate age, or (3) a decrease in subduction velocity of ~2 cm/yr all have the same effect on the amount of water retained in the slab at depth, corresponding to a decrease of ~2.2×10^5 kg/m^2 of H2O. We estimate that for present-day conditions ~26% of the global influx water, or 7×10^8 Tg/Myr of H2O, is recycled into the mantle. Using a realistic distribution of subduction parameters, we illustrate that deep water recycling might still be possible in early Earth conditions, although its efficiency would generally decrease. Indeed, 0.5-3.7 × 10^8 Tg/Myr of H2O could still be recycled into the mantle at 2.8 Ga. Deep water recycling might be possible even in early Earth conditions. We provide a scaling law to estimate the amount of H2O flux deep into the mantle. Subduction velocity has a major control on the crustal dehydration pattern.

  5. Vision in the deep sea.

    Science.gov (United States)

    Warrant, Eric J; Locket, N Adam

    2004-08-01

    The deep sea is the largest habitat on earth. Its three great faunal environments--the twilight mesopelagic zone, the dark bathypelagic zone and the vast flat expanses of the benthic habitat--are home to a rich fauna of vertebrates and invertebrates. In the mesopelagic zone (150-1000 m), the down-welling daylight creates an extended scene that becomes increasingly dimmer and bluer with depth. The available daylight also originates increasingly from vertically above, and bioluminescent point-source flashes, well contrasted against the dim background daylight, become increasingly visible. In the bathypelagic zone below 1000 m no daylight remains, and the scene becomes entirely dominated by point-like bioluminescence. This changing nature of visual scenes with depth--from extended source to point source--has had a profound effect on the designs of deep-sea eyes, both optically and neurally, a fact that until recently was not fully appreciated. Recent measurements of the sensitivity and spatial resolution of deep-sea eyes--particularly from the camera eyes of fishes and cephalopods and the compound eyes of crustaceans--reveal that ocular designs are well matched to the nature of the visual scene at any given depth. This match between eye design and visual scene is the subject of this review. The greatest variation in eye design is found in the mesopelagic zone, where dim down-welling daylight and bio-luminescent point sources may be visible simultaneously. Some mesopelagic eyes rely on spatial and temporal summation to increase sensitivity to a dim extended scene, while others sacrifice this sensitivity to localise pinpoints of bright bioluminescence. Yet other eyes have retinal regions separately specialised for each type of light. In the bathypelagic zone, eyes generally get smaller and therefore less sensitive to point sources with increasing depth. In fishes, this insensitivity, combined with surprisingly high spatial resolution, is very well adapted to the

  6. The deep Canary poleward undercurrent

    Science.gov (United States)

    Velez-Belchi, P. J.; Hernandez-Guerra, A.; González-Pola, C.; Fraile, E.; Collins, C. A.; Machín, F.

    2012-12-01

    Poleward undercurrents are well known features in Eastern Boundary systems. In the California upwelling system (CalCEBS), the deep poleward flow has been observed along the entire outer continental shelf and upper-slope, using indirect methods based on geostrophic estimates and also using direct current measurements. The importance of the poleward undercurrents in the CalCEBS, among others, is to maintain its high productivity by means of the transport of equatorial Pacific waters all the way northward to Vancouver Island and the subpolar gyre but there is also concern about the low oxygen concentration of these waters. However, in the case of the Canary Current Eastern Boundary upwelling system (CanCEBS), there are very few observations of the poleward undercurrent. Most of these observations are short-term mooring records, or drifter trajectories of the upper-slope flow. Hence, the importance of the subsurface poleward flow in the CanCEBS has been only hypothesized. Moreover, due to the large differences between the shape of the coastline and topography between the California and the Canary Current system, the results obtained for the CalCEBS are not completely applicable to the CanCEBS. In this study we report the first direct observations of the continuity of the deep poleward flow of the Canary Deep Poleward undercurrent (CdPU) in the North-Africa sector of the CanCEBS, and one of the few direct observations in the North-Africa sector of the Canary Current eastern boundary. The results indicate that the Canary Island archipelago disrupts the deep poleward undercurrent even at depths where the flow is not blocked by the bathymetry. The deep poleward undercurrent flows west around the eastern-most islands and north east of the Conception Bank to rejoin the intermittent branch that follows the African slope in the Lanzarote Passage. This hypothesis is consistent with the AAIW found west of Lanzarote, as far as 17 W. But also, this hypothesis would be coherent

  7. Deep iCrawl: An Intelligent Vision-Based Deep Web Crawler

    OpenAIRE

    R.Anita; V.Ganga Bharani; N.Nityanandam; Pradeep Kumar Sahoo

    2011-01-01

    The explosive growth of the World Wide Web has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. Deep web pages are created dynamically as a result of queries posed to specific web databases. The structure of deep web pages makes it impossible for traditional web crawlers to access deep web contents. This paper, Deep iCrawl, gives a novel and vision-based app...

  8. Deep Corals, Deep Learning: Moving the Deep Net Towards Real-Time Image Annotation

    OpenAIRE

    Lea-Anne Henry; Sankha S. Mukherjee; Neil M. Roberston; Laurence De Clippele; J. Murray Roberts

    2016-01-01

    The mismatch between human capacity and the acquisition of Big Data such as Earth imagery undermines commitments to Convention on Biological Diversity (CBD) and Aichi targets. Artificial intelligence (AI) solutions to Big Data issues are urgently needed as these could prove to be faster, more accurate, and cheaper. Reducing costs of managing protected areas in remote deep waters and in the High Seas is of great importance, and this is a realm where autonomous technology will be transformative.

  9. Invited talk: Deep Learning Meets Physics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence, with overwhelming success in industrial speech, text and vision benchmarks. Consequently it has evolved into the central field of research for IT giants like Google, facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. At its core, Deep Learning discovers multiple levels of abstract representations of the input. The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence, and therefore limits model selection. Major advances in Deep Learning can be related to avoiding the vanishing gradient, like stacking, ReLUs, residual networks, highway networks, and LSTM. For Deep Learning, we suggested self-normalizing neural networks (SNNs), which automatica...
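
    The self-normalizing networks mentioned at the end rest on the SELU activation, whose fixed constants push layer activations toward zero mean and unit variance. A plain NumPy rendering follows (constants as published by Klambauer et al., 2017):

      # Scaled exponential linear unit (SELU)
      import numpy as np

      ALPHA, LAM = 1.6732632423543772, 1.0507009873554805

      def selu(x):
          # identity for x > 0, scaled exponential below zero; the fixed
          # (ALPHA, LAM) pair makes activations self-normalizing
          return LAM * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))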

  10. Classification of Exacerbation Frequency in the COPDGene Cohort Using Deep Learning with Deep Belief Networks.

    Science.gov (United States)

    Ying, Jun; Dutta, Joyita; Guo, Ning; Hu, Chenhui; Zhou, Dan; Sitek, Arkadiusz; Li, Quanzheng

    2016-12-21

    This study aims to develop an automatic classifier based on deep learning for exacerbation frequency in patients with chronic obstructive pulmonary disease (COPD). A three-layer deep belief network (DBN) with two hidden layers and one visible layer was employed to develop classification models, and the models' robustness to exacerbation was analyzed. Subjects from the COPDGene cohort were labeled with exacerbation frequency, defined as the number of exacerbation events per year. 10,300 subjects with 361 features each were included in the analysis. After feature selection and parameter optimization, the proposed classification method achieved an accuracy of 91.99%, using a 10-fold cross-validation experiment. The analysis of DBN weights showed that there was a good visual spatial relationship between the underlying critical features of different layers. Our findings show that the most sensitive features obtained from the DBN weights are consistent with the consensus shown by clinical rules and standards for COPD diagnostics. We thus demonstrate that DBN is a competitive tool for exacerbation risk assessment in patients suffering from COPD.
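
    A rough stand-in for such a three-layer DBN (two hidden layers over a visible layer) can be built by greedily stacking restricted Boltzmann machines with a logistic output layer, as scikit-learn allows. The sketch below uses toy data and illustrative layer sizes; it is not the study's model, and real inputs must be scaled to [0, 1].

      # Sketch: stacked RBMs + logistic regression as a DBN-like classifier
      import numpy as np
      from sklearn.pipeline import Pipeline
      from sklearn.neural_network import BernoulliRBM
      from sklearn.linear_model import LogisticRegression

      X = np.random.rand(200, 361)          # stand-in for 361 features per subject
      y = np.random.randint(0, 2, 200)      # toy exacerbation-frequency labels

      dbn = Pipeline([
          ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
          ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
          ("clf", LogisticRegression(max_iter=1000)),
      ])
      dbn.fit(X, y)
      print(dbn.score(X, y))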

  11. Deep remission: a new concept?

    Science.gov (United States)

    Colombel, Jean-Frédéric; Louis, Edouard; Peyrin-Biroulet, Laurent; Sandborn, William J; Panaccione, Remo

    2012-01-01

    Crohn's disease (CD) is a chronic inflammatory disorder characterized by periods of clinical remission alternating with periods of relapse defined by recurrent clinical symptoms. Persistent inflammation is believed to lead to progressive bowel damage over time, which manifests with the development of strictures, fistulae and abscesses. These disease complications frequently lead to a need for surgical resection, which in turn leads to disability. So CD can be characterized as a chronic, progressive, destructive and disabling disease. In rheumatoid arthritis, treatment paradigms have evolved beyond partial symptom control alone toward the induction and maintenance of sustained biological remission, also known as a 'treat to target' strategy, with the goal of improving long-term disease outcomes. In CD, there is currently no accepted, well-defined, comprehensive treatment goal that entails the treatment of both clinical symptoms and biologic inflammation. It is important that such a treatment concept begins to evolve for CD. A treatment strategy that delays or halts the progression of CD to increasing damage and disability is a priority. As a starting point, a working definition of sustained deep remission (that includes long-term biological remission and symptom control) with defined patient outcomes (including no disease progression) has been proposed. The concept of sustained deep remission represents a goal for CD management that may still evolve. It is not clear if the concept also applies to ulcerative colitis. Clinical trials are needed to evaluate whether treatment algorithms that tailor therapy to achieve deep remission in patients with CD can prevent disease progression and disability. Copyright © 2012 S. Karger AG, Basel.

  12. BCDForest: a boosting cascade deep forest model towards the classification of cancer subtypes based on gene expression data.

    Science.gov (United States)

    Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn

    2018-04-11

    The classification of cancer subtypes is of great importance to cancer disease diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep learning based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks, learning hyper-representations by using cascaded ensembles of decision trees. It has been shown that the deep forest model has competitive or even better performance than deep neural networks to some extent. However, the standard deep forest model may face overfitting and ensemble diversity challenges when dealing with small sample sizes and high-dimensional biology data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biology datasets, which can be viewed as a modification of the standard deep forest model. BCDForest differs from the standard deep forest model in two main contributions: First, a multi-class-grained scanning method is proposed to train multiple binary classifiers to encourage diversity of the ensemble; meanwhile, the fitting quality of each classifier is considered in representation learning. Second, we propose a boosting strategy to emphasize more important features in cascade forests, thus propagating the benefits of discriminative features among cascade layers to improve classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms state-of-the-art methods for cancer subtype classification. The multi-class-grained scanning and boosting strategy in our model provide an effective solution to ease the overfitting challenge and improve the robustness of the deep forest model working on small-scale data. Our model provides a useful approach to the classification of cancer subtypes.
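
    The cascade idea underlying deep forests is easy to sketch: each level trains a few forests and appends their class-probability outputs to the original features before the next level. The toy code below shows only this plain cascade (no boosting or multi-class-grained scanning, and the probabilities should really be out-of-fold estimates).

      # Minimal cascade-forest sketch (toy data; not the BCDForest code)
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

      X, y = make_classification(n_samples=300, n_features=50, n_classes=3,
                                 n_informative=10, random_state=0)
      features = X
      for level in range(3):
          probas = []
          for Forest in (RandomForestClassifier, ExtraTreesClassifier):
              f = Forest(n_estimators=100, random_state=level).fit(features, y)
              probas.append(f.predict_proba(features))   # class-probability vectors
          # augment the raw features with this level's probability outputs
          features = np.hstack([X] + probas)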

  13. Topics in deep inelastic scattering

    International Nuclear Information System (INIS)

    Wandzura, S.M.

    1977-01-01

    Several topics in deep inelastic lepton-nucleon scattering are discussed, with emphasis on the structure functions appearing in polarized experiments. The major results are: an infinite set of new sum rules reducing the number of independent spin-dependent structure functions (for electroproduction) from two to one; the application of the techniques of Nachtmann to extract the coefficients appearing in the Wilson operator product expansion; and radiative corrections to the Wilson coefficients of free field theory. Also discussed is the use of dimensional regularization to simplify the calculation of these radiative corrections.

  14. First LOCSMITH locations of deep moonquakes

    Science.gov (United States)

    Hempel, S.; Knapmeyer, M.; Sens-Schönfelder, C.; Oberst, J.

    2008-09-01

    Introduction. Several thousand seismic events were recorded by the Apollo seismic network from 1969-1977. Different types of events can be distinguished: meteoroid impacts, thermal quakes and internally caused moonquakes. The latter subdivide into shallow (100 to 300 km) and deep moonquakes (700 to 1100 km), which are by far the most common events. The deep quakes would pose no immediate danger to inhabited stations on the Earth's Moon because of their relatively low magnitude and great depth. However, they bear important information on lunar structure and evolution, and their distribution probably reflects their source mechanism. In this study, we reinvestigate location patterns of deep lunar quakes. LOCSMITH. The core of this study is a new location method (LOCSMITH, [1]). This algorithm uses time intervals rather than time instants as input, where each interval contains the dedicated arrival with probability 1. LOCSMITH models and compares theoretical and actual travel times on a global scale and uses an adaptive grid to search for source locations compatible with all observations. The output is a set of all possible hypocenters for the considered region of repeating, tidally triggered moonquake activity, called clusters. The shape and size of these sets give a better estimate of the location uncertainty than the formal standard deviations returned by classical methods. This is used for grading deep moonquake clusters according to the currently available data quality. Classification of deep moonquakes. As a first step, we establish a reciprocal dependence of the size and shape of LOCSMITH location clouds on the number of arrivals. Four different shapes are recognized, listed here in order of decreasing spatial resolution: 1. "Balls", which are well defined and relatively small sets resembling the commonly assumed error ellipsoid. These are found in the best cases with many observations. Locations of this shape are obtained for clusters 1, 18 or 33; these were already
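
    The interval-based search can be mimicked in a few lines: a grid cell survives as a candidate hypocentre only if its predicted travel times fall inside every observed arrival-time interval. The sketch below is a crude 2D toy with a constant-velocity travel-time model, not the LOCSMITH implementation; all values are invented.

      # Toy interval-location search in the spirit of LOCSMITH (values invented)
      import numpy as np

      stations = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 800.0]])   # km
      intervals = [(80.0, 100.0), (65.0, 85.0), (65.0, 85.0)]         # s, contain arrivals
      V = 8.0                                                         # km/s, assumed speed

      candidates = []
      for x in np.arange(0.0, 1000.0, 25.0):
          for y in np.arange(0.0, 1000.0, 25.0):
              t_pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / V
              if all(lo <= t <= hi for t, (lo, hi) in zip(t_pred, intervals)):
                  candidates.append((x, y))
      # the extent of the surviving cells maps the location uncertainty
      print(len(candidates), "candidate cells")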

  15. Deep space propagation experiments at Ka-band

    Science.gov (United States)

    Butman, Stanley A.

    1990-01-01

    Propagation experiments as essential components of the general plan to develop an operational deep space telecommunications and navigation capability at Ka-band (32 to 35 GHz) by the end of the 20th century are discussed. Significant benefits of Ka-band over the current deep space standard X-band (8.4 GHz) are an improvement of 4 to 10 dB in telemetry capacity and a similar increase in radio navigation accuracy. Propagation experiments are planned on the Mars Observer Mission in 1992 in preparation for the Cassini Mission to Saturn in 1996, which will use Ka-band in the search for gravity waves as well as to enhance telemetry and navigation at Saturn in 2002. Subsequent uses of Ka-band are planned for the Solar Probe Mission and the Mars Program.

  16. Challenges for deep space communications in the 1990s

    Science.gov (United States)

    Dumas, Larry N.; Hornstein, Robert M.

    1991-01-01

    The discussion of NASA's Deep Space Network (DSN) examines the evolving character of aerospace missions and the corresponding changes in the DSN architecture. Deep space missions are reviewed, and it is noted that the two 34-m and the 70-m antenna subnets of the DSN are heavily loaded and more use is expected. High operational workload and the challenge of network cross-support are the design drivers for a flexible DSN architecture configuration. Incorporated in the design are antenna arraying for aperture augmentation, beam-waveguide antennas for frequency agility, and connectivity with non-DSN sites for cross-support. Compatibility between spacecraft and ground-facility designs is important for establishing common international standards of communication and data-system specification.

  17. Magnetic resonance imaging in deep pelvic endometriosis: iconographic essay

    International Nuclear Information System (INIS)

    Coutinho Junior, Antonio Carlos; Coutinho, Elisa Pompeu Dias; Lima, Claudio Marcio Amaral de Oliveira; Ribeiro, Erica Barreiros; Aidar, Marisa Nassar; Gasparetto, Emerson Leandro

    2008-01-01

    Endometriosis is characterized by the presence of normal endometrial tissue outside the uterine cavity. In patients with deep pelvic endometriosis, uterosacral ligaments, rectum, rectovaginal septum, vagina or bladder may be involved. Clinical manifestations may be variable, including pelvic pain, dysmenorrhea, dyspareunia, urinary symptoms and infertility. Complete surgical excision is the gold standard for treating this disease, and hence the importance of the preoperative work-up that usually is limited to an evaluation of sonographic and clinical data. Magnetic resonance imaging is of paramount importance in the diagnosis of endometriosis, considering its high accuracy in the identification of lesions intermingled with adhesions, and in the determination of peritoneal lesions extent. The present pictorial review describes the main magnetic resonance imaging findings in deep pelvic endometriosis. (author)

  18. Magnetic resonance imaging in deep pelvic endometriosis: iconographic essay

    Energy Technology Data Exchange (ETDEWEB)

    Coutinho Junior, Antonio Carlos; Coutinho, Elisa Pompeu Dias; Lima, Claudio Marcio Amaral de Oliveira; Ribeiro, Erica Barreiros; Aidar, Marisa Nassar [Clinica de Diagnostico por Imagem (CDPI), Rio de Janeiro, RJ (Brazil); Clinica Multi-Imagem, Rio de Janeiro, RJ (Brazil); E-mail: cmaol@br.inter.net; Gasparetto, Emerson Leandro [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Dept. de Radiologia

    2008-03-15

    Endometriosis is characterized by the presence of normal endometrial tissue outside the uterine cavity. In patients with deep pelvic endometriosis, uterosacral ligaments, rectum, rectovaginal septum, vagina or bladder may be involved. Clinical manifestations may be variable, including pelvic pain, dysmenorrhea, dyspareunia, urinary symptoms and infertility. Complete surgical excision is the gold standard for treating this disease, and hence the importance of the preoperative work-up that usually is limited to an evaluation of sonographic and clinical data. Magnetic resonance imaging is of paramount importance in the diagnosis of endometriosis, considering its high accuracy in the identification of lesions intermingled with adhesions, and in the determination of peritoneal lesions extent. The present pictorial review describes the main magnetic resonance imaging findings in deep pelvic endometriosis. (author)

  19. Deep groundwater flow at Palmottu

    International Nuclear Information System (INIS)

    Niini, H.; Vesterinen, M.; Tuokko, T.

    1993-01-01

    Further observations, measurements, and calculations aimed at determining the groundwater flow regimes and periodical variations in flow at deeper levels were carried out in the Lake Palmottu (a natural analogue study site for radioactive waste disposal in southwestern Finland) drainage basin. These water movements affect the migration of radionuclides from the Palmottu U-Th deposit. The deep water flow is essentially restricted to the bedrock fractures which developed under, and are still affected by, the stress state of the bedrock. Determination of the detailed variations was based on fracture-tectonic modelling of the 12 most significant underground water-flow channels that cross the surficial water of the Palmottu area. According to the direction of the hydraulic gradient the deep water flow is mostly outwards from the Palmottu catchment but in the westernmost section it is partly towards the centre. Estimation of the water flow through the U-Th deposit by the water-balance method is still only approximate and needs continued observation series and improved field measurements

  20. Deep ocean model penetrator experiments

    International Nuclear Information System (INIS)

    Freeman, T.J.; Burdett, J.R.F.

    1986-01-01

    Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting with a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site

  1. Academic Training: Deep Space Telescopes

    CERN Multimedia

    Françoise Benz

    2006-01-01

    2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 20, 21, 22, 23, 24 February from 11:00 to 12:00 - Council Chamber on 20, 21, 23, 24 February, TH Auditorium, bldg 4 - 3-006, on 22 February Deep Space Telescopes G. BIGNAMI / CNRS, Toulouse, F & Univ. di Pavia, I The short series of seminars will address results and aims of current and future space astrophysics as the cultural framework for the development of deep space telescopes. It will then present such new tools, as they are currently available to, or imagined by, the scientific community, in the context of the science plans of ESA and of all major world space agencies. Ground-based astronomy, in the 400 years since Galileo's telescope, has given us a profound phenomenological comprehension of our Universe, but has traditionally been limited to the narrow band(s) to which our terrestrial atmosphere is transparent. Celestial objects, however, do not care about our limitations, and distribute most of the information about their physics thro...

  2. Chinese expert consensus on programming deep brain stimulation for patients with Parkinson's disease.

    Science.gov (United States)

    Chen, Shengdi; Gao, Guodong; Feng, Tao; Zhang, Jianguo

    2018-01-01

Deep Brain Stimulation (DBS) therapy for the treatment of Parkinson's Disease (PD) is now a well-established option for some patients. Postoperative standardized programming processes can improve the level of postoperative management and programming, relieve symptoms and improve quality of life. In order to improve the quality of programming, the experts on DBS and PD in neurology and neurosurgery in China reviewed the relevant literature, combined it with their own experience, and developed this expert consensus on the programming of deep brain stimulation in patients with PD. This consensus can standardize and improve the postoperative management and programming of DBS for PD.

  3. Anaesthetic management of a patient with deep brain stimulation implant for radical nephrectomy

    Directory of Open Access Journals (Sweden)

    Monica Khetarpal

    2014-01-01

Full Text Available A 63-year-old man with severe Parkinson's disease (PD) who had been implanted with bilateral deep brain stimulators underwent radical nephrectomy under general anaesthesia with standard monitoring. Deep brain stimulation (DBS) is an alternative and effective treatment option for severe and refractory PD and other illnesses such as essential tremor and intractable epilepsy. Anaesthesia in patients with an implanted neurostimulator requires special consideration because of the interaction between the neurostimulator and diathermy. Diathermy can damage the brain tissue at the site of the electrode. There are no standard guidelines for the anaesthetic management of a patient with a DBS electrode in situ posted for surgery.

  4. Deep Space Network information system architecture study

    Science.gov (United States)

    Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.

    1992-01-01

    The purpose of this article is to describe an architecture for the DSN information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990's. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies--i.e., computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

  5. Deep learning enhanced mobile-phone microscopy

    KAUST Repository

    Rivenson, Yair

    2017-12-12

    Mobile-phones have facilitated the creation of field-portable, cost-effective imaging and sensing technologies that approach laboratory-grade instrument performance. However, the optical imaging interfaces of mobile-phones are not designed for microscopy and produce spatial and spectral distortions in imaging microscopic specimens. Here, we report on the use of deep learning to correct such distortions introduced by mobile-phone-based microscopes, facilitating the production of high-resolution, denoised and colour-corrected images, matching the performance of benchtop microscopes with high-end objective lenses, also extending their limited depth-of-field. After training a convolutional neural network, we successfully imaged various samples, including blood smears, histopathology tissue sections, and parasites, where the recorded images were highly compressed to ease storage and transmission for telemedicine applications. This method is applicable to other low-cost, aberrated imaging systems, and could offer alternatives for costly and bulky microscopes, while also providing a framework for standardization of optical images for clinical and biomedical applications.
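As a hedged illustration of the supervised image-to-image setup this record describes, the sketch below trains a minimal residual CNN against co-registered benchtop targets; the architecture, L1 loss and patch sizes are illustrative assumptions, not the paper's network.

```python
# Minimal image-to-image enhancement CNN sketch (PyTorch). The residual
# design, L1 loss and 64x64 patches are illustrative assumptions only.
import torch
import torch.nn as nn

class EnhancementCNN(nn.Module):
    """Maps a distorted mobile-phone patch to a corrected patch."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # predict a residual correction to the input

model = EnhancementCNN()
phone = torch.rand(8, 3, 64, 64)      # distorted mobile-phone patches
benchtop = torch.rand(8, 3, 64, 64)   # co-registered benchtop targets
loss = nn.functional.l1_loss(model(phone), benchtop)
loss.backward()                       # gradient for one supervised step
```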

  6. Image Captioning with Deep Bidirectional LSTMs

    OpenAIRE

    Wang, Cheng; Yang, Haojin; Bartz, Christian; Meinel, Christoph

    2016-01-01

This work presents an end-to-end trainable deep bidirectional LSTM (Long-Short Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long term visual-language interactions by making use of history and future context information at high level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of the nonlinearity transition in different ways, are propose...
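As a hedged sketch of the encoder-decoder shape described here, the block below lets a CNN feature seed a bidirectional LSTM that scores caption tokens; the vocabulary size, dimensions and fusion scheme are illustrative assumptions, not the authors' model.

```python
# Sketch of a CNN-feature -> bidirectional-LSTM captioner (PyTorch).
import torch
import torch.nn as nn

class BiLSTMCaptioner(nn.Module):
    def __init__(self, vocab=1000, embed=256, hidden=256, feat=512):
        super().__init__()
        self.img_proj = nn.Linear(feat, embed)   # CNN feature -> embedding space
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, vocab)  # both directions -> token scores

    def forward(self, img_feat, tokens):
        # Prepend the projected image feature to the embedded caption tokens.
        x = torch.cat([self.img_proj(img_feat).unsqueeze(1),
                       self.embed(tokens)], dim=1)
        h, _ = self.lstm(x)
        return self.out(h)

model = BiLSTMCaptioner()
scores = model(torch.rand(4, 512), torch.randint(0, 1000, (4, 12)))
print(scores.shape)  # (4, 13, 1000): per-position vocabulary scores
```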

  7. Deep inelastic processes and the parton model

    International Nuclear Information System (INIS)

    Altarelli, G.

The lecture was intended as an elementary introduction to the physics of deep inelastic phenomena from the point of view of theory. General formulae and facts concerning inclusive deep inelastic processes of the form l + N → l' + hadrons (electroproduction, neutrino scattering) are first recalled. The deep inelastic annihilation e⁺e⁻ → hadrons is then envisaged. The light-cone approach, the parton model and their relation are mainly emphasized.

  8. Deep inelastic electron and muon scattering

    International Nuclear Information System (INIS)

    Taylor, R.E.

    1975-07-01

    From the review of deep inelastic electron and muon scattering it is concluded that the puzzle of deep inelastic scattering versus annihilation was replaced with the challenge of the new particles, that the evidence for the simplest quark-algebra models of deep inelastic processes is weaker than a year ago. Definite evidence of scale breaking was found but the specific form of that scale breaking is difficult to extract from the data. 59 references

  9. Fast, Distributed Algorithms in Deep Networks

    Science.gov (United States)

    2016-05-11

... shallow networks, additional work will need to be done in order to allow for the application of ADMM to deep nets. The ADMM method allows for quick ... (Trident Scholar project report no. 446: Fast, Distributed Algorithms in Deep Networks, by Midshipman 1/C Ryan J. Burmeister, USN; the extract also cites Quoc V. Le et al., "Large scale distributed deep networks," Advances in Neural Information Processing Systems, pp. 1223-1231, 2012.)
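For context on the ADMM method the report builds on, here is a minimal, generic consensus-ADMM sketch for distributed least squares; this is a textbook formulation, not the report's implementation, and all names and sizes are illustrative.

```python
# Consensus ADMM for distributed least squares: each worker solves a local
# subproblem, a consensus variable is averaged, and dual variables track
# disagreement between local and global solutions.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_feat, rho = 4, 5, 1.0
w_true = rng.normal(size=n_feat)
data = []
for _ in range(n_workers):
    A = rng.normal(size=(50, n_feat))
    b = A @ w_true + 0.01 * rng.normal(size=50)
    data.append((A, b))

w = np.zeros((n_workers, n_feat))   # per-worker local variables
u = np.zeros((n_workers, n_feat))   # scaled dual variables
z = np.zeros(n_feat)                # global consensus variable

for _ in range(50):
    for i, (A, b) in enumerate(data):
        # Local solve: argmin_w (1/2)||A w - b||^2 + (rho/2)||w - z + u_i||^2
        w[i] = np.linalg.solve(A.T @ A + rho * np.eye(n_feat),
                               A.T @ b + rho * (z - u[i]))
    z = (w + u).mean(axis=0)        # gather step: average of local views
    u += w - z                      # dual update, kept on each worker

print(np.round(z - w_true, 3))      # consensus solution is close to w_true
```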

  10. Learning Transferable Features with Deep Adaptation Networks

    OpenAIRE

    Long, Mingsheng; Cao, Yue; Wang, Jianmin; Jordan, Michael I.

    2015-01-01

    Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation...
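One common way to "formally reduce the dataset bias" in task-specific layers, as in Deep Adaptation Networks, is to penalize the maximum mean discrepancy (MMD) between source and target features. Below is a hedged single-kernel sketch; the paper uses a multi-kernel variant, and all numbers here are illustrative.

```python
# Add an MMD penalty between source- and target-domain features (PyTorch).
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate between feature batches x and y (RBF kernel)."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

src_feat = torch.randn(32, 128)         # features of a labeled source batch
tgt_feat = torch.randn(32, 128) + 0.5   # features of an unlabeled target batch
task_loss = torch.tensor(0.7)           # e.g. cross-entropy on source labels
total_loss = task_loss + 0.25 * rbf_mmd(src_feat, tgt_feat)
```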

  11. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
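The record notes the lack of consensus on constructing such coefficients. As a hedged illustration, the sketch below shows the simplest convention, partial standardization by predictor scale; Menard's fully standardized variant, which also divides by the standard deviation of the predicted logit, is omitted, and the data are synthetic.

```python
# Partially standardized logistic regression coefficients: b * s_x.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) * np.array([1.0, 5.0, 0.2])   # mixed scales
logit = X @ np.array([0.8, 0.1, 2.0])
y = (logit + rng.logistic(size=500) > 0).astype(int)

model = LogisticRegression(C=1e6).fit(X, y)   # large C ~ no regularization
b_std = model.coef_.ravel() * X.std(axis=0)   # scale raw coefficients by s_x
print(b_std)   # now comparable across differently scaled predictors
```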

  12. Instant standard concept for data standards development

    NARCIS (Netherlands)

    Folmer, Erwin Johan Albert; Kulcsor, Istvan Zsolt; Roes, Jasper

    2013-01-01

    This paper presents the current results of an ongoing research about a new data standards development concept. The concept is called Instant Standard referring to the pressure that is generated by shrinking the length of the standardization process. Based on this concept it is estimated that the

  13. The standard model in a nutshell

    CERN Document Server

    Goldberg, Dave

    2017-01-01

    For a theory as genuinely elegant as the Standard Model--the current framework describing elementary particles and their forces--it can sometimes appear to students to be little more than a complicated collection of particles and ranked list of interactions. The Standard Model in a Nutshell provides a comprehensive and uncommonly accessible introduction to one of the most important subjects in modern physics, revealing why, despite initial appearances, the entire framework really is as elegant as physicists say. Dave Goldberg uses a "just-in-time" approach to instruction that enables students to gradually develop a deep understanding of the Standard Model even if this is their first exposure to it. He covers everything from relativity, group theory, and relativistic quantum mechanics to the Higgs boson, unification schemes, and physics beyond the Standard Model. The book also looks at new avenues of research that could answer still-unresolved questions and features numerous worked examples, helpful illustrat...

  14. An overview of latest deep water technologies

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

The 8th Deep Offshore Technology Conference (DOT VIII, Rio de Janeiro, October 30 - November 3, 1995) brought together renowned specialists in deep water development projects, as well as managers from oil companies and engineering/service companies, to discuss state-of-the-art technologies and ongoing projects in the deep offshore. This paper is a compilation of the session summaries on subsea technologies, mooring and dynamic positioning, floaters (Tension Leg Platforms (TLP) and Floating Production Storage and Offloading (FPSO) units), pipelines and risers, exploration and drilling, and other deep water techniques. (J.S.)

  15. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  16. The deep ocean under climate change

    Science.gov (United States)

    Levin, Lisa A.; Le Bris, Nadine

    2015-11-01

    The deep ocean absorbs vast amounts of heat and carbon dioxide, providing a critical buffer to climate change but exposing vulnerable ecosystems to combined stresses of warming, ocean acidification, deoxygenation, and altered food inputs. Resulting changes may threaten biodiversity and compromise key ocean services that maintain a healthy planet and human livelihoods. There exist large gaps in understanding of the physical and ecological feedbacks that will occur. Explicit recognition of deep-ocean climate mitigation and inclusion in adaptation planning by the United Nations Framework Convention on Climate Change (UNFCCC) could help to expand deep-ocean research and observation and to protect the integrity and functions of deep-ocean ecosystems.

  17. Docker Containers for Deep Learning Experiments

    OpenAIRE

    Gerke, Paul K.

    2017-01-01

Deep learning is a powerful tool to solve problems in the area of image analysis. The dominant compute platform for deep learning is Nvidia's proprietary CUDA, which can only be used together with Nvidia graphics cards. The nvidia-docker project allows exposing Nvidia graphics cards to docker containers and thus makes it possible to run deep learning experiments in docker containers. In our department, we use deep learning to solve problems in the area of medical image analysis and use docker ...

  18. Deep Brain Stimulation for Parkinson's Disease

    Science.gov (United States)

... about the BRAIN initiative, see www.nih.gov/science/brain.

  19. Cultivating the Deep Subsurface Microbiome

    Science.gov (United States)

    Casar, C. P.; Osburn, M. R.; Flynn, T. M.; Masterson, A.; Kruger, B.

    2017-12-01

    Subterranean ecosystems are poorly understood because many microbes detected in metagenomic surveys are only distantly related to characterized isolates. Cultivating microorganisms from the deep subsurface is challenging due to its inaccessibility and potential for contamination. The Deep Mine Microbial Observatory (DeMMO) in Lead, SD however, offers access to deep microbial life via pristine fracture fluids in bedrock to a depth of 1478 m. The metabolic landscape of DeMMO was previously characterized via thermodynamic modeling coupled with genomic data, illustrating the potential for microbial inhabitants of DeMMO to utilize mineral substrates as energy sources. Here, we employ field and lab based cultivation approaches with pure minerals to link phylogeny to metabolism at DeMMO. Fracture fluids were directed through reactors filled with Fe3O4, Fe2O3, FeS2, MnO2, and FeCO3 at two sites (610 m and 1478 m) for 2 months prior to harvesting for subsequent analyses. We examined mineralogical, geochemical, and microbiological composition of the reactors via DNA sequencing, microscopy, lipid biomarker characterization, and bulk C and N isotope ratios to determine the influence of mineralogy on biofilm community development. Pre-characterized mineral chips were imaged via SEM to assay microbial growth; preliminary results suggest MnO2, Fe3O4, and Fe2O3 were most conducive to colonization. Solid materials from reactors were used as inoculum for batch cultivation experiments. Media designed to mimic fracture fluid chemistry was supplemented with mineral substrates targeting metal reducers. DNA sequences and microscopy of iron oxide-rich biofilms and fracture fluids suggest iron oxidation is a major energy source at redox transition zones where anaerobic fluids meet more oxidizing conditions. We utilized these biofilms and fluids as inoculum in gradient cultivation experiments targeting microaerophilic iron oxidizers. Cultivation of microbes endemic to DeMMO, a system

  20. Application of positron annihilation lifetime technique to the study of deep level transients in semiconductors

    Science.gov (United States)

    Deng, A. H.; Shan, Y. Y.; Fung, S.; Beling, C. D.

    2002-03-01

Unlike its conventional applications in lattice defect characterization, the positron annihilation lifetime technique was applied here to study temperature-dependent deep level transients in semiconductors. Defect levels in the band gap can be determined just as they are in conventional deep level transient spectroscopy (DLTS) studies. The promising advantage of this application of positron annihilation over conventional DLTS is that it can extract additional microstructural information on deep-level defects, such as whether a deep level defect is vacancy-related or not. A demonstration of an EL2 defect level transient study in GaAs is shown, and the EL2 level of 0.82±0.02 eV was obtained by a standard Arrhenius analysis, similar to that in conventional DLTS studies.
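The "standard Arrhenius analysis" mentioned above extracts the level depth from the temperature dependence of the thermal emission rate, e(T) = γT²·exp(−Ea/kT), so a straight-line fit of ln(T²/e) against 1/(kT) yields Ea. A minimal sketch with synthetic rates generated for Ea = 0.82 eV (the EL2 level):

```python
# Arrhenius analysis of thermal emission rates, as used in DLTS-style studies.
import numpy as np

k = 8.617e-5                      # Boltzmann constant, eV/K
T = np.linspace(350, 420, 8)      # measurement temperatures, K
Ea_true, gamma = 0.82, 1e7
e_rate = gamma * T**2 * np.exp(-Ea_true / (k * T))   # emission rates, 1/s

# ln(T^2/e) = Ea/(kT) - ln(gamma): the slope against 1/(kT) is Ea.
slope, intercept = np.polyfit(1.0 / (k * T), np.log(T**2 / e_rate), 1)
print(f"extracted Ea = {slope:.3f} eV")   # ~0.82 eV
```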

  1. Postoperative shoulder pain after laparoscopic hysterectomy with deep neuromuscular blockade and low-pressure pneumoperitoneum

    DEFF Research Database (Denmark)

    Madsen, Matias Vested; Istre, Olav; Staehr-Rye, Anne K

    2016-01-01

    indicate that the use of deep neuromuscular blockade (NMB) improves surgical conditions during a low-pressure pneumoperitoneum (8 mmHg). OBJECTIVE: The aim of this study was to investigate whether low-pressure pneumoperitoneum (8 mmHg) and deep NMB (posttetanic count 0 to 1) compared with standard......: Ninety-nine patients. INTERVENTIONS: Randomisation to either deep NMB and 8 mmHg pneumoperitoneum (Group 8-Deep) or moderate NMB and 12 mmHg pneumoperitoneum (Group 12-Mod). Pain was assessed on a visual analogue scale (VAS) for 14 postoperative days. MAIN OUTCOME MEASURES: The primary endpoint...... was the incidence of shoulder pain during 14 postoperative days. Secondary endpoints included area under curve VAS scores for shoulder, abdominal, incisional and overall pain during 4 and 14 postoperative days; opioid consumption; incidence of nausea and vomiting; antiemetic consumption; time to recovery...

  2. Deep inelastic scattering and disquarks

    International Nuclear Information System (INIS)

    Anselmino, M.

    1993-01-01

The most comprehensive and detailed analyses of the existing data on the structure function F₂(x, Q²) of free nucleons, from the deep inelastic scattering (DIS) of charged leptons on hydrogen and deuterium targets, have proved beyond any doubt that higher-twist, 1/Q² corrections are needed in order to obtain a perfect agreement between perturbative QCD predictions and the data. These higher-twist corrections take into account two-quark correlations inside the nucleon; it is then natural to try to model them in the quark-diquark model of the proton. In so doing all interactions between the two quarks inside the diquark, both perturbative and non-perturbative, are supposed to be taken into account. (orig./HSI)

  3. Detector for deep well logging

    International Nuclear Information System (INIS)

    1976-01-01

A substantial improvement in the useful life and efficiency of a deep-well scintillation detector is achieved by a unique construction wherein the steel cylinder enclosing the sodium iodide scintillation crystal is provided with a tapered recess to receive a glass window which has a high transmittance at the critical wavelength and, for glass, a high coefficient of thermal expansion. A special high-temperature epoxy adhesive composition is employed to form a relatively thick sealing annulus which keeps the glass window in the tapered recess and compensates for the differences in coefficients of expansion between the container and glass so as to maintain a hermetic seal as the unit is subjected to a wide range of temperatures.

  4. Deep borehole disposal of plutonium

    International Nuclear Information System (INIS)

    Gibb, F. G. F.; Taylor, K. J.; Burakov, B. E.

    2008-01-01

Excess plutonium not destined for burning as MOX or in Generation IV reactors is both a long-term waste management problem and a security threat. Immobilisation in mineral and ceramic-based waste forms for interim safe storage and eventual disposal is a widely proposed first step. The safest and most secure form of geological disposal for Pu yet suggested is in very deep boreholes, and we propose here that the key to successfully combining these immobilisation and disposal concepts is the encapsulation of the waste form in small cylinders of recrystallized granite. The underlying science is discussed and the results of high pressure and temperature experiments on zircon, depleted UO₂ and Ce-doped cubic zirconia enclosed in granitic melts are presented. The outcomes of these experiments demonstrate the viability of the proposed solution and that Pu could be successfully isolated from its environment for many millions of years. (authors)

  5. Automatic Differentiation and Deep Learning

    CERN Multimedia

    CERN. Geneva

    2018-01-01

Statistical learning has been getting more and more interest from the particle-physics community in recent times, with neural networks and gradient-based optimization being a focus. In this talk we shall discuss three things: automatic differentiation tools, i.e. tools to quickly build DAGs of computation that are fully differentiable, focusing on one such tool, "PyTorch"; easy deployment of trained neural networks into large systems with many constraints, for example deploying a model at the reconstruction phase, where the neural network has to be integrated into CERN's bulk data-processing C++-only environment; and some recent models in deep learning for segmentation and generation that might be useful for particle physics problems.
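A minimal illustration of the first item, using PyTorch's dynamically recorded computation DAG:

```python
# Reverse-mode automatic differentiation with PyTorch autograd.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum() + torch.sin(x).sum()   # DAG is recorded as ops execute
y.backward()                              # backpropagate through the DAG
print(x.grad)                             # exact dy/dx = 2x + cos(x)
print(torch.allclose(x.grad, 2 * x.detach() + torch.cos(x.detach())))
```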

  6. Jets in deep inelastic scattering

    International Nuclear Information System (INIS)

    Joensson, L.

    1995-01-01

Jet production in deep inelastic scattering provides a basis for the investigation of various phenomena related to QCD. Two-jet production at large Q² has been studied and the distributions with respect to the partonic scaling variables have been compared to models and to next-to-leading-order calculations. The first observations of azimuthal asymmetries of jets produced in first-order α_s processes have been obtained. The gluon-initiated boson-gluon fusion process permits a direct determination of the gluon density of the proton from an analysis of the jets produced in the hard scattering process. A comparison of these results with those from indirect extractions of the gluon density provides an important test of QCD. (author)

  7. NESTOR Deep Sea Neutrino Telescope

    International Nuclear Information System (INIS)

    Aggouras, G.; Anassontzis, E.G.; Ball, A.E.; Bourlis, G.; Chinowsky, W.; Fahrun, E.; Grammatikakis, G.; Green, C.; Grieder, P.; Katrivanos, P.; Koske, P.; Leisos, A.; Markopoulos, E.; Minkowsky, P.; Nygren, D.; Papageorgiou, K.; Przybylski, G.; Resvanis, L.K.; Siotis, I.; Sopher, J.; Staveris-Polikalas, A.; Tsagli, V.; Tsirigotis, A.; Tzamarias, S.; Zhukov, V.A.

    2006-01-01

One module of NESTOR, the Mediterranean deep-sea neutrino telescope, was deployed at a depth of 4000 m, 14 km off Sapienza Island, off the south-west coast of Greece. The deployment site provides excellent environmental characteristics. The deployed NESTOR module is constructed as a hexagonal latticed titanium star with 12 Optical Modules and a one-meter diameter titanium sphere which houses the electronics. Power and data were transferred through a 30 km electro-optical cable to the shore laboratory. In this report we describe briefly the detector and the detector electronics, discuss the first physics data acquired, and give the zenith angular distribution of the reconstructed muons.

  8. Deep Borehole Disposal Safety Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Freeze, Geoffrey A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Stein, Emily [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Price, Laura L. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); MacKinnon, Robert J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Tillman, Jack Bruce [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    This report presents a preliminary safety analysis for the deep borehole disposal (DBD) concept, using a safety case framework. A safety case is an integrated collection of qualitative and quantitative arguments, evidence, and analyses that substantiate the safety, and the level of confidence in the safety, of a geologic repository. This safety case framework for DBD follows the outline of the elements of a safety case, and identifies the types of information that will be required to satisfy these elements. At this very preliminary phase of development, the DBD safety case focuses on the generic feasibility of the DBD concept. It is based on potential system designs, waste forms, engineering, and geologic conditions; however, no specific site or regulatory framework exists. It will progress to a site-specific safety case as the DBD concept advances into a site-specific phase, progressing through consent-based site selection and site investigation and characterization.

  9. DeepRT: deep learning for peptide retention time prediction in proteomics

    OpenAIRE

    Ma, Chunwei; Zhu, Zhiyong; Ye, Jun; Yang, Jiarui; Pei, Jianguo; Xu, Shaohang; Zhou, Ruo; Yu, Chang; Mo, Fan; Wen, Bo; Liu, Siqi

    2017-01-01

Accurate predictions of peptide retention times (RT) in liquid chromatography have many applications in mass spectrometry-based proteomics. Herein, we present DeepRT, a deep learning based software for peptide retention time prediction. DeepRT automatically learns features directly from the peptide sequences using deep convolutional neural network (CNN) and recurrent neural network (RNN) models, which eliminates the need to use hand-crafted features or rules. After the feature learning, pr...
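As a hedged sketch (not the DeepRT code) of learning retention time directly from the sequence: embed amino acids, apply a 1-D convolution and a recurrent layer, and regress a scalar RT. All dimensions here are illustrative.

```python
# Sequence-to-RT regression sketch (PyTorch): embedding + Conv1d + GRU.
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"
to_idx = {a: i + 1 for i, a in enumerate(AA)}   # 0 is reserved for padding

class RTPredictor(nn.Module):
    def __init__(self, embed=32, conv=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(AA) + 1, embed, padding_idx=0)
        self.conv = nn.Conv1d(embed, conv, kernel_size=3, padding=1)
        self.rnn = nn.GRU(conv, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):                     # seq: (batch, length) int codes
        x = self.embed(seq).transpose(1, 2)     # -> (batch, embed, length)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, h = self.rnn(x)                      # final hidden state
        return self.head(h[-1]).squeeze(-1)     # predicted RT per peptide

peptide = torch.tensor([[to_idx[a] for a in "ACDEFGHIK"]])
print(RTPredictor()(peptide))                   # one scalar retention time
```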

  10. Malaysian NDT standards

    International Nuclear Information System (INIS)

    Khazali Mohd Zin

    2001-01-01

In order to become a developed country, Malaysia needs to develop her own national standards. It has been projected that by the year 2020, Malaysia will require about 8,000 standards (Department of Standards Malaysia). Currently more than 2,000 Malaysian Standards have been gazetted by the government, which is considerably too low ahead of the year 2020. NDT standards have been identified by the standards working group as one of the areas in which to promote our national standards. In this paper the author describes the steps taken to establish Malaysia's very own NDT standards. The project starts with the establishment of radiographic standards. (Author)

  11. Deep Learning Techniques for Top-Quark Reconstruction

    CERN Document Server

    Naderi, Kiarash

    2017-01-01

Top quarks are unique probes of standard model (SM) predictions and have the potential to be a window onto physics beyond the SM (BSM). Top quarks decay to a $Wb$ pair, and the $W$ can decay into leptons or jets. In a top pair event, assigning jets to their correct source is a challenge. In this study, I investigated different methods for improving top reconstruction. The main motivation was to use deep learning techniques in order to enhance the precision of top reconstruction.

  12. Equivalent drawbead performance in deep drawing simulations

    NARCIS (Netherlands)

    Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han

    1999-01-01

    Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the

  13. Is deep dreaming the new collage?

    Science.gov (United States)

    Boden, Margaret A.

    2017-10-01

    Deep dreaming (DD) can combine and transform images in surprising ways. But, being based in deep learning (DL), it is not analytically understood. Collage is an art form that is constrained along various dimensions. DD will not be able to generate collages until DL can be guided in a disciplined fashion.

  14. Deep web search: an overview and roadmap

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2011-01-01

    We review the state-of-the-art in deep web search and propose a novel classification scheme to better compare deep web search systems. The current binary classification (surfacing versus virtual integration) hides a number of implicit decisions that must be made by a developer. We make these

  15. Research Proposal for Distributed Deep Web Search

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien

    2010-01-01

    This proposal identifies two main problems related to deep web search, and proposes a step by step solution for each of them. The first problem is about searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input

  16. Development of Hydro-Mechanical Deep Drawing

    DEFF Research Database (Denmark)

    Zhang, Shi-Hong; Danckert, Joachim

    1998-01-01

The hydro-mechanical deep-drawing process is reviewed in this article. The process principles and features are introduced and the developments of the hydro-mechanical deep-drawing process in process performance, in theory and in numerical simulation are described. The applications are summarized. Some other related hydraulic forming processes are also dealt with as a comparison.

  17. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
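As a sketch of the record's central interpretation, a residual network can be read as a forward-Euler discretization of an ODE dY/dt = f(Y, θ), with stability influenced by the step size h. The block below shows only that Euler view; the paper's actual constraints on the layer operators are not reproduced, and all sizes are illustrative.

```python
# A residual block stack as forward-Euler steps: Y_{j+1} = Y_j + h f(Y_j).
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    def __init__(self, dim, depth=20, h=0.1):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(depth))
        self.h = h   # small step size moderates how far each layer can move Y

    def forward(self, y):
        for layer in self.layers:
            y = y + self.h * torch.tanh(layer(y))   # one Euler step per layer
        return y

y = ODEBlock(dim=16)(torch.randn(4, 16))
print(y.shape)   # (4, 16)
```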

  18. Temperature impacts on deep-sea biodiversity.

    Science.gov (United States)

    Yasuhara, Moriaki; Danovaro, Roberto

    2016-05-01

    Temperature is considered to be a fundamental factor controlling biodiversity in marine ecosystems, but precisely what role temperature plays in modulating diversity is still not clear. The deep ocean, lacking light and in situ photosynthetic primary production, is an ideal model system to test the effects of temperature changes on biodiversity. Here we synthesize current knowledge on temperature-diversity relationships in the deep sea. Our results from both present and past deep-sea assemblages suggest that, when a wide range of deep-sea bottom-water temperatures is considered, a unimodal relationship exists between temperature and diversity (that may be right skewed). It is possible that temperature is important only when at relatively high and low levels but does not play a major role in the intermediate temperature range. Possible mechanisms explaining the temperature-biodiversity relationship include the physiological-tolerance hypothesis, the metabolic hypothesis, island biogeography theory, or some combination of these. The possible unimodal relationship discussed here may allow us to identify tipping points at which on-going global change and deep-water warming may increase or decrease deep-sea biodiversity. Predicted changes in deep-sea temperatures due to human-induced climate change may have more adverse consequences than expected considering the sensitivity of deep-sea ecosystems to temperature changes. © 2014 Cambridge Philosophical Society.

  19. Deep waters : the Ottawa River and Canada's nuclear adventure

    International Nuclear Information System (INIS)

    Krenz, F.H.K.

    2004-01-01

    Deep Waters is an intimate account of the principal events and personalities involved in the successful development of the Canadian nuclear power system (CANDU), an achievement that is arguably one of Canada's greatest scientific and technical successes of the twentieth century. The author tells the stories of the people involved and the problems they faced and overcame and also relates the history of the development of the town of Deep River, built exclusively for the scientists and employees of the Chalk River Project and describes the impact of the Project on the traditional communities of the Ottawa Valley. Public understanding of nuclear power has remained confused, yet decisions about whether and how to use it are of vital importance to Canadians today - and will increase in importance as we seek to maintain our standard of living without doing irreparable damage to the environment around us. Deep Waters examines the issues involved in the use of nuclear power without over-emphasizing its positive aspects or avoiding its negative aspects.

  20. Minimally invasive trans-portal resection of deep intracranial lesions.

    Science.gov (United States)

    Raza, S M; Recinos, P F; Avendano, J; Adams, H; Jallo, G I; Quinones-Hinojosa, A

    2011-02-01

The surgical management of deep intra-axial lesions still requires microsurgical approaches that utilize retraction of deep white matter to obtain adequate visualization. We report our experience with a new tubular retractor system, designed specifically for intracranial applications, linked with frameless neuronavigation, for a cohort of intraventricular and deep intra-axial tumors. The ViewSite Brain Access System (Vycor, Inc) was used in a series of 9 adult and pediatric patients with a variety of pathologies. Histological diagnoses either resected or biopsied with the system included: colloid cyst, DNET, papillary pineal tumor, anaplastic astrocytoma, toxoplasmosis and lymphoma. The locations of the lesions approached include: lateral ventricle, basal ganglia, pulvinar/posterior thalamus and insular cortex. Post-operative imaging was assessed to determine extent of resection and extent of white matter damage along the surgical trajectory (based on T2/FLAIR and diffusion restriction/ADC signal). Satisfactory resection or biopsy was obtained in all patients. Radiographic analysis demonstrated evidence of white matter damage along the surgical trajectory in one patient. None of the patients experienced neurological deficits as a result of white matter retraction/manipulation. Based on a retrospective review of our experience, we feel that this access system, when used in conjunction with frameless neuronavigational systems, provides adequate visualization for tumor resection while permitting the use of standard microsurgical techniques through minimally invasive craniotomies. Our initial data indicate that this system may minimize white matter injury, but further studies are necessary. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Nuclear security standard: Argentina approach

    International Nuclear Information System (INIS)

    Bonet Duran, Stella M.; Rodriguez, Carlos E.; Menossi, Sergio A.; Serdeiro, Nelida H.

    2007-01-01

Argentina has a comprehensive regulatory system designed to assure the security and safety of radioactive sources, which has been in place for more than fifty years. In 1989 the Radiation Protection and Nuclear Safety branch of the National Atomic Energy Commission created the 'Council of Physical Protection of Nuclear Materials and Installations' (CAPFMIN). This Council published in 1992 a Physical Protection Standard based on a deep and careful analysis of INFCIRC 225/Rev.2, including topics like the 'sabotage scenario'. Since then, the world scenario has changed, and some concepts like 'design basis threat', 'detection, delay and response', and 'performance approach and prescriptive approach' have been applied to the design of physical protection systems in facilities other than nuclear installations. In Argentina, radioactive sources are widely used in medical and industrial applications, with more than 1,600 facilities controlled by the Nuclear Regulatory Authority (in Spanish, ARN). During 2005, measures like 'access control', 'timely detection of intruder', 'background checks', and 'security plan' were required by ARN for implementation in facilities with radioactive sources. To 'close the cycle', the next step is to produce a regulatory standard based on the operational experience acquired during 2005. ARN has developed a set of criteria for inclusion in a new standard on security of radioactive materials. Besides, a specific Regulatory Guide is being prepared to help licensees of facilities design a security system and fulfill the 'Design of Security System Questionnaire'. The present paper describes the proposed Standard on Security of Radioactive Sources and the draft of the Nuclear Security Regulatory Guidance, based on our regulatory experience and the latest international recommendations. (author)

  2. The International Standards Organisation offshore structures standard

    International Nuclear Information System (INIS)

    Snell, R.O.

    1994-01-01

    The International Standards Organisation has initiated a program to develop a suite of ISO Codes and Standards for the Oil Industry. The Offshore Structures Standard is one of seven topics being addressed. The scope of the standard will encompass fixed steel and concrete structures, floating structures, Arctic structures and the site specific assessment of mobile drilling and accommodation units. The standard will use as base documents the existing recommended practices and standards most frequently used for each type of structure, and will develop them to incorporate best published and recognized practice and knowledge where it provides a significant improvement on the base document. Work on the Code has commenced under the direction of an internationally constituted sub-committee comprising representatives from most of the countries with a substantial offshore oil and gas industry. This paper outlines the background to the code and the format, content and work program

  3. 3D Deep Learning Angiography (3D-DLA) from C-arm Conebeam CT.

    Science.gov (United States)

    Montoya, J C; Li, Y; Strother, C; Chen, G-H

    2018-05-01

Deep learning is a branch of artificial intelligence that has demonstrated unprecedented performance in many medical imaging applications. Our purpose was to develop a deep learning angiography method to generate 3D cerebral angiograms from a single contrast-enhanced C-arm conebeam CT acquisition in order to reduce image artifacts and radiation dose. A set of 105 3D rotational angiography examinations was randomly selected from an internal data base. All were acquired using a clinical system in conjunction with a standard injection protocol. More than 150 million labeled voxels from 35 subjects were used for training. A deep convolutional neural network was trained to classify each image voxel into 3 tissue types (vasculature, bone, and soft tissue). The trained deep learning angiography model was then applied for tissue classification into a validation cohort of 8 subjects and a final testing cohort of the remaining 62 subjects. The final vasculature tissue class was used to generate the 3D deep learning angiography images. To quantify the generalization error of the trained model, we calculated the accuracy, sensitivity, precision, and Dice similarity coefficients for vasculature classification in relevant anatomy. The 3D deep learning angiography and clinical 3D rotational angiography images were subjected to a qualitative assessment for the presence of intersweep motion artifacts. Vasculature classification accuracy and 95% CI in the testing dataset were 98.7% (98.3%-99.1%). No residual signal from osseous structures was observed for any 3D deep learning angiography testing cases except for small regions in the otic capsule and nasal cavity, compared with 37% (23/62) of the 3D rotational angiographies. Deep learning angiography accurately recreated the vascular anatomy of the 3D rotational angiography reconstructions without a mask. Deep learning angiography reduced misregistration artifacts induced by intersweep motion, and it reduced radiation exposure.
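The study quantifies vasculature classification with the Dice similarity coefficient; below is a minimal sketch of that standard metric on binary voxel masks, with synthetic data only.

```python
# Dice similarity coefficient on binary segmentation masks.
import numpy as np

def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean arrays."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred = np.random.default_rng(0).random((64, 64, 64)) > 0.5
print(dice(pred, pred))   # identical masks -> 1.0
```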

  4. Towards deep learning with segregated dendrites.

    Science.gov (United States)

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

  5. Standard Industry Fare Level

    Data.gov (United States)

    Department of Transportation — Standard Industry Fare Level was establish after airline deregulation to serve as the standard against which a statutory zone of operating expense reasonableness was...

  6. Standard Reference Tables -

    Data.gov (United States)

    Department of Transportation — The Standard Reference Tables (SRT) provide consistent reference data for the various applications that support Flight Standards Service (AFS) business processes and...

  7. Analyses of the deep borehole drilling status for a deep borehole disposal system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Youl; Choi, Heui Joo; Lee, Min Soo; Kim, Geon Young; Kim, Kyung Su [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

The purpose of disposal of radioactive wastes is not only to isolate them from humans, but also to inhibit leakage of any radioactive materials into the accessible environment. Because of the extremely high level and long time scale of the radioactivity of HLW (high-level radioactive waste), a mined deep geological disposal concept, with a disposal depth of about 500 m below ground, is considered the safest method to isolate spent fuels or high-level radioactive waste from the human environment with the best available technology at present. Therefore, as an alternative disposal concept, deep borehole disposal technology is under consideration in a number of countries in terms of its outstanding safety and cost effectiveness. In this paper, as one of the key technologies of a deep borehole disposal system, the general status of deep drilling technologies in the oil industry, geothermal industry and geoscientific field was reviewed for deep borehole disposal of high level radioactive wastes. Based on the results of this review, the very preliminary applicability of deep drilling technology for deep borehole disposal, such as the relation between depth and diameter, drilling time and feasibility classification, was analyzed.

  8. Free-living energy expenditure reduced after deep brain stimulation surgery for Parkinson's disease

    DEFF Research Database (Denmark)

    Jørgensen, Hans Ulrik; Werdelin, Lene; Lokkegaard, Annemette

    2012-01-01

    with deep brain stimulation in the subthalamic nucleus (STN-DBS) is now considered the gold standard in fluctuating PD. Many patients experience a gain of weight following the surgery. The aim of this study was to identify possible mechanisms, which may contribute to body weight gain in patients with PD...

  9. Deep inelastic scattering as a probe of new hadronic mass scales

    International Nuclear Information System (INIS)

    Burges, C.J.C.; Schnitzer, H.J.

    1984-01-01

    We present the general form for deep-inelastic cross sections obtained from all SU(3) x SU(2) x U(1) invariant operators of dimension six or less. The operators of dimension six generate corrections to the predictions of the standard model, which serve as a probe of a possible new mass-scale Λ and other new physics. (orig.)

  10. The application of deep brain stimulation in the treatment of psychiatric disorders

    NARCIS (Netherlands)

    Graat, Ilse; Figee, Martijn; Denys, D.

    2017-01-01

    Deep brain stimulation (DBS) is a last-resort treatment for neurological and psychiatric disorders that are refractory to standard treatment. Over the last decades, the progress of DBS in psychiatry has been slower than in neurology, in part owing to the heterogenic symptomatology and complex

  11. Creating Deep Time Diaries: An English/Earth Science Unit for Middle School Students

    Science.gov (United States)

    Jordan, Vicky; Barnes, Mark

    2006-01-01

    Students love a good story. That is why incorporating literary fiction that parallels teaching goals and standards can be effective. In the interdisciplinary, thematic six-week unit described in this article, the authors use the fictional book "The Deep Time Diaries," by Gary Raham, to explore topics in paleontology, Earth science, and creative…

  12. Duplex scanning in the diagnosis of acute deep vein thrombosis of the lower extremity

    NARCIS (Netherlands)

    van Ramshorst, B.; Legemate, D. A.; Verzijlbergen, J. F.; Hoeneveld, H.; Eikelboom, B. C.; de Valois, J. C.; Meuwissen, O. J.

    1991-01-01

    In a prospective study the value of duplex scanning in the diagnosis of acute femoro-popliteal thrombosis was compared to conventional contrast venography (CV) as a gold standard. A total of 126 legs in 117 patients suspected of having deep vein thrombosis (DVT) or pulmonary embolism (PE) were

  13. Sunnyvale Marine Climate Deep Retrofit

    Energy Technology Data Exchange (ETDEWEB)

    German, A.; Siddiqui, A.; Dakin, B.

    2014-11-01

The Alliance for Residential Building Innovation (ARBI) and Allen Gilliland of One Sky Homes collaborated on a marine climate retrofit project designed to meet both Passive House (PH) and Building America (BA) program standards. The scope included sealing, installing wall, roof and floor insulation (previously lacking), replacing windows, upgrading the heating and cooling system, and installing mechanical ventilation.

  14. Sunnyvale Marine Climate Deep Retrofit

    Energy Technology Data Exchange (ETDEWEB)

    German, A. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Siddiqui, A. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Dakin, B. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)

    2014-11-01

    The Alliance for Residential Building Innovation (ARBI) and Allen Gilliland of One Sky Homes collaborated on a marine climate retrofit project designed to meet both Passive House (PH) and Building America program standards. The scope included sealing, installing wall, roof and floor insulation (previously lacking), replacing windows, upgrading the heating and cooling system, and installing mechanical ventilation.

  15. Overview of deep learning in medical imaging.

    Science.gov (United States)

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a

  16. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly less parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
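For context, the baseline the record's neural approach improves on is plain belief propagation / min-sum decoding over the code's Tanner graph. Below is a hedged, generic min-sum sketch: a textbook formulation with no learned weights, whereas the paper's contribution is precisely to attach trainable weights to these messages. The small parity-check matrix and LLRs are illustrative.

```python
# Plain (unweighted) min-sum decoding of a binary linear code.
import numpy as np

def min_sum_decode(H, llr, iters=10):
    """H: (m, n) parity-check matrix; llr: channel LLRs (positive -> bit 0)."""
    m, n = H.shape
    C = np.zeros((m, n))                     # check -> variable messages
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        total = llr + C.sum(axis=0)          # posterior LLR per variable bit
        V = (total - C) * H                  # variable -> check, extrinsic only
        for i in range(m):
            idx = np.flatnonzero(H[i])
            v = V[i, idx]
            s = np.prod(np.sign(v) + (v == 0))       # product of message signs
            for t, j in enumerate(idx):
                others = np.abs(np.delete(v, t))
                sj = np.sign(v[t]) + (v[t] == 0)
                C[i, j] = (s * sj) * others.min()    # exclude own message
        hard = ((llr + C.sum(axis=0)) < 0).astype(int)
        if not (H @ hard % 2).any():         # stop when all checks satisfied
            break
    return hard

H = np.array([[1, 1, 0, 1, 1, 0, 0],         # a (7,4) Hamming-style code
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.3, 0.8, 1.7, 1.2, 1.5])  # noisy all-zero word
print(min_sum_decode(H, llr))                # -> [0 0 0 0 0 0 0]
```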

  17. Deep-learning Top Taggers or The End of QCD?

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    https://arxiv.org/abs/1701.08784 Machine learning based on convolutional neural networks can be used to study jet images from the LHC. Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.
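As a hedged sketch of the jet-image idea: a small CNN maps a single-channel calorimeter image to a top-vs-QCD score. The 40×40 grid and the architecture below are illustrative assumptions, not the tuned DeepTop network.

```python
# Jet-image top tagger sketch (PyTorch): calorimeter image -> top/QCD logit.
import torch
import torch.nn as nn

tagger = nn.Sequential(
    nn.Conv2d(1, 8, 4, padding=1), nn.ReLU(),   # 40x40 -> 39x39
    nn.MaxPool2d(2),                            # -> 19x19
    nn.Conv2d(8, 8, 4, padding=1), nn.ReLU(),   # -> 18x18
    nn.MaxPool2d(2),                            # -> 9x9
    nn.Flatten(),                               # 8 * 9 * 9 = 648 features
    nn.Linear(648, 64), nn.ReLU(),
    nn.Linear(64, 1),                           # logit: top quark vs QCD
)

jet_images = torch.rand(16, 1, 40, 40)   # pT deposits on an eta-phi grid
prob_top = torch.sigmoid(tagger(jet_images))
print(prob_top.shape)                    # (16, 1): per-jet top probability
```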

  18. Invited Article: Deep Impact instrument calibration

    International Nuclear Information System (INIS)

    Klaasen, Kenneth P.; Mastrodemos, Nickolaos; A'Hearn, Michael F.; Farnham, Tony; Groussin, Olivier; Ipatov, Sergei; Li Jianyang; McLaughlin, Stephanie; Sunshine, Jessica; Wellnitz, Dennis; Baca, Michael; Delamere, Alan; Desnoyer, Mark; Thomas, Peter; Hampton, Donald; Lisse, Carey

    2008-01-01

    Calibration of NASA's Deep Impact spacecraft instruments allows reliable scientific interpretation of the images and spectra returned from comet Tempel 1. Calibrations of the four onboard remote sensing imaging instruments have been performed in the areas of geometric calibration, spatial resolution, spectral resolution, and radiometric response. Error sources such as noise (random, coherent, encoding, data compression), detector readout artifacts, scattered light, and radiation interactions have been quantified. The point spread functions (PSFs) of the medium resolution instrument and its twin impactor targeting sensor are near the theoretical minimum [∼1.7 pixels full width at half maximum (FWHM)]. However, the high resolution instrument camera was found to be out of focus with a PSF FWHM of ∼9 pixels. The charge coupled device (CCD) read noise is ∼1 DN. Electrical cross-talk between the CCD detector quadrants is correctable to <2 DN. The IR spectrometer response nonlinearity is correctable to ∼1%. Spectrometer read noise is ∼2 DN. The variation in zero-exposure signal level with time and spectrometer temperature is not fully characterized; currently corrections are good to ∼10 DN at best. Wavelength mapping onto the detector is known within 1 pixel; spectral lines have a FWHM of ∼2 pixels. About 1% of the IR detector pixels behave badly and remain uncalibrated. The spectrometer exhibits a faint ghost image from reflection off a beamsplitter. Instrument absolute radiometric calibration accuracies were determined generally to <10% using star imaging. Flat-field calibration reduces pixel-to-pixel response differences to ∼0.5% for the cameras and <2% for the spectrometer. A standard calibration image processing pipeline is used to produce archival image files for analysis by researchers.

  19. Invited Article: Deep Impact instrument calibration.

    Science.gov (United States)

    Klaasen, Kenneth P; A'Hearn, Michael F; Baca, Michael; Delamere, Alan; Desnoyer, Mark; Farnham, Tony; Groussin, Olivier; Hampton, Donald; Ipatov, Sergei; Li, Jianyang; Lisse, Carey; Mastrodemos, Nickolaos; McLaughlin, Stephanie; Sunshine, Jessica; Thomas, Peter; Wellnitz, Dennis

    2008-09-01

    Calibration of NASA's Deep Impact spacecraft instruments allows reliable scientific interpretation of the images and spectra returned from comet Tempel 1. Calibrations of the four onboard remote sensing imaging instruments have been performed in the areas of geometric calibration, spatial resolution, spectral resolution, and radiometric response. Error sources such as noise (random, coherent, encoding, data compression), detector readout artifacts, scattered light, and radiation interactions have been quantified. The point spread functions (PSFs) of the medium resolution instrument and its twin impactor targeting sensor are near the theoretical minimum [ approximately 1.7 pixels full width at half maximum (FWHM)]. However, the high resolution instrument camera was found to be out of focus with a PSF FWHM of approximately 9 pixels. The charge coupled device (CCD) read noise is approximately 1 DN. Electrical cross-talk between the CCD detector quadrants is correctable to <2 DN. The IR spectrometer response nonlinearity is correctable to approximately 1%. Spectrometer read noise is approximately 2 DN. The variation in zero-exposure signal level with time and spectrometer temperature is not fully characterized; currently corrections are good to approximately 10 DN at best. Wavelength mapping onto the detector is known within 1 pixel; spectral lines have a FWHM of approximately 2 pixels. About 1% of the IR detector pixels behave badly and remain uncalibrated. The spectrometer exhibits a faint ghost image from reflection off a beamsplitter. Instrument absolute radiometric calibration accuracies were determined generally to <10% using star imaging. Flat-field calibration reduces pixel-to-pixel response differences to approximately 0.5% for the cameras and <2% for the spectrometer. A standard calibration image processing pipeline is used to produce archival image files for analysis by researchers.

  20. Standards for holdup measurement

    International Nuclear Information System (INIS)

    Zucker, M.S.

    1982-01-01

    Holdup measurements, needed for material balance, depend intensively on standards and on interpretation of the calibration procedure. More than in other measurements, the calibration procedure using the standard becomes part of the standard. Standards practical for field use and calibration techniques have been developed. While accuracy in holdup measurements is comparatively poor, avoidance of bias is a necessary goal.

  1. Creating standards: Creating illusions?

    DEFF Research Database (Denmark)

    Linneberg, Mai Skjøtt

    Written standards may open up for the creation of illusions. These are created when the content of written standards is not in accordance with the perception that standard adopters and standard users have of the specific practice phenomenon's content. This general theoretical argument is exemplified by the specific...

  2. WFIRST: Science from Deep Field Surveys

    Science.gov (United States)

    Koekemoer, Anton; Foley, Ryan; WFIRST Deep Field Working Group

    2018-01-01

    WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields, including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would, for example, yield an Ultra Deep Field (UDF) reaching depths at visible and near-infrared wavelengths similar to those obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed at locations on the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and presents here a summary of the properties of different locations on the sky that may be considered for future deep fields with WFIRST.
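
    As a rough cross-check of the quoted area ratio (our arithmetic, with an assumed value for Hubble's ACS field of view, which is not stated in the abstract):

      wfi_fov_deg2 = 0.3                        # WFI field of view, from the abstract
      acs_fov_arcmin2 = 11.3                    # assumed HST/ACS field, ~202" x 202"
      wfi_fov_arcmin2 = wfi_fov_deg2 * 3600.0   # 1 deg^2 = 3600 arcmin^2
      print(wfi_fov_arcmin2 / acs_fov_arcmin2)  # ~96x, consistent with "about 100-200 times larger"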

  3. Deep Learning and Its Applications in Biomedicine.

    Science.gov (United States)

    Cao, Chensi; Liu, Feng; Tan, Hai; Song, Deshou; Shu, Wenjie; Li, Weizhong; Zhou, Yiming; Bo, Xiaochen; Xie, Zhi

    2018-02-01

    Advances in biological and medical technologies have been providing us with explosive volumes of biological and physiological data, such as medical images, electroencephalography, and genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives on future directions in the field of deep learning. Copyright © 2018. Production and hosting by Elsevier B.V.

  4. The deep lymphatic anatomy of the hand.

    Science.gov (United States)

    Ma, Chuan-Xiang; Pan, Wei-Ren; Liu, Zhi-An; Zeng, Fan-Qiang; Qiu, Zhi-Qiang

    2018-04-03

    The deep lymphatic anatomy of the hand still remains the least described in medical literature. Eight hands were harvested from four nonembalmed human cadavers amputated above the wrist. A small amount of 6% hydrogen peroxide was employed to detect the lymphatic vessels around the superficial and deep palmar vascular arches, in webs from the index to little fingers, the thenar and hypothenar areas. A 30-gauge needle was inserted into the vessels and injected with a barium sulphate compound. Each specimen was dissected, photographed and radiographed to demonstrate deep lymphatic distribution of the hand. Five groups of deep collecting lymph vessels were found in the hand: superficial palmar arch lymph vessel (SPALV); deep palmar arch lymph vessel (DPALV); thenar lymph vessel (TLV); hypothenar lymph vessel (HTLV); deep finger web lymph vessel (DFWLV). Each group of vessels drained in different directions first, then all turned and ran towards the wrist in different layers. The deep lymphatic drainage of the hand has been presented. The results will provide an anatomical basis for clinical management, educational reference and scientific research. Copyright © 2018 Elsevier GmbH. All rights reserved.

  5. Moby and Moby 2: creatures of the deep (web).

    Science.gov (United States)

    Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D

    2009-03-01

    Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
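
    For readers unfamiliar with the standards discussed here, the following is a minimal sketch of querying RDF data with SPARQL from Python using rdflib; the data file and query are hypothetical placeholders, not the CardioSHARE/Moby-2 interface itself.

      from rdflib import Graph

      g = Graph()
      g.parse("example_data.rdf")   # hypothetical local RDF/OWL file

      # Hypothetical query: list resources and their rdfs:label values
      query = """
      SELECT ?s ?label
      WHERE { ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label . }
      LIMIT 10
      """
      for row in g.query(query):
          print(row.s, row.label)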

  6. Collaboration Between Multistakeholder Standards

    DEFF Research Database (Denmark)

    Rasche, Andreas; Maclean, Camilla

    Public interest in corporate social responsibility (CSR) has resulted in a wide variety of multistakeholder CSR standards in which companies can choose to participate. While such standards reflect collaborative governance arrangements between public and private actors, the market for corporate responsibility is unlikely to support a great variety of partly competing and overlapping standards. Increased collaboration between these standards would enhance both their impact and their adoption by firms. This report examines the nature, benefits, and shortcomings of existing multistakeholder standards...

  7. Hello World Deep Learning in Medical Imaging.

    Science.gov (United States)

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    Machine learning, and notably deep learning, has recently become popular in medical imaging, achieving state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
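
    The tutorial's own code is not reproduced in this record; as a stand-in, here is a minimal sketch of the kind of small network such a "hello world" exercise typically builds, using the Keras API. The input size, two-class output and all hyperparameters are illustrative assumptions.

      from tensorflow.keras import layers, models

      # Tiny CNN for two-class medical image classification (illustrative only)
      model = models.Sequential([
          layers.Input(shape=(128, 128, 1)),       # assumed grayscale input size
          layers.Conv2D(16, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Conv2D(32, 3, activation="relu"),
          layers.MaxPooling2D(),
          layers.Flatten(),
          layers.Dense(64, activation="relu"),
          layers.Dense(1, activation="sigmoid"),   # e.g., abnormal vs. normal
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      # model.fit(train_images, train_labels, epochs=5, validation_split=0.2)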

  8. Deep Generative Models for Molecular Science

    DEFF Research Database (Denmark)

    Jørgensen, Peter Bjørn; Schmidt, Mikkel Nørgaard; Winther, Ole

    2018-01-01

    Generative deep machine learning models now rival traditional quantum-mechanical computations in predicting properties of new structures, and they come with a significantly lower computational cost, opening new avenues in computational molecular science. In the last few years, a variety of deep generative models have been proposed for modeling molecules, which differ in both their model structure and choice of input features. We review these recent advances within deep generative models for predicting molecular properties, with particular focus on models based on the probabilistic autoencoder (or...

  9. Harnessing the Deep Web: Present and Future

    OpenAIRE

    Madhavan, Jayant; Afanasiev, Loredana; Antova, Lyublena; Halevy, Alon

    2009-01-01

    Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where...

  10. Desalination Economic Evaluation Program (DEEP). User's manual

    International Nuclear Information System (INIS)

    2000-01-01

    DEEP (formerly named the "Co-generation and Desalination Economic Evaluation" Spreadsheet, CDEE) was originally developed by General Atomics under contract and has been used in the IAEA's feasibility studies. For further confidence in the software, it was validated in March 1998. A user-friendly version was then issued under the name DEEP at the end of 1998. DEEP output includes the levelised cost of water and power, a breakdown of cost components, energy consumption and net saleable power for each selected option. Specific power plants can be modelled by adjustment of input data including design power, power cycle parameters and costs.
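
    DEEP's internal cost model is not given in this record; as a generic illustration of what a levelised cost is, the sketch below computes the ratio of discounted lifetime costs to discounted lifetime output. The cash flows and discount rate are made-up inputs.

      def levelised_cost(costs, outputs, rate):
          """Generic levelised cost: discounted costs per discounted unit of
          output; costs[t] and outputs[t] are annual totals, rate is the
          discount rate."""
          pv_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
          pv_out = sum(q / (1 + rate) ** t for t, q in enumerate(outputs))
          return pv_cost / pv_out

      # Illustrative 3-year example: costs in $, water output in m^3
      print(levelised_cost([5e6, 1e6, 1e6], [0.0, 2e6, 2e6], 0.08))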

  11. Zooplankton at deep Red Sea brine pools

    KAUST Repository

    Kaartvedt, Stein

    2016-03-02

    The deep-sea anoxic brines of the Red Sea comprise unique, complex and extreme habitats. These environments are too harsh for metazoans, while the brine–seawater interface harbors dense microbial populations. We investigated the adjacent pelagic fauna at two brine pools using net tows, video records from a remotely operated vehicle and submerged echosounders. Waters just above the brine pool of Atlantis II Deep (2000 m depth) appeared depleted of macrofauna. In contrast, the fauna appeared to be enriched at the Kebrit Deep brine–seawater interface (1466 m).

  12. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector machines...

  13. Influence of deep neuromuscular block on the surgeonś assessment of surgical conditions during laparotomy

    DEFF Research Database (Denmark)

    Madsen, M V; Scheppan, S; Mørk, E

    2017-01-01

    Background: During laparotomy, surgeons may experience difficult surgical conditions if the patient's abdominal wall or diaphragm is tense. Deep neuromuscular block (NMB), defined as a post-tetanic count (PTC) of 0-1, paralyses the abdominal wall muscles and the diaphragm. We hypothesized that deep NMB (PTC 0-1) would improve subjective ratings of surgical conditions during upper laparotomy as compared with standard NMB. Methods: This was a double-blinded, randomized study. A total of 128 patients undergoing elective upper laparotomy were randomized to either continuous deep NMB (infusion... No differences in ... time, occurrence of wound infection, and wound dehiscence were found. Conclusions: Deep NMB compared with standard NMB resulted in better subjective ratings of surgical conditions during laparotomy.

  14. Deep-sea environment and biodiversity of the West African Equatorial margin

    Science.gov (United States)

    Sibuet, Myriam; Vangriesheim, Annick

    2009-12-01

    The long-term BIOZAIRE multidisciplinary deep-sea environmental program on the West Equatorial African margin, organized in partnership between Ifremer and TOTAL, aimed at characterizing the benthic community structure in relation to physical and chemical processes in a region of oil and gas interest. The morphology of the deep Congo submarine channel and the sedimentological structures of the deep-sea fan were established during the geological ZAIANGO project and helped to select study sites ranging from 350 to 4800 m water depth inside or near the channel and away from its influence. Ifremer conducted eight deep-sea cruises on board research vessels between 2000 and 2005. Standardized methods of sampling, together with new technologies such as the ROV Victor 6000 and its associated instrumentation, were used to investigate this poorly known continental margin. In addition to the study of sedimentary environments more or less influenced by turbidity events, the discovery of one of the largest cold seeps near the Congo channel and of deep coral reefs extends our knowledge of the different habitats of this margin. This paper presents the background, objectives and major results of the BIOZAIRE Program. It highlights the work achieved in the 16 papers in this special issue. This synthesis paper describes the knowledge acquired at a regional and local scale of the Equatorial East Atlantic margin, and tackles new interdisciplinary questions to be answered in the various domains of physics, chemistry, taxonomy and ecology to better understand the deep-sea environment in the Gulf of Guinea.

  15. Joint Segmentation of Multiple Thoracic Organs in CT Images with Two Collaborative Deep Architectures.

    Science.gov (United States)

    Trullo, Roger; Petitjean, Caroline; Nie, Dong; Shen, Dinggang; Ruan, Su

    2017-09-01

    Computed Tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of Organs at Risk (OAR) in thoracic CT images is a necessary step before radiotherapy, for preventing irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is a challenge. In this paper, we focus on developing a novel framework for automatic delineation of OARs. Different from previous works in OAR segmentation, where each organ is segmented separately, we propose two collaborative deep architectures to jointly segment all organs, including esophagus, heart, aorta and trachea. Since most of the organ borders are ill-defined, we believe spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network, which will be used in the second network, when each OAR is segmented in turn. Specifically, we use the first deep architecture, a deep SharpMask architecture, for providing an effective combination of low-level representations with deep high-level features, and then take into account the spatial relationships between organs by the use of Conditional Random Fields (CRF). Next, the second deep architecture is employed to refine the segmentation of each organ by using the maps obtained from the first deep architecture to learn anatomical constraints for guiding and refining the segmentations. Experimental results show superior performance on 30 CT scans, compared with other state-of-the-art methods.

  16. Recent machine learning advancements in sensor-based mobility analysis: Deep learning for Parkinson's disease assessment.

    Science.gov (United States)

    Eskofier, Bjoern M; Lee, Sunghoon I; Daneault, Jean-Francois; Golabchi, Fatemeh N; Ferreira-Carvalho, Gabriela; Vergara-Diaz, Gloria; Sapienza, Stefano; Costante, Gianluca; Klucken, Jochen; Kautz, Thomas; Bonato, Paolo

    2016-08-01

    The development of wearable sensors has opened the door for long-term assessment of movement disorders. However, there is still a need to develop methods suitable for monitoring motor symptoms in and outside the clinic. The purpose of this paper was to investigate deep learning as a method for this monitoring. Deep learning recently broke records in speech and image classification, but it has not been fully investigated as a potential approach to analyze wearable sensor data. We collected data from ten patients with idiopathic Parkinson's disease using inertial measurement units. Several motor tasks were expert-labeled and used for classification. We specifically focused on the detection of bradykinesia. For this, we compared standard machine learning pipelines with deep learning based on convolutional neural networks. Our results showed that deep learning outperformed other state-of-the-art machine learning algorithms by at least 4.6% in terms of classification rate. We contribute a discussion of the advantages and disadvantages of deep learning for sensor-based movement assessment and conclude that deep learning is a promising method for this field.
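
    The authors' exact network is described in the paper itself; the sketch below shows the general shape of such a pipeline (fixed-length inertial windows in, a per-window bradykinesia score out), with the window length, channel count and layer sizes as illustrative assumptions.

      from tensorflow.keras import layers, models

      # 1D CNN over fixed-length IMU windows (e.g., 6 channels: 3-axis accel + gyro)
      model = models.Sequential([
          layers.Input(shape=(200, 6)),            # assumed 200-sample windows
          layers.Conv1D(32, 5, activation="relu"),
          layers.MaxPooling1D(2),
          layers.Conv1D(64, 5, activation="relu"),
          layers.GlobalAveragePooling1D(),
          layers.Dense(1, activation="sigmoid"),   # bradykinesia present / absent
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])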

  17. NATURAL GAS RESOURCES IN DEEP SEDIMENTARY BASINS

    Energy Technology Data Exchange (ETDEWEB)

    Thaddeus S. Dyman; Troy Cook; Robert A. Crovelli; Allison A. Henry; Timothy C. Hester; Ronald C. Johnson; Michael D. Lewan; Vito F. Nuccio; James W. Schmoker; Dennis B. Riggin; Christopher J. Schenk

    2002-02-05

    From a geological perspective, deep natural gas resources are generally defined as resources occurring in reservoirs at or below 15,000 feet, whereas ultra-deep gas occurs below 25,000 feet. From an operational point of view, "deep" is often thought of in a relative sense based on the geologic and engineering knowledge of gas (and oil) resources in a particular area. Deep gas can be found in either conventionally-trapped or unconventional basin-center accumulations that are essentially large single fields having spatial dimensions often exceeding those of conventional fields. Exploration for deep conventional and unconventional basin-center natural gas resources deserves special attention because these resources are widespread and occur in diverse geologic environments. In 1995, the U.S. Geological Survey estimated that 939 Tcf of technically recoverable natural gas remained to be discovered or was part of reserve appreciation from known fields in the onshore areas and State waters of the United States. Of this USGS resource, nearly 114 trillion cubic feet (Tcf) of technically recoverable gas remains to be discovered from deep sedimentary basins. Worldwide estimates of deep gas are also high. The U.S. Geological Survey World Petroleum Assessment 2000 Project recently estimated a world mean undiscovered conventional gas resource outside the U.S. of 844 Tcf below 4.5 km (about 15,000 feet). Less is known about the origins of deep gas than about the origins of gas at shallower depths because fewer wells have been drilled into the deeper portions of many basins. Some of the many factors contributing to the origin of deep gas include the thermal stability of methane, the role of water and non-hydrocarbon gases in natural gas generation, porosity loss with increasing thermal maturity, the kinetics of deep gas generation, thermal cracking of oil to gas, and source rock potential based on thermal maturity and kerogen type. Recent experimental simulations...

  18. Efficacy of Laser Debridement With Autologous Split-Thickness Skin Grafting in Promoting Improved Wound Healing of Deep Cutaneous Sulfur Mustard Burns

    National Research Council Canada - National Science Library

    Graham, John

    2002-01-01

    (1) full thickness CO2 laser debridement followed by skin grafting, (2) full thickness sharp surgical tangential excision followed by skin grafting, the 'Gold Standard' used in deep thermal burns management, (3...

  19. Results of Using the Global Positioning System to Maintain the Time and Frequency Synchronization in the Jet Propulsion Laboratory's Deep Space Network

    National Research Council Canada - National Science Library

    Clements, P. A; Kirk, A; Unglaub, R

    1986-01-01

    The Jet Propulsion Laboratory's Deep Space Network (DSN) consists of three tracking stations located in California, Australia, and Spain, each with two hydrogen maser clocks as the time and frequency standard...

  20. Comet Dust After Deep Impact

    Science.gov (United States)

    Wooden, Diane H.; Harker, David E.; Woodward, Charles E.

    2006-01-01

    When the Deep Impact Mission hit Jupiter Family comet 9P/Tempel 1, an ejecta crater was formed and a pocket of volatile gases and ices from 10-30 m below the surface was exposed (A'Hearn et al. 2005). This resulted in a gas geyser that persisted for a few hours (Sugita et al. 2005). The gas geyser pushed dust grains into the coma (Sugita et al. 2005), as well as ice grains (Schulz et al. 2006). The smaller of the dust grains were submicron in radii (0.2-0.3 micron), and were primarily composed of highly refractory minerals including amorphous (non-graphitic) carbon, and silicate minerals including amorphous (disordered) olivine (Fe,Mg)2SiO4 and pyroxene (Fe,Mg)SiO3 and crystalline Mg-rich olivine. The smaller grains moved faster, as expected from the size-dependent velocity law produced by gas drag on grains. The mineralogy evolved with time: progressively larger grains persisted in the near-nucleus region, having been imparted with slower velocities, and the mineralogies of these larger grains appeared simpler and without crystals. The smaller 0.2-0.3 micron grains reached the coma in about 1.5 hours (1 arcsec = 740 km), were more diverse in mineralogy than the larger grains and contained crystals, and appeared to travel through the coma together. No smaller grains appeared at larger coma distances later (with slower velocities), implying that if grain fragmentation occurred, it happened within the gas acceleration zone. These results of the high spatial resolution spectroscopy (GEMINI+Michelle: Harker et al. 2005, 2006; Subaru+COMICS: Sugita et al. 2005) revealed that the grains released from the interior were different from those of the nominally active areas of this comet in their: (a) crystalline content, (b) smaller size, (c) more diverse mineralogy. The temporal changes in the spectra, recorded by GEMINI+Michelle every 7 minutes, indicated that the dust mineralogy is inhomogeneous and, unexpectedly, the portion of the size distribution dominated by smaller grains has...

  1. Anisotropy in the deep Earth

    Science.gov (United States)

    Romanowicz, Barbara; Wenk, Hans-Rudolf

    2017-08-01

    Seismic anisotropy has been found in many regions of the Earth's interior. Its presence in the Earth's crust has been known since the 19th century, and is due in part to the alignment of anisotropic crystals in rocks, and in part to patterns in the distribution of fractures and pores. In the upper mantle, seismic anisotropy was discovered 50 years ago, and can be attributed, for the most part, to the alignment of intrinsically anisotropic olivine crystals during large scale deformation associated with convection. There is some indication for anisotropy in the transition zone, particularly in the vicinity of subducted slabs. Here we focus on the deep Earth - the lower mantle and core, where anisotropy is not yet mapped in detail, nor is there consensus on its origin. Most of the lower mantle appears largely isotropic, except in the last 200-300 km, in the D″ region, where evidence for seismic anisotropy has been accumulating since the late 1980s, mostly from shear wave splitting measurements. Recently, a picture has been emerging, where strong anisotropy is associated with high shear velocities at the edges of the large low shear velocity provinces (LLSVPs) in the central Pacific and under Africa. These observations are consistent with being due to the presence of highly anisotropic MgSiO3 post-perovskite crystals, aligned during the deformation of slabs impinging on the core-mantle boundary, and upwelling flow within the LLSVPs. We also discuss mineral physics aspects such as ultrahigh pressure deformation experiments, first principles calculations to obtain information about elastic properties, and derivation of dislocation activity based on bonding characteristics. Polycrystal plasticity simulations can predict anisotropy but models are still highly idealized and neglect the complex microstructure of polyphase aggregates with strong and weak components. A promising direction for future progress in understanding the origin of seismic anisotropy in the deep mantle...

  2. Standardisation in standards

    International Nuclear Information System (INIS)

    McDonald, J. C.

    2012-01-01

    The following observations are offered by one who has served on national and international standards-writing committees and standards review committees. Service on working groups consists of either updating previous standards or developing new standards. The process of writing either type of document proceeds along similar lines. The first order of business is to recognise the need for developing or updating a standard and to identify the potential user community. It is also necessary to ensure that there is a required number of members willing to do the writing. A justification is required as to why a new standard should be developed, and this is written as a new work item proposal or a project initiation notification system form. This document must be filed officially and approved, and a search is then undertaken to ensure that the proposed new standard will not duplicate a standard that has already been published or is underway in another standards organisation. (author)

  3. DeepDive: Declarative Knowledge Base Construction.

    Science.gov (United States)

    De Sa, Christopher; Ratner, Alex; Ré, Christopher; Shin, Jaeho; Wang, Feiran; Wu, Sen; Zhang, Ce

    2016-03-01

    The dark data extraction or knowledge base construction (KBC) problem is to populate a SQL database with information from unstructured data sources including emails, webpages, and pdf reports. KBC is a long-standing problem in industry and research that encompasses problems of data extraction, cleaning, and integration. We describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems. The key idea in DeepDive is that statistical inference and machine learning are key tools to attack classical data problems in extraction, cleaning, and integration in a unified and more effective manner. DeepDive programs are declarative in that one cannot write probabilistic inference algorithms; instead, one interacts by defining features or rules about the domain. A key reason for this design choice is to enable domain experts to build their own KBC systems. We present the applications, abstractions, and techniques of DeepDive employed to accelerate construction of KBC systems.

  4. Variational inference & deep learning : A new synthesis

    NARCIS (Netherlands)

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  5. Pathways to deep decarbonization - 2015 report

    International Nuclear Information System (INIS)

    Ribera, Teresa; Colombier, Michel; Waisman, Henri; Bataille, Chris; Pierfederici, Roberta; Sachs, Jeffrey; Schmidt-Traub, Guido; Williams, Jim; Segafredo, Laura; Hamburg Coplan, Jill; Pharabod, Ivan; Oury, Christian

    2015-12-01

    In September 2015, the Deep Decarbonization Pathways Project published the Executive Summary of the Pathways to Deep Decarbonization: 2015 Synthesis Report. The full 2015 Synthesis Report was launched in Paris on December 3, 2015, at a technical workshop with the Mitigation Action Plans and Scenarios (MAPS) program. The Deep Decarbonization Pathways Project (DDPP) is a collaborative initiative to understand and show how individual countries can transition to a low-carbon economy and how the world can meet the internationally agreed target of limiting the increase in global mean surface temperature to less than 2 degrees Celsius (°C). Achieving the 2 °C limit will require that global net emissions of greenhouse gases (GHG) approach zero by the second half of the century. In turn, this will require a profound transformation of energy systems by mid-century through steep declines in carbon intensity in all sectors of the economy, a transition we call 'deep decarbonization'.

  6. Variational inference & deep learning: A new synthesis

    OpenAIRE

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  7. DNA Replication Profiling Using Deep Sequencing.

    Science.gov (United States)

    Saayman, Xanita; Ramos-Pérez, Cristina; Brown, Grant W

    2018-01-01

    Profiling of DNA replication during progression through S phase allows a quantitative snapshot of replication origin usage and DNA replication fork progression. We present a method for using deep sequencing data to profile DNA replication in S. cerevisiae.
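
    The chapter's protocol is not reproduced here; schematically, one common way to turn mapped reads into a replication profile is to bin read counts along a chromosome and take a normalized S-phase/G1 ratio per bin, as in this sketch (bin size and inputs are illustrative assumptions, not the authors' exact procedure).

      import numpy as np

      def replication_profile(s_positions, g1_positions, chrom_len, bin_size=1000):
          """Bin mapped-read start positions and return the normalized
          S-phase / G1 coverage ratio per bin (higher = earlier replicating)."""
          bins = np.arange(0, chrom_len + bin_size, bin_size)
          s_counts, _ = np.histogram(s_positions, bins=bins)
          g1_counts, _ = np.histogram(g1_positions, bins=bins)
          s_norm = s_counts / max(s_counts.sum(), 1)    # normalize by library size
          g1_norm = g1_counts / max(g1_counts.sum(), 1)
          return np.divide(s_norm, g1_norm,
                           out=np.zeros_like(s_norm), where=g1_norm > 0)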

  8. DAPs: Deep Action Proposals for Action Understanding

    KAUST Repository

    Escorcia, Victor; Caba Heilbron, Fabian; Niebles, Juan Carlos; Ghanem, Bernard

    2016-01-01

    action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve from untrimmed videos temporal segments, which are likely to contain actions. A comprehensive evaluation indicates

  9. Evaluation of static resistance of deep foundations.

    Science.gov (United States)

    2017-05-01

    The focus of this research was to evaluate and improve Florida Department of Transportation (FDOT) FB-Deep software prediction of nominal resistance of H-piles, prestressed concrete piles in limestone, large diameter (> 36) open steel and concrete...

  10. The deep ocean under climate change.

    Science.gov (United States)

    Levin, Lisa A; Le Bris, Nadine

    2015-11-13

    The deep ocean absorbs vast amounts of heat and carbon dioxide, providing a critical buffer to climate change but exposing vulnerable ecosystems to combined stresses of warming, ocean acidification, deoxygenation, and altered food inputs. Resulting changes may threaten biodiversity and compromise key ocean services that maintain a healthy planet and human livelihoods. There exist large gaps in understanding of the physical and ecological feedbacks that will occur. Explicit recognition of deep-ocean climate mitigation and inclusion in adaptation planning by the United Nations Framework Convention on Climate Change (UNFCCC) could help to expand deep-ocean research and observation and to protect the integrity and functions of deep-ocean ecosystems. Copyright © 2015, American Association for the Advancement of Science.

  11. Deep gold mine fracture zone behaviour

    CSIR Research Space (South Africa)

    Napier, JAL

    1998-12-01

    The investigation of the behaviour of the fracture zone surrounding deep level gold mine stopes is detailed in three main sections of this report. Section 2 outlines the ongoing study of fundamental fracture processes and their numerical...

  12. Deep Ultraviolet Macroporous Silicon Filters, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR Phase I proposal describes a novel method to make deep and far UV optical filters from macroporous silicon. This type of filter consists of an array of...

  13. Toolkits and Libraries for Deep Learning.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth

    2017-08-01

    Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.

  14. Deep-Sea Soft Coral Habitat Suitability

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Deep-sea corals, also known as cold water corals, create complex communities that provide habitat for a variety of invertebrate and fish species, such as grouper,...

  15. Photon diffractive dissociation in deep inelastic scattering

    International Nuclear Information System (INIS)

    Ryskin, M.G.

    1990-01-01

    The new ep-collider HERA gives us the possibility to study the diffractive dissociation of virtual photons in deep inelastic ep-collisions. The process of photon dissociation in deep inelastic scattering is the most direct way to measure the value of the triple-pomeron vertex G_3P. It was shown that the correct bare vertex G_3P may exceed by more than 4 times its effective value measured in the triple-reggeon region, reaching about 40-50% of the elastic pp-pomeron vertex. In deep inelastic processes, by contrast, the perpendicular momenta q_t of the secondary particles are large enough. Thus in deep inelastic reactions one can measure the absolute value of the G_3P vertex in the most direct way and compare its value and q_t dependence with the leading-log QCD predictions.

  16. Applications of Deep Learning in Biomedicine.

    Science.gov (United States)

    Mamoshina, Polina; Vieira, Armando; Putin, Evgeny; Zhavoronkov, Alex

    2016-05-02

    Increases in throughput and installed base of biomedical research equipment led to a massive accumulation of -omics data known to be highly variable, high-dimensional, and sourced from multiple often incompatible data platforms. While this data may be useful for biomarker identification and drug discovery, the bulk of it remains underutilized. Deep neural networks (DNNs) are efficient algorithms based on the use of compositional layers of neurons, with advantages well matched to the challenges -omics data presents. While achieving state-of-the-art results and even surpassing human accuracy in many challenging tasks, the adoption of deep learning in biomedicine has been comparatively slow. Here, we discuss key features of deep learning that may give this approach an edge over other machine learning methods. We then consider limitations and review a number of applications of deep learning in biomedical studies demonstrating proof of concept and practical utility.

  17. Mean associative multiplicities in deep inelastic processes

    International Nuclear Information System (INIS)

    Dzhaparidze, G.Sh.; Kiselev, A.V.; Petrov, V.A.

    1981-01-01

    The associative hadron multiplicities in deep inelastic and Drell-Yan processes are studied. In particular, the mean multiplicities in different hard processes in QCD are found to be determined by the mean multiplicity in a parton jet.

  18. Deep-Sea Stony Coral Habitat Suitability

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Deep-sea corals, also known as cold water corals, create complex communities that provide habitat for a variety of invertebrate and fish species, such as grouper,...

  19. Deep Learning and Applications in Computational Biology

    KAUST Repository

    Zeng, Jianyang

    2016-01-01

    In this work, we develop a general and flexible deep learning framework for modeling structural binding preferences and predicting binding sites of RBPs, which takes (predicted) RNA tertiary structural information

  20. Leading particle in deep inelastic scattering

    International Nuclear Information System (INIS)

    Petrov, V.A.

    1984-01-01

    The leading particle effect in deep inelastic scattering is considered. The change of the characteristic shape of the leading particle inclusive spectrum with Q^2 is estimated to be rather significant at very high Q^2.

  1. Progress in deep-UV photoresists

    Indian Academy of Sciences (India)

    Unknown

    This paper reviews the recent development and challenges of deep-UV photoresists and their ... small amount of acid, when exposed to light by photochemical ... anomalous insoluble skin and linewidth shift when the PEB was delayed.

  2. Methods in mooring deep sea sediment traps

    Digital Repository Service at National Institute of Oceanography (India)

    Venkatesan, R.; Fernando, V.; Rajaraman, V.S.; Janakiraman, G.

    The experience gained during the process of deployment and retrieval of nearly 39 sets of deep sea sediment trap moorings on various ships like FS Sonne, ORV Sagarkanya and DSV Nand Rachit are outlined. The various problems encountered...

  3. Deep Water Horizon (HB1006, EK60)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Monitor and measure the biological, chemical, and physical environment in the area of the oil spill from the deep water horizon oil rig in the Gulf of Mexico. A wide...

  4. Nuclear standardization development study

    International Nuclear Information System (INIS)

    Pan Jianjun

    2010-01-01

    The nuclear industry is an important part of national security and national economic development, and a key area of new energy supported by the government. Nuclear standardization is an important force for nuclear industry development, a fundamental guarantee of safe nuclear production, and a valuable means for bringing China's nuclear industry technology to the world market. Nuclear standardization now faces a new development opportunity and should implement its strategy in standard system building, foreign standard research, company standard building, and talent development to meet the requirements of nuclear industry development. (author)

  5. Biodiversity loss from deep-sea mining

    OpenAIRE

    C. L. Van Dover; J. A. Ardron; E. Escobar; M. Gianni; K. M. Gjerde; A. Jaeckel; D. O. B. Jones; L. A. Levin; H. Niner; L. Pendleton; C. R. Smith; T. Thiele; P. J. Turner; L. Watling; P. P. E. Weaver

    2017-01-01

    The emerging deep-sea mining industry is seen by some to be an engine for economic development in the maritime sector. The International Seabed Authority (ISA) – the body that regulates mining activities on the seabed beyond national jurisdiction – must also protect the marine environment from harmful effects that arise from mining. The ISA is currently drafting a regulatory framework for deep-sea mining that includes measures for environmental protection. Responsible mining increasingly stri...

  6. DEEP VADOSE ZONE TREATABILITY TEST PLAN

    International Nuclear Information System (INIS)

    Chronister, G.B.; Truex, M.J.

    2009-01-01

    - Treatability test plan published in 2008
    - Outlines technology treatability activities for evaluating application of in situ technologies and surface barriers to deep vadose zone contamination (technetium and uranium)
    - Key elements: desiccation testing; testing of gas-delivered reactants for in situ treatment of uranium; evaluating surface barrier application to the deep vadose zone; evaluating in situ grouting and soil flushing

  7. Deep inelastic inclusive weak and electromagnetic interactions

    International Nuclear Information System (INIS)

    Adler, S.L.

    1976-01-01

    The theory of deep inelastic inclusive interactions is reviewed, emphasizing applications to electromagnetic and weak charged-current processes. The following reactions are considered: e + N → e + X, ν + N → μ⁻ + X, and anti-ν + N → μ⁺ + X, where X denotes a summation over all final-state hadrons and the ν's are muon neutrinos. After a discussion of scaling, the quark-parton model is invoked to explain the principal experimental features of deep inelastic inclusive reactions.
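
    For reference, the kinematic variables behind the scaling discussion, standard in any treatment of deep inelastic scattering and added here only as a reader aid (k is the incoming lepton momentum, P the nucleon momentum, q the momentum transfer):

      % Standard DIS kinematics (conventional definitions, not from the abstract)
      Q^2 = -q^2, \qquad x = \frac{Q^2}{2\,P\cdot q}, \qquad y = \frac{P\cdot q}{P\cdot k}

      % Bjorken scaling: structure functions depend on x alone at large Q^2;
      % in the quark-parton model F_2 is a charge-weighted sum of parton densities
      F_2(x, Q^2) \longrightarrow F_2(x) \quad (Q^2 \to \infty), \qquad
      F_2(x) = \sum_i e_i^2 \, x \, f_i(x)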

  8. Short-term Memory of Deep RNN

    OpenAIRE

    Gallicchio, Claudio

    2018-01-01

    The extension of deep learning towards temporal data processing is gaining increasing research interest. In this paper we investigate the properties of state dynamics developed in successive levels of deep recurrent neural networks (RNNs) in terms of short-term memory abilities. Our results reveal interesting insights that shed light on the nature of layering as a factor of RNN design. Noticeably, higher layers in a hierarchically organized RNN architecture turn out to be inherently biased ...

  9. Deep Learning for Video Game Playing

    OpenAIRE

    Justesen, Niels; Bontrager, Philip; Togelius, Julian; Risi, Sebastian

    2017-01-01

    In this article, we review recent Deep Learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces...

  10. Life Support for Deep Space and Mars

    Science.gov (United States)

    Jones, Harry W.; Hodgson, Edward W.; Kliss, Mark H.

    2014-01-01

    How should life support for deep space be developed? The International Space Station (ISS) life support system is the operational result of many decades of research and development. Long duration deep space missions such as Mars have been expected to use matured and upgraded versions of ISS life support. Deep space life support must use the knowledge base incorporated in ISS but it must also meet much more difficult requirements. The primary new requirement is that life support in deep space must be considerably more reliable than on ISS or anywhere in the Earth-Moon system, where emergency resupply and a quick return are possible. Due to the great distance from Earth and the long duration of deep space missions, if life support systems fail, the traditional approaches for emergency supply of oxygen and water, emergency supply of parts, and crew return to Earth or escape to a safe haven are likely infeasible. The Orbital Replacement Unit (ORU) maintenance approach used by ISS is unsuitable for deep space with ORUs as large and complex as those originally provided in ISS designs because it minimizes opportunities for commonality of spares, requires replacement of many functional parts with each failure, and results in substantial launch mass and volume penalties. It has become impractical even for ISS after the shuttle era, resulting in the need for ad hoc repair activity at lower assembly levels with consequent crew time penalties and extended repair timelines. Less complex, more robust technical approaches may be needed to meet the difficult deep space requirements for reliability, maintainability, and reparability. Developing an entirely new life support system would neglect what has been achieved. The suggested approach is to use the ISS life support technologies as a platform to build on and to continue to improve ISS subsystems while also developing new subsystems where needed to meet deep space requirements.

  11. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  12. Predicting Process Behaviour using Deep Learning

    OpenAIRE

    Evermann, Joerg; Rehse, Jana-Rebecca; Fettke, Peter

    2016-01-01

    Predicting business process behaviour is an important aspect of business process management. Motivated by research in natural language processing, this paper describes an application of deep learning with recurrent neural networks to the problem of predicting the next event in a business process. This is both a novel method in process prediction, which has largely relied on explicit process models, and also a novel application of deep learning methods. The approach is evaluated on two real datasets...

  13. A Deep Learning Approach to Drone Monitoring

    OpenAIRE

    Chen, Yueru; Aggarwal, Pranav; Choi, Jongmoo; Kuo, C. -C. Jay

    2017-01-01

    A drone monitoring system that integrates deep-learning-based detection and tracking modules is proposed in this work. The biggest challenge in adopting deep learning methods for drone detection is the limited amount of training drone images. To address this issue, we develop a model-based drone augmentation technique that automatically generates drone images with a bounding box label on the drone's location. To track a small flying drone, we utilize the residual information between consecutive images...

  14. Bank of Weight Filters for Deep CNNs

    Science.gov (United States)

    2016-11-22

    very large even on the best available hardware. In some studies in transfer learning it has been observed that the network learnt on one task can be ... CNNs. Keywords: CNN, deep learning, neural networks, transfer learning, bank of weight filters, BWF. 1. Introduction. Object recognition is an important ... of CNNs (or, in general, of deep neural networks) is that the feature generation part is fused with the classifier part and both parts are learned together

  15. Leveraging multiple datasets for deep leaf counting

    OpenAIRE

    Dobrescu, Andrei; Giuffrida, Mario Valerio; Tsaftaris, Sotirios A

    2017-01-01

    The number of leaves a plant has is one of the key traits (phenotypes) describing its development and growth. Here, we propose an automated, deep learning based approach for counting leaves in model rosette plants. While state-of-the-art results on leaf counting with deep learning methods have recently been reported, they obtain the count as a result of leaf segmentation and thus require per-leaf (instance) segmentation to train the models (a rather strong annotation). Instead, our method tre...

  16. DeepSpark: A Spark-Based Distributed Deep Learning Framework for Commodity Clusters

    OpenAIRE

    Kim, Hanjoo; Park, Jaehong; Jang, Jaehee; Yoon, Sungroh

    2016-01-01

    The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that exploits Apache Spark on commodity clusters. To support parallel operation...

  17. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can get hefty and computationally intensive with increasing complexity of the task. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. CSRN has been shown to be more effective at solving complex tasks such as maze traversal and image processing when compared to generic feed-forward networks. While deep neural networks (DNN) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent in neural networks with recurrency. Further, our work introduces deep hierarchy in SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper shows experiments on face, facial expression and character recognition tasks using the novel deep recurrent model and compares recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN based recognition framework in a humanoid robotic platform called NAO.

  18. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
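
    Summary statistics like those quoted (area under the ROC curve, sensitivity, specificity) can be computed from model scores as in the following sketch; y_true and y_score are placeholder arrays, not the study's data.

      import numpy as np
      from sklearn.metrics import roc_curve, auc

      y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # placeholder labels
      y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])  # placeholder scores

      fpr, tpr, thresholds = roc_curve(y_true, y_score)
      print("AUC:", auc(fpr, tpr))

      pred = y_score >= 0.5                               # assumed operating threshold
      print("sensitivity:", np.mean(pred[y_true == 1]))   # true positive rate
      print("specificity:", np.mean(~pred[y_true == 0]))  # true negative rate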

  19. Catheter directed thrombolysis for deep vein thrombosis during the first trimester of pregnancy: two case report

    International Nuclear Information System (INIS)

    Kim, Kum Rae; Park, Won Kyu; Kim, Jae Woon; Kwun, Woo Hyung; Suh, Bo Yang; Park, Kyeong Seok

    2008-01-01

    Anticoagulation with heparin has been the standard management therapy of deep vein thrombosis during pregnancy. Pregnancy is generally considered as a contraindication for thrombolysis. However, anticoagulation therapy alone does not protect the limbs from post-thrombotic syndrome and venous valve insufficiency. Catheter-directed thrombolysis, combined with angioplasty and stenting, can remove the thrombus and restore patency of the veins, resulting in prevention of post-thrombotic syndrome and valve insufficiency. We report successful catheter-directed thrombolysis and stenting in two early gestation patients with a deep vein thrombosis of the left lower extremity.

  20. Catheter directed thrombolysis for deep vein thrombosis during the first trimester of pregnancy: two case report

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kum Rae; Park, Won Kyu; Kim, Jae Woon; Kwun, Woo Hyung; Suh, Bo Yang [College of Medicine, Yeungnam University, Daegu (Korea, Republic of); Park, Kyeong Seok [Yeungnam University, Medical Center, Daegu (Korea, Republic of)

    2008-02-15

    Anticoagulation with heparin has been the standard management therapy of deep vein thrombosis during pregnancy. Pregnancy is generally considered as a contraindication for thrombolysis. However, anticoagulation therapy alone does not protect the limbs from post-thrombotic syndrome and venous valve insufficiency. Catheter-directed thrombolysis, combined with angioplasty and stenting, can remove the thrombus and restore patency of the veins, resulting in prevention of post-thrombotic syndrome and valve insufficiency. We report successful catheter-directed thrombolysis and stenting in two early gestation patients with a deep vein thrombosis of the left lower extremity.

  1. Coaxial nuclear radiation detector with deep junction and radial field gradient

    International Nuclear Information System (INIS)

    Hall, R.N.

    1979-01-01

    Germanium radiation detectors are manufactured by diffusing lithium into high-purity p-type germanium. The diffusion is most readily accomplished from a lithium-lead-bismuth alloy at approximately 430 °C and is monitored by a quartz half cell containing a standard composition of this alloy. Detectors having n-type cores may be constructed by converting high-purity p-type germanium to n-type by a lithium diffusion and subsequently diffusing some of the lithium back out through the surface to create a deep p-n junction. Coaxial germanium detectors comprising deep p-n junctions are produced by the lithium diffusion process.
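
    As background not stated in the abstract, the junction depth achievable by such a diffusion follows textbook diffusion kinetics; roughly,

      % Characteristic diffusion depth after time t at absolute temperature T
      x \approx \sqrt{D t}, \qquad D = D_0 \exp\!\left(-\frac{E_a}{k_B T}\right)

    so the alloy-bath temperature and diffusion time together set how far the lithium penetrates and hence where the deep p-n junction forms.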

  2. Some Challenges of Deep Mining†

    Directory of Open Access Journals (Sweden)

    Charles Fairhurst

    2017-08-01

    An increased global supply of minerals is essential to meet the needs and expectations of a rapidly rising world population. This implies extraction from greater depths. Autonomous mining systems, developed through sustained R&D by equipment suppliers, reduce miner exposure to hostile work environments and increase safety. This places increased focus on “ground control” and on rock mechanics to define the depth to which minerals may be extracted economically. Although significant efforts have been made since the end of World War II to apply mechanics to mine design, there have been both technological and organizational obstacles. Rock in situ is a more complex engineering material than is typically encountered in most other engineering disciplines. Mining engineering has relied heavily on empirical procedures in design for thousands of years. These are no longer adequate to address the challenges of the 21st century, as mines venture to increasingly greater depths. The development of the synthetic rock mass (SRM) in 2008 provides researchers with the ability to analyze the deformational behavior of rock masses that are anisotropic and discontinuous—attributes that were described as the defining characteristics of in situ rock by Leopold Müller, the president and founder of the International Society for Rock Mechanics (ISRM), in 1966. Recent developments in the numerical modeling of large-scale mining operations (e.g., caving) using the SRM reveal unanticipated deformational behavior of the rock. The application of massive parallelization and cloud computational techniques offers major opportunities: for example, to assess uncertainties in numerical predictions; to establish the mechanics basis for the empirical rules now used in rock engineering and their validity for the prediction of rock mass behavior beyond current experience; and to use the discrete element method (DEM) in the optimization of deep mine design. For the first time, mining...

  3. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    Science.gov (United States)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  4. Some considerations about standardization

    Energy Technology Data Exchange (ETDEWEB)

    Dewez, Ph L; Fanjas, Y R [C.E.R.C.A., Romans (France)

    1985-07-01

    Complete standardization of research reactor fuel is not possible. However the transition from HEU to LEU should be an opportunity for a double effort towards standardization and optimization in order to reduce cost. (author)

  5. BTS statistical standards manual

    Science.gov (United States)

    2005-10-01

    The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...

  6. Dental Assisting Program Standards.

    Science.gov (United States)

    Georgia Univ., Athens. Dept. of Vocational Education.

    This publication contains statewide standards for the dental assisting program in Georgia. The standards are divided into 12 categories: foundations (philosophy, purpose, goals, program objectives, availability, evaluation); admissions (admission requirements, provisional admission requirements, recruitment, evaluation and planning); program…

  7. Some considerations about standardization

    International Nuclear Information System (INIS)

    Dewez, Ph.L.; Fanjas, Y.R.

    1985-01-01

    Complete standardization of research reactor fuel is not possible. However the transition from HEU to LEU should be an opportunity for a double effort towards standardization and optimization in order to reduce cost. (author)

  8. The Distance Standard Deviation

    OpenAIRE

    Edelmann, Dominic; Richards, Donald; Vogel, Daniel

    2017-01-01

    The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...
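    For reference, the quantities discussed above can be written compactly. The display below uses the standard definitions from the distance-correlation literature (the notation is ours; X′ and X″ denote independent copies of X, σ(X) the classical standard deviation), and the inequalities restate the abstract's bounds:

```latex
% Distance variance and Gini's mean difference for a real-valued X,
% with X', X'' independent copies of X:
\mathrm{dVar}(X)^{2} \;=\; \mathbb{E}|X-X'|^{2}
  \;+\; \bigl(\mathbb{E}|X-X'|\bigr)^{2}
  \;-\; 2\,\mathbb{E}\bigl[\,|X-X'|\,|X-X''|\,\bigr],
\qquad
\Delta(X) \;=\; \mathbb{E}|X-X'|,
\qquad
\mathrm{dVar}(X) \;\le\; \min\{\sigma(X),\,\Delta(X)\}.
```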

  9. Making standards work

    OpenAIRE

    Stigzelius, Ingrid

    2009-01-01

    Social and environmental standards can function as tools for companies that want to improve their conduct in social and environmental areas in the supply chain. However, relatively little attention has been given to how the adoption of social and environmental standards may influence the actual business practices in the supply chain. The overall aim of this thesis is to examine the institutional context surrounding the adoption of social and environmental standards and how these standards inf...

  10. Standards, the users perspective

    International Nuclear Information System (INIS)

    Nason, W.D.

    1993-01-01

    The term standard has little meaning until put into the proper context. What is being standardized? What are the standard conditions to be applied? The list of questions that arise goes on and on. In this presentation, answers to these questions are considered in the interest of providing a basic understanding of what might be useful to the electrical power industry in the way of standards, and what the limitations on their application would be. 16 figs

  11. Radiological Control Technician: Standardized technician Qualification Standard

    International Nuclear Information System (INIS)

    1992-10-01

    The Qualification Standard states and defines the knowledge and skill requirements necessary for successful completion of the Radiological Control Technician Training Program. The standard is divided into three phases: Phase I concerns RCT Academic training. There are 13 lessons associated with the core academics program and 19 lessons associated with the site academics program. The staff member should sign the appropriate blocks upon successful completion of the examination for that lesson or group of lessons. In addition, facility specific lesson plans may be added to meet the knowledge requirements in the Job Performance Measures (JPM) of the practical program. Phase II concerns RCT core/site practical (JPMs) training. There are thirteen generic tasks associated with the core practical program. Both the trainer/evaluator and student should sign the appropriate block upon successful completion of the JPM. In addition, facility specific tasks may be added or generic tasks deleted based on the results of the facility job evaluation. Phase III concerns the oral examination board successful completion of the oral examination board is documented by the signature of the chairperson of the board. Upon completion of all of the standardized technician qualification requirements, final qualification is verified by the student and the manager of the Radiological Control Department and acknowledged by signatures on the qualification standard. The completed Qualification Standard shall be maintained as an official training record

  12. Quality of semantic standards

    NARCIS (Netherlands)

    Folmer, Erwin Johan Albert

    2012-01-01

    Little scientific literature addresses the issue of quality of semantic standards, albeit a problem with high economic and social impact. Our problem survey, including 34 semantic Standard Setting Organizations (SSOs), gives evidence that quality of standards can be improved, but for improvement a

  13. Automotive Technology Skill Standards

    Science.gov (United States)

    Garrett, Tom; Asay, Don; Evans, Richard; Barbie, Bill; Herdener, John; Teague, Todd; Allen, Scott; Benshoof, James

    2009-01-01

    The standards in this document are for Automotive Technology programs and are designed to clearly state what the student should know and be able to do upon completion of an advanced high-school automotive program. Minimally, the student will complete a three-year program to achieve all standards. Although these exit-level standards are designed…

  14. Calibration of Flick standards

    International Nuclear Information System (INIS)

    Thalmann, Ruedi; Spiller, Jürg; Küng, Alain; Jusko, Otto

    2012-01-01

    Flick standards or magnification standards are widely used for an efficient and functional calibration of the sensitivity of form measuring instruments. The results of a recent measurement comparison proved to be partially unsatisfactory and revealed problems related to the calibration of these standards. In this paper the influence factors for the calibration of Flick standards using roundness measurement instruments are discussed in detail, in particular the bandwidth of the measurement chain, residual form errors of the device under test, profile distortions due to the diameter of the probing element, and questions related to the definition of the measurand. The different contributions are estimated using simulations and are experimentally verified. Alternative methods to calibrate Flick standards are also investigated. Finally, the practical limitations of Flick standard calibration are shown, and the usability of Flick standards both to calibrate the sensitivity of roundness instruments and to check the filter function of such instruments is analysed. (paper)

  15. High efficacy with deep nurse-administered propofol sedation for advanced gastroenterologic endoscopic procedures

    DEFF Research Database (Denmark)

    Jensen, Jeppe Thue; Hornslet, Pernille; Konge, Lars

    2016-01-01

    BACKGROUND AND STUDY AIMS: Whereas data on moderate nurse-administered propofol sedation (NAPS) efficacy and safety for standard endoscopy is abundant, few reports on the use of deep sedation by endoscopy nurses during advanced endoscopy, such as Endoscopic Retrograde Cholangiopancreatography (ERCP) and Endoscopic Ultrasound (EUS), are available and potential benefits or hazards remain unclear. The aims of this study were to investigate the efficacy of intermittent deep sedation with propofol for a large cohort of advanced endoscopies and to provide data on the safety. PATIENTS AND METHODS: All available… …was requested eight times (0.4 %). One patient was intubated due to suspected aspiration. CONCLUSIONS: Intermittent deep NAPS for advanced endoscopies in selected patients provided an almost 100 % success rate. However, the rate of hypoxia, hypotension and respiratory support was high compared with previously…

  16. Partitioned learning of deep Boltzmann machines for SNP data.

    Science.gov (United States)

    Hess, Moritz; Lenz, Stefan; Blätte, Tamara J; Bullinger, Lars; Binder, Harald

    2017-10-15

    Learning the joint distributions of measurements, and in particular identification of an appropriate low-dimensional manifold, has been found to be a powerful ingredient of deep learning approaches. Yet, such approaches have hardly been applied to single nucleotide polymorphism (SNP) data, probably due to the high number of features typically exceeding the number of studied individuals. After a brief overview of how deep Boltzmann machines (DBMs), a deep learning approach, can be adapted to SNP data in principle, we specifically present a way to alleviate the dimensionality problem by partitioned learning. We propose a sparse regression approach to coarsely screen the joint distribution of SNPs, followed by training several DBMs on SNP partitions that were identified by the screening. Aggregate features representing SNP patterns and the corresponding SNPs are extracted from the DBMs by a combination of statistical tests and sparse regression. In simulated case-control data, we show how this can uncover complex SNP patterns and augment results from univariate approaches, while maintaining type 1 error control. Time-to-event endpoints are considered in an application with acute myeloid leukemia patients, where SNP patterns are modeled after a pre-screening based on gene expression data. The proposed approach identified three SNPs that seem to jointly influence survival in a validation dataset. This indicates the added value of jointly investigating SNPs compared to standard univariate analyses and makes partitioned learning of DBMs an interesting complementary approach when analyzing SNP data. A Julia package is provided at 'http://github.com/binderh/BoltzmannMachines.jl'. binderh@imbi.uni-freiburg.de. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
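    To make the screen-partition-train recipe above concrete, here is a minimal Python sketch on synthetic data. Single-layer Bernoulli RBMs from scikit-learn stand in for the deep Boltzmann machines (the authors' implementation is the Julia package cited above), and every name and parameter value here is illustrative:

```python
# Partitioned learning sketch for binary SNP data (toy data, illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 1000)).astype(float)  # 200 samples x 1000 binary SNPs
y = rng.integers(0, 2, size=200)                        # case/control labels

# Step 1: coarse screening of the SNPs with sparse (L1) regression.
screen = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(X, y)
selected = np.flatnonzero(screen.coef_[0])              # SNPs with nonzero weights

# Step 2: split the screened SNPs into partitions, one generative model each.
partitions = np.array_split(selected, max(1, selected.size // 50))
features = []
for part in partitions:
    if part.size == 0:
        continue
    rbm = BernoulliRBM(n_components=10, learning_rate=0.05, n_iter=20, random_state=0)
    features.append(rbm.fit_transform(X[:, part]))      # hidden activations as aggregate features

# Step 3: the stacked features would feed downstream association tests.
Z = np.hstack(features)
print(Z.shape)
```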

  17. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene; Kulmanov, Maxat; Schofield, Paul N; Gkoutos, Georgios V; Hoehndorf, Robert

    2018-01-01

    …phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well as accuracy.

  18. Assessment of deep geological environment condition

    International Nuclear Information System (INIS)

    Bae, Dae Seok; Han, Kyung Won; Joen, Kwan Sik

    2003-05-01

    The geoscientific study in the 2nd stage focused mainly on the near-field conditions of the deep geologic environment, and aimed to generate geologic input data for a Korean reference disposal system for high-level radioactive waste and to establish a site characterization methodology, covering neotectonic features, fracture systems and mechanical properties of plutonic rocks, and hydrogeochemical characteristics. A preliminary assessment of neotectonics in the Korean peninsula was performed on the basis of recorded seismicity, investigated Quaternary faults, uplift characteristics studied in limited areas, and the distribution and characteristics of the major regional faults. The local fracture system was studied in detail using data obtained from deep boreholes in granitic terrain. Through this deep drilling project, the geometrical and hydraulic properties of different fracture sets were statistically analysed on a block scale. The mechanical properties of intact rocks were evaluated from core samples by laboratory testing, and the in-situ stress conditions were estimated by hydraulic fracturing tests in the boreholes. The hydrogeochemical conditions in the deep boreholes were characterized on the basis of hydrochemical composition and isotopic signatures, and their interrelation with the major fracture system was assessed. The residence time of deep groundwater was estimated by C-14 dating. For the travel time of groundwater between the boreholes, a tracer test methodology and equipment were established.

  19. Molecular analysis of deep subsurface bacteria

    International Nuclear Information System (INIS)

    Jimenez Baez, L.E.

    1989-09-01

    Deep sediment samples from site C10a in Appleton, and sites P24, P28, and P29 at the Savannah River Site (SRS) near Aiken, South Carolina, were studied to determine their microbial community composition, DNA homology and mol %G+C. Different geological formations with great variability in hydrogeological parameters were found across the depth profile. Phenotypic identification of deep subsurface bacteria underestimated the bacterial diversity at the three SRS sites, since bacteria with the same phenotype had different DNA composition and less than 70% DNA homology. Total DNA hybridization and mol %G+C analysis of deep sediment bacterial isolates suggested that each formation comprises a different microbial community. Depositional environment was more important than site or geological formation for the DNA relatedness of deep subsurface bacteria, since more than 70% of bacteria with 20% or more DNA homology came from the same depositional environments. Based on phenotypic and genotypic tests, Pseudomonas spp.- and Acinetobacter spp.-like bacteria were identified in 85-million-year-old sediments. This suggests that these microbial communities might have adapted over a long period of time to the environmental conditions of the deep subsurface.

  20. Preface: Deep Slab and Mantle Dynamics

    Science.gov (United States)

    Suetsugu, Daisuke; Bina, Craig R.; Inoue, Toru; Wiens, Douglas A.

    2010-11-01

    We are pleased to publish this special issue of the journal Physics of the Earth and Planetary Interiors entitled "Deep Slab and Mantle Dynamics". This issue is an outgrowth of the international symposium "Deep Slab and Mantle Dynamics", which was held on February 25-27, 2009, in Kyoto, Japan. This symposium was organized by the "Stagnant Slab Project" (SSP) research group to present the results of the 5-year project and to facilitate intensive discussion with well-known international researchers in related fields. The SSP and the symposium were supported by a Grant-in-Aid for Scientific Research (16075101) from the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government. In the symposium, key issues discussed by participants included: transportation of water into the deep mantle and its role in slab-related dynamics; observational and experimental constraints on deep slab properties and the slab environment; modeling of slab stagnation to constrain its mechanisms in comparison with observational and experimental data; observational, experimental and modeling constraints on the fate of stagnant slabs; eventual accumulation of stagnant slabs on the core-mantle boundary and its geodynamic implications. This special issue is a collection of papers presented in the symposium and other papers related to the subject of the symposium. The collected papers provide an overview of the wide range of multidisciplinary studies of mantle dynamics, particularly in the context of subduction, stagnation, and the fate of deep slabs.

  1. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
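    The central difficulty, differentiating through the spike nonlinearity, can be sketched in a few lines of PyTorch. This is a generic surrogate-gradient illustration under our own assumptions (threshold at zero, fast-sigmoid surrogate), not the paper's exact formulation:

```python
# Surrogate-gradient sketch: binary spikes forward, smooth gradient backward.
import torch

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                        # spike if potential above threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + torch.abs(v)) ** 2   # fast-sigmoid derivative stand-in
        return grad_out * surrogate

spike = SpikeFn.apply
w = torch.randn(100, 10, requires_grad=True)
x = torch.rand(32, 100)                               # a batch of input activations
v = x @ w - 1.0                                       # membrane potential minus threshold
loss = spike(v).mean()
loss.backward()                                       # gradients flow despite the step
print(w.grad.abs().sum())
```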

  2. Deep Ocean Contribution to Sea Level Rise

    Science.gov (United States)

    Chang, L.; Sun, W.; Tang, H.; Wang, Q.

    2017-12-01

    Ocean temperature and salinity changes in the upper 2000 m can be detected by Argo floats, so the steric height change of that layer is known. But the layers above 2000 m represent only 50% of the total ocean volume. Although the temperature and salinity changes below are small compared to the upper ocean, the deep-ocean contribution to sea level might be significant because of its large volume. Previous research on the deep ocean has relied on very sparse in situ observations and is limited to decadal and longer-term rates of change. The available observational data in the deep ocean are too sparse to determine the temporal variability, and the long-term changes may be biased. We will use the Argo data and combine in situ and topographic data to estimate the temperature and salinity of the sea water below 2000 m, yielding a monthly dataset. We will analyze the seasonal and annual steric height changes due to the deep ocean between 2005 and 2016, and evaluate the result against the present-day satellite and in situ observing systems. The deep-ocean contribution can be inferred indirectly as the difference between altimetry minus GRACE and the Argo-based steric sea level.
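    The closing sentence describes a simple budget residual; as a sketch, with invented trend values in mm/yr:

```python
# Indirect deep-ocean estimate as a sea-level budget residual (made-up numbers).
altimetry_total = 3.4   # total sea-level trend from altimetry
mass_component = 2.0    # ocean-mass trend from GRACE
steric_upper = 1.1      # 0-2000 m steric trend from Argo

deep_steric = altimetry_total - mass_component - steric_upper
print(f"inferred deep-ocean steric contribution: {deep_steric:.1f} mm/yr")
```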

  3. Deep Learning: A Primer for Radiologists.

    Science.gov (United States)

    Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An

    2017-01-01

    Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
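    As a concrete illustration of the training loop just described (example input-target pairs, a corrective error signal back-propagated through the network), here is a toy CNN step in PyTorch; the architecture, shapes and labels are invented for illustration:

```python
# One backpropagation step for a toy two-class image CNN.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 2),                    # two target classes, e.g. normal/abnormal
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
images = torch.randn(4, 1, 28, 28)                # a mini-batch of 28x28 grayscale images
labels = torch.tensor([0, 1, 0, 1])

logits = model(images)
loss = nn.functional.cross_entropy(logits, labels)  # corrective error signal
opt.zero_grad()
loss.backward()                                   # back-propagate through the network
opt.step()                                        # iteratively adjust the weights
```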

  4. Requirements of quality standards

    International Nuclear Information System (INIS)

    Mueller, J.

    1977-01-01

    The lecture traces the development of nuclear standards, codes, and Federal regulations on quality assurance (QA) for nuclear power plants and associated facilities. The technical evolution of the last twelve years, especially in the area of nuclear technology, has led to different activities and regulatory initiatives, with the present result that several nations have their own home-grown standards. The lecture discusses former and especially current activities in standards development, and describes the requirements of QA standards used in the USA and Europe, especially West Germany. Furthermore, the lecture attempts a comparison and an evaluation of the international quality standards from the author's viewpoint. Finally, the lecture presents an outlook on the future international implications of QA standards. There is an urgent need within the nuclear industry for simplification and standardization of QA standards. The relationship between the various standards and their applicability need clarification and better transparency. Pointing out these problems is the purpose of the lecture. (orig.) [de

  5. DeepPVP: phenotype-based prioritization of causative variants using deep learning

    KAUST Repository

    Boudellioua, Imene

    2018-05-02

    Background: Prioritization of variants in personal genomic data is a major challenge. Recently, computational methods that rely on comparing phenotype similarity have been shown to be useful for identifying causative variants. In these methods, pathogenicity prediction is combined with a semantic similarity measure to prioritize not only variants that are likely to be dysfunctional but those that are likely involved in the pathogenesis of a patient's phenotype. Results: We have developed DeepPVP, a variant prioritization method that combines automated inference with deep neural networks to identify the likely causative variants in whole exome or whole genome sequence data. We demonstrate that DeepPVP performs significantly better than existing methods, including phenotype-based methods that use similar features. DeepPVP is freely available at https://github.com/bio-ontology-research-group/phenomenet-vp Conclusions: DeepPVP further improves on existing variant prioritization methods both in terms of speed as well as accuracy.

  6. 75 FR 51838 - Public Review of Draft Coastal and Marine Ecological Classification Standard

    Science.gov (United States)

    2010-08-23

    ... Web site. DATES: Comments on the draft Coastal and Marine Ecological Classification Standard must be... marine and coastal environments of the United States. It was developed to provide a common language that... existing classification standards. The CMECS domain extends from the coastal tidal splash zone to the deep...

  7. MOVING OBJECTS IN THE HUBBLE ULTRA DEEP FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Mukremin; Gianninas, Alexandros [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, 440 W. Brooks St., Norman, OK 73019 (United States); Von Hippel, Ted, E-mail: kilic@ou.edu, E-mail: alexg@nhn.ou.edu, E-mail: ted.vonhippel@erau.edu [Embry-Riddle Aeronautical University, 600 S. Clyde Morris Blvd., Daytona Beach, FL 32114 (United States)

    2013-09-01

    We identify proper motion objects in the Hubble Ultra Deep Field (UDF) using the optical data from the original UDF program in 2004 and the near-infrared data from the 128 orbit UDF 2012 campaign. There are 12 sources brighter than I = 27 mag that display >3σ significant proper motions. We do not find any proper motion objects fainter than this magnitude limit. Combining optical and near-infrared photometry, we model the spectral energy distribution of each point-source using stellar templates and state-of-the-art white dwarf models. For I ≤ 27 mag, we identify 23 stars with K0-M6 spectral types and two faint blue objects that are clearly old, thick disk white dwarfs. We measure a thick disk white dwarf space density of 0.1-1.7 × 10⁻³ pc⁻³ from these two objects. There are no halo white dwarfs in the UDF down to I = 27 mag. Combining the Hubble Deep Field North, South, and the UDF data, we do not see any evidence for dark matter in the form of faint halo white dwarfs, and the observed population of white dwarfs can be explained with the standard Galactic models.

  8. Archaeological Excavation and Deep Mapping in Historic Rural Communities

    Directory of Open Access Journals (Sweden)

    Carenza Lewis

    2015-09-01

    This paper reviews the results of more than a hundred small archaeological “test pit” excavations carried out in 2013 within four rural communities in eastern England. Each excavation used standardized protocols in a different location within the host village, with the finds dated and mapped to create a series of maps spanning more than 3500 years, in order to advance understanding of the spatial development of settlements and landscapes over time. The excavations were all carried out by local volunteers working physically within their own communities, supported and advised by professional archaeologists, with most test pits sited in volunteers’ own gardens or those of their friends, family or neighbors. Site-by-site, the results provided glimpses of the use made by humans of each of the excavated sites spanning prehistory to the present day; while in aggregate the mapped data show how settlement and land-use developed and changed over time. Feedback from participants also demonstrates the diverse positive impacts the project had on individuals and communities. The results are presented and reviewed here in order to highlight the contribution archaeological test pit excavation can make to deep mapping, and the contribution that deep mapping can make to rural communities.

  9. Development and test of a plastic deep-well pump

    International Nuclear Information System (INIS)

    Zhang, Q H; Gao, X F; Xu, Y; Shi, W D; Lu, W G; Liu, W

    2013-01-01

    To develop a plastic deep-well pump, three methods are proposed concerning structure and forming technique. First, the major hydraulic components are constructed from plastics, and the connection component is constructed from steel. The pump structure is thus more concise and slim, greatly reducing its weight and easing its transportation, installation, and maintenance. Second, the impeller is designed by the maximum-diameter method. Using the same pump casing, the stage head is greatly increased. Third, a seal is formed between the impeller front end face and a steel end face, and two slots are designed on the impeller front end face; when the two end faces approach, a lubricating pair is formed, leading to an effective seal. With the above methods, the pump's axial length is greatly reduced, and its stage head is larger and more efficient. In particular, the pump's axial force is effectively balanced. To examine the above proposals, a prototype pump was constructed; testing shows that the pump efficiency exceeds the national standard by 6% and the stage head is improved by 41%, while the structure is more concise and easier to transport. Development of this pump provides useful experience for the further popularization of plastic deep-well pumps.

  10. Deep Learning in Open Source Learning Streams

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas

    2016-01-01

    This chapter presents research on deep learning in a digital learning environment and raises the question if digital instructional designs can catalyze deeper learning than traditional classroom teaching. As a theoretical point of departure the notion of ‘situated learning’ is utilized… and contrasted to the notion of functionalistic learning in a digital context. The mechanism that enables deep learning in this context is ‘The Open Source Learning Stream’. ‘The Open Source Learning Stream’ is the notion of sharing ‘learning instances’ in a digital space (discussion board, Facebook group…, unistructural, multistructural or relational learning. The research concludes that ‘The Open Source Learning Stream’ can catalyze deep learning and that there are four types of ‘Open Source Learning streams’; individual/asynchronous, individual/synchronous, shared/asynchronous and shared…

  11. Deep learning in medical imaging: General overview

    Energy Technology Data Exchange (ETDEWEB)

    Lee, June Goo; Jun, Sang Hoon; Cho, Young Won; Lee, Hyun Na; Kim, Guk Bae; Seo, Joon Beom; Kim, Nam Kug [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2017-08-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and health care, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  12. Deep-seated sarcomas of the penis

    Directory of Open Access Journals (Sweden)

    Alberto A. Antunes

    2005-06-01

    Mesenchymal neoplasias represent 5% of tumors affecting the penis. Due to the rarity of such tumors, there is no agreement concerning the best method for staging and managing these patients. Sarcomas of the penis can be classified as deep-seated if they derive from the structures forming the spongy body and the cavernous bodies. Superficial lesions are usually low-grade and show a small tendency towards distant metastasis. In contrast, deep-seated lesions usually show behavior that is more aggressive and have poorer prognosis. The authors report 3 cases of deep-seated primary sarcomas of the penis and review the literature on this rare and aggressive neoplasia.

  13. Strategic Technologies for Deep Space Transport

    Science.gov (United States)

    Litchford, Ronald J.

    2016-01-01

    Deep space transportation capability for science and exploration is fundamentally limited by available propulsion technologies. Traditional chemical systems are performance plateaued and require enormous Initial Mass in Low Earth Orbit (IMLEO) whereas solar electric propulsion systems are power limited and unable to execute rapid transits. Nuclear based propulsion and alternative energetic methods, on the other hand, represent potential avenues, perhaps the only viable avenues, to high specific power space transport evincing reduced trip time, reduced IMLEO, and expanded deep space reach. Here, key deep space transport mission capability objectives are reviewed in relation to STMD technology portfolio needs, and the advanced propulsion technology solution landscape is examined including open questions, technical challenges, and developmental prospects. Options for potential future investment across the full complement of STMD programs are presented based on an informed awareness of complementary activities in industry, academia, OGAs, and NASA mission directorates.

  14. Deep learning in medical imaging: General overview

    International Nuclear Information System (INIS)

    Lee, June Goo; Jun, Sang Hoon; Cho, Young Won; Lee, Hyun Na; Kim, Guk Bae; Seo, Joon Beom; Kim, Nam Kug

    2017-01-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and health care, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  15. Deep learning for SAR image formation

    Science.gov (United States)

    Mason, Eric; Yonel, Bariscan; Yazici, Birsen

    2017-04-01

    The recent success of deep learning has led to growing interest in applying these methods to signal processing problems. This paper explores the applications of deep learning to synthetic aperture radar (SAR) image formation. We review deep learning from a perspective relevant to SAR image formation. Our objective is to address SAR image formation in the presence of uncertainties in the SAR forward model. We present a recurrent auto-encoder network architecture based on the iterative shrinkage thresholding algorithm (ISTA) that incorporates SAR modeling. We then present an off-line training method using stochastic gradient descent and discuss the challenges and key steps of learning. Lastly, we show experimentally that our method can be used to form focused images in the presence of phase uncertainties. We demonstrate that the resulting algorithm has faster convergence and decreased reconstruction error than that of ISTA.
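    For readers unfamiliar with ISTA, the iteration that the paper unrolls into a recurrent network looks as follows; this is the textbook algorithm for the sparse linear inverse problem, with a generic random matrix A standing in for the SAR forward model:

```python
# Plain ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1 (sketch only).
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=100):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0     # a sparse ground truth
x_hat = ista(A, A @ x_true, lam=0.05)
print(np.linalg.norm(x_hat - x_true))
```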

  16. Oceanography related to deep sea waste disposal

    International Nuclear Information System (INIS)

    1978-09-01

    In connection with studies on the feasibility of the safe disposal of radioactive waste from a large-scale nuclear power programme, either on the bed of the deep ocean or within the deep ocean bed, preparation of the present document was commissioned by the (United Kingdom) Department of the Environment. It attempts (a) to summarize the present state of knowledge of the deep ocean environment relevant to the disposal options and assess the processes which could aid or hinder dispersal of material released from its container; (b) to identify areas of research in which more work is needed before the safety of disposal on, or beneath, the ocean bed can be assessed; and (c) to indicate which areas of research can or should be undertaken by British scientists. The programmes of international cooperation in this field are discussed. The report is divided into four chapters dealing respectively with geology and geophysics, geochemistry, physical oceanography and marine biology. (U.K.)

  17. In Brief: Deep-sea observatory

    Science.gov (United States)

    Showstack, Randy

    2008-11-01

    The first deep-sea ocean observatory offshore of the continental United States has begun operating in the waters off central California. The remotely operated Monterey Accelerated Research System (MARS) will allow scientists to monitor the deep sea continuously. Among the first devices to be hooked up to the observatory are instruments to monitor earthquakes, videotape deep-sea animals, and study the effects of acidification on seafloor animals. ``Some day we may look back at the first packets of data streaming in from the MARS observatory as the equivalent of those first words spoken by Alexander Graham Bell: `Watson, come here, I need you!','' commented Marcia McNutt, president and CEO of the Monterey Bay Aquarium Research Institute, which coordinated construction of the observatory. For more information, see http://www.mbari.org/news/news_releases/2008/mars-live/mars-live.html.

  18. Deep learning in jet reconstruction at CMS

    CERN Document Server

    Stoye, Markus

    2017-01-01

    Deep learning has led to several breakthroughs outside the field of high energy physics, yet in jet reconstruction for the CMS experiment at the CERN LHC it has not been used so far. This report shows results of applying deep learning strategies to jet reconstruction at the stage of identifying the original parton association of the jet (jet tagging), which is crucial for physics analyses at the LHC experiments. We introduce a custom deep neural network architecture for jet tagging. We compare the performance of this novel method with the other established approaches at CMS and show that the proposed strategy provides a significant improvement. The strategy provides the first multi-class classifier, instead of the few binary classifiers that were previously used, and thus yields more information in a more convenient way. The performance results obtained with simulation imply a significant improvement for a large number of important physics analyses at the CMS experiment.

  19. Deep Learning in Medical Imaging: General Overview

    Science.gov (United States)

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae

    2017-01-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging. PMID:28670152

  20. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  1. Pathways to deep decarbonization - Interim 2014 Report

    International Nuclear Information System (INIS)

    2014-01-01

    The interim 2014 report by the Deep Decarbonization Pathways Project (DDPP), coordinated and published by IDDRI and the Sustainable Development Solutions Network (SDSN), presents preliminary findings of the pathways developed by the DDPP Country Research Teams with the objective of achieving emission reductions consistent with limiting global warming to less than 2 deg. C. The DDPP is a knowledge network comprising 15 Country Research Teams and several Partner Organizations who develop and share methods, assumptions, and findings related to deep decarbonization. Each DDPP Country Research Team has developed an illustrative road-map for the transition to a low-carbon economy, with the intent of taking into account national socio-economic conditions, development aspirations, infrastructure stocks, resource endowments, and other relevant factors. The interim 2014 report focuses on technically feasible pathways to deep decarbonization

  2. Excess plutonium disposition: The deep borehole option

    International Nuclear Information System (INIS)

    Ferguson, K.L.

    1994-01-01

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It is in response to a recent National Academy of Sciences (NAS) report which addressed the management of excess weapons plutonium and recommended three approaches to the ultimate disposition of excess plutonium: (1) fabrication and use as a fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, (3) burial in deep boreholes. As indicated in the NAS report, substantial effort would be required to address the broad range of issues related to deep borehole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can impact the viability of the concept, and applicable international programs. Key technical areas that would require attention, should decisions be made to further develop the borehole emplacement option, are identified.

  3. Deep Learning in Medical Imaging: General Overview.

    Science.gov (United States)

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae; Seo, Joon Beom; Kim, Namkug

    2017-01-01

    The artificial neural network (ANN)-a machine learning technique inspired by the human neuronal synapse system-was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  4. Beyond the standard model

    International Nuclear Information System (INIS)

    Wilczek, F.

    1993-01-01

    The standard model of particle physics is highly successful, although it is obviously not a complete or final theory. In this presentation the author argues that the structure of the standard model gives some quite concrete, compelling hints regarding what lies beyond. Essentially, this presentation is a record of the author's own judgement of what the central clues for physics beyond the standard model are, and also it is an attempt at some pedagogy. 14 refs., 6 figs

  5. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  6. International Construction Measurement Standard

    OpenAIRE

    Mitchell, Charles

    2016-01-01

    The International Construction Measurement Standard Coalition (the Coalition) was formed on 17 June 2015 after meeting at the International Monetary Fund in Washington DC, USA. The Coalition, comprising the organisations listed below at the date of publication, aims to bring about consistency in construction cost reporting standards internationally. This is achieved by the creation and adoption of this ICMS, an agreed international standard for the structuring and presentation of cost reports...

  7. IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD

    Directory of Open Access Journals (Sweden)

    A. S. Potapov

    2014-09-01

    The subject of this research is deep learning methods, in which feature transforms are constructed automatically for pattern recognition tasks. Multilayer autoencoders were taken as the considered type of deep learning network. Autoencoders perform a nonlinear feature transform with logistic regression as an upper classification layer. In order to verify the hypothesis that the recognition rate of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, can be improved by global optimization of their parameters, a new method has been designed and implemented. The method applies simulated annealing for tuning the connection weights of autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database have shown a decrease in recognition error rate by a factor of 1.1 to 1.5 for the modified method compared to the traditional method based on local optimization. Thus, no overfitting effect appears, and the possibility of improving recognition probability in deep learning networks by global optimization methods is confirmed. The research results can be applied to improving the probability of pattern recognition in fields that require automatic construction of nonlinear feature transforms, in particular image recognition. Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing.
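    The global-optimization step can be illustrated with a generic simulated-annealing loop over a weight vector. The objective, proposal scale and cooling schedule below are placeholder choices, and the simultaneous stochastic-gradient training of the regression layer is omitted:

```python
# Generic simulated annealing over a weight vector (toy objective).
import numpy as np

def anneal(loss, w0, n_steps=5000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    w, cur = w0.copy(), loss(w0)
    for k in range(n_steps):
        t = t0 / (1.0 + k)                            # simple cooling schedule
        cand = w + rng.normal(scale=0.1, size=w.shape)
        delta = loss(cand) - cur
        if delta < 0 or rng.random() < np.exp(-delta / t):
            w, cur = cand, cur + delta                # accept downhill, or uphill with prob.
    return w, cur

toy_loss = lambda w: float(np.sum((w - 1.0) ** 2))    # stand-in for reconstruction error
w_opt, l_opt = anneal(toy_loss, np.zeros(10))
print(round(l_opt, 4))
```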

  8. Statistical Analysis of Deep Drilling Process Conditions Using Vibrations and Force Signals

    Directory of Open Access Journals (Sweden)

    Syafiq Hazwan

    2016-01-01

    Cooling systems are a key element in the hot forming process of Ultra High Strength Steels (UHSS). Normally, cooling systems are made using a deep drilling technique. Although deep twist drilling offers higher productivity than other drilling techniques, its main problem is premature tool breakage, which affects production quality. In this paper, a statistical analysis of deep twist drill process parameters such as cutting speed, feed rate and depth of cut is presented to identify the tool condition. Comparisons between two different tool geometries are also studied. Measured data from vibration and force sensors are analyzed through several statistical parameters such as root mean square (RMS), mean, kurtosis, standard deviation and skewness. The results show that the kurtosis and skewness values are the most appropriate parameters to represent deep twist drill tool condition from vibration and force data. The condition of the deep twist drill was classified as good, blunt or fractured. It was also found that the tool geometry parameters affect the performance of the drill. The results of this study are useful in determining a suitable analysis method for developing an online tool condition monitoring system to identify the tertiary tool life stage and help avoid premature tool fracture during drilling.
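    For reference, the five statistical parameters named above are one-liners in Python; the signal below is synthetic rather than measured drilling data:

```python
# Summary statistics of a sensor window (synthetic stand-in for a force/vibration signal).
import numpy as np
from scipy import stats

signal = np.random.default_rng(0).normal(size=1024)
summary = {
    "rms": float(np.sqrt(np.mean(signal ** 2))),
    "mean": float(np.mean(signal)),
    "std": float(np.std(signal, ddof=1)),
    "kurtosis": float(stats.kurtosis(signal)),  # flagged above as most informative,
    "skewness": float(stats.skew(signal)),      # together with skewness
}
print(summary)
```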

  9. Deep ecology: A movement and a new approach to solving environmental problems

    Directory of Open Access Journals (Sweden)

    Mišković Milan M.

    2016-01-01

    In the industrial society, nature is conceived as a resource for unlimited exploitation, and it is assumed that the entropic effects of its pollution and depletion can be effectively controlled and resolved. Non-human entities are viewed as raw materials for technical manipulation and for raising the standard of living of consumers in mass societies. Contrary to such utilitarian pragmatism, new views on the relationship of man, society and nature are appearing, as well as different concepts of environmentally balanced development. According to these views, the transition to an ecological society and ecological culture will not be possible without replacing the current anthropocentric ethics with ecocentric or environmental ethics. Deep ecology arises within the spectrum of environmental ethics theories. It is considered both a movement and a new approach to solving environmental problems. Deep ecology is a type of ecosophy formulated by Arne Naess, and it focuses on wisdom and ecological balance. It is based on ecological science, but it asks deeper questions about the causes of the ecological crisis and engages the general discourse on sustainable development. The article discusses the platform of the deep ecology movement and gives the basic principles of deep ecology. It explains the two basic norms of deep ecology (self-understanding and biospheric egalitarianism) and the criticism of these concepts.

  10. Stability of deep features across CT scanners and field of view using a physical phantom

    Science.gov (United States)

    Paul, Rahul; Shafiq-ul-Hassan, Muhammad; Moros, Eduardo G.; Gillies, Robert J.; Hall, Lawrence O.; Goldgof, Dmitry B.

    2018-02-01

    Radiomics is the process of analyzing radiological images by extracting quantitative features for monitoring and diagnosis of various cancers. Analyzing images acquired from different medical centers is confounded by many choices of acquisition and reconstruction parameters and by differences among device manufacturers. Consequently, scanning the same patient or phantom using various acquisition/reconstruction parameters as well as different scanners may result in different feature values. To further evaluate this issue, in this study, CT images from a physical radiomic phantom were used. Recent studies showed that some quantitative features were dependent on voxel size and that this dependency could be reduced or removed by an appropriate normalization factor. Deep features extracted from a convolutional neural network may also provide additional features for image analysis. Using a transfer learning approach, we obtained deep features from three convolutional neural networks pre-trained on color camera images. An examination of the dependency of deep features on image pixel size was performed. We found that some deep features were pixel-size dependent, and to remove this dependency we propose two effective normalization approaches. To analyze the effects of normalization, a threshold based on the calculated standard deviation and the average distance from a best-fit horizontal line across the features' underlying pixel sizes was used before and after normalization. The inter- and intra-scanner dependency of deep features was also evaluated.
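    One plausible form such a correction can take, detrending a feature against pixel size, is sketched below. The paper's actual normalization factors are not reproduced here; all values are invented:

```python
# Removing a linear pixel-size trend from one feature (hypothetical values).
import numpy as np

pixel_size = np.array([0.5, 0.7, 0.9, 1.1, 1.3])       # mm, one scan per setting
feature = np.array([2.1, 2.9, 3.8, 4.7, 5.6])          # a pixel-size-dependent deep feature

slope, intercept = np.polyfit(pixel_size, feature, 1)  # fit the dependency
normalized = feature - slope * (pixel_size - pixel_size.mean())

print(np.std(feature), np.std(normalized))             # spread shrinks after correction
```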

  11. Stable isotope geochemistry of deep sea cherts

    Energy Technology Data Exchange (ETDEWEB)

    Kolodny, Y; Epstein, S [California Inst. of Tech., Pasadena (USA). Div. of Geological Sciences

    1976-10-01

    Seventy-four samples of DSDP (Deep Sea Drilling Project) recovered cherts of Jurassic to Miocene age from varying locations, and 27 samples of on-land exposed cherts, were analyzed for the isotopic composition of their oxygen and hydrogen. These studies were accompanied by mineralogical analyses and some isotopic analyses of the coexisting carbonates. δ¹⁸O of chert ranges between 27 and 39 parts per thousand relative to SMOW, δ¹⁸O of porcellanite between 30 and 42 parts per thousand. The consistent enrichment of opal-CT in porcellanites in ¹⁸O with respect to coexisting microcrystalline quartz in chert is probably a reflection of a different temperature (depth) of diagenesis of the two phases. δ¹⁸O of deep sea cherts generally decreases with increasing age, indicating an overall cooling of the ocean bottom during the last 150 m.y. A comparison of this trend with that recorded by benthonic foraminifera (Douglas et al., Initial Reports of the Deep Sea Drilling Project; 32:509(1975)) indicates the possibility of δ¹⁸O in deep sea cherts not being frozen in until several tens of millions of years after deposition. Cherts of any age show a spread of δ¹⁸O values, increasing diagenesis being reflected in a lowering of δ¹⁸O. Drusy quartz has the lowest δ¹⁸O values. On-land exposed cherts are consistently depleted in ¹⁸O in comparison to their deep sea time-equivalent cherts. Water extracted from deep sea cherts ranges between 0.5 and 1.4 wt%. δD of this water ranges between -78 and -95 parts per thousand and is not a function of δ¹⁸O of the cherts (or the temperature of their formation).

  12. Standard NIM instrumentation system

    International Nuclear Information System (INIS)

    1990-05-01

    NIM is a standard modular instrumentation system that is in wide use throughout the world. As the NIM system developed and accommodated a dynamic instrumentation field and a rapidly advancing technology, additions, revisions and clarifications were made; these were incorporated into the standard in the form of addenda and errata. This standard is a revision of the NIM document, AEC Report TID-20893 (Rev. 4), dated July 1974. It includes all the addenda and errata items that were previously issued, as well as numerous additional items to bring the standard up to date with modern technology and manufacturing practice.

  13. Towards common technical standards

    International Nuclear Information System (INIS)

    Rahmat, H.; Suardi, A.R.

    1993-01-01

    In 1989, PETRONAS launched its Total Quality Management (TQM) program. In the same year, the PETRONAS Management took the decision to introduce common technical standards group-wide. These standards apply to the design, construction, operation and maintenance of all PETRONAS installations in the upstream, downstream and petrochemical sectors. The introduction of common company standards is seen as part of an overall technical management system, which is an integral part of Total Quality Management. The Engineering and Safety Unit in the PETRONAS Central Office in Kuala Lumpur has been charged with the task of putting in place a set of technical standards throughout PETRONAS and its operating units.

  14. The Standard Model course

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    Suggested readings: Aspects of Quantum Chromodynamics, A. Pich, arXiv:hep-ph/0001118; The Standard Model of Electroweak Interactions, A. Pich, arXiv:hep-ph/0502010; The Standard Model of Particle Physics, A. Pich. The Standard Model of Elementary Particle Physics will be described. A detailed discussion of the particle content, structure and symmetries of the theory will be given, together with an overview of the most important experimental facts which have established this theoretical framework as the Standard Theory of particle interactions.

  15. Flight Standards Automation System -

    Data.gov (United States)

    Department of Transportation — FAVSIS supports Flight Standards Service (AFS) by maintaining their information on entities such as air carriers, air agencies, designated airmen, and check airmen....

  16. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning.

    Science.gov (United States)

    Abràmoff, Michael David; Lou, Yiyue; Erginay, Ali; Clarida, Warren; Amelon, Ryan; Folk, James C; Niemeijer, Meindert

    2016-10-01

    To compare the performance of a deep-learning enhanced algorithm for automated detection of diabetic retinopathy (DR) with the previously published performance of the same algorithm without deep learning components, the Iowa Detection Program (IDP), on the same publicly available set of fundus images and the previously reported consensus reference standard set by three US Board-certified retinal specialists. We used the previously reported consensus reference standard of referable DR (rDR), defined as International Clinical Classification of Diabetic Retinopathy moderate or severe nonproliferative DR (NPDR), proliferative DR (PDR), and/or macular edema (ME). Neither the Messidor-2 images nor the three retinal specialists setting the Messidor-2 reference standard were used for training IDx-DR version X2.1. Sensitivity, specificity, negative predictive value, area under the curve (AUC), and their confidence intervals (CIs) were calculated. Sensitivity was 96.8% (95% CI: 93.3%-98.8%) and specificity was 87.0% (95% CI: 84.2%-89.4%), with 6/874 false negatives, resulting in a negative predictive value of 99.0% (95% CI: 97.8%-99.6%). No cases of severe NPDR, PDR, or ME were missed. The AUC was 0.980 (95% CI: 0.968-0.992). Sensitivity was not statistically different from the published IDP sensitivity, which had a CI of 94.4% to 99.3%, but specificity was significantly better than the published IDP specificity CI of 55.7% to 63.0%. A deep-learning enhanced algorithm for the automated detection of DR achieves significantly better performance than a previously reported, otherwise essentially identical, algorithm that does not employ deep learning. Deep-learning enhanced algorithms have the potential to improve the efficiency of DR screening, and thereby to prevent visual loss and blindness from this devastating disease.
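
    The reported operating point can be sanity-checked with a few lines of arithmetic. The sketch below back-calculates approximate confusion-matrix counts from the stated rates (the per-cell counts are inferences from the 6/874 false negatives, not data from the paper) and uses a normal-approximation confidence interval, so the intervals differ slightly from the exact CIs quoted above.

      # Sensitivity, specificity and NPV with normal-approximation 95% CIs.
      import math

      def rate_ci(k, n, z=1.96):
          p = k / n
          half = z * math.sqrt(p * (1 - p) / n)
          return p, max(0.0, p - half), min(1.0, p + half)

      # Counts back-calculated from the reported rates (approximate).
      tp, fn = 182, 6    # sensitivity 182/188 ~ 96.8%
      tn, fp = 868, 130  # specificity 868/998 ~ 87.0%; NPV 868/874 ~ 99.3%

      for name, (k, n) in {"sensitivity": (tp, tp + fn),
                           "specificity": (tn, tn + fp),
                           "NPV": (tn, tn + fn)}.items():
          p, lo, hi = rate_ci(k, n)
          print(f"{name}: {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")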

  17. Wind tunnel tests of a deep seabed penetrator model

    International Nuclear Information System (INIS)

    Visintini, L.; Murray, C.N.

    1991-01-01

    C.C.R. Euratom Ispra is currently involved in studies on the possibility of storing radioactive wastes in deep ocean sediment beds. This report summarizes the results of wind tunnel tests performed in March 1985 on a 1:2.5-scale model of a European Standard Penetrator in the Aermacchi low-speed wind tunnel. Tests covered the measurement of overall fluid-dynamic forces at varying angle of attack and the measurement of unsteady pressures acting on the instrumentation head protruding into the penetrator's wake. Overall force coefficients were found to be in good agreement with predictions. Unsteady pressures were found to be much smaller than expected, so that no mechanical damage to instrumentation is foreseen even at the high dynamic pressures typical of a penetrator moving through water. The present work was undertaken under contract 2450-84-08 ED ISP I of C.C.R. EURATOM ISPRA.

  18. Simulation of deep one- and two-dimensional redshift surveys

    International Nuclear Information System (INIS)

    Park, Changbom; Gott, J.R. III

    1991-01-01

    We show that slice or pencil-beam redshift surveys of galaxies can be simulated in a box with unequal sides. This method saves considerable computer time and memory while providing essentially the same results as whole-cube simulations. A 2457.6 h⁻¹ Mpc-long rod (out to a redshift z = 0.58 in two opposite directions) is simulated using the standard biased Cold Dark Matter model as an example, to mimic the recent deep pencil-beam surveys by Broadhurst et al. The structures (spikes) seen in these simulated samples occur when the narrow pencil-beam pierces walls, filaments and clusters appearing randomly along the line of sight. We have applied a statistical test for goodness of fit to a periodic lattice to both the observations and the simulations. (author)

  19. Photon Detection System Designs for the Deep Underground Neutrino Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Whittington, Denver [Indiana U.

    2015-11-19

    The Deep Underground Neutrino Experiment (DUNE) will be a premier facility for exploring long-standing questions about the boundaries of the standard model. Acting in concert with the liquid argon time projection chambers underpinning the far detector design, the DUNE photon detection system will capture ultraviolet scintillation light in order to provide valuable timing information for event reconstruction. To maximize the active area while maintaining a small photocathode coverage, the experiment will utilize a design based on plastic light guides coated with a wavelength-shifting compound, along with silicon photomultipliers, to collect and record scintillation light from liquid argon. This report presents recent preliminary performance measurements of this baseline design and several alternative designs which promise significant improvements in sensitivity to low-energy interactions.

  20. [Diagnostic strategy in patients with clinically suspected deep vein thrombosis]

    DEFF Research Database (Denmark)

    Mantoni, Margit Yvonne; Kristensen, M.; Brogaard, M.H.

    2008-01-01

    INTRODUCTION: The standard method for diagnosing deep vein thrombosis (DVT) involves determination of D-dimer and ultrasound scanning. In an attempt to reduce the number of ultrasound examinations we have supplemented this with a clinical probability estimate for DVT (DVT-score) over one year. MATERIALS AND METHODS: A total of 508 consecutive patients presenting in the emergency room with suspected DVT had D-dimer and DVT-score performed. Patients with non-elevated D-dimer and a low or moderate DVT-score received no treatment. The remainder had ultrasound scanning from the groin to the popliteal … patients with normal D-dimer had high DVT-scores, none had DVT, so that the benefit from determining DVT-scores was modest. Ultrasound scanning revealed DVT in 85 out of 397 patients with elevated D-dimer. A repeat examination was performed in 91 patients with persisting symptoms, and disclosed DVT in two …
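
    The triage logic described above (the record is truncated) reduces to a small decision rule. The sketch below is illustrative only: the score categories come from the record, but this is not the study's protocol and is not clinical guidance.

      def dvt_workup(d_dimer_elevated: bool, dvt_score: str) -> str:
          """dvt_score: 'low', 'moderate' or 'high' clinical probability."""
          if not d_dimer_elevated and dvt_score in ("low", "moderate"):
              return "no treatment, no ultrasound"
          return "ultrasound scan, groin to popliteal vein"

      print(dvt_workup(False, "low"))  # -> no treatment, no ultrasound
      print(dvt_workup(True, "low"))   # -> ultrasound scan ...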

  1. Detection of eardrum abnormalities using ensemble deep learning approaches

    Science.gov (United States)

    Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.

    2018-02-01

    In this study, we propose an approach that reports the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network that takes advantage of auto-encoders, using an additional 673 unlabeled eardrum samples. The individual classification accuracies of Network 1 and Network 2 were 84.4% (+/- 12.1%) and 82.6% (+/- 11.3%), respectively. Only 32% of the errors of the two networks coincided, making it possible to combine the two approaches to achieve better classification accuracy. The proposed ensemble method allows robust classification because it has high accuracy (84.4%) with the lowest standard deviation (+/- 10.3%).
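
    The record states that the two networks are ensembled but not how their outputs are combined; probability averaging, sketched below with placeholder inputs, is one standard choice and is an assumption here, not the authors' stated rule.

      import numpy as np

      def ensemble_predict(p_net1, p_net2, w=0.5):
          """p_net1, p_net2: (N, 2) arrays of per-class probabilities."""
          p = w * p_net1 + (1.0 - w) * p_net2  # weighted average of the models
          return p.argmax(axis=1)              # 0 = normal, 1 = abnormal

      p1 = np.array([[0.7, 0.3], [0.4, 0.6]])  # Network 1 outputs
      p2 = np.array([[0.6, 0.4], [0.2, 0.8]])  # Network 2 outputs
      print(ensemble_predict(p1, p2))          # [0 1]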

  2. Development of HMPE fiber for deep water permanent mooring applications

    Energy Technology Data Exchange (ETDEWEB)

    Vlasblom, Martin; Fronzaglia, Bill; Boesten, Jorn [DSM Dyneema, Urmond (Netherlands); Leite, Sergio [Lankhorst Ropes, Sneek (Netherlands); Davies, Peter [Institut Francais de Recherche pour L' Exploration de la Mer (IFREMER) (France)

    2012-07-01

    For a number of years, the creep performance of standard High Modulus Polyethylene (HMPE) fiber types has limited their use in synthetic offshore mooring systems. In 2003, a low creep HMPE fiber was introduced and qualified for semi-permanent MODU moorings. This paper reports on a new High Modulus Polyethylene fiber type with significantly improved creep properties compared to any other HMPE fiber type, which, for the first time, allows its use in permanent offshore mooring systems, for example for deep water FPSO moorings. Results on fiber and rope creep experiments and stiffness measurements are reported. Laboratory testing shows that ropes made with the new fiber type retain the properties characteristic of HMPE such as high static strength, high fatigue resistance and stiffness, and illustrate that stiffness properties determined on HMPE fiber or rope are dependent on the applied load and temperature. (author)

  3. Deep Borehole Emplacement Mode Hazard Analysis Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-07

    This letter report outlines a methodology and provides resource information for the Deep Borehole Emplacement Mode Hazard Analysis (DBEMHA). The main purpose is to identify the accident hazards and accident event sequences associated with the two emplacement mode options (wireline or drillstring), to outline a methodology for computing accident probabilities and frequencies, and to point to available databases on the nature and frequency of accidents typically associated with standard borehole drilling and nuclear handling operations. Risk mitigation and prevention measures, which have been incorporated into the two emplacement designs (see Cochran and Hardin 2015), are also discussed. A key intent of this report is to provide background information to brief subject matter experts involved in the Emplacement Mode Design Study. [Note: Revision 0 of this report concentrates more on the wireline emplacement mode. It is expected that Revision 1 will contain further development of the preliminary fault and event trees for the drill string emplacement mode.]

  4. Deep Space Detection of Oriented Ice Crystals

    Science.gov (United States)

    Marshak, A.; Varnai, T.; Kostinski, A. B.

    2017-12-01

    The deep space climate observatory (DSCOVR) spacecraft resides at the first Lagrangian point about one million miles from Earth. A polychromatic imaging camera onboard delivers nearly hourly observations of the entire sun-lit face of the Earth. Many images contain unexpected bright flashes of light over both ocean and land. We constructed a yearlong time series of flash latitudes, scattering angles and oxygen absorption to demonstrate conclusively that the flashes over land are specular reflections off tiny ice crystals floating in the air nearly horizontally. Such deep space detection of tropospheric ice can be used to constrain the likelihood of oriented crystals and their contribution to Earth albedo.

  5. A clinical study on deep neck abscess

    International Nuclear Information System (INIS)

    Ota, Yumi; Ogawa, Yoshiko; Takemura, Teiji; Sawada, Toru

    2007-01-01

    Although various effective antibiotics have been synthesized, deep neck abscess remains a serious and life-threatening infection. Prompt diagnosis and adequate treatment are essential, and contrast-enhanced CT is useful and indispensable for diagnosis. We reviewed our patients with deep neck abscess, analyzed the abscess locations on CT images, and discuss the treatment. Surgical drainage is the fundamental treatment for an abscess, but if it is confined to a single space, such as the parotid gland space, it can be cured with needle aspiration and suitable antibiotics. (author)

  6. Deep Belief Nets for Topic Modeling

    DEFF Research Database (Denmark)

    Maaløe, Lars; Arngren, Morten; Winther, Ole

    2015-01-01

    … In this paper we describe large-scale content-based collaborative filtering for digital publishing. To solve the digital publishing recommender problem we compare two approaches: latent Dirichlet allocation (LDA) and deep belief nets (DBN), which both find low-dimensional latent representations for documents. Efficient retrieval can be carried out in the latent representation. We work both on public benchmarks and on digital media content provided by Issuu, an on-line publishing platform. This article also comes with a newly developed deep belief nets toolbox for topic modeling tailored towards performance …
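
    For the LDA half of the comparison, retrieval in the latent space looks roughly like the sketch below (toy corpus, scikit-learn implementation; the Issuu pipeline and the DBN toolbox are not reproduced here).

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.metrics.pairwise import cosine_similarity

      docs = ["deep learning for topic models",
              "collaborative filtering for publishing",
              "topic models for digital publishing"]

      # Embed documents into a low-dimensional topic space ...
      counts = CountVectorizer().fit_transform(docs)
      theta = LatentDirichletAllocation(n_components=2,
                                        random_state=0).fit_transform(counts)

      # ... and rank documents by similarity to the first one in that space.
      print(cosine_similarity(theta[:1], theta).ravel())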

  7. Un paseo por la Deep Web

    OpenAIRE

    Ortega Castillo, Carlos

    2018-01-01

    This document seeks to present a technical and inclusive look at some of the interconnection technologies developed on the DeepWeb, first from a theoretical point of view and then with a brief practical introduction. Demystifying the processes that take place on the DeepWeb gives users tools to clarify and build new paradigms of society, knowledge and technology that contribute to the responsible development of this kind of network and to the growth…

  8. Deep fracturation of granitic rock mass

    International Nuclear Information System (INIS)

    Bles, J.L.; Blanchin, R.; Bonijoly, D.; Dutartre, P.; Feybesse, J.L.; Gros, Y.; Landry, J.; Martin, P.

    1986-01-01

    This documentary study, carried out with the financial support of the European Communities and the CEA, aims at using available data to understand the evolution of natural fractures in granitic rocks from the surface to deep underground, for various feasibility studies dealing with radioactive waste disposal. The Mont Blanc road tunnel, the EDF Arc-Isere gallery, the Auriat deep borehole and the Pyrenean rock mass of Bassies are studied. The study analyzes in particular the relationship between small fractures and large faults, the evolution with depth of fracture density and direction, the consequences of rock decompression, and the relationship between fracturing and groundwater.

  9. Gamma-rays from deep inelastic collisions

    International Nuclear Information System (INIS)

    Stephens, F.S.

    1979-01-01

    The γ-rays associated with deep inelastic collisions can give information about the magnitude and orientation of the angular momentum transferred in these events. In this review, special emphasis is placed on understanding the origin and nature of these γ-rays in order to avoid some of the ambiguities that can arise. The experimental information coming from these γ-ray studies is reviewed, and compared briefly with that obtained by other methods and also with the expectations from current models for deep inelastic collisions. 15 figures

  10. Fractal measures in a deep penetration problem

    International Nuclear Information System (INIS)

    Murthy, K.P.N.; Indira, R.; John, T.M.

    1993-01-01

    In the Monte Carlo simulation of a deep penetration problem, the parameter, say b, in the importance function must be assigned a value b' such that the variance is minimum. If b > b', the sample mean is still not reliable; the sample fluctuations would be small and misleading, though the actual fluctuations are quite large. This is because the distribution of the transmission has a tail which becomes prominent when b > b'. Considering a model deep penetration problem, and employing exact enumeration techniques, it is shown that in the limit of large biasing the long-tailed distribution of the transmission is multifractal. (author). 5 refs., 3 figs
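
    A toy version of the setting, under assumed specifics (a 1-D exponential path-length model rather than the paper's exact-enumeration problem): transmission through a slab of thickness T is estimated with an exponentially stretched sampling density, and the biasing parameter b controls the weight tail, and hence how trustworthy the sample fluctuations are.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 10.0  # slab thickness; the exact transmission is exp(-T)

      def biased_estimate(b, n=200_000):
          x = rng.exponential(b, n)             # biased density q(x) = exp(-x/b)/b
          w = b * np.exp(-x * (1.0 - 1.0 / b))  # weight p(x)/q(x), p(x) = exp(-x)
          scores = (x > T) * w
          return scores.mean(), scores.std(ddof=1) / np.sqrt(n)

      for b in (2.0, 5.0, 20.0):
          est, se = biased_estimate(b)
          print(f"b={b:5.1f}  estimate={est:.2e}  sample s.e.={se:.1e}")
      print(f"exact: {np.exp(-T):.2e}")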

  11. La deep web : el mercado negro global

    OpenAIRE

    Gay Fernández, José

    2015-01-01

    The deep web is a hidden space on the internet where the first guarantee is anonymity. Broadly speaking, the deep web contains everything that conventional search engines cannot locate. This guarantee serves to host a vast network of illegal services, such as drug trafficking, human trafficking, the hiring of hitmen, the buying and selling of passports and bank accounts, and child pornography, among many others. But anonymity also makes it possible for activ…

  12. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN uses the residual neural network (ResNet) architecture and was trained on data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors still existed, indicating that our approach still needs to be improved.
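
    The record gives no architectural detail beyond "ResNet"; the block below is a generic residual unit of the kind such networks stack, with assumed layer sizes, not the authors' exact model.

      import torch
      import torch.nn as nn

      class ResidualBlock(nn.Module):
          def __init__(self, channels=64):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(channels, channels, 3, padding=1),
                  nn.BatchNorm2d(channels),
                  nn.ReLU(inplace=True),
                  nn.Conv2d(channels, channels, 3, padding=1),
                  nn.BatchNorm2d(channels),
              )
              self.act = nn.ReLU(inplace=True)

          def forward(self, x):
              return self.act(x + self.body(x))  # identity skip connection

      x = torch.randn(1, 64, 32, 32)   # feature map from an intensity image
      print(ResidualBlock()(x).shape)  # torch.Size([1, 64, 32, 32])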

  13. Nuclear structure in deep-inelastic reactions

    International Nuclear Information System (INIS)

    Rehm, K.E.

    1986-01-01

    The paper concentrates on recent deep inelastic experiments conducted at Argonne National Laboratory and the nuclear structure effects evident in reactions between super-heavy nuclei. Experiments indicate that these reactions evolve gradually from the simple transfer processes that have been studied extensively for lighter nuclei such as ¹⁶O, suggesting a theoretical approach connecting the one-step DWBA theory to the multistep statistical models of nuclear reactions. This transition between quasi-elastic and deep inelastic reactions is achieved by a simple random-walk model. Some typical examples of nuclear structure effects are shown. 24 refs., 9 figs

  14. Deep Learning For Sequential Pattern Recognition

    OpenAIRE

    Safari, Pooyan

    2013-01-01

    Project carried out within the framework of a mobility program with the Technische Universität München (TUM). In recent years, deep learning has opened a new research line in pattern recognition tasks. It has been hypothesized that this kind of learning would capture more abstract patterns concealed in data. It is motivated by new findings both in the biological aspects of the brain and in hardware developments, which have made parallel processing possible. Deep learning methods come along with …

  15. Environmental challenges of deep water activities

    International Nuclear Information System (INIS)

    Sande, Arvid

    1998-01-01

    This presentation discusses the experience of the petroleum industry and the projects that have been conducted in connection with the planning and drilling of the first deep water wells in Norway. Views are also presented on where to put more effort in the years to come, so as to increase knowledge of deep water areas. Attention is focused on exploration drilling, as this is the only activity with environmental potential that will take place during the next five years or so. The challenges of future field developments at these water depths are briefly discussed. 7 refs

  16. DeepGO: predicting protein functions from sequence and interactions using a deep ontology-aware classifier

    KAUST Repository

    Kulmanov, Maxat

    2017-09-27

    Motivation: A large number of protein sequences are becoming available through the application of novel high-throughput sequencing technologies. Experimental functional characterization of these proteins is time-consuming and expensive, and is often only done rigorously for a few selected model organisms. Computational function prediction approaches have been suggested to fill this gap. The functions of proteins are classified using the Gene Ontology (GO), which contains over 40,000 classes. Additionally, proteins have multiple functions, making function prediction a large-scale, multi-class, multi-label problem. Results: We have developed a novel method to predict protein function from sequence. We use deep learning to learn features from protein sequences as well as a cross-species protein-protein interaction network. Our approach specifically outputs information in the structure of the GO and utilizes the dependencies between GO classes as background information to construct a deep learning model. We evaluate our method using the standards established by the Computational Assessment of Function Annotation (CAFA) and demonstrate a significant improvement over baseline methods such as BLAST, in particular for predicting cellular locations.

  17. DeepGO: predicting protein functions from sequence and interactions using a deep ontology-aware classifier.

    Science.gov (United States)

    Kulmanov, Maxat; Khan, Mohammed Asif; Hoehndorf, Robert; Wren, Jonathan

    2018-02-15

    A large number of protein sequences are becoming available through the application of novel high-throughput sequencing technologies. Experimental functional characterization of these proteins is time-consuming and expensive, and is often only done rigorously for a few selected model organisms. Computational function prediction approaches have been suggested to fill this gap. The functions of proteins are classified using the Gene Ontology (GO), which contains over 40,000 classes. Additionally, proteins have multiple functions, making function prediction a large-scale, multi-class, multi-label problem. We have developed a novel method to predict protein function from sequence. We use deep learning to learn features from protein sequences as well as a cross-species protein-protein interaction network. Our approach specifically outputs information in the structure of the GO and utilizes the dependencies between GO classes as background information to construct a deep learning model. We evaluate our method using the standards established by the Computational Assessment of Function Annotation (CAFA) and demonstrate a significant improvement over baseline methods such as BLAST, in particular for predicting cellular locations. Web server: http://deepgo.bio2vec.net, Source code: https://github.com/bio-ontology-research-group/deepgo. Contact: robert.hoehndorf@kaust.edu.sa. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
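
    Stripped of the GO-hierarchy outputs and the PPI-network branch, the core of such a model is a sequence CNN with one sigmoid output per GO class. The sketch below is a simplification under assumed sizes, not the published DeepGO architecture.

      import torch
      import torch.nn as nn

      NUM_AMINO_ACIDS, SEQ_LEN, NUM_GO_CLASSES = 21, 1000, 932

      model = nn.Sequential(
          nn.Conv1d(NUM_AMINO_ACIDS, 32, kernel_size=16),  # sequence motifs
          nn.ReLU(),
          nn.AdaptiveMaxPool1d(1),
          nn.Flatten(),
          nn.Linear(32, NUM_GO_CLASSES),
          nn.Sigmoid(),          # independent score per GO class (multi-label)
      )
      loss_fn = nn.BCELoss()     # multi-label objective

      x = torch.randn(4, NUM_AMINO_ACIDS, SEQ_LEN)  # one-hot-style batch
      y = torch.randint(0, 2, (4, NUM_GO_CLASSES)).float()
      print(loss_fn(model(x), y))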

  18. The value of MR angiography in the diagnosis of deep vein thrombosis of the lower limbs: comparative study with DSA

    International Nuclear Information System (INIS)

    Feng Min; Wang Shuzhi; Gu Jianping; Sun Jun; Mao Cunnan; Lu Lingquan; Yin Xindao

    2007-01-01

    Objective: To assess the clinical value of MR angiography (MRA) in the detection of deep vein thrombosis of the lower limbs. Methods: Two-dimensional time-of-flight (2D TOF) MRA was performed in thirty patients suspected of having deep vein thrombosis of the lower limbs. The findings of MRA were compared with those of digital subtraction angiography (DSA). Results: Twenty-five cases showed deep vein thrombosis of the lower limbs; the MRA findings included venous filling defects (14 cases), occlusions and interruptions of veins (8 cases), venous recanalizations (3 cases) and collateral veins (25 cases). Taking the results of DSA as the gold standard, MRA detected all of the affected cases with only one false positive. Conclusion: 2D TOF MRA is a method of choice in the diagnosis of deep vein thrombosis of the lower limbs. (authors)

  19. Position paper on standardization

    International Nuclear Information System (INIS)

    1991-04-01

    The ''NPOC Strategic Plan for Building New Nuclear Plants'' creates a framework within which new standardized nuclear plants may be built. The Strategic Plan is an expression of the nuclear energy industry's serious intent to create the necessary conditions for new plant construction and operation. One of the key elements of the Strategic Plan is a comprehensive industry commitment to standardization: through design certification, combined license, first-of-a-kind engineering, construction, operation and maintenance of nuclear power plants. The NPOC plan proposes four stages of standardization in advanced light water reactors (ALWRs). The first stage is established by the ALWR Utility Requirements Document, which specifies owner/operator requirements at a functional level covering all elements of plant design and construction, and many aspects of operations and maintenance. The second stage of standardization is that achieved in the NRC design certification. This certification level includes requirements, design criteria and bases, functional descriptions and performance requirements for systems to assure plant safety. The third stage, commercial standardization, carries the design to a level of completion beyond that required for design certification, enabling the industry to achieve potential increases in efficiency and economy. The final stage is enhanced standardization beyond design: a standardized approach is being developed for construction practices, operating and maintenance training, and procurement practices. This comprehensive standardization program enables the NRC to proceed with design certification with confidence that standardization beyond the regulations will be achieved. This confidence should answer the question of the design detail required for design certification, and demonstrate that the NRC should require no further regulatory review beyond that required by 10 CFR Part 52.

  20. DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks.

    Science.gov (United States)

    Li, Chao; Wang, Xinggang; Liu, Wenyu; Latecki, Longin Jan

    2018-04-01

    Mitotic count is a critical predictor of tumor aggressiveness in breast cancer diagnosis. Nowadays mitosis counting is mainly performed by pathologists manually, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting mitotic cells in histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating the mitosis region when only a weak label is given (i.e., only the centroid pixel of the mitosis is annotated), an elaborately designed deep detection network for localizing mitosis using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from the ICPR 2012 grand challenge using the deep detection network alone. For the ICPR 2014 MITOSIS dataset, which only provides the centroid location of each mitosis, we employ the segmentation model to estimate bounding-box annotations for training the deep detection network. We also apply the verification model to eliminate false positives produced by the detection model. By fusing the scores of the detection and verification models, we achieve state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.
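
    The record says the detection and verification scores are fused but not by what rule; a weighted average, sketched below with placeholder scores and an assumed weight, is one plausible reading rather than the paper's stated formula.

      def fused_score(detection_score: float, verification_score: float,
                      alpha: float = 0.5) -> float:
          # Weighted combination of the two stage scores per candidate.
          return alpha * detection_score + (1.0 - alpha) * verification_score

      candidates = [(0.92, 0.85), (0.80, 0.15)]  # (detection, verification)
      keep = [c for c in candidates if fused_score(*c) >= 0.5]
      print(keep)  # the second candidate is discarded as a false positive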