WorldWideScience

Sample records for end-to-end ccd simulator

  1. End-to-end plasma bubble PIC simulations on GPUs

    Science.gov (United States)

    Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava

    2017-10-01

    Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.
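
    A minimal sketch may help make the simulated physics concrete: the core update that a particle-in-cell code such as PSC offloads to GPUs is the deposit-solve-push cycle. The NumPy version below is a purely illustrative 1D electrostatic example with invented grid and particle parameters; it is not the PSC implementation and omits the injection, heating and collision operators mentioned above.

    ```python
    import numpy as np

    # Minimal 1D electrostatic particle-in-cell step (illustrative only, not PSC).
    nx, n_part, dt, dx = 64, 10_000, 0.1, 1.0
    x = np.random.uniform(0, nx * dx, n_part)   # particle positions
    v = np.random.normal(0.0, 1.0, n_part)      # particle velocities
    qm = -1.0                                   # charge-to-mass ratio

    def pic_step(x, v):
        # 1) deposit charge onto the grid (nearest-grid-point weighting)
        rho = np.bincount((x / dx).astype(int) % nx, minlength=nx).astype(float)
        rho -= rho.mean()                       # neutralizing background
        # 2) solve Poisson's equation for the potential via FFT (periodic domain)
        k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        k[0] = 1.0                              # avoid division by zero for the k=0 mode
        phi = np.real(np.fft.ifft(np.fft.fft(rho) / k**2))
        E = -np.gradient(phi, dx)
        # 3) gather the field at particle positions and push (leapfrog)
        Ep = E[(x / dx).astype(int) % nx]
        v_new = v + qm * Ep * dt
        x_new = (x + v_new * dt) % (nx * dx)
        return x_new, v_new

    x, v = pic_step(x, v)
    ```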

  2. End-to-end simulation: The front end

    International Nuclear Information System (INIS)

    Haber, I.; Bieniosek, F.M.; Celata, C.M.; Friedman, A.; Grote, D.P.; Henestroza, E.; Vay, J.-L.; Bernal, S.; Kishek, R.A.; O'Shea, P.G.; Reiser, M.; Herrmannsfeldt, W.B.

    2002-01-01

    For the intense beams in heavy ion fusion accelerators, details of the beam distribution as it emerges from the source region can determine the beam behavior well downstream. This occurs because collective space-charge modes excited as the beam is born remain undamped for many focusing periods. Traditional studies of the source region in particle beam systems have emphasized the behavior of averaged beam characteristics, such as total current, rms beam size, or emittance, rather than the details of the full beam distribution function that are necessary to predict the excitation of these modes. Simulations of the beam in the source region and comparisons to experimental measurements at LBNL and the University of Maryland are presented to illustrate some of the complexity in beam characteristics that has been uncovered as increased attention has been devoted to developing a detailed understanding of the source region. Also discussed are methods of using the simulations to infer characteristics of the beam distribution that can be difficult to measure directly

  3. End-to-end System Performance Simulation: A Data-Centric Approach

    Science.gov (United States)

    Guillaume, Arnaud; Laffitte de Petit, Jean-Luc; Auberger, Xavier

    2013-08-01

    In the early days of the space industry, the feasibility of Earth observation missions was driven directly by what could be achieved by the satellite. It was clear to everyone that the ground segment would be able to deal with the small amount of data sent by the payload. Over the years, the amounts of data processed by spacecraft have increased drastically, placing more and more constraints on ground segment performance - and in particular on timeliness. Nowadays, many space systems require high data throughputs and short response times, with information coming from multiple sources and involving complex algorithms. It has become necessary to perform thorough end-to-end analyses of the full system in order to optimise its cost and efficiency, and sometimes even to assess the feasibility of the mission. This paper presents a novel framework developed by Astrium Satellites to meet these needs of timeliness evaluation and optimisation. This framework, named ETOS (for “End-to-end Timeliness Optimisation of Space systems”), provides a modelling process with associated tools, models and GUIs. These are integrated through a common data model and suitable adapters, with the aim of building space system simulators of the full end-to-end chain. A major challenge of such an environment is integrating heterogeneous tools (each one well adapted to part of the chain) into a coherent timeliness simulation.

  4. Modeling and simulation of satellite subsystems for end-to-end spacecraft modeling

    Science.gov (United States)

    Schum, William K.; Doolittle, Christina M.; Boyarko, George A.

    2006-05-01

    During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems. Much of this research has occurred in the Distributed Architecture Simulation Laboratory (DASL). AFRL developers working in the DASL have effectively combined satellite power, attitude pointing, and communication link analysis subsystem models with robust satellite sensor models to create a first-order end-to-end satellite simulation capability. The merging of these two simulation areas has advanced the field of spacecraft simulation, design, and analysis, and enabled more in-depth mission and satellite utility analyses. A core capability of the DASL is the support of a variety of modeling and analysis efforts, ranging from physics and engineering-level modeling to mission and campaign-level analysis. The flexibility and agility of this simulation architecture will be used to support space mission analysis, military utility analysis, and various integrated exercises with other military and space organizations via direct integration, or through DOD standards such as Distributed Interactive Simulation. This paper discusses the results and lessons learned in modeling satellite communication link analysis, power, and attitude control subsystems for an end-to-end satellite simulation. It also discusses how these spacecraft subsystem simulations feed into and support military utility and space mission analyses.

  5. End-to-End Beam Simulations for the New Muon G-2 Experiment at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Korostelev, Maxim [Cockcroft Inst. Accel. Sci. Tech.]; Bailey, Ian [Lancaster U.]; Herrod, Alexander [Liverpool U.]; Morgan, James [Fermilab]; Morse, William [RIKEN BNL]; Stratakis, Diktys [RIKEN BNL]; Tishchenko, Vladimir [RIKEN BNL]; Wolski, Andrzej [Cockcroft Inst. Accel. Sci. Tech.]

    2016-06-01

    The aim of the new muon g-2 experiment at Fermilab is to measure the anomalous magnetic moment of the muon with an unprecedented uncertainty of 140 ppb. A beam of positive muons required for the experiment is created by pion decay. Detailed studies of the beam dynamics and spin polarization of the muons are important to predict systematic uncertainties in the experiment. In this paper, we present the results of beam simulations and spin tracking from the pion production target to the muon storage ring. The end-to-end beam simulations are developed in Bmad and include the processes of particle decay, collimation (with accurate representation of all apertures) and spin tracking.

  6. End-to-end interoperability and workflows from building architecture design to one or more simulations

    Science.gov (United States)

    Chao, Tian-Jy; Kim, Younghun

    2015-02-10

    An end-to-end interoperability and workflows from building architecture design to one or more simulations, in one aspect, may comprise establishing a BIM enablement platform architecture. A data model defines data entities and entity relationships for enabling the interoperability and workflows. A data definition language may be implemented that defines and creates a table schema of a database associated with the data model. Data management services and/or application programming interfaces may be implemented for interacting with the data model. Web services may also be provided for interacting with the data model via the Web. A user interface may be implemented that communicates with users and uses the BIM enablement platform architecture, the data model, the data definition language, data management services and application programming interfaces to provide functions to the users to perform work related to building information management.
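
    As a concrete (and entirely hypothetical) illustration of a data definition language creating a table schema for data entities and entity relationships, a toy version using Python's built-in sqlite3 might look like the following; the actual BIM enablement platform schema is not specified in the record.

    ```python
    import sqlite3

    # Illustrative only: a toy schema for "data entities and entity relationships",
    # not the actual BIM enablement platform schema described in the patent.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE entity (
            entity_id   INTEGER PRIMARY KEY,
            entity_type TEXT NOT NULL,      -- e.g. 'wall', 'zone', 'hvac_unit'
            attributes  TEXT                -- JSON blob of entity attributes
        );
        CREATE TABLE entity_relationship (
            parent_id   INTEGER REFERENCES entity(entity_id),
            child_id    INTEGER REFERENCES entity(entity_id),
            relation    TEXT NOT NULL       -- e.g. 'contains', 'feeds'
        );
    """)
    conn.execute("INSERT INTO entity VALUES (1, 'zone', '{\"area_m2\": 42}')")
    conn.commit()
    ```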

  7. MRI simulation: end-to-end testing for prostate radiation therapy using geometric pelvic MRI phantoms

    International Nuclear Information System (INIS)

    Sun, Jidi; Menk, Fred; Lambert, Jonathan; Martin, Jarad; Denham, James W; Greer, Peter B; Dowling, Jason; Rivest-Henault, David; Pichler, Peter; Parker, Joel; Arm, Jameen; Best, Leah

    2015-01-01

    Clinical implementation of MRI simulation or MRI-alone treatment planning requires comprehensive end-to-end testing to ensure an accurate process. The purpose of this study was to design and build a geometric phantom simulating a human male pelvis that is suitable for both CT and MRI scanning and use it to test geometric and dosimetric aspects of MRI simulation including treatment planning and digitally reconstructed radiograph (DRR) generation. A liquid-filled pelvic shaped phantom with simulated pelvic organs was scanned in a 3T MRI simulator with dedicated radiotherapy couch-top, laser bridge and pelvic coil mounts. A second phantom with the same external shape but with an internal distortion grid was used to quantify the distortion of the MR image. Both phantoms were also CT scanned as the gold standard for both geometry and dosimetry. Deformable image registration was used to quantify the MR distortion. Dose comparison was made using a seven-field IMRT plan developed on the CT scan with the fluences copied to the MR image and recalculated using bulk electron densities. Without correction, the maximum distortion of the MR compared with the CT scan was 7.5 mm across the pelvis, while this was reduced to 2.6 and 1.7 mm by the vendor’s 2D and 3D correction algorithms, respectively. Within the locations of the internal organs of interest, the distortion was <1.5 and <1 mm with the 2D and 3D correction algorithms, respectively. The dose at the prostate isocentre calculated on the CT and MRI images differed by 0.01% (1.1 cGy). Positioning shifts were within 1 mm when setup was performed using MRI-generated DRRs compared to setup using CT DRRs. The MRI pelvic phantom allows end-to-end testing of the MRI simulation workflow with comparison to the gold-standard CT based process. MRI simulation was found to be geometrically accurate with organ dimensions, dose distributions and DRR based setup within acceptable limits compared to CT. (paper)

  8. IDENTIFYING ELUSIVE ELECTROMAGNETIC COUNTERPARTS TO GRAVITATIONAL WAVE MERGERS: AN END-TO-END SIMULATION

    International Nuclear Information System (INIS)

    Nissanke, Samaya; Georgieva, Alexandra; Kasliwal, Mansi

    2013-01-01

    Combined gravitational wave (GW) and electromagnetic (EM) observations of compact binary mergers should enable detailed studies of astrophysical processes in the strong-field gravity regime. This decade, ground-based GW interferometers promise to routinely detect compact binary mergers. Unfortunately, networks of GW interferometers have poor angular resolution on the sky and their EM signatures are predicted to be faint. Therefore, a challenging goal will be to unambiguously pinpoint the EM counterparts of GW mergers. We perform the first comprehensive end-to-end simulation that focuses on: (1) GW sky localization, distance measures, and volume errors with two compact binary populations and four different GW networks; (2) subsequent EM detectability by a slew of multiwavelength telescopes; and (3) final identification of the merger counterpart amidst a sea of possible astrophysical false positives. First, we find that double neutron star binary mergers can be detected out to a maximum distance of 400 Mpc (or 750 Mpc) by three (or five) detector GW networks, respectively. Neutron-star-black-hole binary mergers can be detected a factor of 1.5 further out; their median to maximum sky localizations are 50-170 deg² (or 6-65 deg²) for a three (or five) detector GW network. Second, by optimizing depth, cadence, and sky area, we quantify relative fractions of optical counterparts that are detectable by a suite of different aperture-size telescopes across the globe. Third, we present five case studies to illustrate the diversity of scenarios in secure identification of the EM counterpart. We discuss the case of a typical binary, neither beamed nor nearby, and the challenges associated with identifying an EM counterpart at both low and high Galactic latitudes. For the first time, we demonstrate how construction of low-latency GW volumes in conjunction with local universe galaxy catalogs can help solve the problem of false positives. We conclude with strategies that would

  9. End-to-end simulation of the C-ADS injector Ⅱ with a 3-D field map

    International Nuclear Information System (INIS)

    Wang Zhijun; He Yuan; Li Chao; Wang Wangsheng; Liu Shuhui; Jia Huan; Xu Xianbo; Chen Ximeng

    2013-01-01

    The Injector II, one of the two parallel injectors of the high-current superconducting proton driver linac for the China Accelerator-Driven System (C-ADS) project, is being designed and constructed by the Institute of Modern Physics. At present, the design work for the injector is almost finished. End-to-end simulation has been carried out using the TRACK multiparticle simulation code to check the match between each acceleration section and the performance of the injector as a whole. Moreover, multiparticle simulations with all kinds of errors and misalignments have been performed to define the requirements of each device. The simulation results indicate that the lattice design is robust. In this paper, the results of end-to-end simulation and error simulation with a 3-D field map are presented. (authors)

  10. Modeling and Simulation of Satellite Subsystems for End-to-End Spacecraft Modeling

    National Research Council Canada - National Science Library

    Schum, William K; Doolittle, Christina M; Boyarko, George A

    2006-01-01

    During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems...

  11. End-to-end simulation of a visible 1 kW FEL

    International Nuclear Information System (INIS)

    Parazzoli, Claudio G.; Koltenbah, Benjamin E.C.

    2000-01-01

    In this paper we present the complete numerical simulation of the 1 kW visible Free Electron Laser under construction in Seattle. We show that the goal of producing 1.0 kW at 0.7 μm is well within the hardware capabilities. We simulate in detail the evolution of the electron bunch phase space in the entire e-beam line. The e-beam line includes the photo-injector cavities, the 433.33 MHz accelerator, the magnetic buncher, the 1300 MHz accelerator, the 180 deg. bend and the matching optics into the wiggler. The computed phase space is input for a three-dimensional time-dependent code that predicts the FEL performance. All the computations are based on state of the art software, and the limitations of the current software are discussed. We believe that this is the first time that such a thorough numerical simulation has been carried out and that such a realistic electron phase space has been used in FEL performance calculations

  12. Crosstalk in an FDM Laboratory Setup and the Athena X-IFU End-to-End Simulator

    Science.gov (United States)

    den Hartog, R.; Kirsch, C.; de Vries, C.; Akamatsu, H.; Dauser, T.; Peille, P.; Cucchetti, E.; Jackson, B.; Bandler, S.; Smith, S.; Wilms, J.

    2018-04-01

    The impact of various crosstalk mechanisms on the performance of the Athena X-IFU instrument has been assessed with detailed end-to-end simulations. For the crosstalk in the electrical circuit, a detailed model has been developed. In this contribution, we test this model against measurements made with an FDM laboratory setup and discuss the assumption of deterministic crosstalk in the context of the weak link effect in the detectors. We conclude that crosstalk levels predicted by the model are conservative with respect to the observed levels.

  13. HIDE & SEEK: End-to-end packages to simulate and process radio survey data

    Science.gov (United States)

    Akeret, J.; Seehars, S.; Chang, C.; Monstein, C.; Amara, A.; Refregier, A.

    2017-01-01

    As several large single-dish radio surveys begin operation within the coming decade, a wealth of radio data will become available and provide a new window to the Universe. In order to fully exploit the potential of these datasets, it is important to understand the systematic effects associated with the instrument and the analysis pipeline. A common approach to tackle this is to forward-model the entire system - from the hardware to the analysis of the data products. For this purpose, we introduce two newly developed, open-source Python packages: the HI Data Emulator (HIDE) and the Signal Extraction and Emission Kartographer (SEEK) for simulating and processing single-dish radio survey data. HIDE forward-models the process of collecting astronomical radio signals in a single-dish radio telescope instrument and outputs pixel-level time-ordered-data. SEEK processes the time-ordered-data, removes artifacts from Radio Frequency Interference (RFI), automatically applies flux calibration, and aims to recover the astronomical radio signal. The two packages can be used separately or together depending on the application. Their modular and flexible nature allows easy adaptation to other instruments and datasets. We describe the basic architecture of the two packages and examine in detail the noise and RFI modeling in HIDE, as well as the implementation of gain calibration and RFI mitigation in SEEK. We then apply HIDE & SEEK to forward-model a Galactic survey in the frequency range 990-1260 MHz based on data taken at the Bleien Observatory. For this survey, we expect to cover 70% of the full sky and achieve a median signal-to-noise ratio of approximately 5-6 in the cleanest channels including systematic uncertainties. However, we also point out the potential challenges of high RFI contamination and baseline removal when examining the early data from the Bleien Observatory. The fully documented HIDE & SEEK packages are available at http://hideseek.phys.ethz.ch/ and are published
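
    SEEK's actual RFI-mitigation algorithm is not detailed in this record; as a stand-in, the sketch below shows the simplest form of the idea - an iterative sigma-clipping mask applied to time-ordered data - using NumPy only. The function name, thresholds and test data are assumptions for illustration, not SEEK's implementation.

    ```python
    import numpy as np

    def rfi_mask(tod, nsigma=5.0, iterations=3):
        """Flag RFI-like outliers in time-ordered data by iterative sigma clipping.

        Illustrative stand-in, not SEEK's actual RFI-mitigation algorithm.
        tod: 2D array (frequency channels x time samples).
        Returns a boolean mask, True where samples are flagged.
        """
        mask = np.zeros(tod.shape, dtype=bool)
        for _ in range(iterations):
            clean = np.ma.array(tod, mask=mask)
            med = np.ma.median(clean)
            std = clean.std()
            mask |= np.abs(tod - med) > nsigma * std
        return mask

    # Example: a noisy band with a block of strong RFI injected.
    tod = np.random.normal(1.0, 0.1, size=(64, 1000))
    tod[10, 200:210] += 20.0
    print(rfi_mask(tod).sum(), "samples flagged")
    ```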

  14. End to End Travel

    Data.gov (United States)

    US Agency for International Development — E2 Solutions is a web based end-to-end travel management tool that includes paperless travel authorization and voucher document submissions, document approval...

  15. WE-DE-BRA-11: A Study of Motion Tracking Accuracy of Robotic Radiosurgery Using a Novel CCD Camera Based End-To-End Test System

    Energy Technology Data Exchange (ETDEWEB)

    Wang, L; M Yang, Y [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); Nelson, B [Logos Systems Intl, Scotts Valley, CA (United States)

    2016-06-15

    Purpose: A novel end-to-end test system using a CCD camera and a scintillator based phantom (XRV-124, Logos Systems Int’l) capable of measuring the beam-by-beam delivery accuracy of Robotic Radiosurgery (CyberKnife) was developed and reported in our previous work. This work investigates its application in assessing the motion tracking (Synchrony) accuracy for CyberKnife. Methods: A QA plan with Anterior and Lateral beams (with 4 different collimator sizes) was created (Multiplan v5.3) for the XRV-124 phantom. The phantom was placed on a motion platform (superior and inferior movement), and the plans were delivered on the CyberKnife M6 system using four motion patterns: static, sine wave, sine with 15° phase shift, and a patient breathing pattern composed of 2 cm maximum motion with a 4 second breathing cycle. Under integral recording mode, the time-averaged beam vectors (X, Y, Z) were measured by the phantom and compared with static delivery. In dynamic recording mode, the beam spots were recorded at a rate of 10 frames/second. The beam vector deviation from the average position was evaluated against the various breathing patterns. Results: The average beam positions of the six deliveries with no motion and three deliveries with Synchrony tracking on ideal motion (sine wave without phase shift) all agree within −0.03±0.00 mm, 0.10±0.04 mm, and 0.04±0.03 mm in the X, Y, and Z directions. Radiation beam width (FWHM) variations are within ±0.03 mm. Dynamic video recording showed sub-millimeter tracking stability for both regular and irregular breathing patterns; however, tracking errors up to 3.5 mm were observed when the 15 degree phase shift was introduced. Conclusion: The XRV-124 system is able to provide 3D and 4D targeting accuracy for CyberKnife delivery with Synchrony. The experimental results showed sub-millimeter delivery accuracy in phantom, with excellent correlation between target and breathing motion. The accuracy was degraded when irregular motion and phase shift were introduced.

  16. Unmanned Aircraft Systems Minimum Operations Performance Standards End-to-End Verification and Validation (E2-V2) Simulation

    Science.gov (United States)

    Ghatas, Rania W.; Jack, Devin P.; Tsakpinis, Dimitrios; Vincent, Michael J.; Sturdy, James L.; Munoz, Cesar A.; Hoffler, Keith D.; Dutle, Aaron M.; Myer, Robert R.; Dehaven, Anna M.

    2017-01-01

    As Unmanned Aircraft Systems (UAS) make their way to mainstream aviation operations within the National Airspace System (NAS), research efforts are underway to develop a safe and effective environment for their integration into the NAS. Detect and Avoid (DAA) systems are required to account for the lack of "eyes in the sky" due to having no human on-board the aircraft. The current NAS relies on pilots' vigilance and judgement to remain Well Clear (14 CFR 91.113) of other aircraft. RTCA SC-228 has defined DAA Well Clear (DAAWC) to provide a quantified Well Clear volume to allow systems to be designed and measured against. Extended research efforts have been conducted to understand and quantify system requirements needed to support a UAS pilot's ability to remain well clear of other aircraft. The efforts have included developing and testing sensor, algorithm, alerting, and display requirements. More recently, sensor uncertainty and uncertainty mitigation strategies have been evaluated. This paper discusses results and lessons learned from an End-to-End Verification and Validation (E2-V2) simulation study of a DAA system representative of RTCA SC-228's proposed Phase I DAA Minimum Operational Performance Standards (MOPS). NASA Langley Research Center (LaRC) was called upon to develop a system that evaluates a specific set of encounters, in a variety of geometries, with end-to-end DAA functionality including the use of sensor and tracker models, a sensor uncertainty mitigation model, DAA algorithmic guidance in both vertical and horizontal maneuvering, and a pilot model which maneuvers the ownship aircraft to remain well clear from intruder aircraft, having received collective input from the previous modules of the system. LaRC developed a functioning batch simulation and added a sensor/tracker model from the Federal Aviation Administration (FAA) William J. Hughes Technical Center, an in-house developed sensor uncertainty mitigation strategy, and implemented a pilot

  17. End-to-end simulations and planning of a small space telescope: Galaxy Evolution Spectroscopic Explorer: a case study

    Science.gov (United States)

    Heap, Sara; Folta, David; Gong, Qian; Howard, Joseph; Hull, Tony; Purves, Lloyd

    2016-08-01

    Large astronomical missions are usually general-purpose telescopes with a suite of instruments optimized for different wavelength regions, spectral resolutions, etc. Their end-to-end (E2E) simulations are typically photons-in to flux-out calculations made to verify that each instrument meets its performance specifications. In contrast, smaller space missions are usually single-purpose telescopes, and their E2E simulations start with the scientific question to be answered and end with an assessment of the effectiveness of the mission in answering the scientific question. Thus, E2E simulations for small missions consist of a longer string of calculations than those for large missions, as they include not only the telescope and instrumentation, but also the spacecraft, orbit, and external factors such as coordination with other telescopes. Here, we illustrate the strategy and organization of small-mission E2E simulations using the Galaxy Evolution Spectroscopic Explorer (GESE) as a case study. GESE is an Explorer/Probe-class space mission concept with the primary aim of understanding galaxy evolution. Operation of a small survey telescope in space like GESE is usually simpler than operations of large telescopes driven by the varied scientific programs of the observers or by transient events. Nevertheless, both types of telescopes share two common challenges: maximizing the integration time on target, while minimizing operation costs including communication costs and staffing on the ground. We show in the case of GESE how these challenges can be met through a custom orbit and a system design emphasizing simplification and leveraging information from ground-based telescopes.

  18. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    Science.gov (United States)

    Gratadour, Damien

    2011-09-01

    Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for a VLT-sized AO system and in quasi-real-time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multi-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-aperture of an SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.
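
    One of the centroiding algorithms mentioned, the simple centre of gravity of each Shack-Hartmann sub-aperture image, is compact enough to sketch. The NumPy version below is illustrative only; the code described above implements this kind of estimator in CUDA on the GPU.

    ```python
    import numpy as np

    def cog_centroid(subap):
        """Centre-of-gravity slope estimate for one Shack-Hartmann sub-aperture image."""
        subap = np.asarray(subap, dtype=float)
        total = subap.sum()
        ys, xs = np.indices(subap.shape)
        cx = (xs * subap).sum() / total
        cy = (ys * subap).sum() / total
        # slopes relative to the sub-aperture centre, in pixels
        return cx - (subap.shape[1] - 1) / 2, cy - (subap.shape[0] - 1) / 2

    # Example: a Gaussian spot displaced by (+1.5, -0.5) pixels from centre.
    yy, xx = np.indices((16, 16))
    spot = np.exp(-((xx - 9.0) ** 2 + (yy - 7.0) ** 2) / 4.0)
    print(cog_centroid(spot))
    ```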

  19. Verification of the active deformation compensation system of the LMT/GTM by end-to-end simulations

    Science.gov (United States)

    Eisentraeger, Peter; Suess, Martin

    2000-07-01

    The 50 m LMT/GTM is exposed to the climatic conditions at 4,600 m altitude on Cerro La Negra, Mexico. To operate the telescope within the challenging requirements of its millimeter-wavelength objective, an active approach to monitoring and compensating the structural deformations (Flexible Body Compensation, FBC) is necessary. This system includes temperature sensors and strain gages for identifying large-scale deformations of the reflector backup structure, a laser system for measuring the subreflector position, and an inclinometer system for measuring the deformations of the alidade. For compensating the monitored deformations, the telescope is equipped with additional actuators for active control of the main reflector surface and the subreflector position. The paper describes the verification of the active deformation compensation system by finite element calculations and MATLAB simulations of the surface accuracy and the pointing, including the servo, under the operational wind and thermal conditions.

  20. Greenhouse gas profiling by infrared-laser and microwave occultation: retrieval algorithm and demonstration results from end-to-end simulations

    Directory of Open Access Journals (Sweden)

    V. Proschek

    2011-10-01

    Measuring greenhouse gas (GHG) profiles with global coverage and high accuracy and vertical resolution in the upper troposphere and lower stratosphere (UTLS) is key for improved monitoring of GHG concentrations in the free atmosphere. In this respect a new satellite mission concept adding an infrared-laser part to the already well studied microwave occultation technique exploits the joint propagation of infrared-laser and microwave signals between Low Earth Orbit (LEO) satellites. This synergetic combination, referred to as the LEO-LEO microwave and infrared-laser occultation (LMIO) method, enables the retrieval of thermodynamic profiles (pressure, temperature, humidity) and accurate altitude levels from the microwave signals and GHG profiles from the simultaneously measured infrared-laser signals. However, due to the novelty of the LMIO method, a retrieval algorithm for GHG profiling is not yet available. Here we introduce such an algorithm for retrieving GHGs from LEO-LEO infrared-laser occultation (LIO) data, applied as a second step after retrieving thermodynamic profiles from LEO-LEO microwave occultation (LMO) data. We thoroughly describe the LIO retrieval algorithm and unveil the synergy with the LMO-retrieved pressure, temperature, and altitude information. We furthermore demonstrate the effective independence of the GHG retrieval results from background (a priori) information in discussing demonstration results from LMIO end-to-end simulations for a representative set of GHG profiles, including carbon dioxide (CO2), water vapor (H2O), methane (CH4), and ozone (O3). The GHGs except for ozone are well retrieved throughout the UTLS, while ozone is well retrieved from about 10 km to 15 km upwards, since the ozone layer resides in the lower stratosphere. The GHG retrieval errors are generally smaller than 1% to 3% r.m.s., at a vertical resolution of about 1 km. The retrieved profiles also appear unbiased, which points

  1. The Swarm End-to-End mission simulator study: A demonstration of separating the various contributions to Earth's magnetic field using synthetic data

    DEFF Research Database (Denmark)

    Olsen, Nils; Haagmans, R.; Sabaka, T.J.

    2006-01-01

    Swarm, a satellite constellation to measure Earth's magnetic field with unprecedented accuracy, has been selected by ESA for launch in 2009. The mission will provide the best ever survey of the geomagnetic field and its temporal evolution, in order to gain new insights into the Earth system...... to the science objectives of Swarm. In order to be able to use realistic parameters of the Earth's environment, the mission simulation starts at January 1, 1997 and lasts until re-entry of the lower satellites five years later. Synthetic magnetic field values were generated for all relevant contributions......

  2. OISI dynamic end-to-end modeling tool

    Science.gov (United States)

    Kersten, Michael; Weidler, Alexander; Wilhelm, Rainer; Johann, Ulrich A.; Szerdahelyi, Laszlo

    2000-07-01

    The OISI Dynamic end-to-end modeling tool is tailored to end-to-end modeling and dynamic simulation of Earth- and space-based actively controlled optical instruments such as, e.g., optical stellar interferometers. 'End-to-end modeling' here denotes that the overall model comprises, besides optical sub-models, also structural, sensor, actuator, controller and disturbance sub-models influencing the optical transmission, so that the system-level instrument performance due to disturbances and active optics can be simulated. This tool has been developed to support performance analysis and prediction as well as control loop design and fine-tuning for OISI, Germany's preparatory program for optical/infrared spaceborne interferometry initiated in 1994 by Dornier Satellitensysteme GmbH in Friedrichshafen.

  3. Unmanned Aircraft Systems Detect and Avoid System: End-to-End Verification and Validation Simulation Study of Minimum Operations Performance Standards for Integrating Unmanned Aircraft into the National Airspace System

    Science.gov (United States)

    Ghatas, Rania W.; Jack, Devin P.; Tsakpinis, Dimitrios; Sturdy, James L.; Vincent, Michael J.; Hoffler, Keith D.; Myer, Robert R.; DeHaven, Anna M.

    2017-01-01

    As Unmanned Aircraft Systems (UAS) make their way to mainstream aviation operations within the National Airspace System (NAS), research efforts are underway to develop a safe and effective environment for their integration into the NAS. Detect and Avoid (DAA) systems are required to account for the lack of "eyes in the sky" due to having no human on-board the aircraft. The technique, results, and lessons learned from a detailed End-to-End Verification and Validation (E2-V2) simulation study of a DAA system representative of RTCA SC-228's proposed Phase I DAA Minimum Operational Performance Standards (MOPS), based on specific test vectors and encounter cases, will be presented in this paper.

  4. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability to cover the range of wave propagation required for nuclear explosion monitoring (NEM) from the buried nuclear device to the seismic sensor. The goal of this work is to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of an UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNE's in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak

  5. Performance of the end-to-end test for the characterization of a simulator in stereotaxic corporal radiotherapy of liver; Realização do teste end-to-end para a caracterização de um simulador em radioterapia estereotáxica corpórea de fígado

    Energy Technology Data Exchange (ETDEWEB)

    Burgos, A.F.; Paiva, E. de, E-mail: adamfburgos@gmail.com [Instituto de Radioproteção e Dosimetria (IRD/CNEN), Rio de Janeiro-RJ (Brazil). Div. de Física Médica; Silva, L.P. da [Instituto Nacional de Câncer (MS/INCA), Rio de Janeiro-RJ (Brazil). Dept. de Física Médica

    2017-07-01

    Currently, one of the alternatives for radiotherapy of the liver is stereotactic body radiotherapy (SBRT), which delivers high doses in a few fractions and has a good prognosis. However, in order to ensure that the high dose value delivered to the target is the same as planned, a full-process verification test (image acquisition, design, scheduling, and dose delivery) should be performed. For this purpose, the objective of this work was to develop a water-density simulator that takes into account the relative position of the liver and the organs at risk involved in this treatment, evaluating the influence of target movement on the dose value, due to the respiratory process, as well as on the positions of the organs at risk.

  6. Cyberinfrastructure for End-to-End Environmental Explorations

    Science.gov (United States)

    Merwade, V.; Kumar, S.; Song, C.; Zhao, L.; Govindaraju, R.; Niyogi, D.

    2007-12-01

    The design and implementation of a cyberinfrastructure for End-to-End Environmental Exploration (C4E4) is presented. The C4E4 framework addresses the need for an integrated data/computation platform for studying broad environmental impacts by combining heterogeneous data resources with state-of-the-art modeling and visualization tools. With Purdue being a TeraGrid Resource Provider, C4E4 builds on top of the Purdue TeraGrid data management system and Grid resources, and integrates them through a service-oriented workflow system. It allows researchers to construct environmental workflows for data discovery, access, transformation, modeling, and visualization. Using the C4E4 framework, we have implemented an end-to-end SWAT simulation and analysis workflow that connects our TeraGrid data and computation resources. It enables researchers to conduct comprehensive studies on the impact of land management practices in the St. Joseph watershed using data from various sources in hydrologic, atmospheric, agricultural, and other related disciplines.

  7. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    Directory of Open Access Journals (Sweden)

    Zhao Hong-hao

    2016-01-01

    Recently, network traffic has been increasing exponentially due to all kinds of applications, such as mobile Internet, smart cities, smart transportation, the Internet of Things, and so on. The end-to-end network traffic has therefore become more important for traffic engineering. Usually, end-to-end traffic estimation is highly difficult. This paper proposes a Bayes theory-based method to model the end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent identically distributed normal process. Then Bayes theory is used to characterize the end-to-end network traffic. By calculating the parameters, the model is determined correctly. Simulation results show that our approach is feasible and effective.
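
    As a rough sketch of the modeling idea - traffic treated as an i.i.d. normal process and fitted with Bayes' theorem - the snippet below performs a conjugate normal update of the traffic mean under an assumed known variance. The prior, the variance, and the sample values are invented for illustration; the paper's exact formulation is not given in this record.

    ```python
    import numpy as np

    # Hypothetical observed end-to-end traffic samples (e.g. Mb/s per interval).
    traffic = np.array([101.2, 98.7, 103.5, 99.9, 102.8, 100.4])

    # Model: traffic ~ Normal(mu, sigma^2) with sigma assumed known,
    # prior: mu ~ Normal(mu0, tau0^2). Conjugacy gives a closed-form posterior.
    sigma2 = 4.0                 # assumed known observation variance
    mu0, tau0_2 = 90.0, 100.0    # weak prior on the mean
    n, xbar = len(traffic), traffic.mean()

    tau_n2 = 1.0 / (1.0 / tau0_2 + n / sigma2)          # posterior variance
    mu_n = tau_n2 * (mu0 / tau0_2 + n * xbar / sigma2)  # posterior mean

    print(f"posterior mean {mu_n:.2f}, posterior std {tau_n2 ** 0.5:.2f}")
    ```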

  8. Standardizing an End-to-end Accounting Service

    Science.gov (United States)

    Greenberg, Edward; Kazz, Greg

    2006-01-01

    Currently there are no space system standards available for space agencies to accomplish end-to-end accounting. Such a standard does not exist for spacecraft operations nor for tracing the relationship between the mission planning activities, the command sequences designed to perform those activities, the commands formulated to initiate those activities and the mission data and specifically the mission data products created by those activities. In order for space agencies to cross-support one another for data accountability/data tracing and for inter-agency spacecraft to interoperate with each other, an international CCSDS standard for end-to-end data accountability/tracing needs to be developed. We will first describe the end-to-end accounting service model and functionality that supports the service. This model will describe how science plans that are ultimately transformed into commands can be associated with the telemetry products generated as a result of their execution. Moreover, the interaction between end-to-end accounting and service management will be explored. Finally, we will show how the standard end-to-end accounting service can be applied to a real-life flight project, i.e., the Mars Reconnaissance Orbiter project.

  9. End-to-End Security for Personal Telehealth

    NARCIS (Netherlands)

    Koster, R.P.; Asim, M.; Petkovic, M.

    2011-01-01

    Personal telehealth is in rapid development with innovative emerging applications like disease management. With personal telehealth people participate in their own care supported by an open distributed system with health services. This poses new end-to-end security and privacy challenges. In this

  10. Utilizing Domain Knowledge in End-to-End Audio Processing

    DEFF Research Database (Denmark)

    Tax, Tycho; Antich, Jose Luis Diez; Purwins, Hendrik

    2017-01-01

    to learn the commonly-used log-scaled mel-spectrogram transformation. Secondly, we demonstrate that upon initializing the first layers of an end-to-end CNN classifier with the learned transformation, convergence and performance on the ESC-50 environmental sound classification dataset are similar to a CNN......-based model trained on the highly pre-processed log-scaled mel-spectrogram features....
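
    For reference, the log-scaled mel-spectrogram front end that the first layers are reported to approximate can be computed directly, for example with librosa. The parameters below (22.05 kHz sampling, 128 mel bands, 1024-point FFT) and the synthetic test signal are assumptions for illustration, not necessarily those used in the paper.

    ```python
    import numpy as np
    import librosa

    # Synthetic 1-second test signal; in practice this would be an ESC-50 clip.
    sr = 22050
    t = np.linspace(0, 1.0, sr, endpoint=False)
    y = 0.5 * np.sin(2 * np.pi * 440.0 * t)

    # Mel power spectrogram followed by log scaling (dB): the hand-crafted
    # front end that the end-to-end CNN's first layers are shown to approximate.
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=512, n_mels=128)
    log_S = librosa.power_to_db(S, ref=np.max)
    print(log_S.shape)  # (n_mels, n_frames)
    ```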

  11. Measurements and analysis of end-to-end Internet dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Paxson, Vern [Univ. of California, Berkeley, CA (United States). Computer Science Division

    1997-04-01

    Accurately characterizing end-to-end Internet dynamics - the performance that a user actually obtains from the lengthy series of network links that comprise a path through the Internet - is exceptionally difficult, due to the network's immense heterogeneity. At the heart of this work is a 'measurement framework' in which a number of sites around the Internet host a specialized measurement service. By coordinating 'probes' between pairs of these sites one can measure end-to-end behavior along O(N²) paths for a framework consisting of N sites. Consequently, one obtains a superlinear scaling that allows measuring a rich cross-section of Internet behavior without requiring huge numbers of observation points. 37 sites participated in this study, allowing the author to measure more than 1,000 distinct Internet paths. The first part of this work looks at the behavior of end-to-end routing: the series of routers over which a connection's packets travel. Based on 40,000 measurements made using this framework, the author analyzes: routing 'pathologies' such as loops, outages, and flutter; the stability of routes over time; and the symmetry of routing along the two directions of an end-to-end path. The author finds that pathologies increased significantly over the course of 1995 and that Internet paths are heavily dominated by a single route. The second part of this work studies end-to-end Internet packet dynamics. The author analyzes 20,000 TCP transfers of 100 Kbyte each to investigate the performance of both the TCP endpoints and the Internet paths. The measurements used for this part of the study are much richer than those for the first part, but require a great degree of attention to issues of calibration, which are addressed by applying self-consistency checks to the measurements whenever possible. The author finds that packet filters are capable of a wide range of measurement errors, some of which, if undetected, can significantly taint subsequent analysis.
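
    The superlinear scaling is simply the pair count: N cooperating sites yield measurements over every ordered pair of endpoints. Checking this against the numbers quoted above (reading the >1,000 figure as a usable subset of the ordered pairs is an inference, not stated in the abstract):

    ```latex
    N(N-1)\big|_{N=37} = 37 \times 36 = 1332 \ \text{ordered site pairs},
    \qquad \binom{37}{2} = 666 \ \text{unordered pairs}.
    ```

    Since Internet routing is often asymmetric, the two directions of a site pair are distinct paths, which is consistent with the "more than 1,000 distinct Internet paths" measured from 37 sites.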

  12. Location Assisted Vertical Handover Algorithm for QoS Optimization in End-to-End Connections

    DEFF Research Database (Denmark)

    Dam, Martin S.; Christensen, Steffen R.; Mikkelsen, Lars M.

    2012-01-01

    implementation on Android-based tablets. The simulations cover a wide range of scenarios for two mobile users in an urban area with ubiquitous cellular coverage, and show that our algorithm leads to increased throughput, with fewer handovers, when considering the end-to-end connection, compared to other handover schemes...

  13. End-to-End Operations in the ELT Era

    Science.gov (United States)

    Hainaut, O. R.; Bierwirth, T.; Brillant, S.; Mieske, S.; Patat, F.; Rejkuba, M.; Romaniello, M.; Sterzik, M.

    2018-03-01

    The Data Flow System is the infrastructure on which Very Large Telescope (VLT) observations are performed at the Observatory, before and after the observations themselves take place. Since its original conception in the late 1990s, it has evolved to accommodate new observing modes and new instruments on La Silla and Paranal. Several updates and upgrades are needed to overcome its obsolescence and to integrate requirements from the new instruments from the community and, of course, from ESO's Extremely Large Telescope (ELT), which will be integrated into Paranal's operations. We describe the end-to-end operations and the resulting roadmap guiding their further development.

  14. End-to-end tests using alanine dosimetry in scanned proton beams

    Science.gov (United States)

    Carlino, A.; Gouldstone, C.; Kragl, G.; Traneus, E.; Marrale, M.; Vatnitsky, S.; Stock, M.; Palmans, H.

    2018-03-01

    This paper describes end-to-end test procedures as the last fundamental step of medical commissioning before starting clinical operation of the MedAustron synchrotron-based pencil beam scanning (PBS) therapy facility with protons. One in-house homogeneous phantom and two anthropomorphic heterogeneous (head and pelvis) phantoms were used for end-to-end tests at MedAustron. The phantoms were equipped with alanine detectors, radiochromic films and ionization chambers. The correction for the ‘quenching’ effect of alanine pellets was implemented in the Monte Carlo platform of the evaluation version of RayStation TPS. During the end-to-end tests, the phantoms were transferred through the workflow like real patients to simulate the entire clinical workflow: immobilization, imaging, treatment planning and dose delivery. Different clinical scenarios of increasing complexity were simulated: delivery of a single beam, two oblique beams without and with range shifter. In addition to the dose comparison in the plastic phantoms the dose obtained from alanine pellet readings was compared with the dose determined with the Farmer ionization chamber in water. A consistent systematic deviation of about 2% was found between alanine dosimetry and the ionization chamber dosimetry in water and plastic materials. Acceptable agreement of planned and delivered doses was observed together with consistent and reproducible results of the end-to-end testing performed with different dosimetric techniques (alanine detectors, ionization chambers and EBT3 radiochromic films). The results confirmed the adequate implementation and integration of the new PBS technology at MedAustron. This work demonstrates that alanine pellets are suitable detectors for end-to-end tests in proton beam therapy and the developed procedures with customized anthropomorphic phantoms can be used to support implementation of PBS technology in clinical practice.

  15. STS/DBS power subsystem end-to-end stability margin

    Science.gov (United States)

    Devaux, R. N.; Vattimo, R. J.; Peck, S. R.; Baker, W. E.

    Attention is given to a full-up end-to-end subsystem stability test which was performed with a flight solar array providing power to a fully operational spacecraft. The solar array simulator is described, and a comparison is made between test results obtained with the simulator and those obtained with the actual array. It is concluded that stability testing with a fully integrated spacecraft is necessary to ensure that all elements have been adequately modeled.

  16. An end to end secure CBIR over encrypted medical database.

    Science.gov (United States)

    Bellafqira, Reda; Coatrieux, Gouenou; Bouslimi, Dalel; Quellec, Gwenole

    2016-08-01

    In this paper, we propose a new secure content based image retrieval (SCBIR) system adapted to the cloud framework. This solution allows a physician to retrieve images of similar content within an outsourced and encrypted image database, without decrypting them. Contrary to current CBIR approaches in the encrypted domain, the originality of the proposed scheme lies in the fact that the features extracted from the encrypted images are themselves encrypted. This is achieved by means of homomorphic encryption and two non-colluding servers, both of which we consider honest but curious. In that way an end to end secure CBIR process is ensured. Experimental results carried out on a diabetic retinopathy database encrypted with the Paillier cryptosystem indicate that our SCBIR achieves retrieval performance as good as if the images were processed in their non-encrypted form.
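
    The additive homomorphic property that such a scheme relies on can be demonstrated with the python-paillier (phe) package. The snippet only illustrates that property with made-up feature values; it is not the authors' retrieval protocol.

    ```python
    from phe import paillier

    # Generate a Paillier key pair; in the SCBIR setting the public key would be
    # shared with the two non-colluding servers, the private key kept by the client.
    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # Two hypothetical feature values extracted from encrypted images.
    enc_a = public_key.encrypt(12.5)
    enc_b = public_key.encrypt(7.25)

    # Additions and scalar multiplications can be done on ciphertexts directly,
    # so a server can combine encrypted features without ever decrypting them.
    enc_sum = enc_a + enc_b
    enc_scaled = enc_a * 3

    print(private_key.decrypt(enc_sum))     # -> 19.75
    print(private_key.decrypt(enc_scaled))  # -> 37.5
    ```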

  17. System of end-to-end symmetric database encryption

    Science.gov (United States)

    Galushka, V. V.; Aydinyan, A. R.; Tsvetkova, O. L.; Fathi, V. A.; Fathi, D. V.

    2018-05-01

    The article is devoted to the pressing problem of protecting databases from information leakage that occurs when access control mechanisms are bypassed. To solve this problem, it is proposed to use end-to-end data encryption, implemented at the end nodes of the interaction between information system components using one of the symmetric cryptographic algorithms. For this purpose, a key management method designed for use in a multi-user system has been developed and described, based on a distributed key representation model in which part of the key is stored in the database and the other part is obtained by converting the user's password. In this case, the key is calculated immediately before the cryptographic transformations and is not stored in memory after the completion of these transformations. Algorithms for registering and authorizing a user, as well as changing his password, are described, and the methods for calculating the parts of the key when performing these operations are provided.
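
    A minimal sketch of the described key handling, under the assumptions that the password-derived part comes from PBKDF2 and that the two parts are combined by XOR (the article's exact derivation and combination functions are not given in this record):

    ```python
    import hashlib
    import secrets

    KEY_LEN = 32  # 256-bit symmetric key

    def derive_key(password: str, db_key_part: bytes, salt: bytes) -> bytes:
        """Recombine the full symmetric key from the DB-stored part and the password.

        Illustrative assumption: PBKDF2 for the password part and XOR as the
        combination; the key is computed on demand and never stored.
        """
        pwd_part = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=KEY_LEN)
        return bytes(a ^ b for a, b in zip(pwd_part, db_key_part))

    # Registration: only db_key_part and salt are persisted in the database.
    salt = secrets.token_bytes(16)
    db_key_part = secrets.token_bytes(KEY_LEN)
    key = derive_key("correct horse battery staple", db_key_part, salt)

    # Authorization later recomputes the same key from the same inputs.
    assert key == derive_key("correct horse battery staple", db_key_part, salt)
    ```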

  18. End-to-end learning for digital hologram reconstruction

    Science.gov (United States)

    Xu, Zhimin; Zuo, Si; Lam, Edmund Y.

    2018-02-01

    Digital holography is a well-known method to perform three-dimensional imaging by recording the light wavefront information originating from the object. Not only the intensity, but also the phase distribution of the wavefront can then be computed from the recorded hologram in the numerical reconstruction process. However, the reconstructions via the traditional methods suffer from various artifacts caused by twin-image, zero-order term, and noise from image sensors. Here we demonstrate that an end-to-end deep neural network (DNN) can learn to perform both intensity and phase recovery directly from an intensity-only hologram. We experimentally show that the artifacts can be effectively suppressed. Meanwhile, our network doesn't need any preprocessing for initialization, and is comparably fast to train and test, in comparison with the recently published learning-based method. In addition, we validate that the performance improvement can be achieved by introducing a prior on sparsity.

  19. End-to-end delay analysis in wireless sensor networks with service vacation

    KAUST Repository

    Alabdulmohsin, Ibrahim; Hyadi, Amal; Afify, Laila H.; Shihada, Basem

    2014-01-01

    In this paper, a delay-sensitive multi-hop wireless sensor network is considered, employing an M/G/1 with vacations framework. Sensors transmit measurements to a predefined data sink subject to maximum end-to-end delay constraint. In order to prolong the battery lifetime, a sleeping scheme is adopted throughout the network nodes. The objective of our proposed framework is to present an expression for maximum hop-count as well as an approximate expression of the probability of blocking at the sink node upon violating certain end-to-end delay threshold. Using numerical simulations, we validate the proposed analytical model and demonstrate that the blocking probability of the system for various vacation time distributions matches the simulation results.
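
    For reference, the classical mean-waiting-time decomposition for an M/G/1 queue with multiple vacations, which a framework of this kind typically builds on for the per-hop delay, is

    ```latex
    \mathbb{E}[W] = \frac{\lambda\,\mathbb{E}[S^2]}{2(1-\rho)} + \frac{\mathbb{E}[V^2]}{2\,\mathbb{E}[V]},
    \qquad \rho = \lambda\,\mathbb{E}[S] < 1,
    ```

    where S is the service time, V the vacation (sleep) time and λ the packet arrival rate; the maximum hop-count then follows from comparing the accumulated per-hop delay with the end-to-end delay threshold.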

  20. End-to-end delay analysis in wireless sensor networks with service vacation

    KAUST Repository

    Alabdulmohsin, Ibrahim

    2014-04-01

    In this paper, a delay-sensitive multi-hop wireless sensor network is considered, employing an M/G/1 with vacations framework. Sensors transmit measurements to a predefined data sink subject to maximum end-to-end delay constraint. In order to prolong the battery lifetime, a sleeping scheme is adopted throughout the network nodes. The objective of our proposed framework is to present an expression for maximum hop-count as well as an approximate expression of the probability of blocking at the sink node upon violating certain end-to-end delay threshold. Using numerical simulations, we validate the proposed analytical model and demonstrate that the blocking probability of the system for various vacation time distributions matches the simulation results.

  1. End-to-End Adversarial Retinal Image Synthesis.

    Science.gov (United States)

    Costa, Pedro; Galdran, Adrian; Meyer, Maria Ines; Niemeijer, Meindert; Abramoff, Michael; Mendonca, Ana Maria; Campilho, Aurelio

    2018-03-01

    In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output into real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while being also anatomically consistent and displaying a reasonable visual quality.
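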

  2. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    OpenAIRE

    Zhao Hong-hao; Meng Fan-bo; Zhao Si-wen; Zhao Si-hang; Lu Yi

    2016-01-01

    Recently, network traffic has exponentially increasing due to all kind of applications, such as mobile Internet, smart cities, smart transportations, Internet of things, and so on. the end-to-end network traffic becomes more important for traffic engineering. Usually end-to-end traffic estimation is highly difficult. This paper proposes a Bayes theory-based method to model the end-to-end network traffic. Firstly, the end-to-end network traffic is described as a independent identically distrib...

  3. Performance Enhancements of UMTS networks using end-to-end QoS provisioning

    DEFF Research Database (Denmark)

    Wang, Haibo; Prasad, Devendra; Teyeb, Oumer

    2005-01-01

    This paper investigates end-to-end (E2E) quality of service (QoS) provisioning approaches for UMTS networks together with a DiffServ IP network. The effort was put on QoS class mapping from DiffServ to UMTS, Access Control (AC), buffering and scheduling optimization. The DiffServ Code Point (DSCP......) was utilized in the whole UMTS QoS provisioning to differentiate different types of traffic. The overall algorithm was optimized to guarantee the E2E QoS parameters of each service class, especially for real-time applications, as well as to improve the bandwidth utilization. Simulation shows that the enhanced......
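
    The class-mapping step can be pictured as a small lookup from UMTS traffic classes to DiffServ code points. The specific DSCP choices below are a common convention used here as an assumption; they are not necessarily the mapping evaluated in the paper.

    ```python
    # Illustrative UMTS-to-DiffServ class mapping (assumed, not the paper's exact table).
    UMTS_TO_DSCP = {
        "conversational": "EF",    # e.g. voice over IP, lowest delay
        "streaming":      "AF41",  # e.g. video streaming
        "interactive":    "AF21",  # e.g. web browsing
        "background":     "BE",    # e.g. e-mail, bulk transfer
    }

    def dscp_for(umts_class: str) -> str:
        """Return the DiffServ code point used for a given UMTS QoS class."""
        return UMTS_TO_DSCP[umts_class.lower()]

    print(dscp_for("Streaming"))  # AF41
    ```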

  4. Screening California Current fishery management scenarios using the Atlantis end-to-end ecosystem model

    Science.gov (United States)

    Kaplan, Isaac C.; Horne, Peter J.; Levin, Phillip S.

    2012-09-01

    value. However, this cost was minimal when local conservation actions were part of a concerted coast-wide plan. The simulations demonstrate the utility of using the Atlantis end-to-end ecosystem model within NOAA’s Integrated Ecosystem Assessment, by illustrating an end-to-end modeling tool that allows consideration of multiple management alternatives that are relevant to numerous state, federal and private interests.

  5. End to End Inter-domain Quality of Service Provisioning

    DEFF Research Database (Denmark)

    Brewka, Lukasz Jerzy

    This thesis addresses selected topics of Quality of Service (QoS) provisioning in heterogeneous data networks that construct the communication environment of today's Internet. In the vast range of protocols available in different domains of network infrastructures, a few chosen ones are discussed......, the general UPnP-QoS performance was assessed analytically and confirmed by simulation results. The results validate the usability of UPnP-QoS, but some open issues in the specification were identified. As a result of addressing the mentioned shortcomings of UPnP-QoS, a few pre-emption algorithms for home gateway...... and discuss also access Passive Optical Network (PON) technologies, a GMPLS controlled Ten Gigabit Passive Optical Network (XG-PON) was proposed. This part of the thesis introduces the possibility of managing the XG-PON by the GMPLS suite, showing again that this protocol suite is a good candidate...

  6. Analytical Framework for End-to-End Delay Based on Unidirectional Highway Scenario

    Directory of Open Access Journals (Sweden)

    Aslinda Hassan

    2015-01-01

    Full Text Available In a sparse vehicular ad hoc network, a vehicle normally employs a carry and forward approach, where it holds the message it wants to transmit until the vehicle meets other vehicles or roadside units. A number of analyses in the literature have investigated the time delay when packets are being carried by vehicles on both unidirectional and bidirectional highways. However, these analyses focus on the delay between either two disconnected vehicles or two disconnected vehicle clusters. Furthermore, the majority of the analyses only concentrate on the expected value of the end-to-end delay when the carry and forward approach is used. Using regression analysis, we establish the distribution model for the time delay between two disconnected vehicle clusters as an exponential distribution. A distribution is then derived to represent the number of clusters on a highway using a vehicular traffic model. From there, we are able to formulate an end-to-end delay model which extends the time delay model for two disconnected vehicle clusters to multiple disconnected clusters on a unidirectional highway. The analytical results obtained from the analytical model are then validated through simulation results.
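
    To make the modelling idea concrete, the following back-of-the-envelope Monte Carlo treats the delay to bridge each inter-cluster gap as exponentially distributed and sums over the gaps along the path. The mean gap delay and the Poisson choice for the cluster count are assumptions for illustration, not values taken from the paper.

```python
# Monte Carlo of the end-to-end carry-and-forward delay over multiple
# disconnected clusters on a unidirectional highway (all parameters assumed).
import numpy as np

rng = np.random.default_rng(0)
MEAN_GAP_DELAY_S = 12.0     # mean carry-and-forward delay per gap (assumed)
MEAN_CLUSTERS = 5.0         # mean number of clusters on the highway (assumed)

def end_to_end_delay():
    n_clusters = max(1, rng.poisson(MEAN_CLUSTERS))
    n_gaps = n_clusters - 1                    # one gap between consecutive clusters
    return rng.exponential(MEAN_GAP_DELAY_S, size=n_gaps).sum()

samples = [end_to_end_delay() for _ in range(100_000)]
print(f"mean end-to-end delay ~ {np.mean(samples):.1f} s, "
      f"95th percentile ~ {np.percentile(samples, 95):.1f} s")
```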

  7. AN ANALYSIS OF THE APPLICATION END TO END QUALITY OF SERVICE ON 3G TELECOMMUNICATION NETWORK

    Directory of Open Access Journals (Sweden)

    Cahya Lukito

    2012-05-01

    Full Text Available End to End Quality of Service is a way to provide data packet service in a telecommunication network based on Right Price, Right Service Level, and Right Quality. The goal of this research is to analyze the impact of End to End QoS use on a 3G telecommunication network for voice and data services. This research uses an analysis method based on a lab implementation of the application. The results achieved in this research show that End to End QoS strongly influences the Service Level Agreement offered to users of the telecommunication service. Keywords: End to End QoS, SLA, DiffServ

  8. End-to-End Traffic Flow Modeling of the Integrated SCaN Network

    Science.gov (United States)

    Cheung, K.-M.; Abraham, D. S.

    2012-05-01

    In this article, we describe the analysis and simulation effort of the end-to-end traffic flow for the Integrated Space Communications and Navigation (SCaN) Network. Using the network traffic derived for the 30-day period of July 2018 from the Space Communications Mission Model (SCMM), we generate the wide-area network (WAN) bandwidths of the ground links for different architecture options of the Integrated SCaN Network. We also develop a new analytical scheme to model the traffic flow and buffering mechanism of a store-and-forward network. It is found that the WAN bandwidth of the Integrated SCaN Network is an important differentiator of different architecture options, as the recurring circuit costs of certain architecture options can be prohibitively high.
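
    The store-and-forward buffering idea can be illustrated with a toy occupancy calculation: bursty arrivals drain over a fixed-bandwidth ground WAN link and any excess is buffered at the relay point. The traffic profile and link rate below are placeholders, not SCMM-derived numbers.

```python
# Toy store-and-forward buffer model: per-second arrivals vs. a fixed WAN drain.
arrivals_mbps = [0, 800, 800, 0, 0, 1200, 0, 0, 0, 0]   # per 1-s interval (assumed)
wan_mbps = 400                                          # ground WAN bandwidth (assumed)

buffer_mb = 0.0
peak_mb = 0.0
for a in arrivals_mbps:
    # net change over one second, converted from megabits to megabytes
    buffer_mb = max(0.0, buffer_mb + (a - wan_mbps) / 8)
    peak_mb = max(peak_mb, buffer_mb)
print(f"peak buffer occupancy: {peak_mb:.1f} MB")
```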

  9. Experimental demonstration of software defined data center optical networks with Tbps end-to-end tunability

    Science.gov (United States)

    Zhao, Yongli; Zhang, Jie; Ji, Yuefeng; Li, Hui; Wang, Huitao; Ge, Chao

    2015-10-01

    End-to-end tunability is important for provisioning elastic channels for the bursty traffic of data center optical networks. How, then, can end-to-end tunability be achieved in elastic optical networks? A software defined networking (SDN) based end-to-end tunability solution is proposed for software defined data center optical networks, and the protocol extension and implementation procedure are designed accordingly. For the first time, flexible grid all-optical networks with a Tbps end-to-end tunable transport and switch system have been demonstrated online for data center interconnection, controlled by an OpenDayLight (ODL) based controller. The performance of the end-to-end tunable transport and switch system has been evaluated with wavelength number tuning, bit rate tuning, and transmit power tuning procedures.

  10. Model outputs - Developing end-to-end models of the Gulf of California

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to develop spatially discrete end-to-end models of the northern Gulf of California, linking oceanography, biogeochemistry, food web...

  11. Physical oceanography - Developing end-to-end models of the California Current Large Marine Ecosystem

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to develop spatially discrete end-to-end models of the California Current LME, linking oceanography, biogeochemistry, food web...

  12. Atlantis model outputs - Developing end-to-end models of the California Current Large Marine Ecosystem

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to develop spatially discrete end-to-end models of the California Current LME, linking oceanography, biogeochemistry, food web...

  13. Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Yunsick Sung

    2018-03-01

    Full Text Available Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be directly used as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be efficiently reduced. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving the Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the angle of the steering wheel in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images.
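
    A simplified sketch of the preprocessing idea follows: keep only the lower (road) part of the frame, derive a cheap edge map, downscale it, and feed the result to a small steering-regression CNN. The crop, threshold, and network below are illustrative stand-ins, not the paper's pipeline.

```python
# Crude "road crop + edges -> small CNN" preprocessing sketch (all values assumed).
import numpy as np
import torch
import torch.nn as nn

def preprocess(frame_gray: np.ndarray) -> torch.Tensor:
    h, w = frame_gray.shape
    road = frame_gray[h // 2:, :]                       # lower half as the "road part"
    gx = np.abs(np.diff(road.astype(float), axis=1))    # horizontal gradient as edge proxy
    edges = (gx > 30).astype(np.float32)                # binary edge map (threshold assumed)
    small = np.ascontiguousarray(edges[::4, ::4])       # cheap downscaling instead of resize
    return torch.from_numpy(small)[None, None]          # shape (1, 1, H', W')

steering_net = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Tanh(),                        # steering angle in [-1, 1]
)

frame = (np.random.rand(240, 320) * 255).astype(np.uint8)   # stand-in camera frame
angle = steering_net(preprocess(frame))
print("predicted steering:", float(angle))
```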

  14. Sleep/wake scheduling scheme for minimizing end-to-end delay in multi-hop wireless sensor networks

    Directory of Open Access Journals (Sweden)

    Madani Sajjad

    2011-01-01

    Full Text Available We present a sleep/wake schedule protocol for minimizing end-to-end delay in event-driven multi-hop wireless sensor networks. In contrast to generic sleep/wake scheduling schemes, our proposed algorithm performs scheduling that is dependent on traffic loads. Nodes adapt their sleep/wake schedule based on traffic loads in response to three important factors: (a) the distance of the node from the sink node, (b) the importance of the node's location from a connectivity perspective, and (c) whether the node is in the proximity of where an event occurs. Using these heuristics, the proposed scheme reduces end-to-end delay and maximizes throughput by minimizing congestion at nodes having a heavy traffic load. Simulations are carried out to evaluate the performance of the proposed protocol by comparing it with the S-MAC and Anycast protocols. Simulation results demonstrate that the proposed protocol significantly reduces end-to-end delay and improves other QoS parameters, such as average energy per packet, average delay, packet loss ratio, throughput, and coverage lifetime.
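
    The traffic-load-aware scheduling idea can be caricatured as a wake-fraction function of the three factors listed in the abstract; the weights below are invented for illustration and are not the protocol's actual rules.

```python
# Hedged sketch: a node's wake fraction grows when it is close to the sink,
# when it is a connectivity-critical relay, or when an event occurs nearby.
def wake_fraction(hops_to_sink: int, is_critical_relay: bool, near_event: bool,
                  base: float = 0.05) -> float:
    duty = base
    duty += 0.10 / max(hops_to_sink, 1)      # nodes near the sink relay more traffic
    duty += 0.10 if is_critical_relay else 0.0
    duty += 0.20 if near_event else 0.0
    return min(duty, 1.0)

print(wake_fraction(hops_to_sink=1, is_critical_relay=True, near_event=False))  # ~0.25
print(wake_fraction(hops_to_sink=6, is_critical_relay=False, near_event=True))  # ~0.27
```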

  15. Availability and End-to-end Reliability in Low Duty Cycle Multihop Wireless Sensor Networks.

    Science.gov (United States)

    Suhonen, Jukka; Hämäläinen, Timo D; Hännikäinen, Marko

    2009-01-01

    A wireless sensor network (WSN) is an ad-hoc technology that may even consist of thousands of nodes, which necessitates autonomic, self-organizing and multihop operations. A typical WSN node is battery powered, which makes the network lifetime the primary concern. The highest energy efficiency is achieved with low duty cycle operation, however, this alone is not enough. WSNs are deployed for different uses, each requiring acceptable Quality of Service (QoS). Due to the unique characteristics of WSNs, such as dynamic wireless multihop routing and resource constraints, the legacy QoS metrics are not feasible as such. We give a new definition to measure and implement QoS in low duty cycle WSNs, namely availability and reliability. Then, we analyze the effect of duty cycling for reaching the availability and reliability. The results are obtained by simulations with ZigBee and proprietary TUTWSN protocols. Based on the results, we also propose a data forwarding algorithm suitable for resource constrained WSNs that guarantees end-to-end reliability while adding a small overhead that is relative to the packet error rate (PER). The forwarding algorithm guarantees reliability up to 30% PER.
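
    The reliability-versus-overhead trade-off mentioned at the end can be sanity-checked with a short calculation: how many per-hop (re)transmissions are needed so that a multihop path still meets a target end-to-end delivery probability at a given packet error rate. This is a generic calculation, not the TUTWSN forwarding algorithm itself.

```python
# Smallest per-hop transmission count k such that (1 - PER**k)**hops >= target.
import math

def tx_per_hop(per: float, hops: int, target: float) -> int:
    per_hop_target = target ** (1.0 / hops)          # required per-hop success probability
    return math.ceil(math.log(1.0 - per_hop_target) / math.log(per))

print(tx_per_hop(per=0.30, hops=5, target=0.99))     # transmissions per hop at 30% PER
```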

  16. Availability and End-to-end Reliability in Low Duty Cycle Multihop Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Timo D. Hämäläinen

    2009-03-01

    Full Text Available A wireless sensor network (WSN) is an ad-hoc technology that may even consist of thousands of nodes, which necessitates autonomic, self-organizing and multihop operations. A typical WSN node is battery powered, which makes the network lifetime the primary concern. The highest energy efficiency is achieved with low duty cycle operation, however, this alone is not enough. WSNs are deployed for different uses, each requiring acceptable Quality of Service (QoS). Due to the unique characteristics of WSNs, such as dynamic wireless multihop routing and resource constraints, the legacy QoS metrics are not feasible as such. We give a new definition to measure and implement QoS in low duty cycle WSNs, namely availability and reliability. Then, we analyze the effect of duty cycling for reaching the availability and reliability. The results are obtained by simulations with ZigBee and proprietary TUTWSN protocols. Based on the results, we also propose a data forwarding algorithm suitable for resource constrained WSNs that guarantees end-to-end reliability while adding a small overhead that is relative to the packet error rate (PER). The forwarding algorithm guarantees reliability up to 30% PER.

  17. A Workflow-based Intelligent Network Data Movement Advisor with End-to-end Performance Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Michelle M. [Southern Illinois Univ., Carbondale, IL (United States); Wu, Chase Q. [Univ. of Memphis, TN (United States)

    2013-11-07

    Next-generation eScience applications often generate large amounts of simulation, experimental, or observational data that must be shared and managed by collaborative organizations. Advanced networking technologies and services have been rapidly developed and deployed to facilitate such massive data transfer. However, these technologies and services have not been fully utilized, mainly because their use typically requires significant domain knowledge and in many cases application users are not even aware of their existence. By leveraging the functionalities of an existing Network-Aware Data Movement Advisor (NADMA) utility, we propose a new Workflow-based Intelligent Network Data Movement Advisor (WINDMA) with end-to-end performance optimization for this DOE-funded project. This WINDMA system integrates three major components: resource discovery, data movement, and status monitoring, and supports the sharing of common data movement workflows through account and database management. This system provides a web interface and interacts with existing data/space management and discovery services such as Storage Resource Management, transport methods such as GridFTP and GlobusOnline, and network resource provisioning brokers such as ION and OSCARS. We demonstrate the efficacy of the proposed transport-support workflow system in several use cases based on its implementation and deployment in DOE wide-area networks.

  18. Impacts of the Deepwater Horizon oil spill evaluated using an end-to-end ecosystem model.

    Science.gov (United States)

    Ainsworth, Cameron H; Paris, Claire B; Perlin, Natalie; Dornberger, Lindsey N; Patterson, William F; Chancellor, Emily; Murawski, Steve; Hollander, David; Daly, Kendra; Romero, Isabel C; Coleman, Felicia; Perryman, Holly

    2018-01-01

    We use a spatially explicit biogeochemical end-to-end ecosystem model, Atlantis, to simulate impacts from the Deepwater Horizon oil spill and subsequent recovery of fish guilds. Dose-response relationships with expected oil concentrations were utilized to estimate the impact on fish growth and mortality rates. We also examine the effects of fisheries closures and impacts on recruitment. We validate predictions of the model by comparing population trends and age structure before and after the oil spill with fisheries independent data. The model suggests that recruitment effects and fishery closures had little influence on biomass dynamics. However, at the assumed level of oil concentrations and toxicity, impacts on fish mortality and growth rates were large and commensurate with observations. Sensitivity analysis suggests the biomass of large reef fish decreased by 25% to 50% in areas most affected by the spill, and biomass of large demersal fish decreased even more, by 40% to 70%. Impacts on reef and demersal forage caused starvation mortality in predators and increased reliance on pelagic forage. Impacts on the food web translated effects of the spill far away from the oiled area. Effects on age structure suggest possible delayed impacts on fishery yields. Recovery of high-turnover populations generally is predicted to occur within 10 years, but some slower-growing populations may take 30+ years to fully recover.

  19. An end-to-end assessment of range uncertainty in proton therapy using animal tissues

    Science.gov (United States)

    Zheng, Yuanshui; Kang, Yixiu; Zeidan, Omar; Schreuder, Niek

    2016-11-01

    Accurate assessment of range uncertainty is critical in proton therapy. However, there is a lack of data and consensus on how to evaluate the appropriate amount of uncertainty. The purpose of this study is to quantify the range uncertainty in various treatment conditions in proton therapy, using transmission measurements through various animal tissues. Animal tissues, including a pig head, beef steak, and lamb leg, were used in this study. For each tissue, an end-to-end test closely imitating patient treatments was performed. This included CT scan simulation, treatment planning, image-guided alignment, and beam delivery. Radio-chromic films were placed at various depths in the distal dose falloff region to measure depth dose. Comparisons between measured and calculated doses were used to evaluate range differences. The dose difference at the distal falloff between measurement and calculation depends on tissue type and treatment conditions. The estimated range difference was up to 5, 6 and 4 mm for the pig head, beef steak, and lamb leg irradiation, respectively. Our study shows that the TPS was able to calculate proton range within about 1.5% plus 1.5 mm. Accurate assessment of range uncertainty in treatment planning would allow better optimization of proton beam treatment, thus fully achieving proton beams’ superior dose advantage over conventional photon-based radiation therapy.

  20. End-to-side and end-to-end anastomoses give similar results in cervical oesophagogastrostomy.

    Science.gov (United States)

    Pierie, J P; De Graaf, P W; Poen, H; Van Der Tweel, I; Obertop, H

    1995-12-01

    To find out if there were any differences in healing between end-to-end and end-to-side anastomoses for oesophagogastrostomy. Open study with historical controls. University hospital, The Netherlands. 28 patients with end-to-end and 90 patients with end-to-side anastomoses after transhiatal oesophagectomy and partial gastrectomy for cancer of the oesophagus or oesophagogastric junction, with gastric tube reconstruction and cervical anastomosis. Leak and stricture rates, and the number of dilatations needed to relieve dysphagia. There were no significant differences in leak rates (end-to-end 4/28, 14%, and end-to-side 13/90, 14%) or anastomotic strictures (end-to-end 9/28, 32%, and end-to-side 26/90, 29%). The median number of dilatations needed to relieve dysphagia was 7 (1-33) after end-to-end and 9 (1-113) after end-to-side oesophagogastrostomy. There were no differences between the two methods of suture of cervical oesophagogastrostomy when leakage, stricture, and number of dilatations were used as criteria of good healing.

  1. Automatic provisioning of end-to-end QoS into the home

    DEFF Research Database (Denmark)

    Brewka, Lukasz Jerzy; Skoldström, Pontus; Nelis, Jelle

    2011-01-01

    Due to a growing number of high bandwidth applications today (such as HDTV), and an increasing amount of network and cloud based applications, service providers need to pay attention to QoS in their networks. We believe there is a need for an end-to-end approach reaching into the home as well....... The Home Gateway (HG) as a key component of the home network is crucial for enabling the end-to-end solutions. UPnP-QoS has been proposed as an inhome solution for resource reservations. In this paper we assess a solution for automatic QoS reservations, on behalf of non-UPnP-QoS aware applications....... Additionally we focus on an integrated end-to-end solution, combining GMPLS-based reservations in e.g., access/metro and UPnP-QoS based reservation in the home network....

  2. Design and end-to-end modelling of a deployable telescope

    Science.gov (United States)

    Dolkens, Dennis; Kuiper, Hans

    2017-09-01

    a closed-loop system based on measurements of the image sharpness as well as measurements obtained with edge sensors placed between the mirror segments. In addition, a phase diversity system will be used to recover residual wavefront aberrations. To aid the design of the deployable telescope, an end-to-end performance model was developed. The model is built around a dedicated ray-trace program written in Matlab. This program was built from the ground up for the purpose of modelling segmented telescope systems and allows for surface data computed with Finite Element Models (FEM) to be imported in the model. The program also contains modules which can simulate the closed-loop calibration of the telescope and it can use simulated images as an input for phase diversity and image processing algorithms. For a given thermo-mechanical state, the end-to-end model can predict the image quality that will be obtained after the calibration has been completed and the image has been processed. As such, the model is a powerful systems engineering tool, which can be used to optimize the in-orbit performance of a segmented, deployable telescope.

  3. SPoRT - An End-to-End R2O Activity

    Science.gov (United States)

    Jedlovec, Gary J.

    2009-01-01

    Established in 2002 to demonstrate the weather and forecasting application of real-time EOS measurements, the Short-term Prediction Research and Transition (SPoRT) program has grown to be an end-to-end research-to-operations activity focused on the use of advanced NASA modeling and data assimilation approaches, nowcasting techniques, and unique high-resolution multispectral observational data applications from EOS satellites to improve short-term weather forecasts on a regional and local scale. SPoRT currently partners with several universities and other government agencies for access to real-time data and products, and works collaboratively with them and operational end users at 13 WFOs to develop and test the new products and capabilities in a "test-bed" mode. The test-bed simulates key aspects of the operational environment without putting constraints on the forecaster workload. Products and capabilities which show utility in the test-bed environment are then transitioned experimentally into the operational environment for further evaluation and assessment. SPoRT focuses on a suite of data and products from MODIS, AMSR-E, and AIRS on the NASA Terra and Aqua satellites, and total lightning measurements from ground-based networks. Some of the observations are assimilated into or used with various versions of the WRF model to provide supplemental forecast guidance to operational end users. SPoRT is enhancing partnerships with NOAA/NESDIS for new product development and data access to exploit the remote sensing capabilities of instruments on the NPOESS satellites to address short-term weather forecasting problems. The VIIRS and CrIS instruments on the NPP and follow-on NPOESS satellites provide similar observing capabilities to the MODIS and AIRS instruments on Terra and Aqua. SPoRT will be transitioning existing and new capabilities into the AWIPS II environment to ensure the continuity of its activities.

  4. Security Considerations around End-to-End Security in the IP-based Internet of Things

    NARCIS (Netherlands)

    Brachmann, M.; Garcia-Mochon, O.; Keoh, S.L.; Kumar, S.S.

    2012-01-01

    The IP-based Internet of Things refers to the interconnection of smart objects in a Low-power and Lossy Network (LLN) with the Internet by means of protocols such as 6LoWPAN or CoAP. The provisioning of an end-to-end security connection is the key to ensure basic functionalities such as software

  5. QoC-based Optimization of End-to-End M-Health Data Delivery Services

    NARCIS (Netherlands)

    Widya, I.A.; van Beijnum, Bernhard J.F.; Salden, Alfons

    2006-01-01

    This paper addresses how Quality of Context (QoC) can be used to optimize end-to-end mobile healthcare (m-health) data delivery services in the presence of alternative delivery paths, which is quite common in a pervasive computing and communication environment. We propose min-max-plus based

  6. End-to-End Availability Analysis of IMS-Based Networks

    DEFF Research Database (Denmark)

    Kamyod, Chayapol; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2013-01-01

    Generation Networks (NGNs). In this paper, an end-to-end availability model is proposed and evaluated using a combination of Reliability Block Diagrams (RBD) and a proposed five-state Markov model. The overall availability for intra- and inter domain communication in IMS is analyzed, and the state...

  7. End-to-End Delay Model for Train Messaging over Public Land Mobile Networks

    Directory of Open Access Journals (Sweden)

    Franco Mazzenga

    2017-11-01

    Full Text Available Modern train control systems rely on a dedicated radio network for train-to-ground communications. A number of possible alternatives have been analysed to adopt the European Rail Traffic Management System/European Train Control System (ERTMS/ETCS) control system on local/regional lines to improve transport capacity. Among them, a communication system based on public networks (cellular and satellite) provides an interesting and effective alternative to proprietary and expensive radio networks. To analyse the performance of this solution, it is necessary to model the end-to-end delay and message loss to fully characterize the message transfer process from train to ground and vice versa. Starting from the results of a railway test campaign over a 300 km railway line, for a cumulative 12,000 traveled km in 21 days, in this paper we derive a statistical model for the end-to-end delay required for delivering messages. In particular, we propose a two-state model that reproduces the main behavioral characteristics of the end-to-end delay as observed experimentally. The model formulation was derived after in-depth analysis of the recorded experimental data. When applied to a realistic scenario, the model explicitly accounts for radio coverage characteristics, the received power level, the handover points along the line, and the serving radio technology. As an example, the proposed model is used to generate the end-to-end delay profile in a realistic scenario.
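
    A minimal version of such a two-state delay generator is sketched below (e.g. a "good coverage" state and a "degraded coverage" state); the transition probabilities and per-state delay distributions are placeholders rather than the values fitted from the test campaign.

```python
# Two-state Markov delay generator: stay in a state with some probability and
# draw each message delay from that state's distribution (all parameters assumed).
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.95, 0.05],     # state 0 -> {0, 1}
              [0.20, 0.80]])    # state 1 -> {0, 1}
mean_delay_s = [0.4, 3.0]       # mean message delay in each state (assumed)

def delay_trace(n):
    state, out = 0, []
    for _ in range(n):
        out.append(rng.exponential(mean_delay_s[state]))
        state = rng.choice(2, p=P[state])
    return out

trace = delay_trace(10_000)
print(f"mean end-to-end delay: {np.mean(trace):.2f} s")
```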

  8. End-to-end Configuration of Wireless Realtime Communication over Heterogeneous Protocols

    DEFF Research Database (Denmark)

    Malinowsky, B.; Grønbæk, Jesper; Schwefel, Hans-Peter

    2015-01-01

    This paper describes a wireless real-time communication system design using two Time Division Multiple Access (TDMA) protocols. Messages are subject to prioritization and queuing. For this interoperation scenario, we show a method for end-to-end configuration of protocols and queue sizes. Such co...

  9. Coupling of a single quantum emitter to end-to-end aligned silver nanowires

    DEFF Research Database (Denmark)

    Kumar, Shailesh; Huck, Alexander; Chen, Yuntian

    2013-01-01

    We report on the observation of coupling a single nitrogen vacancy (NV) center in a nanodiamond crystal to a propagating plasmonic mode of silver nanowires. The nanocrystal is placed either near the apex of a single silver nanowire or in the gap between two end-to-end aligned silver nanowires. We...

  10. End-to-End Flow Control for Visual-Haptic Communication under Bandwidth Change

    Science.gov (United States)

    Yashiro, Daisuke; Tian, Dapeng; Yakoh, Takahiro

    This paper proposes an end-to-end flow controller for visual-haptic communication. A visual-haptic communication system transmits non-real-time packets, which contain large-size visual data, and real-time packets, which contain small-size haptic data. When the transmission rate of visual data exceeds the communication bandwidth, the visual-haptic communication system becomes unstable owing to buffer overflow. To solve this problem, an end-to-end flow controller is proposed. This controller determines the optimal transmission rate of visual data on the basis of the traffic conditions, which are estimated by the packets for haptic communication. Experimental results confirm that in the proposed method, a short packet-sending interval and a short delay are achieved under bandwidth change, and thus, high-precision visual-haptic communication is realized.
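
    The control idea can be sketched as a simple rate clamp: given a bandwidth estimate derived from the haptic packets, the visual sending rate is limited so that both streams fit under the estimated capacity. The constants and the linear rule are illustrative assumptions, not the controller proposed in the paper.

```python
# Toy flow controller: clip the visual-data rate to the estimated leftover capacity.
HAPTIC_RATE_KBPS = 64            # fixed, small, real-time stream (assumed)
SAFETY_MARGIN = 0.9              # keep headroom to avoid buffer build-up (assumed)

def visual_rate_kbps(estimated_bw_kbps: float, desired_kbps: float) -> float:
    available = SAFETY_MARGIN * estimated_bw_kbps - HAPTIC_RATE_KBPS
    return max(0.0, min(desired_kbps, available))

print(visual_rate_kbps(estimated_bw_kbps=2000, desired_kbps=5000))  # bandwidth-limited
print(visual_rate_kbps(estimated_bw_kbps=8000, desired_kbps=5000))  # source-limited
```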

  11. End-to-end network models encompassing terrestrial, wireless, and satellite components

    Science.gov (United States)

    Boyarko, Chandler L.; Britton, John S.; Flores, Phil E.; Lambert, Charles B.; Pendzick, John M.; Ryan, Christopher M.; Shankman, Gordon L.; Williams, Ramon P.

    2004-08-01

    Development of network models that reflect true end-to-end architectures such as the Transformational Communications Architecture needs to encompass terrestrial, wireless and satellite components to truly represent all of the complexities in a worldwide communications network. Use of best-in-class tools including OPNET, Satellite Tool Kit (STK), Popkin System Architect and their well known XML-friendly definitions, such as OPNET Modeler's Data Type Description (DTD), or socket-based data transfer modules, such as STK/Connect, enable the sharing of data between applications for more rapid development of end-to-end system architectures and a more complete system design. By sharing the results of and integrating best-in-class tools we are able to (1) promote sharing of data, (2) enhance the fidelity of our results and (3) allow network and application performance to be viewed in the context of the entire enterprise and its processes.

  12. Providing end-to-end QoS for multimedia applications in 3G wireless networks

    Science.gov (United States)

    Guo, Katherine; Rangarajan, Samapth; Siddiqui, M. A.; Paul, Sanjoy

    2003-11-01

    As the usage of wireless packet data services increases, wireless carriers today are faced with the challenge of offering multimedia applications with QoS requirements within current 3G data networks. End-to-end QoS requires support at the application, network, link and medium access control (MAC) layers. We discuss existing CDMA2000 network architecture and show its shortcomings that prevent supporting multiple classes of traffic at the Radio Access Network (RAN). We then propose changes in RAN within the standards framework that enable support for multiple traffic classes. In addition, we discuss how Session Initiation Protocol (SIP) can be augmented with QoS signaling for supporting end-to-end QoS. We also review state of the art scheduling algorithms at the base station and provide possible extensions to these algorithms to support different classes of traffic as well as different classes of users.

  13. Rectovaginal fistula following colectomy with an end-to-end anastomosis stapler for a colorectal adenocarcinoma.

    Science.gov (United States)

    Klein, A; Scotti, S; Hidalgo, A; Viateau, V; Fayolle, P; Moissonnier, P

    2006-12-01

    An 11-year-old, female neutered Labrador retriever was presented with a micro-invasive differentiated papillar adenocarcinoma at the colorectal junction. A colorectal end-to-end anastomosis stapler device was used to perform resection and anastomosis using a transanal technique. A rectovaginal fistula was diagnosed two days later. An exploratory laparotomy was conducted and the fistula was identified and closed. Early dehiscence of the colon was also suspected and another colorectal anastomosis was performed using a manual technique. Comparison to a conventional manual technique of intestinal surgery showed that the use of an automatic staple device was quicker and easier. To the authors' knowledge, this is the first report of a rectovaginal fistula occurring after end-to-end anastomosis stapler colorectal resection-anastomosis in the dog. To minimise the risk of this potential complication associated with the limited surgical visibility, adequate tissue retraction and inspection of the anastomosis site are essential.

  14. Development of a Dynamic, End-to-End Free Piston Stirling Convertor Model

    Science.gov (United States)

    Regan, Timothy F.; Gerber, Scott S.; Roth, Mary Ellen

    2003-01-01

    A dynamic model for a free-piston Stirling convertor is being developed at the NASA Glenn Research Center. The model is an end-to-end system model that includes the cycle thermodynamics, the dynamics, and electrical aspects of the system. The subsystems of interest are the heat source, the springs, the moving masses, the linear alternator, the controller and the end-user load. The envisioned use of the model will be in evaluating how changes in a subsystem could affect the operation of the convertor. The model under development will speed the evaluation of improvements to a subsystem and aid in determining areas in which most significant improvements may be found. One of the first uses of the end-to-end model will be in the development of controller architectures. Another related area is in evaluating changes to details in the linear alternator.

  15. Building dialogue POMDPs from expert dialogues an end-to-end approach

    CERN Document Server

    Chinaei, Hamidreza

    2016-01-01

    This book discusses the Partially Observable Markov Decision Process (POMDP) framework applied in dialogue systems. It presents POMDP as a formal framework to represent uncertainty explicitly while supporting automated policy solving. The authors propose and implement an end-to-end learning approach for dialogue POMDP model components. Starting from scratch, they present the state, the transition model, the observation model and then finally the reward model from unannotated and noisy dialogues. These altogether form a significant set of contributions that can potentially inspire substantial further work. This concise manuscript is written in a simple language, full of illustrative examples, figures, and tables. Provides insights on building dialogue systems to be applied in real domain Illustrates learning dialogue POMDP model components from unannotated dialogues in a concise format Introduces an end-to-end approach that makes use of unannotated and noisy dialogue for learning each component of dialogue POM...

  16. Circular myotomy as an aid to resection and end-to-end anastomosis of the esophagus.

    Science.gov (United States)

    Attum, A A; Hankins, J R; Ngangana, J; McLaughlin, J S

    1979-08-01

    Segments ranging from 40 to 70% of the thoracic esophagus were resected in 80 mongrel dogs. End-to-end anastomosis was effected after circular myotomy either proximal or distal, or both proximal and distal, to the anastomosis. Among dogs undergoing resection of 60% of the esophagus, distal myotomy enabled 6 of 8 animals to survive, and combined proximal and distal myotomy permitted 8 of 10 to survive. Cineesophagography was performed in a majority of the 50 surviving animals and showed no appreciable delay of peristalsis at the myotomy sites. When these sites were examined at postmortem examination up to 13 months after operation, 1 dog showed a small diverticulum but none showed dilatation or stricture. It is concluded that circular myotomy holds real promise as a means of extending the clinical application of esophageal resection with end-to-end anastomosis.

  17. Financing the End-to-end Supply Chain: A Reference Guide to Supply Chain Finance

    OpenAIRE

    Templar, Simon; Hofmann, Erik; Findlay, Charles

    2016-01-01

    Financing the End to End Supply Chain provides readers with a real insight into the increasingly important area of supply chain finance. It demonstrates the importance of the strategic relationship between the physical supply of goods and services and the associated financial flows. The book provides a clear introduction, demonstrating the importance of the strategic relationship between supply chain and financial communities within an organization. It contains vital information on how supply...

  18. Testing Application (End-to-End) Performance of Networks With EFT Traffic

    Directory of Open Access Journals (Sweden)

    Vlatko Lipovac

    2009-01-01

    Full Text Available This paper studies how end-to-end application performance (of Electronic Financial Transaction traffic, in particular) depends on the actual protocol stacks, operating systems and network transmission rates. In this respect, simulation tests of the performance of TCP and UDP protocols running on various operating systems, ranging from Windows and Sun Solaris to Linux, have been implemented, and the differences in performance addressed, focusing on throughput and response time.

  19. Experimental evaluation of end-to-end delay in switched Ethernet application in the automotive domain

    OpenAIRE

    Beretis , Kostas; Symeonidis , Ieroklis

    2013-01-01

    This article presents an approach for deriving an upper bound for the end-to-end delay in a double star switched Ethernet network. Four traffic classes, following a strict priority queuing policy, were considered. The theoretical analysis was based on network calculus. An experimental setup, which accurately reflects an automotive communication network, was implemented in order to evaluate the theoretical model. The results obtained by the experiments provided valuable feed...

  20. Urban Biomining Meets Printable Electronics: End-To-End at Destination Biological Recycling and Reprinting

    Science.gov (United States)

    Rothschild, Lynn J. (Principal Investigator); Koehne, Jessica; Gandhiraman, Ram; Navarrete, Jesica; Spangle, Dylan

    2017-01-01

    Space missions rely utterly on metallic components, from the spacecraft to electronics. Yet, metals add mass, and electronics have the additional problem of a limited lifespan. Thus, current mission architectures must compensate for replacement. In space, spent electronics are discarded; on Earth, there is some recycling but current processes are toxic and environmentally hazardous. Imagine instead an end-to-end recycling of spent electronics at low mass, low cost, room temperature, and in a non-toxic manner. Here, we propose a solution that will not only enhance mission success by decreasing upmass and providing a fresh supply of electronics, but in addition has immediate applications to a serious environmental issue on the Earth. Spent electronics will be used as feedstock to make fresh electronic components, a process we will accomplish with so-called 'urban biomining' using synthetically enhanced microbes to bind metals with elemental specificity. To create new electronics, the microbes will be used as 'bioink' to print a new IC chip, using plasma jet electronics printing. The plasma jet electronics printing technology will have the potential to use martian atmospheric gas to print and to tailor the electronic and chemical properties of the materials. Our preliminary results have suggested that this process also serves as a purification step to enhance the proportion of metals in the 'bioink'. The presence of an electric field and plasma can ensure printing in a microgravity environment while also providing material morphology and electronic structure tunability, and thus optimization. Here we propose to increase the TRL level of the concept by engineering microbes to dissolve the siliceous matrix in the IC, extract copper from a mixture of metals, and use the microbes as feedstock to print interconnects using Mars gas simulant. To assess the ability of this concept to influence mission architecture, we will do an analysis of the infrastructure required to execute

  1. CHEETAH: circuit-switched high-speed end-to-end transport architecture

    Science.gov (United States)

    Veeraraghavan, Malathi; Zheng, Xuan; Lee, Hyuk; Gardner, M.; Feng, Wuchun

    2003-10-01

    Leveraging the dominance of Ethernet in LANs and SONET/SDH in MANs and WANs, we propose a service called CHEETAH (Circuit-switched High-speed End-to-End Transport ArcHitecture). The service concept is to provide end hosts with high-speed, end-to-end circuit connectivity on a call-by-call shared basis, where a "circuit" consists of Ethernet segments at the ends that are mapped into Ethernet-over-SONET long-distance circuits. This paper focuses on the file-transfer application for such circuits. For this application, the CHEETAH service is proposed as an add-on to the primary Internet access service already in place for enterprise hosts. This allows an end host that is sending a file to first attempt setting up an end-to-end Ethernet/EoS circuit, and if rejected, fall back to the TCP/IP path. If the circuit setup is successful, the end host will enjoy a much shorter file-transfer delay than on the TCP/IP path. To determine the conditions under which an end host with access to the CHEETAH service should attempt circuit setup, we analyze mean file-transfer delays as a function of call blocking probability in the circuit-switched network, probability of packet loss in the IP network, round-trip times, link rates, and so on.
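
    The circuit-versus-TCP decision can be illustrated with a toy expected-delay comparison, where a blocked circuit attempt falls back to the TCP/IP path; the delay expressions and numbers below are deliberately simplistic placeholders, not the paper's analysis.

```python
# Expected file-transfer delay: attempt a circuit, fall back to TCP/IP on blocking.
def mean_delay_cheetah(file_mb, circuit_mbps, tcp_mbps, p_block, setup_s):
    d_circuit = setup_s + file_mb * 8 / circuit_mbps          # circuit accepted
    d_fallback = setup_s + file_mb * 8 / tcp_mbps             # wasted setup, then TCP path
    return (1 - p_block) * d_circuit + p_block * d_fallback

def mean_delay_tcp(file_mb, tcp_mbps):
    return file_mb * 8 / tcp_mbps

f = 500  # file size in MB (assumed)
print(f"CHEETAH-style: {mean_delay_cheetah(f, circuit_mbps=1000, tcp_mbps=100, p_block=0.1, setup_s=0.5):.1f} s")
print(f"TCP/IP only  : {mean_delay_tcp(f, tcp_mbps=100):.1f} s")
```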

  2. QoS Modeling for End-to-End Performance Evaluation over Networks with Wireless Access

    Directory of Open Access Journals (Sweden)

    Gómez Gerardo

    2010-01-01

    Full Text Available This paper presents an end-to-end Quality of Service (QoS) model for assessing the performance of data services over networks with wireless access. The proposed model deals with performance degradation across protocol layers using a bottom-up strategy, starting with the physical layer and moving on up to the application layer. This approach makes it possible to analytically assess performance at different layers, thereby facilitating a possible end-to-end optimization process. As a representative case, a scenario where a set of mobile terminals is connected to a streaming server through an IP access node has been studied. UDP, TCP, and the new TCP-Friendly Rate Control (TFRC) protocols were analyzed at the transport layer. The radio interface consisted of a variable-rate multiuser and multichannel subsystem, including retransmissions and adaptive modulation and coding. The proposed analytical QoS model was validated on a real-time emulator of an end-to-end network with wireless access and proved to be very useful for the purposes of service performance estimation and optimization.

  3. Analysis of the relationship between end-to-end distance and activity of single-chain antibody against colorectal carcinoma.

    Science.gov (United States)

    Zhang, Jianhua; Liu, Shanhong; Shang, Zhigang; Shi, Li; Yun, Jun

    2012-08-22

    We investigated the relationship between the end-to-end distance of VH and VL with different peptide linkers and the activity of single-chain antibodies by computer-aided simulation. First, we developed (G4S)n (where n = 1-9) as the linker to connect VH and VL, and estimated the 3D structure of the single-chain Fv antibody (scFv) by homology modeling. After the molecular models were evaluated and optimized, the coordinate system of every protein was built and unified into one coordinate system, and end-to-end distances were calculated using 3D space coordinates. After expression and purification of scFv-n with (G4S)n for n = 1, 3, 5, 7 or 9, the immunoreactivity of purified ND-1 scFv-n was determined by ELISA. A multi-factorial relationship model was employed to analyze the structural factors affecting scFv: r_n = (AB_n - AB_O)^2 + (CD_n - CD_O)^2 + (BC_n - BC_st)^2. The relationship between immunoreactivity and r-values revealed that the fusion protein structure approached the desired state when the r-value = 3. The immunoreactivity declined as the r-value increased, but when the r-value exceeded a certain threshold, it stabilized. We used a linear relationship to analyze the structural factors affecting scFv immunoreactivity.
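
    Assuming AB, CD and BC denote inter-point distances measured in the unified 3D coordinate system, with subscripts O and st marking reference values, an r-value of this form can be computed as in the sketch below; the coordinates and reference values are made up for illustration only.

```python
# Illustrative r-value computation from 3D coordinates (all numbers hypothetical).
import numpy as np

def dist(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def r_value(ab_n, cd_n, bc_n, ab_o, cd_o, bc_st):
    return (ab_n - ab_o) ** 2 + (cd_n - cd_o) ** 2 + (bc_n - bc_st) ** 2

# distances for a given linker length n (hypothetical coordinates, in angstroms)
ab_n = dist((0.0, 0.0, 0.0), (24.1, 3.2, -1.0))
cd_n = dist((5.5, 8.0, 2.0), (30.0, 9.1, 1.5))
bc_n = dist((24.1, 3.2, -1.0), (5.5, 8.0, 2.0))

print(f"r = {r_value(ab_n, cd_n, bc_n, ab_o=22.0, cd_o=25.0, bc_st=18.0):.2f}")
```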

  4. Weighted-DESYNC and Its Application to End-to-End Throughput Fairness in Wireless Multihop Network

    Directory of Open Access Journals (Sweden)

    Ui-Seong Yu

    2017-01-01

    Full Text Available The end-to-end throughput of a routing path in a wireless multihop network is restricted by a bottleneck node that has the smallest bandwidth among the nodes on the routing path. In this study, we propose a method for resolving the bottleneck-node problem in multihop networks, which is based on the multihop DESYNC (MH-DESYNC) algorithm, a bioinspired resource allocation method developed for use in multihop environments that enables fair resource allocation among nearby (up to two hops) neighbors. Based on MH-DESYNC, we newly propose weighted-DESYNC (W-DESYNC) as a tool to artificially control the amount of resource allocated to a specific user and thus to achieve throughput fairness over a routing path. The proposed W-DESYNC employs the weight factor of a link to determine the amount of bandwidth allocated to a node. By letting the weight factor be the link quality of a routing path and making it the same across a routing path via the Cucker-Smale flocking model, we can obtain throughput fairness over a routing path. The simulation results show that the proposed algorithm achieves throughput fairness over a routing path and can increase total end-to-end throughput in wireless multihop networks.
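
    The weight-driven allocation can be sketched as slot shares proportional to per-link weights, with a simple averaging step standing in for the Cucker-Smale consensus that equalises the weights along the path; the update rule and numbers below are illustrative assumptions, not the W-DESYNC specification.

```python
# Weights converge toward a common value along the path, equalising slot shares.
def allocate_shares(weights):
    total = sum(weights)
    return [w / total for w in weights]

def consensus_step(weights, coupling=0.3):
    mean_w = sum(weights) / len(weights)
    return [w + coupling * (mean_w - w) for w in weights]

w = [0.9, 0.4, 0.7]                 # initial link-quality weights on a 3-hop path
for _ in range(10):
    w = consensus_step(w)
print([round(x, 3) for x in w], allocate_shares(w))
```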

  5. Outcome of end-to-end urethroplasty in post-traumatic stricture of posterior urethra.

    Science.gov (United States)

    Hussain, Akbar; Pansota, Mudassar Saeed; Rasool, Mumtaz; Tabassum, Shafqat Ali; Ahmad, Iftikhar; Saleem, Muhammad Shahzad

    2013-04-01

    To determine the outcome of delayed end-to-end anastomotic urethroplasty in blind post-traumatic stricture of posterior urethra at our setup. Case series. Department of Urology and Renal Transplantation, Quaid-e-Azam Medical College/Bahawal Victoria Hospital, Bahawalpur, from January 2009 to June 2011. Adult patients with completely obliterated post-traumatic stricture of posterior urethra ≤ 2 cm were included in the study. Patients with post-prostatectomy (TUR-P, TVP) stricture, stricture more than 2 cm in size or patients of stricture with neurogenic bladder and patients with any perineal disease were excluded from the study. Retrograde urethrogram and voiding cysto-urethrogram was done in every patient to assess stricture length and location. Stricture excision and delayed end-to-end anastomosis of urethra with spatulation was performed in every patient. Minimum followup period was 6 months and maximum 18 months. There were 26 cases with road traffic accident (indirect) and 14 had history of fall/direct trauma to perineum or urethra. Majority of the patients (57.5%) were between 16 to 30 years of age. Twelve (30.0%) patients developed complications postoperatively. Early complications of wound infection occurred in 01 (2.5%) patient. Late complications were seen in 11 (27.5%) patients i.e. stricture recurrence in 7 (17.5%), erectile dysfunction in 2 (5.0%), urethrocutaneous fistula and urinary incontinence in one patient (2.5%) each. Success rate was 70.0% initially and 87.5% overall. Delayed end-to-end anastomotic urethroplasty is an effective procedure for traumatic posterior urethral strictures with success rate of about 87.5%.

  6. Reversible end-to-end assembly of gold nanorods using a disulfide-modified polypeptide

    International Nuclear Information System (INIS)

    Walker, David A; Gupta, Vinay K

    2008-01-01

    Directing the self-assembly of colloidal particles into nanostructures is of great interest in nanotechnology. Here, reversible end-to-end assembly of gold nanorods (GNR) is induced by pH-dependent changes in the secondary conformation of a disulfide-modified poly(L-glutamic acid) (SSPLGA). The disulfide anchoring group drives chemisorption of the polyacid onto the end of the gold nanorods in an ethanolic solution. A layer of poly(vinyl pyrrolidone) is adsorbed on the positively charged, surfactant-stabilized GNR to screen the surfactant bilayer charge and provide stability for dispersion of the GNR in ethanol. For comparison, irreversible end-to-end assembly using a bidentate ligand, namely 1,6-hexanedithiol, is also performed. Characterization of the modified GNR and its end-to-end linking behavior using SSPLGA and hexanedithiol is performed using dynamic light scattering (DLS), UV-vis absorption spectroscopy and transmission electron microscopy (TEM). Experimental results show that, in a colloidal solution of GNR-SSPLGA at a pH∼3.5, where the PLGA is in an α-helical conformation, the modified GNR self-assemble into one-dimensional nanostructures. The linking behavior can be reversed by increasing the pH (>8.5) to drive the conformation of the polypeptide to a random coil and this reversal with pH occurs rapidly within minutes. Cycling the pH multiple times between low and high pH values can be used to drive the formation of the nanostructures of the GNR and disperse them in solution.

  7. Outcome of end-to-end urethroplasty in post-traumatic stricture of posterior urethra

    International Nuclear Information System (INIS)

    Hussain, A.; Pansota, M. S.; Rasool, M.; Tabassum, S. A.; Ahmad, I.; Saleem, M. S.

    2013-01-01

    Objective: To determine the outcome of delayed end-to-end anastomotic urethroplasty in blind post-traumatic stricture of posterior urethra at our setup. Study Design: Case series. Place and Duration of Study: Department of Urology and Renal Transplantation, Quaid-e-Azam Medical College/Bahawal Victoria Hospital, Bahawalpur, from January 2009 to June 2011. Methodology: Adult patients with completely obliterated post-traumatic stricture of posterior urethra ≤ 2 cm were included in the study. Patients with post-prostatectomy (TUR-P, TVP) stricture, stricture more than 2 cm in size or patients of stricture with neurogenic bladder and patients with any perineal disease were excluded from the study. Retrograde urethrogram and voiding cysto-urethrogram was done in every patient to assess stricture length and location. Stricture excision and delayed end-to-end anastomosis of urethra with spatulation was performed in every patient. Minimum followup period was 6 months and maximum 18 months. Results: There were 26 cases with road traffic accident (indirect) and 14 had history of fall/direct trauma to perineum or urethra. Majority of the patients (57.5%) were between 16 to 30 years of age. Twelve (30.0%) patients developed complications postoperatively. Early complications of wound infection occurred in 01 (2.5%) patient. Late complications were seen in 11 (27.5%) patients i.e. stricture recurrence in 7 (17.5%), erectile dysfunction in 2 (5.0%), urethrocutaneous fistula and urinary incontinence in one patient (2.5%) each. Success rate was 70.0% initially and 87.5% overall. Conclusion: Delayed end-to-end anastomotic urethroplasty is an effective procedure for traumatic posterior urethral strictures with success rate of about 87.5%. (author)

  8. Outcome of end-to-end urethroplasty in post-traumatic stricture of posterior urethra

    Energy Technology Data Exchange (ETDEWEB)

    Hussain, A.; Pansota, M. S.; Rasool, M.; Tabassum, S. A.; Ahmad, I.; Saleem, M. S. [Bahawal Victoria Hospital, Bahawalpur (Pakistan). Dept. of Urology]

    2013-04-15

    Objective: To determine the outcome of delayed end-to-end anastomotic urethroplasty in blind post-traumatic stricture of posterior urethra at our setup. Study Design: Case series. Place and Duration of Study: Department of Urology and Renal Transplantation, Quaid-e-Azam Medical College/Bahawal Victoria Hospital, Bahawalpur, from January 2009 to June 2011. Methodology: Adult patients with completely obliterated post-traumatic stricture of posterior urethra ≤ 2 cm were included in the study. Patients with post-prostatectomy (TUR-P, TVP) stricture, stricture more than 2 cm in size or patients of stricture with neurogenic bladder and patients with any perineal disease were excluded from the study. Retrograde urethrogram and voiding cysto-urethrogram was done in every patient to assess stricture length and location. Stricture excision and delayed end-to-end anastomosis of urethra with spatulation was performed in every patient. Minimum followup period was 6 months and maximum 18 months. Results: There were 26 cases with road traffic accident (indirect) and 14 had history of fall/direct trauma to perineum or urethra. Majority of the patients (57.5%) were between 16 to 30 years of age. Twelve (30.0%) patients developed complications postoperatively. Early complications of wound infection occurred in 01 (2.5%) patient. Late complications were seen in 11 (27.5%) patients i.e. stricture recurrence in 7 (17.5%), erectile dysfunction in 2 (5.0%), urethrocutaneous fistula and urinary incontinence in one patient (2.5%) each. Success rate was 70.0% initially and 87.5% overall. Conclusion: Delayed end-to-end anastomotic urethroplasty is an effective procedure for traumatic posterior urethral strictures with success rate of about 87.5%. (author)

  9. Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure

    International Nuclear Information System (INIS)

    Kim, Youngjae; Vallee, Geoffroy R.; Lee, Sangkeun; Shipman, Galen M.

    2016-01-01

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink, for instance, the data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy, and operating system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems using buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth, and data transfer rates across the high speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS), when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize the I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve the performance by up to 14 percent relative to LADS without meta-scheduling.

  10. WiMAX security and quality of service an end-to-end perspective

    CERN Document Server

    Tang, Seok-Yee; Sharif, Hamid

    2010-01-01

    WiMAX is the first standard technology to deliver true broadband mobility at speeds that enable powerful multimedia applications such as Voice over Internet Protocol (VoIP), online gaming, mobile TV, and personalized infotainment. WiMAX Security and Quality of Service, focuses on the interdisciplinary subject of advanced Security and Quality of Service (QoS) in WiMAX wireless telecommunication systems including its models, standards, implementations, and applications. Split into 4 parts, Part A of the book is an end-to-end overview of the WiMAX architecture, protocol, and system requirements.

  11. An overview of recent end-to-end wireless medical video telemedicine systems using 3G.

    Science.gov (United States)

    Panayides, A; Pattichis, M S; Pattichis, C S; Schizas, C N; Spanias, A; Kyriacou, E

    2010-01-01

    Advances in video compression, network technologies, and computer technologies have contributed to the rapid growth of mobile health (m-health) systems and services. Wide deployment of such systems and services is expected in the near future, and it is foreseen that they will soon be incorporated into daily clinical practice. This study focuses on describing the basic components of an end-to-end wireless medical video telemedicine system, providing a brief overview of the recent advances in the field, while it also highlights future trends in the design of telemedicine systems that are diagnostically driven.

  12. Wiretapping End-to-End Encrypted VoIP Calls: Real-World Attacks on ZRTP

    Directory of Open Access Journals (Sweden)

    Schürmann Dominik

    2017-07-01

    Full Text Available Voice calls are still one of the most common use cases for smartphones. They often carry sensitive personal information as well as confidential business information. End-to-end security is required to protect against wiretapping of voice calls. For such real-time communication, the ZRTP key-agreement protocol has been proposed. By verbally comparing a small number of on-screen characters or words, called Short Authentication Strings, the participants can be sure that no one is wiretapping the call. Since 2011, ZRTP has been an IETF standard implemented in several VoIP clients.
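
    The Short Authentication String idea can be illustrated with a few lines of Python. This is a simplified sketch of deriving a short, human-comparable string from an already-agreed secret; it is not the actual ZRTP key derivation or SAS rendering defined in RFC 6189.

```python
import hashlib

def short_auth_string(shared_secret: bytes) -> str:
    """Illustrative only: hash the negotiated secret and render a short,
    human-comparable string. Real ZRTP derives the SAS from its own key
    material with a dedicated KDF and rendering scheme (RFC 6189)."""
    digest = hashlib.sha256(b"SAS|" + shared_secret).digest()
    # Take the first 20 bits and show them as 4 base32-style characters,
    # mirroring the spirit of ZRTP's B32 rendering.
    first20 = int.from_bytes(digest[:3], "big") >> 4
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
    return "".join(alphabet[(first20 >> shift) & 0x1F] for shift in (15, 10, 5, 0))

# Both endpoints compute this from the same agreed secret and the users read
# the 4 characters aloud to each other; a mismatch indicates a man-in-the-middle.
print(short_auth_string(b"example-DH-shared-secret"))
```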

  13. End to end adaptive congestion control in TCP/IP networks

    CERN Document Server

    Houmkozlis, Christos N

    2012-01-01

    This book provides an adaptive control theory perspective on designing congestion controls for packet-switching networks. Relevant to a wide range of disciplines and industries, including the music industry, computers, image trading, and virtual groups, the text extensively discusses source-oriented, or end-to-end, congestion control algorithms. The book empowers readers with a clear understanding of the characteristics of packet-switching networks and their effects on system stability and performance. It provides schemes capable of controlling congestion and fairness and presents real-world app
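
    For orientation, the classic AIMD window rule that end-to-end TCP congestion control builds on can be sketched in a few lines. This is a generic textbook-style sketch, not the adaptive control laws developed in the book; the parameter values are illustrative.

```python
def aimd_update(cwnd: float, ssthresh: float, loss: bool,
                alpha: float = 1.0, beta: float = 0.5):
    """One round-trip of a TCP-style AIMD rule (generic sketch).

    cwnd     -- congestion window in segments
    ssthresh -- slow-start threshold
    loss     -- whether a loss event was detected this RTT
    """
    if loss:                      # multiplicative decrease on loss
        ssthresh = max(cwnd * beta, 2.0)
        cwnd = ssthresh
    elif cwnd < ssthresh:         # slow start: exponential growth
        cwnd *= 2.0
    else:                         # congestion avoidance: additive increase
        cwnd += alpha
    return cwnd, ssthresh

cwnd, ssthresh = 1.0, 64.0
for rtt, loss in enumerate([False] * 8 + [True] + [False] * 4):
    cwnd, ssthresh = aimd_update(cwnd, ssthresh, loss)
    print(f"RTT {rtt:2d}: cwnd={cwnd:6.1f} ssthresh={ssthresh:6.1f}")
```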

  14. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    Science.gov (United States)

    Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark,Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.

  15. On end-to-end performance of MIMO multiuser in cognitive radio networks

    KAUST Repository

    Yang, Yuli

    2011-12-01

    In this paper, a design for multiple-input-multiple-output (MIMO) multiuser transmission in a cognitive radio network is developed and its end-to-end performance is investigated under spectrum-sharing constraints. Firstly, the overall average packet error rate is analyzed by considering the channel state information feedback delay and multiuser scheduling. Then, we provide corresponding numerical results to evaluate the performance in several separate scenarios, which provides a convenient tool for the design of cognitive radio networks with multiple secondary MIMO users. © 2011 IEEE.

  16. On end-to-end performance of MIMO multiuser in cognitive radio networks

    KAUST Repository

    Yang, Yuli; Aissa, Sonia

    2011-01-01

    In this paper, a design for multiple-input-multiple-output (MIMO) multiuser transmission in a cognitive radio network is developed and its end-to-end performance is investigated under spectrum-sharing constraints. Firstly, the overall average packet error rate is analyzed by considering the channel state information feedback delay and multiuser scheduling. Then, we provide corresponding numerical results to evaluate the performance in several separate scenarios, which provides a convenient tool for the design of cognitive radio networks with multiple secondary MIMO users. © 2011 IEEE.

  17. Increasing operations profitability using an end-to-end, wireless internet, gas monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    McDougall, M. [Northrock Resources Ltd., AB (Canada); Benterud, K. [zed.i solutions, inc., Calgary, AB (Canada)

    2004-10-01

    Implementation by Northrock Resources Ltd., a wholly-owned subsidiary of Unocal Corporation, of a fully integrated end-to-end gas measurement and production analysis system is discussed. The system, dubbed Smart-Alek(TM), uses public wireless communications and a web-browser-only delivery system to provide seamless well visibility to a desktop computer. Smart-Alek(TM) is an example of a new type of end-to-end electronic gas flow measurement system, known as FINE(TM), an acronym for Field Intelligence Network and End-User Interface. The system delivers easy-to-use, complete, reliable and cost-effective production information, far more effectively than is possible with conventional SCADA technology. By installing the system, Northrock was able to increase gas volumes with more accurate electronic flow measurement in place of mechanical charts, with very low technical maintenance and at a reduced operating cost. It is emphasized that deploying the technology alone produces only partial benefits; to realize the full benefits it is also essential to change grass-roots operating practices, aiming at timely decision-making at the field level. 5 refs., 5 figs.

  18. An End-to-End Model of Plant Pheromone Channel for Long Range Molecular Communication.

    Science.gov (United States)

    Unluturk, Bige D; Akyildiz, Ian F

    2017-01-01

    A new track in molecular communication is the use of pheromones, which can scale up the range of diffusion-based communication from micrometers to meters and enable new applications requiring long range. Pheromone communication is the emission of molecules into the air that trigger behavioral or physiological responses in receiving organisms. The objective of this paper is to introduce a new end-to-end model which incorporates pheromone behavior with communication theory for plants. The proposed model includes both the transmission and reception processes as well as the propagation channel. The transmission process is the emission of pheromones from the leaves of plants. The dispersion of pheromones by the flow of wind constitutes the propagation process. The reception process is the sensing of pheromones by the pheromone receptors of plants. The major difference between pheromone communication and other molecular communication techniques is that the dispersion channel acts under the laws of turbulent diffusion. In this paper, the pheromone channel is modeled as a Gaussian puff, i.e., a cloud of pheromone released instantaneously from the source whose dispersion follows a Gaussian distribution. Numerical results on the performance of the overall end-to-end pheromone channel in terms of normalized gain and delay are provided.
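
    The Gaussian puff channel mentioned above has a standard closed form, which the sketch below evaluates for an instantaneous release. The wind speed, dispersion coefficients and release height are chosen purely for illustration and are not taken from the paper; ground-reflection terms are also omitted for brevity.

```python
import math

def gaussian_puff(x, y, z, t, Q=1.0, u=1.0, sx=0.5, sy=0.5, sz=0.3, h=1.0):
    """Concentration of an instantaneous puff of Q units released at the
    origin (height h) at t = 0, advected downwind at speed u. Standard
    Gaussian puff form; parameter values are illustrative only."""
    if t <= 0:
        return 0.0
    norm = Q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    return (norm
            * math.exp(-((x - u * t) ** 2) / (2 * sx ** 2))   # along-wind
            * math.exp(-(y ** 2) / (2 * sy ** 2))             # cross-wind
            * math.exp(-((z - h) ** 2) / (2 * sz ** 2)))      # vertical

# Concentration 2 m downwind, on the release axis, 2 s after emission.
print(gaussian_puff(x=2.0, y=0.0, z=1.0, t=2.0))
```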

  19. End-to-End Airplane Detection Using Transfer Learning in Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Zhong Chen

    2018-01-01

    Full Text Available Airplane detection in remote sensing images remains a challenging problem due to the complexity of backgrounds. In recent years, with the development of deep learning, object detection has also achieved great breakthroughs. For object detection tasks in natural images, such as the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) VOC (Visual Object Classes) Challenge, the major trend of current development is to use a large amount of labeled classification data to pre-train a deep neural network as a base network, and then use a small amount of annotated detection data to fine-tune the network for detection. In this paper, we use object detection technology based on deep learning for airplane detection in remote sensing images. In addition to exploiting some characteristics of remote sensing images, some new data augmentation techniques are proposed. We also use transfer learning and adopt a single deep convolutional neural network and limited training samples to implement end-to-end trainable airplane detection. Classification and positioning are no longer divided into multistage tasks; end-to-end detection attempts to combine them for optimization, which ensures an optimal solution for the final stage. In our experiment, we use remote sensing images of airports collected from Google Earth. The experimental results show that the proposed algorithm is highly accurate and meaningful for remote sensing object detection.
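
    The transfer-learning step described above (pre-train on classification data, fine-tune with few labeled samples) follows a common pattern that can be sketched as below. This is a generic PyTorch fine-tuning sketch, not the detection network used in the paper; the two-class head and the dummy batch are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic transfer-learning sketch (not the paper's detector): start from an
# ImageNet-pretrained ResNet-50, freeze the backbone, and fine-tune a new
# head on a small labeled set (here, 2 classes: airplane / background).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                      # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)    # new task-specific head

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for annotated remote sensing image chips.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.4f}")
```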

  20. End to end distribution functions for a class of polymer models

    International Nuclear Information System (INIS)

    Khandekar, D.C.; Wiegel, F.W.

    1988-01-01

    The two-point end-to-end distribution functions for a class of polymer models have been obtained within the first cumulant approximation. The trial distribution function for this purpose is chosen to correspond to a general non-local quadratic functional. An exact expression for the trial distribution function is obtained. It is pointed out that these trial distribution functions themselves can be used to study certain aspects of the configurational behaviour of polymers. These distribution functions are also used to obtain the averaged mean square size ⟨R²⟩ of a polymer characterized by the non-local quadratic potential energy functional. Finally, we derive an analytic expression for ⟨R²⟩ of a polyelectrolyte model and show that for a long polymer a weak electrostatic interaction does not change the behaviour of ⟨R²⟩ from that of a free polymer. (author). 16 refs
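
    For context, the free (Gaussian) chain provides the baseline that the polyelectrolyte result is compared against; the relations below are standard polymer-physics background, not results from the abstract.

```latex
% Mean square end-to-end distance and end-to-end distribution of a free
% Gaussian chain of N segments of Kuhn length b -- the baseline the
% abstract's polyelectrolyte result reduces to for weak electrostatic
% interaction (standard background, not the paper's derivation):
\langle R^2 \rangle_{\text{free}} = N b^2 ,
\qquad
P(\mathbf{R}) = \left(\frac{3}{2\pi N b^2}\right)^{3/2}
                \exp\!\left(-\frac{3 R^2}{2 N b^2}\right).
```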

  1. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    Science.gov (United States)

    Feinberg, Lee; Rioux, Norman; Bolcar, Matthew; Liu, Alice; Guyon, Oliver; Stark, Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles to enabling it.

  2. An end-to-end secure patient information access card system.

    Science.gov (United States)

    Alkhateeb, A; Singer, H; Yakami, M; Takahashi, T

    2000-03-01

    The rapid development of the Internet and the increasing interest in Internet-based solutions have promoted the idea of creating Internet-based health information applications. This will force a change in the role of IC cards in healthcare card systems from a data carrier to an access-key medium. At the Medical Informatics Department of Kyoto University Hospital we are developing a smart card patient information project where patient databases are accessed via the Internet. Strong end-to-end data encryption is performed via Secure Sockets Layer, transparently protecting transmitted patient information. The smart card plays the crucial role of access key to the database: user authentication is performed internally without ever revealing the actual key. For easy acceptance by healthcare professionals, the user interface is integrated as a plug-in for two familiar Web browsers, Netscape Navigator and MS Internet Explorer.

  3. End-to-end operations at the National Radio Astronomy Observatory

    Science.gov (United States)

    Radziwill, Nicole M.

    2008-07-01

    In 2006 NRAO launched a formal organization, the Office of End to End Operations (OEO), to broaden access to its instruments (VLA/EVLA, VLBA, GBT and ALMA) in the most cost-effective ways possible. The VLA, VLBA and GBT are mature instruments, and the EVLA and ALMA are currently under construction, which presents unique challenges for integrating software across the Observatory. This article 1) provides a survey of the new developments over the past year, and those planned for the next year, 2) describes the business model used to deliver many of these services, and 3) discusses the management models being applied to ensure continuous innovation in operations, while preserving the flexibility and autonomy of telescope software development groups.

  4. End-to-end performance of cooperative relaying in spectrum-sharing systems with quality of service requirements

    KAUST Repository

    Asghari, Vahid Reza

    2011-07-01

    We propose adopting a cooperative relaying technique in spectrum-sharing cognitive radio (CR) systems to more effectively and efficiently utilize available transmission resources, such as power, rate, and bandwidth, while adhering to the quality of service (QoS) requirements of the licensed (primary) users of the shared spectrum band. In particular, we first consider that the cognitive (secondary) user's communication is assisted by an intermediate relay that implements the decode-and-forward (DF) technique onto the secondary user's relayed signal to help with communication between the corresponding source and the destination nodes. In this context, we obtain first-order statistics pertaining to the first- and second-hop transmission channels, and then, we investigate the end-to-end performance of the proposed spectrum-sharing cooperative relaying system under resource constraints defined to assure that the primary QoS is unaffected. Specifically, we investigate the overall average bit error rate (BER), ergodic capacity, and outage probability of the secondary's communication subject to appropriate constraints on the interference power at the primary receivers. We then consider a general scenario where a cluster of relays is available between the secondary source and destination nodes. In this case, making use of the partial relay selection method, we generalize our results for the single-relay scheme and obtain the end-to-end performance of the cooperative spectrum-sharing system with a cluster of L available relays. Finally, we examine our theoretical results through simulations and comparisons, illustrating the overall performance of the proposed spectrum-sharing cooperative system and quantify its advantages for different operating scenarios and conditions. © 2011 IEEE.
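
    As standard background for the metrics named above (not expressions taken from the paper), the end-to-end outage probability of a dual-hop decode-and-forward link at target rate R, with secondary transmit power capped by a peak interference constraint at the primary receiver, can be written as:

```latex
% Generic dual-hop DF outage definition under a spectrum-sharing constraint
% (textbook background, not the paper's derivation): the weaker hop limits
% the end-to-end rate, and each secondary transmit power P_i is capped so
% the interference at the primary receiver stays below Q_peak; \gamma_1,
% \gamma_2 are the per-hop SNRs and g_i the secondary-to-primary gains.
P_{\mathrm{out}}
  = \Pr\!\left[\tfrac{1}{2}\log_2\!\bigl(1+\min(\gamma_{1},\gamma_{2})\bigr) < R\right],
\qquad
P_i \le \frac{Q_{\mathrm{peak}}}{|g_i|^{2}}, \quad i \in \{1,2\}.
```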

  5. The role of sea ports in end-to-end maritime transport chain emissions

    International Nuclear Information System (INIS)

    Gibbs, David; Rigot-Muller, Patrick; Mangan, John; Lalwani, Chandra

    2014-01-01

    This paper's purpose is to investigate the role of sea ports in helping to mitigate the GHG emissions associated with the end-to-end maritime transport chain. The analysis is primarily focused on the UK, but is international in application. The paper is based on both the analysis of secondary data and information on actions taken by ports to reduce their emissions, with the latter data collected for the main UK ports via their published reports and/or via interviews. Only a small number of ports (representing 32% of UK port activity) actually measure and report their carbon emissions in the UK context. The emissions generated by ships calling at these ports are analysed using a method based on Department for Transport Maritime Statistics Data. In addition, a case example (Felixstowe) of emissions associated with HGV movements to and from ports is presented, and data on vessel emissions at berth are also considered. Our analyses indicate that emissions generated by ships during their voyages between ports are of a far greater magnitude than those generated by the port activities. Thus while reducing the ports' own emissions is worthwhile, the results suggest that ports might have more impact through focusing their efforts on reducing shipping emissions. - Highlights: • Investigates role of ports in mitigating GHG emissions in the end-to-end maritime transport chain. • Emissions generated both by ports and by ships calling at ports are analysed. • Shipping's emissions are far greater than those generated by port activities. • Ports may have more impact through focusing efforts on reducing shipping's emissions. • Options for ports to support and drive change in the maritime sector also considered

  6. The end-to-end testbed of the optical metrology system on-board LISA Pathfinder

    Energy Technology Data Exchange (ETDEWEB)

    Steier, F; Cervantes, F Guzman; Marin, A F GarcIa; Heinzel, G; Danzmann, K [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut) and Universitaet Hannover (Germany); Gerardi, D, E-mail: frank.steier@aei.mpg.d [EADS Astrium Satellites GmbH, Friedrichshafen (Germany)

    2009-05-07

    LISA Pathfinder is a technology demonstration mission for the Laser Interferometer Space Antenna (LISA). The main experiment on-board LISA Pathfinder is the so-called LISA Technology Package (LTP), which has the aim to measure the differential acceleration between two free-falling test masses with an accuracy of 3 x 10^-14 m s^-2 Hz^-1/2 between 1 mHz and 30 mHz. This measurement is performed interferometrically by the optical metrology system (OMS) on-board LISA Pathfinder. In this paper, we present the development of an experimental end-to-end testbed of the entire OMS. It includes the interferometer and its sub-units, the interferometer backend, which is a phasemeter, and the processing of the phasemeter output data. Furthermore, three-axis piezo-actuated mirrors are used instead of the free-falling test masses for the characterization of the dynamic behaviour of the system and some parts of the drag-free and attitude control system (DFACS) which controls the test masses and the satellite. The end-to-end testbed includes all parts of the LTP that can reasonably be tested on earth without free-falling test masses. At its present status it consists mainly of breadboard components. Some of those have already been replaced by engineering models of the LTP experiment. In the next steps, further engineering and flight models will also be inserted in this testbed and tested against well-characterized breadboard components. The presented testbed is an important reference for the unit tests and can also be used for validation of the on-board experiment during the mission.

  7. Kinetics of end-to-end collision in short single-stranded nucleic acids.

    Science.gov (United States)

    Wang, Xiaojuan; Nau, Werner M

    2004-01-28

    A novel fluorescence-based method, which entails contact quenching of the long-lived fluorescent state of 2,3-diazabicyclo[2.2.2]-oct-2-ene (DBO), was employed to measure the kinetics of end-to-end collision in short single-stranded oligodeoxyribonucleotides of the type 5'-DBO-(X)n-dG with X = dA, dC, dT, or dU and n = 2 or 4. The fluorophore was covalently attached to the 5' end and dG was introduced as an efficient intrinsic quencher at the 3' terminus. The end-to-end collision rates, which can be directly related to the efficiency of intramolecular fluorescence quenching, ranged from 0.1 x 10^6 to 9.0 x 10^6 s^-1. They were strongly dependent on the strand length, the base sequence, as well as the temperature. Oligonucleotides containing dA in the backbone displayed much slower collision rates and significantly higher positive activation energies than strands composed of pyrimidine bases, suggesting a higher intrinsic rigidity of oligoadenylate. Comparison of the measured collision rates in short single-stranded oligodeoxyribonucleotides with the previously reported kinetics of hairpin formation indicates that the intramolecular collision is significantly faster than the nucleation step of hairpin closing. This is consistent with the configurational diffusion model suggested by Ansari et al. (Ansari, A.; Kuznetsov, S. V.; Shen, Y. Proc. Natl. Acad. Sci. USA 2001, 98, 7771-7776), in which the formation of misfolded loops is thought to slow hairpin formation.
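
    The temperature dependence and activation energies mentioned above are conventionally interpreted through the standard Arrhenius relation; the form below is generic background, not a fit or data from the paper.

```latex
% Standard Arrhenius form used to extract an apparent activation energy E_a
% from the temperature dependence of a measured collision rate k_c
% (background relation only; A is the pre-exponential factor, R the gas
% constant, T the absolute temperature):
k_c(T) = A \, e^{-E_a / (R T)}
\quad\Longleftrightarrow\quad
\ln k_c = \ln A - \frac{E_a}{R}\,\frac{1}{T}.
```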

  8. Composable Mission Framework for Rapid End-to-End Mission Design and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation proposed here is the Composable Mission Framework (CMF) a model-based software framework that shall enable seamless continuity of mission design and...

  9. Simulation experiment on total ionization dose effects of linear CCD

    International Nuclear Information System (INIS)

    Tang Benqi; Zhang Yong; Xiao Zhigang; Wang Zujun; Huang Shaoyan

    2004-01-01

    We carried out ionizing radiation experiments on linear CCDs operated in unbiased, biased, and biased-and-driven modes, respectively, using a Co-60 γ source and our self-designed test system, and measured offline how the dark signal, saturation voltage and SNR of the TCD132D vary with total dose, obtaining some valuable results. On the basis of this work, we set forth a preliminary experimental approach to simulating the total dose radiation effects of charge coupled devices. (authors)

  10. Internet end-to-end performance monitoring for the High Energy Nuclear and Particle Physics community

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, W.

    2000-02-22

    Modern High Energy Nuclear and Particle Physics (HENP) experiments at Laboratories around the world present a significant challenge to wide area networks. Petabytes (10^15 bytes) or exabytes (10^18 bytes) of data will be generated during the lifetime of the experiment. Much of this data will be distributed via the Internet to the experiment's collaborators at Universities and Institutes throughout the world for analysis. In order to assess the feasibility of the computing goals of these and future experiments, the HENP networking community is actively monitoring performance across a large part of the Internet used by its collaborators. Since 1995, the pingER project has been collecting data on ping packet loss and round trip times. In January 2000, there are 28 monitoring sites in 15 countries gathering data on over 2,000 end-to-end pairs. HENP labs such as SLAC, Fermi Lab and CERN are using Advanced Network's Surveyor project and monitoring performance from one-way delay of UDP packets. More recently several HENP sites have become involved with NLANR's active measurement program (AMP). In addition SLAC and CERN are part of the RIPE test-traffic project and SLAC is home for a NIMI machine. This large end-to-end performance monitoring infrastructure allows the HENP networking community to chart long term trends and closely examine short term glitches across a wide range of networks and connections. The different methodologies provide opportunities to compare results based on different protocols and statistical samples. Understanding agreement and discrepancies between results provides particular insight into the nature of the network. This paper will highlight the practical side of monitoring by reviewing the special needs of High Energy Nuclear and Particle Physics experiments and provide an overview of the experience of measuring performance across a large number of interconnected networks throughout the world with various methodologies. In particular, results

  11. Internet end-to-end performance monitoring for the High Energy Nuclear and Particle Physics community

    International Nuclear Information System (INIS)

    Matthews, W.

    2000-01-01

    Modern High Energy Nuclear and Particle Physics (HENP) experiments at Laboratories around the world present a significant challenge to wide area networks. Petabytes (10^15 bytes) or exabytes (10^18 bytes) of data will be generated during the lifetime of the experiment. Much of this data will be distributed via the Internet to the experiment's collaborators at Universities and Institutes throughout the world for analysis. In order to assess the feasibility of the computing goals of these and future experiments, the HENP networking community is actively monitoring performance across a large part of the Internet used by its collaborators. Since 1995, the pingER project has been collecting data on ping packet loss and round trip times. In January 2000, there are 28 monitoring sites in 15 countries gathering data on over 2,000 end-to-end pairs. HENP labs such as SLAC, Fermi Lab and CERN are using Advanced Network's Surveyor project and monitoring performance from one-way delay of UDP packets. More recently several HENP sites have become involved with NLANR's active measurement program (AMP). In addition SLAC and CERN are part of the RIPE test-traffic project and SLAC is home for a NIMI machine. This large end-to-end performance monitoring infrastructure allows the HENP networking community to chart long term trends and closely examine short term glitches across a wide range of networks and connections. The different methodologies provide opportunities to compare results based on different protocols and statistical samples. Understanding agreement and discrepancies between results provides particular insight into the nature of the network. This paper will highlight the practical side of monitoring by reviewing the special needs of High Energy Nuclear and Particle Physics experiments and provide an overview of the experience of measuring performance across a large number of interconnected networks throughout the world with various methodologies. In particular, results from each project

  12. MO-B-BRB-04: 3D Dosimetry in End-To-End Dosimetry QA

    Energy Technology Data Exchange (ETDEWEB)

    Ibbott, G. [UT MD Anderson Cancer Center (United States)

    2016-06-15

    irradiated volume can help understand interplay effects during TomoTherapy or VMAT. Titania Juang: Special techniques in the clinic and research Understand the potential for 3D dosimetry in validating dose accumulation in deformable systems, and Observe the benefits of high resolution measurements for precision therapy in SRS and in MicroSBRT for small animal irradiators Geoffrey S. Ibbott: 3D Dosimetry in end-to-end dosimetry QA Understand the potential for 3D dosimetry for end-to-end radiation therapy process validation in the in-house and external credentialing setting. Canadian Institutes of Health Research; L. Schreiner, Modus QA, London, ON, Canada; T. Juang, NIH R01CA100835.

  13. End-to-End Multimodal Emotion Recognition Using Deep Neural Networks

    Science.gov (United States)

    Tzirakis, Panagiotis; Trigeorgis, George; Nicolaou, Mihalis A.; Schuller, Bjorn W.; Zafeiriou, Stefanos

    2017-12-01

    Automatic affect recognition is a challenging task due to the various modalities through which emotions can be expressed. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content of various styles of speaking, robust features need to be extracted. To this end, we utilize a Convolutional Neural Network (CNN) to extract features from the speech, while for the visual modality we use a 50-layer deep residual network (ResNet). In addition to the importance of feature extraction, the machine learning algorithm also needs to be insensitive to outliers while being able to model the context. To tackle this problem, Long Short-Term Memory (LSTM) networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations of each of the streams, we manage to significantly outperform traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.
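
    The pipeline just described (speech CNN, visual ResNet, LSTM fusion, end-to-end training) can be sketched in PyTorch as below. The layer sizes, sequence lengths and two-output head are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalEmotionNet(nn.Module):
    """Hedged sketch of the described pipeline: a 1-D CNN on raw speech, a
    ResNet-50 on face frames, and an LSTM over the fused per-step features.
    Dimensions are illustrative, not those of the published model."""

    def __init__(self, hidden=128, outputs=2):   # e.g. arousal / valence
        super().__init__()
        self.audio_cnn = nn.Sequential(          # raw waveform -> 64-d feature
            nn.Conv1d(1, 32, kernel_size=80, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 64))
        resnet = models.resnet50(weights=None)
        resnet.fc = nn.Linear(resnet.fc.in_features, 64)
        self.visual_cnn = resnet                 # face frame -> 64-d feature
        self.lstm = nn.LSTM(128, hidden, batch_first=True)
        self.head = nn.Linear(hidden, outputs)

    def forward(self, audio, frames):
        # audio: (B, T, samples); frames: (B, T, 3, 224, 224)
        B, T = audio.shape[:2]
        a = self.audio_cnn(audio.reshape(B * T, 1, -1))
        v = self.visual_cnn(frames.reshape(B * T, 3, 224, 224))
        fused = torch.cat([a, v], dim=1).reshape(B, T, -1)
        out, _ = self.lstm(fused)                # temporal context
        return self.head(out[:, -1])             # prediction at the last step

model = MultimodalEmotionNet()
pred = model(torch.randn(2, 4, 16000), torch.randn(2, 4, 3, 224, 224))
print(pred.shape)  # torch.Size([2, 2])
```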

  14. End-to-End Neural Optical Music Recognition of Monophonic Scores

    Directory of Open Access Journals (Sweden)

    Jorge Calvo-Zaragoza

    2018-04-01

    Full Text Available Optical Music Recognition is a field of research that investigates how to computationally decode music notation from images. Despite the efforts made so far, there are hardly any complete solutions to the problem. In this work, we study the use of neural networks that work in an end-to-end manner. This is achieved by using a neural model that combines the capabilities of convolutional neural networks, which work on the input image, and recurrent neural networks, which deal with the sequential nature of the problem. Thanks to the use of the so-called Connectionist Temporal Classification loss function, these models can be directly trained from input images accompanied by their corresponding transcripts into music symbol sequences. We also present the Printed Music Scores dataset, containing more than 80,000 monodic single-staff real scores in common western notation, that is used to train and evaluate the neural approach. In our experiments, it is demonstrated that this formulation can be carried out successfully. Additionally, we study several considerations about the codification of the output musical sequences, the convergence and scalability of the neural models, as well as the ability of this approach to locate symbols in the input score.
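
    The CTC-trained convolutional-recurrent idea can be illustrated with a minimal PyTorch step. The image size, vocabulary size and layer widths below are made-up assumptions for the sketch; this is not the published network.

```python
import torch
import torch.nn as nn

# Minimal CRNN + CTC sketch (illustrative sizes, not the published model):
# a conv stack turns the staff image into a feature sequence, a recurrent
# layer models its order, and CTC aligns it with the symbol transcript.
NUM_SYMBOLS = 100                       # hypothetical symbol vocabulary

conv = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
rnn = nn.LSTM(input_size=32 * 32, hidden_size=128,
              bidirectional=True, batch_first=True)
classifier = nn.Linear(256, NUM_SYMBOLS + 1)    # +1 for the CTC blank
ctc = nn.CTCLoss(blank=NUM_SYMBOLS)

images = torch.randn(4, 1, 128, 512)            # batch of staff images
feats = conv(images)                            # (4, 32, 32, 128)
feats = feats.permute(0, 3, 1, 2).flatten(2)    # (4, width=128, 32*32)
logits = classifier(rnn(feats)[0])              # (4, 128, NUM_SYMBOLS+1)
log_probs = logits.log_softmax(2).permute(1, 0, 2)   # (T, N, C) for CTC

targets = torch.randint(0, NUM_SYMBOLS, (4, 20))     # dummy transcripts
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 128, dtype=torch.long),
           target_lengths=torch.full((4,), 20, dtype=torch.long))
loss.backward()
print(f"CTC loss: {loss.item():.3f}")
```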

  15. End to End Digitisation and Analysis of Three-Dimensional Coral Models, from Communities to Corallites.

    Directory of Open Access Journals (Sweden)

    Luis Gutierrez-Heredia

    Full Text Available Coral reefs host nearly 25% of all marine species and provide food sources for half a billion people worldwide, yet only a very small percentage have been surveyed. Advances in technology and processing, along with affordable underwater cameras and Internet availability, make it possible to provide tools and software to survey entire coral reefs. Holistic ecological analyses of corals require not only the community view (10s to 100s of meters), but also single colony analysis as well as corallite identification. As corals are three-dimensional, classical approaches to determine percent cover and structural complexity across spatial scales are inefficient, time-consuming and limited to experts. Here we propose an end-to-end approach to estimate these parameters using low-cost equipment (GoPro, Canon) and freeware (123D Catch, Meshmixer and Netfabb), allowing every community to participate in surveys and monitoring of their coral ecosystem. We demonstrate our approach on 9 species of underwater colonies of varying size and morphology. 3D models of underwater colonies, fresh samples and bleached skeletons with high quality texture mapping and detailed topographic morphology were produced, and Surface Area and Volume measurements (parameters widely used for ecological and coral health studies) were calculated and analysed. Moreover, we integrated collected sample models with micro-photogrammetry models of individual corallites to aid identification and colony- and polyp-scale analysis.

  16. End to End Digitisation and Analysis of Three-Dimensional Coral Models, from Communities to Corallites.

    Science.gov (United States)

    Gutierrez-Heredia, Luis; Benzoni, Francesca; Murphy, Emma; Reynaud, Emmanuel G

    2016-01-01

    Coral reefs host nearly 25% of all marine species and provide food sources for half a billion people worldwide, yet only a very small percentage have been surveyed. Advances in technology and processing, along with affordable underwater cameras and Internet availability, make it possible to provide tools and software to survey entire coral reefs. Holistic ecological analyses of corals require not only the community view (10s to 100s of meters), but also single colony analysis as well as corallite identification. As corals are three-dimensional, classical approaches to determine percent cover and structural complexity across spatial scales are inefficient, time-consuming and limited to experts. Here we propose an end-to-end approach to estimate these parameters using low-cost equipment (GoPro, Canon) and freeware (123D Catch, Meshmixer and Netfabb), allowing every community to participate in surveys and monitoring of their coral ecosystem. We demonstrate our approach on 9 species of underwater colonies of varying size and morphology. 3D models of underwater colonies, fresh samples and bleached skeletons with high quality texture mapping and detailed topographic morphology were produced, and Surface Area and Volume measurements (parameters widely used for ecological and coral health studies) were calculated and analysed. Moreover, we integrated collected sample models with micro-photogrammetry models of individual corallites to aid identification and colony- and polyp-scale analysis.
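
    The surface-area and volume step lends itself to a short sketch. The file name below is hypothetical and trimesh is only one possible library for handling a mesh exported from tools like 123D Catch or Meshmixer; the workflow shown is an assumption, not the authors' exact procedure.

```python
import trimesh

# Hedged sketch of the measurement step: load a reconstructed colony mesh
# (file name hypothetical) and report the two ecological metrics used above.
mesh = trimesh.load("coral_colony.obj", force="mesh")

if mesh.is_watertight:
    volume = mesh.volume
else:
    # Volume is only meaningful for a closed surface; fall back to the
    # convex hull so the sketch still returns a number for leaky scans.
    print("mesh is not watertight; using its convex hull for volume")
    volume = mesh.convex_hull.volume

print(f"surface area: {mesh.area:.2f} (model units^2)")
print(f"volume:       {volume:.2f} (model units^3)")
print(f"rugosity-like ratio (area / hull area): "
      f"{mesh.area / mesh.convex_hull.area:.2f}")
```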

  17. Mechanics of spatulated end-to-end artery-to-vein anastomoses.

    Science.gov (United States)

    Morasch, M D; Dobrin, P B; Dong, Q S; Mrkvicka, R

    1998-01-01

    It previously has been shown that in straight end-to-end artery-to-vein anastomoses, maximum dimensions are obtained with an interrupted suture line. Nearly equivalent dimensions are obtained with a continuous compliant polybutester suture (Novafil), and the smallest dimensions are obtained with a continuous noncompliant polypropylene suture (Surgilene). The present study was undertaken to examine these suture techniques in a spatulated or beveled anastomosis in living dogs. Anastomoses were constructed using continuous 6-0 polypropylene (Surgilene), continuous 6-0 polybutester (Novafil), or interrupted 6-0 polypropylene or polybutester. Thirty minutes after construction, the artery, vein, and beveled anastomoses were excised, restored to in situ length and pressurized with the lumen filled with a dilute suspension of barium sulfate. High resolution radiographs were obtained at 25 mmHg pressure increments up to 200 mmHg. Dimensions and compliance were determined from the radiographic images. Results showed that, unlike straight artery-to-vein anastomoses, there were no differences in the dimensions or compliance of spatulated anastomoses with continuous Surgilene, continuous Novafil, or interrupted suture techniques. Therefore a continuous suture technique is acceptable when constructing spatulated artery-to-vein anastomoses in patients.

  18. End-to-end Information Flow Security Model for Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    D. Ju. Chaly

    2015-01-01

    Full Text Available Software-defined networks (SDN) are a novel paradigm of networking which has become an enabler technology for many modern applications such as network virtualization, policy-based access control and many others. Software can provide flexibility and fast-paced innovation in networking; however, it has a complex nature. Hence there is an increasing need for means of assuring its correctness and security. Abstract models for SDN can tackle these challenges. This paper addresses confidentiality and some integrity properties of SDNs. These are critical properties for multi-tenant SDN environments, since the network management software must ensure that no confidential data of one tenant are leaked to other tenants in spite of using the same physical infrastructure. We define a notion of end-to-end security in the context of software-defined networks and propose a semantic model where reasoning is possible about confidentiality, and we can check that confidential information flows do not interfere with non-confidential ones. We show that the model can be extended in order to reason about networks with secure and insecure links, which can arise, for example, in wireless environments. The article is published in the authors' wording.

  19. Circumferential resection and "Z"-shape plastic end-to-end anastomosis of canine trachea.

    Science.gov (United States)

    Zhao, H; Li, Z; Fang, J; Fang, C

    1999-03-01

    To prevent anastomotic stricture of the trachea. Forty young mongrel dogs, weighing 5-7 kg, were randomly divided into two groups, an experimental group and a control group, with 20 dogs in each group. Four tracheal rings were removed from each dog. In the experimental group, two "Z"-shape tracheoplastic anastomoses were performed on each dog, one on the anterior wall and the other on the membranous part of the trachea. In the control group, each dog received only a simple end-to-end anastomosis. Vicryl 3-0 absorbable suture and OB fibrin glue were used for both groups. All dogs were killed when their body weight doubled. The average sagittal stenotic ratios were 1.20 +/- 0.12 for the experimental group and 0.83 +/- 0.05 for the control group. The average cross-sectional area stenotic ratios were 0.90 +/- 0.12 and 0.69 +/- 0.09, and the T values were 8.71 and 4.57 for the two groups, indicating that the "Z"-shape tracheoplastic anastomosis is superior to simple end-to-end anastomosis in preventing anastomotic stricture of the canine trachea.

  20. Mucociliary clearance following tracheal resection and end-to-end anastomosis.

    Science.gov (United States)

    Toomes, H; Linder, A

    1989-10-01

    Mucociliary clearance is an important cleaning system of the bronchial tree. The complex transport system reacts sensitively to medicinal stimuli and inhaled substances. A disturbance causes secretion retention, which encourages the development of acute and chronic pulmonary diseases. It is not yet known in which way sectional resection of the central airway affects mucociliary clearance. A large number of the surgical failures are attributable to septic complications in the area of the anastomosis. In order to study the transport process over the anastomosis, ten dogs underwent a tracheal resection with end-to-end anastomosis, and the mucociliary activity was recorded using a bronchoscopic video-technical method. Recommencement of mucus transport was observed on the third postoperative day, and transport over the anastomosis from the sixth to tenth postoperative days. The mucociliary clearance had completely recovered by the twenty-first day in the majority of dogs. Histological examination of the anastomoses nine months postoperatively showed a flat substitute epithelium without cilia-bearing cells in all dogs. This contrasts with the quick restitution of the transport function. In the case of undamaged respiratory mucosa, good adaptation of the resection margins suffices for the mucus film to slide over the anastomosis.

  1. Human Assisted Robotic Vehicle Studies - A conceptual end-to-end mission architecture

    Science.gov (United States)

    Lehner, B. A. E.; Mazzotta, D. G.; Teeney, L.; Spina, F.; Filosa, A.; Pou, A. Canals; Schlechten, J.; Campbell, S.; Soriano, P. López

    2017-11-01

    With current space exploration roadmaps indicating the Moon as a proving ground on the way to human exploration of Mars, it is clear that human-robotic partnerships will play a key role for successful future human space missions. This paper details a conceptual end-to-end architecture for an exploration mission in cis-lunar space with a focus on human-robot interactions, called Human Assisted Robotic Vehicle Studies (HARVeSt). HARVeSt will build on knowledge of plant growth in space gained from experiments on-board the ISS and test the first growth of plants on the Moon. A planned deep space habitat will be utilised as the base of operations for human-robotic elements of the mission. The mission will serve as a technology demonstrator not only for autonomous tele-operations in cis-lunar space but also for key enabling technologies for future human surface missions. The successful approach of the ISS will be built on in this mission with international cooperation. Mission assets such as a modular rover will allow for an extendable mission and to scout and prepare the area for the start of an international Moon Village.

  2. Semantic Complex Event Processing over End-to-End Data Flows

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi [University of Southern California; Simmhan, Yogesh; Prasanna, Viktor K.

    2012-04-01

    Emerging Complex Event Processing (CEP) applications in cyber-physical systems like Smart Power Grids present novel challenges for end-to-end analysis over events flowing from heterogeneous information sources to persistent knowledge repositories. CEP for these applications must support two distinctive features - easy specification of patterns over diverse information streams, and integrated pattern detection over real-time and historical events. Existing work on CEP has been limited to relational query patterns, and engines that match events arriving after the query has been registered. We propose SCEPter, a semantic complex event processing framework which uniformly processes queries over continuous and archived events. SCEPter is built around an existing CEP engine with innovative support for semantic event pattern specification and allows their seamless detection over past, present and future events. Specifically, we describe a unified semantic query model that can operate over data flowing through event streams to event repositories. Compile-time and runtime semantic patterns are distinguished and addressed separately for efficiency. Query rewriting is examined and analyzed in the context of temporal boundaries that exist between event streams and their repository to avoid duplicate or missing results. The design and prototype implementation of SCEPter are analyzed using latency and throughput metrics for scenarios from the Smart Grid domain.

  3. Telomere dynamics, end-to-end fusions and telomerase activation during the human fibroblast immortalization process.

    Science.gov (United States)

    Ducray, C; Pommier, J P; Martins, L; Boussin, F D; Sabatier, L

    1999-07-22

    Loss of telomeric repeats during cell proliferation could play a role in senescence. It has been generally assumed that activation of telomerase prevents further telomere shortening and is essential for cell immortalization. In this study, we performed a detailed cytogenetic and molecular characterization of four SV40 transformed human fibroblastic cell lines by regularly monitoring the size distribution of terminal restriction fragments, telomerase activity and the associated chromosomal instability throughout immortalization. The mean TRF lengths progressively decreased in pre-crisis cells during the lifespan of the cultures. At crisis, telomeres reached a critical size, different among the cell lines, contributing to the peak of dicentric chromosomes, which resulted mostly from telomeric associations. We observed a direct correlation between short telomere length at crisis and chromosomal instability. In two immortal cell lines, although telomerase was detected, mean telomere length still continued to decrease whereas the number of dicentric chromosomes associated was stabilized. Thus telomerase could protect specifically telomeres which have reached a critical size against end-to-end dicentrics, while long telomeres continue to decrease, although at a slower rate as before crisis. This suggests a balance between elongation by telomerase and telomere shortening, towards a stabilized 'optimal' length.

  4. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    Science.gov (United States)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because a long processing duration causes performance degradation. This requires testers (the persons who run the tests) to know precisely how long a packet is held by various network nodes. Without any tool's help, this task is time-consuming and error-prone. Thus we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any packet drops.
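
    The capture side of such a tool can be sketched in a few lines (using scapy here purely as an illustration). The interface name and output file are hypothetical, and the real tool's clock synchronization and header-extraction algorithm are not reproduced; the sketch only records the fields needed to match the same packet at two observation points.

```python
from scapy.all import sniff, IP
import csv

# Hedged sketch of one observation point: record a timestamp plus header
# fields that let the same packet be re-identified at another capture point.
def record_headers(interface="eth0", count=100, out_path="point_A.csv"):
    rows = []

    def handle(pkt):
        if IP in pkt:
            rows.append((pkt.time,                 # capture timestamp (s)
                         pkt[IP].src, pkt[IP].dst,
                         pkt[IP].id, pkt[IP].proto))

    sniff(iface=interface, prn=handle, count=count, store=False)
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

# Holding time of a node = timestamp of the matching (src, dst, IP-ID) row at
# the downstream point minus the timestamp at the upstream point.
if __name__ == "__main__":
    record_headers()
```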

  5. An end-to-end microfluidic platform for engineering life supporting microbes in space exploration missions, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — HJ Science & Technology proposes a programmable, low-cost, and compact microfluidic platform capable of running automated end-to-end processes and optimization...

  6. Numerical simulations and analyses of temperature control loop heat pipe for space CCD camera

    Science.gov (United States)

    Meng, Qingliang; Yang, Tao; Li, Chunlin

    2016-10-01

    As one of the key units of a space CCD camera, the CCD components' temperature range and stability affect the camera's image quality metrics, so reasonable thermal design and robust thermal control devices are needed. A temperature control loop heat pipe (TCLHP) is designed which meets the thermal control requirements of the CCD components well. In order to study the dynamic heat and mass transfer behaviour of the TCLHP, particularly in the orbital flight case, a transient numerical model is developed by using well-established empirical correlations for flow models within three-dimensional thermal modeling. The temperature control principle and the details of the mathematical model are presented. The model is used to study the operating state and the flow and heat characteristics, based upon analyses of the variations of temperature, pressure and quality under different operating modes and external heat flux variations. The results indicate that the TCLHP can satisfy the thermal control requirements of the CCD components well, and always ensures good temperature stability and uniformity. Comparison between flight data and simulated results shows that the model is accurate to within 1°C. The model can thus be used for predicting and understanding the transient performance of the TCLHP.

  7. Status report of the end-to-end ASKAP software system: towards early science operations

    Science.gov (United States)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    300 MHz bandwidth for Array Release 1; followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system, from preparing observations to data acquisition, processing and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.

  8. SME2EM: Smart mobile end-to-end monitoring architecture for life-long diseases.

    Science.gov (United States)

    Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani

    2016-01-01

    Monitoring life-long diseases requires continuous measurement and recording of physical vital signs. Most of these diseases are manifested through unexpected and non-uniform occurrences and behaviors. It is impractical to keep patients in hospitals, health-care institutions, or even at home for long periods of time. Monitoring solutions based on smartphones combined with mobile sensors and wireless communication technologies are a potential candidate to support complete mobility-freedom, not only for patients, but also for physicians. However, existing monitoring architectures based on smartphones and modern communication technologies are not suitable to address some challenging issues, such as intensive and big data, resource constraints, data integration, and context awareness in an integrated framework. This manuscript provides a novel mobile-based end-to-end architecture for live monitoring and visualization of life-long diseases. The proposed architecture provides smartness features to cope with continuous monitoring, data explosion, dynamic adaptation, unlimited mobility, and constrained device resources. The integration of the architecture's components provides information about disease recurrences as soon as they occur, to expedite taking necessary actions and thus prevent severe consequences. Our architecture system is formally model-checked to automatically verify its correctness against designers' desirable properties at design time. Its components are fully implemented as Web services with respect to the SOA architecture to be easy to deploy and integrate, and supported by Cloud infrastructure and services to allow high scalability and availability of processes and data being stored and exchanged. The architecture's applicability is evaluated through concrete experimental scenarios on monitoring and visualizing states of epileptic diseases. The obtained theoretical and experimental results are very promising and efficiently satisfy the proposed

  9. jade: An End-To-End Data Transfer and Catalog Tool

    Science.gov (United States)

    Meade, P.

    2017-10-01

    The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. IceCube collects 1 TB of data every day. An online filtering farm processes this data in real time and selects 10% to be sent via satellite to the main data center at the University of Wisconsin-Madison. IceCube has two year-round on-site operators. New operators are hired every year, due to the hard conditions of wintering at the South Pole. These operators are tasked with the daily operations of running a complex detector in serious isolation conditions. One of the systems they operate is the data archiving and transfer system. Due to these challenging operational conditions, the data archive and transfer system must above all be simple and robust. It must also share the limited resource of satellite bandwidth, and collect and preserve useful metadata. The original data archive and transfer software for IceCube was written in 2005. After running in production for several years, the decision was taken to fully rewrite it, in order to address a number of structural drawbacks. The new data archive and transfer software (JADE2) has been in production for several months providing improved performance and resiliency. One of the main goals for JADE2 is to provide a unified system that handles the IceCube data end-to-end: from collection at the South Pole, all the way to long-term archive and preservation in dedicated repositories at the North. In this contribution, we describe our experiences and lessons learned from developing and operating the data archive and transfer software for a particle physics experiment in extreme operational conditions like IceCube.

  10. In vivo laser assisted end-to-end anastomosis with ICG-infused chitosan patches

    Science.gov (United States)

    Rossi, Francesca; Matteini, Paolo; Esposito, Giuseppe; Scerrati, Alba; Albanese, Alessio; Puca, Alfredo; Maira, Giulio; Rossi, Giacomo; Pini, Roberto

    2011-07-01

    Laser assisted vascular repair is a new optimized technique based on the use of an ICG-infused chitosan patch to close a vessel wound, with or even without a few supporting single stitches. We present an in vivo experimental study of an innovative end-to-end laser assisted vascular anastomotic (LAVA) technique, performed with the application of ICG-infused chitosan patches. The photostability and the mechanical properties of ICG-infused chitosan films were preliminarily measured. The in vivo study was performed in 10 New Zealand rabbits. After anesthesia, a 3-cm segment of the right common carotid artery was exposed and clamped proximally and distally. The artery was then interrupted by means of a full-thickness cut. Three single microsutures were used to approximate the two vessel edges. The ICG-infused chitosan patch was rolled all over the anastomotic site and welded by the use of a diode laser emitting at 810 nm and equipped with a 300 μm diameter optical fiber. Welding was obtained by delivering single laser spots to induce local patch/tissue adhesion. The result was an immediate closure of the anastomosis, with no bleeding at clamp release. The animals then underwent different follow-up periods, in order to evaluate the welded vessels over time. At follow-up examinations, all the anastomoses were patent and no bleeding signs were documented. Samples of the welded vessels underwent histological examination. The results showed that this technique offers several advantages over conventional suturing methods: simplification of the surgical procedure, shortening of the operative time, better re-endothelization and an optimal vascular healing process.

  11. NCAR Earth Observing Laboratory - An End-to-End Observational Science Enterprise

    Science.gov (United States)

    Rockwell, A.; Baeuerle, B.; Grubišić, V.; Hock, T. F.; Lee, W. C.; Ranson, J.; Stith, J. L.; Stossmeister, G.

    2017-12-01

    Researchers who want to understand and describe the Earth System require high-quality observations of the atmosphere, ocean, and biosphere. Making these observations not only requires capable research platforms and state-of-the-art instrumentation but also benefits from comprehensive in-field project management and data services. NCAR's Earth Observing Laboratory (EOL) is an end-to-end observational science enterprise that provides leadership in observational research to scientists from universities, U.S. government agencies, and NCAR. Deployment: EOL manages the majority of the NSF Lower Atmosphere Observing Facilities, which includes research aircraft, radars, lidars, profilers, and surface and sounding systems. This suite is designed to address a wide range of Earth system science - from microscale to climate process studies and from the planet's surface into the Upper Troposphere/Lower Stratosphere. EOL offers scientific, technical, operational, and logistics support to small and large field campaigns across the globe. Development: By working closely with the scientific community, EOL's engineering and scientific staff actively develop the next generation of observing facilities, staying abreast of emerging trends, technologies, and applications in order to improve our measurement capabilities. Through our Design and Fabrication Services, we also offer high-level engineering and technical expertise, mechanical design, and fabrication to the atmospheric research community. Data Services: EOL's platforms and instruments collect unique datasets that must be validated, archived, and made available to the research community. EOL's Data Management and Services deliver high-quality datasets and metadata in ways that are transparent, secure, and easily accessible. We are committed to the highest standard of data stewardship from collection to validation to archival. Discovery: EOL promotes curiosity about Earth science, and fosters advanced understanding of the

  12. End-To-END Performance of the future MOMA intrument aboard the EXOMARS MISSION

    Science.gov (United States)

    Buch, A.; Pinnick, V. T.; Szopa, C.; Grand, N.; Danell, R.; van Amerom, F. H. W.; Freissinet, C.; Glavin, D. P.; Stalport, F.; Arevalo, R. D., Jr.; Coll, P. J.; Steininger, H.; Raulin, F.; Goesmann, F.; Mahaffy, P. R.; Brinckerhoff, W. B.

    2016-12-01

    After the SAM experiment aboard the Curiosity rover, the Mars Organic Molecule Analyzer (MOMA) experiment aboard the future ExoMars mission will continue the search for organic compounds at the Mars surface, with the advantage that the sample will be extracted from as deep as 2 meters below the martian surface to minimize the effects of radiation and oxidation on organic materials. To analyse the wide range of organic compounds (volatile and non-volatile) in the martian soil, MOMA combines UV laser desorption/ionization (LDI) with pyrolysis gas chromatography ion trap mass spectrometry (pyr-GC-ITMS). In order to analyse refractory organic compounds and chirality, samples which undergo GC-ITMS analysis may be submitted to a derivatization process, consisting of the reaction of the sample components with specific reactants (MTBSTFA [1], DMF-DMA [2] or TMAH [3]). To optimize and test the performance of the GC-ITMS instrument, we have performed several coupling test campaigns between the GC, provided by the French team (LISA, LATMOS, CentraleSupelec), and the MS, provided by the US team (NASA, GSFC). The last campaign was done with the ITU model, which is similar to the flight model and which includes the oven and the taping station provided by the German team (MPS). The results obtained demonstrate the current status of the end-to-end performance of the gas chromatography-mass spectrometry mode of operation. References: [1] Buch, A. et al. (2009) J Chrom A, 43, 143-151. [2] Freissinet et al. (2011) J Chrom A, 1306, 59-71. [3] Geffroy-Rodier, C. et al. (2009) JAAP, 85, 454-459. Acknowledgements: Funding provided by the Mars Exploration Program (point of contact, George Tahu, NASA/HQ). MOMA is a collaboration between NASA and ESA (PI Goesmann, MPS). The MOMA-GC team acknowledges support from the French Space Agency (CNES), French National Programme of Planetology (PNP), National French Council (CNRS), Pierre Simon Laplace Institute.

  13. End-to-End Trade-space Analysis for Designing Constellation Missions

    Science.gov (United States)

    LeMoigne, J.; Dabney, P.; Foreman, V.; Grogan, P.; Hache, S.; Holland, M. P.; Hughes, S. P.; Nag, S.; Siddiqi, A.

    2017-12-01

    cost model represents an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The current GUI automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost. The end-to-end system will be demonstrated as part of the presentation.

  14. An End-to-End System to Enable Quick, Easy and Inexpensive Deployment of Hydrometeorological Stations

    Science.gov (United States)

    Celicourt, P.; Piasecki, M.

    2014-12-01

    The high cost of hydro-meteorological data acquisition, communication and publication systems, along with limited qualified human resources, is considered the main reason why hydro-meteorological data collection remains a challenge, especially in developing countries. Despite significant advances in sensor network technologies, which in the last two decades gave birth to open hardware and software and to low-cost (less than $50), low-power (on the order of a few milliwatts) sensor platforms, sensor and sensor network deployment remains a labor-intensive, time-consuming, cumbersome, and thus expensive task. These factors give rise to the need to develop an affordable, simple-to-deploy, scalable and self-organizing end-to-end (from sensor to publication) system suitable for deployment in such countries. The design of the envisioned system consists of a few Sensed-And-Programmed Arduino-based sensor nodes with low-cost sensors measuring parameters relevant to hydrological processes, and a Raspberry Pi micro-computer hosting the in-the-field back-end data management. The latter comprises the Python/Django model of the CUAHSI Observations Data Model (ODM), namely DjangODM, backed by a PostgreSQL database server. We are also developing a Python-based data processing script which will be paired with the data autoloading capability of Django to populate the DjangODM database with the incoming data. To publish the data, we will use WOFpy (WaterOneFlow Web Services in Python), developed by the Texas Water Development Board for 'Water Data for Texas', which can produce WaterML web services from a variety of back-end database installations such as SQLite, MySQL, and PostgreSQL. A step further would be the development of an appealing online visualization tool using Python statistics and analytics tools (Scipy, Numpy, Pandas) showing the spatial distribution of variables across an entire watershed as a time-variant layer on top of a basemap.
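
    The record above sketches an ODM-backed Django store running on the Raspberry Pi. As a rough illustration of that idea (this is not the authors' DjangODM code; every model and field name below is a hypothetical stand-in), an ODM-style observation store can be declared as Django models backed by PostgreSQL:

```python
# Hypothetical sketch of an ODM-style schema in Django; the real DjangODM models differ.
from django.db import models

class Site(models.Model):
    site_code = models.CharField(max_length=50, unique=True)      # station identifier
    latitude = models.FloatField()
    longitude = models.FloatField()

class Variable(models.Model):
    variable_code = models.CharField(max_length=50, unique=True)  # e.g. "precip"
    variable_name = models.CharField(max_length=255)
    unit = models.CharField(max_length=50)

class DataValue(models.Model):
    site = models.ForeignKey(Site, on_delete=models.CASCADE)
    variable = models.ForeignKey(Variable, on_delete=models.CASCADE)
    local_date_time = models.DateTimeField()
    data_value = models.FloatField()

    class Meta:
        indexes = [models.Index(fields=["site", "variable", "local_date_time"])]
```

    A periodic loader script on the Raspberry Pi would then parse each incoming sensor report into one DataValue row per measurement, which a WaterML service layer such as WOFpy could expose to downstream users.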

  15. Unidata's Vision for Providing Comprehensive and End-to-end Data Services

    Science.gov (United States)

    Ramamurthy, M. K.

    2009-05-01

    This paper presents Unidata's vision for providing comprehensive, well-integrated, and end-to-end data services for the geosciences. These include an array of functions for collecting, finding, and accessing data; data management tools for generating, cataloging, and exchanging metadata; and submitting or publishing, sharing, analyzing, visualizing, and integrating data. When this vision is realized, users, no matter where they are or how they are connected to the Internet, will be able to find and access a plethora of geosciences data and use Unidata-provided tools and services both productively and creatively in their research and education. What that vision means for the Unidata community is elucidated by drawing a simple analogy. Most users are familiar with e-commerce sites such as Amazon and eBay and with content sharing sites like YouTube and Flickr. On the eBay marketplace, people can sell practically anything at any time and buyers can share their experience of purchasing a product or the reputation of a seller. Likewise, at Amazon, thousands of merchants sell their goods and millions of customers not only buy those goods, but provide a review or opinion of the products they buy and share their experiences as purchasers. Similarly, YouTube and Flickr are sites tailored to video- and photo-sharing, respectively, where users can upload their own content and share it with millions of other users, including family and friends. What all these sites, together with social-networking applications like MySpace and Facebook, have enabled is a sense of a virtual community in which users can search and browse products or content, comment on and rate those products from anywhere, at any time, and via any Internet-enabled device like an iPhone, laptop, or desktop computer. In essence, these enterprises have fundamentally altered people's buying modes and behavior toward purchases. Unidata believes that similar approaches, appropriately tailored to meet the needs of the scientific

  16. Ocean Acidification Scientific Data Stewardship: An approach for end-to-end data management and integration

    Science.gov (United States)

    Arzayus, K. M.; Garcia, H. E.; Jiang, L.; Michael, P.

    2012-12-01

    As the designated Federal permanent oceanographic data center in the United States, NOAA's National Oceanographic Data Center (NODC) has been providing scientific stewardship for national and international marine environmental and ecosystem data for over 50 years. NODC is supporting NOAA's Ocean Acidification Program and the science community by providing end-to-end scientific data management of ocean acidification (OA) data, dedicated online data discovery, and user-friendly access to a diverse range of historical and modern OA and other chemical, physical, and biological oceanographic data. This effort is being catalyzed by the NOAA Ocean Acidification Program, but the intended reach is the broader scientific ocean acidification community. The first three years of the project will be focused on infrastructure building. A complete ocean acidification data content standard is being developed to ensure that a full spectrum of ocean acidification data and metadata can be stored and utilized for optimal data discovery and access in usable data formats. We plan to develop a data access interface that allows users to constrain their search based on real-time and delayed-mode measured variables, scientific data quality, observation types, temporal and spatial coverage, methods, instruments, standards, and collecting institutions. In addition, NODC seeks to utilize the existing suite of international standards (including ISO 19115-2 and CF-compliant netCDF) to help our data producers apply those standards to their data, and to help our data consumers make use of well-standardized, metadata-rich data sets. These tools will be available through our NODC Ocean Acidification Scientific Data Stewardship (OADS) web page at http://www.nodc.noaa.gov/oceanacidification. NODC also has a goal to provide each archived dataset with a unique ID, to ensure a means of providing credit to the data provider. Working with partner institutions, such as the
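
    As a concrete illustration of the CF-compliant netCDF packaging mentioned above, the sketch below uses the netCDF4-python library to write a minimal ocean acidification time series with CF-style metadata; the variable names, attribute values and pH numbers are illustrative assumptions, not NODC's actual templates:

```python
import numpy as np
from netCDF4 import Dataset

with Dataset("oa_timeseries.nc", "w", format="NETCDF4") as nc:
    nc.Conventions = "CF-1.6"                       # advertise CF compliance
    nc.title = "Example ocean acidification time series"

    nc.createDimension("time", None)                # unlimited record dimension
    time = nc.createVariable("time", "f8", ("time",))
    time.units = "days since 2010-01-01 00:00:00"
    time.standard_name = "time"

    ph = nc.createVariable("ph_total", "f4", ("time",), fill_value=-999.0)
    ph.long_name = "seawater pH on the total scale"
    ph.standard_name = "sea_water_ph_reported_on_total_scale"
    ph.units = "1"

    time[:] = np.arange(5)
    ph[:] = [8.07, 8.05, 8.06, 8.04, 8.03]          # made-up values
```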

  17. Common Patterns with End-to-end Interoperability for Data Access

    Science.gov (United States)

    Gallagher, J.; Potter, N.; Jones, M. B.

    2010-12-01

    At first glance, using common storage formats and open standards should be enough to ensure interoperability between data servers and client applications, but that is often not the case. In the REAP (Realtime Environment for Analytical Processing; NSF #0619060) project we integrated access to data from OPeNDAP servers into the Kepler workflow system and found that, as in previous cases, we spent the bulk of our effort addressing the twin issues of data model compatibility and integration strategies. Implementing seamless data access between a remote data source and a client application (data sink) can be broken down into two kinds of issues. First, the solution must address any differences in the data models used by the data source (OPeNDAP) and the data sink (the Kepler workflow system). If these models match completely, there is little work to be done. However, that is rarely the case. To map OPeNDAP's data model to Kepler's, we used two techniques (ignoring trivial conversions): on-the-fly type mapping and out-of-band communication. Type conversion takes place both for data and metadata because Kepler requires a priori knowledge of some aspects (e.g., syntactic metadata) of the data to build a workflow. In addition, OPeNDAP's constraint expression syntax was used to send out-of-band information to restrict the data requested from the server, facilitating changes in the returned data's type. This technique gives users fine-grained control over the data request, which is potentially useful, at the cost of requiring that users understand a little about the data source's processing capabilities. The second set of issues for end-to-end data access concerns integration strategies. OPeNDAP provides several different tools for bringing data into an application: C++, C and Java libraries that provide functions for newly written software; the netCDF library, which enables existing applications to read from servers using an older interface; and simple
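
    The 'out-of-band' control described above travels in the OPeNDAP constraint expression, which is simply appended to the dataset URL so the server subsets the data before anything crosses the network. A small sketch (the server URL, variable name and index ranges are hypothetical):

```python
from urllib.request import urlopen

base_url = "http://example.org/opendap/sst_monthly.nc"   # hypothetical OPeNDAP endpoint
constraint = "sst[0:1:0][10:1:20][30:1:40]"              # one time step, a small lat/lon window

# The ".ascii" suffix requests a human-readable response; ".dods" would return the
# binary DAP encoding that client libraries normally parse.
with urlopen(f"{base_url}.ascii?{constraint}") as response:
    print(response.read().decode()[:500])
```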

  18. On the importance of risk knowledge for an end-to-end tsunami early warning system

    Science.gov (United States)

    Post, Joachim; Strunz, Günter; Riedlinger, Torsten; Mück, Matthias; Wegscheider, Stephanie; Zosseder, Kai; Steinmetz, Tilmann; Gebert, Niklas; Anwar, Herryal

    2010-05-01

    context has been worked out. The generated results contribute significantly to the fields of (1) warning decision and warning levels, (2) warning dissemination and warning message content, (3) early warning chain planning, (4) increasing response capabilities and protective systems, (5) emergency relief and (6) enhancing communities' awareness of and preparedness for tsunami threats. Additionally, examples will be given of the potential for operational use of risk information in early warning systems, drawing on first experiences with the tsunami early warning center in Jakarta, Indonesia. Besides this, the importance of linking national-level early warning information with tsunami risk information available at the local level (e.g., linking warning message information on expected intensity with the respective tsunami hazard zone maps at community level for effective evacuation) will be demonstrated through experiences gained in three pilot areas in Indonesia. The presentation seeks to provide new insights on the benefits of using risk information in early warning and will provide further evidence that practical use of risk information is an important and indispensable component of end-to-end early warning.

  19. A vision for end-to-end data services to foster international partnerships through data sharing

    Science.gov (United States)

    Ramamurthy, M.; Yoksas, T.

    2009-04-01

    Increasingly, the conduct of science requires scientific partnerships and sharing of knowledge, information, and other assets. This is particularly true in our field where the highly-coupled Earth system and its many linkages have heightened the importance of collaborations across geographic, disciplinary, and organizational boundaries. The climate system, for example, is far too complex a puzzle to be unraveled by individual investigators or nations. As articulated in the NSF Strategic Plan: FY 2006-2011, "…discovery increasingly requires expertise of individuals from different disciplines, with diverse perspectives, and often from different nations, working together to accommodate the extraordinary complexity of today's science and engineering challenges." The Nobel Prize winning IPCC assessments are a prime example of such an effort. Earth science education is also uniquely suited to drawing connections between the dynamic Earth system and societal issues. Events like the 2004 Indian Ocean tsunami and Hurricane Katrina provide ample evidence of this relevance, as they underscore the importance of timely and interdisciplinary integration and synthesis of data. Our success in addressing such complex problems and advancing geosciences depends on the availability of a state-of-the-art and robust cyberinfrastructure, transparent and timely access to high-quality data from diverse sources, and requisite tools to integrate and use the data effectively, toward creating new knowledge. To that end, Unidata's vision calls for providing comprehensive, well-integrated, and end-to-end data services for the geosciences. These include an array of functions for collecting, finding, and accessing data; data management tools for generating, cataloging, and exchanging metadata; and submitting or publishing, sharing, analyzing, visualizing, and integrating data. When this vision is realized, users — no matter where they are, how they are connected to the Internet, or what

  20. An end-to-end coupled model ROMS-N2P2Z2D2-OSMOSE of ...

    African Journals Online (AJOL)

    An end-to-end coupled model ROMS-N2P2Z2D2-OSMOSE of the southern Benguela foodweb: parameterisation, calibration and pattern-oriented validation. ... We also highlight the capacity of this model for tracking indicators at various hierarchical levels. Keywords: individual-based model, model validation, ...

  1. GROWTH OF THE HYPOPLASTIC AORTIC-ARCH AFTER SIMPLE COARCTATION RESECTION AND END-TO-END ANASTOMOSIS

    NARCIS (Netherlands)

    BROUWER, MHJ; CROMMEDIJKHUIS, AH; EBELS, T; EIJGELAAR, A

    Surgical treatment of a hypoplastic aortic arch associated with an aortic coarctation is controversial. The controversy concerns the claimed need to surgically enlarge the diameter of the hypoplastic arch, in addition to resection and end-to-end anastomosis. The purpose of this prospective study is

  2. IMS Intra- and Inter Domain End-to-End Resilience Analysis

    DEFF Research Database (Denmark)

    Kamyod, Chayapol; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2013-01-01

    This paper evaluated the resilience of a reference IMS-based network topology in operation through key reliability parameters via OPNET. The reliability behaviors of communication within the same registered home IMS domain and across domains were simulated and compared. Besides, the reliability effects...

  3. Exploring the requirements for multimodal interaction for mobile devices in an end-to-end journey context.

    Science.gov (United States)

    Krehl, Claudia; Sharples, Sarah

    2012-01-01

    The paper investigates the requirements for multimodal interaction on mobile devices in an end-to-end journey context. Traditional interfaces are deemed cumbersome and inefficient for exchanging information with the user. Multimodal interaction provides a different, user-centred approach allowing for more natural and intuitive interaction between humans and computers. It is especially suitable for mobile interaction as it can overcome additional constraints including small screens, awkward keypads, and continuously changing settings - an inherent property of mobility. This paper is based on end-to-end journeys, in which users encounter several contexts during their journeys. Interviews and focus groups explore the requirements for multimodal interaction design for mobile devices by examining journey stages and identifying the users' information needs and sources. Findings suggest that multimodal communication is crucial when users multitask. Choosing suitable modalities depends on the user's context, characteristics and tasks.

  4. Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches

    Science.gov (United States)

    Patrick, Christina M.

    2011-01-01

    This thesis presents a uniquely designed, high-performance I/O stack that minimizes end-to-end interference while spanning multi-level shared buffer cache hierarchies that access shared I/O servers. In this thesis, I show that I can build a superior I/O stack which minimizes the inter-application interference…

  5. Sleep/wake scheduling scheme for minimizing end-to-end delay in multi-hop wireless sensor networks

    OpenAIRE

    Madani Sajjad; Nazir Babar; Hasbullah Halabi

    2011-01-01

    We present a sleep/wake scheduling protocol for minimizing end-to-end delay in event-driven multi-hop wireless sensor networks. In contrast to generic sleep/wake scheduling schemes, our proposed algorithm performs scheduling that is dependent on traffic loads. Nodes adapt their sleep/wake schedule based on traffic loads in response to three important factors: (a) the distance of the node from the sink node, (b) the importance of the node's location from a connectivity perspective, and...

  6. Multi-institutional evaluation of end-to-end protocol for IMRT/VMAT treatment chains utilizing conventional linacs.

    Science.gov (United States)

    Loughery, Brian; Knill, Cory; Silverstein, Evan; Zakjevskii, Viatcheslav; Masi, Kathryn; Covington, Elizabeth; Snyder, Karen; Song, Kwang; Snyder, Michael

    2018-03-20

    We conducted a multi-institutional assessment of a recently developed end-to-end monthly quality assurance (QA) protocol for external beam radiation therapy treatment chains. This protocol validates the entire treatment chain against a baseline to detect the presence of complex errors not easily found in standard component-based QA methods. Participating physicists from 3 institutions ran the end-to-end protocol on treatment chains that include Imaging and Radiation Oncology Core (IROC)-credentialed linacs. Results were analyzed in the format of American Association of Physicists in Medicine (AAPM) Task Group (TG)-119 so that they may be referenced by future test participants. Optically stimulated luminescent dosimeter (OSLD), EBT3 radiochromic film, and A1SL ion chamber readings were accumulated across 10 test runs. Confidence limits were calculated to determine where 95% of measurements should fall. From the calculated confidence limits, 95% of measurements should fall within 5% error for OSLDs, within 4% error for ionization chambers, and within 4% error (96% relative gamma pass rate) for radiochromic film at 3% agreement/3 mm distance to agreement. Data were separated by institution, model of linac, and treatment protocol (intensity-modulated radiation therapy [IMRT] vs volumetric modulated arc therapy [VMAT]). A total of 97% of OSLDs, 98% of ion chambers, and 93% of films were within the confidence limits; measurements were found outside these limits by a maximum of 4%, consistent despite institutional differences in OSLD reading equipment and radiochromic film calibration techniques. Results from this test may be used by clinics for data comparison. Areas of improvement were identified in the end-to-end protocol that can be implemented in an updated version. The consistency of our data demonstrates the reproducibility and ease-of-use of such tests and suggests a potential role for their use in broad end-to-end QA initiatives. Copyright © 2018 American Association of Medical
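
    For readers outside the field, the confidence limits quoted above follow the TG-119 convention CL = |mean error| + 1.96 x SD, so roughly 95% of measurements are expected to fall within CL of the planned dose. A minimal sketch of that calculation, with made-up OSLD readings:

```python
import numpy as np

def confidence_limit(percent_errors):
    """TG-119-style confidence limit from a set of measured dose errors (in %)."""
    errors = np.asarray(percent_errors, dtype=float)
    return abs(errors.mean()) + 1.96 * errors.std(ddof=1)

osld_errors = [1.2, -0.8, 2.1, 0.5, -1.6, 1.9, 0.3, -0.4, 1.1, -2.0]   # hypothetical data
print(f"OSLD confidence limit: {confidence_limit(osld_errors):.1f}%")
```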

  7. Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media

    Science.gov (United States)

    Park, Ju-Won; Kim, JongWon

    2004-10-01

    As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
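
    As a toy illustration of the two monitoring styles being combined (this is not the AG toolkit's implementation; the host, port and traffic are hypothetical), the active side can be reduced to timestamped UDP probes against an echo service, while the passive side can estimate interarrival jitter from observed timestamps using the RFC 3550 estimator J = J + (|D| - J)/16:

```python
import socket
import time

def active_probe_rtt(host="127.0.0.1", port=9999, count=5):
    """Round-trip times (seconds) measured with small UDP probe packets."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for i in range(count):
        t0 = time.time()
        sock.sendto(f"probe {i}".encode(), (host, port))
        try:
            sock.recvfrom(64)                    # expects an echo service at the far end
            rtts.append(time.time() - t0)
        except socket.timeout:
            rtts.append(None)                    # counted as loss by the monitor
    return rtts

def rfc3550_jitter(send_times, recv_times):
    """Passive interarrival jitter estimate from send/receive timestamp pairs."""
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            jitter += (abs(transit - prev_transit) - jitter) / 16.0
        prev_transit = transit
    return jitter
```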

  8. Debris mitigation measures by satellite design and operational methods - Findings from the DLR space debris End-to-End Service

    Science.gov (United States)

    Sdunnus, H.; Beltrami, P.; Janovsky, R.; Koppenwallner, G.; Krag, H.; Reimerdes, H.; Schäfer, F.

    Debris mitigation has been recognised as an issue to be addressed by the space-faring nations around the world. Currently, various activities are going on aiming at the establishment of debris mitigation guidelines at various levels, reaching from the UN down to national space agencies. Though guidelines established at the national level already provide concrete information on how things should be done (rather than specifying what should be done or providing fundamental principles), potential users of the guidelines will still have the need to explore the technical, management, and financial implications of the guidelines for their projects. Those questions are addressed by the so-called "Space Debris End-to-End Service" project, which has been initiated as a national initiative of the German Aerospace Centre (DLR). Based on a review of already existing mitigation guidelines or guidelines under development, and following an identification of needs from a circle of industrial users, the "End-to-End Service Guidelines" have been established for designers and operators of spacecraft. The End-to-End Service Guidelines are based on requirements addressed by the mitigation guidelines and provide recommendations on how and when the technical consideration of the mitigation guidelines should take place. By referencing requirements from the mitigation guidelines, the End-to-End Service Guidelines address the consideration of debris mitigation measures in spacecraft design and operational measures. This paper will give an introduction to the End-to-End Service Guidelines. It will focus on the proposals made for mitigation measures in the S/C system design, i.e. on protective design measures inside the spacecraft and on design measures, e.g. innovative protective (shielding) systems. Furthermore, approaches to the analytical optimisation of protective systems will be presented, aiming at the minimisation of shield mass under conservation of the protective effects. On the

  9. SciBox, an end-to-end automated science planning and commanding system

    Science.gov (United States)

    Choo, Teck H.; Murchie, Scott L.; Bedini, Peter D.; Steele, R. Josh; Skura, Joseph P.; Nguyen, Lillian; Nair, Hari; Lucks, Michael; Berman, Alice F.; McGovern, James A.; Turner, F. Scott

    2014-01-01

    SciBox is a new technology for planning and commanding science operations for Earth-orbital and planetary space missions. It has been incrementally developed since 2001 and demonstrated on several spaceflight projects. The technology has matured to the point that it is now being used to plan and command all orbital science operations for the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission to Mercury. SciBox encompasses the derivation of observing sequences from science objectives, the scheduling of those sequences, the generation of spacecraft and instrument commands, and the validation of those commands prior to uploading to the spacecraft. Although the process is automated, science and observing requirements are incorporated at each step by a series of rules and parameters to optimize observing opportunities, which are tested and validated through simulation and review. Except for limited special operations and tests, there is no manual scheduling of observations or construction of command sequences. SciBox reduces the lead time for operations planning by shortening the time-consuming coordination process, reduces cost by automating the labor-intensive processes of human-in-the-loop adjudication of observing priorities, reduces operations risk by systematically checking constraints, and maximizes science return by fully evaluating the trade space of observing opportunities to meet MESSENGER science priorities within spacecraft recorder, downlink, scheduling, and orbital-geometry constraints.

  10. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing.

    Science.gov (United States)

    Fredrikson, Matthew; Lantz, Eric; Jha, Somesh; Lin, Simon; Page, David; Ristenpart, Thomas

    2014-08-01

    We initiate the study of privacy in pharmacogenetics, wherein machine learning models are used to guide medical treatments based on a patient's genotype and background. Performing an in-depth case study on privacy in personalized warfarin dosing, we show that suggested models carry privacy risks, in particular because attackers can perform what we call model inversion: an attacker, given the model and some demographic information about a patient, can predict the patient's genetic markers. As differential privacy (DP) is an oft-proposed solution for medical settings such as this, we evaluate its effectiveness for building private versions of pharmacogenetic models. We show that DP mechanisms prevent our model inversion attacks when the privacy budget is carefully selected. We go on to analyze the impact on utility by performing simulated clinical trials with DP dosing models. We find that for privacy budgets effective at preventing attacks, patients would be exposed to increased risk of stroke, bleeding events, and mortality. We conclude that current DP mechanisms do not simultaneously improve genomic privacy while retaining desirable clinical efficacy, highlighting the need for new mechanisms that should be evaluated in situ using the general methodology introduced by our work.
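
    For readers unfamiliar with the DP mechanisms being evaluated, the canonical example is the Laplace mechanism, which perturbs a released statistic with noise scaled to sensitivity/epsilon; the tighter the privacy budget epsilon, the noisier the output, which is exactly the privacy/utility tension the study quantifies for dosing models. A minimal sketch with illustrative numbers (not the paper's models):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy via the Laplace mechanism."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one patient changes it by at most 1.
for eps in (0.1, 1.0, 10.0):
    print(eps, round(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=eps), 2))
```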

  11. Telephony Over IP: A QoS Measurement-Based End to End Control Algorithm

    Directory of Open Access Journals (Sweden)

    Luigi Alcuri

    2004-12-01

    This paper presents a method for admitting voice calls in Telephony over IP (ToIP) scenarios. This method, called QoS-Weighted CAC, aims to guarantee quality of service to telephony applications. We use a measurement-based call admission control algorithm, which detects congested network links through feedback on overall link utilization. This feedback is based on measurements of packet delivery latencies related to voice over IP connections at the edges of the transport network. In this way we introduce a closed-loop control method, which is able to auto-adapt the quality margin on the basis of network load and specific service level requirements. Moreover, we evaluate the difference in performance achieved by different queue management configurations in guaranteeing quality of service to telephony applications; our goal was to evaluate the weight of edge router queue configuration in a complex and realistic Telephony over IP scenario. We compare many well-known queue scheduling algorithms, such as SFQ, WRR, RR, WIRR, and Priority. This comparison aims to locate queue schedulers in a more general control scheme context where different elements, such as DiffServ marking and admission control algorithms, contribute to the overall quality of service required by real-time voice conversations. By means of software simulations we compare this solution with other call admission methods already described in the scientific literature in order to locate the proposed method in a more general control scheme context. On the basis of the results we try to highlight the possible advantages of this QoS-Weighted solution in comparison with other similar CAC solutions (in particular Measured Sum, Bandwidth Equivalent with Hoeffding Bounds, and Simple Measure CAC) in terms of complexity, stability, management, tunability to service level requirements, and compatibility with actual network implementations.
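
    Among the baselines named above, Measured Sum is the simplest measurement-based admission test: admit a new call only while the measured aggregate load plus the call's declared rate stays under a target utilisation of link capacity. A compact sketch of that test (the QoS-Weighted method additionally adapts its margin from measured edge-to-edge latencies, which is not modelled here; the numbers are illustrative):

```python
def measured_sum_admit(measured_load_bps, call_rate_bps, capacity_bps, target_utilisation=0.9):
    """Admit the call only if measured load plus its declared rate stays under the target."""
    return measured_load_bps + call_rate_bps <= target_utilisation * capacity_bps

# A 64 kb/s voice call offered to a 10 Mb/s link (target utilisation 90%, i.e. 9 Mb/s)
print(measured_sum_admit(8.0e6, 64e3, 10e6))   # True: there is room under the target
print(measured_sum_admit(9.0e6, 64e3, 10e6))   # False: the call would breach the target
```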

  12. Understanding Effect of Constraint Release Environment on End-to-End Vector Relaxation of Linear Polymer Chains

    KAUST Repository

    Shivokhin, Maksim E.

    2017-05-30

    We propose and verify methods based on the slip-spring (SSp) model [Macromolecules 2005, 38, 14] for predicting the effect of any monodisperse, binary, or ternary environment of topological constraints on the relaxation of the end-to-end vector of a linear probe chain. For this purpose we first validate the ability of the model to consistently predict both the viscoelastic and dielectric response of monodisperse and binary mixtures of type A polymers, based on published experimental data. We also report the synthesis of new binary and ternary polybutadiene systems, the measurement of their linear viscoelastic response, and the prediction of these data by the SSp model. We next clarify the relaxation mechanisms of probe chains in these constraint release (CR) environments by analyzing a set of "toy" SSp models with simplified constraint release rates, by examining fluctuations of the end-to-end vector. In our analysis, the longest relaxation time of the probe chain is determined by a competition between the longest relaxation times of the effective CR motions of the fat and thin tubes and the motion of the chain itself in the thin tube. This picture is tested by the analysis of four model systems designed to separate and estimate every single contribution involved in the relaxation of the probe's end-to-end vector in polydisperse systems. We follow the CR picture of Viovy et al. [Macromolecules 1991, 24, 3587] and refine the effective chain friction in the thin and fat tubes based on Read et al. [J. Rheol. 2012, 56, 823]. The derived analytical equations form a basis for generalizing the proposed methodology to polydisperse mixtures of linear and branched polymers. The consistency between the SSp model and tube model predictions is a strong indicator of the compatibility between these two distinct mesoscopic frameworks.
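
    The quantity at the heart of this analysis is the end-to-end vector R(t) = r_N(t) - r_1(t) of the probe chain and its time autocorrelation, whose slowest decay defines the longest relaxation time discussed above. A small numpy sketch of that post-processing step (the trajectory here is a random placeholder, not slip-spring output):

```python
import numpy as np

def end_to_end_autocorrelation(trajectory):
    """trajectory: array of shape (n_frames, n_beads, 3) with bead positions."""
    R = trajectory[:, -1, :] - trajectory[:, 0, :]        # end-to-end vector per frame
    n = len(R)
    acf = np.empty(n)
    for lag in range(n):
        acf[lag] = np.mean(np.sum(R[lag:] * R[:n - lag], axis=1))   # <R(t+lag).R(t)>
    return acf / acf[0]                                   # normalised so acf[0] = 1

traj = np.cumsum(np.random.randn(200, 50, 3), axis=1)     # placeholder random-walk chains
print(end_to_end_autocorrelation(traj)[:5])
```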

  13. Automated Design of Propellant-Optimal, End-to-End, Low-Thrust Trajectories for Trojan Asteroid Tours

    Science.gov (United States)

    Stuart, Jeffrey; Howell, Kathleen; Wilson, Roby

    2013-01-01

    The Sun-Jupiter Trojan asteroids are celestial bodies of great scientific interest as well as potential sources of water and other mineral resources for long-term human exploration of the solar system. Previous investigations under this project have addressed the automated design of tours within the asteroid swarm. This investigation expands the current automation scheme by incorporating options for a complete trajectory design approach to the Trojan asteroids. Computational aspects of the design procedure are automated such that end-to-end trajectories are generated with a minimum of human interaction after key elements and constraints associated with a proposed mission concept are specified.

  14. AN AUTOMATED END-TO-END MULTI-AGENT QOS BASED ARCHITECTURE FOR SELECTION OF GEOSPATIAL WEB SERVICES

    Directory of Open Access Journals (Sweden)

    M. Shah

    2012-07-01

    With the proliferation of web services published over the internet, multiple web services may provide similar functionality, but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their service providers. In a quality-driven selection of web services, it is important to consider the non-functional properties of the web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated end-to-end multi-agent based solution to provide the best-fit web service to the service requester based on QoS.
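
    A common building block in such QoS-driven selection is a normalised, weighted scoring of each candidate's non-functional attributes, from which the agents pick the best-fit service. The sketch below is only a generic illustration of that scoring step; the attribute names, weights and candidate services are assumptions, not the paper's agent architecture:

```python
def rank_services(candidates, weights):
    """candidates: {name: {attribute: value}}; lower-is-better attributes get negative weights."""
    attrs = list(weights)
    lo = {a: min(c[a] for c in candidates.values()) for a in attrs}
    hi = {a: max(c[a] for c in candidates.values()) for a in attrs}

    def score(c):
        total = 0.0
        for a, w in weights.items():
            span = (hi[a] - lo[a]) or 1.0          # avoid division by zero
            total += w * (c[a] - lo[a]) / span     # min-max normalised contribution
        return total

    return sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)

services = {
    "wms_a": {"availability": 0.99, "response_ms": 420},
    "wms_b": {"availability": 0.95, "response_ms": 180},
}
print(rank_services(services, weights={"availability": 0.6, "response_ms": -0.4}))
```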

  15. Comparison of Direct Side-to-End and End-to-End Hypoglossal-Facial Anastomosis for Facial Nerve Repair.

    Science.gov (United States)

    Samii, Madjid; Alimohamadi, Maysam; Khouzani, Reza Karimi; Rashid, Masoud Rafizadeh; Gerganov, Venelin

    2015-08-01

    The hypoglossal-facial anastomosis (HFA) is the gold standard for facial reanimation in patients with severe facial nerve palsy. The major drawbacks of the classic HFA technique are lingual morbidities due to hypoglossal nerve transection. The side-to-end HFA is a modification of the classic technique with fewer tongue-related morbidities. In this study we compared the outcomes of the classic end-to-end and the direct side-to-end HFA surgeries performed at our center with regard to the facial reanimation success rate and tongue-related morbidities. Twenty-six successive cases of HFA were enrolled. In 9 of them end-to-end anastomoses were performed, and 17 had direct side-to-end anastomoses. The House-Brackmann (HB) and Pitty and Tator (PT) scales were used to document surgical outcome. Hemiglossal atrophy, swallowing, and hypoglossal nerve function were assessed at follow-up. The original pathology was vestibular schwannoma in 15, meningioma in 4, brain stem glioma in 4, and other pathologies in 3. The mean interval between facial palsy and HFA was 18 months (range: 0-60). The median follow-up period was 20 months. The PT grade at follow-up was worse in patients with a longer interval between facial palsy and HFA (P value: 0.041). The lesion type was the only other factor that affected the PT grade (the best results in vestibular schwannoma and the worst in the other pathologies group, P value: 0.038). The recovery period for facial tonicity was longer in patients with radiation therapy before HFA (13.5 vs. 8.5 months) and in those with a longer than 2-year interval from facial palsy to HFA (13.5 vs. 8.5 months). Although no significant difference between the side-to-end and the end-to-end groups was seen in terms of facial nerve functional recovery, patients from the side-to-end group had a significantly lower rate of lingual morbidities (tongue hemiatrophy: 100% vs. 5.8%, swallowing difficulty: 55% vs. 11.7%, speech disorder 33% vs. 0%). With the side-to-end HFA

  16. Risk Factors for Dehiscence of Stapled Functional End-to-End Intestinal Anastomoses in Dogs: 53 Cases (2001-2012).

    Science.gov (United States)

    Snowdon, Kyle A; Smeak, Daniel D; Chiang, Sharon

    2016-01-01

    To identify risk factors for dehiscence in stapled functional end-to-end anastomoses (SFEEA) in dogs. Retrospective case series. Dogs (n = 53) requiring an enterectomy. Medical records from a single institution for all dogs undergoing an enterectomy (2001-2012) were reviewed. Surgeries were included when gastrointestinal (GIA) and thoracoabdominal (TA) stapling equipment was used to create a functional end-to-end anastomosis between segments of small intestine or small and large intestine in dogs. Information regarding preoperative, surgical, and postoperative factors was recorded. Anastomotic dehiscence was noted in 6 of 53 cases (11%), with a mortality rate of 83%. The only preoperative factor significantly associated with dehiscence was the presence of inflammatory bowel disease (IBD). Surgical factors significantly associated with dehiscence included the presence, duration, and number of intraoperative hypotensive periods, and location of anastomosis, with greater odds of dehiscence in anastomoses involving the large intestine. IBD, location of anastomosis, and intraoperative hypotension are risk factors for intestinal anastomotic dehiscence after SFEEA in dogs. Previously suggested risk factors (low serum albumin concentration, preoperative septic peritonitis, and intestinal foreign body) were not confirmed in this study. © Copyright 2015 by The American College of Veterinary Surgeons.

  17. A new technique for end-to-end ureterostomy in the rat, using an indwelling reabsorbable stent.

    Science.gov (United States)

    Carmignani, G; Farina, F P; De Stefani, S; Maffezzini, M

    1983-01-01

    The restoration of the continuity of the urinary tract represents one of the major problems in rat renal transplantation. End-to-end ureterostomy is the most physiologically effective technique; however, it involves noteworthy technical difficulties because of the extremely thin caliber of the ureter in the rat and the high incidence of postoperative hydronephrosis. We describe a new technique for end-to-end ureterostomy in the rat in which the use of an absorbable ureteral stent is recommended. A 5-0 plain catgut thread is used as a stent. The anastomosis is performed under an operating microscope at ×25-40 magnification with interrupted sutures of 11-0 Vicryl. The use of the indwelling stent facilitates the performance of the anastomosis and yields optimal results. The macroscopic, radiological, and histological controls in a group of rats operated on with this technique showed a very high percentage of success with no complications, a result undoubtedly superior to that obtained with conventional methods.

  18. A multicentre 'end to end' dosimetry audit of motion management (4DCT-defined motion envelope) in radiotherapy.

    Science.gov (United States)

    Palmer, Antony L; Nash, David; Kearton, John R; Jafari, Shakardokht M; Muscat, Sarah

    2017-12-01

    External dosimetry audit is valuable for the assurance of radiotherapy quality. However, motion management has not been rigorously audited, despite its complexity and importance for accuracy. We describe the first end-to-end dosimetry audit for non-SABR (stereotactic ablative body radiotherapy) lung treatments, measuring dose accumulation in a moving target, and assessing adequacy of target dose coverage. A respiratory motion lung-phantom with custom-designed insert was used. Dose was measured with radiochromic film, employing triple-channel dosimetry and uncertainty reduction. The host's 4DCT scan, outlining and planning techniques were used. Measurements with the phantom static and then moving at treatment delivery separated inherent treatment uncertainties from motion effects. Calculated and measured dose distributions were compared by isodose overlay and gamma analysis, and we introduce the concept of 'dose plane histograms' for clinically relevant interpretation of film dosimetry. 12 radiotherapy centres and 19 plans were audited: conformal, IMRT (intensity modulated radiotherapy) and VMAT (volumetric modulated radiotherapy). Excellent agreement between planned and static-phantom results was seen (mean gamma pass 98.7% at 3%/2 mm). Dose blurring was evident in the moving-phantom measurements (mean gamma pass 88.2% at 3%/2 mm). Planning techniques for motion management were adequate to deliver the intended moving-target dose coverage. A novel, clinically-relevant, end-to-end dosimetry audit of motion management strategies in radiotherapy is reported. Copyright © 2017 Elsevier B.V. All rights reserved.
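
    The pass rates quoted above come from a gamma analysis in the sense of Low et al.: a reference point passes (gamma <= 1) if some evaluated point is simultaneously close in dose (e.g. 3%) and in position (e.g. 2 mm). The sketch below is a simplified 1-D, globally normalised version of that metric; real film analysis is 2-D and uses triple-channel dosimetry and calibration, which this omits:

```python
import numpy as np

def gamma_1d(x_mm, dose_ref, dose_eval, dose_crit=0.03, dist_crit_mm=2.0):
    """Gamma value at each reference point (global normalisation to the reference maximum)."""
    norm = dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x_mm, dose_ref)):
        dose_term = (dose_eval - di) / (dose_crit * norm)
        dist_term = (x_mm - xi) / dist_crit_mm
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas

x = np.linspace(0, 100, 201)                        # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)                 # synthetic planned profile
meas = 1.01 * np.exp(-((x - 51) / 20) ** 2)         # synthetic measured profile, shifted 1 mm
g = gamma_1d(x, ref, meas)
print(f"gamma pass rate: {100 * np.mean(g <= 1):.1f}%")
```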

  19. End-to-end self-assembly of gold nanorods in isopropanol solution: experimental and theoretical studies

    Energy Technology Data Exchange (ETDEWEB)

    Gordel, M., E-mail: marta.gordel@pwr.edu.pl [Wrocław University of Technology, Advanced Materials Engineering and Modelling Group, Faculty of Chemistry (Poland); Piela, K., E-mail: katarzyna.piela@pwr.edu.pl [Wrocław University of Technology, Department of Physical and Quantum Chemistry (Poland); Kołkowski, R. [Wrocław University of Technology, Advanced Materials Engineering and Modelling Group, Faculty of Chemistry (Poland); Koźlecki, T. [Wrocław University of Technology, Department of Chemical Engineering, Faculty of Chemistry (Poland); Buckle, M. [CNRS, École Normale Supérieure de Cachan, Laboratoire de Biologie et Pharmacologie Appliquée (France); Samoć, M. [Wrocław University of Technology, Advanced Materials Engineering and Modelling Group, Faculty of Chemistry (Poland)

    2015-12-15

    We describe here a modification of properties of colloidal gold nanorods (NRs) resulting from the chemical treatment used to carry out their transfer into isopropanol (IPA) solution. The NRs acquire a tendency to attach one to another by their ends (end-to-end assembly). We focus on the investigation of the change in position and shape of the longitudinal surface plasmon (l-SPR) band after self-assembly. The experimental results are supported by a theoretical calculation, which rationalizes the dramatic change in optical properties when the NRs are positioned end-to-end at short distances. The detailed spectroscopic characterization performed at the consecutive stages of transfer of the NRs from water into IPA solution revealed the features of the interaction between the polymers used as ligands and their contribution to the final stage, when the NRs were dispersed in IPA solution. The efficient method of aligning the NRs detailed here may facilitate applications of the self-assembled NRs as building blocks for optical materials and biological sensing.

  20. End-to-End Joint Antenna Selection Strategy and Distributed Compress and Forward Strategy for Relay Channels

    Directory of Open Access Journals (Sweden)

    Rahul Vaze

    2009-01-01

    Multihop relay channels use multiple relay stages, each with multiple relay nodes, to facilitate communication between a source and destination. Previously, distributed space-time codes were proposed to maximize the achievable diversity-multiplexing tradeoff; however, they fail to achieve all the points of the optimal diversity-multiplexing tradeoff. In the presence of a low-rate feedback link from the destination to each relay stage and the source, this paper proposes an end-to-end antenna selection (EEAS) strategy as an alternative to distributed space-time codes. The EEAS strategy uses a subset of antennas of each relay stage for transmission of the source signal to the destination, with amplification and forwarding at each relay stage. The subsets are chosen such that they maximize the end-to-end mutual information at the destination. The EEAS strategy achieves the corner points of the optimal diversity-multiplexing tradeoff (corresponding to maximum diversity gain and maximum multiplexing gain) and achieves better diversity gain at intermediate values of multiplexing gain than the best-known distributed space-time coding strategies. A distributed compress-and-forward (CF) strategy is also proposed to achieve all points of the optimal diversity-multiplexing tradeoff for a two-hop relay channel with multiple relay nodes.
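
    To make the selection idea concrete, the toy sketch below enumerates relay-antenna subsets for a single two-hop amplify-and-forward link and keeps the subset with the largest approximate end-to-end rate log2(1 + SNR_e2e). Treating the weaker hop as the bottleneck is a deliberate simplification, and the channel values, SNR and selection criterion are illustrative rather than the paper's exact formulation:

```python
import itertools
import numpy as np

def select_relay_antennas(h_sr, h_rd, n_select, snr=10.0):
    """h_sr, h_rd: per-antenna complex gains source->relay and relay->destination."""
    best_subset, best_rate = None, -np.inf
    for subset in itertools.combinations(range(len(h_sr)), n_select):
        idx = list(subset)
        snr_hop1 = snr * np.sum(np.abs(h_sr[idx]) ** 2)
        snr_hop2 = snr * np.sum(np.abs(h_rd[idx]) ** 2)
        rate = np.log2(1.0 + min(snr_hop1, snr_hop2))   # crude AF chain: weaker hop dominates
        if rate > best_rate:
            best_subset, best_rate = subset, rate
    return best_subset, best_rate

rng = np.random.default_rng(0)
h_sr = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)   # Rayleigh fading
h_rd = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
print(select_relay_antennas(h_sr, h_rd, n_select=2))
```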

  1. The simulated spectrum of the OGRE X-ray EM-CCD camera system

    Science.gov (United States)

    Lewis, M.; Soman, M.; Holland, A.; Lumb, D.; Tutt, J.; McEntaffer, R.; Schultz, T.; Holland, K.

    2017-12-01

    The X-ray astronomical telescopes in use today, such as Chandra and XMM-Newton, use X-ray grating spectrometers to probe the high energy physics of the Universe. These instruments typically use reflective optics to focus X-rays onto gratings that disperse the incident X-rays across a detector, often a Charge-Coupled Device (CCD). The X-ray energy is determined from the position at which it was detected on the CCD. Improved technology for the next generation of X-ray grating spectrometers has been developed and will be tested on a sounding rocket experiment known as the Off-plane Grating Rocket Experiment (OGRE). OGRE aims to capture the highest resolution soft X-ray spectrum of Capella, a well-known astronomical X-ray source, during an observation period lasting between 3 and 6 minutes, whilst proving the performance and suitability of three key components. These three components consist of a telescope made from silicon mirrors, gold-coated silicon X-ray diffraction gratings, and a camera comprising four Electron-Multiplying (EM) CCDs that will be arranged to observe the soft X-rays dispersed by the gratings. EM-CCDs have an architecture similar to standard CCDs, with the addition of an EM gain register where the electron signal is amplified so that the effective signal-to-noise ratio of the imager is improved. The devices also have highly favourable quantum efficiency values for detecting soft X-ray photons. On OGRE, this improved detector performance allows for easier identification of low energy X-rays and fast readouts, because the amplified signal charge makes readout noise almost negligible. A simulation that applies the OGRE instrument performance to the Capella soft X-ray spectrum has been developed, which allows the distribution of X-rays onto the EM-CCDs to be predicted. A proposed optical model is also discussed which would enable the mission's minimum success criterion for photon counts to have a high chance of being met with the shortest possible
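
    A rough Monte Carlo sketch of why the EM gain register makes read noise 'almost negligible' for soft X-ray photon counting is given below. The silicon pair-creation energy, Fano factor, gain and read noise are typical textbook values, not OGRE's calibrated parameters, and the gamma-distributed gain is a standard approximation of the stochastic multiplication:

```python
import numpy as np

W_SI = 3.65        # eV per electron-hole pair in silicon
FANO = 0.115       # Fano factor for silicon

def detect_photons(energies_ev, em_gain=300.0, read_noise_e=50.0, rng=None):
    rng = rng or np.random.default_rng()
    n_e = energies_ev / W_SI                                   # mean primary electrons per photon
    primaries = rng.normal(n_e, np.sqrt(FANO * n_e))           # Fano-limited generation
    amplified = rng.gamma(shape=np.maximum(primaries, 1.0), scale=em_gain)  # stochastic EM gain
    measured = (amplified + rng.normal(0.0, read_noise_e, size=amplified.shape)) / em_gain
    return measured * W_SI                                     # back to an energy estimate in eV

photons = np.full(10_000, 525.0)                               # ~0.53 keV soft X-ray line
energies = detect_photons(photons)
print(f"mean {energies.mean():.1f} eV, sigma {energies.std():.1f} eV")
```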

  2. End-to-end Structural Restriction of α-Synuclein and Its Influence on Amyloid Fibril Formation

    International Nuclear Information System (INIS)

    Hong, Chul Suk; Park, Jae Hyung; Choe, Young Jun; Paik, Seung R.

    2014-01-01

    Relationship between molecular freedom of amyloidogenic protein and its self-assembly into amyloid fibrils has been evaluated with α-synuclein, an intrinsically unfolded protein related to Parkinson's disease, by restricting its structural plasticity through an end-to-end disulfide bond formation between two newly introduced cysteine residues on the N- and C-termini. Although the resulting circular form of α-synuclein exhibited an impaired fibrillation propensity, the restriction did not completely block the protein's interactive core since co-incubation with wild-type α-synuclein dramatically facilitated the fibrillation by producing distinctive forms of amyloid fibrils. The suppressed fibrillation propensity was instantly restored as the structural restriction was unleashed with β-mercaptoethanol. Conformational flexibility of the accreting amyloidogenic protein to pre-existing seeds has been demonstrated to be critical for fibrillar extension process by exerting structural adjustment to a complementary structure for the assembly

  3. End-to-end Structural Restriction of α-Synuclein and Its Influence on Amyloid Fibril Formation

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Chul Suk; Park, Jae Hyung; Choe, Young Jun; Paik, Seung R. [Seoul National University, Seoul (Korea, Republic of)

    2014-09-15

    Relationship between molecular freedom of amyloidogenic protein and its self-assembly into amyloid fibrils has been evaluated with α-synuclein, an intrinsically unfolded protein related to Parkinson's disease, by restricting its structural plasticity through an end-to-end disulfide bond formation between two newly introduced cysteine residues on the N- and C-termini. Although the resulting circular form of α-synuclein exhibited an impaired fibrillation propensity, the restriction did not completely block the protein's interactive core since co-incubation with wild-type α-synuclein dramatically facilitated the fibrillation by producing distinctive forms of amyloid fibrils. The suppressed fibrillation propensity was instantly restored as the structural restriction was unleashed with β-mercaptoethanol. Conformational flexibility of the accreting amyloidogenic protein to pre-existing seeds has been demonstrated to be critical for fibrillar extension process by exerting structural adjustment to a complementary structure for the assembly.

  4. Self-assembled nanogaps via seed-mediated growth of end-to-end linked gold nanorods

    DEFF Research Database (Denmark)

    Jain, Titoo; Westerlund, Axel Rune Fredrik; Johnson, Erik

    2009-01-01

    Gold nanorods (AuNRs) are of interest for a wide range of applications, ranging from imaging to molecular electronics, and they have been studied extensively for the past decade. An important issue in AuNR applications is the ability to self-assemble the rods in predictable structures… on the nanoscale. We here present a new way to end-to-end link AuNRs with a single or few linker molecules. Whereas methods reported in the literature so far rely on modification of the AuNRs after the synthesis, we here dimerize gold nanoparticle seeds with a water-soluble dithiol-functionalized polyethylene… that a large fraction of the rods are flexible around the hinging molecule in solution, as expected for a molecularly linked nanogap. By using excess of gold nanoparticles relative to the linking dithiol molecule, this method can provide a high probability that a single molecule is connecting the two rods…

  5. Increasing gas producer profitability with virtual well visibility via an end-to-end wireless Internet gas monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    McDougall, M. [Northrock Resources Ltd., Calgary, AB (Canada); Benterud, K. [Zed.i solutions, Calgary, AB (Canada)

    2003-07-01

    This PowerPoint presentation describes how Northrock Resources Ltd. increased profitability using Smart-Alek{sup TM} while avoiding high implementation costs. Smart-Alek is a new type of fully integrated, end-to-end electronic gas flow measurement (GFM) system based on a Field Intelligence Network and End User Interface (FINE). Smart-Alek can analyze gas production through public wireless communications and a web-browser delivery system. The system has enabled Northrock to increase gas volumes with more accurate measurement and reduced downtime. In addition, operating costs have decreased because the frequency of well visits has been reduced and the administrative procedures of data collection are more efficient. The real-time well visibility of the tool has proven to be very effective in optimizing business profitability. 7 figs.

  6. End-to-End Trajectory for Conjunction Class Mars Missions Using Hybrid Solar-Electric/Chemical Transportation System

    Science.gov (United States)

    Chai, Patrick R.; Merrill, Raymond G.; Qu, Min

    2016-01-01

    NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture in which both chemical and solar-electric propulsion systems are used to deliver crew and cargo to exploration destinations. By combining chemical and solar-electric propulsion into a single spacecraft and applying each where it is most effective, the hybrid architecture enables a series of Mars trajectories that are more fuel efficient than an all chemical propulsion architecture without significant increases to trip time. The architecture calls for the aggregation of exploration assets in cislunar space prior to departure for Mars and utilizes high energy lunar-distant high Earth orbits for the final staging prior to departure. This paper presents the detailed analysis of various cislunar operations for the EMC Hybrid architecture as well as the result of the higher fidelity end-to-end trajectory analysis to understand the implications of the design choices on the Mars exploration campaign.

  7. SU-E-T-282: Dose Measurements with An End-To-End Audit Phantom for Stereotactic Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Jones, R; Artschan, R [Calvary Mater Newcastle, Newcastle, NSW (Australia); Thwaites, D [University of Sydney, Sydney, NSW (Australia); Lehmann, J [Calvary Mater Newcastle, Newcastle, NSW (Australia); University of Sydney, Sydney, NSW (Australia)

    2015-06-15

    Purpose: Report on dose measurements as part of an end-to-end test for stereotactic radiotherapy, using a new audit tool which allows audits to be performed efficiently either by an onsite team or as a postal audit. Methods: Film measurements have been performed with a new Stereotactic Cube Phantom. The phantom has been designed to perform Winston-Lutz type position verification measurements and dose measurements in one setup. It comprises a plastic cube with a high density ball in its centre (used for MV imaging with film or EPID) and low density markers in the periphery (used for Cone Beam Computed Tomography, CBCT, imaging). It also features strategically placed gold markers near the posterior and right surfaces, which can be used to calculate phantom rotations on MV images. Slit-like openings allow insertion of film or other detectors. The phantom was scanned and small field treatment plans were created. The fields do not traverse any inhomogeneities of the phantom on their paths to the measurement location. The phantom was set up at the delivery system using CBCT imaging. The calculated treatment fields were delivered, each with a piece of radiochromic film (EBT3) placed in the anterior film holder of the phantom. MU had been selected in planning to achieve similar exposures on all films. Calibration films were exposed in solid water for dose levels around the expected doses. Films were scanned and analysed following established procedures. Results: Setup of the cube showed excellent suitability for CBCT 3D alignment. MV imaging with EPID allowed for clear identification of all markers. Film-based dose measurements showed good agreement for MLC-created fields down to 0.5 mm × 0.5 mm. Conclusion: An end-to-end audit phantom for stereotactic radiotherapy has been developed and tested.

  8. Evaluation of Techniques to Detect Significant Network Performance Problems using End-to-End Active Network Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; /SLAC; Grigoriev, Maxim; /Fermilab; Haro, Felipe; /Chile U., Catolica; Nazir, Fawad; /NUST, Rawalpindi; Sandford, Mark

    2006-01-25

    End-to-end detection of faults and performance problems in wide-area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and the dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In our paper we propose new techniques to detect network performance problems proactively in close to real time, without relying on static thresholds and SNMP-MIB information. We describe and compare the use of several different algorithms that we have implemented to detect persistent network problems using anomalous-variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbits/s to 1000 Mbit/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes, most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our
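
    The paper's own algorithms are not reproduced here, but the flavour of threshold-free anomalous-variation detection on a latency series can be sketched with an exponentially weighted baseline: flag a point when it deviates from the running mean by more than k running deviations, then update the baseline. This is an illustrative stand-in, not one of the authors' algorithms:

```python
import numpy as np

def ewma_anomalies(series, alpha=0.1, k=4.0):
    """Indices where the series departs sharply from its exponentially weighted baseline."""
    mean, var, flags = float(series[0]), 0.0, []
    for i, x in enumerate(series[1:], start=1):
        dev = np.sqrt(var)
        if dev > 0 and abs(x - mean) > k * dev:
            flags.append(i)                                  # anomalous variation
        mean = alpha * x + (1 - alpha) * mean                # update baseline after the test
        var = alpha * (x - mean) ** 2 + (1 - alpha) * var
    return flags

rtt = np.concatenate([np.random.normal(120, 3, 200),         # stable path, ~120 ms
                      np.random.normal(190, 3, 50)])         # persistent step change
print(ewma_anomalies(rtt))
```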

  9. A new and efficient transient noise analysis technique for simulation of CCD image sensors or particle detectors

    International Nuclear Information System (INIS)

    Bolcato, P.; Jarron, P.; Poujois, R.

    1993-01-01

    CCD image sensors and switched-capacitor circuits used for particle detectors have a certain noise level that affects the resolution of the detector. A new noise simulation technique for these devices is presented that has been implemented in the circuit simulator ELDO. The approach is particularly useful for noise simulation in analog sampling circuits. A comparison between simulations and experimental results has been made and is shown for a 1.5 μm CMOS current-mode amplifier designed for high-rate particle detectors. (R.P.) 5 refs., 7 figs

  10. End-to-end models for marine ecosystems: Are we on the precipice of a significant advance or just putting lipstick on a pig?

    Directory of Open Access Journals (Sweden)

    Kenneth A. Rose

    2012-02-01

    There has been a rapid rise in the development of end-to-end models for marine ecosystems over the past decade. Some reasons for this rise include the need for predicting effects of climate change on biota and dissatisfaction with existing models. While the benefits of a well-implemented end-to-end model are straightforward, there are many challenges. In the short term, my view is that the major role of end-to-end models is to push the modelling community forward, and to identify critical data so that these data can be collected now and thus be available for the next generation of end-to-end models. I think we should emulate physicists and build theoretically-oriented models first, and then collect the data. In the long term, end-to-end models will increase their skill, data collection will catch up, and end-to-end models will move towards site-specific applications with forecasting and management capabilities. One pathway into the future is individual efforts, over-promise, and repackaging of poorly performing component submodels (“lipstick on a pig”). The other pathway is a community-based collaborative effort, with appropriate caution and thoughtfulness, so that the needed improvements are achieved (“significant advance”). The promise of end-to-end modelling is great. We should act now to avoid missing a great opportunity.

  11. An End-to-End Modeling and Simulation Testbed (EMAST) to Support Detailed Quantitative Evaluations of GIG Transport Services

    National Research Council Canada - National Science Library

    Comparetto, G; Schult, N; Mirhakkak, M; Chen, L; Wade, R; Duffalo, S

    2005-01-01

    .... A variety of services must be provided to the users including management of resources to support QoS, a transition path from IPv4 to IPv6, and efficient networking across heterogeneous networks (i.e...

  12. Double 90 Degrees Counterrotated End-to-End-Anastomosis: An Experimental Study of an Intestinal Anastomosis Technique.

    Science.gov (United States)

    Holzner, Philipp; Kulemann, Birte; Seifert, Gabriel; Glatz, Torben; Chikhladze, Sophia; Höppner, Jens; Hopt, Ulrich; Timme, Sylvia; Bronsert, Peter; Sick, Olivia; Zhou, Cheng; Marjanovic, Goran

    2015-06-01

    The aim of the article is to investigate a new anastomotic technique compared with standardized intestinal anastomotic procedures. A total of 32 male Wistar rats were randomized to three groups. In the Experimental Group (n = 10), the new double 90 degrees inversely rotated anastomosis was used, in the End Group (n = 10) a single-layer end-to-end anastomosis, and in the Side Group (n = 12) a single-layer side-to-side anastomosis. All anastomoses were done using interrupted sutures. On postoperative day 4, rats were relaparotomized. Bursting pressure, hydroxyproline concentration, a semiquantitative adhesion score and two histological anastomotic healing scores (mucosal healing according to Chiu and overall anastomotic healing according to Verhofstad) were collected. Most data are presented as median (range). p < 0.05 was considered significant. Anastomotic insufficiency occurred only in one rat of the Side Group. Median bursting pressure in the Experimental Group was 105 mm Hg (range = 72-161 mm Hg), significantly higher in the End Group (164 mm Hg; range = 99-210 mm Hg; p = 0.021) and lower in the Side Group by trend (81 mm Hg; range = 59-122 mm Hg; p = 0.093). Hydroxyproline concentration did not differ significantly in between the groups. The adhesion score was 2.5 (range = 1-3) in the Experimental Group, 2 (range = 1-2) in the End Group, but there were significantly more adhesions in the Side Group (range = 3-4); p = 0.020 versus Experimental Group, p < 0.001 versus End Group. The Chiu Score showed the worst mucosal healing in the Experimental Group. The overall Verhofstad Score was significantly worse (mean = 2.032; standard deviation [SD] = 0.842) p = 0.031 and p = 0.002 in the Experimental Group, compared with the Side Group (mean = 1.729; SD = 0.682) and the End Group (mean = 1.571; SD = 0.612). The new anastomotic technique is feasible and did not show any relevant complication. Even though it was superior to the side-to-side anastomosis by trend with

  13. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    Science.gov (United States)

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images used as training data or directly used as a black box to extract the deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply domain transferred deep convolutional neural network for building a deep model; and then develop an overall deep learning architecture based on the raw pixels of original biomedical images using supervised training. In our model, we do not need the manual design of the feature space, seek an effective feature vector classifier or segment specific detection object and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs or long times to wait for training a perfect deep model, which are the main problems to train deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust
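
    A minimal sketch of the general transfer-learning pattern described above, using an ImageNet-pretrained ResNet-18 from torchvision as a stand-in backbone with a new classification head. The backbone choice, frozen layers, class count, and the torchvision >= 0.13 weights API are assumptions for illustration, not the authors' architecture or training setup.

      import torch
      import torch.nn as nn
      from torchvision import models

      def build_transfer_classifier(num_classes: int, freeze_backbone: bool = True) -> nn.Module:
          """Domain-transferred CNN: pretrained backbone plus a new classifier head.

          ResNet-18 is only a stand-in backbone for illustration; the paper's exact
          network, pretraining source and head design may differ.
          """
          backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
          if freeze_backbone:
              for param in backbone.parameters():
                  param.requires_grad = False
          backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # trainable head
          return backbone

      model = build_transfer_classifier(num_classes=3)   # class count is assumed
      dummy_batch = torch.randn(4, 3, 224, 224)          # 4 RGB images, 224x224
      logits = model(dummy_batch)
      print(logits.shape)                                # torch.Size([4, 3])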

  14. End-to-end Cyberinfrastructure and Data Services for Earth System Science Education and Research: A vision for the future

    Science.gov (United States)

    Ramamurthy, M. K.

    2006-05-01

    yet revolutionary way of building applications and methods to connect and exchange information over the Web. This new approach, based on XML - a widely accepted format for exchanging data and corresponding semantics over the Internet - enables applications, computer systems, and information processes to work together in fundamentally different ways. Likewise, the advent of digital libraries, grid computing platforms, interoperable frameworks, standards and protocols, open-source software, and community atmospheric models have been important drivers in shaping the use of a new generation of end-to-end cyberinfrastructure for solving some of the most challenging scientific and educational problems. In this talk, I will present an overview of the scientific, technological, and educational landscape, discuss recent developments in cyberinfrastructure, and describe Unidata's role in and vision for providing easy-to-use, robust, end-to-end data services for solving geoscientific problems and advancing student learning.

  15. End-to-end Cyberinfrastructure and Data Services for Earth System Science Education and Research: Unidata's Plans and Directions

    Science.gov (United States)

    Ramamurthy, M.

    2005-12-01

    work together in a fundamentally different way. Likewise, the advent of digital libraries, grid computing platforms, interoperable frameworks, standards and protocols, open-source software, and community atmospheric models have been important drivers in shaping the use of a new generation of end-to-end cyberinfrastructure for solving some of the most challenging scientific and educational problems. In this talk, I will present an overview of the scientific, technological, and educational drivers and discuss recent developments in cyberinfrastructure and Unidata's role and directions in providing robust, end-to-end data services for solving geoscientific problems and advancing student learning.

  16. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    Energy Technology Data Exchange (ETDEWEB)

    Wen, N; Chetty, I [Henry Ford Health System, Detroit, MI (United States); Snyder, K [Henry Ford Hospital System, Detroit, MI (United States); Scheib, S [Varian Medical System, Barton (Switzerland); Qin, Y; Li, H [Henry Ford Health System, Detroit, Michigan (United States)

    2016-06-15

    Purpose: To implement a novel image analysis technique, the “center pixel method”, to quantify the accuracy of end-to-end tests of a frameless, image-guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness. The treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp = 77, mAs = 1022, slice thickness 1 mm) were acquired and registered to the reference CT images. 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston-Lutz (WL) tests were performed to quantify the targeting accuracy of the system at 15 combinations of gantry, collimator and couch positions. The images were analyzed using two different methods. a) The classic method: the deviation was calculated by measuring the radial distance between the center of the central BB and the full width at half maximum of the radiation field. b) The center pixel method: since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance were 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw-, MLC- and cone-defined field sizes respectively. When the center pixel method was used, the mean and standard deviation were 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm respectively. Conclusion: Our results demonstrate that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American
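
    A minimal sketch of the geometric comparison behind the center pixel method: locate the BB centroid in the EPID image and report its radial offset from the panel's central pixel (taken here to represent the beam axis once the IsoCal projection offset has been applied). The pixel pitch, array size, and centroid threshold are assumed values for illustration.

      import numpy as np

      def bb_centroid(image: np.ndarray, threshold_fraction: float = 0.5) -> tuple:
          """Intensity-weighted centroid of the (dark) BB in a normalized EPID image."""
          inverted = image.max() - image          # make the radio-opaque BB bright
          mask = inverted > threshold_fraction * inverted.max()
          rows, cols = np.nonzero(mask)
          weights = inverted[rows, cols]
          return (np.average(rows, weights=weights), np.average(cols, weights=weights))

      def center_pixel_deviation(image: np.ndarray, pixel_pitch_mm: float = 0.39) -> float:
          """Radial distance (mm) between the BB centroid and the image's central pixel.

          Assumes the imager projection offset has already been applied, so the
          central pixel stands for the beam axis; pixel pitch is an assumed value.
          """
          center = ((image.shape[0] - 1) / 2.0, (image.shape[1] - 1) / 2.0)
          bb = bb_centroid(image)
          return pixel_pitch_mm * np.hypot(bb[0] - center[0], bb[1] - center[1])

      # Synthetic example: a dark BB two pixels away from the panel center.
      img = np.ones((101, 101))
      img[52, 50] = 0.0                            # BB attenuates the signal
      print(f"deviation = {center_pixel_deviation(img, pixel_pitch_mm=0.5):.2f} mm")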

  17. End-to-end probability for an interacting center vortex world line in Yang-Mills theory

    International Nuclear Information System (INIS)

    Teixeira, Bruno F.I.; Lemos, Andre L.L. de; Oxman, Luis E.

    2011-01-01

    Full text: The understanding of quark confinement is a very important open problem in Yang-Mills theory. In this regard, nontrivial topological defects are expected to play a relevant role to achieve a solution. Here we are interested in how to deal with these structures, relying on the Cho-Faddeev-Niemi decomposition and the possibility it offers to describe defects in terms of a local color frame. In particular, the path integral for a single center vortex is a fundamental object to handle the ensemble integration. As is well known, in three dimensions center vortices are string-like and the associated physics is closely related with that of polymers. Using recent techniques developed in the latter context, we present in this work a detailed derivation of the equation for the end-to-end probability for a center vortex world line, including the effects of interactions. Its solution can be associated with a Green function that depends on the position and orientation at the boundaries, where monopole-like instantons are placed. In the limit of semiflexible polymers, an expansion only keeping the lower angular momenta for the final orientation leads to a reduced Green function for a complex vortex field minimally coupled to the dual Yang-Mills fields. This constitutes a key ingredient to propose an effective model for correlated monopoles, center vortices and the dual fields. (author)
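
    For orientation, the polymer-physics object referred to here is the end-to-end Green function of a semiflexible (worm-like) chain. In its standard form it satisfies a diffusion-like equation in position and tangent orientation, with interactions entering through a potential term; the equation below is that textbook form (persistence length \ell_p, assumed interaction potential V), shown only to fix notation, and is not the specific interacting-vortex equation derived in the paper.

      % Standard end-to-end Green function equation for a semiflexible chain of
      % length L, position x and unit tangent u; V(x) is an assumed interaction
      % potential (V = 0 recovers the free worm-like chain).
      \frac{\partial G(x, u; L)}{\partial L}
        = \left[ \frac{1}{2\ell_p} \nabla_u^{2} - u \cdot \nabla_x - V(x) \right] G(x, u; L)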

  18. Minimizing Barriers in Learning for On-Call Radiology Residents-End-to-End Web-Based Resident Feedback System.

    Science.gov (United States)

    Choi, Hailey H; Clark, Jennifer; Jay, Ann K; Filice, Ross W

    2018-02-01

    Feedback is an essential part of medical training, where trainees are provided with information regarding their performance and further directions for improvement. In diagnostic radiology, feedback entails a detailed review of the differences between the residents' preliminary interpretation and the attendings' final interpretation of imaging studies. While the on-call experience of independently interpreting complex cases is important to resident education, the more traditional synchronous "read-out" or joint review is impossible due to multiple constraints. Without an efficient method to compare reports, grade discrepancies, convey salient teaching points, and view images, valuable lessons in image interpretation and report construction are lost. We developed a streamlined web-based system, including report comparison and image viewing, to minimize barriers in asynchronous communication between attending radiologists and on-call residents. Our system provides real-time, end-to-end delivery of case-specific and user-specific feedback in a streamlined, easy-to-view format. We assessed quality improvement subjectively through surveys and objectively through participation metrics. Our web-based feedback system improved user satisfaction for both attending and resident radiologists, and increased attending participation, particularly with regards to cases where substantive discrepancies were identified.

  19. An anthropomorphic multimodality (CT/MRI) head phantom prototype for end-to-end tests in ion radiotherapy

    International Nuclear Information System (INIS)

    Gallas, Raya R.; Huenemohr, Nora; Runz, Armin; Niebuhr, Nina I.; Greilich, Steffen; Jaekel, Oliver

    2015-01-01

    With the increasing complexity of external beam therapy "end-to-end" tests are intended to cover every step from therapy planning through to follow-up in order to fulfill the higher demands on quality assurance. As magnetic resonance imaging (MRI) has become an important part of the treatment process, established phantoms such as the Alderson head cannot fully be used for those tests and novel phantoms have to be developed. Here, we present a feasibility study of a customizable multimodality head phantom. It is initially intended for ion radiotherapy but may also be used in photon therapy. As basis for the anthropomorphic head shape we have used a set of patient computed tomography (CT) images. The phantom recipient consisting of epoxy resin was produced by using a 3D printer. It includes a nasal air cavity, a cranial bone surrogate (based on dipotassium phosphate), a brain surrogate (based on agarose gel), and a surrogate for cerebrospinal fluid (based on distilled water). Furthermore, a volume filled with normoxic dosimetric gel mimicked a tumor. The entire workflow of a proton therapy could be successfully applied to the phantom. CT measurements revealed CT numbers agreeing with reference values for all surrogates in the range from 2 HU to 978 HU (120 kV). MRI showed the desired contrasts between the different phantom materials especially in T2-weighted images (except for the bone surrogate). T2-weighted readout of the polymerization gel dosimeter allowed approximate range verification.

  20. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines; the goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid the obstacle in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements in the network structure and training parameters.
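
    The augmentation step mentioned above is straightforward to sketch: below is an illustrative implementation of the two noise types (Gaussian and salt-and-pepper) applied to a normalized image array. The noise levels are assumed values, not those used in the work.

      import numpy as np

      def add_gaussian_noise(image, sigma=0.05, rng=None):
          """Additive Gaussian noise on an image with values in [0, 1]."""
          rng = rng or np.random.default_rng()
          return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

      def add_salt_and_pepper(image, amount=0.02, rng=None):
          """Set a random fraction of pixels to 0 (pepper) or 1 (salt)."""
          rng = rng or np.random.default_rng()
          noisy = image.copy()
          coords = rng.random(image.shape)
          noisy[coords < amount / 2] = 0.0          # pepper
          noisy[coords > 1 - amount / 2] = 1.0      # salt
          return noisy

      frame = np.full((120, 160), 0.5)              # dummy grayscale camera frame
      augmented = add_salt_and_pepper(add_gaussian_noise(frame), amount=0.02)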

  1. SampleCNN: End-to-End Deep Convolutional Neural Networks Using Very Small Filters for Music Classification

    Directory of Open Access Journals (Sweden)

    Jongpil Lee

    2018-01-01

    Full Text Available Convolutional Neural Networks (CNNs) have been applied to diverse machine learning tasks for different modalities of raw data in an end-to-end fashion. In the audio domain, a raw waveform-based approach has been explored to directly learn hierarchical characteristics of audio. However, the majority of previous studies have limited their model capacity by taking a frame-level structure similar to short-time Fourier transforms. We previously proposed a CNN architecture which learns representations using sample-level filters beyond typical frame-level input representations. The architecture showed comparable performance to the spectrogram-based CNN model in music auto-tagging. In this paper, we extend the previous work in three ways. First, considering that the sample-level model requires much longer training time, we progressively downsample the input signals and examine how it affects the performance. Second, we extend the model using a multi-level and multi-scale feature aggregation technique and subsequently conduct transfer learning for several music classification tasks. Finally, we visualize filters learned by the sample-level CNN in each layer to identify hierarchically learned features and show that they are sensitive to log-scaled frequency.
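
    To make the "sample-level filter" idea concrete, the sketch below stacks 1-D convolutions with very small kernels (size 3, stride 3) directly on a raw waveform, in the spirit of the architecture described. The layer count, channel widths, input length, and the 50-output tagging head are illustrative assumptions.

      import torch
      import torch.nn as nn

      class SampleLevelBlock(nn.Module):
          """Conv1d with a very small kernel applied directly to raw audio samples."""
          def __init__(self, in_ch, out_ch):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=3),
                  nn.BatchNorm1d(out_ch),
                  nn.ReLU(),
              )
          def forward(self, x):
              return self.net(x)

      # Illustrative stack: each block downsamples the time axis by a factor of 3.
      model = nn.Sequential(
          SampleLevelBlock(1, 64),
          SampleLevelBlock(64, 64),
          SampleLevelBlock(64, 128),
          nn.AdaptiveAvgPool1d(1),
          nn.Flatten(),
          nn.Linear(128, 50),      # e.g. 50 auto-tagging outputs (assumed)
      )

      waveform = torch.randn(8, 1, 59049)   # batch of raw audio snippets (3^10 samples)
      print(model(waveform).shape)          # torch.Size([8, 50])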

  2. An anthropomorphic multimodality (CT/MRI) head phantom prototype for end-to-end tests in ion radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Gallas, Raya R.; Huenemohr, Nora; Runz, Armin; Niebuhr, Nina I.; Greilich, Steffen [German Cancer Research Center (DKFZ), Heidelberg (Germany). Div. of Medical Physics in Radiation Oncology; National Center for Radiation Research in Oncology, Heidelberg (Germany). Heidelberg Institute of Radiation Oncology (HIRO); Jaekel, Oliver [German Cancer Research Center (DKFZ), Heidelberg (Germany). Div. of Medical Physics in Radiation Oncology; National Center for Radiation Research in Oncology, Heidelberg (Germany). Heidelberg Institute of Radiation Oncology (HIRO); Heidelberg University Hospital (Germany). Dept. of Radiation Oncology; Heidelberg Ion-Beam Therapy Center (HIT), Heidelberg (Germany)

    2015-07-01

    With the increasing complexity of external beam therapy "end-to-end" tests are intended to cover every step from therapy planning through to follow-up in order to fulfill the higher demands on quality assurance. As magnetic resonance imaging (MRI) has become an important part of the treatment process, established phantoms such as the Alderson head cannot fully be used for those tests and novel phantoms have to be developed. Here, we present a feasibility study of a customizable multimodality head phantom. It is initially intended for ion radiotherapy but may also be used in photon therapy. As basis for the anthropomorphic head shape we have used a set of patient computed tomography (CT) images. The phantom recipient consisting of epoxy resin was produced by using a 3D printer. It includes a nasal air cavity, a cranial bone surrogate (based on dipotassium phosphate), a brain surrogate (based on agarose gel), and a surrogate for cerebrospinal fluid (based on distilled water). Furthermore, a volume filled with normoxic dosimetric gel mimicked a tumor. The entire workflow of a proton therapy could be successfully applied to the phantom. CT measurements revealed CT numbers agreeing with reference values for all surrogates in the range from 2 HU to 978 HU (120 kV). MRI showed the desired contrasts between the different phantom materials especially in T2-weighted images (except for the bone surrogate). T2-weighted readout of the polymerization gel dosimeter allowed approximate range verification.

  3. Innovative strategy for effective critical laboratory result management: end-to-end process using automation and manual call centre.

    Science.gov (United States)

    Ti, Lian Kah; Ang, Sophia Bee Leng; Saw, Sharon; Sethi, Sunil Kumar; Yip, James W L

    2012-08-01

    Timely reporting and acknowledgement are crucial steps in critical laboratory results (CLR) management. The authors previously showed that an automated pathway incorporating short messaging system (SMS) texts, auto-escalation, and manual telephone back-up improved the rate and speed of physician acknowledgement compared with manual telephone calling alone. This study investigated if it also improved the rate and speed of physician intervention to CLR and whether utilising the manual back-up affected intervention rates. Data from seven audits between November 2007 and January 2011 were analysed. These audits were carried out to assess the robustness of CLR reporting process in the authors' institution. Comparisons were made in the rate and speed of acknowledgement and intervention between the audits performed before and after automation. Using the automation audits, the authors compared intervention data between communication with SMS only and when manual intervention was required. 1680 CLR were reported during the audit periods. Automation improved the rate (100% vs 84.2%; pautomation audits, the use of SMS only did not improve physician intervention rates. The automated communication pathway improved physician intervention rate and time in tandem with improved acknowledgement rate and time when compared with manual telephone calling. The use of manual intervention to augment automation did not adversely affect physician intervention rate, implying that an end-to-end pathway was more important than automation alone.
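
    The escalation logic described above can be sketched as a small decision procedure: an automated SMS is sent first, and a manual call-centre telephone call is triggered only if no acknowledgement arrives within a timeout. The function names, timeout and polling interval below are hypothetical placeholders, not the institution's actual configuration.

      import time

      ACK_TIMEOUT_S = 15 * 60        # assumed escalation timeout, not the audited value

      def send_sms(physician, message):
          print(f"SMS to {physician}: {message}")               # placeholder transport

      def call_centre_phone(physician, message):
          print(f"Call centre phones {physician}: {message}")   # placeholder back-up

      def report_critical_result(physician, message, acknowledged, clock=time.monotonic):
          """Escalate from automated SMS to a manual phone call if not acknowledged.

          `acknowledged` is a callable returning True once the physician has
          confirmed receipt (e.g. by replying to the SMS).
          """
          send_sms(physician, message)
          deadline = clock() + ACK_TIMEOUT_S
          while clock() < deadline:
              if acknowledged():
                  return "acknowledged via SMS"
              time.sleep(30)                                     # poll interval (assumed)
          call_centre_phone(physician, message)                  # manual back-up
          return "escalated to call centre"

      # Demo with an immediately acknowledging physician (hypothetical values).
      print(report_critical_result("Dr. A", "K+ critically high", acknowledged=lambda: True))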

  4. Delayed primary end-to-end anastomosis for traumatic long segment urethral stricture and its short-term outcomes

    Directory of Open Access Journals (Sweden)

    Rajarshi Kumar

    2017-01-01

    Full Text Available Background: The purpose of this study is to evaluate the aetiology of posterior urethral stricture in children and to analyse the results after delayed primary repair with extensive distal urethral mobilisation. Materials and Methods: This was a retrospective study carried out in a tertiary care centre from January 2009 to December 2013. Results: Eight children with a median age of 7.5 years (range 4–11 years) underwent delayed anastomotic urethroplasty: six through a perineal and two through a combined perineal and transpubic approach. All eight children had a long-segment (>2 cm) stricture: three posterior and five anterior urethral strictures. Over a mean follow-up period of 33 months (range 24–48 months), all were passing urine with good flow and stream. Conclusion: End-to-end anastomosis in post-traumatic long-segment posterior urethral stricture between the prostatic and penile urethra in children is possible by a perineal or combined perineal and transpubic approach, with good results and without any urethral replacement.

  5. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    Science.gov (United States)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  6. Increasing gas producer profitability with virtual well visibility via an end-to-end, wireless Internet gas monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    McDougall, M.; Coleman, K.; Beck, R.; Lyon, R.; Potts, R. [Northrock Resources Ltd., Calgary, AB (Canada); Benterud, K. [Zed.i solutions, Calgary, AB (Canada)

    2003-07-01

    Most gas producing companies still use 100-year old technology to measure gas volumes because of the prohibitive costs of implementing corporate-wide electronic information systems to replace circular mechanical chart technology. This paper describes how Northrock Resources Ltd. increased profitability using Smart-Alek™ while avoiding high implementation costs. Smart-Alek is a new type of fully integrated end-to-end electronic gas flow measurement (GFM) system based on Field Intelligence Network and End User Interference (FINE). Smart-Alek can analyze gas production through public wireless communications and a web-browser delivery system. The system has enabled Northrock to increase gas volumes with more accurate measurement and reduced downtime. In addition, operating costs were also decreased because the frequency of well visits was reduced and the administrative procedures of data collection were more efficient. The real-time well visibility of the tool has proven to be very effective in optimizing business profitability. 9 refs., 1 tab., 9 figs.

  7. End-to-end gene fusions and their impact on the production of multifunctional biomass degrading enzymes

    International Nuclear Information System (INIS)

    Rizk, Mazen; Antranikian, Garabed; Elleuche, Skander

    2012-01-01

    Highlights: ► Multifunctional enzymes offer an interesting approach for biomass degradation. ► Size and conformation of separate constructs play a role in the effectiveness of chimeras. ► A connecting linker allows for maximal flexibility and increased thermostability. ► Genes with functional similarities are the best choice for fusion candidates. -- Abstract: The depletion of fossil fuels, coupled with their rising price, has made the search for alternative energy resources more pressing. One of the topics gaining rapid interest is the utilization of lignocellulose, the main component of plants. Its primary constituents, cellulose and hemicellulose, can be degraded into simple sugars by a series of enzymes present in microorganisms, and these sugars are later used for bioethanol production. Thermophilic bacteria have proven to be an interesting source of the enzymes required for hydrolysis since they can withstand the high and denaturing temperatures usually required for processes involving biomass degradation. However, the cost associated with the whole enzymatic process is staggering. A solution for cost-effective and highly active production is the construction of multifunctional enzyme complexes harboring the function of more than one enzyme needed for the hydrolysis process. There are various strategies for the degradation of complex biomass, ranging from the regulation of the enzymes involved, to cellulosomes, to proteins harboring more than one enzymatic activity. In this review, the construction of multifunctional biomass degrading enzymes through end-to-end gene fusions, and its impact on production and activity through the choice of enzymes and linkers, is assessed.

  8. An anthropomorphic multimodality (CT/MRI) head phantom prototype for end-to-end tests in ion radiotherapy.

    Science.gov (United States)

    Gallas, Raya R; Hünemohr, Nora; Runz, Armin; Niebuhr, Nina I; Jäkel, Oliver; Greilich, Steffen

    2015-12-01

    With the increasing complexity of external beam therapy "end-to-end" tests are intended to cover every step from therapy planning through to follow-up in order to fulfill the higher demands on quality assurance. As magnetic resonance imaging (MRI) has become an important part of the treatment process, established phantoms such as the Alderson head cannot fully be used for those tests and novel phantoms have to be developed. Here, we present a feasibility study of a customizable multimodality head phantom. It is initially intended for ion radiotherapy but may also be used in photon therapy. As basis for the anthropomorphic head shape we have used a set of patient computed tomography (CT) images. The phantom recipient consisting of epoxy resin was produced by using a 3D printer. It includes a nasal air cavity, a cranial bone surrogate (based on dipotassium phosphate), a brain surrogate (based on agarose gel), and a surrogate for cerebrospinal fluid (based on distilled water). Furthermore, a volume filled with normoxic dosimetric gel mimicked a tumor. The entire workflow of a proton therapy could be successfully applied to the phantom. CT measurements revealed CT numbers agreeing with reference values for all surrogates in the range from 2 HU to 978 HU (120 kV). MRI showed the desired contrasts between the different phantom materials especially in T2-weighted images (except for the bone surrogate). T2-weighted readout of the polymerization gel dosimeter allowed approximate range verification. Copyright © 2015. Published by Elsevier GmbH.

  9. On cryptographic security of end-to-end encrypted connections in WhatsApp and Telegram messengers

    Directory of Open Access Journals (Sweden)

    Sergey V. Zapechnikov

    2017-11-01

    Full Text Available The aim of this work is to analyze the available possibilities for improving secure messaging with end-to-end connections under the conditions of an external adversary and a distrusted service provider. We made a comparative analysis of the cryptographic security mechanisms of two widely used messengers: Telegram and WhatsApp. It was found that Telegram is based on the MTProto protocol, while WhatsApp is based on the alternative Signal protocol. We examine the specific features of the messengers' implementations associated with random number generation on the most popular Android mobile platform. It was shown that Signal has better security properties. It is used in several other popular messengers such as TextSecure, RedPhone, Google Allo, Facebook Messenger and Signal, along with WhatsApp. A number of possible attacks on both messengers were analyzed in detail. In particular, we demonstrate that the metadata are poorly protected in both messengers. Metadata security may be one of the goals for further studies.

  10. Secondary link adaptation in cognitive radio networks: End-to-end performance with cross-layer design

    KAUST Repository

    Ma, Hao

    2012-04-01

    Under spectrum-sharing constraints, we consider a secondary link exploiting cross-layer combining of adaptive modulation and coding (AMC) at the physical layer with truncated automatic repeat request (T-ARQ) at the data link layer in cognitive radio networks. Both basic AMC and aggressive AMC are adopted to optimize the overall average spectral efficiency, subject to the interference constraints imposed by the primary user of the shared spectrum band and a target packet loss rate. We derive the optimal boundary points for choosing the AMC transmission modes in closed form, taking into account the channel state information from the secondary transmitter to both the primary receiver and the secondary receiver. Moreover, numerical results substantiate that, without any cost in the transmitter/receiver design or in the end-to-end delay, the scheme with aggressive AMC outperforms that with conventional AMC. The main reason is that, with aggressive AMC, the different transmission modes utilized in the initial packet transmission and the following retransmissions match the time-varying channel conditions better than the basic pattern. © 2012 IEEE.
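
    The physical-layer side of the cross-layer scheme reduces to a mode-selection rule: given SNR boundary points, the transmitter picks the highest-rate AMC mode whose boundary the current SNR exceeds. The thresholds and spectral efficiencies below are placeholders, not the closed-form boundary points derived in the paper.

      # (threshold_snr_db, spectral_efficiency_bits_per_symbol) -- illustrative values only
      AMC_MODES = [
          (5.0, 1.0),    # e.g. BPSK, rate-1/2
          (10.0, 2.0),   # e.g. QPSK
          (16.0, 4.0),   # e.g. 16-QAM
          (22.0, 6.0),   # e.g. 64-QAM
      ]

      def select_amc_mode(snr_db: float):
          """Pick the highest-rate mode whose SNR boundary point is satisfied."""
          chosen = None
          for threshold, efficiency in AMC_MODES:
              if snr_db >= threshold:
                  chosen = (threshold, efficiency)
          return chosen  # None means outage: defer transmission

      for snr in (3.0, 12.5, 25.0):
          print(snr, "->", select_amc_mode(snr))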

  11. Chinese Medical Question Answer Matching Using End-to-End Character-Level Multi-Scale CNNs

    Directory of Open Access Journals (Sweden)

    Sheng Zhang

    2017-07-01

    Full Text Available This paper focuses mainly on the problem of Chinese medical question answer matching, which is arguably more challenging than open-domain question answer matching in English due to the combination of its domain-restricted nature and the language-specific features of Chinese. We present an end-to-end character-level multi-scale convolutional neural framework in which character embeddings instead of word embeddings are used to avoid Chinese word segmentation in text preprocessing, and multi-scale convolutional neural networks (CNNs) are then introduced to extract contextual information from either question or answer sentences over different scales. The proposed framework can be trained with minimal human supervision and does not require any handcrafted features, rule-based patterns, or external resources. To validate our framework, we create a new text corpus, named cMedQA, by harvesting questions and answers from an online Chinese health and wellness community. The experimental results on the cMedQA dataset show that our framework significantly outperforms several strong baselines, and achieves an improvement of top-1 accuracy by up to 19%.
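
    A minimal sketch of the encoder described above: character embeddings followed by parallel 1-D convolutions of several kernel widths with max-over-time pooling, giving a fixed-size vector per sentence that can be compared between a question and a candidate answer. The vocabulary size, embedding dimension, kernel widths, and cosine-similarity matcher are illustrative assumptions.

      import torch
      import torch.nn as nn

      class MultiScaleCharCNN(nn.Module):
          """Character embeddings followed by parallel convolutions of several widths."""
          def __init__(self, vocab_size=6000, embed_dim=128, scales=(2, 3, 4), channels=100):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, embed_dim)
              self.convs = nn.ModuleList(
                  nn.Conv1d(embed_dim, channels, kernel_size=k, padding=k // 2) for k in scales
              )

          def forward(self, char_ids):                       # (batch, seq_len)
              x = self.embed(char_ids).transpose(1, 2)       # (batch, embed_dim, seq_len)
              feats = [conv(x).max(dim=2).values for conv in self.convs]  # max-over-time
              return torch.cat(feats, dim=1)                 # fixed-size sentence vector

      encoder = MultiScaleCharCNN()
      question = torch.randint(0, 6000, (4, 50))             # 4 questions, 50 characters each
      answer = torch.randint(0, 6000, (4, 80))
      sim = nn.functional.cosine_similarity(encoder(question), encoder(answer))
      print(sim.shape)                                        # torch.Size([4])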

  12. West Coast fish, mammal, and bird species diets - Developing end-to-end models of the California Current Large Marine Ecosystem

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to develop spatially discrete end-to-end models of the California Current LME, linking oceanography, biogeochemistry, food web...

  13. Gulf of California species and catch spatial distributions and historical time series - Developing end-to-end models of the Gulf of California

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to develop spatially discrete end-to-end models of the northern Gulf of California, linking oceanography, biogeochemistry, food web...

  14. West Coast fish, mammal, bird life history and abundance parameters - Developing end-to-end models of the California Current Large Marine Ecosystem

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to develop spatially discrete end-to-end models of the California Current LME, linking oceanography, biogeochemistry, food web...

  15. Adaptation and validation of a commercial head phantom for cranial radiosurgery dosimetry end-to-end audit.

    Science.gov (United States)

    Dimitriadis, Alexis; Palmer, Antony L; Thomas, Russell A S; Nisbet, Andrew; Clark, Catharine H

    2017-06-01

    To adapt and validate an anthropomorphic head phantom for use in a cranial radiosurgery audit. Two bespoke inserts were produced for the phantom: one for providing the target and organ at risk for delineation and the other for performing dose measurements. The inserts were tested to assess their positional accuracy. A basic treatment plan dose verification with an ionization chamber was performed to establish a baseline accuracy for the phantom and beam model. The phantom and inserts were then used to perform dose verification measurements of a radiosurgery plan. The dose was measured with alanine pellets, EBT extended dose film and a plastic scintillation detector (PSD). Both inserts showed reproducible positioning (±0.5 mm) and good positional agreement between them (±0.6 mm). The basic treatment plan measurements showed agreement to the treatment planning system (TPS) within 0.5%. Repeated film measurements showed consistent gamma passing rates with good agreement to the TPS. For 2%-2 mm global gamma, the mean passing rate was 96.7% and the variation in passing rates did not exceed 2.1%. The alanine pellets and PSD showed good agreement with the TPS (-0.1% and 0.3% dose difference in the target) and good agreement with each other (within 1%). The adaptations to the phantom showed acceptable accuracies. The presence of alanine and PSD do not affect film measurements significantly, enabling simultaneous measurements by all three detectors. Advances in knowledge: A novel method for thorough end-to-end test of radiosurgery, with capability to incorporate all steps of the clinical pathway in a time-efficient and reproducible manner, suitable for a national audit.

  16. The End-To-End Safety Verification Process Implemented to Ensure Safe Operations of the Columbus Research Module

    Science.gov (United States)

    Arndt, J.; Kreimer, J.

    2010-09-01

    The European Space Laboratory COLUMBUS was launched in February 2008 with NASA Space Shuttle Atlantis. Since successful docking and activation, this manned laboratory forms part of the International Space Station (ISS). Depending on the objectives of the Mission Increments, the on-orbit configuration of the COLUMBUS Module varies with each increment. This paper describes the end-to-end verification which has been implemented to ensure safe operations under the condition of a changing on-orbit configuration. That verification process has to cover not only the configuration changes as foreseen by the Mission Increment planning but also those configuration changes on short notice which become necessary due to near real-time requests initiated by crew or Flight Control, and changes - most challenging since unpredictable - due to on-orbit anomalies. Subject of the safety verification is, on one hand, the on-orbit configuration itself, including the hardware and software products, and on the other hand the related ground facilities needed for commanding of and communication to the on-orbit system. The operational products, e.g. the procedures prepared for crew and ground control in accordance with increment planning, are also subject of the overall safety verification. In order to analyse the on-orbit configuration for potential hazards and to verify the implementation of the related safety-required hazard controls, a hierarchical approach is applied. The key element of the analytical safety integration of the whole COLUMBUS Payload Complement, including hardware owned by International Partners, is the Integrated Experiment Hazard Assessment (IEHA). The IEHA especially identifies those hazardous scenarios which could potentially arise through physical and operational interaction of experiments. A major challenge is the implementation of a Safety process which has sufficient rigidity to provide reliable verification of on-board Safety and which likewise provides enough

  17. First Demonstration of Real-Time End-to-End 40 Gb/s PAM-4 System using 10-G Transmitter for Next Generation Access Applications

    DEFF Research Database (Denmark)

    Wei, Jinlong; Eiselt, Nicklas; Griesser, Helmut

    We demonstrate the first known experiment of a real-time end-to-end 40-Gb/s PAM-4 system for next generation access applications using 10G class transmitters only. Up to 25-dB upstream link budget for 20 km SMF is achieved.
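
    For readers unfamiliar with the modulation format, PAM-4 maps two bits onto one of four amplitude levels, doubling the bits per symbol relative to on-off keying. The sketch below shows a Gray-coded mapping, which is a common choice assumed here rather than a detail taken from the demonstration.

      import numpy as np

      # Gray-coded PAM-4: adjacent amplitude levels differ by a single bit.
      GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

      def pam4_modulate(bits):
          """Map a bit sequence (even length) to PAM-4 symbols at half the bit rate."""
          bits = np.asarray(bits).reshape(-1, 2)
          return np.array([GRAY_PAM4[tuple(pair)] for pair in bits])

      bits = np.random.default_rng(1).integers(0, 2, 20)   # 20 bits -> 10 symbols
      symbols = pam4_modulate(bits)
      print(symbols)   # 10 symbols drawn from {-3, -1, +1, +3}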

  18. Partial QoS-Aware Opportunistic Relay Selection Over Two-Hop Channels: End-to-End Performance Under Spectrum-Sharing Requirements

    KAUST Repository

    Yuli Yang,; Hao Ma,; Aissa, Sonia

    2014-01-01

    it with transmission constraints imposed on the transmit power budget and interference to other users. By analyzing the statistics of received SNRs in the first and second hops, we obtain the end-to-end PLR of this scheme in closed form under the considered scenario

  19. The End-to-end Demonstrator for improved decision making in the water sector in Europe (EDgE)

    Science.gov (United States)

    Wood, Eric; Wanders, Niko; Pan, Ming; Sheffield, Justin; Samaniego, Luis; Thober, Stephan; Kumar, Rohinni; Prudhomme, Christel; Houghton-Carr, Helen

    2017-04-01

    High-resolution simulations of water resources from hydrological models are vital to supporting important climate services. Apart from a high level of detail, both spatially and temporally, it is important to provide simulations that consistently cover a range of timescales, from historical reanalysis to seasonal forecast and future projections. In the new EDgE project commissioned by the ECMWF (C3S) we try to fulfill these requirements. EDgE is a proof-of-concept project which combines climate data and state-of-the-art hydrological modelling to demonstrate a water-oriented information system implemented through a web application. EDgE is working with key European stakeholders representative of private and public sectors to jointly develop and tailor approaches and techniques. With these tools, stakeholders are assisted in using improved climate information in decision-making, and supported in the development of climate change adaptation and mitigation policies. Here, we present the first results of the EDgE modelling chain, which is divided into three main processes: 1) pre-processing and downscaling; 2) hydrological modelling; 3) post-processing. Consistent downscaling and bias corrections for historical simulations, seasonal forecasts and climate projections ensure that the results across scales are robust. The daily temporal resolution and 5km spatial resolution ensure locally relevant simulations. With the use of four hydrological models (PCR-GLOBWB, VIC, mHM, Noah-MP), uncertainty between models is properly addressed, while consistency is guaranteed by using identical input data for static land surface parameterizations. The forecast results are communicated to stakeholders via Sectoral Climate Impact Indicators (SCIIs) that have been created in collaboration with the end-user community of the EDgE project. The final product of this project is composed of 15 years of seasonal forecast and 10 climate change projections, all combined with four hydrological
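
    Consistent downscaling and bias correction are described above as the first stage of the modelling chain; one widely used technique for this purpose is empirical quantile mapping, sketched below. This is an illustrative method choice rather than necessarily the project's exact scheme, and the synthetic gamma-distributed series are placeholders.

      import numpy as np

      def quantile_map(model_hist, obs_hist, model_new, n_quantiles=100):
          """Empirical quantile mapping: adjust model output toward the observed distribution.

          `model_hist` and `obs_hist` are overlapping historical series used to build
          the transfer function, which is then applied to `model_new` (e.g. a forecast).
          """
          q = np.linspace(0.0, 1.0, n_quantiles)
          model_q = np.quantile(model_hist, q)
          obs_q = np.quantile(obs_hist, q)
          return np.interp(model_new, model_q, obs_q)

      rng = np.random.default_rng(42)
      obs = rng.gamma(2.0, 2.0, 5000)            # synthetic observed precipitation
      raw = rng.gamma(2.0, 3.0, 5000)            # biased model output (too wet)
      corrected = quantile_map(raw, obs, raw)
      print(round(raw.mean(), 2), round(corrected.mean(), 2), round(obs.mean(), 2))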

  20. Including 10-Gigabit-capable Passive Optical Network under End-to-End Generalized Multi-Protocol Label Switching Provisioned Quality of Service

    DEFF Research Database (Denmark)

    Brewka, Lukasz Jerzy; Gavler, Anders; Wessing, Henrik

    2012-01-01

    End-to-end quality of service provisioning is still a challenging task despite many years of research and development in this area. Considering a generalized multi-protocol label switching based core/metro network and resource reservation protocol capable home gateways, it is the access part of the network where quality of service signaling is bridged. This article proposes strategies for generalized multi-protocol label switching control over the next emerging passive optical network standard, i.e., the 10-gigabit-capable passive optical network. Node management and resource allocation approaches are discussed, and possible issues are raised. The analysis shows that consideration of a 10-gigabit-capable passive optical network as a generalized multi-protocol label switching controlled domain is valid and may advance end-to-end quality of service provisioning for passive optical network based customers.

  1. Influence of suture technique and suture material selection on the mechanics of end-to-end and end-to-side anastomoses.

    Science.gov (United States)

    Baumgartner, N; Dobrin, P B; Morasch, M; Dong, Q S; Mrkvicka, R

    1996-05-01

    Experiments were performed in dogs to evaluate the mechanics of 26 end-to-end and 42 end-to-side artery-vein graft anastomoses constructed with continuous polypropylene sutures (Surgilene; Davis & Geck, Division of American Cyanamid Co., Danbury, Conn.), continuous polybutester sutures (Novafil; Davis & Geck), and interrupted stitches with either suture material. After construction, the grafts and adjoining arteries were excised, mounted in vitro at in situ length, filled with a dilute barium sulfate suspension, and pressurized in 25 mm Hg steps up to 200 mm Hg. Radiographs were obtained at each pressure. The computed cross-sectional areas of the anastomoses were compared with those of the native arteries at corresponding pressures. Results showed that for the end-to-end anastomoses at 100 mm Hg the cross-sectional areas of the continuous Surgilene anastomoses were 70% of the native artery cross-sectional areas, the cross-sectional areas of the continuous Novafil anastomoses were 90% of the native artery cross-sectional areas, and the cross-sectional areas of the interrupted anastomoses were 107% of the native artery cross-sectional areas (p anastomoses demonstrated no differences in cross-sectional areas or compliance for the three suture techniques. This suggests that, unlike with end-to-end anastomoses, when constructing an end-to-side anastomosis in patients any of the three suture techniques may be acceptable.

  2. One stage functional end-to-end stapled intestinal anastomosis and resection performed by nonexpert surgeons for the treatment of small intestinal obstruction in 30 dogs.

    Science.gov (United States)

    Jardel, Nicolas; Hidalgo, Antoine; Leperlier, Dimitri; Manassero, Mathieu; Gomes, Aymeric; Bedu, Anne Sophie; Moissonnier, Pierre; Fayolle, Pascal; Begon, Dominique; Riquois, Elisabeth; Viateau, Véronique

    2011-02-01

    To describe stapled 1-stage functional end-to-end intestinal anastomosis for treatment of small intestinal obstruction in dogs and to evaluate outcome when the technique is performed by nonexpert surgeons after limited training in the technique. Case series. Dogs (n=30) with intestinal lesions requiring an enterectomy. Stapled 1-stage functional end-to-end anastomosis and resection using GIA-60 and TA-55 stapling devices were performed under the supervision of senior residents and faculty surgeons by junior surgeons previously trained in the technique on pigs. Procedure duration and technical problems were recorded. Short-term results were collected during hospitalization and at suture removal. Long-term outcome was established by clinical and ultrasonographic examinations at least 2 months after surgery and from written questionnaires completed by owners. Mean ± SD procedure duration was 15 ± 12 minutes. Postoperative recovery was uneventful in 25 dogs. One dog had anastomotic leakage, 1 had a localized abscess at the transverse staple line, and 3 dogs developed an incisional abdominal wall abscess. No long-term complications occurred (follow-up, 2-32 months). Stapled 1-stage functional end-to-end anastomosis and resection is a fast and safe procedure in the hands of nonexpert but trained surgeons. © Copyright 2011 by The American College of Veterinary Surgeons.

  3. Primary and secondary structure dependence of peptide flexibility assessed by fluorescence-based measurement of end-to-end collision rates.

    Science.gov (United States)

    Huang, Fang; Hudgins, Robert R; Nau, Werner M

    2004-12-22

    The intrachain fluorescence quenching of the fluorophore 2,3-diazabicyclo[2.2.2]oct-2-ene (DBO) is measured in short peptide fragments, namely the two strands and the turn of the N-terminal beta-hairpin of ubiquitin. The investigated peptides adopt a random-coil conformation in aqueous solution according to CD and NMR experiments. The combination of quenchers with different quenching efficiencies, namely tryptophan and tyrosine, allows the extrapolation of the rate constants for end-to-end collision rates as well as the dissociation of the end-to-end encounter complex. The measured activation energies for fluorescence quenching demonstrate that the end-to-end collision process in peptides is partially controlled by internal friction within the backbone, while measurements in solvents of different viscosities (H2O, D2O, and 7.0 M guanidinium chloride) suggest that solvent friction is an additional important factor in determining the collision rate. The extrapolated end-to-end collision rates, which are only slightly larger than the experimental rates for the DBO/Trp probe/quencher system, provide a measure of the conformational flexibility of the peptide backbone. The chain flexibility is found to be strongly dependent on the type of secondary structure that the peptides represent. The collision rates for peptides derived from the beta-strand motifs (ca. 1 x 10(7) s(-1)) are ca. 4 times slower than that derived from the beta-turn. The results provide further support for the hypothesis that chain flexibility is an important factor in the preorganization of protein fragments during protein folding. Mutations to the beta-turn peptide show that subtle sequence changes strongly affect the flexibility of peptides as well. The protonation and charge status of the peptides, however, are shown to have no significant effect on the flexibility of the investigated peptides. The meaning and definition of end-to-end collision rates in the context of protein folding are critically

  4. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather that on disease diagnosis, may help radiologists to better diagnose liver lesions.
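
    To make the retrieval pipeline concrete, the sketch below summarizes a (pre-segmented) liver region by a normalized intensity histogram and ranks database cases by a weighted L1 distance to the query. The HU bin edges, uniform weights, and synthetic data are placeholders standing in for the paper's learned relative-importance measure.

      import numpy as np

      HU_BINS = np.linspace(-100, 300, 33)   # assumed intensity range / binning for liver

      def histogram_features(hu_values):
          """Normalized intensity histogram of the voxels inside the liver mask."""
          hist, _ = np.histogram(hu_values, bins=HU_BINS)
          return hist / max(hist.sum(), 1)

      def rank_database(query_feat, db_feats, weights=None):
          """Return database indices ordered by weighted L1 distance to the query."""
          weights = np.ones_like(query_feat) if weights is None else weights
          dists = [np.sum(weights * np.abs(query_feat - f)) for f in db_feats]
          return np.argsort(dists)

      rng = np.random.default_rng(3)
      db = [histogram_features(rng.normal(60, 25, 10000)) for _ in range(5)]
      query = histogram_features(rng.normal(62, 25, 10000))
      print(rank_database(query, db))   # most similar case first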

  5. Experience of using MOSFET detectors for dose verification measurements in an end-to-end 192Ir brachytherapy quality assurance system.

    Science.gov (United States)

    Persson, Maria; Nilsson, Josef; Carlsson Tedgren, Åsa

    Establishment of an end-to-end system for the brachytherapy (BT) dosimetric chain could be valuable in clinical quality assurance. Here, the development of such a system using MOSFET (metal oxide semiconductor field effect transistor) detectors and experience gained during 2 years of use are reported, with focus on the performance of the MOSFET detectors. A bolus phantom was constructed with two implants, mimicking prostate and head & neck treatments, using steel needles and plastic catheters to guide the 192Ir source and house the MOSFET detectors. The phantom was taken through the BT treatment chain from image acquisition to dose evaluation. During the 2-year evaluation period, delivered doses were verified a total of 56 times using MOSFET detectors which had been calibrated in an external 60Co beam. An initial experimental investigation on beam quality differences between 192Ir and 60Co is reported. The standard deviation in repeated MOSFET measurements was below 3% in the six measurement points with dose levels above 2 Gy. MOSFET measurements overestimated treatment planning system doses by 2-7%. Distance-dependent experimental beam quality correction factors derived in a phantom of similar size as that used for end-to-end tests applied on a time-resolved measurement improved the agreement. MOSFET detectors provide values stable over time and function well for use as detectors for end-to-end quality assurance purposes in 192Ir BT. Beam quality correction factors should address not only distance from source but also phantom dimensions. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  6. SU-F-P-37: Implementation of An End-To-End QA Test of the Radiation Therapy Imaging, Planning and Delivery Process to Identify and Correct Possible Sources of Deviation

    International Nuclear Information System (INIS)

    Salinas Aranda, F; Suarez, V; Arbiser, S; Sansogne, R

    2016-01-01

    Purpose: To implement an end-to-end QA test of the radiation therapy imaging, planning and delivery process, aimed at assessing the dosimetric agreement between planned and delivered treatment, in order to identify and correct possible sources of deviation, and to establish an internal standard for machine commissioning acceptance. Methods: A test involving all steps of the radiation therapy imaging, planning and delivery process was designed. The test includes analysis of point dose and planar dose distribution agreement between TPS-calculated and measured dose. An ad hoc 16 cm diameter PMMA phantom was constructed with one central and four peripheral bores that can accommodate calibrated electron density inserts. Using the Varian Eclipse 10.0 and Elekta XiO 4.50 planning systems, IMRT, RapidArc and 3DCRT plans with hard and dynamic wedges were planned on the phantom and tested. An Exradin A1SL chamber was used with a Keithley 35617EBS electrometer for point dose measurements in the phantom. 2D dose distributions were acquired using MapCheck and a Varian aS1000 EPID. Gamma analysis was performed to evaluate 2D dose distribution agreement using the MapCheck software and the Varian Portal Dosimetry application. Varian high-energy Clinacs (Trilogy, 2100C/CD, 2000CR) and a low-energy 6X/EX were tested. TPS CT-number vs. electron density tables were checked for the CT scanners used. Results: Calculated point doses were accurate to 0.127% (SD 0.93%), 0.507% (SD 0.82%), 0.246% (SD 1.39%) and 0.012% (SD 0.01%) for LoX-3DCRT, HiX-3DCRT, IMRT and RapidArc plans respectively. Planar doses pass the 3%/3 mm gamma criterion in all cases and 2%/2 mm for VMAT plans. Conclusion: Implementation of a simple and reliable quality assurance tool was accomplished. The end-to-end test proved efficient, showing excellent agreement between planned and delivered dose and evidencing strong consistency of the whole process from imaging through planning to delivery. This test can be used as a first step in beam model acceptance for clinical
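
    Point-dose verification in such a test reduces to a percentage difference between measured and TPS-calculated dose, and planar verification to a gamma passing rate; the sketch below illustrates both in simplified form (a brute-force global gamma without sub-pixel interpolation). The tolerances, grid spacing, and toy dose planes are illustrative, not data from the abstract.

      import numpy as np

      def percent_dose_difference(measured_gy, calculated_gy):
          """Percentage difference of a measured vs. TPS-calculated point dose."""
          return 100.0 * (measured_gy - calculated_gy) / calculated_gy

      def global_gamma_pass_rate(reference, evaluated, spacing_mm, dose_tol=0.03, dta_mm=3.0):
          """Simplified global 2D gamma passing rate (brute-force search over the grid).

          `reference` and `evaluated` are 2D dose arrays on the same grid; the dose
          difference is normalized to the reference maximum (global gamma).
          """
          ref_max = reference.max()
          ny, nx = reference.shape
          yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
          passed = 0
          for iy in range(ny):
              for ix in range(nx):
                  dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
                  dose2 = (reference - evaluated[iy, ix]) ** 2
                  gamma2 = dist2 / dta_mm ** 2 + dose2 / (dose_tol * ref_max) ** 2
                  passed += gamma2.min() <= 1.0
          return 100.0 * passed / (ny * nx)

      ref = np.outer(np.hanning(21), np.hanning(21))       # toy planned dose plane
      meas = ref * 1.01                                     # measurement 1% hotter
      print(f"{percent_dose_difference(2.012, 2.000):+.2f}% point-dose difference")
      print(f"{global_gamma_pass_rate(ref, meas, spacing_mm=2.0):.1f}% pass (3%/3 mm)")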

  7. Interoperable End-to-End Remote Patient Monitoring Platform Based on IEEE 11073 PHD and ZigBee Health Care Profile.

    Science.gov (United States)

    Clarke, Malcolm; de Folter, Joost; Verma, Vivek; Gokalp, Hulya

    2018-05-01

    This paper describes the implementation of an end-to-end remote monitoring platform based on the IEEE 11073 standards for personal health devices (PHD). It provides an overview of the concepts and approaches and describes how the standard has been optimized for small devices with limited resources of processor, memory, and power that use short-range wireless technology. It explains aspects of IEEE 11073, including the domain information model, state model, and nomenclature, and how these support its plug-and-play architecture. It shows how these aspects underpin a much larger ecosystem of interoperable devices and systems that include IHE PCD-01, HL7, and Bluetooth LE medical devices, and the relationship to the Continua Guidelines, advocating the adoption of data standards and nomenclature to support semantic interoperability between health and ambient assisted living in future platforms. The paper further describes the adaptations that have been made in order to implement the standard on the ZigBee Health Care Profile and the experiences of implementing an end-to-end platform that has been deployed to frail elderly patients with chronic disease(s) and patients with diabetes.

  8. User-oriented end-to-end transport protocols for the real-time distribution of telemetry data from NASA spacecraft

    Science.gov (United States)

    Hooke, A. J.

    1979-01-01

    A set of standard telemetry protocols for downlink data flow facilitating the end-to-end transport of instrument data from the spacecraft to the user in real time is proposed. The direct switching of data by autonomous message 'packets' that are assembled by the source instrument on the spacecraft is discussed. The data system is thus organized on a message rather than a word basis, and such packet telemetry would include standardized protocol headers. Standards are being developed within the NASA End-to-End Data System (NEEDS) program for the source packet and transport frame protocols. The source packet protocol contains identification of both the sequence number of the packet as it is generated by the source and the total length of the packet, while the transport frame protocol includes a sequence count defining the serial number of the frame as it is generated by the spacecraft data system, and a field specifying any 'options' selected in the format of the frame itself.
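
    The source packet protocol described here amounts to a small fixed header (instrument identification, source sequence count, total packet length) prepended to the instrument data. The sketch below is a hypothetical illustration of assembling and parsing such a header; the field names, widths and byte order are assumptions, not the actual NEEDS (or later CCSDS) bit layout.

```python
import struct

# Hypothetical fixed-length source packet header: application ID,
# source sequence count, and total packet length (header + data), big-endian.
HEADER = struct.Struct(">HHH")

def build_packet(apid: int, seq: int, data: bytes) -> bytes:
    """Assemble a source packet: a header identifying the instrument (apid),
    the packet's sequence number as generated by the source, and its total length."""
    return HEADER.pack(apid, seq & 0xFFFF, HEADER.size + len(data)) + data

def parse_packet(packet: bytes):
    """Recover the header fields and the instrument data from a packet."""
    apid, seq, length = HEADER.unpack_from(packet)
    return apid, seq, packet[HEADER.size:length]

pkt = build_packet(apid=0x42, seq=7, data=b"instrument telemetry sample")
print(parse_packet(pkt))
```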

  9. Ferromagnetic interaction in an asymmetric end-to-end azido double-bridged copper(II) dinuclear complex: a combined structure, magnetic, polarized neutron diffraction and theoretical study.

    Science.gov (United States)

    Aronica, Christophe; Jeanneau, Erwann; El Moll, Hani; Luneau, Dominique; Gillon, Béatrice; Goujon, Antoine; Cousson, Alain; Carvajal, Maria Angels; Robert, Vincent

    2007-01-01

    A new end-to-end azido double-bridged copper(II) complex [Cu2L2(N3)2] (1) was synthesized and characterized (L = 1,1,1-trifluoro-7-(dimethylamino)-4-methyl-5-aza-3-hepten-2-onato). Despite the rather long Cu-Cu distance (5.105(1) Å), the magnetic interaction is ferromagnetic with J = +16 cm⁻¹ (H = -JS1S2), a value that has been confirmed by DFT and high-level correlated ab initio calculations. The spin distribution was studied by using the results from polarized neutron diffraction. This is the first such study on an end-to-end system. The experimental spin density was found to be localized mainly on the copper(II) ions, with a small degree of delocalization on the ligand (L) and terminal azido nitrogens. There was zero delocalization on the central nitrogen, in agreement with DFT calculations. Such a picture corresponds to an important contribution of the d(x²-y²) orbital and a small population of the d(z²) orbital, in agreement with our calculations. Based on a correlated wavefunction analysis, the ferromagnetic behavior results from a dominant double spin polarization contribution and vanishingly small ionic forms.

  10. Mixed integer nonlinear programming model of wireless pricing scheme with QoS attribute of bandwidth and end-to-end delay

    Science.gov (United States)

    Irmeilyana, Puspita, Fitri Maya; Indrawati

    2016-02-01

    The pricing for wireless networks is developed by considering linearity factors, price elasticity and price factors. A mixed integer nonlinear programming wireless pricing model is proposed as a nonlinear programming problem that can be solved optimally using LINGO 13.0. The solutions are expected to give some information about the connection between the acceptance factor and the price. Previous work focused on a model with bandwidth as the only QoS attribute. The models attempt to maximize the total price for a connection based on QoS parameters. The QoS attributes used here are the bandwidth and the end-to-end delay that affect the traffic. The maximum total price is achieved when the provider determines the required increment or decrement of the price in response to changes in the QoS level and the amount of QoS delivered.
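
    To make the formulation concrete, the sketch below enumerates discrete choices of price factors and QoS levels for the two attributes (bandwidth and end-to-end delay) and keeps the combination that maximizes the total connection price under simple acceptance constraints. The base price, factor values, levels and constraints are invented for illustration; the paper's actual MINLP model is solved with LINGO 13.0, not with this brute-force search.

```python
from itertools import product

# Hypothetical parameters (not taken from the paper): base price, candidate
# price factors for each QoS attribute, and the highest total price users accept.
BASE_PRICE = 1.0
MAX_ACCEPTED_PRICE = 4.0
price_factors = [0.5, 1.0, 1.5, 2.0, 2.5]        # provider's decision variables
bandwidth_levels = [0.25, 0.5, 0.75, 1.0]        # normalized bandwidth QoS
delay_levels = [0.25, 0.5, 0.75, 1.0]            # normalized end-to-end delay QoS (1 = best)

best = None
for a_bw, a_delay, x_bw, x_delay in product(price_factors, price_factors,
                                            bandwidth_levels, delay_levels):
    price = BASE_PRICE + a_bw * x_bw + a_delay * x_delay   # total price for one connection
    acceptable = price <= MAX_ACCEPTED_PRICE and x_bw + x_delay >= 1.0
    if acceptable and (best is None or price > best[0]):
        best = (price, a_bw, a_delay, x_bw, x_delay)

print("max price and (bandwidth factor, delay factor, bandwidth QoS, delay QoS):", best)
```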

  11. Crystal structure of Aquifex aeolicus gene product Aq1627: a putative phosphoglucosamine mutase reveals a unique C-terminal end-to-end disulfide linkage.

    Science.gov (United States)

    Sridharan, Upasana; Kuramitsu, Seiki; Yokoyama, Shigeyuki; Kumarevel, Thirumananseri; Ponnuraj, Karthe

    2017-06-27

    The Aq1627 gene from Aquifex aeolicus, a hyperthermophilic bacterium, has been cloned and overexpressed in Escherichia coli. The protein was purified to homogeneity and its X-ray crystal structure was determined to 1.3 Å resolution using multiple wavelength anomalous dispersion phasing. The structural and sequence analysis of Aq1627 is suggestive of a putative phosphoglucosamine mutase. The structural features of Aq1627 further indicate that it could belong to a new subclass of the phosphoglucosamine mutase family. The Aq1627 structure contains a unique C-terminal end-to-end disulfide bond, which links two monomers, and this structural information can be used in protein engineering to make proteins more stable in different applications.

  12. Reconstruction after ureteral resection during HIPEC surgery: Re-implantation with uretero-neocystostomy seems safer than end-to-end anastomosis.

    Science.gov (United States)

    Pinar, U; Tremblay, J-F; Passot, G; Dazza, M; Glehen, O; Tuech, J-J; Pocard, M

    2017-09-01

    Resection of the pelvic ureter may be necessary in cytoreductive surgery for peritoneal carcinomatosis in combination with hyperthermic intraperitoneal chemotherapy (HIPEC). As the morbidity for cytoreductive surgery with HIPEC has decreased, expert teams have begun to perform increasingly complex surgical procedures associated with HIPEC, including pelvic reconstructions. After ureteral resection, two types of reconstruction are possible: uretero-ureteral end-to-end anastomosis and uretero-vesical re-implantation or uretero-neocystostomy (the so-called psoas hitch technique). By compiling the experience of three surgical teams that perform HIPEC surgeries, we have tried to compare the effectiveness of these two techniques. A retrospective comparative case-matched multicenter study was conducted for patients undergoing operation between 2005 and 2014. Patients included had undergone resection of the pelvic ureter during cytoreductive surgery with HIPEC for peritoneal carcinomatosis; ureteral reconstruction was by either end-to-end anastomosis (EEA group) or re-implantation uretero-neocystostomy (RUC group). The primary endpoint was the occurrence of urinary fistula in postoperative follow-up. There were 14 patients in the EEA group and 14 in the RUC group. The groups were comparable for age, extent of carcinomatosis (PCI index) and operative duration. Four urinary fistulas occurred in the EEA group (28.5%) versus zero fistulas in the RUC group (0%) (P=0.0308). Re-implantation with uretero-neocystostomy during cytoreductive surgery with HIPEC is the preferred technique for reconstruction after ureteral resection in case of renal conservation. Copyright © 2017. Published by Elsevier Masson SAS.

  13. Poster - 44: Development and implementation of a comprehensive end-to-end testing methodology for linac-based frameless SRS QA using a modified commercial stereotactic anthropomorphic phantom

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Derek; Mutanga, Theodore [University of Toronto, Carlo Fidani Peel Regional Cancer Center (Canada)

    2016-08-15

    Purpose: An end-to-end testing methodology was designed to evaluate the overall SRS treatment fidelity, incorporating all steps in the linac-based frameless radiosurgery treatment delivery process. The study details our commissioning experience of the Steev (CIRS, Norfolk, VA) stereotactic anthropomorphic head phantom including modification, test design, and baseline measurements. Methods: Repeated MR and CT scans were performed with interchanging inserts. MR-CT fusion accuracy was evaluated and the insert spatial coincidence was verified on CT. Five non-coplanar arcs delivered a prescription dose to a 15 mm spherical CTV with 2 mm PTV margin. Following setup, CBCT-based shifts were applied as per protocol. Sequential measurements were performed by interchanging inserts without disturbing the setup. Spatial and dosimetric accuracy was assessed by a combination of CBCT hidden target, radiochromic film, and ion chamber measurements. To facilitate film registration, the film insert was modified in-house by etching marks. Results: MR fusion error and insert spatial coincidences were within 0.3 mm. Both CBCT and film measurements showed spatial displacements of 1.0 mm in similar directions. Both coronal and sagittal films reported 2.3% higher target dose relative to the treatment plan. The corrected ion chamber measurement was similarly greater by 1.0%. The 3%/2 mm gamma pass rate was 99% for both films. Conclusions: A comprehensive end-to-end testing methodology was implemented for our SRS QA program. The Steev phantom enabled realistic evaluation of the entire treatment process. Overall spatial and dosimetric accuracy of the delivery were 1 mm and 3%, respectively.

  14. A Validation Approach of an End-to-End Whole Genome Sequencing Workflow for Source Tracking of Listeria monocytogenes and Salmonella enterica

    Directory of Open Access Journals (Sweden)

    Anne-Catherine Portmann

    2018-03-01

    Full Text Available Whole genome sequencing (WGS), using high throughput sequencing technology, reveals the complete sequence of the bacterial genome in a few days. WGS is increasingly being used for source tracking, pathogen surveillance and outbreak investigation due to its high discriminatory power. In the food industry, WGS used for source tracking is beneficial to support contamination investigations. Despite its increased use, no standards or guidelines are available today for the use of WGS in outbreak and/or trace-back investigations. Here we present a validation of our complete (end-to-end) WGS workflow for Listeria monocytogenes and Salmonella enterica including: subculture of isolates, DNA extraction, sequencing and bioinformatics analysis. This end-to-end WGS workflow was evaluated according to the following performance criteria: stability, repeatability, reproducibility, discriminatory power, and epidemiological concordance. The current study showed that few single nucleotide polymorphisms (SNPs) were observed for L. monocytogenes and S. enterica when comparing genome sequences from five independent colonies from the first subculture and five independent colonies after the tenth subculture. Consequently, the stability of the WGS workflow for L. monocytogenes and S. enterica was demonstrated despite the few genomic variations that can occur during subculturing steps. Repeatability and reproducibility were also demonstrated. The WGS workflow was shown to have a high discriminatory power and has the ability to show genetic relatedness. Additionally, the WGS workflow was able to reproduce published outbreak investigation results, illustrating its capability of showing epidemiological concordance. The current study proposes a validation approach comprising all steps of a WGS workflow and demonstrates that the workflow can be applied to L. monocytogenes or S. enterica.
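
    Stability in this validation is judged by the number of SNP differences between genome sequences obtained from repeated subcultures. Purely as an illustration of that comparison step (the real workflow calls variants from sequencing reads through a bioinformatics pipeline), the sketch below counts substitution differences between two aligned consensus fragments; the sequences are invented.

```python
def count_snps(seq_a: str, seq_b: str) -> int:
    """Count substitution differences between two aligned sequences of equal
    length, ignoring positions where either sequence has an ambiguous base."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for a, b in zip(seq_a, seq_b)
               if a != b and a in "ACGT" and b in "ACGT")

# Hypothetical consensus fragments from the first and the tenth subculture.
first = "ATGGCTAAGTTCGATCCGTA"
tenth = "ATGGCTAAGTACGATCCGTN"
print(count_snps(first, tenth))   # -> 1 (the trailing N is ignored)
```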

  15. SU-F-T-76: Total Skin Electron Therapy: An-End-To-End Examination of the Absolute Dosimetry with a Rando Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Cui, G; Ha, J; Zhou, S; Cui, J; Shiu, A [University Southern California, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine and validate the absolute dose for total skin electron therapy (TSET) through an end-to-end test with a Rando phantom using optically stimulated luminescent dosimeters (OSLDs) and EBT3 radiochromic films. Methods: A Varian Trilogy linear accelerator equipped with the special procedure 6 MeV HDTSe- mode was used to perform TSET irradiations using a modified Stanford 6-dual-field technique. The absolute dose was calibrated using a Markus ion chamber at a reference depth of 1.3 cm at 100 cm SSD with a field size of 36 × 36 cm at the isocenter in solid water slabs. The absolute dose was cross-validated by a Farmer ion chamber. Then the dose rate in units of cGy/MU was calibrated using the Markus chamber at the treatment position. OSLDs were used to independently verify the dose using the calibrated dose rate. Finally, a patient treatment plan (200 cGy/cycle) was delivered in the QA mode to a Rando phantom, which had 16 pairs of OSLDs and EBT3 films taped onto its surface at different anatomical positions. The doses recorded were read out to validate the absolute dosimetry for TSET. Results: The OSLD measurements were within 7% agreement with the planned dose except in the shoulder areas, where the doses recorded were 23% lower on average than the planned doses. The EBT3 film measurements were within 10% agreement with the planned dose except in the shoulder and scalp vertex areas, where the respective doses recorded were 18% and 14% lower on average than the planned doses. The OSLDs gave more consistent dose measurements than the EBT3 films. Conclusion: The absolute dosimetry for TSET was validated by an end-to-end test with a Rando phantom using the OSLDs and EBT3 films. The beam calibration and monitor unit calculations were confirmed.

  16. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study.

    Science.gov (United States)

    Bowen, S R; Nyflot, M J; Herrmann, C; Groh, C M; Meyer, J; Wollenweber, S D; Stearns, C W; Kinahan, P E; Sandison, G A

    2015-05-07

    Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [(18)F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. Estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude
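
    In this workflow the target volume is segmented by applying an SUV threshold to the PET image. As a minimal illustration of that single step (the threshold value, voxel size and synthetic image below are assumptions, not the study's phantom data), the sketch estimates a target volume from a thresholded 3D SUV array.

```python
import numpy as np

def target_volume_ml(pet_suv: np.ndarray, voxel_mm=(4.0, 4.0, 4.0), suv_threshold=2.5):
    """Estimate the target volume (in mL) as the number of voxels whose SUV
    exceeds a fixed threshold, times the voxel volume."""
    voxel_ml = np.prod(voxel_mm) / 1000.0            # mm^3 -> mL
    return int(np.count_nonzero(pet_suv > suv_threshold)) * voxel_ml

# Hypothetical PET volume: low background with one hot spherical lesion.
zz, yy, xx = np.mgrid[0:48, 0:48, 0:48]
pet = 0.5 + 8.0 * ((zz - 24)**2 + (yy - 24)**2 + (xx - 24)**2 < 6**2)
print(f"segmented volume: {target_volume_ml(pet):.1f} mL")
```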

  17. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    International Nuclear Information System (INIS)

    Bowen, S R; Nyflot, M J; Meyer, J; Sandison, G A; Herrmann, C; Groh, C M; Wollenweber, S D; Stearns, C W; Kinahan, P E

    2015-01-01

    Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10–20%, treatment planning errors were 5–10%, and treatment delivery errors were 5–30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5–10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow.

  18. Imaging and dosimetric errors in 4D PET/CT-guided radiotherapy from patient-specific respiratory patterns: a dynamic motion phantom end-to-end study

    Science.gov (United States)

    Bowen, S R; Nyflot, M J; Hermann, C; Groh, C; Meyer, J; Wollenweber, S D; Stearns, C W; Kinahan, P E; Sandison, G A

    2015-01-01

    Effective positron emission tomography/computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by 6 different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy (VMAT) were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses (EUD), and 2%-2mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10–20%, treatment planning errors were 5–10%, and treatment delivery errors were 5–30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5–10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. Estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the

  19. Presence of calcium in the vessel walls after end-to-end arterial anastomoses with polydioxanone and polypropylene sutures in growing dogs.

    Science.gov (United States)

    Gersak, B

    1993-10-01

    The presence of calcium in the vessel walls after end-to-end arterial anastomoses performed with polydioxanone and polypropylene interrupted sutures was studied in 140 anastomoses in 35 10-week-old German shepherd dogs. Histologic examination with hematoxylin and eosin, van Gieson, and von Kossa staining techniques was performed after the animals were killed 6 months after the operation. Ketamine hydrochloride was used as an anesthetic agent. At the start of the investigation the dogs weighed 14.5 +/- 2.6 kg (mean +/- standard deviation, n = 35), and after 6 months they weighed 45.3 +/- 3.1 kg (mean +/- standard deviation, n = 35). The diameter of the sutured arteries in the first operation was 2.6 +/- 0.5 mm (mean +/- standard deviation, n = 140). With each dog, both brachial and both femoral arteries were used--one artery for each different type of suture. In different dogs, different arteries were used for the same type of suture. The prevalence of calcifications after 6 months was determined from the numeric density of calcifications with standard stereologic techniques. The sutured and sutureless parts taken from longitudinal sections from each artery were studied, and t test values were calculated as follows: In paired samples, statistically significant differences in numerical density of calcifications were seen between sutured and sutureless arterial parts for both materials (sutureless part versus part with polydioxanone sutures, p < 0.05, n = 70) and sutureless parts (p > 0.05, n = 70).

  20. Poly(ethyl glyoxylate)-Poly(ethylene oxide) Nanoparticles: Stimuli-Responsive Drug Release via End-to-End Polyglyoxylate Depolymerization.

    Science.gov (United States)

    Fan, Bo; Gillies, Elizabeth R

    2017-08-07

    The ability to disrupt polymer assemblies in response to specific stimuli provides the potential to release drugs selectively at certain sites or conditions in vivo. However, most stimuli-responsive delivery systems require many stimuli-initiated events to release drugs. "Self-immolative polymers" offer the potential to provide amplified responses to stimuli as they undergo complete end-to-end depolymerization following the cleavage of a single end-cap. Herein, linker end-caps were developed to conjugate self-immolative poly(ethyl glyoxylate) (PEtG) with poly(ethylene oxide) (PEO) to form amphiphilic block copolymers. These copolymers were self-assembled to form nanoparticles in aqueous solution. Cleavage of the linker end-caps was triggered by a thiol reducing agent, UV light, H2O2, and combinations of these stimuli, resulting in nanoparticle disintegration. Low stimuli concentrations were effective in rapidly disrupting the nanoparticles. Nile red, doxorubicin, and curcumin were encapsulated into the nanoparticles and were selectively released upon application of the appropriate stimulus. The ability to tune the stimuli-responsiveness simply by changing the linker end-cap makes this new platform highly attractive for applications in drug delivery.

  1. System for Informatics in the Molecular Pathology Laboratory: An Open-Source End-to-End Solution for Next-Generation Sequencing Clinical Data Management.

    Science.gov (United States)

    Kang, Wenjun; Kadri, Sabah; Puranik, Rutika; Wurst, Michelle N; Patil, Sushant A; Mujacic, Ibro; Benhamed, Sonia; Niu, Nifang; Zhen, Chao Jie; Ameti, Bekim; Long, Bradley C; Galbo, Filipo; Montes, David; Iracheta, Crystal; Gamboa, Venessa L; Lopez, Daisy; Yourshaw, Michael; Lawrence, Carolyn A; Aisner, Dara L; Fitzpatrick, Carrie; McNerney, Megan E; Wang, Y Lynn; Andrade, Jorge; Volchenboum, Samuel L; Furtado, Larissa V; Ritterhouse, Lauren L; Segal, Jeremy P

    2018-04-24

    Next-generation sequencing (NGS) diagnostic assays increasingly are becoming the standard of care in oncology practice. As the scale of an NGS laboratory grows, management of these assays requires organizing large amounts of information, including patient data, laboratory processes, genomic data, as well as variant interpretation and reporting. Although several Laboratory Information Systems and/or Laboratory Information Management Systems are commercially available, they may not meet all of the needs of a given laboratory, in addition to being frequently cost-prohibitive. Herein, we present the System for Informatics in the Molecular Pathology Laboratory, a free and open-source Laboratory Information System/Laboratory Information Management System for academic and nonprofit molecular pathology NGS laboratories, developed at the Genomic and Molecular Pathology Division at the University of Chicago Medicine. The System for Informatics in the Molecular Pathology Laboratory was designed as a modular end-to-end information system to handle all stages of the NGS laboratory workload from test order to reporting. We describe the features of the system, its clinical validation at the Genomic and Molecular Pathology Division at the University of Chicago Medicine, and its installation and testing within a different academic center laboratory (University of Colorado), and we propose a platform for future community co-development and interlaboratory data sharing. Copyright © 2018. Published by Elsevier Inc.

  2. Partial QoS-Aware Opportunistic Relay Selection Over Two-Hop Channels: End-to-End Performance Under Spectrum-Sharing Requirements

    KAUST Repository

    Yuli Yang

    2014-10-01

    In this paper, we propose a partial quality-of-service (QoS)-oriented relay selection scheme with a decode-and-forward (DF) relaying protocol, to reduce the feedback amount required for relay selection. In the proposed scheme, the activated relay is the one with the maximum signal-to-noise power ratio (SNR) in the second hop among those whose packet loss rates (PLRs) in the first hop achieve a predetermined QoS level. For the purpose of evaluating the performance of the proposed scheme, we exploit it with transmission constraints imposed on the transmit power budget and interference to other users. By analyzing the statistics of received SNRs in the first and second hops, we obtain the end-to-end PLR of this scheme in closed form under the considered scenario. Moreover, to compare the proposed scheme with popular relay selection schemes, we also derive the closed-form PLR expressions for partial relay selection (PRS) and opportunistic relay selection (ORS) criteria in the same scenario under study. Illustrative numerical results demonstrate the accuracy of our derivations and substantiate that the proposed relay selection scheme is a promising alternative with respect to the tradeoff between performance and complexity.
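
    To make the selection rule concrete, the sketch below runs a small Monte Carlo estimate of the end-to-end packet loss rate for a two-hop DF system in which only relays whose first-hop PLR meets a QoS threshold are eligible, and among those the relay with the largest second-hop SNR is activated. The Rayleigh-fading model, the outage-style SNR-to-loss mapping and all parameter values are assumptions made for illustration; they do not reproduce the paper's closed-form analysis or its spectrum-sharing constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

def packet_lost(snr, threshold=1.0):
    """Toy outage-style mapping: a packet is lost when the instantaneous SNR
    falls below a decoding threshold (an assumption for illustration)."""
    return snr < threshold

def simulate_e2e_plr(n_relays=4, mean_snr1=5.0, mean_snr2=5.0,
                     qos_plr_max=0.2, n_packets=200_000):
    # Exponentially distributed instantaneous SNRs model Rayleigh fading.
    snr1 = rng.exponential(mean_snr1, size=(n_packets, n_relays))   # first hop
    snr2 = rng.exponential(mean_snr2, size=(n_packets, n_relays))   # second hop

    # Long-run first-hop PLR per relay decides QoS eligibility.
    first_hop_plr = packet_lost(snr1).mean(axis=0)
    eligible = first_hop_plr <= qos_plr_max
    if not eligible.any():
        return 1.0   # no relay meets the required QoS level

    # Activate, per packet, the eligible relay with the maximum second-hop SNR.
    snr2_eligible = np.where(eligible, snr2, -np.inf)
    chosen = snr2_eligible.argmax(axis=1)
    idx = np.arange(n_packets)
    lost = packet_lost(snr1[idx, chosen]) | packet_lost(snr2[idx, chosen])
    return lost.mean()

print(f"estimated end-to-end PLR: {simulate_e2e_plr():.4f}")
```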

  3. Albert-Lembert versus hybrid-layered suture in hand sewn end-to-end cervical esophagogastric anastomosis after esophageal squamous cell carcinoma resection.

    Science.gov (United States)

    Feng, Fan; Sun, Li; Xu, Guanghui; Hong, Liu; Yang, Jianjun; Cai, Lei; Li, Guocai; Guo, Man; Lian, Xiao; Zhang, Hongwei

    2015-11-01

    Hand sewn cervical esophagogastric anastomosis (CEGA) is regarded as the preferred technique by surgeons after esophagectomy. However, considering anastomotic leakage and stricture, the optimal technique for performing this anastomosis is still under debate. Between November 2010 and September 2012, 230 patients who underwent esophagectomy with hand sewn end-to-end (ETE) CEGA for esophageal squamous cell carcinoma (ESCC) were analyzed retrospectively, including 111 patients who underwent Albert-Lembert suture anastomosis and 119 patients who underwent hybrid-layered suture anastomosis. Anastomosis construction time was recorded during operation. Anastomotic leakage was recorded through upper gastrointestinal water-soluble contrast examination. Anastomotic stricture was recorded during follow up. The hybrid-layered suture was faster than the Albert-Lembert suture (29.40±1.24 min vs. 33.83±1.41 min, P=0.02). The overall anastomotic leak rate was 7.82%; the leak rate in the hybrid-layered suture group was significantly lower than that in the Albert-Lembert suture group (3.36% vs. 12.61%, P=0.01). The overall anastomotic stricture rate was 9.13%; the stricture rate in the hybrid-layered suture group was significantly lower than that in the Albert-Lembert suture group (5.04% vs. 13.51%, P=0.04). Hand sewn ETE CEGA with hybrid-layered suture is associated with lower anastomotic leakage and stricture rates compared to hand sewn ETE CEGA with Albert-Lembert suture.

  4. Stapled side-to-side anastomosis might be better than handsewn end-to-end anastomosis in ileocolic resection for Crohn's disease: a meta-analysis.

    Science.gov (United States)

    He, Xiaosheng; Chen, Zexian; Huang, Juanni; Lian, Lei; Rouniyar, Santosh; Wu, Xiaojian; Lan, Ping

    2014-07-01

    Ileocolic anastomosis is an essential step in the treatment to restore continuity of the gastrointestinal tract following ileocolic resection in patients with Crohn's disease (CD). However, the association between anastomotic type and surgical outcome is controversial. The aim of this meta-analysis is to compare surgical outcomes between stapled side-to-side anastomosis (SSSA) and handsewn end-to-end anastomosis (HEEA) after ileocolic resection in patients with CD. Studies comparing SSSA with HEEA after ileocolic resection in patients with CD were identified in PubMed and EMBASE. Outcomes such as complication, recurrence, and re-operation were evaluated. Eight studies (three randomized controlled trials, one prospective non-randomized trial, and four non-randomized retrospective trials) comparing SSSA (396 cases) and HEEA (425 cases) were included. As compared with HEEA, SSSA was superior in terms of overall postoperative complications [odds ratio (OR), 0.54; 95 % confidence interval (CI) 0.32-0.93], anastomotic leak (OR 0.45; 95 % CI 0.20-1.00), recurrence (OR 0.20; 95 % CI 0.07-0.55), and re-operation for recurrence (OR 0.18; 95 % CI 0.07-0.45). Postoperative hospital stay, mortality, and complications other than anastomotic leak were comparable. Based on the results of our meta-analysis, SSSA would appear to be the preferred procedure after ileocolic resection for CD, with reduced overall postoperative complications, especially anastomotic leak, and a decreased recurrence and re-operation rate.

  5. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    Science.gov (United States)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large scale, high speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high speed distributed applications. Finally, the DPSS is part of an overall architecture for using high speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.
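
    The DPSS is described as a network striped disk array in which applications control the data layout. Purely as an illustration of block striping (the server names, block size and round-robin layout are assumptions, not the DPSS API), the sketch below maps byte ranges of a large data-object to the storage servers a client would read from in parallel.

```python
# Hypothetical layout: a large data-object is split into fixed-size blocks
# striped round-robin over a set of storage servers.
BLOCK_SIZE = 64 * 1024
SERVERS = ["dpss-srv-a", "dpss-srv-b", "dpss-srv-c", "dpss-srv-d"]

def locate(offset: int):
    """Map a byte offset in the data-object to (server, block number, offset in block)."""
    block = offset // BLOCK_SIZE
    return SERVERS[block % len(SERVERS)], block, offset % BLOCK_SIZE

def read_plan(offset: int, length: int):
    """List the (server, block, offset in block, bytes) pieces a client would
    fetch, potentially in parallel, to satisfy a read of `length` bytes."""
    plan = []
    end = offset + length
    while offset < end:
        server, block, in_block = locate(offset)
        take = min(BLOCK_SIZE - in_block, end - offset)
        plan.append((server, block, in_block, take))
        offset += take
    return plan

# A read that straddles two blocks lands on two different servers.
for piece in read_plan(offset=3 * BLOCK_SIZE - 100, length=200):
    print(piece)
```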

  6. A programmable CCD driver circuit for multiphase CCD operation

    International Nuclear Information System (INIS)

    Ewin, A.J.; Reed, K.V.

    1989-01-01

    A programmable CCD driver circuit was designed to drive CCDs in multiphase modes. The purpose of the drive electronics was to operate developmental CCD imaging arrays for NASA's Moderate Resolution Imaging Spectrometer - Tiltable (MODIS-T). Five prototype arrays were designed. Valid's Graphics Editor (GED) was used to design the driver. With this driver design, any of the five arrays can be read out. Designing the driver with GED allowed functional simulation, timing verification, and certain packaging analyses to be done on the design before fabrication. The driver's function was verified with the master clock running at up to 10 MHz. This suggests a maximum rate of 400 Kpixels/sec. Timing and packaging parameters were verified. The design uses 54 TTL component chips.
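
    A driver of this kind essentially steps through a repeating table of clock-phase levels, one entry per master-clock tick. The sketch below is an illustrative model only: the three-phase pattern and the tick count per pixel are assumptions, not the MODIS-T timing, although 25 ticks per pixel at a 10 MHz master clock reproduces the 400 Kpixels/sec figure quoted above.

```python
# Minimal sketch of multiphase CCD transfer clocking. The 3-phase waveform
# table and the ticks-per-pixel figure are assumptions for illustration.
THREE_PHASE_PATTERN = [
    # P1 P2 P3  (1 = high, 0 = low), one row per master-clock tick
    (1, 0, 0),
    (1, 1, 0),
    (0, 1, 0),
    (0, 1, 1),
    (0, 0, 1),
    (1, 0, 1),
]

def pixel_rate(master_clock_hz: float, ticks_per_pixel: int) -> float:
    """Pixels per second when each pixel transfer consumes `ticks_per_pixel` ticks."""
    return master_clock_hz / ticks_per_pixel

def waveform(n_pixels: int):
    """Yield the three phase levels tick by tick for n_pixels pixel transfers."""
    for _ in range(n_pixels):
        yield from THREE_PHASE_PATTERN

# 10 MHz master clock with an assumed 25 ticks per pixel gives 400 Kpixels/sec.
print(f"{pixel_rate(10e6, ticks_per_pixel=25):.0f} pixels/s")
for tick, levels in enumerate(waveform(1)):
    print(tick, levels)
```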

  7. An end-to-end examination of geometric accuracy of IGRT using a new digital accelerator equipped with onboard imaging system.

    Science.gov (United States)

    Wang, Lei; Kielar, Kayla N; Mok, Ed; Hsu, Annie; Dieterich, Sonja; Xing, Lei

    2012-02-07

    Varian's new digital linear accelerator (LINAC), TrueBeam STx, is equipped with a high dose rate flattening filter free (FFF) mode (6 MV and 10 MV), a high definition multileaf collimator (2.5 mm leaf width), as well as onboard imaging capabilities. A series of end-to-end phantom tests of TrueBeam-based image-guided radiation therapy (IGRT) was performed to determine the geometric accuracy of the image-guided setup and dose delivery process for all beam modalities delivered using intensity modulated radiation therapy (IMRT) and RapidArc. In these tests, an anthropomorphic phantom with a Ball Cube II insert and the analysis software (FilmQA, 3cognition) were used to evaluate the accuracy of TrueBeam image-guided setup and dose delivery. Laser-cut EBT2 films with 0.15 mm accuracy were embedded into the phantom. The phantom with the film inserted was first scanned with a GE Discovery-ST CT scanner, and the images were then imported to the planning system. Plans with steep dose fall-off surrounding hypothetical targets of different sizes were created using RapidArc and IMRT with FFF and WFF (with flattening filter) beams. Four RapidArc plans (6 MV and 10 MV FFF) and five IMRT plans (6 MV and 10 MV FFF; 6 MV, 10 MV and 15 MV WFF) were studied. The RapidArc plans with 6 MV FFF were planned with target diameters of 1 cm (0.52 cc), 2 cm (4.2 cc) and 3 cm (14.1 cc), and all other plans with a target diameter of 3 cm. Both onboard planar and volumetric imaging procedures were used for phantom setup and target localization. The IMRT and RapidArc plans were then delivered, and the film measurements were compared with the original treatment plans using gamma criteria of 3%/1 mm and 3%/2 mm. The shift required to align the film-measured dose with the calculated dose distribution was attributed to targeting error. Targeting accuracy of image-guided treatment using TrueBeam was found to be within 1 mm. For irradiation of the 3 cm target, the gammas (3%, 1

  8. Healing of esophageal anastomoses performed with the biofragmentable anastomosis ring versus the end-to-end anastomosis stapler: comparative experimental study in dogs.

    Science.gov (United States)

    Kovács, Tibor; Köves, István; Orosz, Zsolt; Németh, Tibor; Pandi, Erzsébet; Kralovanszky, Judit

    2003-04-01

    The biofragmentable anastomosis ring (BAR) has been used successfully for anastomoses from the stomach to the upper rectum. The healing of intrathoracic esophageal anastomoses performed with the BAR or an end-to-end anastomosis (EEA) stapler on an experimental model was compared. Parameters of tissue repair were evaluated: macroscopic examination, bursting strength (BS), collagen (hydroxyproline, or HP), histology (H&E and Picrosirius red staining for collagen). A series of 48 mongrel dogs were randomly separated into two groups (30 BAR, 18 stapler) and subgroups according to the time of autopsy (days 4, 7, 14, 28). Mortality was 13.3% (4 BAR cases) with two deaths not related to surgery (excluded). There were four leaks in the BAR group (14.3%) and no leaks or deaths but two strictures in the stapler group. BS was significantly higher in the BAR group during the first week, and values were almost equal from the second week with both methods. The HP rate was significantly reduced on days 4 and 7 in both groups compared to the reference values; the values were close to reference values from the second week (lower in the BAR group). Stapled anastomoses caused less pronounced inflammation and were associated with an earlier start of regeneration, but the difference was not significant compared to that in the BAR group. Accumulation of new collagen (green polarization) started on day 7 in both groups, but maturation (orange-red polarization) was significantly more advanced in the BAR group after the second week. A strong linear correlation between the BS and HP rate was found with both methods. There was no significant difference in the complication rate or healing of intrathoracic BAR and stapled anastomoses. The BAR method is simple, quick, and safe; and it seems to be a feasible procedure for creating intrathoracic esophageal anastomoses in dogs.

  9. End-to-end process of hollow spacecraft structures with high frequency and low mass obtained with in-house structural optimization tool and additive manufacturing

    Directory of Open Access Journals (Sweden)

    Alexandru-Mihai CISMILIANU

    2017-09-01

    Full Text Available In the space sector the most decisive elements are: mass reduction, cost saving and minimum lead time; here, structural optimization and additive layer manufacturing (ALM) fit best. The design must be driven by stiffness, because an important requirement for spacecraft (S/C) structures is to reduce the dynamic coupling between the S/C and the launch vehicle. The objective is to create an end-to-end process, from the input given by the customer to the manufacturing of an aluminum part as light as possible but at the same time considerably stiffer, while taking full advantage of the design flexibility given by ALM. To design and optimize the parts, a specialized in-house tool was used, guaranteeing a load-sufficient material distribution. Using topological optimization, the iterations between the design and the stress departments were diminished, thus greatly reducing the lead time. In order to improve and lighten the obtained structure, a design with internal cavities and hollow beams was considered. This implied developing a procedure for powder evacuation through iterations with the manufacturer while optimizing the design for ALM. The resulting part can then be manufactured via ALM with no need for further design adjustments. To achieve a high-quality part with maximum efficiency, it is essential to have a loop between the design team and the manufacturer. Topological optimization and ALM work hand in hand if used properly. The team achieved a more efficient structure using topology optimization and ALM than using conventional design and manufacturing methods.

  10. Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images Based on an End-to-End Deep Neural Network.

    Science.gov (United States)

    Wang, Zhiwei; Liu, Chaoyue; Cheng, Danpeng; Wang, Liang; Yang, Xin; Cheng, Kwang-Ting

    2018-05-01

    Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parameter magnetic resonance images (mp-MRI) are of high demand. Existing methods typically employ several separate steps, each of which is optimized individually without considering the error tolerance of other steps. As a result, they could either involve unnecessary computational cost or suffer from errors accumulated over steps. In this paper, we present an automated CS PCa detection system, where all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed neural network consists of concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed for optimizing all parameters of the proposed TDN and CNN. In the training phase, the two nets mutually affect each other and effectively guide registration and extraction of representative CS PCa-relevant features to achieve results with sufficient accuracy. The entire network is trained in a weakly supervised manner by providing only image-level annotations (i.e., presence/absence of PCa) without exact priors of lesions' locations. Compared with most existing systems which require supervised labels, e.g., manual delineation of PCa lesions, it is much more convenient for clinical usage. Comprehensive evaluation based on fivefold cross validation using 360 patient data demonstrates that our system achieves a high accuracy for CS PCa detection, i.e., a sensitivity of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient.
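
    The abstract describes training with a weighted sum of classification, inconsistency and overlap losses. As a schematic illustration only (plain NumPy on toy arrays; the weights, the mean-squared inconsistency term and the soft-Dice-style overlap term are assumptions, not the authors' definitions), the sketch below combines the three terms into a single training objective.

```python
import numpy as np

def classification_loss(pred, label, eps=1e-7):
    """Image-level classification loss (binary cross-entropy)."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(label * np.log(pred) + (1 - label) * np.log(1 - pred)).mean())

def inconsistency_loss(feat_a, feat_b):
    """Penalize disagreement between features of two registered modalities
    (assumed here to be a mean-squared difference)."""
    return float(np.mean((feat_a - feat_b) ** 2))

def overlap_loss(mask_a, mask_b, eps=1e-7):
    """Encourage overlap between detected prostate regions in two modalities
    (assumed here to be a soft-Dice-style term: 1 - Dice)."""
    inter = 2.0 * np.sum(mask_a * mask_b)
    return float(1.0 - inter / (np.sum(mask_a) + np.sum(mask_b) + eps))

def total_loss(pred, label, feat_a, feat_b, mask_a, mask_b,
               w_cls=1.0, w_inc=0.1, w_ovl=0.1):
    return (w_cls * classification_loss(pred, label)
            + w_inc * inconsistency_loss(feat_a, feat_b)
            + w_ovl * overlap_loss(mask_a, mask_b))

# Toy arrays standing in for network outputs on one mp-MRI case.
rng = np.random.default_rng(1)
print(total_loss(pred=np.array([0.8]), label=np.array([1.0]),
                 feat_a=rng.normal(size=(8, 8)), feat_b=rng.normal(size=(8, 8)),
                 mask_a=rng.random((32, 32)), mask_b=rng.random((32, 32))))
```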

  11. Combined fishing and climate forcing in the southern Benguela upwelling ecosystem: an end-to-end modelling approach reveals dampened effects.

    Directory of Open Access Journals (Sweden)

    Morgane Travers-Trolet

    Full Text Available The effects of climate and fishing on marine ecosystems have usually been studied separately, but their interactions make ecosystem dynamics difficult to understand and predict. Of particular interest to management, the potential synergism or antagonism between fishing pressure and climate forcing is analysed in this paper, using an end-to-end ecosystem model of the southern Benguela ecosystem, built from coupling hydrodynamic, biogeochemical and multispecies fish models (ROMS-N2P2Z2D2-OSMOSE. Scenarios of different intensities of upwelling-favourable wind stress combined with scenarios of fishing top-predator fish were tested. Analyses of isolated drivers show that the bottom-up effect of the climate forcing propagates up the food chain whereas the top-down effect of fishing cascades down to zooplankton in unfavourable environmental conditions but dampens before it reaches phytoplankton. When considering both climate and fishing drivers together, it appears that top-down control dominates the link between top-predator fish and forage fish, whereas interactions between the lower trophic levels are dominated by bottom-up control. The forage fish functional group appears to be a central component of this ecosystem, being the meeting point of two opposite trophic controls. The set of combined scenarios shows that fishing pressure and upwelling-favourable wind stress have mostly dampened effects on fish populations, compared to predictions from the separate effects of the stressors. Dampened effects result in biomass accumulation at the top predator fish level but a depletion of biomass at the forage fish level. This should draw our attention to the evolution of this functional group, which appears as both structurally important in the trophic functioning of the ecosystem, and very sensitive to climate and fishing pressures. In particular, diagnoses considering fishing pressure only might be more optimistic than those that consider combined effects

  12. Laboratory simulation of Euclid-like sky images to study the impact of CCD radiation damage on weak gravitational lensing

    Science.gov (United States)

    Prod'homme, T.; Verhoeve, P.; Oosterbroek, T.; Boudin, N.; Short, A.; Kohley, R.

    2014-07-01

    Euclid is the ESA mission to map the geometry of the dark universe. It uses weak gravitational lensing, which requires the accurate measurement of galaxy shapes over a large area in the sky. Radiation damage in the 36 Charge-Coupled Devices (CCDs) composing the Euclid visible imager focal plane has already been identified as a major contributor to the weak-lensing error budget; radiation-induced charge transfer inefficiency (CTI) distorts the galaxy images and introduces a bias in the galaxy shape measurement. We designed a laboratory experiment to project Euclid-like sky images onto an irradiated Euclid CCD. In this way - and for the first time - we are able to directly assess the effect of CTI on the Euclid weak-lensing measurement free of modelling uncertainties. We present here the experiment concept, setup, and first results. The results of such an experiment provide test data critical to refine models, design and test the Euclid data processing CTI mitigation scheme, and further optimize the Euclid CCD operation.
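
    Radiation-induced CTI leaves a charge trail behind bright sources as the image is clocked toward the readout node, which is what biases the galaxy shape measurement. As a highly simplified illustration of that effect (a single trap species with fixed capture and release fractions per pixel read; not the Euclid detector model or its CTI mitigation scheme), the sketch below reads out one CCD column and shows the trailing it produces.

```python
import numpy as np

def readout_with_cti(column, capture=0.02, release=0.3):
    """Toy serial readout of one CCD column (pixel 0 read first). For each pixel,
    a fraction `capture` of its charge is caught by traps, and a fraction
    `release` of the currently trapped charge is re-emitted into that pixel's
    packet. A single-species illustration only."""
    trapped = 0.0
    out = []
    for charge in np.asarray(column, dtype=float):
        emitted = release * trapped          # charge released into this packet
        trapped -= emitted
        caught = capture * charge            # charge newly captured from it
        trapped += caught
        out.append(charge - caught + emitted)
    return np.array(out)

# Hypothetical column: flat sky background with one bright star pixel.
column = np.full(20, 10.0)
column[5] = 1000.0
trailed = readout_with_cti(column)
print(np.round(trailed, 1))   # pixels after index 5 show the CTI trail
```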

  13. A novel PON based UMTS broadband wireless access network architecture with an algorithm to guarantee end to end QoS

    Science.gov (United States)

    Sana, Ajaz; Hussain, Shahab; Ali, Mohammed A.; Ahmed, Samir

    2007-09-01

    In this paper we propose a novel Passive Optical Network (PON) based broadband wireless access network architecture to provide multimedia services (video telephony, video streaming, mobile TV, mobile email, etc.) to mobile users. In conventional wireless access networks, the base stations (Node B) and Radio Network Controllers (RNC) are connected by point-to-point T1/E1 lines (Iub interface). The T1/E1 lines are expensive and add to operating costs. Also, the resources (transceivers and T1/E1) are designed for peak-hour traffic, so most of the time the dedicated resources are idle and wasted. Furthermore, the T1/E1 lines are not capable of supporting the bandwidth (BW) required by next generation wireless multimedia services proposed by High Speed Packet Access (HSPA, Rel. 5) for the Universal Mobile Telecommunications System (UMTS) and Evolution Data Only (EV-DO) for Code Division Multiple Access 2000 (CDMA2000). The proposed PON based backhaul can provide gigabit data rates, and the Iub interface can be dynamically shared by Node Bs. The BW is dynamically allocated, and the unused BW from lightly loaded Node Bs is assigned to heavily loaded Node Bs. We also propose a novel algorithm to provide end to end Quality of Service (QoS) between the RNC and the user equipment. The algorithm provides QoS bounds in the wired domain as well as in the wireless domain, with compensation for wireless link errors. Because of the air interface there can be certain times when the user equipment (UE) is unable to communicate with the Node B (usually referred to as a link error). The link errors are bursty and location dependent. In the proposed approach, the scheduler at the Node B maps QoS priorities and weights into the wireless MAC. Compensation for errored links is provided by swapping services between the active users, and the user data is divided into flows, with flows allowed to lag or lead. The algorithm guarantees (1) delay and throughput for error-free flows, (2) short-term fairness

  14. SU-E-J-25: End-To-End (E2E) Testing On TomoHDA System Using a Real Pig Head for Intracranial Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Corradini, N; Leick, M; Bonetti, M; Negretti, L [Clinica Luganese, Radiotherapy Center, Lugano (Switzerland)

    2015-06-15

    Purpose: To determine the MVCT imaging uncertainty on the TomoHDA system for intracranial radiosurgery treatments. To determine the end-to-end (E2E) overall accuracy of the TomoHDA system for intracranial radiosurgery. Methods: A pig head was obtained from the butcher, cut coronally through the brain, and preserved in formaldehyde. The base of the head was fixed to a positioning plate allowing precise movement, i.e. translation and rotation, in all 6 axes. A repeatability test was performed on the pig head to determine uncertainty in the image bone registration algorithm. Furthermore, the test studied images with MVCT slice thicknesses of 1 and 3 mm in unison with differing scan lengths. A sensitivity test was performed to determine the registration algorithm’s ability to find the absolute position of known translations/rotations of the pig head. The algorithm’s ability to determine absolute position was compared against that of manual operators, i.e. a radiation therapist and radiation oncologist. Finally, E2E tests for intracranial radiosurgery were performed by measuring the delivered dose distributions within the pig head using Gafchromic films. Results: The repeatability test uncertainty was lowest for the MVCTs of 1-mm slice thickness, which measured less than 0.10 mm and 0.12 deg for all axes. For the sensitivity tests, the bone registration algorithm performed better than human eyes and a maximum difference of 0.3 mm and 0.4 deg was observed for the axes. E2E test results in absolute position difference measured 0.03 ± 0.21 mm in x-axis and 0.28 ± 0.18 mm in y-axis. A maximum difference of 0.32 and 0.66 mm was observed in x and y, respectively. The average peak dose difference between measured and calculated dose was 2.7 cGy or 0.4%. Conclusion: Our tests using a pig head phantom estimate the TomoHDA system to have a submillimeter overall accuracy for intracranial radiosurgery.

  15. SU-E-J-25: End-To-End (E2E) Testing On TomoHDA System Using a Real Pig Head for Intracranial Radiosurgery

    International Nuclear Information System (INIS)

    Corradini, N; Leick, M; Bonetti, M; Negretti, L

    2015-01-01

    Purpose: To determine the MVCT imaging uncertainty on the TomoHDA system for intracranial radiosurgery treatments. To determine the end-to-end (E2E) overall accuracy of the TomoHDA system for intracranial radiosurgery. Methods: A pig head was obtained from the butcher, cut coronally through the brain, and preserved in formaldehyde. The base of the head was fixed to a positioning plate allowing precise movement, i.e. translation and rotation, in all 6 axes. A repeatability test was performed on the pig head to determine uncertainty in the image bone registration algorithm. Furthermore, the test studied images with MVCT slice thicknesses of 1 and 3 mm in unison with differing scan lengths. A sensitivity test was performed to determine the registration algorithm’s ability to find the absolute position of known translations/rotations of the pig head. The algorithm’s ability to determine absolute position was compared against that of manual operators, i.e. a radiation therapist and radiation oncologist. Finally, E2E tests for intracranial radiosurgery were performed by measuring the delivered dose distributions within the pig head using Gafchromic films. Results: The repeatability test uncertainty was lowest for the MVCTs of 1-mm slice thickness, which measured less than 0.10 mm and 0.12 deg for all axes. For the sensitivity tests, the bone registration algorithm performed better than human eyes and a maximum difference of 0.3 mm and 0.4 deg was observed for the axes. E2E test results in absolute position difference measured 0.03 ± 0.21 mm in x-axis and 0.28 ± 0.18 mm in y-axis. A maximum difference of 0.32 and 0.66 mm was observed in x and y, respectively. The average peak dose difference between measured and calculated dose was 2.7 cGy or 0.4%. Conclusion: Our tests using a pig head phantom estimate the TomoHDA system to have a submillimeter overall accuracy for intracranial radiosurgery

  16. OpenCyto: an open source infrastructure for scalable, robust, reproducible, and automated, end-to-end flow cytometry data analysis.

    Directory of Open Access Journals (Sweden)

    Greg Finak

    2014-08-01

    Full Text Available Flow cytometry is used increasingly in clinical research for cancer, immunology and vaccines. Technological advances in cytometry instrumentation are increasing the size and dimensionality of data sets, posing a challenge for traditional data management and analysis. Automated analysis methods, despite a general consensus of their importance to the future of the field, have been slow to gain widespread adoption. Here we present OpenCyto, a new BioConductor infrastructure and data analysis framework designed to lower the barrier of entry to automated flow data analysis algorithms by addressing key areas that we believe have held back wider adoption of automated approaches. OpenCyto supports end-to-end data analysis that is robust and reproducible while generating results that are easy to interpret. We have improved the existing, widely used core BioConductor flow cytometry infrastructure by allowing analysis to scale in a memory efficient manner to the large flow data sets that arise in clinical trials, and integrating domain-specific knowledge as part of the pipeline through the hierarchical relationships among cell populations. Pipelines are defined through a text-based csv file, limiting the need to write data-specific code, and are data agnostic to simplify repetitive analysis for core facilities. We demonstrate how to analyze two large cytometry data sets: an intracellular cytokine staining (ICS) data set from a published HIV vaccine trial focused on detecting rare, antigen-specific T-cell populations, where we identify a new subset of CD8 T-cells with a vaccine-regimen specific response that could not be identified through manual analysis, and a CyTOF T-cell phenotyping data set where a large staining panel and many cell populations are a challenge for traditional analysis. The substantial improvements to the core BioConductor flow cytometry packages give OpenCyto the potential for wide adoption. It can rapidly leverage new developments in

  17. SU-E-T-19: A New End-To-End Test Method for ExacTrac for Radiation and Plan Isocenter Congruence

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Nguyen, N; Liu, F; Huang, Y [Rhode Island Hospital / Warren Alpert Medical, Providence, RI (United States); Sio, T [Mayo Clinic, Rochester, MN (United States); Jung, J [East Carolina University, Greenville, North Carolina (United States); Pyakuryal, A [UniversityIllinois at Chicago, Chicago, IL (United States); Jang, S [Princeton Radiation Oncology Ctr., Jamesburg, NJ (United States)

    2014-06-01

    Purpose: To combine and integrate quality assurance (QA) of target localization and the radiation isocenter End to End (E2E) test of the BrainLAB ExacTrac system, a new QA approach was devised using an anthropomorphic head and neck phantom. This test ensures target localization as well as radiation isocenter congruence, which is one step beyond the current ExacTrac QA procedures. Methods: The head and neck phantom typically used for the CyberKnife E2E test was irradiated to a sphere target that was visible in CT-sim images. The CT-sim was performed using 1 mm slice thickness with a helical scanning technique. The sphere was 3 cm in diameter and was contoured as a target volume using iPlan V.4.5.2. An MLC-based conformal arc plan was generated with 7 fields, five of which included couch rotations. The prescription dose was 5 Gy with 95% coverage of the target volume. For the irradiation, two Gafchromic films were perpendicularly inserted into the cube that holds the sphere inside. The linac used for the irradiation was a TrueBeam STx equipped with an HD120 MLC. In order to use ExacTrac, an infrared head array was used to correlate the orthogonal X-ray images. Results: The phantom was positioned using the orthogonal X-rays of ExacTrac. For each field, the phantom was checked again with X-rays and re-positioned if necessary. After each setup using ExacTrac, the target was irradiated. The films were analyzed to determine the deviation of the radiation isocenter in all three dimensions: superior-inferior, left-right and anterior-posterior. The total combined error was found to be 0.76 mm ± 0.05 mm, which is within sub-millimeter accuracy. Conclusion: Until now, the E2E test for ExacTrac was implemented separately to test image localization and the radiation isocenter. This new method can be used for periodic QA procedures.

  18. RTEMP: Exploring an end-to-end, agnostic platform for multidisciplinary real-time analytics in the space physics community and beyond

    Science.gov (United States)

    Chaddock, D.; Donovan, E.; Spanswick, E.; Jackel, B. J.

    2014-12-01

    Large-scale, real-time, sensor-driven analytics are a highly effective set of tools in many research environments; however, entry is expensive and the learning curve is steep. These systems need to operate efficiently from end to end, with the key aspects being data transmission, acquisition, management and organization, and retrieval. When building a generic multidisciplinary platform, acquisition and data management need to be designed with scalability and flexibility as the primary focus. Additionally, in order to leverage current sensor web technologies, the integration of common sensor data standards (i.e., SensorML and SWE Services) should be supported. Perhaps most importantly, researchers should be able to get started and integrate the platform into their set of research tools as easily and quickly as possible. The largest issue with current platforms is that the sensor data must be formed and described using the previously mentioned standards. As useful as these standards are for organizing data, they are cumbersome to adopt, often restrictive, and are required to be geospatially-driven. Our solution, RTEMP (Real-time Environment Monitoring Platform), is a real-time analytics platform with over ten years and an estimated two million dollars of investment. It has been developed for our continuously expanding requirements of operating and building remote sensors and supporting equipment for space physics research. A key benefit of our approach is RTEMP's ability to manage agnostic data. This allows data that flows through the system to be structured in any way that best addresses the needs of the sensor operators and data users, enabling extensive flexibility and streamlined development and research. Here we begin with an overview of RTEMP and how it is structured. Additionally, we will showcase the ways that we are using RTEMP and how it is being adopted by researchers in an increasingly broad range of other research fields. We will lay out a

  19. End-to-End System Test of the Relative Precision and Stability of the Photometric Method for Detecting Earth-Size Extrasolar Planets

    Science.gov (United States)

    Dunham, Edward W.

    2000-01-01

    We developed the CCD camera system for the laboratory test demonstration and designed the optical system for this test. The camera system was delivered to Ames in April, 1999 with continuing support mostly in the software area as the test progressed. The camera system has been operating successfully since delivery. The optical system performed well during the test. The laboratory demonstration activity is now nearly complete and is considered to be successful by the Technical Advisory Group, which met on 8 February, 2000 at the SETI Institute. A final report for the Technical Advisory Group and NASA Headquarters will be produced in the next few months. This report will be a comprehensive report on all facets of the test including those covered under this grant. A copy will be forwarded, if desired, when it is complete.

  20. SU-F-J-150: Development of An End-To-End Chain Test for the First-In-Man MR-Guided Treatments with the MRI Linear Accelerator by Using the Alderson Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Hoogcarspel, S; Kerkmeijer, L; Lagendijk, J; Van Vulpen, M; Raaymakers, B [University Medical Center Utrecht, Utrecht, Utrecht (Netherlands)

    2016-06-15

    The Alderson phantom is a human-shaped quality assurance tool that has been used for over 30 years in radiotherapy. The phantom can provide integrated tests of the entire chain of treatment planning and delivery. The purpose of this research was to investigate whether this phantom can be used to chain test a treatment on the MRI linear accelerator (MRL), which is currently being developed at the UMC Utrecht in collaboration with Elekta and Philips. The latter was demonstrated by chain testing the future First-in-Man treatments with this system. An Alderson phantom was used to chain test an entire treatment with the MRL. First, a CT of the phantom was acquired with additional markers that are visible on both MR and CT. A treatment plan for treating bone metastases in the sacrum was made. The phantom was subsequently placed in the MRL. For MR imaging, a 3D volume was acquired. The initially developed treatment plan was then simulated on the new MRI dataset. For simulation, both the MR and CT data were used by registering them together. Before treatment delivery, an MV image was acquired and compared with a DRR that was calculated from the MR/CT registration data. Finally, the treatment was delivered. Figure 1 shows both the T1-weighted MR image of the phantom and the CT that was registered to the MR image. Figure 2 shows both the calculated and measured MV image that was acquired by the MV panel. Figure 3 shows the dose distribution that was simulated. The total elapsed time for the entire procedure, excluding irradiation, was 13:35 minutes. The Alderson phantom yields sufficient MR contrast and can be used for full MR-guided radiotherapy treatment chain testing. As a result, we are able to perform an end-to-end chain test of the future First-in-Man treatments.

  1. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems' handling of off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA has also formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  2. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW

  3. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  4. Demonstration of the First Real-Time End-to-End 40-Gb/s PAM-4 for Next-Generation Access Applications using 10-Gb/s Transmitter

    DEFF Research Database (Denmark)

    Wei, J. L.; Eiselt, Nicklas; Griesser, Helmut

    2016-01-01

    We demonstrate the first known experiment of a real-time end-to-end 40-Gb/s PAM-4 system for next-generation access applications using 10-Gb/s class transmitters only. Based on the measurement of a real-time 40-Gb/s PAM system, low-cost upstream and downstream link power budgets are estimated. Up...

  5. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for X-ray imaging at the JANUS, USP, and NOVA laser facilities.

  6. Treatment of a partially thrombosed giant aneurysm of the vertebral artery by aneurysm trapping and direct vertebral artery-posterior inferior cerebellar artery end-to-end anastomosis: technical case report.

    Science.gov (United States)

    Benes, Ludwig; Kappus, Christoph; Sure, Ulrich; Bertalanffy, Helmut

    2006-07-01

    The purpose of this article is to focus for the first time on the operative management of a direct vertebral artery (VA)-posterior inferior cerebellar artery (PICA) end-to-end anastomosis in a partially thrombosed giant VA-PICA-complex aneurysm and to underline its usefulness as an additional treatment option. The operative technique of a direct VA-PICA end-to-end anastomosis is described in detail. The VA entered the large aneurysm sac. Distally, the PICA originated from the aneurysm sac-VA complex. The donor and recipient vessels were cut close to the aneurysm. Whereas the VA was cut in a straight manner, the PICA was cut at an oblique 45-degree angle to enlarge the vascular end diameter. Vessel ends were flushed with heparinized saline and sutured. The thrombotic material inside the aneurysm sac was removed and the distal VA clipped, leaving the anterior spinal artery and brainstem perforators free. The patient regained consciousness without additional morbidity. Magnetic resonance imaging scans revealed a completely decompressed brainstem without infarction. The postoperative angiograms demonstrated good filling of the anastomosed PICA. Despite the caliber mismatch of these two vessels, the direct VA-PICA end-to-end anastomosis provides an accurate alternative in addition to other anastomoses and bypass techniques, when donor and recipient vessels are suitable and medullary perforators do not have to be disrupted.

  7. National Renewable Energy Laboratory (NREL) Topic 2 Final Report: End-to-End Communication and Control System to Support Clean Energy Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Hudgins, Andrew P. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Carrillo, Ismael M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Jin, Xin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Simmins, John [Electric Power Research Inst. (EPRI), Palo Alto, CA (United States)

    2018-02-21

    This document is the final report of a two-year development, test, and demonstration project, 'Cohesive Application of Standards-Based Connected Devices to Enable Clean Energy Technologies.' The project was part of the National Renewable Energy Laboratory's (NREL's) Integrated Network Testbed for Energy Grid Research and Technology (INTEGRATE) initiative hosted at the Energy Systems Integration Facility (ESIF). This project demonstrated techniques to control distribution grid events using the coordination of traditional distribution grid devices, high-penetration renewable resources, and demand response. Using standard communication protocols and semantic standards, the project examined the use cases of high/low distribution voltage, requests for volt-ampere-reactive (VAR) power support, and transactive energy strategies using Volttron. Open-source software, written by EPRI to control distributed energy resources (DER) and demand response (DR), was used by an advanced distribution management system (ADMS) to abstract the reporting resources into a collection of capabilities rather than needing to know specific resource types. This architecture allows for scaling both horizontally and vertically. Several new technologies were developed and tested. Messages from the ADMS based on the common information model (CIM) were developed to control the DER and DR management systems. The OpenADR standard was used to help manage grid events by turning loads off and on. Volttron technology was used to simulate a homeowner choosing the price at which to enter the demand response market. Finally, the ADMS used newly developed algorithms to coordinate these resources with a capacitor bank and voltage regulator to respond to grid events.
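
    The report mentions simulating a homeowner choosing the price at which to enter the demand-response market. The Python sketch below is a generic, hypothetical illustration of that decision logic only; it does not use the Volttron or OpenADR APIs, and the class names, prices, and loads are invented for illustration.

      from dataclasses import dataclass

      @dataclass
      class Homeowner:
          """Hypothetical participant who curtails load above a chosen price."""
          name: str
          bid_price: float     # $/kWh at which the homeowner agrees to curtail
          sheddable_kw: float  # load that can be switched off

      def curtailment(participants, market_price):
          """Total load (kW) shed at the current market price."""
          return sum(p.sheddable_kw for p in participants if market_price >= p.bid_price)

      homes = [Homeowner("A", 0.25, 1.2), Homeowner("B", 0.40, 0.8)]
      print(curtailment(homes, 0.30))  # only homeowner A curtails -> 1.2 kW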

  8. SU-F-P-39: End-To-End Validation of a 6 MV High Dose Rate Photon Beam, Configured for Eclipse AAA Algorithm Using Golden Beam Data, for SBRT Treatments Using RapidArc

    Energy Technology Data Exchange (ETDEWEB)

    Ferreyra, M; Salinas Aranda, F; Dodat, D; Sansogne, R; Arbiser, S [Vidt Centro Medico, Ciudad Autonoma De Buenos Aires, Ciudad Autonoma de Buenos Aire (Argentina)

    2016-06-15

    Purpose: To use end-to-end testing to validate a 6 MV high dose rate photon beam, configured for the Eclipse AAA algorithm using Golden Beam Data (GBD), for SBRT treatments using RapidArc. Methods: Beam data was configured for the Varian Eclipse AAA algorithm using the GBD provided by the vendor. Transverse and diagonal dose profiles, PDDs and output factors down to a field size of 2×2 cm² were measured on a Varian Trilogy linac and compared with the GBD library using 2%/2 mm 1D gamma analysis. The MLC transmission factor and dosimetric leaf gap were determined to characterize the MLC in Eclipse. Mechanical and dosimetric tests were performed combining different gantry rotation speeds, dose rates and leaf speeds to evaluate the delivery system performance according to VMAT accuracy requirements. An end-to-end test was implemented by planning several SBRT RapidArc treatments on a CIRS 002LFC IMRT Thorax Phantom. The CT scanner calibration curve was acquired and loaded in Eclipse. A PTW 31013 ionization chamber was used with a Keithley 35617EBS electrometer for absolute point dose measurements in water and lung-equivalent inserts. TPS-calculated planar dose distributions were compared to those measured using EPID and MapCheck, as an independent verification method. Results were evaluated with gamma criteria of 2% dose difference and 2 mm DTA for 95% of points. Results: The GBD set vs. measured data passed the 2%/2 mm 1D gamma analysis even for small fields. Machine performance tests show that results are independent of machine delivery configuration, as expected. Absolute point dosimetry comparison resulted within 4% for the worst-case scenario in lung. Over 97% of the points evaluated in dose distributions passed the gamma index analysis. Conclusion: Eclipse AAA algorithm configuration of the 6 MV high dose rate photon beam using GBD proved efficient. End-to-end test dose calculation results indicate it can be used clinically for SBRT using RapidArc.
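
    The beam-data comparison above relies on a 2%/2 mm 1D gamma analysis. The sketch below shows a common way such a 1D gamma index can be computed with NumPy, assuming global dose normalization; it illustrates the general method only and is not the analysis tool used in the study.

      import numpy as np

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.02, dist_tol_mm=2.0):
          """1D gamma index (global normalization) for each reference point."""
          d_max = d_ref.max()
          gammas = []
          for xr, dr in zip(x_ref, d_ref):
              dose_term = (d_eval - dr) / (dose_tol * d_max)
              dist_term = (x_eval - xr) / dist_tol_mm
              gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
          return np.array(gammas)

      # Hypothetical profiles; the passing criterion is gamma <= 1 for >= 95% of points.
      rng = np.random.default_rng(0)
      x = np.linspace(-50.0, 50.0, 201)                  # position in mm
      ref = np.exp(-(x / 30.0) ** 4)                     # reference (GBD-like) profile
      meas = ref * (1.0 + 0.01 * rng.standard_normal(x.size))
      g = gamma_1d(x, ref, x, meas)
      print(f"pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")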

  9. SIP end to end performance metrics

    OpenAIRE

    Vozňák, Miroslav; Rozhon, Jan

    2012-01-01

    The paper deals with a SIP performance testing methodology. The main contribution to the field of SIP infrastructure performance testing is the possibility of performing standardized stress tests with the developed SIP TesterApp without deep knowledge of SIP communication. The tool exploits several open-source technologies, such as jQuery, Python, JSON and the cornerstone SIP generator SIPp; the result is highly modifiable and the ...

  10. CASTOR end-to-end monitoring

    International Nuclear Information System (INIS)

    Rekatsinas, Theodoros; Duellmann, Dirk; Pokorski, Witold; Ponce, Sebastien; Rabacal, Bartolomeu; Waldron, Dennis; Wojcieszuk, Jacek

    2010-01-01

    With the start of the Large Hadron Collider approaching, storage and management of raw event data, as well as reconstruction and analysis data, is of crucial importance for the researchers. The CERN Advanced STORage system (CASTOR) is a hierarchical system developed at CERN, used to store physics production files and user files. CASTOR, as one of the essential software tools used by the LHC experiments, has to provide reliable services for storing and managing data. Monitoring of this complicated system is mandatory in order to assure its stable operation and improve its future performance. This paper presents the new monitoring system of CASTOR, which provides operation- and user-request-specific metrics. This system is built around a dedicated, optimized database schema. The schema is populated by PL/SQL procedures, which process a stream of incoming raw metadata from different CASTOR components, initially collected by the Distributed Logging Facility (DLF). A web interface has been developed for the visualization of the monitoring data. The different histograms and plots are created using PHP scripts which query the monitoring database.

  11. End-to-end energy efficient communication

    DEFF Research Database (Denmark)

    Dittmann, Lars

    Awareness of energy consumption in communication networks such as the Internet is currently gaining momentum, as it is commonly acknowledged that increased network capacity (currently driven by video applications) requires significantly more electrical power. This paper stresses the importance...

  12. Charge diffusion in CCD X-ray detectors

    International Nuclear Information System (INIS)

    Pavlov, George G.; Nousek, John A.

    1999-01-01

    Critical to the detection of X-rays by CCDs is the detailed process of charge diffusion and drift within the device. We reexamine the prescriptions currently used in the modeling of X-ray CCD detectors to provide analytic expressions for the charge distribution over the CCD pixels that are suitable for use in numerical simulations of CCD response. Our treatment results in models that predict charge distributions which are more centrally peaked and have flatter wings than the Gaussian shapes predicted by previous work and adopted in current CCD modeling codes
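
    As a baseline for the kind of pixel response model discussed above, the sketch below integrates a simple Gaussian charge cloud over pixel boundaries using the error function; the pixel pitch, diffusion width, and interaction point are assumed values. The paper's own prescriptions are more centrally peaked with flatter wings than this Gaussian approximation.

      import numpy as np
      from math import erf, sqrt

      def gaussian_pixel_fractions(x0, y0, sigma, pixel_um=15.0, grid=3):
          """Fraction of a Gaussian charge cloud collected in each pixel of a grid.

          (x0, y0) is the interaction point in microns relative to the centre of
          the central pixel; sigma is the diffusion width in microns.
          """
          half = grid // 2
          edges = (np.arange(grid + 1) - half - 0.5) * pixel_um

          def cdf(z):
              return 0.5 * (1.0 + erf(z / (sigma * sqrt(2.0))))

          fx = np.diff([cdf(e - x0) for e in edges])   # charge split along x
          fy = np.diff([cdf(e - y0) for e in edges])   # charge split along y
          return np.outer(fy, fx)                      # rows = y pixels, cols = x pixels

      frac = gaussian_pixel_fractions(x0=3.0, y0=-2.0, sigma=6.0)
      print(frac.round(3), frac.sum())                 # fractions in a 3x3 neighbourhood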

  13. CCD's at TPC

    International Nuclear Information System (INIS)

    Zeller, M.E.

    1977-01-01

    The CCD, Charge Coupled Device, is an analog shift register whose application to the readout of particle detectors has recently been realized. These devices can be used to detect optical information directly, providing an automated readout for streamer or other optical chambers, or as a single-input shift register, acting in this instance as a delay line for analog information. A description is given of the latter mode of operation and its utility as a readout method for drift chambers. Most of the information contained herein has been obtained from tests performed in connection with the PEP TPC project, PEP-4. That detector will employ approximately 10^4 CCDs, making it a reasonable testing ground for ISABELLE-size detectors.

  14. Electronic remote blood issue: a combination of remote blood issue with a system for end-to-end electronic control of transfusion to provide a "total solution" for a safe and timely hospital blood transfusion service.

    Science.gov (United States)

    Staves, Julie; Davies, Amanda; Kay, Jonathan; Pearson, Oliver; Johnson, Tony; Murphy, Michael F

    2008-03-01

    The rapid provision of red cell (RBC) units to patients needing blood urgently is an issue of major importance in transfusion medicine. The development of electronic issue (sometimes termed "electronic crossmatch") has facilitated rapid provision of RBC units by avoidance of the serologic crossmatch in eligible patients. A further development is the issue of blood under electronic control at blood refrigerators remote from the blood bank. This study evaluated a system for electronic remote blood issue (ERBI) developed as an enhancement of a system for end-to-end electronic control of hospital transfusion. Practice was evaluated before and after its introduction in cardiac surgery. Before the implementation of ERBI, the median time to deliver urgently required RBC units to the patient was 24 minutes. After its implementation, RBC units were obtained from the nearby blood refrigerator in a median time of 59 seconds (range, 30 sec to 2 min). The study also found that unused requests were reduced significantly from 42 to 20 percent, the number of RBC units issued was reduced by 52 percent, the number of issued units that were transfused increased from 40 to 62 percent, and there was a significant reduction in the workload of both blood bank and clinical staff. This study evaluated a combination of remote blood issue with an end-to-end electronically controlled hospital transfusion process, ERBI. ERBI reduced the time to make blood available for surgical patients and improved the efficiency of hospital transfusion.

  15. Safety and efficacy of the NiTi Shape Memory Compression Anastomosis Ring (CAR/ColonRing) for end-to-end compression anastomosis in anterior resection or low anterior resection.

    Science.gov (United States)

    Kang, Jeonghyun; Park, Min Geun; Hur, Hyuk; Min, Byung Soh; Lee, Kang Young; Kim, Nam Kyu

    2013-04-01

    Compression anastomoses may represent an improvement over traditional hand-sewn or stapled techniques. This prospective exploratory study aimed to assess the efficacy and complication rates in patients undergoing anterior resection (AR) or low anterior resection (LAR) anastomosed with a novel end-to-end compression anastomosis ring, the ColonRing. In all, 20 patients (13 male) undergoing AR or LAR were enrolled to be anastomosed using the NiTi Shape Memory End-to-End Compression Anastomosis Ring (NiTi Medical Technologies Ltd, Netanya, Israel). Demographic, intraoperative, and postoperative data were collected. Patients underwent AR (11/20) or LAR using laparoscopy (75%), robotic (10%) surgery, or an open laparotomy (15%) approach, with a median anastomotic level of 14.5 cm (range, 4-25 cm). Defunctioning loop ileostomies were formed in 6 patients for low anastomoses. Surgeons rated the ColonRing device as either easy or very easy to use. One patient developed an anastomotic leakage in the early postoperative period; there were no late postoperative complications. Mean time to passage of first flatus and commencement of oral fluids was 2.5 days and 3.2 days, respectively. Average hospital stay was 12.6 days (range, 8-23 days). Finally, the device was expelled on average 15.3 days postoperatively without difficulty. This is the first study reporting results in a significant number of LAR patients and the first reported experience from South Korea; it shows that the compression technique is surgically feasible, easy to use, and without significant complication rates. A large randomized controlled trial is warranted to investigate the benefits of the ColonRing over traditional stapling techniques.

  16. Efficacy and safety of a NiTi CAR 27 compression ring for end-to-end anastomosis compared with conventional staplers: A real-world analysis in Chinese colorectal cancer patients

    Science.gov (United States)

    Lu, Zhenhai; Peng, Jianhong; Li, Cong; Wang, Fulong; Jiang, Wu; Fan, Wenhua; Lin, Junzhong; Wu, Xiaojun; Wan, Desen; Pan, Zhizhong

    2016-01-01

    OBJECTIVES: This study aimed to evaluate the safety and efficacy of a new nickel-titanium shape memory alloy compression anastomosis ring, NiTi CAR 27, in constructing an anastomosis for colorectal cancer resection compared with conventional staples. METHODS: In total, 234 consecutive patients diagnosed with colorectal cancer receiving sigmoidectomy and anterior resection for end-to-end anastomosis from May 2010 to June 2012 were retrospectively analyzed. The postoperative clinical parameters, postoperative complications and 3-year overall survival in 77 patients using a NiTi CAR 27 compression ring (CAR group) and 157 patients with conventional circular staplers (STA group) were compared. RESULTS: There were no statistically significant differences between the patients in the two groups in terms of general demographics and tumor features. A clinically apparent anastomotic leak occurred in 2 patients (2.6%) in the CAR group and in 5 patients (3.2%) in the STA group (p=0.804). These eight patients received a temporary diverting ileostomy. One patient (1.3%) in the CAR group was diagnosed with anastomotic stricture through an electronic colonoscopy after 3 months postoperatively. The incidence of postoperative intestinal obstruction was comparable between the two groups (p=0.192). With a median follow-up duration of 39.6 months, the 3-year overall survival rate was 83.1% in the CAR group and 89.0% in the STA group (p=0.152). CONCLUSIONS: NiTi CAR 27 is safe and effective for colorectal end-to-end anastomosis. Its use is equivalent to that of the conventional circular staplers. This study suggests that NiTi CAR 27 may be a beneficial alternative in colorectal anastomosis in Chinese colorectal cancer patients. PMID:27276395

  17. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  18. STIS-01 CCD Functional

    Science.gov (United States)

    Valenti, Jeff

    2001-07-01

    This activity measures the baseline performance and commandability of the CCD subsystem. Only primary amplifier D is used. Bias, Dark, and Flat Field exposures are taken in order to measure read noise, dark current, CTE, and gain. Numerous bias frames are taken to permit construction of "superbias" frames in which the effects of read noise have been rendered negligible. Dark exposures are made outside the SAA. Full frame and binned observations are made, with binning factors of 1x1 and 2x2. Finally, tungsten lamp exposures are taken through narrow slits to confirm the slit positions in the current database. All exposures are internals. This is a reincarnation of SM3A proposal 8502 with some unnecessary tests removed from the program.
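
    As an illustration of the superbias construction mentioned above, the sketch below median-combines a stack of simulated bias frames with NumPy; the frame count, dimensions, and noise level are assumptions, and the actual STIS pipeline combination may differ in detail.

      import numpy as np

      # Hypothetical stack of N bias frames (N x rows x cols), e.g. read from FITS files.
      rng = np.random.default_rng(1)
      bias_stack = 1000.0 + 3.0 * rng.standard_normal((50, 256, 256))  # ~3 ADU read noise

      # Median-combining N frames suppresses read noise roughly as 1/sqrt(N)
      # (with a small penalty for the median versus the mean), leaving the fixed
      # bias structure that can be subtracted from science frames.
      superbias = np.median(bias_stack, axis=0)
      print(bias_stack[0].std(), superbias.std())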

  19. Enhanced performance CCD output amplifier

    Science.gov (United States)

    Dunham, Mark E.; Morley, David W.

    1996-01-01

    A low-noise FET amplifier is connected to amplify the output charge from a charge coupled device (CCD). The FET has its gate connected to the CCD in common source configuration, receiving the output charge signal from the CCD and producing an intermediate signal at the drain of the FET. An intermediate amplifier is connected to the drain of the FET for receiving the intermediate signal and outputting a low-noise signal functionally related to the output charge signal from the CCD. The amplifier is preferably connected as a virtual ground to the FET drain. The inherent shunt capacitance of the FET is selected to be at least equal to the sum of the remaining capacitances.

  20. Rearrangement of potassium ions and Kv1.1/Kv1.2 potassium channels in regenerating axons following end-to-end neurorrhaphy: ionic images from TOF-SIMS.

    Science.gov (United States)

    Liu, Chiung-Hui; Chang, Hung-Ming; Wu, Tsung-Huan; Chen, Li-You; Yang, Yin-Shuo; Tseng, To-Jung; Liao, Wen-Chieh

    2017-10-01

    The voltage-gated potassium channels Kv1.1 and Kv1.2 that cluster at juxtaparanodal (JXP) regions are essential in the regulation of nerve excitability and play a critical role in axonal conduction. When demyelination occurs, Kv1.1/Kv1.2 activity increases, suppressing the membrane potential nearly to the equilibrium potential of K+, which results in an axonal conduction blockade. The recovery of K+-dependent communication signals and proper clustering of Kv1.1/Kv1.2 channels at JXP regions may directly reflect nerve regeneration following peripheral nerve injury. However, little is known about potassium channel expression and its relationship with the dynamic potassium ion distribution at the node of Ranvier during the regenerative process of peripheral nerve injury (PNI). In the present study, end-to-end neurorrhaphy (EEN) was performed using an in vivo model of PNI. The distribution of K+ at regenerating axons following EEN was detected by time-of-flight secondary-ion mass spectrometry. The specific localization and expression of Kv1.1/Kv1.2 channels were examined by confocal microscopy and western blotting. Our data showed that the re-establishment of K+ distribution and intensity was correlated with the functional recovery of compound muscle action potential morphology in EEN rats. Furthermore, the re-clustering of Kv1.1/1.2 channels 1 and 3 months after EEN at the nodal region of the regenerating nerve corresponded to changes in the K+ distribution. This study provided direct evidence of K+ distribution in regenerating axons for the first time. We proposed that the Kv1.1/Kv1.2 channels re-clustered at the JXP regions of regenerating axons are essential for modulating the proper patterns of K+ distribution in axons for maintaining membrane potential stability after EEN.

  1. More Than Bar Codes: Integrating Global Standards-Based Bar Code Technology Into National Health Information Systems in Ethiopia and Pakistan to Increase End-to-End Supply Chain Visibility.

    Science.gov (United States)

    Hara, Liuichi; Guirguis, Ramy; Hummel, Keith; Villanueva, Monica

    2017-12-28

    The United Nations Population Fund (UNFPA) and the United States Agency for International Development (USAID) DELIVER PROJECT work together to strengthen public health commodity supply chains by standardizing bar coding under a single set of global standards. From 2015, UNFPA and USAID collaborated to pilot test how tracking and tracing of bar coded health products could be operationalized in the public health supply chains of Ethiopia and Pakistan and inform the ecosystem needed to begin full implementation. Pakistan had been using proprietary bar codes for inventory management of contraceptive supplies but transitioned to global standards-based bar codes during the pilot. The transition allowed Pakistan to leverage the original bar codes that were preprinted by global manufacturers as opposed to printing new bar codes at the central warehouse. However, barriers at lower service delivery levels prevented full realization of end-to-end data visibility. Key barriers at the district level were the lack of a digital inventory management system and the absence of bar codes on primary-level packaging, such as single blister packs. The team in Ethiopia developed an open-source smartphone application that allowed them to scan bar codes using the mobile phone's camera and to push the captured data to the country's data mart. Real-time tracking and tracing occurred from the central warehouse to the Addis Ababa distribution hub and to 2 health centers. These pilots demonstrated that standardized product identification and bar codes can significantly improve accuracy over manual stock counts while significantly streamlining the stock-taking process, resulting in efficiencies. The pilots also showed that bar coding technology by itself is not sufficient to ensure data visibility. Rather, by using global standards for identification and data capture of pharmaceuticals and medical devices, and integrating the data captured into national and global tracking systems

  2. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and operation of single CCD and IR arrays in low-background/low-noise regimes as well as high-background/high-speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or with a four channel video processor that has built-in image memory and a coadder with 32-bit precision for operating high-background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at a speed of 12.5 megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board under the Sun Solaris, Linux and Windows 2000 operating systems.
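
    The dual slope integrator mentioned above is a hardware form of correlated double sampling (CDS): the reset (reference) level is subtracted from the signal level for every pixel, cancelling reset (kTC) noise and slow drifts. The Python sketch below illustrates that idea numerically; the noise levels and signal range are assumed values, and this is a conceptual model rather than the controller's actual processing chain.

      import numpy as np

      rng = np.random.default_rng(2)
      n_pixels = 1000
      signal_e = rng.uniform(0.0, 5000.0, n_pixels)        # true pixel charge (e-)
      reset_noise = 50.0 * rng.standard_normal(n_pixels)   # kTC noise, common to both samples

      # Two samples per pixel: reference (just after reset) and signal (after charge dump).
      ref_sample = reset_noise + 2.0 * rng.standard_normal(n_pixels)
      sig_sample = reset_noise + signal_e + 2.0 * rng.standard_normal(n_pixels)

      # CDS output: the difference cancels the common reset level and its noise.
      cds = sig_sample - ref_sample
      print("residual rms error (e-):", (cds - signal_e).std())   # ~ sqrt(2) * 2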

  3. Modelling charge storage in Euclid CCD structures

    International Nuclear Information System (INIS)

    Clarke, A S; Holland, A; Hall, D J; Burt, D

    2012-01-01

    The primary aim of ESA's proposed Euclid mission is to observe the distribution of galaxies and galaxy clusters, enabling the mapping of the dark architecture of the universe [1]. This requires a high performance detector, designed to endure a harsh radiation environment. The e2v CCD204 image sensor was redesigned for use on the Euclid mission [2]. The resulting e2v CCD273 has a narrower serial register electrode and transfer channel compared to its predecessor, causing a reduction in the size of charge packets stored, thus reducing the number of traps encountered by the signal electrons during charge transfer and improving the serial Charge Transfer Efficiency (CTE) under irradiation [3]. The proposed Euclid CCD has been modelled using the Silvaco TCAD software [4], to test preliminary calculations for the Full Well Capacity (FWC) and the channel potential of the device and provide indications of the volume occupied by varying signals. These results are essential for the realisation of the mission objectives and for radiation damage studies, with the aim of producing empirically derived formulae to approximate signal-volume characteristics in the devices. These formulae will be used in the radiation damage (charge trapping) models. The Silvaco simulations have been tested against real devices to compare the experimental measurements to those predicted in the models. Using these results, the implications of this study on the Euclid mission can be investigated in more detail.

  4. CCD characterization and measurements automation

    Czech Academy of Sciences Publication Activity Database

    Kotov, I.V.; Frank, J.; Kotov, A.I.; Kubánek, Petr; O´Connor, P.; Prouza, Michael; Radeka, V.; Takacs, P.

    2012-01-01

    Roč. 695, Dec (2012), 188-192 ISSN 0168-9002 R&D Projects: GA MŠk ME09052 Institutional research plan: CEZ:AV0Z10100502 Keywords : CCD * characterization * test automation Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.142, year: 2012

  5. Image differencing using masked CCD

    International Nuclear Information System (INIS)

    Rushbrooke, J.G.; Ansorge, R.E.; Webber, C.J. St. J.

    1987-01-01

    A charge coupled device has some of its "pixels" masked by a material which is opaque to the radiation to which the device is to be exposed, each masked region being employed as a storage zone into which the charge pattern from the unmasked pixels can be transferred to enable a subsequent charge pattern to be established on further exposure of the unmasked pixels. The components of the resulting video signal corresponding to the respective charge patterns read out from the CCD are subtracted to produce a video signal corresponding to the difference between the two images which formed the respective charge patterns. Alternate rows of pixels may be masked, or chequer-board pattern masking may be employed. In an X-ray imaging system the CCD is coupled to image intensifying and converting means. (author)
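
    The digital equivalent of the subtraction described above is straightforward frame differencing. The NumPy sketch below shows it on two simulated exposures; it is a generic illustration with invented frame contents and does not model the on-chip masked storage zones themselves.

      import numpy as np

      rng = np.random.default_rng(3)
      frame_a = rng.poisson(100.0, (128, 128)).astype(float)  # first exposure
      frame_b = frame_a.copy()
      frame_b[40:50, 60:70] += 200.0                          # change appearing in exposure 2

      # Difference image: the static background cancels, only the change remains.
      diff = frame_b - frame_a
      print("change located at:", np.unravel_index(np.argmax(diff), diff.shape))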

  6. The OCA CCD Camera Controller

    Science.gov (United States)

    1996-01-01

    multi-CCD arrays for wide-field telescopes, with an array of 8x8 1K CCDs in use at Las Campanas Observatory in Chile. The same group is also involved ...

  7. Intensified CCD for ultrafast diagnostics

    International Nuclear Information System (INIS)

    Cheng, J.; Tripp, G.; Coleman, L.

    1978-01-01

    Many of the present laser fusion diagnostics are recorded either on ultrafast streak cameras or on oscilloscopes. For those experiments in which a large volume of data is accumulated, direct computer processing of the information becomes important. We describe an approach which uses an RCA 52501 back-thinned CCD sensor to obtain direct electron readouts for both the streak camera and the CRT. Performance of the 100 GHz streak camera and the 4 GHz CRT is presented. Design parameters and computer interfacing for both systems are described in detail

  8. CCD research. [design, fabrication, and applications

    Science.gov (United States)

    Gassaway, J. D.

    1976-01-01

    The fundamental problems encountered in designing, fabricating, and applying CCD's are reviewed. Investigations are described and results and conclusions are given for the following: (1) the development of design analyses employing computer-aided techniques and their application to the design of a gapped structure; (2) the role of CCD's in applications to electronic functions, in particular, signal processing; (3) extending the CCD to silicon films on sapphire (SOS); and (4) an all-aluminum transfer structure with low-noise input-output circuits. Related work on CCD imaging devices is summarized.

  9. Identification of the main processes underlying ecosystem functioning in the Eastern English Channel, with a focus on flatfish species, as revealed through the application of the Atlantis end-to-end model

    Science.gov (United States)

    Girardin, Raphaël; Fulton, Elizabeth A.; Lehuta, Sigrid; Rolland, Marie; Thébaud, Olivier; Travers-Trolet, Morgane; Vermard, Youen; Marchal, Paul

    2018-02-01

    The ecosystem model Atlantis was used to investigate the key dynamics and processes that structure the Eastern English Channel ecosystem, with a particular focus on two commercial flatfish species, sole (Solea solea) and plaice (Pleuronectes platessa). This complex model was parameterized with data collected from diverse sources (a literature review, survey data, as well as landings and stock assessment information) and tuned so both simulated biomass and catch fit 2002-2011 observations. Here, the outputs are mainly presented for the two focus species and for some other vertebrates found to be important in the trophic network. The calibration process revealed the importance of coastal areas in the Eastern English Channel and of nutrient inputs from estuaries: a lack of river nutrients decreases the productivity of nursery grounds and adversely affects the production of sole and plaice. The role of discards in the trophic network is also highlighted. While sole and plaice did not have a strong influence on the trophic network of vertebrates, they are important predators for benthic invertebrates and compete for food with crustaceans, whiting (Merlangius merlangus) and other demersal fish. We also found that two key species, cod (Gadus morhua) and whiting, thoroughly structured the Eastern English Channel trophic network.

  10. CCD photometry of NGC 2419

    International Nuclear Information System (INIS)

    Christian, C.A.; Heasley, J.N.

    1988-01-01

    The properties of the globular cluster NGC 2419 are reexamined using CCD photometry deepened to the vicinity of the main-sequence turnoff. A new color-magnitude diagram is derived that extends to V = 24.5 mag. It is concluded that NGC 2419 is an outer-halo analog of the metal-poor globulars closer to the Galactic center. NGC 2419 is probably nearly the same age as M15 and differs only slightly, if at all, in metallicity. NGC 2419 has many similarities with the clusters NGC 5466, M15, and M92. Comparison of the data with the isochrones of VandenBerg and Bell (1985) implies a distance modulus of 20.1 with Delta (B-V) = 0.18 mag. Oxygen-rich models can be fit to the data; such a comparison yields a lower limit to the acceptable distance modulus of the cluster. 26 references
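
    For reference, the distance modulus of 20.1 quoted above corresponds to a distance of roughly 100 kpc through the standard relation m - M = 5 log10(d / 10 pc). The minimal sketch below performs that conversion; it is a generic calculation, not part of the paper's analysis.

      def distance_from_modulus(mu):
          """Distance in parsecs from the distance modulus mu = m - M."""
          return 10.0 ** ((mu + 5.0) / 5.0)

      d_pc = distance_from_modulus(20.1)
      print(f"{d_pc:.3e} pc  (~{d_pc / 1.0e3:.0f} kpc)")   # about 105 kpc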

  11. CCD characterization and measurements automation

    International Nuclear Information System (INIS)

    Kotov, I.V.; Frank, J.; Kotov, A.I.; Kubanek, P.; O'Connor, P.; Prouza, M.; Radeka, V.; Takacs, P.

    2012-01-01

    Modern mosaic cameras have grown both in size and in number of sensors. The required volume of sensor testing and characterization has grown accordingly. For camera projects as large as the LSST, test automation becomes a necessity. A CCD testing and characterization laboratory was built and is in operation for the LSST project. Characterization of LSST study contract sensors has been performed. The characterization process and its automation are discussed, and results are presented. Our system automatically acquires images, populates a database with metadata information, and runs express analysis. This approach is illustrated on 55Fe data analysis. 55Fe data are used to measure gain, charge transfer efficiency and charge diffusion. Examples of express analysis results are presented and discussed.
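
    A common way 55Fe exposures are turned into a gain measurement is to locate the Mn K-alpha peak (5.9 keV, roughly 1620 electrons in silicon at about 3.65 eV per electron-hole pair) in the single-pixel-event spectrum and divide by its position in ADU. The sketch below shows that calculation on hypothetical data; it is a simplified illustration, not the project's express-analysis pipeline.

      import numpy as np

      K_ALPHA_ELECTRONS = 5898.0 / 3.65   # ~1616 e- from a 5.9 keV X-ray in silicon

      # Hypothetical single-pixel event amplitudes (ADU) clustered around the K-alpha peak.
      rng = np.random.default_rng(4)
      events_adu = rng.normal(loc=1100.0, scale=15.0, size=5000)

      # Histogram the spectrum and take the highest bin as the K-alpha line position.
      counts, edges = np.histogram(events_adu, bins=200)
      peak = np.argmax(counts)
      peak_adu = 0.5 * (edges[peak] + edges[peak + 1])

      print(f"gain ~ {K_ALPHA_ELECTRONS / peak_adu:.2f} e-/ADU")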

  12. Timing generator of scientific grade CCD camera and its implementation based on FPGA technology

    Science.gov (United States)

    Si, Guoliang; Li, Yunfei; Guo, Yongfei

    2010-10-01

    The functions of the timing generator in a scientific-grade CCD camera are briefly presented: it generates various pulse sequences for the TDI-CCD, the video processor and the imaging data output, acting as the synchronous timing coordinator of the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co., Ltd. is used in the scientific-grade CCD camera. The drive timing of the IL-E2 TDI-CCD sensor was examined in detail, and the timing generator was designed for the camera accordingly. An FPGA was chosen as the hardware design platform, and the timing generator was described in VHDL. The design passed functional simulation with EDA software and was fitted into an XC2VP20-FF1152 (an FPGA product made by XILINX). The experiments indicate that the new method improves the level of system integration; high reliability, stability and low power consumption are achieved for the scientific-grade CCD camera system, while the design and test cycle is sharply shortened.
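
    The timing generator described above is implemented in VHDL on an FPGA; purely as a conceptual model of the kind of phase sequence such a sequencer emits, the Python sketch below generates an idealized three-phase transfer clock pattern. The phase ordering and overlap scheme are illustrative assumptions, not the DALSA sensor's actual drive specification.

      from itertools import islice

      def three_phase_clock():
          """Yield (phi1, phi2, phi3) levels for an idealized 3-phase transfer.

          At every step exactly two adjacent phases are high, so the charge
          packet is handed from one electrode to the next without being lost.
          """
          states = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
          while True:
              for s in states:
                  yield s

      for step, phases in enumerate(islice(three_phase_clock(), 6)):
          print(step, phases)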

  13. High-resolution CCD imaging alternatives

    Science.gov (United States)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. 2. EXTENDING CCD TECHNOLOGY BEYOND BROADCAST Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, and the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an inter-lined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.

  14. Tolerance analysis through computational imaging simulations

    Science.gov (United States)

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  15. End-to-End Multi-View Lipreading

    NARCIS (Netherlands)

    Petridis, Stavros; Wang, Yujiang; Li, Zuwei; Pantic, Maja

    2017-01-01

    Non-frontal lip views contain useful information which can be used to enhance the performance of frontal view lipreading. However, the vast majority of recent lipreading works, including the deep learning approaches which significantly outperform traditional approaches, have focused on frontal mouth

  16. End-to-end visual speech recognition with LSTMS

    NARCIS (Netherlands)

    Petridis, Stavros; Li, Zuwei; Pantic, Maja

    2017-01-01

    Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on

  17. CMDS System Integration and IAMD End-to-End Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Cruise Missile Defense Systems (CMDS) Project Office is establishing a secure System Integration Laboratory at the AMRDEC. This lab will contain tactical Signal...

  18. End-to-End Service Oriented Architectures (SOA) Security Project

    Science.gov (United States)

    2012-02-01

    Java 6.0 (javax.ws) platform and deployed on boston.cs.purdue.edu. TB stores all data regarding sessions and services in a MySQL database, setup on ... pointcut designators. JBoss AOP [JBO2] and AspectJ [ASP1] are powerful frameworks that implement AOP for Java programs. Its pointcut designators ...

  19. End-to-end experiment management in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M [Los Alamos National Laboratory; Kroiss, Ryan R [Los Alamos National Laboratory; Torrez, Alfred [Los Alamos National Laboratory; Wingate, Meghan [Los Alamos National Laboratory

    2010-01-01

    Experiment management in any domain is challenging. There is a perpetual feedback loop cycling through planning, execution, measurement, and analysis. The lifetime of a particular experiment can be limited to a single cycle although many require myriad more cycles before definite results can be obtained. Within each cycle, a large number of subexperiments may be executed in order to measure the effects of one or more independent variables. Experiment management in high performance computing (HPC) follows this general pattern but also has three unique characteristics. One, computational science applications running on large supercomputers must deal with frequent platform failures which can interrupt, perturb, or terminate running experiments. Two, these applications typically integrate in parallel using MPI as their communication medium. Three, there is typically a scheduling system (e.g. Condor, Moab, SGE, etc.) acting as a gate-keeper for the HPC resources. In this paper, we introduce LANL Experiment Management (LEM), an experimental management framework simplifying all four phases of experiment management. LEM simplifies experiment planning by allowing the user to describe their experimental goals without having to fully construct the individual parameters for each task. To simplify execution, LEM dispatches the subexperiments itself thereby freeing the user from remembering the often arcane methods for interacting with the various scheduling systems. LEM provides transducers for experiments that automatically measure and record important information about each subexperiment; these transducers can easily be extended to collect additional measurements specific to each experiment. Finally, experiment analysis is simplified by providing a general database visualization framework that allows users to quickly and easily interact with their measured data.

  20. Using SIM for strong end-to-end Application Authentication

    OpenAIRE

    Lunde, Lars; Wangensteen, Audun

    2006-01-01

    Today the Internet is mostly used for services that require low or none security. The commercial and governmental applications have started to emerge but met problems since they require strong authentication, which is both difficult and costly to realize. The SIM card used in mobile phones is a tamper resistant device that contains strong authentication mechanisms. It would be very convenient and cost-efficient if Internet services could use authentication methods based on the SIM. This mast...

  1. Network analysis on skype end-to-end video quality

    NARCIS (Netherlands)

    Exarchakos, Georgios; Druda, Luca; Menkovski, Vlado; Liotta, Antonio

    2015-01-01

    Purpose – This paper aims to argue on the efficiency of Quality of Service (QoS)-based adaptive streaming with regard to perceived quality, i.e., Quality of Experience (QoE). Although QoS parameters are extensively used even by high-end adaptive streaming algorithms, the achieved QoE fails to justify their use

  2. End to End Beam Dynamics of the ESS Linac

    DEFF Research Database (Denmark)

    Thomsen, Heine Dølrath

    2012-01-01

    The European Spallation Source, ESS, uses a linear accelerator to deliver a high intensity proton beam to the target station. The nominal beam power on target will be 5 MW at an energy of 2.5 GeV. We briefly describe the individual accelerating structures and transport lines through which we have...

  3. CCD-based vertex detectors

    CERN Document Server

    Damerell, C J S

    2005-01-01

    Over the past 20 years, CCD-based vertex detectors have been used to construct some of the most precise 'tracking microscopes' in particle physics. They were initially used by the ACCMOR collaboration for fixed target experiments in CERN, where they enabled the lifetimes of some of the shortest-lived charm particles to be measured precisely. The migration to collider experiments was accomplished in the SLD experiment, where the original 120 Mpixel detector was later upgraded to one with 307 Mpixels. This detector was used in a range of physics studies which exceeded the capability of the LEP detectors, including the most precise limit to date on the Bs mixing parameter. This success, and the high background hit densities that will inevitably be encountered at the future TeV-scale linear collider, have established the need for a silicon pixel-based vertex detector at this machine. The technical options have now been broadened to include a wide range of possible silicon imaging technologies as well as CCDs (mon...

  4. Modeling the impact of preflushing on CTE in proton irradiated CCD-based detectors

    Science.gov (United States)

    Philbrick, R. H.

    2002-04-01

    A software model is described that performs a "real world" simulation of the operation of several types of charge-coupled device (CCD)-based detectors in order to accurately predict the impact that high-energy proton radiation has on image distortion and modulation transfer function (MTF). The model was written primarily to predict the effectiveness of vertical preflushing on the custom full-frame CCD-based detectors intended for use on the proposed Kepler Discovery mission, but it is capable of simulating many other types of CCD detectors and operating modes as well. The model keeps track of the occupancy of all phosphorus-vacancy (P-V), divacancy (V-V) and oxygen-vacancy (O-V) defect centers under every CCD electrode over the entire detector area. The integrated image is read out by simulating every electrode-to-electrode charge transfer in both the vertical and horizontal CCD registers. A signal-level dependency on the capture and emission of signal is included, and the current state of each electrode (e.g., barrier or storage) is considered when distributing integrated and emitted signal. Options for performing preflushing, preflashing, and including mini-channels are available on both the vertical and horizontal CCD registers. In addition, dark signal generation and image transfer smear can be selectively enabled or disabled. A comparison of the charge transfer efficiency (CTE) data measured on the Hubble Space Telescope Imaging Spectrograph (STIS) CCD with the CTE extracted from model simulations of the STIS CCD shows good agreement.
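
    The sketch below is a heavily simplified, single-register toy version of the trapping model described above: each pixel location holds a fixed trap capacity that captures charge from the packet passing over it and re-emits a fixed fraction into later packets on each transfer. All parameters are invented for illustration; the real model (signal-dependent capture volumes, multiple trap species, barrier/storage states, 2D registers) is far more detailed.

      import numpy as np

      def transfer_register(image_row, traps_per_pixel=2.0, emit_prob=0.3):
          """Clock a 1D register toward the output node with simple charge trapping."""
          row = image_row.astype(float).copy()
          trapped = np.zeros_like(row)        # charge currently held by traps at each location
          out = []
          for _ in range(row.size):
              out.append(row[0])              # pixel arriving at the output node
              row = np.roll(row, -1)
              row[-1] = 0.0                   # empty charge enters at the far end
              # traps capture charge from the packet now above them, up to their capacity
              capture = np.minimum(row, np.maximum(traps_per_pixel - trapped, 0.0))
              trapped += capture
              row -= capture
              # a fixed fraction of the trapped charge is released into the passing packets
              released = emit_prob * trapped
              trapped -= released
              row += released
          return np.array(out)

      row = np.zeros(20)
      row[5] = 1000.0                         # a single bright pixel
      print(transfer_register(row).round(1))  # deferred charge trails the bright pixel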

  5. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle and at the same time imperceptible color noise, with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that it is possible with such an optimized color camera to achieve such a high colorimetric performance that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
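    The matrixing step described above can be illustrated with a small optimization sketch: fit a 3x3 camera-to-XYZ matrix by minimizing the mean CIELUV color difference over a set of test colors. Everything below (the synthetic camera responses, the white point, the optimizer settings) is an illustrative assumption, not the MASCOT camera model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy matrixing optimisation: find a 3x3 matrix M mapping camera responses to
# CIE XYZ so that the mean CIELUV colour difference over a set of test colours
# is minimised. Camera responses, target colours and white point are synthetic.
WHITE = np.array([95.047, 100.0, 108.883])  # D65 white point in XYZ (illustrative)

def xyz_to_luv(xyz, white=WHITE):
    X, Y, Z = xyz.T
    Xn, Yn, Zn = white
    yr = np.maximum(Y, 1e-9) / Yn
    L = np.where(yr > (6 / 29) ** 3, 116 * np.cbrt(yr) - 16, (29 / 3) ** 3 * yr)
    d = np.maximum(X + 15 * Y + 3 * Z, 1e-9)
    dn = Xn + 15 * Yn + 3 * Zn
    u, v = 4 * X / d, 9 * Y / d
    un, vn = 4 * Xn / dn, 9 * Yn / dn
    return np.stack([L, 13 * L * (u - un), 13 * L * (v - vn)], axis=1)

def mean_delta_e(m_flat, cam, target_xyz):
    pred = cam @ m_flat.reshape(3, 3).T          # candidate colour transformation
    return np.mean(np.linalg.norm(xyz_to_luv(pred) - xyz_to_luv(target_xyz), axis=1))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    true_m = np.array([[60.0, 30.0, 10.0], [20.0, 70.0, 10.0], [10.0, 10.0, 80.0]])
    cam = rng.uniform(0.05, 1.0, (200, 3))       # ~200 synthetic test colours
    target_xyz = cam @ true_m.T
    res = minimize(mean_delta_e, 100 * np.eye(3).ravel(), args=(cam, target_xyz),
                   method="Nelder-Mead", options={"maxiter": 20000, "fatol": 1e-6})
    print("mean CIELUV error after fit:", round(res.fun, 2))
```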

  6. Active Pixel Sensors: Are CCD's Dinosaurs?

    Science.gov (United States)

    Fossum, Eric R.

    1993-01-01

    Charge-coupled devices (CCD's) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer -- the Achilles' heel of CCDs. This perfect charge transfer makes CCD's radiation 'soft,' difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response.

  7. THE ACCURACY OF Hβ CCD PHOTOMETRY

    Directory of Open Access Journals (Sweden)

    C. Kim

    1994-12-01

    We have undertaken CCD observations of field standard stars in the Hβ photometric system to investigate the reliability of Hβ CCD photometry. Flat fielding with dome and sky flats for the Hβw and Hβn filters was compared with that for the B filter of the UBV system, and no difference was found. It was confirmed that there is a good linear relationship between our Hβ values observed with the 2.3 m reflector and the standard values. However, Hβ values observed with the 60 cm reflector at Sobaeksan Astronomy Observatory showed a very poor relationship. To investigate the accuracy of Hβ CCD photometry for fainter objects, the open cluster NGC 2437 was observed and reduced with DoPHOT, and the results were compared with those of the photoelectric photometry of Stetson (1981).

  8. Typical effects of laser dazzling CCD camera

    Science.gov (United States)

    Zhang, Zhen; Zhang, Jianmin; Shao, Bibo; Cheng, Deyan; Ye, Xisheng; Feng, Guobin

    2015-05-01

    In this article, an overview of laser dazzling effects on buried-channel CCD cameras is given. CCDs are sorted into staring and scanning types; the former includes frame-transfer and interline-transfer devices, while the latter includes linear and time-delay-integration devices. All CCDs must perform four primary tasks in generating an image: charge generation, charge collection, charge transfer and charge measurement. In a camera, lenses are needed to deliver the optical signal to the CCD sensor, with techniques for suppressing stray light, and electronic circuits are needed to process the CCD output signal, using many electronic techniques. The dazzling effects are the combined result of distortion of the light distribution and distortion of the charge distribution, which derive from the lens and the sensor, respectively. Strictly speaking, the lens does not distort the light distribution; lenses are generally so well designed and fabricated that their stray light can be neglected, but a laser is intense enough to make the stray light obvious. In the CCD image sensor, a laser can generate so many electrons that charge transfer inefficiency and blooming distort the charge distribution. Normally, the largest signal output by the CCD sensor is limited by the capacity of the collection well and does not exceed the dynamic range within which the subsequent electronic circuits operate normally, so the signal is not distorted in the post-processing circuits. However, some techniques in those circuits can make the dazzling effects appear differently in the final image.

  9. A FORTRAN realization of the block adjustment of CCD frames

    Science.gov (United States)

    Yu, Yong; Tang, Zhenghong; Li, Jinling; Zhao, Ming

    A FORTRAN implementation of the block adjustment (BA) of overlapping CCD frames is developed. The flowchart is introduced, including (a) data collection, (b) preprocessing, and (c) BA and object positioning. The subroutines and their functions are also described. The program package is tested with simulated data, with and without added white noise. It has also been applied preliminarily to the reduction of optical positions of four extragalactic radio sources. The results show that, because of the increase in sky coverage and in the number of reference stars, the precision of the deduced positions is improved compared with single-plate adjustment.

  10. Scalable Track Detection in SAR CCD Images

    Energy Technology Data Exchange (ETDEWEB)

    Chow, James G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Quach, Tu-Thach [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
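    A minimal sketch of an all-3x3 fully convolutional pixel labeller of the kind the abstract describes is given below; the layer count, channel width and PyTorch framework are assumptions for illustration, not the authors' exact network.

```python
import torch
import torch.nn as nn

# Minimal all-3x3 fully convolutional per-pixel labeller (illustrative sketch,
# not the paper's exact architecture): a stack of 3x3 convolutions ending in a
# single-channel logit map the same size as the input image.
class TrackNet(nn.Module):
    def __init__(self, channels=16, depth=4):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, kernel_size=3, padding=1)]  # per-pixel track logit
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    model = TrackNet()
    ccd_image = torch.randn(1, 1, 128, 128)        # stand-in for a CCD chip
    track_prob = torch.sigmoid(model(ccd_image))   # per-pixel track probability
    print(track_prob.shape)                        # torch.Size([1, 1, 128, 128])
```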

  11. Protein diffraction experiments with Atlas CCD detector

    Czech Academy of Sciences Publication Activity Database

    Dohnálek, Jan; Kovaľ, Tomáš; Dušek, Michal

    2008-01-01

    Roč. 64, Suppl. - abstracts (2008), C192 ISSN 0108-7673. [Congress of the International Union of Crystallography (IUCr) /21./. 23.08.2008-31.08.2008, Osaka] Institutional research plan: CEZ:AV0Z10100521 Keywords : x-ray data collection * CCD detectors * protein crystallography applications Subject RIV: BM - Solid Matter Physics ; Magnetism

  12. Custom CCD for adaptive optics applications

    Science.gov (United States)

    Downing, Mark; Arsenault, Robin; Baade, Dietrich; Balard, Philippe; Bell, Ray; Burt, David; Denney, Sandy; Feautrier, Philippe; Fusco, Thierry; Gach, Jean-Luc; Diaz Garcia, José Javier; Guillaume, Christian; Hubin, Norbert; Jorden, Paul; Kasper, Markus; Meyer, Manfred; Pool, Peter; Reyes, Javier; Skegg, Michael; Stadler, Eric; Suske, Wolfgang; Wheeler, Patrick

    2006-06-01

    ESO and JRA2 OPTICON have funded e2v technologies to develop a compact packaged Peltier cooled 24 μm square 240x240 pixels split frame transfer 8-output back-illuminated L3Vision CCD for Adaptive Optic Wave Front Sensor (AO WFS) applications. The device is designed to achieve sub-electron read noise at frame rates from 25 Hz to 1,500 Hz and dark current lower than 0.01 e-/pixel/frame. The development has many unique features. To obtain high frame rates, multi-output EMCCD gain registers and metal buttressing of row clock lines are used. The baseline device is built in standard silicon. In addition, a split wafer run has enabled two speculative variants to be built; deep depletion silicon devices to improve red response and devices with an electronic shutter to extend use to Rayleigh and Pulsed Laser Guide Star applications. These are all firsts for L3Vision CCDs. The designs of the CCD and Peltier package have passed their reviews and fabrication has begun. This paper will describe the progress to date, the requirements and the design of the CCD and compact Peltier package, technology trade-offs, schedule and proposed test plan. High readout speed, low noise and compactness (requirement to fit in confined spaces) provide special challenges to ESO's AO variant of its NGC, New General detector Controller to drive this CCD. This paper will describe progress made on the design of the controller to meet these special needs.

  13. Noise characteristics of neutron images obtained by cooled CCD device

    International Nuclear Information System (INIS)

    Taniguchi, Ryoichi; Sasaki, Ryoya; Okuda, Shuichi; Okamoto, Ken-Ichi; Ogawa, Yoshihiro; Tsujimoto, Tadashi

    2009-01-01

    The noise characteristics of a cooled CCD device induced by neutron and gamma-ray irradiation have been investigated. In the cooled CCD images, characteristic white-spot noise (CCD noise) frequently appeared; in most cases the spots have the size of a single pixel and their brightness is extremely high compared with that of the image pattern. They can be divided into two groups: fixed pattern noise (FPN) and random noise. The former always appears at the same position in the image, while the latter appears at any position. In the background images, nearly all of the CCD noise was found to be FPN, while during irradiation much of it was random noise. The random CCD noise increased with irradiation and decreased soon after the irradiation ended. In the case of large irradiation doses, part of the CCD noise remained as FPN. These facts suggest that the CCD noise is a phenomenon strongly related to radiation damage of the CCD device.

  14. Fully depleted back-illuminated p-channel CCD development

    Energy Technology Data Exchange (ETDEWEB)

    Bebek, Chris J.; Bercovitz, John H.; Groom, Donald E.; Holland, Stephen E.; Kadel, Richard W.; Karcher, Armin; Kolbe, William F.; Oluseyi, Hakeem M.; Palaio, Nicholas P.; Prasad, Val; Turko, Bojan T.; Wang, Guobin

    2003-07-08

    An overview of CCD development efforts at Lawrence Berkeley National Laboratory is presented. Operation of fully-depleted, back-illuminated CCD's fabricated on high resistivity silicon is described, along with results on the use of such CCD's at ground-based observatories. Radiation damage and point-spread function measurements are described, as well as discussion of CCD fabrication technologies.

  15. A self triggered intensified Ccd (Stic)

    International Nuclear Information System (INIS)

    Charon, Y.; Laniece, P.; Bendali, M.

    1990-01-01

    We are developing a new device based on previously reported results on the successful coincidence detection of β- particles with high spatial resolution [1]. The novelty of the device consists in triggering an intensified CCD, i.e. a CCD coupled to an image intensifier (II), by an electrical signal collected from the II itself. This is a suitable procedure for detecting low-light rare events with high efficiency and high resolution. The trigger pulse is obtained from the secondary electrons produced by multiplication in a double microchannel plate (MCP) and collected on the aluminized layer protecting the phosphor screen in the II. Triggering efficiencies of up to 80% have already been achieved.

  16. CCD Photometry Using Multiple Comparison Stars

    Directory of Open Access Journals (Sweden)

    Yonggi Kim

    2004-09-01

    The accuracy of CCD observations obtained at the Korean 1.8 m telescope has been studied. Seventeen comparison stars in the vicinity of the cataclysmic variable BG CMi have been measured. An "artificial" star has been used instead of a single "control" star, which made it possible to improve the accuracy estimates by a factor of 1.3 to 2.1 for "good" and "cloudy" nights, respectively. The algorithm for the iterative determination of the accuracy and weights of the comparison stars contributing to the artificial star is presented. The accuracy estimates for 13-mag stars are around 0.002 mag for exposure times of 30 sec.
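    A toy sketch of the ensemble idea: the "artificial" star is a weighted mean of several comparison stars, with weights re-estimated iteratively from each star's scatter about the ensemble. The weighting scheme and data below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Toy "artificial" comparison star: a weighted mean of several comparison
# stars, with inverse-variance weights re-estimated from each star's scatter
# about the ensemble (illustrative scheme, not the paper's exact algorithm).
def artificial_star(mags, n_iter=5):
    """mags: (n_stars, n_frames) instrumental magnitudes of the comparison stars."""
    weights = np.ones(mags.shape[0])
    for _ in range(n_iter):
        ensemble = np.average(mags, axis=0, weights=weights)   # current artificial star
        sigma = (mags - ensemble).std(axis=1)                  # scatter of each star
        weights = 1.0 / np.maximum(sigma, 1e-4) ** 2           # down-weight noisy stars
    return ensemble, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    drift = np.linspace(0.0, 0.02, 100)                        # common transparency drift
    noise = rng.normal(0.0, 1.0, (3, 100)) * np.array([[0.002], [0.003], [0.010]])
    mags = drift + noise                                       # three comparison stars
    ensemble, weights = artificial_star(mags)
    print(np.round(weights / weights.sum(), 2))                # relative weights
```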

  17. Electromagnetic Compatibility Assessment of CCD Detector Acquisition Chains not Synchronized

    Science.gov (United States)

    Nicoletto, M.; Boschetti, D.; Ciancetta, E.; Maiorano, E.; Stagnaro, L.

    2016-05-01

    Euclid is a space observatory managed by the European Space Agency; it is the second medium-class mission in the Cosmic Vision 2015-2025 programme. Within this project, the electromagnetic interference between two different, non-synchronized Charge Coupled Device (CCD) acquisition chains has been evaluated. The key parameter used for this assessment is the electromagnetic noise that each chain induces on the other. Given the specificity of the issue (radiated coupling at relatively low frequency and in near-field conditions), the classical approach based on simulations and testing on a qualification model cannot be directly applied. It was therefore decided to investigate the issue by test in an incremental way.

  18. New Design Concept for Universal CCD Controller

    Directory of Open Access Journals (Sweden)

    Wonyong Han

    1994-06-01

    Currently, CCDs are widely used in astronomical observations, either for direct imaging or in spectroscopic mode. With recent technical advances, new large-format CCDs with better performance, higher quantum efficiency and higher sensitivity are being developed rapidly. In many cases, microprocessors have been adopted to handle the digital logic required by a CCD imaging system. This can limit the flexibility of a system for a user wishing to upgrade to new devices, especially if it is a commercial product. A new design concept has been explored which provides the opportunity to deal effectively with devices of any format from any manufacturer for astronomical purposes. Recently available PLD (Programmable Logic Device) technology makes it possible to implement such a digital circuit design, which can be integrated into a single component instead of using microprocessors. The design concept can dramatically increase the efficiency and flexibility of a CCD imaging system, particularly when new or large-format devices become available and the performance of a system needs to be upgraded. Some variable system-control parameters can be selected by the user from a wide range of choices, and the software supports such functional requirements very conveniently. This approach can be applied not only to astronomy but also to related fields such as remote sensing and industrial applications.

  19. CCD developed for scientific application by Hamamatsu

    CERN Document Server

    Miyaguchi, K; Dezaki, J; Yamamoto, K

    1999-01-01

    We have developed CCDs for scientific applications that feature a low readout noise of less than 5 e- rms and a low dark current of 10-25 pA/cm² at room temperature. CCDs with these characteristics will prove extremely useful in applications such as spectroscopic measurement and dental radiography. In addition, a large-area CCD of 2kx4k pixels with a 15 μm square pixel size has recently been completed for optical use in astronomical observations. Applications to X-ray astronomy require the most challenging device performance in terms of deep depletion, high CTE, and focal plane size, among others. An abuttable X-ray CCD, having 1024x1024 pixels and a 24 μm square pixel size, is to be installed on the International Space Station (ISS). We are now striving to achieve the lowest usable cooling temperature by means of a built-in TEC with limited power consumption. Details on the development status are described in this report. We would also like to present our future plans for a large active area and deep depleti...

  20. Comparison of two nonabsorbable suture materials in the end-to-end tracheal anastomosis in dogs

    Directory of Open Access Journals (Sweden)

    Sheila Canevese Rahal

    1995-01-01

    Twelve mongrel dogs, aged between 1 and 6 years and weighing 6 to 20 kg, were submitted to tracheal resection and end-to-end anastomosis, in which braided non-capillary polyester and monofilament nylon suture materials were tested. Six animals, three for each type of suture material, underwent an excision equivalent to three tracheal rings. A new intervention was performed after fifteen days, in which the equivalent of six more tracheal rings was resected, for a total of nine; at the end of another fifteen days these animals were sacrificed. The other six animals, three for each type of suture material, underwent an excision equivalent to three tracheal rings and were maintained for 43 days. The tracheas were evaluated by clinical, radiographic, macroscopic and histopathological examinations. The monofilament nylon suture showed less tissue reaction than the braided non-capillary polyester, and provided a secure anastomosis with a lower chance of granuloma formation.

  1. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
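    The basic DCDS operation, stripped of the signal-conditioning and filter-design details analysed in the paper, can be sketched as follows: the reset (reference) pedestal and the video pedestal of each pixel are oversampled, averaged and differenced. The sample counts and noise figures below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal DCDS sketch: oversample the reset pedestal and the video pedestal of
# one pixel, average each, and take the difference. Averaging N raw samples
# suppresses white read noise on each pedestal by roughly sqrt(N).
def dcds_pixel(reset_samples, video_samples):
    return np.mean(reset_samples) - np.mean(video_samples)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_samples = 64                          # samples per pedestal (illustrative)
    signal_e = 250.0                        # true pixel signal in electrons
    sample_noise = 10.0                     # white noise per raw sample

    reset = 1000.0 + rng.normal(0.0, sample_noise, n_samples)
    video = 1000.0 - signal_e + rng.normal(0.0, sample_noise, n_samples)
    print("estimated signal:", round(dcds_pixel(reset, video), 2), "e-")
```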

  2. BVRI CCD photometry of Omega Centauri

    International Nuclear Information System (INIS)

    Alcaino, G.; Liller, W.

    1987-01-01

    Color-magnitude diagrams (CMDs) of V vs B-V, V vs V-I, and V vs B-I have been constructed based on 179 BVRI CCD frames of two adjoining 4x2.5-arcmin fields in Omega Cen (NGC 5139) obtained with the 1.54-m Danish La Silla telescope. The spread in the main sequences noted in the three CMDs indicates that the wide range in chemical composition among the evolved stars in this cluster persists as well in the unevolved stars. This result suggests that the abundance variations are primordial. A difference in magnitude between the turnoff and the horizontal branch of 3.8 ± 0.15 is found, which is greater than a previous value. 38 references

  3. Programmable Clock Waveform Generation for CCD Readout

    Energy Technology Data Exchange (ETDEWEB)

    Vicente, J. de; Castilla, J.; Martinez, G.; Marin, J.

    2006-07-01

    Charge transfer efficiency in CCDs is closely related to the clock waveform. In this paper, an experimental framework to explore different FPGA-based clock waveform generator designs is described. Two alternative design approaches for controlling the rise/fall edge times and pulse width of the CCD clock signal have been implemented: level control and time control. Both approaches provide similar characteristics regarding edge linearity and noise. Nevertheless, dissimilarities have been found with respect to area and the frequency range of application. Thus, while the time-control approach consumes less area, the level-control approach provides a wider range of clock frequencies since it does not suffer from the capacitor discharge effect. (Author) 8 refs.

  4. CCD camera eases the control of a soda recovery boiler; CCD-kamera helpottaa soodakattilan valvontaa

    Energy Technology Data Exchange (ETDEWEB)

    Kinnunen, L.

    2001-07-01

    Fortum Technology has developed a CCD firebox camera, based on semiconductor technology, that endures the harsh conditions of a soda recovery boiler longer than traditional cameras. The firebox camera is air-cooled, and the same air is blown over the main lens so that it remains clean despite the alkaline liquor splashing around in the boiler. The image of the boiler is transferred through the main lens, an image transfer lens and a special filter, mounted inside the camera tube, onto the CCD camera. The first CCD camera system has been in use since 1999 at the Sunila pulp mill in Kotka, owned by Myllykoski Oy and Enso Oyj. The mill has two medium-sized soda recovery boilers. The amount of black liquor formed daily is about 2000 tons DS, which is more than enough for heat generation; even electric power generation sometimes exceeds demand, so the surplus power can be sold. Black liquor is sprayed into the soda recovery boiler at high pressure and forms droplets in the boiler, where the temperature is over 1000 deg C. After burning, a glowing pile forms at the bottom of the boiler; the size and shape of the pile affect the efficiency and the emissions of the boiler. The camera has operated well.

  5. 15 CFR 740.19 - Consumer Communications Devices (CCD).

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Consumer Communications Devices (CCD... EXCEPTIONS § 740.19 Consumer Communications Devices (CCD). (a) Authorization. This License Exception... controllers designed for chemical processing) designated EAR99; (4) Graphics accelerators and graphics...

  6. BVI CCD photometry of 47 Tucanae

    International Nuclear Information System (INIS)

    Alcaino, G.; Liller, W.

    1987-01-01

    CCD BVI main-sequence photometry of 47 Tuc is presented, matched to the recent BVI isochrones of VandenBerg and Bell (1985). The main-sequence turnoffs are found to be at V = 17.60 ± 0.1, B-V = 0.56 ± 0.02; V-I = 0.68 ± 0.02, and B-I = 1.24 ± 0.02. The magnitude difference between the main-sequence turnoff and the horizontal branch is 3.55 ± 0.15 for all three color indices. A consistent age for 47 Tuc of 17 Gyr and a consistent distance modulus of (m-M)v = 13.2 are obtained for all three indices, and an absolute magnitude of Mv = 0.85 is determined for the horizontal branch stars. The results also favor the adoption of (Fe/H) near -0.5 as the best abundance value for 47 Tuc. 38 references

  7. A new approach to modelling radiation noise in CCD's

    International Nuclear Information System (INIS)

    Chugg, A.; Hopkinson, G.

    1998-01-01

    The energy depositions reported by Monte Carlo electron-photon irradiation transport codes are subject to a random error due to the finite number of particle histories used to generate the results. These statistical variations, normally a nuisance, may also be identified with the real radiation noise effects experienced by CCD pixels in persistent radiation environments. This paper explores the practicability of such radiation noise modelling by applying the ACCEPT code from the ITS suite to the case of a shielded CCD exposed to an electron flux. The results are compared with those obtained in a subsequent electron irradiation of the CCD by a Van de Graaff accelerator

  8. CCD camera system for use with a streamer chamber

    International Nuclear Information System (INIS)

    Angius, S.A.; Au, R.; Crawley, G.C.; Djalali, C.; Fox, R.; Maier, M.; Ogilvie, C.A.; Molen, A. van der; Westfall, G.D.; Tickle, R.S.

    1988-01-01

    A system based on three charge-coupled-device (CCD) cameras is described here. It has been used to acquire images from a streamer chamber and consists of three identical subsystems, one for each camera. Each subsystem contains an optical lens, CCD camera head, camera controller, an interface between the CCD and a microprocessor, and a link to a minicomputer for data recording and on-line analysis. Image analysis techniques have been developed to enhance the quality of the particle tracks. Some steps have been made to automatically identify tracks and reconstruct the event. (orig.)

  9. Enzymatic study on AtCCD4 and AtCCD7 and their potential to form acyclic regulatory metabolites

    KAUST Repository

    Bruno, Mark

    2016-09-29

    The Arabidopsis carotenoid cleavage dioxygenase 4 (AtCCD4) is a negative regulator of the carotenoid content of seeds and has recently been suggested as a candidate for the generation of retrograde signals that are thought to derive from the cleavage of poly-cis-configured carotene desaturation intermediates. In this work, we investigated the activity of AtCCD4 in vitro and used dynamic modeling to determine its substrate preference. Our results document strict regional specificity for cleavage at the C9–C10 double bond in carotenoids and apocarotenoids, with preference for carotenoid substrates and an obstructing effect on hydroxyl functions, and demonstrate the specificity for all-trans-configured carotenes and xanthophylls. AtCCD4 cleaved substrates with at least one ionone ring and did not convert acyclic carotene desaturation intermediates, independent of their isomeric states. These results do not support a direct involvement of AtCCD4 in generating the supposed regulatory metabolites. In contrast, the strigolactone biosynthetic enzyme AtCCD7 converted 9-cis-configured acyclic carotenes, such as 9-cis-ζ-carotene, 9\\'-cis-neurosporene, and 9-cis-lycopene, yielding 9-cis-configured products and indicating that AtCCD7, rather than AtCCD4, is the candidate for forming acyclic retrograde signals.

  10. Enzymatic study on AtCCD4 and AtCCD7 and their potential to form acyclic regulatory metabolites

    KAUST Repository

    Bruno, Mark; Koschmieder, Julian; Wuest, Florian; Schaub, Patrick; Fehling-Kaschek, Mirjam; Timmer, Jens; Beyer, Peter; Al-Babili, Salim

    2016-01-01

    The Arabidopsis carotenoid cleavage dioxygenase 4 (AtCCD4) is a negative regulator of the carotenoid content of seeds and has recently been suggested as a candidate for the generation of retrograde signals that are thought to derive from the cleavage of poly-cis-configured carotene desaturation intermediates. In this work, we investigated the activity of AtCCD4 in vitro and used dynamic modeling to determine its substrate preference. Our results document strict regional specificity for cleavage at the C9–C10 double bond in carotenoids and apocarotenoids, with preference for carotenoid substrates and an obstructing effect on hydroxyl functions, and demonstrate the specificity for all-trans-configured carotenes and xanthophylls. AtCCD4 cleaved substrates with at least one ionone ring and did not convert acyclic carotene desaturation intermediates, independent of their isomeric states. These results do not support a direct involvement of AtCCD4 in generating the supposed regulatory metabolites. In contrast, the strigolactone biosynthetic enzyme AtCCD7 converted 9-cis-configured acyclic carotenes, such as 9-cis-ζ-carotene, 9'-cis-neurosporene, and 9-cis-lycopene, yielding 9-cis-configured products and indicating that AtCCD7, rather than AtCCD4, is the candidate for forming acyclic retrograde signals.

  11. Enzymatic study on AtCCD4 and AtCCD7 and their potential to form acyclic regulatory metabolites

    Science.gov (United States)

    Bruno, Mark; Koschmieder, Julian; Wuest, Florian; Schaub, Patrick; Fehling-Kaschek, Mirjam; Timmer, Jens; Beyer, Peter; Al-Babili, Salim

    2016-01-01

    The Arabidopsis carotenoid cleavage dioxygenase 4 (AtCCD4) is a negative regulator of the carotenoid content of seeds and has recently been suggested as a candidate for the generation of retrograde signals that are thought to derive from the cleavage of poly-cis-configured carotene desaturation intermediates. In this work, we investigated the activity of AtCCD4 in vitro and used dynamic modeling to determine its substrate preference. Our results document strict regional specificity for cleavage at the C9–C10 double bond in carotenoids and apocarotenoids, with preference for carotenoid substrates and an obstructing effect on hydroxyl functions, and demonstrate the specificity for all-trans-configured carotenes and xanthophylls. AtCCD4 cleaved substrates with at least one ionone ring and did not convert acyclic carotene desaturation intermediates, independent of their isomeric states. These results do not support a direct involvement of AtCCD4 in generating the supposed regulatory metabolites. In contrast, the strigolactone biosynthetic enzyme AtCCD7 converted 9-cis-configured acyclic carotenes, such as 9-cis-ζ-carotene, 9'-cis-neurosporene, and 9-cis-lycopene, yielding 9-cis-configured products and indicating that AtCCD7, rather than AtCCD4, is the candidate for forming acyclic retrograde signals. PMID:27811075

  12. Noise in off-axis type holograms including reconstruction and CCD camera parameters

    Energy Technology Data Exchange (ETDEWEB)

    Voelkl, Edgar, E-mail: edgar.voelkl@fei.com [FEI Company, 5350 NE Dawson Creek Drive, Hillsboro, OR 97124-5793 (United States)

    2010-02-15

    Phase and amplitude images as contained in digital holograms are commonly extracted via a process called 'reconstruction'. Expressions for the expected noise in these images have been given in the past by several authors; however, the effect of the actual reconstruction process has not been fully appreciated. By starting with the Quantum Mechanical intensity distribution of the off-axis type interference pattern, then building the digital hologram on an electron-by-electron base while simultaneously reconstructing the phase/amplitude images and evaluating their noise levels, an expression is derived that consistently describes the noise in simulated and experimental phase/amplitude images and contains the reconstruction parameters. Because of the necessity to discretize the intensity distribution function, the digitization effects of an ideal CCD camera had to be included. Subsequently, this allowed a comparison between real and simulated holograms which then led to a comparison between the performance of an 'ideal' CCD camera versus a real device. It was concluded that significant improvement of the phase and amplitude noise may be obtained if CCD cameras were optimized for digitizing intensity distributions at low sampling rates.

  13. An X-ray CCD signal generator with true random arrival time

    International Nuclear Information System (INIS)

    Huo Jia; Xu Yuming; Chen Yong; Cui Weiwei; Li Wei; Zhang Ziliang; Han Dawei; Wang Yusan; Wang Juan

    2011-01-01

    An FPGA-based true random signal generator with adjustable amplitude and an exponential distribution of time intervals is presented. Since traditional true random number generators (TRNG) are resource-costly and difficult to transplant, we employed a method of random number generation based on jitter and phase noise in ring oscillators formed by gates in an FPGA. In order to improve the random characteristics, a combination of two different pseudo-random processing circuits is used for post-processing. The effects of the design parameters, such as the sample frequency, are discussed. Statistical tests indicate that the generator can simulate well the timing behavior of random signals with a Poisson distribution. The X-ray CCD signal generator will be used in debugging the CCD readout system of the Low Energy X-ray Instrument onboard the Hard X-ray Modulation Telescope (HXMT). (authors)
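    The timing behaviour being emulated, exponentially distributed intervals between events (a Poisson arrival process), can be sketched in a few lines; the rate and duration below are arbitrary examples, not the HXMT test parameters.

```python
import numpy as np

# Event times with exponentially distributed intervals, i.e. a Poisson arrival
# process like the X-ray photon hits the generator emulates. Rate and duration
# are arbitrary illustrative values.
def poisson_arrival_times(rate_hz, duration_s, rng):
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)     # exponential inter-arrival time
        if t > duration_s:
            return np.array(times)
        times.append(t)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    hits = poisson_arrival_times(rate_hz=50.0, duration_s=10.0, rng=rng)
    print(len(hits), "events, mean interval =", round(float(np.diff(hits).mean()), 4), "s")
```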

  14. Design method of general-purpose driving circuit for CCD based on CPLD

    International Nuclear Information System (INIS)

    Zhang Yong; Tang Benqi; Xiao Zhigang; Wang Zujun; Huang Shaoyan

    2005-01-01

    To study the radiation damage effects and mechanisms of CCDs systematically, it is very important to develop a general-purpose test platform. The paper discusses the design method of a general-purpose driving circuit for CCDs based on CPLD and its realization. A main controller has been designed to read the data file from external memory, set up the relevant parameter registers and produce the driving pulses strictly according to the parameter requests; it is based on the MAX7000S device and developed with the MAX-PLUS II software. The basic driving circuit module has been completed based on this method. The output waveform of the module matches the simulation waveform. The results indicate that the design method is feasible. (authors)

  15. Micrometer and CCD measurements of double stars (Series 51

    Directory of Open Access Journals (Sweden)

    Popović G.M.

    1998-01-01

    36 micrometric measurements of 20 double or multiple systems carried out with the Zeiss 65/1055 cm Refractor of Belgrade Observatory are communicated. Also 35 CCD measurements of 15 double or multiple systems are included.

  16. Experience in CCD Photometry at the Tartu Observatory

    Directory of Open Access Journals (Sweden)

    Tuvikene T.

    2003-12-01

    We give an overview of the CCD instrumentation and data reduction techniques used at the Tartu Observatory. The first results from photometric observations of the peculiar variable V838 Mon are presented.

  17. A GRAPH READER USING A CCD IMAGE SENSOR

    African Journals Online (AJOL)

    2008-01-18

    Jan 18, 2008 ... using a stepper motor controlled by a software program in a ... Keywords: CCD sensor, microcontroller, stepper motor and microcomputer. 1. ... commercial applications (Awcock and ... on-chip amplifier, one pixel at a time.

  18. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.
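    For context, a CCD map is commonly formed from the windowed sample coherence of two co-registered complex SAR images; the sketch below shows that estimate on synthetic data. The window size and test scene are assumptions, not the settings of this study.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Windowed sample coherence of two co-registered complex SAR images, the usual
# ingredient of a CCD map; window size and the synthetic scene are assumptions.
def coherence_map(img1, img2, win=5):
    cross = sliding_window_view(img1 * np.conj(img2), (win, win)).sum(axis=(-2, -1))
    p1 = sliding_window_view(np.abs(img1) ** 2, (win, win)).sum(axis=(-2, -1))
    p2 = sliding_window_view(np.abs(img2) ** 2, (win, win)).sum(axis=(-2, -1))
    return np.abs(cross) / np.sqrt(p1 * p2 + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    a = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    b = a.copy()
    b[20:30, 20:30] = rng.normal(size=(10, 10)) + 1j * rng.normal(size=(10, 10))  # changed patch
    gamma = coherence_map(a, b)
    print("coherence inside change:", round(float(gamma[20:26, 20:26].mean()), 2),
          "| elsewhere:", round(float(gamma[40:, 40:].mean()), 2))
```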

  19. Follow-up study of children with cerebral coordination disturbance (CCD, Vojta).

    Science.gov (United States)

    Imamura, S; Sakuma, K; Takahashi, T

    1983-01-01

    713 children (from newborn to 12 months old) with delayed motor development were carefully examined and classified at their first visit into normal, very light cerebral coordination disturbance (CCD, Vojta), light CCD, moderate CCD, severe CCD, suspected cerebral palsy (CP) and other diseases, and were followed up carefully. Ultimately, 89.0% of very light CCD, 71.4% of light CCD, 56.0% of moderate CCD and 30.0% of severe CCD cases developed normally. Among children who received Vojta's physiotherapy, 59.5% of moderate CCD and 45.5% of severe CCD cases developed normally. The classification of cases with delayed motor development into very light, light, moderate and severe CCD, based on the extent of abnormality in their postural reflexes, is useful and correlates well with their prognosis. Treatment by Vojta's method appears to be effective and helpful for young children with delayed motor development.

  20. CCD Astrophotography High-Quality Imaging from the Suburbs

    CERN Document Server

    Stuart, Adam

    2006-01-01

    This is a reference book for amateur astronomers who have become interested in CCD imaging. Those glorious astronomical images found in astronomy magazines might seem out of reach to newcomers to CCD imaging, but this is not the case. Great pictures are attainable with modest equipment. Adam Stuart’s many beautiful images, reproduced in this book, attest to the quality of – initially – a beginner’s efforts. Chilled-chip astronomical CCD-cameras and software are also wonderful tools for cutting through seemingly impenetrable light pollution. CCD Astrophotography from the Suburbs describes one man’s successful approach to the problem of getting high-quality astronomical images under some of the most light-polluted conditions. Here is a complete and thoroughly tested program that will help every CCD beginner to work towards digital imaging of the highest quality. It is equally useful to astronomers who have perfect observing conditions as it is to those who have to observe from light-polluted city skies.

  1. Purification and crystallization of Vibrio fischeri CcdB and its complexes with fragments of gyrase and CcdA

    Energy Technology Data Exchange (ETDEWEB)

    De Jonge, Natalie, E-mail: ndejonge@vub.ac.be; Buts, Lieven; Vangelooven, Joris [Department of Molecular and Cellular Interactions, VIB, Pleinlaan 2, 1050 Brussels (Belgium); Laboratorium voor Ultrastructuur, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels (Belgium); Mine, Natacha; Van Melderen, Laurence [Laboratoire de Génétique des Procaryotes, Institut de Biologie et de Médecine, Université Libre de Bruxelles, Gosselies (Belgium); Wyns, Lode; Loris, Remy [Department of Molecular and Cellular Interactions, VIB, Pleinlaan 2, 1050 Brussels (Belgium); Laboratorium voor Ultrastructuur, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels (Belgium)

    2007-04-01

    A CcdB homologue from V. fischeri was overexpressed in E. coli and purified. The free protein was crystallized, as were its complexes with fragments of E. coli and V. fischeri gyrase and with the F-plasmid CcdA C-terminal domain. The ccd toxin–antitoxin module from the Escherichia coli F plasmid has a homologue on the Vibrio fischeri integron. The homologue of the toxin (CcdB-Vfi) was crystallized in two different crystal forms. The first form belongs to space group I23 or I2₁3, with unit-cell parameter a = 84.5 Å, and diffracts to 1.5 Å resolution. The second crystal form belongs to space group C2, with unit-cell parameters a = 58.5, b = 43.6, c = 37.5 Å, β = 110.0°, and diffracts to 1.7 Å resolution. The complex of CcdB-Vfi with the GyrA14-Vfi fragment of V. fischeri gyrase crystallizes in space group P2₁2₁2₁, with unit-cell parameters a = 53.5, b = 94.6, c = 58.1 Å, and diffracts to 2.2 Å resolution. The corresponding mixed complex with E. coli GyrA14-Ec crystallizes in space group C2, with unit-cell parameters a = 130.1, b = 90.8, c = 58.1 Å, β = 102.6°, and diffracts to 1.95 Å. Finally, a complex between CcdB-Vfi and part of the F-plasmid antitoxin CcdA-F crystallizes in space group P2₁2₁2₁, with unit-cell parameters a = 46.9, b = 62.6, c = 82.0 Å, and diffracts to 1.9 Å resolution.

  2. Purification and crystallization of Vibrio fischeri CcdB and its complexes with fragments of gyrase and CcdA

    International Nuclear Information System (INIS)

    De Jonge, Natalie; Buts, Lieven; Vangelooven, Joris; Mine, Natacha; Van Melderen, Laurence; Wyns, Lode; Loris, Remy

    2007-01-01

    A CcdB homologue from V. fischeri was overexpressed in E. coli and purified. The free protein was crystallized, as were its complexes with fragments of E. coli and V. fischeri gyrase and with the F-plasmid CcdA C-terminal domain. The ccd toxin–antitoxin module from the Escherichia coli F plasmid has a homologue on the Vibrio fischeri integron. The homologue of the toxin (CcdB-Vfi) was crystallized in two different crystal forms. The first form belongs to space group I23 or I2₁3, with unit-cell parameter a = 84.5 Å, and diffracts to 1.5 Å resolution. The second crystal form belongs to space group C2, with unit-cell parameters a = 58.5, b = 43.6, c = 37.5 Å, β = 110.0°, and diffracts to 1.7 Å resolution. The complex of CcdB-Vfi with the GyrA14-Vfi fragment of V. fischeri gyrase crystallizes in space group P2₁2₁2₁, with unit-cell parameters a = 53.5, b = 94.6, c = 58.1 Å, and diffracts to 2.2 Å resolution. The corresponding mixed complex with E. coli GyrA14-Ec crystallizes in space group C2, with unit-cell parameters a = 130.1, b = 90.8, c = 58.1 Å, β = 102.6°, and diffracts to 1.95 Å. Finally, a complex between CcdB-Vfi and part of the F-plasmid antitoxin CcdA-F crystallizes in space group P2₁2₁2₁, with unit-cell parameters a = 46.9, b = 62.6, c = 82.0 Å, and diffracts to 1.9 Å resolution.

  3. Noise analysis for CCD-based ultraviolet and visible spectrophotometry.

    Science.gov (United States)

    Davenport, John J; Hodgkinson, Jane; Saffell, John R; Tatam, Ralph P

    2015-09-20

    We present the results of a detailed analysis of the noise behavior of two CCD spectrometers in common use, an AvaSpec-3648 CCD UV spectrometer and an Ocean Optics S2000 Vis spectrometer. Light sources used include a deuterium UV/Vis lamp and UV and visible LEDs. Common noise phenomena include source fluctuation noise, photoresponse nonuniformity, dark current noise, fixed pattern noise, and read noise. These were identified and characterized by varying light source, spectrometer settings, or temperature. A number of noise-limiting techniques are proposed, demonstrating a best-case spectroscopic noise equivalent absorbance of 3.5×10⁻⁴ AU for the AvaSpec-3648 and 5.6×10⁻⁴ AU for the Ocean Optics S2000 over a 30 s integration period. These techniques can be used on other CCD spectrometers to optimize performance.
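    One common way to quantify such performance is to record repeated blank spectra, convert them to absorbance against their mean, and take the standard deviation; the sketch below does this on synthetic shot-noise-limited frames. The counts and frame numbers are illustrative, not the instrument settings used in the paper.

```python
import numpy as np

# Noise-equivalent absorbance from repeated blank spectra: convert each frame
# to absorbance against the mean spectrum and take the standard deviation.
# Counts and frame numbers are synthetic, shot-noise-limited stand-ins.
def noise_equivalent_absorbance(spectra):
    reference = spectra.mean(axis=0)                 # I0: mean blank spectrum
    absorbance = -np.log10(spectra / reference)      # A = -log10(I / I0) per frame
    return float(absorbance.std(axis=0).mean())      # average noise over wavelength

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    lamp = np.full(2048, 30000.0)                    # flat lamp spectrum, 2048 pixels
    frames = rng.poisson(lamp, size=(50, 2048)).astype(float)
    print(f"NEA ~ {noise_equivalent_absorbance(frames):.1e} AU")
```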

  4. Investigation of radiation damage effects in neutron irradiated CCD

    International Nuclear Information System (INIS)

    Brau, James E.; Igonkina, Olga; Potter, Chris T.; Sinev, Nikolai B.

    2005-01-01

    A Charge Coupled Devices (CCD)-based vertex detector is a leading option for vertex detection at the future linear collider. A major issue for this application is the radiation hardness of such devices. Tests of radiation hardness of CCDs used in the SLD vertex detector, VXD3, have been reported earlier. The first measurements of 1998 involved a spare VXD3 CCD that was irradiated with neutrons from a radioactive source (Pu-Be), and from a nuclear reactor. In 2003, we had the opportunity to disassemble the VXD3 detector and study the nature of the radiation damage it incurred during 3 years of operation at SLC. In the preparation for this study, additional experiments with the spare VXD3 CCD were performed. These included measurements of trapping times in neutron irradiated CCDs. Results, reported here, will help us better understand the mechanism of radiation damage effects and develop techniques to minimize performance degradation due to radiation damage

  5. CCD image sensor induced error in PIV applications

    Science.gov (United States)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (˜0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a modeling for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, that can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  6. CCD image sensor induced error in PIV applications

    International Nuclear Information System (INIS)

    Legrand, M; Nogueira, J; Vargas, A A; Ventas, R; Rodríguez-Hidalgo, M C

    2014-01-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (∼0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a modeling for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, that can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described. (paper)

  7. Defect testing of large aperture optics based on high resolution CCD camera

    International Nuclear Information System (INIS)

    Cheng Xiaofeng; Xu Xu; Zhang Lin; He Qun; Yuan Xiaodong; Jiang Xiaodong; Zheng Wanguo

    2009-01-01

    A fast method for inspecting defects in large-aperture optics is introduced. With uniform illumination by an LED source at grazing incidence, the images of defects on the surface of, and inside, large-aperture optics are enlarged by scattering. The defect images were acquired with a high-resolution CCD camera and a microscope, and the approximate mathematical relation between the apparent dimension and the real dimension of the defects was obtained by simulation. Thus the approximate real dimension and location of all defects can be calculated from the high-resolution pictures. (authors)

  8. Stroboscope Based Synchronization of Full Frame CCD Sensors.

    Science.gov (United States)

    Shen, Liang; Feng, Xiaobing; Zhang, Yuan; Shi, Min; Zhu, Dengming; Wang, Zhaoqi

    2017-04-07

    The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope based synchronization approach for the charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equipment, the proposed approach greatly reduces the cost by using inexpensive CCD cameras and one stroboscope. The results show that our method could reach a high accuracy much better than the frame-level synchronization of traditional software methods.

  9. Digital Printing Quality Detection and Analysis Technology Based on CCD

    Science.gov (United States)

    He, Ming; Zheng, Liping

    2017-12-01

    CCD-based digital printing quality detection and analysis technology enables rapid, objective evaluation of print quality and provides a degree of control over it. The rational application of CCD-based quality testing and analysis plays a very positive role in improving the quality of digital printing and printed materials on a variety of printing equipment. In this paper, we present an in-depth study and discussion of CCD-based digital print quality testing and analysis technology.

  10. Calibration of the CCD photonic measuring system for railway inspection

    Science.gov (United States)

    Popov, D. V.; Ryabichenko, R. B.; Krivosheina, E. A.

    2005-08-01

    Increasing traffic speed is one of the most important tasks in the Moscow Metro, and requirements for traffic safety grow together with speed. Track inspection in the Moscow Metro currently relies on a track measurement car built in 1954; the main drawbacks of this system are the absence of automated data processing and low accuracy. The non-contact photonic measurement system (KSIR) has been developed to solve this problem. A new track inspection car will be built in several months; it will use two different track inspection systems and a car-locating subsystem based on track circuit counting. The KSIR consists of four subsystems: rail wear, height and track gauge measurement (BFSM); rail slump measurement (FIP); contact rail measurement (FKR); and speed, level and car locating (USI). A new subsystem for wheel flange wear (IRK) is currently being developed. The KSIR carries out measurements in real time. The BFSM subsystem contains 4 matrix CCD cameras and 4 infrared stripe illuminators; the FIP subsystem contains 4 line CCD cameras and 4 spot illuminators; the FKR subsystem contains 2 matrix CCD cameras and 2 stripe illuminators; and the IRK subsystem contains 2 CCD cameras and 2 stripe illuminators. Each subsystem was calibrated for its adjustment. In the first step the KSIR obtains data from the photonic sensors in internal measurement units; in the second step, using the calibration, the non-contact system converts the data to the metric system.

  11. Development of CCD Imaging System Using Thermoelectric Cooling Method

    Directory of Open Access Journals (Sweden)

    Youngsik Park

    2000-06-01

    We developed a low-light CCD imaging system using a thermoelectric cooling method, in collaboration with a company, to design a commercial model. It consists of a Kodak KAF-0401E (768x512 pixels) CCD chip and a thermoelectric module manufactured by Thermotek. This TEC system can reach an operating temperature of -25°C. We employed a Uniblitz VS25S shutter capable of a minimum exposure time of 80 ms. The system components are an interface card using a Korea Astronomy Observatory (hereafter KAO) ISA bus controller and image acquisition with an AD9816 chip, a 12-bit video processor. The performance test of this imaging system showed good operation within the initial specification of our design: a dark current of less than 0.4 e-/pixel/sec at a temperature of -10°C, a linearity of 99.9±0.1%, a gain of 4.24 e-/ADU, and a system noise of 25.3 e- (rms). For low-temperature CCD operation, we designed a TEC that uses a one-stage Peltier module and a forced-air heat exchanger. This TEC imaging system enables accurate photometry (±0.01 mag) even though the CCD is not at 'conventional' cryogenic temperatures (140 K). The system can be useful for other imaging applications as well. Finally, with this system, we obtained several images of astronomical objects for system performance tests.

  12. A CCD portrait of Comet P/Tempel 2

    International Nuclear Information System (INIS)

    Jewitt, D.; Luu, J.

    1989-01-01

    The development of activity in Comet P/Tempel 2 is studied from aphelion (R = 4 AU) to perihelion (R = 1.4 AU) using extensive time-series CCD photometry and CCD spectra. The comet undergoes a profound morphological change at R of about 2-2.5 AU, from a bare nucleus at larger distances to an active comet supporting a coma of gas and dust. Cyclic photometric variations with the period T = 8.95 ± 0.01 hr are present at all R, and are attributed to the rotation of the nucleus at this period. The nucleus is prolate (axes a:b:c = 1.9:1:1), a property shared with other nuclei studied using CCD photometry. Novel results include a limit on the bulk density of the nucleus, ρ above 300 kg/m³, and a 20-Å-resolution CCD spectrum of the nucleus. Spatially and temporally resolved photometry is used to study the effects of nucleus rotation on the coma. The coma does not share the dramatic photometric variations shown by the nucleus. It possesses a steep surface-brightness distribution, which is attributable to progressive destruction of the coma grains with increasing space exposure. 41 refs

  13. The possibilities of CCD photometry of optical afterglows of GRBs

    Czech Academy of Sciences Publication Activity Database

    Šimon, Vojtěch; Polášek, Cyril; Jelínek, M.; Hudec, René; Štrobl, Jan

    -, č. 125 (2010), s. 24-28 ISSN 1801-5964. [Conference on Variable Stars Research /41./. Prague, 27.11.2009-29.11.2009] Institutional research plan: CEZ:AV0Z10030501 Keywords : gamma-ray bursts * optical afterglows * CCD photometry Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  14. Flatfielding Errors in Strömvil CCD Photometry

    Directory of Open Access Journals (Sweden)

    Boyle R. P.

    2003-12-01

    Full Text Available The importance of determining the error of the flat field in CCD photometry is detailed, and our methods of doing this are described. We have now reached a precision of 1-1.5% in our photometry. Color-magnitude diagrams of the open cluster M67 (ours and Laugalys et al. 2003) are compared.

  15. BVRI CCD photometry of the globular cluster NGC 2808

    International Nuclear Information System (INIS)

    Alcaino, G.; Liller, W.; Alvarado, F.; Wenderoth, E.

    1990-01-01

    As a part of a continuing program, CCD color-magnitude diagrams are presented for the bright globular cluster NGC 2808 in the four colors comprising BVRI. From a comparison of four different CMDs with theoretical isochrones, an age of 16 ± 2 Gyr is obtained, assuming a value for Fe/H near -1.3. 28 refs

  16. Research of optical coherence tomography microscope based on CCD detector

    Science.gov (United States)

    Zhang, Hua; Xu, Zhongbao; Zhang, Shuomo

    2008-12-01

    The reference wave phase was modulated with a sinusoidal vibrating mirror attached to a piezoelectric transducer (PZT), the integration was performed by a CCD, and the charge storage period of the CCD image sensor was one-quarter of the period of the sinusoidal phase modulation. With the frequency-synchronous detection technique, four images (four frames of the interference pattern) were recorded during one period of the phase modulation. In order to obtain the optimum modulation parameters, the amplitude and phase of the sinusoidal phase modulation were determined by considering the measurement error caused by the additive noise contained in the detected values. The PZT oscillation was controlled by a closed-loop control system based on a PID controller. An ideal discrete digital sine function at 50 Hz with adjustable amplitude was used to drive the PZT, and a digital phase-shift technique was used to adjust the vibration phase of the PZT so that the modulation parameters could reach their optimum values. The CCD detector was triggered by software at 200 Hz. Based on this work, a small coherent signal masked by the predominant incoherent background was recovered with the CCD detector.
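
    With four interference frames recorded per modulation period, the amplitude and phase of the coherent signal can be demodulated pixel by pixel. The sketch below uses the classical four-step (quarter-period phase-step) formulas as a simplified stand-in; the exact integrating-bucket expressions for sinusoidal phase modulation used in the paper differ in their weighting factors.

```python
import numpy as np

def demodulate_four_frames(i1, i2, i3, i4):
    """Recover amplitude and phase of the interference term from four frames
    acquired at quarter-period intervals of the phase modulation.
    Classical four-step formulas; a simplification of sinusoidal-modulation detection."""
    i1, i2, i3, i4 = (np.asarray(x, dtype=float) for x in (i1, i2, i3, i4))
    amplitude = 0.5 * np.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2)
    phase = np.arctan2(i4 - i2, i1 - i3)
    return amplitude, phase
```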

  17. Technical challenges and recent progress in CCD imagers

    International Nuclear Information System (INIS)

    Bosiers, Jan T.; Peters, Inge M.; Draijer, Cees; Theuwissen, Albert

    2006-01-01

    This paper gives a review of the performance of charge-coupled device (CCD) imagers for use in consumer, professional and scientific applications. An overview of recent developments and the current state-of-the-art are presented. An extensive list of references is included

  18. A data-acquisition system for high speed linear CCD

    International Nuclear Information System (INIS)

    Liu Zhiyan; Chen Xiangcai; Jiang Xiaoshan; Zhang Hongyu; Liang Zhongwang; Xiang Haisheng; Hu Jun

    2010-01-01

    A data-acquisition system for a high-speed linear CCD (charge-coupled device) is introduced. Optical fiber transmission technology is used, and the data are sent to a PC through a USB or PCI interface. The construction of the system, the design of the PCI interface hardware, the software design, and the design of the control program running on the host computer are also described. (authors)

  19. CCD photometry of apparent dwarf galaxies in Fornax

    International Nuclear Information System (INIS)

    Phillipps, S.; Grimley, P.L.; Disney, M.J.; Cawson, M.G.M.; Kibblewhite, E.J.

    1986-01-01

    Blue and red CCD surface photometry of two apparent dwarf galaxies in the Fornax cluster region is presented. Luminosity profiles are derived and their form discussed. The fainter galaxy resembles an archetypal diffuse dwarf elliptical but the brighter of the pair is either an unusual red dwarf or a background galaxy in chance juxtaposition. (author)

  20. Measurements of 42 Wide CPM Pairs with a CCD

    Science.gov (United States)

    Harshaw, Richard

    2015-11-01

    This paper addresses the use of a Skyris 618C color CCD camera as a means of obtaining data for analysis in the measurement of wide common proper motion stars. The equipment setup is described and data collection procedure outlined. Results of the measures of 42 CPM stars are presented, showing the Skyris is a reliable device for the measurement of double stars.

  1. A large area cooled-CCD detector for electron microscopy

    International Nuclear Information System (INIS)

    Faruqi, A.R.; Andrews, H.N.; Raeburn, C.

    1994-01-01

    Large area cooled-CCDs are an excellent medium for (indirectly) recording electron images and electron diffraction patterns in real time and for use in electron tomography; real-time imaging is extremely useful in making rapid adjustments to the electron microscope. CCDs provide high sensitivity (useful for minimising the dose to radiation-sensitive biological specimens), good resolution, stable performance, excellent dynamic range and linearity, and a reasonably fast readout. We have built an electron imaging device based on the EEV 1152 by 814 pixel CCD which is controlled from a Unix-based SUN Sparcstation operating under X-Windows. The incident 100 kV electrons are converted to visible light in a 0.5 mm thick YAG single crystal which is imaged through a lens on to the CCD. The CCD electronics is designed to be as flexible as possible and allows a wide variation in the readout speed to cater for relatively fast applications where readout noise is less critical, and for low-readout-noise applications where the extra few seconds of readout time are not significant. The CCD electronics is built in VME format and is controlled through an S-bus to VME driver. With two parallel channels of readout the whole image can be read out in ~1 s (using the faster readout speed) with 16 bit precision, and the image is displayed under X-Windows in a few seconds. The present readout works at 500 kHz and has a noise of ~30 e- rms per pixel. With a Peltier cooling device we can operate the CCD at ~-40 °C, which reduces the dark current sufficiently to allow exposures of up to several minutes. Several examples of patterns collected with the system on a Philips CM12 microscope will be presented. ((orig.))

  2. Study of x-ray CCD image sensor and application

    Science.gov (United States)

    Wang, Shuyun; Li, Tianze

    2008-12-01

    In this paper, we describe the composition, characteristics, parameters, working process, and key techniques and methods of two-value (binarization) processing for charge-coupled devices (CCD). The quantification of the CCD video signal is explained, and the constitution and functions of the X-ray image intensifier and its coupling to the CCD are analyzed. We analyzed two effective methods to reduce the harm to human beings when X-rays are used in medical imaging. One is to reduce the X-ray radiation and intensify the image penetrated by the X-rays to obtain the same effect. The other is to use an image sensor to transfer the images to a safe area for observation. On this basis, a new method is presented in which a CCD image sensor and an X-ray image intensifier are combined organically. A practical medical X-ray photoelectric system was designed which can be used to record and time the transmission images of the human body. The system is mainly made up of the medical X-ray source, an X-ray image intensifier, a high-resolution CCD camera, an image processor, a display and so on. Its characteristics are: it changes the invisible X-ray image into a visible-light image, it outputs vivid images, and it has a short image recording time. At the same time we analyzed the main aspects which affect the system's resolution. A medical photoelectric system using an X-ray image sensor can sharply reduce the X-ray harm to humans when it is used in medical diagnosis. Finally, we analyze the system's prospective applications in medical engineering and related fields.

  3. Programmable CCD imaging system for synchrotron radiation studies

    International Nuclear Information System (INIS)

    Rodricks, B.; Brizard, C.

    1992-01-01

    A real-time imaging system for x-ray detection has been developed. The CAMAC-based system has a Charge Coupled Device (CCD) as its active detection element. The electronics consist of a CAMAC-crate-based dedicated microprocessor coupled to arbitrary waveform generators, programmable timing, and ADC modules. The hardware flexibility achievable through this system enables one to use virtually any commercially available CCD. A dedicated CAMAC-based display driver allows for real-time imaging on a high-resolution color monitor. An optional front end consisting of a fiber-optic taper and a focusing optical lens system coupled to a phosphor screen allows for large area imaging. Further, programming flexibility, in which the detector can be used in different read-out modes, enables it to be exploited for time-resolved experiments. In one mode, sections of the CCD can be read-out with millisecond time-resolution and, in another, the use of the CCD as a storage device is exploited resulting in microsecond time-resolution. Three different CCDs with radically different read-out timings and waveforms have been tested: the TI 4849, a 390x584 pixel array; TC 215, a 1024x1024 pixel array; and the TH 7883, a 576x384 pixel array. The TC 215 and TI 4849 are single-phase CCDs manufactured by Texas Instruments, and the TH 7883 is a four-phase device manufactured by Thomson-CSF. The CCD characterized for uniformity, charge transfer efficiency (CTE), linearity, and sensitivity is the TC215

  4. Direct and indirect signal detection of 122 keV photons with a novel detector combining a pnCCD and a CsI(Tl) scintillator

    Energy Technology Data Exchange (ETDEWEB)

    Schlosser, D.M., E-mail: dieter.schlosser@pnsensor.de [PNSensor GmbH, Sckellstraße 3, 81667 München (Germany); Huth, M.; Hartmann, R. [PNSensor GmbH, Sckellstraße 3, 81667 München (Germany); Abboud, A.; Send, S. [Universität Siegen, Walter-Flex-Straße 3, 57072 Siegen (Germany); Conka-Nurdan, T. [Türkisch-Deutsche Universität, Sakinkaya Cad. 86, Beykoz, 34820 Istanbul (Turkey); Shokr, M.; Pietsch, U. [Universität Siegen, Walter-Flex-Straße 3, 57072 Siegen (Germany); Strüder, L. [PNSensor GmbH, Sckellstraße 3, 81667 München (Germany); Universität Siegen, Walter-Flex-Straße 3, 57072 Siegen (Germany)

    2016-01-01

    By combining a low noise fully depleted pnCCD detector with a CsI(Tl) scintillator, an energy-dispersive area detector can be realized with a high quantum efficiency (QE) in the range from below 1 keV to above 100 keV. In direct detection mode the pnCCD exhibits a relative energy resolution of 1% at 122 keV and a spatial resolution of less than 75 µm, the pixel size of the pnCCD. In the indirect detection mode, i.e. conversion of the incoming X-rays in the scintillator, the measured energy resolution was about 9–13% at 122 keV, depending on the depth of interaction in the scintillator, while the position resolution, extracted with the help of simulations, was only 30 µm. We show simulated data for incident photons of 122 keV and compare the various interaction processes and relevant physical parameters to experimental results obtained with a radioactive 57Co source. - Highlights: • Position and energy resolving pnCCD+CsI(Tl) detector for energies from 1-150 keV • Detection in the pnCCD (122 keV): 1% energy and <75 µm spatial resolution • Detection in the scintillator (122 keV): 9-12% energy and ~30 µm spatial resolution.

  5. Optical CT scanning of PRESAGE™ polyurethane samples with a CCD-based readout system

    International Nuclear Information System (INIS)

    Doran, S J; Krstajic, N; Adamovics, J; Jenneson, P M

    2004-01-01

    This article demonstrates the resolution capabilities of the CCD scanner under ideal circumstances and describes the first CCD-based optical CT experiments on a new class of dosimeter, known as PRESAGE™ (Heuris Pharma, Skillman, NJ)

  6. Multi-hop Relaying: An End-to-End Delay Analysis

    KAUST Repository

    Chaaban, Anas

    2015-12-01

    The impact of multi-hopping schemes on the communication latency in a relay channel is studied. The main aim is to characterize conditions under which such schemes decrease the communication latency given a reliability requirement. Both decode-forward (DF) and amplify-forward (AF) with block coding are considered, and are compared with the point-to-point (P2P) scheme which ignores the relay. Latency expressions for the three schemes are derived, and conditions under which DF and AF reduce latency are obtained for high signal-to-noise ratio (SNR). Interestingly, these conditions are stricter than the conditions under which the same multi-hopping schemes achieve higher long-term (information-theoretic) rates than P2P. It turns out that the relation between the source-destination SNR and the harmonic mean of the SNRs of the channels to and from the relay dictates whether multi-hopping reduces latency or not.
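
    The decisive quantity mentioned at the end, the harmonic mean of the relay-link SNRs compared against the source-destination SNR, can be checked directly. The sketch below only illustrates that comparison; the exact high-SNR latency conditions derived in the paper involve additional terms not reproduced here.

```python
def harmonic_mean(a, b):
    return 2.0 * a * b / (a + b)

def multihop_favored(snr_sd, snr_sr, snr_rd):
    """Rough indicator: relaying tends to help when the harmonic mean of the
    source-relay and relay-destination SNRs is large relative to the direct
    source-destination SNR (illustrative proxy, not the paper's exact condition)."""
    return harmonic_mean(snr_sr, snr_rd) > snr_sd

# Example (linear SNRs): direct link 10, relay links 100 and 80.
print(multihop_favored(10.0, 100.0, 80.0))  # True: the relay links dominate
```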

  7. Adaptive end-to-end optimization of mobile video streaming using QoS negotiation

    NARCIS (Netherlands)

    Taal, Jacco R.; Langendoen, Koen; van der Schaaf, Arjen; van Dijk, H.W.; Lagendijk, R. (Inald) L.

    Video streaming over wireless links is a non-trivial problem due to the large and frequent changes in the quality of the underlying radio channel combined with latency constraints. We believe that every layer in a mobile system must be prepared to adapt its behavior to its environment. Thus layers

  8. End-to-End Verification of Information-Flow Security for C and Assembly Programs

    Science.gov (United States)

    2016-04-01

    seL4 security verification [18] avoids this issue in the same way. In that work, the authors frame their solution as a restriction that disallows...identical: (σ, σ′_1) ∈ T_M ∧ (σ, σ′_2) ∈ T_M ⟹ O_l(σ′_1) = O_l(σ′_2). The successful security verifications of both seL4 and mCertiKOS provide reasonable...evidence that this restriction on specifications is not a major hindrance for usability. Unlike the seL4 verification, however, our framework runs into a

  9. MONTAGE: A Methodology for Designing Composable End-to-End Secure Distributed Systems

    Science.gov (United States)

    2012-08-01

    and verification, from PSOS [NF03] to the recent seL4 [KEH+09]. While they make considerable progress toward high-assurance OS, these works are not...of the specification itself. Examples include the seL4 microkernel work by Klein et al. [KEH+09], which presents the experience of formally proving...David Cock, Philip Derrin, Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, Thomas Sewell, Harvey Tuch, and Simon Winwood. sel4

  10. Future Wireless Network: MyNET Platform and End-to-End Network Slicing

    OpenAIRE

    Zhang, Hang

    2016-01-01

    Future wireless networks are facing new challenges. These new challenges require new solutions and strategies of the network deployment, management, and operation. Many driving factors are decisive in the re-definition and re-design of the future wireless network architecture. In the previously published paper "5G Wireless Network - MyNET and SONAC", MyNET and SONAC, a future network architecture, are described. This paper elaborates MyNET platform with more details. The design principles of ...

  11. The Knowledge Graph for End-to-End Learning on Heterogeneous Knowledge

    NARCIS (Netherlands)

    Wilcke, W.X.; Bloem, P.; de Boer, Viktor

    2018-01-01

    In modern machine learning, raw data is the preferred input for our models. Where a decade ago data scientists were still engineering features, manually picking out the details we thought salient, they now prefer the data in their raw form. As long as we can assume that all relevant and irrelevant

  12. Network Slicing in Industry 4.0 Applications: Abstraction Methods and End-to-End Analysis

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Popovski, Petar; Kalør, Anders Ellersgaard

    2018-01-01

    Industry 4.0 refers to the fourth industrial revolution, and introduces modern communication and computation technologies such as 5G, cloud computing and Internet of Things to industrial manufacturing systems. As a result, many devices, machines and applications will rely on connectivity, while...... having different requirements from the network, ranging from high reliability and low latency to high data rates. Furthermore, these industrial networks will be highly heterogeneous as they will feature a number of diverse communication technologies. In this article, we propose network slicing...

  13. End-to-End Deep Learning Model For Automatic Sleep Staging Using Raw PSG Waveforms

    DEFF Research Database (Denmark)

    Olesen, Alexander Neergaard; Peppard, P. E.; Sorensen, H. B.

    2018-01-01

    Deep learning has seen significant progress over the last few years, especially in computer vision, where competitions such as the ImageNet challenge have been the driving factor behind many new model architectures far superior to humans in image recognition. We propose a novel method for automatic...... accuracy, precision and recall were 84.93%, 97.42% and 97.02%, respectively. Evaluating on the validation set yielded an overall accuracy of 85.07% and overall precision/recall of 98.54% and 95.72%, respectively. Conclusion: Preliminary results indicate that state of the art deep learning models can...... sleep staging, which relies on current advances in computer vision models eliminating the need for feature engineering or other transformations of input data. By exploiting the high capacity for complex learning in a state of the art object recognition model, we can effectively use raw PSG signals...

  14. End-to-End Mechanisms for Rate-Adaptive Multicast Streaming over the Internet

    OpenAIRE

    Rimac, Ivica

    2005-01-01

    Continuous media applications over packet-switched networks are becoming more and more popular. Radio stations, for example, already use streaming technology to disseminate their content to users on the Internet, and video streaming services are expected to experience similar popularity. In contrast to traditional television and radio broadcast systems, however, prevalent Internet streaming solutions are based on unicast communication and raise scalability and efficiency issues. Multicast com...

  15. An end-to-end security auditing approach for service oriented architectures

    NARCIS (Netherlands)

    Azarmi, M.; Bhargava, B.; Angin, P.; Ranchal, R.; Ahmed, N.; Sinclair, A.; Linderman, M.; Ben Othmane, L.

    2012-01-01

    Service-Oriented Architecture (SOA) is becoming a major paradigm for distributed application development in the recent explosion of Internet services and cloud computing. However, SOA introduces new security challenges not present in the single-hop client-server architectures due to the involvement

  16. Enhancing end-to-end QoS for multimedia streaming in IMS-based networks

    NARCIS (Netherlands)

    Ozcelebi, T.; Radovanovic, I.; Chaudron, M.R.V.

    2007-01-01

    Convergence of the emerging IP Multimedia Subsystem (IMS) includes unlicensed, nondedicated and nondeterministic, hence uncontrollable, computer access networks for IP multimedia services. It enables the provision of resource-demanding real-time services and multimedia communication, raising new

  17. An end-to-end computing model for the Square Kilometre Array

    NARCIS (Netherlands)

    Jongerius, R.; Wijnholds, S.; Nijboer, R.; Corporaal, H.

    2014-01-01

    For next-generation radio telescopes such as the Square Kilometre Array, seemingly minor changes in scientific constraints can easily push computing requirements into the exascale domain. The authors propose a model for engineers and astronomers to understand these relations and make tradeoffs in

  18. AAL Security and Privacy: transferring XACML policies for end-to-end access and usage control

    NARCIS (Netherlands)

    Vlamings, H.G.M.; Koster, R.P.

    2010-01-01

    Ambient Assisted Living (AAL) systems and services aim to provide a solution for growing healthcare expenses and the degradation of quality of life of the elderly using information and communication technology. In particular, AAL solutions are being created that are heavily based on web services and sensor

  19. Topological Constraints on Identifying Additive Link Metrics via End-to-end Paths Measurements

    Science.gov (United States)

    2012-09-20

    identifiable if and only if R in (1) has full column rank, i.e., rank(R) = n. In other words, to uniquely determine w, there must be n linearly...be identified from paths traversing l1; a similar argument applies to l2. Moreover, a similar analysis as in the proof of this lemma shows that none of
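
    The identifiability criterion in this snippet, that the additive link metrics w are uniquely determined exactly when the routing matrix R has full column rank n, is easy to check numerically. A minimal sketch using generic linear algebra (not the paper's code):

```python
import numpy as np

def identify_link_metrics(R, path_measurements):
    """Solve R @ w = y for the per-link metrics w when R (paths x links) has full column rank."""
    R = np.asarray(R, dtype=float)
    y = np.asarray(path_measurements, dtype=float)
    n_links = R.shape[1]
    if np.linalg.matrix_rank(R) < n_links:
        raise ValueError("links not identifiable: rank(R) < n")
    w, *_ = np.linalg.lstsq(R, y, rcond=None)
    return w

# Three end-to-end paths over three links; each row marks the links a path traverses.
R = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
y = [5.0, 7.0, 6.0]                  # measured end-to-end path delays
print(identify_link_metrics(R, y))   # per-link delays: [2. 3. 4.]
```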

  20. Exploring Two Approaches for an End-to-End Scientific Analysis Workflow

    Science.gov (United States)

    Dodelson, Scott; Kent, Steve; Kowalkowski, Jim; Paterno, Marc; Sehrish, Saba

    2015-12-01

    The scientific discovery process can be advanced by the integration of independently-developed programs run on disparate computing facilities into coherent workflows usable by scientists who are not experts in computing. For such advancement, we need a system which scientists can use to formulate analysis workflows, to integrate new components to these workflows, and to execute different components on resources that are best suited to run those components. In addition, we need to monitor the status of the workflow as components get scheduled and executed, and to access the intermediate and final output for visual exploration and analysis. Finally, it is important for scientists to be able to share their workflows with collaborators. We have explored two approaches for such an analysis framework for the Large Synoptic Survey Telescope (LSST) Dark Energy Science Collaboration (DESC); the first one is based on the use and extension of Galaxy, a web-based portal for biomedical research, and the second one is based on a programming language, Python. In this paper, we present a brief description of the two approaches, describe the kinds of extensions to the Galaxy system we have found necessary in order to support the wide variety of scientific analysis in the cosmology community, and discuss how similar efforts might be of benefit to the HEP community.

  1. End-to-end requirements management for multiprojects in the construction industry

    DEFF Research Database (Denmark)

    Wörösch, Michael

    Performance Concrete and insulation materials – is used. By means of action research and interviews of case project staff it has become evident that many elements of formalized requirements management are missing in the case project. To fill those gaps and be able to manage requirements end...... with regards to requirements management. As the literature study gives little new information, a series of interviews are initiated with experts from industry and universities. Those interviews reveal major shortcomings in the way requirements are handled in Danish construction companies today. In order...... to give managers of construction projects a useful and guiding tool for formally managing requirements that is rooted in practice, the “Conceptual requirements management framework”, is created. The framework builds upon the gathered empirical data, obtained by action research, interviews, and available...

  2. Ubiquitous Monitoring Solution for Wireless Sensor Networks with Push Notifications and End-to-End Connectivity

    Directory of Open Access Journals (Sweden)

    Luis M. L. Oliveira

    2014-01-01

    Full Text Available Wireless Sensor Networks (WSNs) belong to a new trend in technology in which tiny and resource-constrained devices are wirelessly interconnected and are able to interact with the surrounding environment by collecting data such as temperature and humidity. Recently, due to the huge growth in the use of mobile devices with Internet connection, smartphones are becoming the center of future ubiquitous wireless networks. Interconnecting WSNs with smartphones and the Internet is a big challenge, and new architectures are required due to the heterogeneity of these devices. Taking into account that people are using smartphones with Internet connection, there is a good opportunity to propose a new architecture for wireless sensor monitoring using push notifications and smartphones. This paper therefore proposes a ubiquitous approach for WSN monitoring based on a REST Web Service, a relational database, and an Android mobile application. Real-time data sensed by WSNs are sent directly to a smartphone or stored in a database and requested by the mobile application using a well-defined RESTful interface. A push notification system was created in order to alert mobile users when a sensor parameter exceeds a given threshold. The proposed architecture and mobile application were evaluated and validated using a laboratory WSN testbed and are ready for use.
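
    The alerting rule described above (notify the mobile user when a sensed parameter exceeds a threshold) can be sketched as follows. The endpoint URL, JSON field names and the notify function are hypothetical placeholders, not the paper's actual RESTful interface or push service.

```python
import json
import urllib.request

# Hypothetical endpoint and thresholds; the real interface of the paper may differ.
SENSOR_URL = "http://example.org/wsn/api/latest"
THRESHOLDS = {"temperature": 35.0, "humidity": 90.0}

def notify(message):
    # Placeholder for a real push-notification call to a mobile push service.
    print("PUSH:", message)

def check_once():
    with urllib.request.urlopen(SENSOR_URL) as resp:
        readings = json.load(resp)      # e.g. {"temperature": 36.2, "humidity": 71.0}
    for name, limit in THRESHOLDS.items():
        value = readings.get(name)
        if value is not None and value > limit:
            notify(f"{name} = {value} exceeds threshold {limit}")
```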

  3. Designing a holistic end-to-end intelligent network analysis and security platform

    Science.gov (United States)

    Alzahrani, M.

    2018-03-01

    A firewall protects a network from outside attacks; however, once an attack enters the network, it is difficult to detect. Significant incidents have happened recently: millions of Yahoo email accounts were stolen, and crucial data from institutions are held for ransom. For two years Yahoo's system administrators were not aware that there were intruders inside the network. This happened due to the lack of intelligent tools to monitor user behaviour in the internal network. This paper discusses the design of an intelligent anomaly/malware detection system with proper proactive actions. The aim is to equip the system administrator with a proper tool to battle insider attackers. The proposed system adopts machine learning to analyse each user's behaviour through the runtime behaviour of each node in the network. The machine learning techniques include deep learning, an evolving machine learning perceptron, a hybrid of neural networks and fuzzy logic, as well as predictive memory techniques. The proposed system is extended to deal with larger networks using agent techniques.

  4. Improving End-To-End Tsunami Warning for Risk Reduction on Canada’s West Coast

    Science.gov (United States)

    2015-01-01

    in 2014, up from 455 calls in 2013 (Chamber of Shipping, 2014). Even the more traditional forms of marine tourism such as sports fishing have been...some of the most noteworthy areas of new economic activity to emerge have been aquaculture, recreation and tourism, research and oil, gas and other...Risk Reduction on Canada's West Coast (CSSP-2013-TI-1033) annual value of output over $590 million (Fisheries and Oceans Canada, 2013). Tourism

  5. Research on the Establishment and Evaluation of End-to-End Service Quality Index System

    Science.gov (United States)

    Wei, Chen; Jing, Tao; Ji, Yutong

    2018-01-01

    From the perspective of power data networks, this paper puts forward an index system model to measure end-to-end service quality, covering user experience, business performance, network capacity support, etc., and describes how the indices at each layer of the model are established and used.

  6. Increasing Army Supply Chain Performance: Using an Integrated End to End Metrics System

    Science.gov (United States)

    2017-01-01

    Current metrics such as scheduled deliveries, delinquent contracts, PQDR/SDRs, forecasting accuracy, reliability, demand management, asset management strategies and pipeline ...are identified and characterized by statistical analysis. The study proposed a framework and tool for inventory management based on factors such as

  7. End-to-end unsupervised deformable image registration with a convolutional neural network

    NARCIS (Netherlands)

    de Vos, Bob D.; Berendsen, Floris; Viergever, Max A.; Staring, Marius; Išgum, Ivana

    2017-01-01

    In this work we propose a deep learning network for deformable image registration (DIRNet). The DIRNet consists of a convolutional neural network (ConvNet) regressor, a spatial transformer, and a resampler. The ConvNet analyzes a pair of fixed and moving images and outputs parameters for the spatial

  8. An end-to-end assessment of extreme weather impacts on food security

    Science.gov (United States)

    Chavez, Erik; Conway, Gordon; Ghil, Michael; Sadler, Marc

    2015-11-01

    Both governments and the private sector urgently require better estimates of the likely incidence of extreme weather events, their impacts on food crop production and the potential consequent social and economic losses. Current assessments of climate change impacts on agriculture mostly focus on average crop yield vulnerability to climate and adaptation scenarios. Also, although new-generation climate models have improved and there has been an exponential increase in available data, the uncertainties in their projections over years and decades, and at regional and local scale, have not decreased. We need to understand and quantify the non-stationary, annual and decadal climate impacts using simple and communicable risk metrics that will help public and private stakeholders manage the hazards to food security. Here we present an `end-to-end’ methodological construct based on weather indices and machine learning that integrates current understanding of the various interacting systems of climate, crops and the economy to determine short- to long-term risk estimates of crop production loss, in different climate and adaptation scenarios. For provinces north and south of the Yangtze River in China, we have found that risk profiles for crop yields that translate climate into economic variability follow marked regional patterns, shaped by drivers of continental-scale climate. We conclude that to be cost-effective, region-specific policies have to be tailored to optimally combine different categories of risk management instruments.

  9. End-to-End Key Exchange through Disjoint Paths in P2P Networks

    Directory of Open Access Journals (Sweden)

    Daouda Ahmat

    2015-01-01

    Full Text Available Due to their inherent features, P2P networks have proven to be effective in the exchange of data between autonomous peers. Unfortunately, these networks are subject to various security threats that cannot be addressed readily since traditional security infrastructures, which are centralized, cannot be applied to them. Furthermore, communication reliability across the Internet is threatened by various attacks, including usurpation of identity, eavesdropping or traffic modification. Thus, in order to overcome these security issues and allow peers to securely exchange data, we propose a new key management scheme over P2P networks. Our approach introduces a new method that enables a secret key exchange through disjoint paths in the absence of a trusted central coordination point which would be required in traditional centralized security systems.
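
    One simple way to realize a secret exchange over disjoint paths, in the spirit of the scheme described above, is to split the key into shares such that every share is needed for reconstruction and each share travels over a different path. The XOR-based split below is a generic textbook construction, not the paper's actual protocol.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n_paths: int):
    """Split `secret` into n_paths XOR shares; all shares are required to rebuild it."""
    shares = [os.urandom(len(secret)) for _ in range(n_paths - 1)]
    last = reduce(xor_bytes, shares, secret)   # secret XOR share_1 XOR ... XOR share_{n-1}
    return shares + [last]

def combine_shares(shares):
    return reduce(xor_bytes, shares)

key = os.urandom(32)                     # fresh 256-bit secret
shares = split_secret(key, n_paths=3)    # one share per disjoint path
assert combine_shares(shares) == key     # receiver XORs all shares back together
```

    An eavesdropper who observes fewer than all of the disjoint paths learns nothing about the key, which is the point of routing the shares disjointly.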

  10. Effect of 3 Key Factors on Average End to End Delay and Jitter in MANET

    Directory of Open Access Journals (Sweden)

    Saqib Hakak

    2015-01-01

    Full Text Available A mobile ad-hoc network (MANET) is a self-configuring, infrastructure-less network of mobile devices connected by wireless links, where each node or mobile device is free to move in any desired direction and thus the links keep moving from one node to another. In such a network, the mobile nodes are equipped with CSMA/CA (carrier sense multiple access with collision avoidance) transceivers and communicate with each other via radio. In MANETs, routing is considered one of the most difficult and challenging tasks. Because of this, most studies on MANETs have focused on comparing protocols under varying network conditions. But to the best of our knowledge no one has studied the effect of other factors on network performance indicators like throughput, jitter and so on, revealing how much influence a particular factor or group of factors has on each network performance indicator. Thus, in this study the effects of three key factors, i.e. routing protocol, packet size and DSSS rate, were evaluated on key network performance metrics, i.e. average delay and average jitter, as these parameters are crucial for network performance and directly affect the buffering requirements for all video devices and downstream networks.
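
    The two performance indicators examined here, average end-to-end delay and jitter, are computed from per-packet send and receive timestamps. A minimal sketch of those definitions, taking jitter as the mean absolute difference between consecutive delays (one common convention, not necessarily the exact definition used in the study):

```python
def end_to_end_metrics(send_times, recv_times):
    """Average end-to-end delay and jitter from matched per-packet timestamps (seconds)."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    avg_delay = sum(delays) / len(delays)
    diffs = [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]
    avg_jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return avg_delay, avg_jitter

send = [0.000, 0.020, 0.040, 0.060]
recv = [0.031, 0.054, 0.071, 0.098]
print(end_to_end_metrics(send, recv))  # (~0.0335 s average delay, ~0.0043 s average jitter)
```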

  11. Intelligent End-To-End Resource Virtualization Using Service Oriented Architecture

    NARCIS (Netherlands)

    Onur, E.; Sfakianakis, E.; Papagianni, C.; Karagiannis, Georgios; Kontos, T.; Niemegeers, I.G.M.M.; Niemegeers, I.; Chochliouros, I.; Heemstra de Groot, S.M.; Sjödin, P.; Hidell, M.; Cinkler, T.; Maliosz, M.; Kaklamani, D.I.; Carapinha, J.; Belesioti, M.; Futrps, E.

    2009-01-01

    Service-oriented architecture can be considered as a philosophy or paradigm in organizing and utilizing services and capabilities that may be under the control of different ownership domains. Virtualization provides abstraction and isolation of lower level functionalities, enabling portability of

  12. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas

    Science.gov (United States)

    Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.

    2017-11-01

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While the automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow approach using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated in addition to observed differences in electric field-volume histograms. Furthermore, the mesh file formats in MPHTXT and NASTRAN were also compared using the differences in the electric field-volume histogram. The Dice coefficient was less for auto-segmentation without versus auto-segmentation with post-processing, indicating convergence on a manually corrected model. An existent but marginal relative difference of electric field maps from models with manual correction versus those without was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.
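
    The segmentation comparison in this workflow rests on the Dice coefficient between two binary masks. A minimal sketch of that metric (generic, not the authors' pipeline code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks (array-likes of 0/1)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

auto = np.array([[0, 1, 1], [0, 1, 0]])        # auto-segmentation
corrected = np.array([[0, 1, 1], [1, 1, 0]])   # manually corrected mask
print(dice_coefficient(auto, corrected))       # 0.857...: close agreement
```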

  13. Hardware Support for Malware Defense and End-to-End Trust

    Science.gov (United States)

    2017-02-01

    this problem is described in section 3.1.5. 3.1.3. SOFTWARE ARCHITECTURE Starting from the Chromebook hardware platform, this project removed the...personalities (KVM Virtual Machines) of Android , while including our overall integrity architecture with integrity measurement, appraisal, and...attestation, both for the native Linux, and for the Android guests. The overall architecture developed in this project is shown in Figure 1. 3.1.4

  14. CLOUD SECURITY AND COMPLIANCE - A SEMANTIC APPROACH IN END TO END SECURITY

    OpenAIRE

    Kalaiprasath, R.; Elankavi, R.; Udayakumar, R.

    2017-01-01

    The Cloud services are becoming an essential part of many organizations. Cloud providers have to adhere to security and privacy policies to ensure their users' data remains confidential and secure. Though there are some ongoing efforts on developing cloud security standards, most cloud providers are implementing a mish-mash of security and privacy controls. This has led to confusion among cloud consumers as to what security measures they should expect from the cloud services, and whether thes...

  15. End-to-end information extraction without token-level supervision

    DEFF Research Database (Denmark)

    Palm, Rasmus Berg; Hovy, Dirk; Laws, Florian

    2017-01-01

    Most state-of-the-art information extraction approaches rely on token-level labels to find the areas of interest in text. Unfortunately, these labels are time-consuming and costly to create, and consequently, not available for many real-life IE tasks. To make matters worse, token-level labels...... and output text. We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT movie corpus and compare to neural baselines that do use token-level labels. We achieve competitive results, within a few percentage points of the baselines, showing the feasibility of E2E information extraction...

  16. Multi-hop Relaying: An End-to-End Delay Analysis

    KAUST Repository

    Chaaban, Anas; Sezgin, Aydin

    2015-01-01

    The impact of multi-hopping schemes on the communication latency in a relay channel is studied. The main aim is to characterize conditions under which such schemes decrease the communication latency given a reliability requirement. Both decode

  17. Development of an End-to-End Active Debris Removal (ADR) Mission Strategic Plan

    Data.gov (United States)

    National Aeronautics and Space Administration — The original proposal was to develop an ADR mission strategic plan. However, the task was picked up by the OCT. Subsequently the award was de-scoped to $30K to...

  18. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas.

    Science.gov (United States)

    Timmons, Joshua J; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T

    2017-10-12

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While the automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow approach using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated in addition to observed differences in electric field-volume histograms. Furthermore, the mesh file formats in MPHTXT and NASTRAN were also compared using the differences in the electric field-volume histogram. The Dice coefficient was less for auto-segmentation without versus auto-segmentation with post-processing, indicating convergence on a manually corrected model. An existent but marginal relative difference of electric field maps from models with manual correction versus those without was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.

  19. Towards End-to-End Lane Detection: an Instance Segmentation Approach

    OpenAIRE

    Neven, Davy; De Brabandere, Bert; Georgoulis, Stamatios; Proesmans, Marc; Van Gool, Luc

    2018-01-01

    Modern cars are incorporating an increasing number of driver assist features, among which automatic lane keeping. The latter allows the car to properly position itself within the road lanes, which is also crucial for any subsequent lane departure or trajectory planning decision in fully autonomous cars. Traditional lane detection methods rely on a combination of highly-specialized, hand-crafted features and heuristics, usually followed by post-processing techniques, that are computationally e...

  20. The Challenge of Ensuring Human Rights in the End-to-End Supply Chain

    DEFF Research Database (Denmark)

    Wieland, Andreas; Handfield, Robert B.

    2014-01-01

    Certification programs have their merits and their limitations. With the growing availability of social media, analytics tools, and supply chain data, a smarter set of solutions could soon be possible....

  1. Design and Evaluation for the End-to-End Detection of TCP/IP Header Manipulation

    Science.gov (United States)

    2014-06-01

    3.2.2 Outsourcing middleboxes Jingling [86] is a prototype outsourcing architecture where the network forwards data out to external “Feature...The relation to our problem is that Jingling could help proactively address broken and inadvertent middlebox behaviors, depending on the administrative

  2. Mining Fashion Outfit Composition Using An End-to-End Deep Learning Approach on Set Data

    OpenAIRE

    Li, Yuncheng; Cao, LiangLiang; Zhu, Jiang; Luo, Jiebo

    2016-01-01

    Composing fashion outfits involves deep understanding of fashion standards while incorporating creativity for choosing multiple fashion items (e.g., Jewelry, Bag, Pants, Dress). In fashion websites, popular or high-quality fashion outfits are usually designed by fashion experts and followed by large audiences. In this paper, we propose a machine learning system to compose fashion outfits automatically. The core of the proposed automatic composition system is to score fashion outfit candidates...

  3. Building an End-to-end System for Long Term Soil Monitoring

    Science.gov (United States)

    Szlavecz, K.; Terzis, A.; Musaloiu-E., R.; Cogan, J.; Szalay, A.; Gray, J.

    2006-05-01

    We have developed and deployed an experimental soil monitoring system in an urban forest. Wireless sensor nodes collect data on soil temperature, soil moisture, air temperature, and light. Data are uploaded into a SQL Server database, where they are calibrated and reorganized into an OLAP data cube. The data are accessible on-line using a web services interface with various visual tools. Our prototype system of ten nodes has been live since Sep 2005, and in 5 months of operation over 6 million measurements have been collected. At a high level, our experiment was a success: we detected variations in soil condition corresponding to topography and external environmental parameters as expected. However, we encountered a number of challenging technical problems: the need for low-level programming at multiple levels, calibration across space and time, and cross-reference of measurements with external sources. Based upon the experience with this system we are now deploying 200 more nodes with close to a thousand sensors spread over multiple sites in the context of the Baltimore Ecosystem Study LTER.

  4. End-to-end integrated security and performance analysis on the DEGAS Choreographer platform

    DEFF Research Database (Denmark)

    Buchholtz, Mikael; Gilmore, Stephen; Haenel, Valentin

    2005-01-01

    We present a software tool platform which facilitates security and performance analysis of systems which starts and ends with UML model descriptions. A UML project is presented to the platform for analysis, formal content is extracted in the form of process calculi descriptions, analysed with the...

  5. How can end-to-end processes be safeguarded in the organisation?

    NARCIS (Netherlands)

    Strikwerda, H.

    2017-01-01

    Processes in which knowledge, information and materials are transformed into goods and services form the core of organizing. That is one of the oldest principles in business administration. In scientific management, and thus in lean six sigma, processes are the object of analysis and improvement

  6. SecMon: End-to-End Quality and Security Monitoring System

    OpenAIRE

    Ciszkowski, Tomasz; Eliasson, Charlott; Fiedler, Markus; Kotulski, Zbigniew; Lupu, Radu; Mazurczyk, Wojciech

    2008-01-01

    The Voice over Internet Protocol (VoIP) is becoming a more available and popular way of communicating for Internet users. This also applies to Peer-to-Peer (P2P) systems and merging these two have already proven to be successful (e.g. Skype). Even the existing standards of VoIP provide an assurance of security and Quality of Service (QoS), however, these features are usually optional and supported by limited number of implementations. As a result, the lack of mandatory and widely applicable Q...

  7. Optimization of polarimetry sensitivity for X-ray CCD

    CERN Document Server

    Hayashida, K; Tsunemi, H; Hashimoto, Y; Ohtani, M

    1999-01-01

    X-ray polarimetry with a CCD has been performed using a polarized X-ray beam from an electron-impact X-ray source. The standard data reduction method employing double-pixel events yields a modulation factor M of 0.14 at 27 keV and 0.24 at 43 keV for the 12 µm pixel size CCD chip. We develop a new data reduction method in which multi-pixel events are employed and the charge spread is approximated as an oval shape. We optimize the reduction parameters so that we improve P_min (the minimum detectable polarization degree) by a factor of three compared with the value obtained through the usual double-pixel event method.
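
    The modulation factor M quoted here is the standard figure of merit for a photoelectric X-ray polarimeter: the fractional amplitude of the cos 2φ modulation in the azimuthal distribution of reconstructed event directions. A generic sketch of estimating M from a histogram of event azimuth angles (not the authors' reduction code):

```python
import numpy as np

def modulation_factor(azimuth_angles_rad, n_bins=36):
    """Fit N(phi) = A + B*cos(2*(phi - phi0)) to the event azimuth histogram
    and return M = B / A, the modulation factor."""
    counts, edges = np.histogram(azimuth_angles_rad, bins=n_bins, range=(0.0, np.pi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Linear least squares in the basis {1, cos(2*phi), sin(2*phi)}.
    design = np.column_stack([np.ones_like(centers),
                              np.cos(2 * centers),
                              np.sin(2 * centers)])
    a, c, s = np.linalg.lstsq(design, counts, rcond=None)[0]
    return np.hypot(c, s) / a

# Example: simulate weakly modulated events (true M ~ 0.2) and recover it.
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, np.pi, 200000)
keep = rng.uniform(0.0, 1.2, phi.size) < (1.0 + 0.2 * np.cos(2 * phi))
print(round(modulation_factor(phi[keep]), 2))   # ~0.2
```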

  8. Entirely saturated unilateral smear of laser spot in CCD

    International Nuclear Information System (INIS)

    Zhang Zhen; Zhou Menglian; Zhang Jianmin; Lin Xinwei

    2013-01-01

    In video from a linear CCD camera irradiated by a 532 nm CW laser, an entirely saturated unilateral smear of the laser spot was found. The smear area does not represent the distribution of the laser. Since this smear lies only on one side of the laser spot, it cannot be induced by light leakage or carrier blooming; it may instead be induced by charge transfer loss. However, the feature that the smear area is entirely saturated cannot be explained by the current constant model of charge transfer inefficiency. Based on the inner structure and operating principle of buried-channel CCDs, a new model of charge transfer inefficiency that varies with charge quantity is proposed, which can explain the entirely saturated unilateral smear of the laser spot. (authors)

  9. Two-dimensional spectrophotometry of planetary nebulae by CCD imaging

    International Nuclear Information System (INIS)

    Jacoby, G.H.; Africano, J.L.; Quigley, R.J.; Western Washington Univ., Bellingham, WA)

    1987-01-01

    The spatial distribution of the electron temperature and density and the ionic abundances of O(+), O(2+), N(+), and S(+) have been derived from CCD images of the planetary nebulae NGC 40 and NGC 6826 taken in the important emission lines of forbidden O II, forbidden O III, H-beta, forbidden N II, and forbidden S II. The steps required in the derivation of the absolute fluxes, line ratios, and ionic abundances are outlined and then discussed in greater detail. The results show that the CCD imaging technique for two-dimensional spectrophotometry can effectively compete with classical spectrophotometry, providing the added benefits of complete spatial coverage at seeing-disk spatial resolution. The multiplexing in the spatial dimension, however, results in a loss of spectral information, since only one emission line is observed at any one time. 37 references

  10. Stroboscope Based Synchronization of Full Frame CCD Sensors

    Directory of Open Access Journals (Sweden)

    Liang Shen

    2017-04-01

    Full Text Available The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope based synchronization approach for the charge-coupled device (CCD consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equipment, the proposed approach greatly reduces the cost by using inexpensive CCD cameras and one stroboscope. The results show that our method could reach a high accuracy much better than the frame-level synchronization of traditional software methods.

  11. CCD-based thermoreflectance microscopy: principles and applications

    International Nuclear Information System (INIS)

    Farzaneh, M; Maize, K; Shakouri, A; Lueerssen, D; Summers, J A; Hudgings, Janice A; Mayer, P M; Ram, R J; Raad, P E; Pipe, K P

    2009-01-01

    CCD-based thermoreflectance microscopy has emerged as a high resolution, non-contact imaging technique for thermal profiling and performance and reliability analysis of numerous electronic and optoelectronic devices at the micro-scale. This thermography technique, which is based on measuring the relative change in reflectivity of the device surface as a function of change in temperature, provides high-resolution thermal images that are useful for hot spot detection and failure analysis, mapping of temperature distribution, measurement of thermal transient, optical characterization of photonic devices and measurement of thermal conductivity in thin films. In this paper we review the basic physical principle behind thermoreflectance as a thermography tool, discuss the experimental setup, resolutions achieved, signal processing procedures and calibration techniques, and review the current applications of CCD-based thermoreflectance microscopy in various devices. (topical review)
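
    At the core of the technique is the linear relation ΔR/R = κ·ΔT between the relative reflectance change and the temperature rise, with κ the material- and wavelength-dependent thermoreflectance coefficient obtained by calibration. A minimal sketch of converting a measured reflectance-change map into a temperature map (the κ value below is an illustrative placeholder):

```python
import numpy as np

KAPPA = 2.0e-4   # 1/K, illustrative thermoreflectance coefficient from calibration

def temperature_map(r_hot, r_cold, kappa=KAPPA):
    """Convert reflectance images with the device powered (r_hot) and unpowered (r_cold)
    into a temperature-rise map using delta_R / R = kappa * delta_T."""
    r_hot = np.asarray(r_hot, dtype=float)
    r_cold = np.asarray(r_cold, dtype=float)
    delta_r_over_r = (r_hot - r_cold) / r_cold
    return delta_r_over_r / kappa     # delta_T in kelvin
```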

  12. Stroboscope Based Synchronization of Full Frame CCD Sensors

    OpenAIRE

    Shen, Liang; Feng, Xiaobing; Zhang, Yuan; Shi, Min; Zhu, Dengming; Wang, Zhaoqi

    2017-01-01

    The key obstacle to the use of consumer cameras in computer vision and computer graphics applications is the lack of synchronization hardware. We present a stroboscope based synchronization approach for the charge-coupled device (CCD) consumer cameras. The synchronization is realized by first aligning the frames from different video sequences based on the smear dots of the stroboscope, and then matching the sequences using a hidden Markov model. Compared with current synchronized capture equi...

  13. CCD photometry of Cepheid sequences in four nearby galaxies

    International Nuclear Information System (INIS)

    Metcalfe, N.; Shanks, T.

    1991-01-01

    We present Isaac Newton Telescope B and V CCD observations of deep photometric sequences in the vicinity of Cepheid variable stars in three nearby galaxies - M31, M33 and NGC 2403. We have also checked the photometry of the brightest stars in M81 and its dwarf companion, Holmberg IX. We use our data, combined with other recent results, to re-analyse the Cepheid distances to these galaxies. (author)

  14. Ultraviolet downconverting phosphor for use with silicon CCD imagers

    Science.gov (United States)

    Blouke, M. M.; Cowens, M. W.; Hall, J. E.; Westphal, J. A.; Christensen, A. B.

    1980-01-01

    The properties and application of a UV downconverting phosphor (coronene) to silicon charge coupled devices are discussed. Measurements of the absorption spectrum have been extended to below 1000 A, and preliminary results indicate the existence of useful response to at least 584 A. The average conversion efficiency of coronene was measured to be approximately 20% at 2537 A. Imagery at 3650 A using a backside illuminated 800 x 800 CCD coated with coronene is presented.

  15. CCD Photometry of W UMa Type Binary TY UMa

    Directory of Open Access Journals (Sweden)

    Young Woon Kang

    2001-06-01

    Full Text Available We present VRI CCD photometry of the W UMa type binary TY UMa. The light curves show that the secondary minimum is deeper than the primary minimum and that maximum I (phase 0.25) is 0.023 mag brighter than maximum II (phase 0.75). The V light curve has been analyzed and the photometric solutions have been determined by the Wilson & Devinney differential correction method. We adopted a spot model to explain the asymmetric light curve.

  16. STARL -- a Program to Correct CCD Image Defects

    Science.gov (United States)

    Narbutis, D.; Vanagas, R.; Vansevičius, V.

    We present a program tool, STARL, designed for the automatic detection and correction of various defects in CCD images. It uses a genetic algorithm for deblending and restoring overlapping saturated stars in crowded stellar fields. Using Subaru Telescope Suprime-Cam images we demonstrate that the program can be incorporated in wide-field survey data processing pipelines for the production of high quality color mosaics. The source code and examples are available at the STARL website.

  17. A CCD fitted to the UV Prime spectrograph: Performance

    International Nuclear Information System (INIS)

    Boulade, O.

    1986-10-01

    A CCD camera was fitted to the 3.6 m French-Canadian telescope in Hawaii. The performance of the system and observations of elliptical galaxies (stellar content and galactic evolution in a cluster) and quasars (absorption lines in spectra) are reported. In spite of its resolution being only average, the extremely fast optics of the UV spectrograph gives good signal-to-noise ratios, enabling redshifts and velocity scatter to be calculated with an accuracy better than 30 km/sec [fr

  18. A Bridge Deflection Monitoring System Based on CCD

    Directory of Open Access Journals (Sweden)

    Baohua Shan

    2016-01-01

    Full Text Available For long-term monitoring of the midspan deflection of the Songjiazhuang cloverleaf junction on National Road 309 in Zibo city, this paper proposes a deflection monitoring method based on Zhang's calibration and DIC. CCD cameras are used to track the change of the targets' positions, Zhang's calibration algorithm is introduced to acquire the intrinsic and extrinsic parameters of the CCD cameras, and the DIC method is combined with Zhang's calibration algorithm to measure bridge deflection. A comparative test between Zhang's calibration and scale calibration was conducted in the lab, and the experimental results indicate that the proposed method has higher precision. According to the deflection monitoring scheme, deflection monitoring software for the Songjiazhuang cloverleaf junction was developed in MATLAB, and a 4-channel CCD deflection monitoring system for the Songjiazhuang cloverleaf junction is integrated in this paper. This deflection monitoring system includes functions such as image preview, simultaneous collection, camera calibration, deflection display, and data storage. In situ deflection curves show a consistent trend; this suggests that the proposed method is reliable and is suitable for the long-term monitoring of bridge deflection.

  19. Stellar CCD Photometry: New Approach, Principles and Application

    Science.gov (United States)

    El-Bassuny Alawy, A.

    A new approach is proposed and developed to handle pre-processed CCD frames in order to identify stellar images and derive their relevant parameters. It relies on: 1) identifying stellar images and assigning approximate positions of their centres using an artificial intelligence technique (a knowledge-based system), 2) accurate determination of the centre coordinates applying an elementary statistical concept, and 3) estimating the image peak intensity as a stellar magnitude measure employing a simple numerical analysis approach. The method has been coded for personal computer users. A CCD frame of the star cluster M67 was adopted as a test case. The results obtained are discussed in comparison with those of DAOPHOT II and the corresponding published data. Exact coincidence has been found between both sets of results except in very few cases. These exceptions are discussed in the light of the basis of both methods and the cluster plates. The suggested method turns out to be a very simple, extremely fast, high-precision method of stellar CCD photometry. Moreover, it is more capable than DAOPHOT II of handling blended and distorted stellar images. These characteristics show the usefulness of the present method in astronomical applications such as auto-focusing and auto-guiding, besides its main purpose, stellar photometry.

  20. Design and implementation of fast bipolar clock drivers for CCD imaging systems in space applications

    Science.gov (United States)

    Jayarajan, Jayesh; Kumar, Nishant; Verma, Amarnath; Thaker, Ramkrishna

    2016-05-01

    Drive electronics for generating fast, bipolar clocks that can drive capacitive loads of the order of 5-10 nF are indispensable for present-day charge coupled devices (CCDs). The design of these high-speed bipolar clocks is challenging because of the capacitive loads that have to be driven and a strict constraint on the rise and fall times. Designing drive electronics for space applications becomes even more challenging due to the limited number of available discrete devices that can survive the harsh, radiation-prone space environment. This paper presents the design, simulations, and test results of a set of such high-speed, bipolar clock drivers. The design has been tested over a thermal cycle of -15 °C to +55 °C under vacuum conditions and has been built using radiation-hardened components. The test results show that the design meets the stringent rise/fall time requirements of 50 ± 10 ns for the multiple vertical CCD (VCCD) clocks and 20 ± 5 ns for the horizontal CCD (HCCD) clocks with sufficient design margins across the full temperature range, at a pixel readout rate of 6.6 MHz. The full design has been realized on a flexi-rigid PCB with a package volume of 140 x 160 x 50 mm3.
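
    As a rough sanity check on why such loads are demanding, the peak current a driver must source or sink follows from I = C·dV/dt. The short sketch below applies this to the load range and rise/fall times quoted above; the 10 V clock swing is an assumed value for illustration only.

```python
# Peak current needed to slew a capacitive CCD clock line: I = C * dV/dt.
# Loads and rise times are taken from the abstract; the 10 V swing is an assumed value.
def peak_drive_current(c_load_farad, v_swing_volt, t_rise_s):
    return c_load_farad * v_swing_volt / t_rise_s

for c_nf in (5.0, 10.0):                                  # VCCD-class loads, 5-10 nF
    for t_ns, name in ((50.0, "VCCD"), (20.0, "HCCD")):   # nominal rise/fall times
        i_a = peak_drive_current(c_nf * 1e-9, 10.0, t_ns * 1e-9)
        print(f"{name}: C = {c_nf:4.1f} nF, tr = {t_ns:4.1f} ns -> I ~ {i_a:.1f} A")
```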

  1. A directional fast neutron detector using scintillating fibers and an intensified CCD camera system

    International Nuclear Information System (INIS)

    Holslin, Daniel; Armstrong, A.W.; Hagan, William; Shreve, David; Smith, Scott

    1994-01-01

    We have been developing and testing a scintillating fiber detector (SFD) for use as a fast-neutron sensor which can discriminate against neutrons entering at angles non-parallel to the fiber axis (''directionality''). The detector/convertor component is a fiber bundle constructed of plastic scintillating fibers, each measuring 10 cm long and either 0.3 mm or 0.5 mm in diameter. Extensive Monte Carlo simulations were made to optimize the bundle response to a range of fast-neutron energies and to intense fluxes of high-energy gamma-rays. The bundle is coupled to a set of gamma-ray-insensitive electro-optic intensifiers whose output is viewed by a CCD camera directly coupled to the intensifiers. Two types of CCD cameras were utilized: 1) a standard, interline RS-170 camera with electronic shuttering and 2) a high-speed (up to 850 frames/s) field-transfer camera. Measurements of the neutron detection efficiency and directionality were made using 14 MeV neutrons, and the response to gamma-rays was measured using intense fluxes from radioisotopic sources (up to 20 R/h). Recently, the detector was constructed and tested using a large 10 cm by 10 cm square fiber bundle coupled to a 10 cm diameter GEN I intensifier tube. We present a description of the various detector systems and report the results of experimental tests. (orig.)

  2. An LOD with improved breakdown voltage in full-frame CCD devices

    Science.gov (United States)

    Banghart, Edmund K.; Stevens, Eric G.; Doan, Hung Q.; Shepherd, John P.; Meisenzahl, Eric J.

    2005-02-01

    In full-frame image sensors, lateral overflow drain (LOD) structures are typically formed along the vertical CCD shift registers to provide a means for preventing charge blooming in the imager pixels. In a conventional LOD structure, the n-type LOD implant is made through the thin gate dielectric stack in the device active area and adjacent to the thick field oxidation that isolates the vertical CCD columns of the imager. In this paper, a novel LOD structure is described in which the n-type LOD impurities are placed directly under the field oxidation and are, therefore, electrically isolated from the gate electrodes. By reducing the electrical fields that cause breakdown at the silicon surface, this new structure permits a larger amount of n-type impurities to be implanted for the purpose of increasing the LOD conductivity. As a consequence of the improved conductance, the LOD width can be significantly reduced, enabling the design of higher resolution imaging arrays without sacrificing charge capacity in the pixels. Numerical simulations with MEDICI of the LOD leakage current are presented that identify the breakdown mechanism, while three-dimensional solutions to Poisson's equation are used to determine the charge capacity as a function of pixel dimension.

  3. Scan-free grazing emission XRF measurements in the laboratory using a CCD

    International Nuclear Information System (INIS)

    Szwedowski, Veronika; Baumann, Jonas; Mantouvalou, Ioanna; Bauer, Leona; Malzer, Wolfgang; Kanngiesser, Birgit

    2017-01-01

    The rapid development of new classes of nanomaterials calls for easily accessible methods to quantify properties essential for their functionality, e.g., interdiffusion of elements at interfaces, elemental dopants, or depth profiles. Non-destructive methods like X-ray fluorescence (XRF) are of special interest, as they preserve the material and offer the possibility of incorporating the analysis into a production process. In-depth XRF methods for the characterization of nanomaterials have so far been limited to synchrotron radiation facilities. A novel scan-free grazing emission XRF (GEXRF) setup is presented utilizing conventional and low-cost hardware, acting as a transfer of a synchrotron method into the laboratory. A chromium-target X-ray tube with a polycapillary lens is used as the X-ray source and a conventional CCD as the 2D energy-dispersive detector. To confirm the feasibility of the described setup, a nanometer-layered titanium-aluminium sample is measured. An energy-dispersive spectrum is obtained in single-photon-counting mode from the CCD measurements and first GEXRF profiles are generated. A semi-quantitative evaluation of the setup is carried out by comparing the measured results with simulations, allowing conclusions about the investigated sample's elemental, compositional, and structural layer-by-layer characteristics. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
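
    The single-photon-counting evaluation of CCD frames mentioned above can be illustrated with the following sketch. It is not the authors' pipeline but a generic outline of the usual approach: bias-subtracted pixels above a noise threshold are treated as photon events and summed into an energy histogram. The threshold, the eV-per-ADU gain, and the neglect of charge splitting between pixels are simplifying assumptions.

```python
import numpy as np

def spectrum_from_frames(frames, bias, threshold_adu, ev_per_adu,
                         n_bins=2048, e_max_ev=20000.0):
    """Build an energy-dispersive spectrum from CCD frames in single-photon-counting mode.

    frames: iterable of 2-D ADU arrays; bias: 2-D offset map; threshold_adu and
    ev_per_adu are assumed calibration constants for this illustration.
    """
    hist = np.zeros(n_bins)
    edges = np.linspace(0.0, e_max_ev, n_bins + 1)
    for frame in frames:
        signal = frame.astype(float) - bias          # remove the electronic offset
        hits = signal > threshold_adu                # pixels carrying photon charge
        energies = signal[hits] * ev_per_adu         # charge splitting ignored for brevity
        hist += np.histogram(energies, bins=edges)[0]
    return edges, hist
```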

  4. C.C.D. readout of a picosecond streak camera with an intensified C.C.D

    International Nuclear Information System (INIS)

    Lemonier, M.; Richard, J.C.; Cavailler, C.; Mens, A.; Raze, G.

    1984-08-01

    This paper deals with a digital streak-camera readout device. The device consists of a low-light-level television camera made of a solid-state C.C.D. array coupled to an image intensifier, combined with a video digitizer and a microcomputer system. The streak-camera images are picked up as a video signal, digitized, and stored. This system allows fast recording and automatic processing of the data provided by the streak tube.

  5. Primitive chain network simulations of probe rheology.

    Science.gov (United States)

    Masubuchi, Yuichi; Amamoto, Yoshifumi; Pandey, Ankita; Liu, Cheng-Yang

    2017-09-27

    Probe rheology experiments, in which the dynamics of a small number of probe chains dissolved in immobile matrix chains is examined, have been performed to support the development of molecular theories for entangled polymer dynamics. Although probe-chain dynamics in probe rheology is hypothetically treated as single-chain dynamics in a fixed tube-shaped confinement, it has not been fully elucidated. For instance, the end-to-end relaxation of probe chains is slower than that for monodisperse melts, contrary to the conventional molecular theories. In this study, the viscoelastic and dielectric relaxations of probe chains were calculated by primitive chain network simulations. The simulations semi-quantitatively reproduced the dielectric relaxation, which reflects the effect of constraint release on the end-to-end relaxation. Fair agreement was also obtained for the viscoelastic relaxation time. However, the viscoelastic relaxation intensity was underestimated, possibly due to some flaws in the model for the inter-chain cross-correlations between probe and matrix chains.

  6. Crystallization of the C-terminal domain of the addiction antidote CcdA in complex with its toxin CcdB

    International Nuclear Information System (INIS)

    Buts, Lieven; De Jonge, Natalie; Loris, Remy; Wyns, Lode; Dao-Thi, Minh-Hoa

    2005-01-01

    The CcdA C-terminal domain was crystallized in complex with CcdB in two crystal forms that diffract to beyond 2.0 Å resolution. CcdA and CcdB are the antidote and toxin of the ccd addiction module of Escherichia coli plasmid F. The CcdA C-terminal domain (CcdA-C36; 36 amino acids) was crystallized in complex with CcdB (a dimer of 2 × 101 amino acids) in three different crystal forms, two of which diffract to high resolution. Form II belongs to space group P2₁2₁2₁, with unit-cell parameters a = 37.6, b = 60.5, c = 83.8 Å, and diffracts to 1.8 Å resolution. Form III belongs to space group P2₁, with unit-cell parameters a = 41.0, b = 37.9, c = 69.6 Å, β = 96.9°, and diffracts to 1.9 Å resolution

  7. The development of high-speed 100 fps CCD camera

    International Nuclear Information System (INIS)

    Hoffberg, M.; Laird, R.; Lenkzsus, F.; Liu, C.; Rodricks, B.

    1997-01-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512 x 512 pixel CCD as its sensor, which is read out from two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized to 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data-acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to the PC disk storage. The uncooled CCD can be used either with lenses for visible-light imaging or with a phosphor screen for X-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic faceplate for high-resolution, high-speed X-ray imaging. The camera is controlled through a custom event-driven, user-friendly Windows package. The pixel clock speed can be changed from 1 to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper describes the electronics, software, and characterizations that have been performed using both visible and X-ray photons. (orig.)

  8. CCD Development Progress at Lawrence Berkeley National Laboratory

    OpenAIRE

    Kolbe, W.F.; Holland, S.E.; Bebek, C.J.

    2006-01-01

    P-channel CCD imagers, 200-300 µm thick, fully depleted, and back-illuminated, are being developed for scientific applications including ground- and space-based astronomy and x-ray detection. These thick devices have extended IR response, good point-spread function (PSF) and excellent radiation tolerance. Initially, these CCDs were made in-house at LBNL using 100 mm diameter wafers. Fabrication on high-resistivity 150 mm wafers is now proceeding according to a model in which the wafers are fir...

  9. A luminescence imaging system based on a CCD camera

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.; Markey, B.G.

    1997-01-01

    Stimulated luminescence arising from naturally occurring minerals is likely to be spatially heterogeneous. Standard luminescence detection systems are unable to resolve this variability. Several research groups have attempted to use imaging photon detectors, or image intensifiers linked...... to photographic systems, in order to obtain spatially resolved data. However, the former option is extremely expensive and it is difficult to obtain quantitative data from the latter. This paper describes the use of a CCD camera for imaging both thermoluminescence and optically stimulated luminescence. The system...

  10. Neutral-beam performance analysis using a CCD camera

    International Nuclear Information System (INIS)

    Hill, D.N.; Allen, S.L.; Pincosy, P.A.

    1986-01-01

    We have developed an optical diagnostic system suitable for characterizing the performance of energetic neutral beams. An absolutely calibrated CCD video camera is used to view the neutral beam as it passes through a relatively high-pressure (10⁻⁵ Torr) region outside the neutralizer: collisional excitation of the fast deuterium atoms produces Hα emission (λ = 6561 Å) that is proportional to the local atomic current density, independent of the species mix of accelerated ions over the energy range 5 to 20 keV. Digital processing of the video signal provides profile and aiming information for beam optimization. 6 refs., 3 figs

  11. Case study of atmospheric correction on CCD data of HJ-1 satellite based on 6S model

    International Nuclear Information System (INIS)

    Xue, Xiaojuan; Meng, Qingyan; Xie, Yong; Sun, Zhangli; Wang, Chang; Zhao, Hang

    2014-01-01

    In this study, the atmospheric radiative transfer model 6S was used to simulate the radiative transfer process along the surface-atmosphere-sensor path. An algorithm based on a look-up table (LUT) built with the 6S model was used to correct the HJ-1 CCD image pixel by pixel. The effect of atmospheric correction on the CCD data of the HJ-1 satellite was then analyzed in terms of the spectral curves and evaluated against the measured reflectance acquired during the HJ-1B satellite overpass; finally, the normalized difference vegetation index (NDVI) before and after atmospheric correction was compared. The results showed: (1) Atmospheric correction of HJ-1 CCD data can reduce the 'increase' effect of the atmosphere. (2) Apparent reflectances are higher than the 6S-corrected surface reflectances in band 1 to band 3, but lower in the near-infrared band; the corrected surface reflectance values agree well with the measured reflectance values. (3) The NDVI increases significantly after atmospheric correction, which indicates that atmospheric correction can highlight the vegetation information
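
    A pixel-by-pixel LUT correction of this kind can be sketched as follows. This is a schematic of the general approach rather than the authors' code: it assumes that three 6S-style coefficients per band (path reflectance, total transmittance, spherical albedo) have already been interpolated from a pre-computed look-up table, inverts the standard top-of-atmosphere reflectance model, and then compares NDVI before and after correction. The coefficient values and band names are placeholders.

```python
import numpy as np

def correct_band(rho_toa, rho_path, transmittance, spherical_albedo):
    """Invert the TOA model rho_toa = rho_p + T*rho_s/(1 - S*rho_s), per pixel."""
    delta = rho_toa - rho_path
    return delta / (transmittance + spherical_albedo * delta)

def ndvi(red, nir):
    """Normalized difference vegetation index from red and near-infrared reflectance."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Hypothetical per-band coefficients (path reflectance, transmittance, spherical albedo)
# interpolated from a 6S look-up table for the scene's aerosol/water-vapour/geometry.
coeffs = {"red": (0.04, 0.82, 0.10), "nir": (0.02, 0.88, 0.08)}

def ndvi_before_after(toa_red, toa_nir):
    surf_red = correct_band(toa_red, *coeffs["red"])
    surf_nir = correct_band(toa_nir, *coeffs["nir"])
    return ndvi(toa_red, toa_nir), ndvi(surf_red, surf_nir)
```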

  12. A study on characteristics of X-ray detector for CCD-based EPID

    International Nuclear Information System (INIS)

    Chung, Yong Hyun

    1999-02-01

    The combination of a metal plate/phosphor screen as an x-ray detector with a CCD camera is the most popular detector system among the various electronic portal imaging devices (EPIDs). There is a need to optimize the thickness of the metal plate/phosphor screen for high detection efficiency and high spatial resolution, for effective transfer of anatomical information. In this study, the dependence of the detection efficiency and the spatial resolution on the thickness of the metal plate/phosphor screen was investigated by calculation and measurement. The result can be used to determine the optimal thickness of the metal plate as well as of the phosphor screen for the x-ray detector design of therapeutic x-ray imaging and for any specific application. The bremsstrahlung spectrum was calculated by Monte Carlo simulation and by the Schiff formula. The detection efficiency was calculated from the total absorbed energy in the phosphor screen using the Monte Carlo simulation, and the light output was measured. The spatial resolution, which was defined from the spatial distribution of the absorbed energy, was also calculated, and the edge spread function was measured. It was found that the detection efficiency and the spatial resolution were mainly determined by the thicknesses of the metal plate and the phosphor screen, respectively. It was also revealed that the detection efficiency and the spatial resolution have a trade-off in terms of the thickness of the phosphor screen: as the phosphor thickness increases, the detection efficiency increases but the spatial resolution decreases. The curve illustrating the trade-off between the detection efficiency and the spatial resolution of the metal plate/phosphor screen detector is obtained as a function of the phosphor thickness. Based on the calculations, a prototype CCD-based EPID was developed and then tested by acquiring phantom images for a 6 MV x-ray beam. While, among the captured images, each frame suffered from quantum noise, the frame averaging

  13. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    Science.gov (United States)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  14. Fast event recorder utilizing a CCD analog shift register

    International Nuclear Information System (INIS)

    Ducar, R.J.; McIntyre, P.M.

    1978-01-01

    A system of electronics has been developed to allow the capture and recording of relatively fast, low-amplitude analog events. The heart of the system is a dual 455-cell analog shift register charge-coupled device, the Fairchild CCD321ADC-3. The CCD is operated in a dual-clock mode. The input is sampled at a selectable clock rate of 0.25-20 MHz. The stored analog data are then clocked out at a slower rate, typically about 0.25 MHz. The time-base expansion of the analog data allows analog-to-digital conversion and memory storage using conventional medium-speed devices. The digital data are sequentially loaded into a static RAM and may then be block-transferred to a computer. The analog electronics are housed in a single-width NIM module, and the RAM memory in a single-width CAMAC module. Each pair of modules provides six parallel channels. Cost is about $200.00 per channel. Applications are described for ionization imaging (TPC, IRC) and long-drift calorimetry in liquid argon
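
    The dual-clock operation amounts to a simple time-base expansion, which the short calculation below illustrates using the figures quoted in the abstract (455 cells, a 20 MHz maximum input clock, and a 0.25 MHz readout clock).

```python
# Time-base expansion of the dual-clock CCD analog shift register (figures from the abstract).
N_CELLS = 455            # cells per analog shift register
F_IN = 20e6              # fastest selectable input sampling clock, Hz
F_OUT = 0.25e6           # readout clock, Hz

window = N_CELLS / F_IN          # analog record length captured at full speed
readout = N_CELLS / F_OUT        # time needed to clock the record out slowly
print(f"captured window : {window * 1e6:.2f} us")       # ~22.75 us
print(f"readout duration: {readout * 1e3:.2f} ms")      # ~1.82 ms
print(f"expansion factor: {F_IN / F_OUT:.0f}x")         # 80x slower, easing ADC requirements
```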

  15. Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Hardware

    Directory of Open Access Journals (Sweden)

    Y.-W. Kang

    2007-12-01

    We designed and developed a multi-purpose CCD camera system for three kinds of CCDs: the KAF-0401E (768 × 512), KAF-1602E (1536 × 1024), and KAF-3200E (2184 × 1472) made by KODAK Co. The system supports a fast USB port as well as a parallel port for data I/O and control signals. The packaging is based on two-stage circuit boards for size reduction and contains a built-in filter wheel. Basic hardware components include the clock-pattern circuit, A/D conversion circuit, CCD data-flow control circuit, and CCD temperature control unit. The CCD temperature can be controlled with an accuracy of approximately 0.4 °C over a maximum temperature range of Δ33 °C. This CCD camera system has a readout noise of 6 e⁻ and a system gain of 5 e⁻/ADU. A total of 10 CCD camera systems were produced, and our tests show that all of them give passable performance.

  16. First tests with fully depleted PN-CCD's

    International Nuclear Information System (INIS)

    Strueder, L.; Lutz, G.; Sterzik, M.; Holl, P.; Kemmer, J.; Prechtel, U.; Ziemann, T.; Rehak, P.

    1987-01-01

    We have fabricated 280 μm thick, fully depletable pn-CCDs on high-resistivity silicon (ρ ∼ 2.5 kΩ cm). Their operation is based on the semiconductor drift chamber principle proposed by Gatti and Rehak. They are designed as energy- and position-sensitive radiation detectors for (minimum) ionizing particles and X-ray imaging. Two-dimensional semiconductor device modeling demonstrates the basic charge-transfer mechanisms. Prototypes of the detectors have been tested under static and dynamic conditions. A preliminary charge-transfer inefficiency was determined to be 6 × 10⁻³. The charge loss during the transfer is discussed and, as a consequence, we have developed an improved design for a second fabrication iteration which is now being produced. 4 refs., 15 figs.

  17. BV photographic and CCD photometry of IC 4651

    International Nuclear Information System (INIS)

    Anthony-Twarog, B.J.; Mukherjee, K.; Twarog, B.A.; Caldwell, N.

    1988-01-01

    A BV photometric survey in IC 4651 based on photographic and CCD material calibrated with photoelectric photometry from Eggen (1971) and Anthony-Twarog and Twarog (1987) has been completed. The color-magnitude diagram is consistent with an age of (2.4 ± 0.3) × 10⁹ yr derived by comparison with the isochrones of VandenBerg (1985) if the apparent distance modulus and reddening derived from uvby photometry in Anthony-Twarog and Twarog (1987) are employed. While evidence is found of a hook in the upper main sequence, no evidence is found of a significantly bifurcated main sequence for this cluster, although it is similar in age to NGC 752 and NGC 3680, where this phenomenon has been noted. Finally, the survey has not resolved the apparent deficit of main-sequence stars fainter than V = 14.5 noted in Anthony-Twarog and Twarog (1987). 16 references

  18. A CCD camera probe for a superconducting cyclotron

    International Nuclear Information System (INIS)

    Marti, F.; Blue, R.; Kuchar, J.; Nolen, J.A.; Sherrill, B.; Yurkon, J.

    1991-01-01

    The traditional internal beam probes in cyclotrons have consisted of a differential element, a wire or thin strip, and a main probe with several fingers to determine the vertical distribution of the beam. The resolution of these probes is limited, especially in the vertical direction. The authors have developed a probe for their K1200 superconducting cyclotron based on a CCD TV camera that works in a 6 T magnetic field. The camera looks at the beam spot on a scintillating screen. The TV image is processed by a frame grabber that digitizes and displays the image in pseudocolor in real time. This probe has much better resolution than traditional probes. They can see beams with total currents as low as 0.1 pA, with position resolution of about 0.05 mm

  19. CCD photometry of the distant young open cluster NGC 7510

    International Nuclear Information System (INIS)

    Sagar, R.; Bonn Univ.; Griffiths, W.K.

    1991-01-01

    CCD observations in B, V and I passbands have been used to generate deep V, (B-V) and V,(V-I) colour-magnitude diagrams for the open cluster NGC 7510. The sample consists of 592 stars reaching down to V=21 mag. There appears to be non-uniform extinction over the face of the cluster with the value of colour excess, E(B-V), ranging from 1.0 to 1.3 mag. The law of interstellar extinction in the direction of the cluster is found to be normal. A broad main sequence is clearly visible in both colour-magnitude diagrams. From the bluest part of the colour-magnitude diagrams, the true distance modulus to the cluster has been estimated as 12.5±0.3 mag and an upper limit of 10 Myr has been assigned for the cluster age. (author)
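
    For reference, a true distance modulus of 12.5 mag corresponds to a linear distance through the standard relation; this is a routine conversion added here for illustration, not part of the original record:

```latex
(m - M)_0 = 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
\quad\Rightarrow\quad
d = 10^{\,[(m-M)_0 + 5]/5}\,\mathrm{pc}
  = 10^{(12.5 + 5)/5}\,\mathrm{pc}
  = 10^{3.5}\,\mathrm{pc} \approx 3.2\,\mathrm{kpc}.
```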

  20. Chromatic Modulator for High Resolution CCD or APS Devices

    Science.gov (United States)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  1. Method to implement the CCD timing generator based on FPGA

    Science.gov (United States)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on an FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which serves as the controller of this generator. Some test results are presented at the end.

  2. DOUBLE STARS IN THE USNO CCD ASTROGRAPHIC CATALOG

    Energy Technology Data Exchange (ETDEWEB)

    Hartkopf, William I.; Mason, Brian D.; Finch, Charlie T.; Zacharias, Norbert; Wycoff, Gary L.; Hsu, Danley, E-mail: wih@usno.navy.mil, E-mail: bdm@usno.navy.mil, E-mail: finch@usno.navy.mil, E-mail: nz@usno.navy.mil [US Naval Observatory, Washington, DC 20392 (United States)

    2013-10-01

    The newly completed Fourth USNO CCD Astrographic Catalog (UCAC4) has proven to be a rich source of double star astrometry and photometry. Following initial comparisons of UCAC4 results against those obtained by speckle interferometry, the UCAC4 catalog was matched against known double stars in the Washington Double Star Catalog in order to provide additional differential astrometry and photometry for these pairs. Matches to 58,131 pairs yielded 61,895 astrometric and 68,935 photometric measurements. Finally, a search for possible new common proper motion (CPM) pairs was made using new UCAC4 proper motion data; this resulted in 4755 new potential CPM doubles (and an additional 27,718 astrometric and photometric measures from UCAC and other sources)

  3. THE THIRD US NAVAL OBSERVATORY CCD ASTROGRAPH CATALOG (UCAC3)

    International Nuclear Information System (INIS)

    Zacharias, N.; Finch, C.; Wycoff, G.; Zacharias, M. I.; Corbin, T.; Dutta, S.; Gaume, R.; Gauss, S.; Hall, D.; Hartkopf, W.; Hsu, D.; Holdenried, E.; Makarov, V.; Mason, B.; Girard, T.; Hambly, N.; Castillo, D.; DiVittorio, M.; Germain, M.; Martines, M.

    2010-01-01

    The third US Naval Observatory (USNO) CCD Astrograph Catalog, UCAC3, was released at the IAU General Assembly on 2009 August 10. It is the first all-sky release in this series and contains just over 100 million objects, about 95 million of them with proper motions, covering about R = 8-16 mag. Current epoch positions are obtained from the observations with the 20 cm aperture USNO Astrograph's 'red lens', equipped with a 4k x 4k CCD. Proper motions are derived by combining these observations with over 140 ground- and space-based catalogs, including Hipparcos/Tycho and the AC2000.2, as well as unpublished measures of over 5000 plates from other astrographs. For most of the faint stars in the southern hemisphere, the Yale/San Juan first epoch plates from the Southern Proper Motion (SPM) program (YSJ1) form the basis for proper motions. These data are supplemented by all-sky Schmidt plate survey astrometry and photometry obtained from the SuperCOSMOS project, as well as 2MASS near-IR photometry. Major differences of UCAC3 data as compared with UCAC2 include a completely new raw data reduction with improved control over systematic errors in positions, significantly improved photometry, slightly deeper limiting magnitude, coverage of the north pole region, greater completeness by inclusion of double stars, and weak detections. This of course leads to a catalog which is not as 'clean' as UCAC2, and problem areas are outlined for the user in this paper. The positional accuracy of stars in UCAC3 is about 15-100 mas per coordinate, depending on magnitude, while the errors in proper motions range from 1 to 10 mas yr⁻¹ depending on magnitude and observing history, with a significant improvement over UCAC2 achieved due to the re-reduced SPM data and inclusion of more astrograph plate data unavailable at the time of UCAC2.

  4. Contribution of the Chromosomal ccdAB Operon to Bacterial Drug Tolerance.

    Science.gov (United States)

    Gupta, Kritika; Tripathi, Arti; Sahu, Alishan; Varadarajan, Raghavan

    2017-10-01

    One of the first identified and best-studied toxin-antitoxin (TA) systems in Escherichia coli is the F-plasmid-based CcdAB system. This system is involved in plasmid maintenance through postsegregational killing. More recently, ccdAB homologs have been found on the chromosome, including in pathogenic strains of E. coli and other bacteria. However, the functional role of chromosomal ccdAB genes, if any, has remained unclear. We show that both the native ccd operon of the E. coli O157 strain (ccd-O157) and the ccd operon from the F plasmid (ccd-F), when inserted on the E. coli chromosome, lead to protection from cell death under multiple antibiotic stress conditions through formation of persisters, with the O157 operon showing higher protection. While the plasmid-encoded CcdB toxin is a potent gyrase inhibitor and leads to bacterial cell death even under fully repressed conditions, the chromosomally encoded toxin leads to growth inhibition, except at high expression levels, where some cell death is seen. This was further confirmed by transiently activating the chromosomal ccd operon through overexpression of an active-site-inactive mutant of F-plasmid-encoded CcdB. Both the ccd-F and ccd-O157 operons may share common mechanisms for activation under stress conditions, eventually leading to multidrug-tolerant persister cells. This study clearly demonstrates an important role for chromosomal ccd systems in bacterial persistence. IMPORTANCE A large number of free-living and pathogenic bacteria are known to harbor multiple toxin-antitoxin systems, on plasmids as well as on chromosomes. The F-plasmid CcdAB system has been extensively studied and is known to be involved in plasmid maintenance. However, little is known about the function of its chromosomal counterpart, found in several pathogenic E. coli strains. We show that the native chromosomal ccd operon of the E. coli O157 strain is involved in drug tolerance and confers protection from cell death under multiple

  5. GNC Architecture Design for ARES Simulation. Revision 3.0

    Science.gov (United States)

    Gay, Robert

    2006-01-01

    The purpose of this document is to describe the GNC architecture and associated interfaces for all ARES simulations. Establishing a common architecture facilitates development across the ARES simulations and provides an efficient mechanism for creating an end-to-end simulation capability. In general, the GNC architecture is the framework in which all GNC development takes place, including sensor and effector models. All GNC software applications have a standard location within the architecture, making integration easier and thus more efficient.

  6. Converting structures to optimize the Synchrotron X radiation detection by CCD systems

    International Nuclear Information System (INIS)

    Zanella, G.; Zannoni, R.

    1987-01-01

    It is pointed out how the quantum efficiency of X-ray detection in CCD-based detection systems can be improved by extending their sensitivity range with heavy-element converting structures. In this way the problem of fabricating CCDs with a deep depletion layer is avoided

  7. Design of offline measuring system for radiation damage effects on linear CCD

    International Nuclear Information System (INIS)

    Zhang Yong; Tang Benqi; Xiao Zhigang; Wang Zujun; Huang Fang; Huang Shaoyan

    2004-01-01

    The paper discusses the hardware design of an offline measuring system for radiation damage effects on linear CCDs. Credible results were achieved using this system. The test results indicate that the system is suitable for studying radiation damage effects on linear CCDs. (authors)

  8. High-Voltage Clock Driver for Photon-Counting CCD Characterization

    Science.gov (United States)

    Baker, Robert

    2013-01-01

    A document discusses the CCD97 from e2v technologies as it is being evaluated at Goddard Space Flight Center's Detector Characterization Laboratory (DCL) for possible use in ultra-low-background-noise space astronomy applications, such as the Terrestrial Planet Finder Coronagraph (TPF-C). The CCD97 includes a photon-counting mode where the equivalent output noise is less than one electron. Use of this mode requires a clock signal at a voltage level greater than the level achievable by the existing CCD (charge-coupled device) electronics. A high-voltage waveform generator has been developed in code 660/601 to support the CCD97 evaluation. The unit generates the required clock waveforms at voltage levels from -20 to +50 V. It handles standard and arbitrary waveforms and supports pixel rates from 50 to 500 kHz. The system is designed to interface with existing Leach CCD electronics.

  9. Flight code validation simulator

    Science.gov (United States)

    Sims, Brent A.

    1996-05-01

    An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.

  10. Simulations

    CERN Document Server

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  11. Upgrade of ESO's FIERA CCD Controller and PULPO Subsystem

    Science.gov (United States)

    Reyes-Moreno, J.; Geimer, C.; Balestra, A.; Haddad, N.

    An overview of FIERA is presented with emphasis on its recent upgrade to PCI. The PCI board hosts two DSPs, one for real time control of the camera and another for on-the-fly processing of the incoming video data. In addition, the board is able to make DMA transfers, to synchronize to other boards alike, to be synchronized by a TIM bus and to control PULPO via RS232. The design is based on the IOP480 chip from PLX, for which we have developed a device driver for both Solaris and Linux. One computer is able to host more than one board and therefore can control an array of FIERA detector electronics. PULPO is a multifunctional subsystem widely used at ESO for the housekeeping of CCD cryostat heads and for shutter control. The upgrade of PULPO is based on an embedded PC running Linux. The upgraded PULPO is able to handle 29 temperature sensors, control 8 heaters and one shutter, read out one vacuum sensor and log any combination of parameters.

  12. BVI CCD photometry of the globular cluster M4

    International Nuclear Information System (INIS)

    Alcaino, G.; Liller, W.; Alvarado, F.

    1988-01-01

    CCD BVI main-sequence (MS) photometry of M4, the globular cluster closest to the Sun, is presented. The photometry is matched to the BVI isochrones of VandenBerg and Bell (1985). The MS turnoffs are found to be at V = 16.90 ± 0.05, B-V = 0.81 ± 0.02, V-I = 0.96 ± 0.02, and B-I = 1.77 ± 0.02. The magnitude difference between the MS turnoff and the horizontal branch is ΔM_V = 3.52 ± 0.1 for all three color indices. Using Y = 0.2, [Fe/H] = -1.27, and α = 1.65, with a distance modulus of (m-M)_V = 12.7 and E(B-V) = 0.41, a consistent age for M4 of 17 ± 1.5 Gyr is deduced in all three color indices. 34 references

  13. Deep CCD photometry in globular clusters. VII. M30

    International Nuclear Information System (INIS)

    Richer, H.B.; Fahlman, G.G.; Vandenberg, D.A.

    1988-01-01

    New UBV CCD photometry in a single field of the globular cluster M30 was obtained, and the data were used to derive the color-magnitude diagram (CMD) of the cluster and its luminosity function, and to determine fundamental cluster parameters. No blue stragglers were found, nor any evidence of a binary sequence in the data, even though the field under study is only 21 core radii from the cluster center. The cluster reddening is found to be 0.068 ± 0.035, significantly higher than that adopted in most current papers on M30. An intercomparison of the CMDs of three very metal-poor clusters clearly shows that there is no evidence for any age difference between them. The age of M30 itself is found to be about 14 Gyr. The luminosity function of M30 is determined down to M_V = 8. Comparison of this function with one found by Bolte (1987) at 65 core radii shows clear evidence of mass segregation among the low-mass stars. 44 references

  14. Deep CCD photometry in globular clusters III. M15

    International Nuclear Information System (INIS)

    Fahlman, G.G.; Richer, H.B.; Vandenberg, D.A.

    1985-01-01

    CCD photometry in U, B, and V is presented for a 5' x 3' field in the globular cluster M15. The location of the main sequence in the color-magnitude diagram is found to be significantly bluer than previous studies have indicated. The luminosity function of the cluster is studied down to V = 22.8 (M ≈ 7.5) and shown to be consistent with a power-law mass function, n(M) = Q M^(-α) with α = 2.5 ± 1.0, to the limit of our data. The field-star population brighter than V = 21.5 is examined in some detail. There appear to be about 50% more stars belonging to the disk in the field as compared with the Bahcall-Soneira standard galaxy model. The reddening to the cluster is found to be E(B-V) = 0.11 ± 0.04 from nine bright field stars. A new value for the ultraviolet excess of the cluster main-sequence stars is obtained, δ(0.6) = 0.25 ± 0.02, which confirms the well-known fact that M15 is among the most metal-poor of the globular clusters

  15. LAMOST CCD camera-control system based on RTS2

    Science.gov (United States)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
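
    The master-slave virtualization described above can be pictured with a toy sketch. This is not RTS2 or LAMOST code; under assumed class and method names, it simply shows how a single virtual camera device might fan one exposure command out to 32 per-CCD workers and aggregate their status.

```python
from concurrent.futures import ThreadPoolExecutor

class SlaveCamera:
    """Stand-in for one of the 32 real CCD controllers (hypothetical interface)."""
    def __init__(self, ccd_id):
        self.ccd_id = ccd_id
    def expose(self, seconds):
        # A real slave would trigger its controller and read the frame back.
        return {"ccd": self.ccd_id, "status": "ok", "exptime": seconds}

class VirtualCamera:
    """Master device presented to the observatory control system as a single camera."""
    def __init__(self, n_ccds=32):
        self.slaves = [SlaveCamera(i) for i in range(n_ccds)]
    def expose(self, seconds):
        # Fan the command out in parallel and collect per-CCD results.
        with ThreadPoolExecutor(max_workers=len(self.slaves)) as pool:
            results = list(pool.map(lambda s: s.expose(seconds), self.slaves))
        failed = [r["ccd"] for r in results if r["status"] != "ok"]
        return {"ok": not failed, "failed_ccds": failed, "frames": results}

if __name__ == "__main__":
    print(VirtualCamera().expose(30.0)["ok"])
```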

  16. CCD imaging technology and the war on crime

    Science.gov (United States)

    McNeill, Glenn E.

    1992-08-01

    Linear-array-based CCD technology has been successfully used in the development of an Automatic Currency Reader/Comparator (ACR/C) system. The ACR/C system is designed to provide a method for tracking US currency in the organized-crime and drug-trafficking environments where large amounts of cash are involved in illegal transactions and money-laundering activities. United States currency notes can be uniquely identified by the combination of the denomination, serial number, and series year. The ACR/C system processes notes at five notes per second using a custom transport, a stationary linear array, and optical character recognition (OCR) techniques to make such identifications. In this way large sums of money can be "marked" (using the system to read and store their identifiers) and then circulated within various crime networks. The system can later be used to read and compare confiscated notes to the known sets of identifiers from the "marked" set to document a trail of criminal activities. With the ACR/C, law enforcement agencies can efficiently identify currency without actually marking it. This provides an undetectable means for making each note individually traceable and facilitates record keeping for providing evidence in a court of law. In addition, when multiple systems are used in conjunction with a central database, the system can be used to track currency geographically.

  17. Deep CCD survey - galaxy luminosity and color evolution

    International Nuclear Information System (INIS)

    Tyson, J.A.

    1988-01-01

    Imaging and photometric observations of a statistically complete sample of galaxies in 12 high-latitude fields, obtained in the BJ (360-520 nm), R (580-720 nm) and I (780-1100 nm) bands using CCD detectors on the 4-m telescopes at CTIO and KPNO, are reported. The data are presented in extensive graphs and sample images and analyzed in detail with reference to theoretical models of galactic origin and evolution. The galaxy number-count slopes, d(log N)/dm, are found to be sub-Euclidean, varying from 0.34 in the I band to 0.45 in the BJ band, where the corrected counts appear to saturate at about 27 mag. No-evolution models are shown to underpredict the counts at 25 mag (BJ) by a factor of 5-15 and the extragalactic background light from all galaxies (6.8 × 10⁻⁶ erg cm⁻² s⁻¹ sr⁻¹ µm⁻¹ at 450 nm) by a factor greater than 2. 114 references

  18. CCD Parallaxes for 309 Late-type Dwarfs and Subdwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Dahn, Conard C.; Harris, Hugh C.; Subasavage, John P.; Ables, Harold D.; Guetter, Harry H.; Harris, Fred H.; Luginbuhl, Christian B.; Monet, Alice B.; Monet, David G.; Munn, Jeffrey A.; Pier, Jeffrey R.; Stone, Ronald C.; Vrba, Frederick J.; Walker, Richard L.; Tilleman, Trudy M. [US Naval Observatory, Flagstaff Station, 10391 W. Naval Observatory Road, Flagstaff, AZ 86005-8521 (United States); Canzian, Blaise J. [L-3 Communications/Brashear, 615 Epsilon Drive, Pittsburgh, PA 15238-2807 (United States); Henden, Arne H. [AAVSO, Cambridge, MA 02138 (United States); Leggett, S. K. [Gemini Observatory, Northern Operations Center, 670 N. A’ohoku Place, Hilo, HI 96720 (United States); Levine, Stephen E., E-mail: jsubasavage@nofs.navy.mil [Lowell Observatory, 1400 W. Mars Hill Road, Flagstaff, AZ 86001-4499 (United States)

    2017-10-01

    New, updated, and/or revised CCD parallaxes determined with the Strand Astrometric Reflector at the Naval Observatory Flagstaff Station are presented. Included are results for 309 late-type dwarf and subdwarf stars observed over the 30+ years that the program operated. For 124 of the stars, parallax determinations from other investigators have already appeared in the literature and we compare the different results. Also included here are new or updated VI photometry on the Johnson–Kron-Cousins system for all but a few of the faintest targets. Together with 2MASS JHK_s near-infrared photometry, a sample of absolute magnitude versus color and color versus color diagrams are constructed. Because large proper motion was a prime criterion for targeting the stars, the majority turn out to be either M-type subdwarfs or late M-type dwarfs. The sample also includes 50 dwarf or subdwarf L-type stars, and four T dwarfs. Possible halo subdwarfs are identified in the sample based on tangential velocity, subluminosity, and spectral type. Residuals from the solutions for parallax and proper motion for several stars show evidence of astrometric perturbations.

  19. Solution structure and elevator mechanism of the membrane electron transporter CcdA.

    Science.gov (United States)

    Zhou, Yunpeng; Bushweller, John H

    2018-02-01

    Membrane oxidoreductase CcdA plays a central role in supplying reducing equivalents from the bacterial cytoplasm to the envelope. It transports electrons across the membrane using a single pair of cysteines by a mechanism that has not yet been elucidated. Here we report an NMR structure of the Thermus thermophilus CcdA (TtCcdA) in an oxidized and outward-facing state. CcdA consists of two inverted structural repeats of three transmembrane helices (2 × 3-TM). We computationally modeled and experimentally validated an inward-facing state, which suggests that CcdA uses an elevator-type movement to shuttle the reactive cysteines across the membrane. CcdA belongs to the LysE superfamily, and thus its structure may be relevant to other LysE clan transporters. Structure comparisons of CcdA, semiSWEET, Pnu, and major facilitator superfamily (MFS) transporters provide insights into membrane transporter architecture and mechanism.

  20. Laser jamming experiment of varifocal colour CCD imaging system

    Institute of Scientific and Technical Information of China (English)

    汤伟; 王锐; 王挺峰; 郭劲

    2017-01-01

    A field laser-jamming experiment on a varifocal (17-187 mm) colour CCD imaging system irradiated by a 750 nm semiconductor laser was carried out, and the jamming effects were measured at different focal lengths. A laser-jamming model was established, and the experimental results were verified and analyzed theoretically. Theory and experiment both show that the jamming effect of the 750 nm laser on the colour CCD imaging system is pronounced, with obvious light saturation and crosstalk appearing on the CCD surface. Under identical laser irradiation conditions, less laser light is truncated by the aperture as the focal length f increases, so the laser power density reaching the target is higher and the saturated area on the CCD surface grows. When the focal length is 17 mm, the saturated area on the CCD surface is 0.33 mm x 0.29 mm, while at a focal length of 120 mm it is 1.8 mm x 1.2 mm. The simulation results agree with the experimental results, confirming the validity of the jamming model. These conclusions provide a reference for the practical application of colour CCD devices.

  1. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing
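
    A simplified noise model of the kind discussed, combining shot noise, dark current, read noise, and ADC quantization, is sketched below. It is a generic illustration rather than the model developed in the paper, it omits demosaicing, and the gain, read noise, dark current, quantum efficiency, and bit depth are assumed example values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ccd_pixel_values(photons, exposure_s, gain_e_per_adu=2.0,
                              read_noise_e=8.0, dark_e_per_s=0.1, bits=12, qe=0.5):
    """Generate noisy digital numbers for an array of mean photon counts.

    Shot noise (Poisson), dark current, Gaussian read noise and ADC quantization are
    included; all parameter values are assumed for illustration only.
    """
    signal_e = rng.poisson(qe * photons + dark_e_per_s * exposure_s).astype(float)
    signal_e += rng.normal(0.0, read_noise_e, size=signal_e.shape)
    adu = np.clip(np.round(signal_e / gain_e_per_adu), 0, 2 ** bits - 1)
    return adu

# Example: estimate the total temporal noise (in ADU) at a mid-range signal level.
frames = np.stack([simulate_ccd_pixel_values(np.full((64, 64), 2000.0), 0.04)
                   for _ in range(50)])
print("temporal noise (ADU):", frames.std(axis=0).mean())
```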

  2. Discrimination of citrus varieties using CCD/CBERS-2 satellite imagery

    Directory of Open Access Journals (Sweden)

    Ieda Del'Arco Sanches

    2008-02-01

    This paper aimed at evaluating the possibility of discriminating citrus varieties in CCD imagery from the CBERS-2 ("China-Brazil Earth Resources Satellite") satellite. The study area is located in Itirapina, São Paulo State. For this study, three CCD images from 2004 were used (May 30, August 16, and September 11). In order to better understand and explain the variability of the spectral behavior of the citrus areas in orbital images such as CCD/CBERS-2, a model that integrates the elements of the sensed citrus scene is proposed and discussed. The images were classified with the Isoseg and MaxVer classifiers. According to the kappa index, it is possible to obtain classifications qualified as 'very good', and the best results were obtained with images from the dry season.

  3. BVRI CCD photometry of the globular cluster NGC 6362

    International Nuclear Information System (INIS)

    Alcaino, G.; Liller, W.

    1986-01-01

    We have obtained 78 BVRI CCD frames with the 1.54 m Danish telescope at ESO, La Silla, and have constructed V vs B-V, V vs V-R, V vs R-I, V vs V-I, and V vs B-I color-magnitude diagrams in a 4' x 2.5' field of the globular cluster NGC 6362. From these five CMDs we find that the main-sequence turnoffs are all close to the same magnitude, namely V_TO = 18.75 ± 0.1, and the color turnoffs at B-V = 0.50 ± 0.02, V-R = 0.31 ± 0.02, R-I = 0.35 ± 0.02, V-I = 0.68 ± 0.02, and B-I = 1.18 ± 0.03. The magnitude difference between the turnoff and the horizontal branch for the five diagrams is ΔM_V = 3.40 ± 0.15, in excellent agreement with the value given by Sandage (1982). Using Y = 0.2, Z = 0.001 ([Fe/H] = -1.27), α = 1.65, a distance modulus of (m-M)_V = 14.74, and E(B-V) = 0.10, we find that the VandenBerg and Bell isochrones (1985) yield a consistent age for NGC 6362 in all color indices of (16 ± 1.5) × 10⁹ yr. The solar distance to the cluster is 7.7 kpc and the galactic distance is 5.6 kpc assuming R₀ = 9 kpc

  4. Background study for the pn-CCD detector of CERN Axion Solar Telescope

    CERN Document Server

    Cebrián, S; Kuster, M.; Beltran, B.; Gomez, H.; Hartmann, R.; Irastorza, I. G.; Kotthaus, R.; Luzon, G.; Morales, J.; Ruz, J.; Struder, L.; Villar, J. A.

    2007-01-01

    The CERN Axion Solar Telescope (CAST) experiment searches for axions from the Sun converted into photons with energies up to around 10 keV via the inverse Primakoff effect in the high magnetic field of a superconducting Large Hadron Collider (LHC) prototype magnet. A backside-illuminated pn-CCD detector in conjunction with an X-ray mirror optics is one of the three detectors used in CAST to register the expected photon signal. Since this signal is very rare and is entangled with different background components (environmental gamma radiation, cosmic rays, intrinsic radioactive impurities in the set-up, ...), a detailed study of the detector background has been undertaken with the aim of understanding and further reducing the background level of the detector. The analysis is based on measured data taken during Phase I of CAST and on Monte Carlo simulations of different background components. This study will show that the observed background level (at a rate of (8.00 ± 0.07) × 10⁻⁵ counts/cm²/s/keV between 1 and 7 keV) s...

  5. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    Science.gov (United States)

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, the presence of optical vignetting and natural vignetting causes photometric distortion, in which the intensity falls off away from the image center; this severely affects subsequent processing and measurement precision. To deal with this problem, an easy and straightforward method for photometric distortion correction is presented in this paper. The method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to obtain the model parameters by minimizing an eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile of the photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform-luminance area source as a standard reference or for the relevant optical and geometric parameters in advance. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated in this paper. Moreover, an application example of temperature field correction for casting billets demonstrates the effectiveness of the method. The experimental results show that the proposed method achieves a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
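
    A stripped-down version of the idea, fitting a low-order radial vignetting model by minimizing an eight-neighborhood gray-gradient cost over a single image, is sketched below. It is only a schematic: the paper uses a particle swarm optimizer, whereas this sketch falls back on SciPy's Nelder-Mead for brevity, and the polynomial form, its order, and the parameter bounds are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def vignette_model(shape, params):
    """Radial polynomial falloff V(r) = 1 + a1*r^2 + a2*r^4 + a3*r^6 (an assumed form)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    r2 = ((x - w / 2.0) ** 2 + (y - h / 2.0) ** 2) / ((w / 2.0) ** 2 + (h / 2.0) ** 2)
    a1, a2, a3 = params
    return 1.0 + a1 * r2 + a2 * r2 ** 2 + a3 * r2 ** 3

def gradient_cost(params, image):
    """Sum of absolute 8-neighborhood gray gradients of the vignetting-corrected image."""
    falloff = vignette_model(image.shape, params)
    if falloff.min() <= 0.05 or falloff.max() > 1.0:
        return 1e18                       # vignetting should only attenuate (0 < V <= 1)
    corrected = image / falloff
    cost = 0.0
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]:
        cost += np.abs(corrected - np.roll(np.roll(corrected, dy, axis=0), dx, axis=1)).sum()
    return cost

def correct_vignetting(image):
    """Estimate the falloff parameters from a single image and return the flattened image."""
    res = minimize(gradient_cost, x0=[-0.1, 0.0, 0.0], args=(image.astype(float),),
                   method="Nelder-Mead")
    return image / vignette_model(image.shape, res.x), res.x
```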

  6. Measuring the Flatness of Focal Plane for Very Large Mosaic CCD Camera

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; Estrada, Juan; Cease, Herman; Diehl, H.Thomas; Flaugher, Brenna L.; Kubik, Donna; Kuk, Keivin; Kuropatkine, Nickolai; Lin, Huan; Montes, Jorge; Scarpine, Vic; /Fermilab

    2010-06-08

    Large mosaic multi-CCD cameras are the key instruments for modern digital sky surveys. DECam is an extremely red-sensitive 520 megapixel camera designed for the upcoming Dark Energy Survey (DES). It consists of sixty-two 4k x 2k and twelve 2k x 2k 250-micron thick fully-depleted CCDs, with a focal plane of 44 cm in diameter and a field of view of 2.2 square degrees. It will be attached to the Blanco 4-meter telescope at CTIO. The DES will cover 5000 square degrees of the southern galactic cap in 5 color bands (g, r, i, z, Y) over 5 years starting from 2011. To achieve the science goal of constraining the Dark Energy evolution, stringent requirements are laid down for the design of DECam. Among them, the flatness of the focal plane needs to be controlled within a 60-micron envelope in order to achieve the specified PSF variation limit. It is very challenging to measure the flatness of the focal plane to such precision when it is placed in a high-vacuum dewar at 173 K. We developed two image-based techniques to measure the flatness of the focal plane. By imaging a regular grid of dots on the focal plane, the CCD offset along the optical axis is converted into a variation of the grid spacings at different positions on the focal plane. After extracting the patterns and comparing the change in spacings, we can measure the flatness to high precision. In method 1, the regular dots are held to sub-micron precision and cover the whole focal plane. In method 2, no high precision is required for the grid; instead, a precise XY stage moves the pattern across the whole focal plane, and the variations of the spacing are compared when it is imaged by different CCDs. Simulation and real measurements show that the two methods work very well for our purpose and are in good agreement with direct optical measurements.
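    The conversion from spacing change to height offset is not spelled out in the abstract; a minimal sketch of the underlying similar-triangles idea, assuming the dot pattern is projected from an effective distance L so that a small axial shift dz rescales the locally imaged spacing by roughly (1 + dz/L), is given below (the distance and numbers are illustrative, not DECam values):

      # Illustrative conversion of grid-spacing variation to focal-plane height offset.
      # Assumes simple projective scaling s_measured ~ s_reference * (1 + dz / L_eff);
      # this is not the authors' exact optical model.
      def height_offset_um(s_measured, s_reference, L_eff_mm=1000.0):
          """Inferred offset along the optical axis (micron) of a CCD region whose imaged
          dot spacing is s_measured, relative to a region with spacing s_reference."""
          dz_mm = L_eff_mm * (s_measured / s_reference - 1.0)
          return dz_mm * 1000.0

      # A 0.005% increase in spacing with a hypothetical 1 m effective projection distance
      # corresponds to ~50 micron of offset, i.e. comparable to the 60-micron envelope quoted above.
      print(height_offset_um(10.0005, 10.0))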

  7. A tilted fiber-optic plate coupled CCD detector for high resolution neutron imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jongyul; Cho, Gyuseong [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Jongyul; Hwy, Limchang; Kim, Taejoo; Lee, Kyehong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Seungwook [Pusan National Univ., Pusan (Korea, Republic of)

    2013-05-15

    Neutron imaging has been used for fuel cell studies, lithium-ion battery studies, and many other scientific applications. High-quality neutron imaging is demanded for more detailed studies, and spatial resolution should be considered to obtain it; there have therefore been many efforts to improve spatial resolution. One of these efforts used a tilted scintillator geometry and a lens-coupled CCD detector in a neutron imaging system to improve spatial resolution in one dimension, and the increased one-dimensional resolution was applied to fuel cell studies. However, a lens-coupled CCD detector has lower sensitivity than a fiber-optic plate coupled CCD detector due to light loss. In this research, a tilted detector using a fiber-optic plate coupled CCD was developed to improve both resolution and sensitivity. In addition, a tilted detector can protect the image sensor from direct radiation damage.

  8. Evaluation of the Accuracy of the Dark Frame Subtraction Method in CCD Image Processing

    National Research Council Canada - National Science Library

    Levesque, Martin P; Lelievre, Mario

    2007-01-01

    .... This method is frequently used for removing the image background gradient (a thermal artefact) in CCD images. This report demonstrates that this method may not be suitable for the detection of objects with very low signal-to-noise ratio...

  9. Researchers develop CCD image sensor with 20ns per row parallel readout time

    CERN Multimedia

    Bush, S

    2004-01-01

    "Scientists at the Rutherford Appleton Laboratory (RAL) in Oxfordshire have developed what they claim is the fastest CCD (charge-coupled device) image sensor, with a readout time which is 20ns per row" (1/2 page)

  10. The interaction of DNA gyrase with the bacterial toxin CcdB

    DEFF Research Database (Denmark)

    Kampranis, S C; Howells, A J; Maxwell, A

    1999-01-01

    CcdB is a bacterial toxin that targets DNA gyrase. Analysis of the interaction of CcdB with gyrase reveals two distinct complexes. An initial complex (alpha) is formed by direct interaction between GyrA and CcdB; this complex can be detected by affinity column and gel-shift analysis, and has...... of this initial complex with ATP in the presence of GyrB and DNA slowly converts it to a second complex (beta), which has a lower rate of ATP hydrolysis and is unable to catalyse supercoiling. The efficiency of formation of this inactive complex is dependent on the concentrations of ATP and CcdB. We suggest...

  11. Performance of an area variable MOS varicap weighted programmable CCD transversal filter

    OpenAIRE

    Bhattacharyya, A.B.; Shankarnarayan, L.; Kapur, N.; Wallinga, Hans

    1981-01-01

    The performance of an electrically programmable CCD transversal filter (PTF) is presented in which tap-weight multiplication is performed by a novel and compact on-chip voltage-controlled area-variable MOS varicap.

  12. CCD-based X-ray detectors for X-ray diffraction studies

    International Nuclear Information System (INIS)

    Ito, K.; Amemiya, Y.

    1999-01-01

    CCD-based X-ray detectors are increasingly used for X-ray diffraction studies, especially where real-time (automated) measurements and time-resolved measurements are required. Principles and designs of two typical types of CCD-based detectors are described: one is the system in which X-ray image intensifiers are coupled to maximize the detective quantum efficiency for time-resolved measurements, and the other is the system in which tapered optical fibers are coupled for demagnification of the image onto the CCD, optimized for automated measurements in protein crystallography. These CCD-based X-ray detectors have image distortion and non-uniformity of response that must be corrected by software. Correction schemes which we have developed are also described. (author)

  13. Software design of control system of CCD side-scatter lidar

    Science.gov (United States)

    Kuang, Zhiqiang; Liu, Dong; Deng, Qian; Zhang, Zhanye; Wang, Zhenzhu; Yu, Siqi; Tao, Zongming; Xie, Chenbo; Wang, Yingjian

    2018-03-01

    Because of the existence of the blind zone and transition zone, the application of backscattering lidar near the ground is limited. A side-scatter lidar equipped with a charge-coupled device (CCD) can separate the transmitting and receiving devices to avoid the impact of the geometric form factor that exists in backscattering lidar, and can detect near-ground aerosol signals more precisely and continuously. The theory of CCD side-scatter lidar and the design of its control system are introduced. Visualized control of the laser and CCD and an automatic data processing method for the side-scatter lidar are developed using Visual C#. The results, compared with calibrated atmospheric aerosol lidar data, show that the signals from the CCD side-scatter lidar are credible.
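    Purely for orientation, the bistatic geometry that lets each CCD pixel's viewing angle be mapped to an altitude along the vertical laser beam can be sketched as follows; the baseline value is a hypothetical placeholder, not one from the paper:

      import numpy as np

      def altitude_from_pixel_angle(theta_rad, baseline_m):
          """Map the elevation angle of a CCD pixel's line of sight to the altitude (m) at which it
          intersects a vertical laser beam a horizontal distance baseline_m away (simple bistatic geometry)."""
          return baseline_m * np.tan(theta_rad)

      # Example: with a hypothetical 200 m baseline, pixels looking at 30-80 degrees elevation
      # sample roughly 115 m to 1134 m along the beam.
      angles = np.deg2rad(np.linspace(30, 80, 6))
      print(altitude_from_pixel_angle(angles, 200.0))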

  14. Colony Collapse Disorder (CCD) and bee age impact honey bee pathophysiology.

    Science.gov (United States)

    vanEngelsdorp, Dennis; Traynor, Kirsten S; Andree, Michael; Lichtenberg, Elinor M; Chen, Yanping; Saegerman, Claude; Cox-Foster, Diana L

    2017-01-01

    Honey bee (Apis mellifera) colonies continue to experience high annual losses that remain poorly explained. Numerous interacting factors have been linked to colony declines. Understanding the pathways linking pathophysiology with symptoms is an important step in understanding the mechanisms of disease. In this study we examined the specific pathologies associated with honey bees collected from colonies suffering from Colony Collapse Disorder (CCD) and compared these with bees collected from apparently healthy colonies. We identified a set of pathological physical characteristics that occurred at different rates in CCD diagnosed colonies prior to their collapse: rectum distension, Malpighian tubule iridescence, fecal matter consistency, rectal enteroliths (hard concretions), and venom sac color. The multiple differences in rectum symptomology in bees from CCD apiaries and colonies suggest affected bees had trouble regulating water. To ensure that pathologies we found associated with CCD were indeed pathologies and not due to normal changes in physical appearances that occur as an adult bee ages (CCD colonies are assumed to be composed mostly of young bees), we documented the changes in bees of different ages taken from healthy colonies. We found that young bees had much greater incidences of white nodules than older cohorts. Prevalent in newly-emerged bees, these white nodules or cellular encapsulations indicate an active immune response. Comparing the two sets of characteristics, we determined a subset of pathologies that reliably predict CCD status rather than bee age (fecal matter consistency, rectal distension size, rectal enteroliths and Malpighian tubule iridescence) and that may serve as biomarkers for colony health. In addition, these pathologies suggest that CCD bees are experiencing disrupted excretory physiology. Our identification of these symptoms is an important first step in understanding the physiological pathways that underlie CCD and factors

  15. Discussion on the fusing methods for HR and CCD images of CBERS

    International Nuclear Information System (INIS)

    Gao Zhangsheng; Zhao Yingjun

    2010-01-01

    CBERS-02B multi-spectral CCD data differ from HR panchromatic data in resolution, which causes difficulty in image fusion. Using the methods of pan-sharpening, HPF, Brovey transform, IHS transform, principal component transform, Gram-Schmidt (GS) transform and wavelet transform, the authors have tested fusion of the CCD data and HR data of CBERS, and the fusion results are discussed and evaluated qualitatively and quantitatively. (authors)

  16. Colony Collapse Disorder (CCD and bee age impact honey bee pathophysiology.

    Directory of Open Access Journals (Sweden)

    Dennis vanEngelsdorp

    Full Text Available Honey bee (Apis mellifera) colonies continue to experience high annual losses that remain poorly explained. Numerous interacting factors have been linked to colony declines. Understanding the pathways linking pathophysiology with symptoms is an important step in understanding the mechanisms of disease. In this study we examined the specific pathologies associated with honey bees collected from colonies suffering from Colony Collapse Disorder (CCD) and compared these with bees collected from apparently healthy colonies. We identified a set of pathological physical characteristics that occurred at different rates in CCD diagnosed colonies prior to their collapse: rectum distension, Malpighian tubule iridescence, fecal matter consistency, rectal enteroliths (hard concretions), and venom sac color. The multiple differences in rectum symptomology in bees from CCD apiaries and colonies suggest affected bees had trouble regulating water. To ensure that pathologies we found associated with CCD were indeed pathologies and not due to normal changes in physical appearances that occur as an adult bee ages (CCD colonies are assumed to be composed mostly of young bees), we documented the changes in bees of different ages taken from healthy colonies. We found that young bees had much greater incidences of white nodules than older cohorts. Prevalent in newly-emerged bees, these white nodules or cellular encapsulations indicate an active immune response. Comparing the two sets of characteristics, we determined a subset of pathologies that reliably predict CCD status rather than bee age (fecal matter consistency, rectal distension size, rectal enteroliths and Malpighian tubule iridescence) and that may serve as biomarkers for colony health. In addition, these pathologies suggest that CCD bees are experiencing disrupted excretory physiology. Our identification of these symptoms is an important first step in understanding the physiological pathways that underlie CCD and

  17. A FORTRAN version implementation of block adjustment of CCD frames and its preliminary application

    Science.gov (United States)

    Yu, Y.; Tang, Z.-H.; Li, J.-L.; Zhao, M.

    2005-09-01

    A FORTRAN version implementation of the block adjustment (BA) of overlapping CCD frames is developed and its flowchart is shown. The program is preliminarily applied to obtain the optical positions of four extragalactic radio sources. The results show that because of the increase in the number and sky coverage of reference stars the precision of optical positions with BA is improved compared with the single CCD frame adjustment.

  18. Design of area array CCD image acquisition and display system based on FPGA

    Science.gov (United States)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, CCDs (charge-coupled devices) have been widely applied in various fields and play an important role in modern sensing systems; researching a real-time image acquisition and display scheme based on a CCD device therefore has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges and problems of the system are analyzed and solutions put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and an analog front end (AFE) processes the CCD image signal, including amplification, filtering, noise elimination, and correlated double sampling (CDS). An AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-end image acquisition software was designed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its indicators meet the actual project requirements.
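    As a toy illustration of the correlated double sampling (CDS) step mentioned above: subtracting the reset-level sample from the signal-level sample of each pixel removes the common reset (kTC) offset, leaving only the photo signal plus uncorrelated read noise (all numbers below are synthetic):

      import numpy as np

      rng = np.random.default_rng(1)
      n_pixels = 8
      reset_offset = rng.normal(500.0, 20.0, n_pixels)         # per-pixel reset (kTC) level, common to both samples
      photo_signal = rng.uniform(0.0, 200.0, n_pixels)          # true photo-generated signal
      read_noise = lambda: rng.normal(0.0, 1.0, n_pixels)

      sample_reset = reset_offset + read_noise()                # first sample: reset level
      sample_video = reset_offset + photo_signal + read_noise() # second sample: reset + signal
      cds_output = sample_video - sample_reset                  # correlated double sampling cancels the offset

      print(np.round(cds_output - photo_signal, 2))             # residual is only uncorrelated read noise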

  19. A Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Software

    Directory of Open Access Journals (Sweden)

    S. H. Oh

    2007-12-01

    Full Text Available We present software which we developed for the multi-purpose CCD camera. This software can be used with all 3 types of CCD - KAF-0401E (768×512), KAF-1602E (1536×1024) and KAF-3200E (2184×1472), made by KODAK Co. For efficient CCD camera control, the software runs as two independent processes: the CCD control program and the temperature/shutter operation program. The software is designed for fully automatic as well as manual operation under a LINUX system, and is controlled through the LINUX user signal procedure. We plan to use this software for an all-sky survey system and also for night sky monitoring and sky observation. The read-out times of the CCDs are about 15 sec, 64 sec and 134 sec for KAF-0401E, KAF-1602E and KAF-3200E respectively, because these times are limited by the data transmission speed of the parallel port. For larger format CCDs, higher-speed data transmission is required; we are considering porting this control software to use the USB port for high-speed data transmission.

  20. One method for HJ-1-A HSI and CCD data fusion

    International Nuclear Information System (INIS)

    Xiong, Wencheng; Shao, Yun; Shen, Wenming; Xiao, Rulin; Fu, Zhuo; Shi, Yuanli

    2014-01-01

    HJ-1-A satellite, developed independently by China, is equipped with two sensors: a Hyperspectral Imager (HSI) and a multispectral camera (CCD). In this paper, we examine the benefits of combining data from the CCD (a high-spatial-resolution, low-spectral-resolution image) with HSI data (a low-spatial-resolution, high-spectral-resolution image). Due to the same imaging time and similar spectral regime, the CCD and HSI data can be registered with each other well, and the difference between CCD and HSI data is mainly a systematic bias. The approach we have been investigating compares the spectral information present in the multispectral image to the spectral content in the hyperspectral image, and derives a set of equations to approximately acquire the systematic bias between the two sensors. The systematic bias is then applied to the interpolated high-spectral-resolution CCD image to produce a fused product. This fused image has the spectral resolution of the hyperspectral image (HSI) and the spatial resolution of the multispectral image (CCD), and is capable of full exploitation as a hyperspectral image. We evaluate this technique using data of the Honghe wetland and show both good spectral and visual fidelity. An analysis of a SAM classification test case shows good results when compared to the original image. All in all, the approach developed here provides a means for fusing data from the HJ-1-A satellite to produce a spatial-resolution-enhanced hyperspectral data cube that can be further analyzed by spectral classification and detection algorithms.
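    The paper's actual equations are not reproduced in the abstract; the sketch below only illustrates the general bias-transfer idea under simplifying assumptions: each CCD band is approximated as the mean of a hypothetical group of HSI bands, the difference defines a per-band systematic bias, and that bias is added back to the spatially interpolated HSI cube:

      import numpy as np

      def fuse_hsi_ccd(hsi, ccd, band_groups, upsample):
          """Toy HSI/CCD fusion by bias transfer.
          hsi: (bands, h, w) low-spatial-resolution hyperspectral cube
          ccd: (n_ccd_bands, H, W) high-spatial-resolution multispectral image
          band_groups: list mapping HSI band indices to each CCD band (an assumed correspondence)
          upsample: function taking an (h, w) array to (H, W), e.g. nearest-neighbour or bilinear
          """
          bands = hsi.shape[0]
          hsi_up = np.stack([upsample(hsi[b]) for b in range(bands)])   # interpolate HSI to the CCD grid
          fused = hsi_up.copy()
          for ccd_band, idx in enumerate(band_groups):
              synthesized = hsi_up[idx].mean(axis=0)                    # CCD band synthesized from HSI bands
              bias = ccd[ccd_band] - synthesized                        # systematic difference between sensors
              fused[idx] += bias                                        # push the bias into the matching HSI bands
          return fused

      # Example with synthetic data and nearest-neighbour upsampling (factor 3 in each direction).
      rng = np.random.default_rng(0)
      hsi = rng.uniform(0, 1, (8, 10, 10))
      ccd = rng.uniform(0, 1, (4, 30, 30))
      groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6), np.arange(6, 8)]
      up = lambda a: np.kron(a, np.ones((3, 3)))
      print(fuse_hsi_ccd(hsi, ccd, groups, up).shape)                   # (8, 30, 30)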

  1. Chromatic Modulator for a High-Resolution CCD or APS

    Science.gov (United States)

    Hartley, Frank; Hull, Anthony

    2008-01-01

    A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. To simplify the processing, it would be desirable to encode information on

  2. pnCCD for photon detection from near-infrared to X-rays

    International Nuclear Information System (INIS)

    Meidinger, Norbert; Andritschke, Robert; Hartmann, Robert; Herrmann, Sven; Holl, Peter; Lutz, Gerhard; Strueder, Lothar

    2006-01-01

    A pnCCD is a special type of charge-coupled device developed for spectroscopy and imaging of X-rays with high time resolution and quantum efficiency. Its most famous application is the operation on the XMM-Newton satellite, an X-ray astronomy mission that was launched by the European space agency in 1999. The excellent performance of the focal plane camera has been maintained for more than 6 years in orbit. The energy resolution in particular has shown hardly any degradation since launch. In order to satisfy the requirements of future X-ray astronomy missions as well as those of ground-based experiments, a new type of pnCCD has been developed. This 'frame-store pnCCD' shows an enhanced performance compared to the XMM-Newton type of pnCCD. Now, more options in device design and operation are available to tailor the detector to its respective application. Part of this concept is a programmable analog signal processor, which has been developed for the readout of the CCD signals. The electronic noise of the new detector has a value of only 2 electrons equivalent noise charge (ENC), which is less than half of the figure achieved for the XMM-Newton-type pnCCD. The energy resolution for the Mn-K α line at 5.9 keV is approximately 130 eV FWHM. We have close to 100% quantum efficiency for both low- and high-energy photon detection (e.g. the C-K line at 277 eV, and the Ge-K α line at 10 keV, respectively). Very high frame rates of 1000 images/s have been achieved due to the ultra-fast readout accomplished by the parallel architecture of the pnCCD and the analog signal processor. Excellent spectroscopic performance is shown even at the relatively high operating temperature of -25 deg. C that can be achieved by a Peltier cooler. The applications of the low-noise and fast pnCCD detector are not limited to the detection of X-rays. With an anti-reflective coating deposited on the photon entrance window, we achieve high quantum efficiency also for near-infrared and optical

  3. Characterization of a pnCCD for applications with synchrotron radiation

    Energy Technology Data Exchange (ETDEWEB)

    Send, S., E-mail: send@physik.uni-siegen.de [University of Siegen, Department of Physics, Walter-Flex-Straße 3, 57068 Siegen (Germany); Abboud, A. [University of Siegen, Department of Physics, Walter-Flex-Straße 3, 57068 Siegen (Germany); Hartmann, R.; Huth, M. [PNSensor GmbH, Römerstraße 28, 80803 München (Germany); Leitenberger, W. [University of Potsdam, Department of Physics, Karl-Liebknecht-Straße 24/25, 14476 Potsdam (Germany); Pashniak, N. [University of Siegen, Department of Physics, Walter-Flex-Straße 3, 57068 Siegen (Germany); Schmidt, J. [PNSensor GmbH, Römerstraße 28, 80803 München (Germany); Strüder, L. [University of Siegen, Department of Physics, Walter-Flex-Straße 3, 57068 Siegen (Germany); PNSensor GmbH, Römerstraße 28, 80803 München (Germany); Max-Planck-Institut für extraterrestrische Physik, Giessenbachstraße, 85748 Garching (Germany); Pietsch, U. [University of Siegen, Department of Physics, Walter-Flex-Straße 3, 57068 Siegen (Germany)

    2013-05-21

    In this work we study the response of a pnCCD by means of X-ray spectroscopy in the energy range between 6 keV and 20 keV and by Laue diffraction techniques. The analyses include measurements of characteristic detector parameters like energy resolution, count rate capability and effects of different gain settings. The limit of a single photon counting operation in white beam X-ray diffraction experiments is discussed with regard to the occurrence of pile-up events, for which the energy information about individual photons is lost. In case of monochromatic illumination the pnCCD can be used as a fast conventional CCD with a charge handling capacity (CHC) of about 300,000 electrons per pixel. If the CHC is exceeded, any surplus charge will spill to neighboring pixels perpendicular to the transfer direction due to electrostatic repulsion. The possibilities of increasing the number of storable electrons are investigated for different voltage settings by exposing a single pixel with X-rays generated by a microfocus X-ray source. The pixel binning mode is tested as an alternative approach that enables a pnCCD operation with significantly shorter readout times. -- Highlights: ► The pnCCD acts as a four-dimensional detector for white X-rays. ► Its performance for applications with synchrotron radiation is investigated. ► The pnCCD can be used for single photon counting and photon integration. ► The operation mode depends on the local frequencies of pile-up events. ► The pnCCD can be optimized for X-ray spectroscopy and X-ray imaging.

  4. Energy dependent charge spread function in a dedicated synchrotron beam pnCCD detector

    International Nuclear Information System (INIS)

    Yousef, Hazem

    2011-01-01

    A scan across the pixel edges is the method used to resolve the electron cloud size in the pixel array of the pnCCD detector. The EDR synchrotron radiation beamline at BESSY is the source of the X-ray photons used in the scans. The radius of the electron cloud as a function of the impinging photon energy is analyzed, and the angle of incidence of the X-ray beam is exploited in the measurements. The measurements are validated against numerical simulation models. An inclined X-ray track distributes the electron clouds over a certain number of pixels according to the incident angle of the X-ray beam, and the pixels detect different electron clouds according to their generation position in the detector bulk. A collimated X-ray beam of 12.14 keV is used in the measurements with 30 and 40 degree entrance angles. It is shown that the two factors that lead to expansion of the electron clouds, namely diffusion and mutual electrostatic repulsion, can be separated from the measured electron clouds. It is also noticed that the influence of the mutual electrostatic repulsion dominates the cloud expansion over the diffusion process within the collection time of the detector. A perpendicular X-ray track allows the average radius of the electron cloud per photon energy to be determined. The results show that the size of the electron clouds (RMS) in the energy range of [5.0-21.6] keV is smaller than the pixel size. (orig.)

  5. Energy dependent charge spread function in a dedicated synchrotron beam pnCCD detector

    Energy Technology Data Exchange (ETDEWEB)

    Yousef, Hazem

    2011-05-20

    A scan across the pixel edges is the method used to resolve the electron cloud size in the pixel array of the pnCCD detector. The EDR synchrotron radiation beamline at BESSY is the source of the X-ray photons used in the scans. The radius of the electron cloud as a function of the impinging photon energy is analyzed, and the angle of incidence of the X-ray beam is exploited in the measurements. The measurements are validated against numerical simulation models. An inclined X-ray track distributes the electron clouds over a certain number of pixels according to the incident angle of the X-ray beam, and the pixels detect different electron clouds according to their generation position in the detector bulk. A collimated X-ray beam of 12.14 keV is used in the measurements with 30 and 40 degree entrance angles. It is shown that the two factors that lead to expansion of the electron clouds, namely diffusion and mutual electrostatic repulsion, can be separated from the measured electron clouds. It is also noticed that the influence of the mutual electrostatic repulsion dominates the cloud expansion over the diffusion process within the collection time of the detector. A perpendicular X-ray track allows the average radius of the electron cloud per photon energy to be determined. The results show that the size of the electron clouds (RMS) in the energy range of [5.0-21.6] keV is smaller than the pixel size. (orig.)

  6. End-To-End Solution for Integrated Workload and Data Management using glideinWMS and Globus Online

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Grid computing has enabled scientific communities to effectively share computing resources distributed over many independent sites. Several such communities, or Virtual Organizations (VO), in the Open Science Grid and the European Grid Infrastructure use the glideinWMS system to run complex application work-flows. GlideinWMS is a pilot-based workload management system (WMS) that creates an on-demand, dynamically-sized overlay Condor batch system on Grid resources. While the WMS addresses the management of compute resources, however, data management in the Grid is still the responsibility of the VO. In general, large VOs have resources to develop complex custom solutions, while small VOs would rather push this responsibility to the infrastructure. The latter requires a tight integration of the WMS and the data management layers, an approach still not common in modern Grids. In this paper we describe a solution developed to address this shortcoming in the context of Center for Enabling Distributed Petascale Scienc...

  7. End-to-End Deep Neural Networks and Transfer Learning for Automatic Analysis of Nation-State Malware

    Directory of Open Access Journals (Sweden)

    Ishai Rosenberg

    2018-05-01

    Full Text Available Malware allegedly developed by nation-states, also known as advanced persistent threats (APT), are becoming more common. The task of attributing an APT to a specific nation-state or classifying it to the correct APT family is challenging for several reasons. First, each nation-state has more than a single cyber unit that develops such malware, rendering traditional authorship attribution algorithms useless. Furthermore, the dataset of such available APTs is still extremely small. Finally, those APTs use state-of-the-art evasion techniques, making feature extraction challenging. In this paper, we use a deep neural network (DNN) as a classifier for nation-state APT attribution. We record the dynamic behavior of the APT when run in a sandbox and use it as raw input for the neural network, allowing the DNN to learn high-level feature abstractions of the APT itself. We also use the same raw features for APT family classification. Finally, we use the feature abstractions learned by the APT family classifier to solve the attribution problem. Using a test set of 1000 Chinese- and Russian-developed APTs, we achieved an accuracy rate of 98.6%

  8. End-to-end encryption in on-line payment systems : The industry reluctance and the role of laws

    NARCIS (Netherlands)

    Kasiyanto, Safari

    2016-01-01

    Various security breaches at third-party payment processors show that online payment systems are the primary target for cyber-criminals. In general, the security of online payment systems relies on a number of factors, namely technical factors, processing factors, and legal factors. The industry

  9. Portable air quality sensor unit for participatory monitoring: an end-to-end VESNA-AQ based prototype

    Science.gov (United States)

    Vucnik, Matevz; Robinson, Johanna; Smolnikar, Miha; Kocman, David; Horvat, Milena; Mohorcic, Mihael

    2015-04-01

    Key words: portable air quality sensor, CITI-SENSE, participatory monitoring, VESNA-AQ The emergence of low-cost, easy-to-use portable air quality sensor units is opening new possibilities for individuals to assess their exposure to air pollutants at a specific place and time, and to share this information through an Internet connection. Such portable sensor units are being used in an ongoing citizen science project called CITI-SENSE, which enables citizens to measure and share the data. The project aims, through creating 'citizens' observatories', to empower citizens to contribute to and participate in environmental governance, enabling them to support and influence community and societal priorities as well as associated decision making. An air quality measurement system based on the VESNA sensor platform was primarily designed within the project for use as a portable sensor unit in selected pilot cities (Belgrade, Ljubljana and Vienna) for monitoring outdoor exposure to pollutants. However, functionally the same unit with a different set of sensors could be used, for example, as an indoor platform. The version designed for the pilot studies was equipped with the following sensors: NO2, O3, CO, temperature, relative humidity, pressure and an accelerometer. The personal sensor unit is battery powered and housed in a plastic box. The VESNA-based air quality (AQ) monitoring system comprises the VESNA-AQ portable sensor unit, a smartphone app and the remote server. The personal sensor unit supports wireless connection to an Android smartphone via built-in Wi-Fi. The smartphone in turn also serves as the communication gateway towards the remote server, using any of the available data connections. Besides the gateway functionality, the role of the smartphone is to enrich the data coming from the personal sensor unit with the GPS location, timestamps and user-defined context. This, together with an accelerometer, enables the user to better estimate one's exposure in relation to physical activities, time and location. The end user can monitor the measured parameters through a smartphone application. The smartphone app implements a custom-developed LCSP (Lightweight Client Server Protocol) which is used to send requests to the VESNA-AQ unit and to exchange information. When the data is obtained from the VESNA-AQ unit, the mobile application visualizes the data. It also has an option to forward the data to the remote server in a custom JSON structure over an HTTP POST request. The server stores the data in the database and in parallel translates the data to WFS and forwards it to the main CITI-SENSE platform over WFS-T in a common XML format over an HTTP POST request. From there, the data can be accessed through the Internet and visualised in different forms and web applications developed by the CITI-SENSE project. In the course of the project, the collected data will be made publicly available, enabling the citizens to participate in environmental governance. Acknowledgements: CITI-SENSE is a Collaborative Project partly funded by the EU FP7-ENV-2012 under grant agreement no 308524 (www.citi-sense.eu).
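    The LCSP protocol itself is not documented in the abstract; purely as an illustration of the smartphone-to-server leg described above, forwarding one enriched measurement as JSON over HTTP POST might look like the following sketch (the URL and field names are hypothetical placeholders, not the project's actual schema):

      import json
      import time
      import urllib.request

      def forward_measurement(no2_ppb, o3_ppb, co_ppm, lat, lon,
                              server_url="https://example.org/citi-sense/upload"):
          """Send one sensor reading, enriched with GPS location and a timestamp, as JSON over HTTP POST.
          The URL and payload structure are illustrative placeholders."""
          payload = {
              "timestamp": int(time.time()),
              "location": {"lat": lat, "lon": lon},
              "measurements": {"NO2_ppb": no2_ppb, "O3_ppb": o3_ppb, "CO_ppm": co_ppm},
          }
          req = urllib.request.Request(
              server_url,
              data=json.dumps(payload).encode("utf-8"),
              headers={"Content-Type": "application/json"},
              method="POST",
          )
          with urllib.request.urlopen(req, timeout=10) as resp:
              return resp.status

      # forward_measurement(23.5, 41.2, 0.4, 46.05, 14.51)  # example call (requires a reachable server)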

  10. End-To-End Solution for Integrated Workload and Data Management using GlideinWMS and Globus Online

    International Nuclear Information System (INIS)

    Mhashilkar, Parag; Miller, Zachary; Weiss, Cathrin; Kettimuthu, Rajkumar; Garzoglio, Gabriele; Holzman, Burt; Duan, Xi; Lacinski, Lukasz

    2012-01-01

    Grid computing has enabled scientific communities to effectively share computing resources distributed over many independent sites. Several such communities, or Virtual Organizations (VO), in the Open Science Grid and the European Grid Infrastructure use the GlideinWMS system to run complex application work-flows. GlideinWMS is a pilot-based workload management system (WMS) that creates an on-demand, dynamically-sized overlay Condor batch system on Grid resources. While the WMS addresses the management of compute resources, however, data management in the Grid is still the responsibility of the VO. In general, large VOs have resources to develop complex custom solutions, while small VOs would rather push this responsibility to the infrastructure. The latter requires a tight integration of the WMS and the data management layers, an approach still not common in modern Grids. In this paper we describe a solution developed to address this shortcoming in the context of Center for Enabling Distributed Peta-scale Science (CEDPS) by integrating GlideinWMS with Globus Online (GO). Globus Online is a fast, reliable file transfer service that makes it easy for any user to move data. The solution eliminates the need for the users to provide custom data transfer solutions in the application by making this functionality part of the GlideinWMS infrastructure. To achieve this, GlideinWMS uses the file transfer plug-in architecture of Condor. The paper describes the system architecture and how this solution can be extended to support data transfer services other than Globus Online when used with Condor or GlideinWMS.

  11. Supporting end-to-end resource virtualization for Web 2.0 applications using Service Oriented Architecture

    NARCIS (Netherlands)

    Papagianni, C.; Karagiannis, Georgios; Tselikas, N. D.; Sfakianakis, E.; Chochliouros, I. P.; Kabilafkas, D.; Cinkler, T.; Westberg, L.; Sjödin, P.; Hidell, M.; Heemstra de Groot, S.M.; Kontos, T.; Katsigiannis, C.; Pappas, C.; Antonakopoulou, A.; Venieris, I.S.

    2008-01-01

    In recent years, technologies have been introduced offering a large amount of computing and networking resources. New applications such as Google AdSense and BitTorrent can profit from the use of these resources. An efficient way of discovering and reserving these resources is by using the Service

  12. Probability distribution function of the polymer end-to-end molecule vector after retraction and its application to step deformation

    Czech Academy of Sciences Publication Activity Database

    Kharlamov, Alexander; Rolón-Garrido, V. H.; Filip, Petr

    2010-01-01

    Roč. 19, č. 4 (2010), s. 190-194 ISSN 1022-1344 R&D Projects: GA ČR GA103/09/2066 Institutional research plan: CEZ:AV0Z20600510 Keywords : polymer chains * molecular modeling * shear * stress Subject RIV: BK - Fluid Dynamics Impact factor: 1.440, year: 2010

  13. End-to-End Privacy Protection for Facebook Mobile Chat based on AES with Multi-Layered MD5

    Directory of Open Access Journals (Sweden)

    Wibisono Sukmo Wardhono

    2018-01-01

    Full Text Available As social media environments become more interactive and the number of users has grown tremendously, privacy is a matter of increasing concern. When personal data become a commodity, a social media company can share users' data with another party such as a government. Facebook, Inc. is one of the social media companies frequently asked for users' data. Although this private data request mechanism goes through a formal and valid legal process, it still undermines the fundamental right to information privacy. In this case, social media users need protection against privacy violation by the social media platform provider itself. Private chat is the most popular feature of a social media platform. Inside a chat room, users can share private information. Cryptography is one of the data protection methods that can be used to hide private communication data from unauthorized parties. In our study, we proposed a system that can encrypt chat content based on AES and multi-layered MD5 to ensure that social media users have privacy protection against a social media company that uses user information as a commodity. In addition, this system makes it convenient for users to share their private information through the social media platform.
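    The exact key schedule is not given in the abstract; one plausible reading of "AES with multi-layered MD5" is sketched below, deriving an AES-128 key by iterating MD5 over a shared passphrase and encrypting the chat message with AES-CBC via the PyCryptodome library. This is an illustration of the combination, not the authors' implementation; for new designs a modern key-derivation function would normally be preferred over MD5.

      import hashlib
      import os
      from Crypto.Cipher import AES             # PyCryptodome
      from Crypto.Util.Padding import pad, unpad

      def layered_md5_key(passphrase: str, layers: int = 3) -> bytes:
          """Derive a 16-byte AES key by applying MD5 repeatedly (illustrative 'multi-layered MD5')."""
          digest = passphrase.encode("utf-8")
          for _ in range(layers):
              digest = hashlib.md5(digest).digest()
          return digest                           # 16 bytes -> AES-128 key

      def encrypt_chat(message: str, passphrase: str) -> bytes:
          key = layered_md5_key(passphrase)
          iv = os.urandom(16)
          cipher = AES.new(key, AES.MODE_CBC, iv)
          return iv + cipher.encrypt(pad(message.encode("utf-8"), AES.block_size))

      def decrypt_chat(blob: bytes, passphrase: str) -> str:
          key = layered_md5_key(passphrase)
          cipher = AES.new(key, AES.MODE_CBC, blob[:16])
          return unpad(cipher.decrypt(blob[16:]), AES.block_size).decode("utf-8")

      if __name__ == "__main__":
          token = encrypt_chat("hello, this stays between us", "shared-secret")
          print(decrypt_chat(token, "shared-secret"))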

  14. Investigating end-to-end security in the fifth generation wireless capabilities and IoT extensions

    Science.gov (United States)

    Uher, J.; Harper, J.; Mennecke, R. G.; Patton, P.; Farroha, B.

    2016-05-01

    The emerging 5th generation wireless network will be architected and specified to meet the vision of allowing the billions of devices and millions of human users to share spectrum to communicate and deliver services. The expansion of wireless networks from its current role to serve these diverse communities of interest introduces new paradigms that require multi-tiered approaches. The introduction of inherently low security components, like IoT devices, necessitates that critical data be better secured to protect the networks and users. Moreover high-speed communications that are meant to enable the autonomous vehicles require ultra reliable and low latency paths. This research explores security within the proposed new architectures and the cross interconnection of the highly protected assets with low cost/low security components forming the overarching 5th generation wireless infrastructure.

  15. An Anthological Review of Research Utilizing MontyLingua: a Python-Based End-to-End Text Processor

    Directory of Open Access Journals (Sweden)

    2008-06-01

    Full Text Available MontyLingua, an integral part of ConceptNet, which is currently the largest commonsense knowledge base, is an English text processor developed using the Python programming language at the MIT Media Lab. The main feature of MontyLingua is its coverage of all aspects of English text processing, from raw input text to semantic meanings and summary generation, yet each component in MontyLingua is loosely coupled to the others at the architectural and code level, which enables individual components to be used independently or substituted. However, there has been no review exploring the role of MontyLingua in recent research work utilizing it. This paper aims to review the use of and roles played by MontyLingua and its components in research work published in 19 articles between October 2004 and August 2006. We observed a diversified use of MontyLingua in many different areas, both generic and domain-specific. Although use of the text summarizing component was not observed, we are optimistic that it will have a crucial role in managing the current trend of information overload in future research.

  16. SPAN: A Network Providing Integrated, End-to-End, Sensor-to-Database Solutions for Environmental Sciences

    Science.gov (United States)

    Benzel, T.; Cho, Y. H.; Deschon, A.; Gullapalli, S.; Silva, F.

    2009-12-01

    In recent years, advances in sensor network technology have shown great promise to revolutionize environmental data collection. Still, wide spread adoption of these systems by domain experts has been lacking, and these have remained the purview of the engineers who design them. While there are many data logging options for basic data collection in the field currently, scientists are often required to visit the deployment sites to retrieve their data and manually import it into spreadsheets. Some advanced commercial software systems do allow scientists to collect data remotely, but most of these systems only allow point-to-point access, and require proprietary hardware. Furthermore, these commercial solutions preclude the use of sensors from other manufacturers or integration with internet based database repositories and compute engines. Therefore, scientists often must download and manually reformat their data before uploading it to the repositories if they wish to share their data. We present an open-source, low-cost, extensible, turnkey solution called Sensor Processing and Acquisition Network (SPAN) which provides a robust and flexible sensor network service. At the deployment site, SPAN leverages low-power generic embedded processors to integrate variety of commercially available sensor hardware to the network of environmental observation systems. By bringing intelligence close to the sensed phenomena, we can remotely control configuration and re-use, establish rules to trigger sensor activity, manage power requirements, and control the two-way flow of sensed data as well as control information to the sensors. Key features of our design include (1) adoption of a hardware agnostic architecture: our solutions are compatible with several programmable platforms, sensor systems, communication devices and protocols. (2) information standardization: our system supports several popular communication protocols and data formats, and (3) extensible data support: our system works with several existing data storage systems, data models and web based services as needed by the domain experts; examples include standard MySQL databases, Sensorbase (from UCLA), as well as SPAN Cloud, a system built using Google's Application Engine that allows scientists to use Google's cloud computing cyber-infrastructure. We provide a simple, yet flexible data access control mechanism that allows groups of researchers to share their data in SPAN Cloud. In this talk, we will describe the SPAN architecture, its components, our development plans, our vision for the future and results from current deployments that continue to drive the design of our system.

  17. Secondary link adaptation in cognitive radio networks: End-to-end performance with cross-layer design

    KAUST Repository

    Ma, Hao; Yang, Yuli; Aissa, Sonia

    2012-01-01

    the optimal boundary points in closed form to choose the AMC transmission modes by taking into account the channel state information from the secondary transmitter to both the primary receiver and the secondary receiver. Moreover, numerical results

  18. End-to-end performance of cooperative relaying in spectrum-sharing systems with quality of service requirements

    KAUST Repository

    Asghari, Vahid Reza; Aissa, Sonia

    2011-01-01

    We propose adopting a cooperative relaying technique in spectrum-sharing cognitive radio (CR) systems to more effectively and efficiently utilize available transmission resources, such as power, rate, and bandwidth, while adhering to the quality

  19. Defense Computers: DOD Y2K Functional End-to-End Testing Progress and Test Event Management

    National Research Council Canada - National Science Library

    1999-01-01

    ... (DOD) which relies on a complex and broad array of interconnected computer systems-including weapons, command and control, satellite, inventory management, transportation management, health, financial...

  20. Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

    OpenAIRE

    Rosenberg, Ishai; Shabtai, Asaf; Rokach, Lior; Elovici, Yuval

    2017-01-01

    In this paper, we present a black-box attack against API call based machine learning malware classifiers, focusing on generating adversarial sequences combining API calls and static features (e.g., printable strings) that will be misclassified by the classifier without affecting the malware functionality. We show that this attack is effective against many classifiers due to the transferability principle between RNN variants, feed forward DNNs, and traditional machine learning classifiers such...

  1. Understanding Effect of Constraint Release Environment on End-to-End Vector Relaxation of Linear Polymer Chains

    KAUST Repository

    Shivokhin, Maksim E.; Read, Daniel J.; Kouloumasis, Dimitris; Kocen, Rok; Zhuge, Flanco; Bailly, Christian; Hadjichristidis, Nikolaos; Likhtman, Alexei E.

    2017-01-01

    of a linear probe chain. For this purpose we first validate the ability of the model to consistently predict both the viscoelastic and dielectric response of monodisperse and binary mixtures of type A polymers, based on published experimental data. We

  2. Molecular dynamics simulation of joining process of Ag-Au nanowires and mechanical properties of the hybrid nanojoint

    Directory of Open Access Journals (Sweden)

    Su Ding

    2015-05-01

    Full Text Available The nanojoining process of Ag-Au hybrid nanowires at 800 K was comprehensively studied by molecular dynamics (MD) simulation. Three kinds of configurations, including end-to-end, T-like and X-like, were built in the simulation to understand the nanojoining mechanism. The detailed dynamic evolution of atoms, crystal structure transformation and defect development during the nanojoining process were examined. The results indicate that there are two stages in the nanojoining process of Ag-Au nanowires: atom diffusion and new bond formation. Temperature is a key parameter affecting both stages, ascribed to the energy supply, and the optimum temperature for an Ag-Au nanojoint with a diameter of 4.08 nm is discussed. The mechanical properties of the nanojoint were examined by simulating a tensile test on the end-to-end joint. It was revealed that the nanojoint was strong enough to resist fracture at the joining area.

  3. Phase shifting white light interferometry using colour CCD for optical metrology and bio-imaging applications

    Science.gov (United States)

    Upputuri, Paul Kumar; Pramanik, Manojit

    2018-02-01

    Phase-shifting white light interferometry (PSWLI) has been widely used for optical metrology applications because of its precision, reliability, and versatility. White light interferometry using a monochrome CCD makes the measurement process slow for metrology applications. WLI integrated with a Red-Green-Blue (RGB) CCD camera is finding imaging applications in the fields of optical metrology and bio-imaging. Wavelength-dependent refractive index profiles of biological samples were computed from colour white light interferograms. In recent years, whole-field refractive index profiles of red blood cells (RBCs), onion skin, fish cornea, etc. were measured from RGB interferograms. In this paper, we discuss the bio-imaging applications of colour CCD based white light interferometry. The approach makes the measurement faster, easier, cost-effective, and even dynamic by using single-fringe analysis methods, for industrial applications.
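    The abstract does not spell out the fringe-analysis algorithm; as a generic illustration of single-channel phase-shifting analysis, the standard four-step formula (phase steps of 0, pi/2, pi, 3pi/2) recovers the wrapped phase as below:

      import numpy as np

      def four_step_phase(i1, i2, i3, i4):
          """Wrapped phase from four frames with pi/2 phase steps: phi = atan2(I4 - I2, I1 - I3)."""
          return np.arctan2(i4 - i2, i1 - i3)

      # Synthetic check on a single colour channel: four fringe patterns with a known phase ramp.
      x = np.linspace(0, 4 * np.pi, 256)
      frames = [100 + 50 * np.cos(x + k * np.pi / 2) for k in range(4)]
      recovered = four_step_phase(*frames)
      print(np.allclose(recovered, np.arctan2(np.sin(x), np.cos(x))))  # True: the wrapped ramp is recovered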

  4. High performance CCD camera system for digitalisation of 2D DIGE gels.

    Science.gov (United States)

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as alternative to a traditionally employed, high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from linear range and limit of detection. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many kinds of equipment and applications. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, installation and alignment must be performed twice in the test procedure. The large-aperture test system for the infrared camera and visible CCD camera uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator focal position when the environmental temperature changes, and the image quality of the large-field-of-view collimator and the test accuracy are also improved. Its performance is the same as that of comparable foreign systems at a much lower cost, so it should find a good market.

  6. A fluorescent screen + CCD system for quality assurance of therapeutic scanned ion beams

    Energy Technology Data Exchange (ETDEWEB)

    Takeshita, E., E-mail: eriuli@nirs.go.jp [National Institute of Radiological Sciences, Chiba (Japan); Furukawa, T., E-mail: t_furu@nirs.go.jp [National Institute of Radiological Sciences, Chiba (Japan); Inaniwa, T., E-mail: taku@nirs.go.jp [National Institute of Radiological Sciences, Chiba (Japan); Sato, S., E-mail: shin_s@nirs.go.jp [National Institute of Radiological Sciences, Chiba (Japan); Himukai, T., E-mail: himukai@nirs.go.jp [National Institute of Radiological Sciences, Chiba (Japan); Shirai, T., E-mail: t_shirai@nirs.go.jp [National Institute of Radiological Sciences, Chiba (Japan); Noda, K., E-mail: noda_k@nirs.go.jp [National Institute of Radiological Sciences, Chiba (Japan)

    2011-12-15

    A fluorescent screen + charge-coupled device (CCD) system was developed to verify the performance of scanned ion beams at the HIMAC. The fluorescent light from the screen is observed by the CCD camera. The two-dimensional fields produced by the scanning process, i.e., the position and the size of the beam for each scan, represent important issues in scanning irradiation. In the developed system, the two-dimensional relative fluence and the flatness of the irradiation field were measured in a straightforward way from the luminance distribution on the screen. The position and the size of the beams were obtained from the centroid of the brightness distribution. Owing to the good sensitivity and spatial resolution of the fluorescent screen + CCD system, the scanned ion beams were verified by measurements at the HIMAC prototype scanning system.
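    A minimal sketch of the centroid and width computation implied above (beam position from the intensity-weighted first moment of the luminance distribution, beam size from its second moment) is given below with synthetic data; it is not the HIMAC analysis code, and the pixel pitch is an assumed placeholder:

      import numpy as np

      def beam_position_and_size(image, pixel_mm=0.1):
          """Return (x, y) centroid and RMS widths (sigma_x, sigma_y) in mm from a background-subtracted CCD frame."""
          image = np.clip(image, 0, None).astype(float)
          total = image.sum()
          y_idx, x_idx = np.indices(image.shape)
          x0 = (x_idx * image).sum() / total
          y0 = (y_idx * image).sum() / total
          sx = np.sqrt(((x_idx - x0) ** 2 * image).sum() / total)
          sy = np.sqrt(((y_idx - y0) ** 2 * image).sum() / total)
          return x0 * pixel_mm, y0 * pixel_mm, sx * pixel_mm, sy * pixel_mm

      # Synthetic Gaussian spot: centroid near (12.0, 8.0) mm, sigma near 0.5 mm for a 0.1 mm pixel pitch.
      yy, xx = np.mgrid[0:160, 0:240]
      spot = np.exp(-(((xx - 120) ** 2 + (yy - 80) ** 2) / (2 * 5.0 ** 2)))
      print([round(v, 2) for v in beam_position_and_size(spot)])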

  7. A fluorescent screen + CCD system for quality assurance of therapeutic scanned ion beams

    Science.gov (United States)

    Takeshita, E.; Furukawa, T.; Inaniwa, T.; Sato, S.; Himukai, T.; Shirai, T.; Noda, K.

    2011-12-01

    A fluorescent screen + charge-coupled device (CCD) system was developed to verify the performance of scanned ion beams at the HIMAC. The fluorescent light from the screen is observed by the CCD camera. The two-dimensional fields produced by the scanning process, i.e., the position and the size of the beam for each scan, represent important issues in scanning irradiation. In the developed system, the two-dimensional relative fluence and the flatness of the irradiation field were measured in a straightforward way from the luminance distribution on the screen. The position and the size of the beams were obtained from the centroid of the brightness distribution. Owing to the good sensitivity and spatial resolution of the fluorescent screen + CCD system, the scanned ion beams were verified by measurements at the HIMAC prototype scanning system.

  8. Measurement of phase function of aerosol at different altitudes by CCD Lidar

    Science.gov (United States)

    Sun, Peiyu; Yuan, Ke'e.; Yang, Jie; Hu, Shunxing

    2018-02-01

    Aerosols near the ground are closely related to human health and climate change, so their study has important significance. Aerosols are inhomogeneous at different altitudes, and their phase functions also differ with height. In order to simplify the retrieval algorithm, it is usually assumed that the aerosol is uniform at all altitudes, which introduces measurement error. In this work, an experimental approach is demonstrated to measure the scattering phase function of atmospheric aerosol particles at different heights with a CCD lidar system, which addresses the traditional CCD lidar system's reliance on an assumed phase function. The phase functions obtained by the new experimental approach are used to retrieve aerosol extinction coefficient profiles. By comparison of the aerosol extinction coefficients retrieved by a Mie-scattering aerosol lidar and the CCD lidar at night, the reliability of the new experimental approach is verified.

  9. Simulation

    DEFF Research Database (Denmark)

    Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J

    2012-01-01

    Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable p...... performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used....

  10. Application of the CCD Fabry-Perot Annular Summing Technique to Thermospheric O(1)D.

    Science.gov (United States)

    Coakley, Monica Marie

    1995-01-01

    This work will detail the verification of the advantages of the Fabry-Perot charge coupled device (CCD) annular summing technique, the development of the technique for analysis of daysky spectra, and the implications of the resulting spectra for neutral temperature and wind measurements in the daysky thermosphere. The daysky spectral feature of interest is the bright (1 kilo-Rayleigh) thermospheric (OI) emission at 6300 Å which had been observed in the nightsky in order to determine winds and temperatures in the vicinity of the altitude of 250 km. In the daysky, the emission line sits on top of a bright Rayleigh scattered continuum background which significantly complicates the observation. With a triple etalon Fabry-Perot spectrometer, the continuum background can be reduced while maintaining high throughput and high resolution. The inclusion of a CCD camera results in significant savings in integration time over the two more standard scanning photomultiplier systems that have made the same wind and temperature measurements in the past. A comparable CCD system can experience an order of magnitude savings in integration time over a PMT system. Laboratory and field tests which address the advantages and limitations of both the Fabry-Perot CCD annular summing technique and the daysky CCD imaging are included in Chap. 2 and Chap. 3. With a sufficiently large throughput associated with the spectrometer and a CCD detector, rapid observations (~4 minute integrations) can be made. Extraction of the line width and line center from the daysky near-continuum background is complicated compared to the nightsky case, but possible. Methods of fitting the line are included in Chap. 4. The daysky O(1D) temperatures are consistent with a lower average emission height than predicted by models. The data and models are discussed in Chap. 5. Although some discrepancies exist between resulting temperatures and models, the observations indicate the potential for other direct measurements
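    The core of the annular-summing step, namely binning CCD pixels into equal-area annuli about the ring-pattern center (a Fabry-Perot maps roughly equal spectral intervals onto equal intervals in radius squared), can be sketched as below; the center, bin count and test pattern are illustrative only:

      import numpy as np

      def annular_sum(image, center, n_bins):
          """Sum CCD counts in annuli of equal area about `center`.
          Because equal spectral intervals correspond approximately to equal intervals in radius squared,
          equal-area annuli give one summed sample per spectral element."""
          y, x = np.indices(image.shape)
          r2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
          bin_index = np.minimum((r2 / r2.max() * n_bins).astype(int), n_bins - 1)
          return np.bincount(bin_index.ravel(), weights=image.ravel(), minlength=n_bins)

      # Synthetic ring pattern: a single bright ring shows up as a peak in one annular bin.
      yy, xx = np.mgrid[0:256, 0:256]
      r = np.hypot(xx - 128, yy - 128)
      rings = np.exp(-((r - 60) ** 2) / (2 * 2.0 ** 2))
      print(np.argmax(annular_sum(rings, (128, 128), 64)))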

  11. Deflection control system for prestressed concrete bridges by CCD camera. CCD camera ni yoru prestressed concrete kyo no tawami kanri system

    Energy Technology Data Exchange (ETDEWEB)

    Noda, Y.; Nakayama, Y.; Arai, T. (Kawada Construction Co. Ltd., Tokyo (Japan))

    1994-03-15

    For long-span prestressed concrete bridges (continuous box girder and cable-stayed bridges), design and construction control become increasingly complicated as construction proceeds because of the cyclic nature of the work. This paper describes the method and operation of an automatic levelling module using a CCD camera, together with experimental results obtained with the system. In this automatic levelling system, the elevation is measured automatically by locating the centre of gravity of a target on the bridge surface with the CCD camera. The deflection control system developed here compares the value measured by the automatic levelling system with the design value obtained from the design calculation system and manages the difference. Long-term, real-time continuous measurements, with the CCD camera installed on the bridge surface, showed that stable measurement accuracy can be obtained. Successful application demonstrates that the system is an effective and efficient construction aid. 11 refs., 19 figs., 1 tab.
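
    The levelling principle described above (locating the centre of gravity of a bright target in the CCD image and converting its shift into a deflection) can be sketched as follows. The threshold, the pixel-to-millimetre scale and the function names are assumptions for illustration, not the system's actual implementation.

        import numpy as np

        def target_centroid(image, threshold):
            """Intensity-weighted centre of gravity of the levelling target (sketch)."""
            img = np.where(image > threshold, image - threshold, 0.0)
            yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
            total = img.sum()
            return (xx * img).sum() / total, (yy * img).sum() / total

        def deflection_mm(cy_now, cy_ref, mm_per_pixel):
            """Vertical deflection as the centroid shift times a calibrated scale."""
            return (cy_now - cy_ref) * mm_per_pixel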

  12. Construction of a photochemical reactor combining a CCD spectrophotometer and a LED radiation source.

    Science.gov (United States)

    Gombár, Melinda; Józsa, Éva; Braun, Mihály; Ősz, Katalin

    2012-10-01

    An inexpensive photoreactor using LED light sources and a fibre-optic CCD spectrophotometer as a detector was built by designing a special cell holder for standard 1.000 cm cuvettes. The use of this device was demonstrated by studying the aqueous photochemical reaction of 2,5-dichloro-1,4-benzoquinone. The developed method combines the highly quantitative data collection of CCD spectrophotometers with the possibility of illuminating the sample independently of the detecting light beam, which is a substantial improvement over the method of using diode array spectrophotometers as photoreactors.

  13. Computer-aided diagnosis of pneumoconiosis abnormalities extracted from chest radiographs scanned with a CCD scanner

    International Nuclear Information System (INIS)

    Abe, Koji; Minami, Masahide; Nakamura, Munehiro

    2011-01-01

    This paper presents a computer-aided diagnosis method for pneumoconiosis radiographs obtained with a common charge-coupled device (CCD) scanner. Since current computer-aided diagnosis systems for pneumoconiosis are impractical for medical doctors because they require a costly special-purpose scanner, we propose a novel system that measures pneumoconiosis abnormalities in lung images obtained with a common CCD scanner. Experimental results on discriminating normal from abnormal cases for 56 right-lung images, including 6 standard pneumoconiosis images, show that the proposed abnormality measures are extracted in good agreement with the standard pneumoconiosis categories. (author)

  14. New low noise CCD cameras for Pi-of-the-Sky project

    Science.gov (United States)

    Kasprowicz, G.; Czyrkowski, H.; Dabrowski, R.; Dominik, W.; Mankiewicz, L.; Pozniak, K.; Romaniuk, R.; Sitek, P.; Sokolowski, M.; Sulej, R.; Uzycki, J.; Wrochna, G.

    2006-10-01

    Modern research trends require observation of ever fainter astronomical objects over large areas of the sky. This implies systems with high temporal and optical resolution and computer-based data acquisition and processing, which is why charge-coupled devices (CCDs) have become so popular: they offer fast image conversion with much better quality than film-based technologies. This work is a theoretical and practical study of a CCD-based image acquisition system. The system was optimized for the "Pi of the Sky" project but can be adapted to other professional astronomical research. The work covers image conversion, signal acquisition, data transfer and the mechanical construction of the device.

  15. Measuring a narrow Bessel beam spot by scanning a charge-coupled device (CCD) pixel

    International Nuclear Information System (INIS)

    Tiwari, S K; Ram, S P; Jayabalan, J; Mishra, S R

    2010-01-01

    By scanning a charge-coupled device (CCD) camera transverse to the beam axis and observing the variation in counts on a marked pixel, we demonstrate that a laser beam spot size smaller than a single CCD pixel can be measured. We find this method particularly attractive for measuring the size of the central spot of a Bessel beam, for which the established scanning knife-edge method does not work well because of the large contribution of the rings surrounding the central spot to the signal.
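
    A minimal sketch of the data reduction implied by this method: the counts recorded on the marked pixel, as a function of the transverse CCD position, trace the beam profile, and fitting the central lobe of a squared zeroth-order Bessel function yields the spot radius (the first zero of J0 at k_r*r of about 2.405). The fit model, initial guesses and names are assumptions, not the authors' code.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import j0

        def bessel_lobe(x, amp, x0, k_r, offset):
            """Central lobe of a Bessel beam: I(x) = amp * J0(k_r*(x - x0))**2 + offset."""
            return amp * j0(k_r * (x - x0)) ** 2 + offset

        def fit_spot_radius(positions, counts):
            """Fit marked-pixel counts recorded while translating the CCD (sketch)."""
            span = positions[-1] - positions[0]
            p0 = [counts.max() - counts.min(),          # amplitude
                  positions[np.argmax(counts)],         # centre position
                  2.405 / (0.25 * span),                # guess: spot ~ quarter of scan
                  counts.min()]                         # background offset
            popt, _ = curve_fit(bessel_lobe, positions, counts, p0=p0)
            return 2.405 / popt[2]   # central-spot radius, same units as positions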

  16. Development of online cable eccentricity detection system based on X-ray CCD

    International Nuclear Information System (INIS)

    Chen Jianzhen; Li Bin; Wei Kaixia; Guo Lanying; Qu Guopu

    2008-01-01

    An improved technique for online cable eccentricity detection based on an X-ray CCD greatly improves measurement precision and response speed. The theory of eccentricity measurement with an X-ray CCD and the structure of the apparatus are described. The apparatus is composed of a scanning drive subsystem, X-ray generation components, a data acquisition subsystem and a high-performance computer system. The measurement results are also presented, and the features of this cable eccentricity detection technique are compared with those of other technologies. (authors)
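
    For illustration, one common way of quoting cable eccentricity from the insulation wall thicknesses extracted from two orthogonal X-ray projections is the figure of merit sketched below; the exact definition used by the apparatus is not given in the abstract, so this formula is an assumption.

        def eccentricity_percent(wall_thicknesses_mm):
            """Eccentricity from measured insulation wall thicknesses (assumed definition):
            e = (t_max - t_min) / (t_max + t_min) * 100 %."""
            t_max = max(wall_thicknesses_mm)
            t_min = min(wall_thicknesses_mm)
            return 100.0 * (t_max - t_min) / (t_max + t_min)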

  17. A CCD-based area detector for X-ray crystallography using synchrotron and laboratory sources

    International Nuclear Information System (INIS)

    Phillips, W.C.; Li Youli; Stanton, M.; Xie Yuanhui; O'Mara, D.; Kalata, K.

    1993-01-01

    The design and characteristics of a CCD-based area detector suitable for X-ray crystallographic studies using both synchrotron and laboratory sources are described. The active area is 75 mm in diameter, the FWHM of the point response function is 0.20 mm, and for Bragg peaks the dynamic range is 900 and the DQE ∼0.3. The 1320x1035-pixel Kodak CCD is read out into an 8 Mbyte memory system in 0.14 s and digitized to 12 bits. X-ray crystallographic data collected at the NSLS synchrotron from cubic insulin crystals are presented. (orig.)

  18. Atmospheric radiation environment analyses based-on CCD camera at various mountain altitudes and underground sites

    Directory of Open Access Journals (Sweden)

    Li Cavoli Pierre

    2016-01-01

    The purpose of this paper is to discriminate secondary atmospheric particles and identify muons by measuring the natural radiative environment in atmospheric and underground locations. A CCD camera has been used as a cosmic-ray sensor. The Low Noise Underground Laboratory of Rustrel (LSBB, France) gives access to a unique low-noise scientific environment deep enough to screen out the neutron and proton radiative components. Analyses of the charge levels induced in the CCD pixels by radiation events, and maps of the charge events versus hit pixel, are presented.
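
    A minimal sketch of the kind of per-pixel charge analysis described above: pixels significantly above the dark level are grouped into connected clusters, each cluster being one candidate radiation event whose total charge and hit position can then be histogrammed or mapped. The threshold choice and function names are assumptions for illustration, not the authors' analysis chain.

        import numpy as np
        from scipy import ndimage

        def radiation_events(frame, dark_frame, n_sigma=5.0):
            """Find charge clusters left by radiation events in a dark CCD frame (sketch)."""
            signal = frame.astype(float) - dark_frame
            threshold = n_sigma * signal.std()
            labels, n_events = ndimage.label(signal > threshold)
            event_ids = list(range(1, n_events + 1))
            charges = ndimage.sum(signal, labels, index=event_ids)         # charge per event
            positions = ndimage.center_of_mass(signal, labels, event_ids)  # hit-pixel map
            return charges, positions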

  19. Comparative Analysis of Chinese HJ-1 CCD, GF-1 WFV and ZY-3 MUX Sensor Data for Leaf Area Index Estimations for Maize

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    2018-01-01

    In recent years, China has developed and launched several satellites with high spatial resolution, such as the Resources satellite No. 3 (ZY-3) with a multi-spectral camera (MUX) and 5.8 m spatial resolution, the GaoFen No. 1 satellite (GF-1) with a wide field of view (WFV) camera and 16 m spatial resolution, and the environment satellite (HJ-1A/B) with a charge-coupled device (CCD) sensor and 30 m spatial resolution. First, to analyze the potential of ZY-3 MUX, GF-1 WFV, and HJ-1 CCD for extracting the leaf area index (LAI) at the regional scale, this study estimated LAI from relationships between physical model-based spectral vegetation indices (SVIs) and LAI values generated from look-up tables (LUTs) simulated with the combination of the PROSPECT-5B leaf model and the scattering by arbitrarily inclined leaves with the hot-spot effect (SAILH) canopy reflectance model. Second, to assess the surface reflectance quality of these sensors after data preprocessing, the well-processed surface reflectance products of the Landsat-8 Operational Land Imager (OLI) sensor, whose data quality is well established, were used to compare the performances of the ZY-3 MUX, GF-1 WFV, and HJ-1 CCD sensors both in theory and in practice. Apart from several reflectance fluctuations, the reflectance trends were consistent, and the reflectance values of the red and near-infrared (NIR) bands were comparable among these sensors. Finally, to analyze the accuracy of the LAI estimated from ZY-3 MUX, GF-1 WFV, and HJ-1 CCD, the LAI estimates from these sensors were validated against LAI field measurements in Huailai, Hebei Province, China. The results showed that the LAI retrieved from ZY-3 MUX performed better than that from GF-1 WFV and HJ-1 CCD, both of which tended to be systematically underestimated. In addition, both the value ranges and the accuracies of the LAI retrievals decreased with decreasing spatial resolution.
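
    As a hedged illustration of the LUT-based SVI-LAI approach described above, the sketch below fits an empirical relation between a vegetation index and LAI using pairs simulated by a canopy reflectance model (stand-ins for PROSPECT-5B + SAILH output) and then inverts it for sensor reflectances. The polynomial form, band names and function names are assumptions, not the study's actual retrieval chain.

        import numpy as np

        def fit_svi_lai(lut_lai, lut_red, lut_nir, deg=3):
            """Fit an NDVI-to-LAI relation from model-simulated LUT entries (sketch)."""
            ndvi = (lut_nir - lut_red) / (lut_nir + lut_red)
            return np.polyfit(ndvi, lut_lai, deg)

        def retrieve_lai(coeffs, red, nir):
            """Apply the fitted relation to red/NIR surface reflectances of a sensor."""
            ndvi = (nir - red) / (nir + red)
            return np.polyval(coeffs, ndvi)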

  20. Simulation

    CERN Document Server

    Ross, Sheldon

    2006-01-01

    Ross's Simulation, Fourth Edition introduces aspiring and practicing actuaries, engineers, computer scientists and others to the practical aspects of constructing computerized simulation studies to analyze and interpret real phenomena. Readers learn to apply results of these analyses to problems in a wide variety of fields to obtain effective, accurate solutions and make predictions about future outcomes. This text explains how a computer can be used to generate random numbers, and how to use these random numbers to generate the behavior of a stochastic model over time. It presents the statist